
Not long ago, I had a conversation with ChatGPT to find out how well it would do in a course I teach for professors at my institution. It was an interesting talk, in which I experienced the bot's great potential as well as its important limitations. Now the time has come for a second conversation. Although I will focus on the same course (one about how to write multiple-choice questions), this time I want to find out whether ChatGPT can function as my teaching aid. For example, can it come up with examples and questions that I can use in learning materials and activities? Can it produce good summaries of students' input, thereby helping me provide better feedback or communicate more efficiently with participants in the course?
Since ChatGPT made its grand entrance a few months ago, it has become evident that how well we prompt it determines how good its output is. Therefore, I wanted to pay special attention to prompting, and for that, I decided to follow Philippa Hardman's advice. In ChatGPT for Educators: Part 2, Hardman mentions some mistakes we frequently make when prompting: We don't provide enough context, and our prompts are too long, unstructured, or vague. Hardman also points out that we trust the bot too much, and that given ChatGPT "is more confident than it is competent," we should "assume errors and validate everything." Based on my first conversation with ChatGPT, I can certainly relate to this observation: The bot does seem knowledgeable and accurate, but we often find out that this is only what it "seems," not what it "is." Hardman also offers a simple formula for writing good prompts: Give the bot a role, a task, and some instructions. With these recommendations in mind, I am ready for my second conversation with ChatGPT.
I start by asking the bot to generate reflection questions based on some course content I provide. In the first week of the course, participants are asked to post reflections in an online forum, and I want to find out whether ChatGPT can give me ideas for good questions to spark reflection. This was my prompt: "You run a course for higher education teachers. Your task is to provide reflection questions for participants to reflect on this content." [Here I pasted the course content about strengths and limitations of multiple-choice questions].
The bot gives me six bullet points, each consisting of two or three questions on certain sub-topics. For example, the first bullet point has two questions related to the advantages of using multiple-choice questions: "What are some advantages of using MCQs in assessment? How might these advantages benefit your students and your teaching practice?" Although some of the bullet points contain questions that do not relate strictly to the input I provided, and I am not sure that I would use these questions exactly as ChatGPT has written them, it is a great help to get a list of relevant questions in a matter of seconds. It is like a very fast brainstorm.
Next, I want to explore whether ChatGPT can provide me with examples of multiple-choice questions, both good and bad. Good examples are useful to illustrate the principles presented in the course, and poor examples are useful for participants to practice how to improve questions. This could potentially help me with some learning materials and activities in the course. I use the following prompt: "You run a course for higher education teachers. Your task is to help me gather samples of multiple-choice questions. Can you provide some examples of bad multiple-choice questions, followed by their corresponding improved versions and explanations of how the questions were improved?"
ChatGPT's response is somewhat disappointing, and also somewhat hilarious. The first "bad" question the bot offers is, "What is the capital of France? A. Paris, B. Rome, C. Berlin, D. Madrid," the suggested improved version is, "Which city is the capital of France? A. Paris, B. Rome, C. Berlin, D. Madrid," and the explanation is that "the improved question clarifies what is being asked and removes any ambiguity." The rest of the questions and explanations follow a very similar pattern. Apart from the fact that I was looking for more complex questions than the one the bot provided, I think most readers will agree that it is highly questionable that "What is the capital of France?" was a bad question to begin with, that it was improved, or that the bot's explanation fairly reflects the changes made to the question.
At this point, I remind myself that ChatGPT "is more confident than it is competent" (Hardman, 2023); but I also realize that my input might not have been specific enough. I try a new prompt, this time giving the bot a specific question to improve in relation to a specific guideline: "You are a teacher working on the design of effective multiple-choice questions. Can you provide an improved version of the question below by avoiding writing an alternative that is much longer than the rest?" [Here I pasted the question with its answers]. This time the question is improved following the guideline given, and the alternatives are of better quality than those in the original question. Moreover, when I ask ChatGPT why C is the correct answer, it gives me clear and concise explanations that could work as examples of feedback for this particular multiple-choice question. This is not what I was looking for, but it could be very useful.
Finally, I try to find out if ChatGPT can help identify key points in contributions made by course participants to an online forum. I usually take notes on the most frequently mentioned topics as I read reflections and follow interactions in the forum. I then use my notes to reply to some posts in the forum and to write an end-of-week wrap-up that I post in the course LMS. I prompt the bot by pasting three contributions from participants and asking for a summary of the main points. ChatGPT is indeed able to identify the key points mentioned, but I realize that because they are decontextualized (I cannot see the important details and nuances that participants mention about their teaching practice), they cannot help me respond to the posts in the forum or even write the end-of-week summary. This was the wrong approach, and this limitation should have been obvious to me. Did I forget that I was talking with a bot, not a person? Having said that, the summary of students' input produced by the bot might be helpful for other purposes, for example, identifying which parts of the course content generate more interest, questions, or doubts.
I had set out to find out whether ChatGPT could work as my teaching aid. My main conclusion is that it can, but only for some purposes and under some conditions. Using ChatGPT to generate questions seems fairly straightforward. Anything more specific or complex requires individual and well-contextualized prompts, which can be time consuming in itself, and in my case, would involve spending some time improving my prompting skills. I also experienced that in asking the bot about one issue, I actually ended up with useful input for something else; so in order to discover ChatGPT's potential, it is important to spend some time exploring it. A final takeaway for me is that the bot is sometimes helpful and sometimes useless. It is sometimes accurate and sometimes wrong. The one feature it consistently retains is its confidence (perhaps we should call it overconfidence). Maybe that is the one thing that we, as users, need to look out for.
Nuria Lopez, PhD, taught in higher education for twenty years before moving to a role of pedagogical support for faculty. She currently works as a learning consultant at the Teaching and Learning Unit of Copenhagen Business School (Denmark).
Reference:
Hardman, P. (2023). ChatGPT for Educators: Part 2.