Since its launch in November 2022, the OpenAI chatbot ChatGPT has been flexing its artificial intelligence and causing moral and practical panics on university campuses across the world. It is unsurprising that universities are concerned about the ramifications of students using Large Language Models (LLMs) to generate responses to assessments, because doing so:
- Undermines the reliable judgement of whether academic standards have been met; and
- Prompts detailed reviews of certain types of assessment and whether they remain fit for purpose.
The ability of a faceless, brainless machine to answer questions, write poetry, compose songs (although the musician Nick Cave disagrees) and create visual art, all in a matter of seconds, presents us with some astonishing food for thought. The future of not just how we write, but what we might wish to say, looks potentially very different as more free LLMs become available and ‘learn’ how to write for us. As someone who researches educational assessment, and who teaches and assesses students, I’m grappling with many questions about this new landscape in education, but here I focus on some that academics might wish to consider with their students:
- Does the creation and release of LLMs mean students will be more tempted to let an AI model do their academic work for them?
- What characterizes cheating when using LLMs in assessment?