
IOE Blog


Expert opinion from IOE, UCL's Faculty of Education and Society


What we should really be asking about ChatGPT et al. when it comes to educational assessment

By Blog Editor, IOE Digital, on 27 April 2023

Cartoon robot and university student writing on laptops at a desk. Credit: PCH Vector / Adobe Stock

Mary Richardson.

Since its launch in November 2022, the OpenAI chatbot ChatGPT has been flexing its artificial intelligence and causing moral and practical panics on university campuses across the world. It is unsurprising that universities are concerned about the ramifications of using Large Language Models (LLMs) to create responses to assessments, because doing so:

  1. Challenges the reliable identification of academic standards; and
  2. Prompts detailed reviews of certain types of assessment and their future applicability.

The ability of a faceless, brainless machine to answer questions, write poetry, compose songs (although the musician Nick Cave disagrees) and create visual art, all in a matter of seconds, presents us with some astonishing food for thought. The future of not just how we write, but what we might wish to say, looks potentially very different as more free LLMs become available and ‘learn’ to write for us. As someone who researches educational assessment, and who teaches and assesses students, I am grappling with many questions in relation to this new landscape in education, but here I focus on some that academics might wish to consider with their students:

  1. Does the creation and release of LLMs mean students will be more tempted to let an AI model do their academic work for them?
  2. What characterizes cheating when using LLMs in assessment?

Creative writing?

Of course, some students will cheat, and this has long been the case: artefacts from ancient China reveal tiny booklets hidden by candidates taking the prestigious civil service examinations; more recently, photographs from India caused outrage as parents were seen scaling exam hall walls to pass answers in to the students. What these events reveal is not that individuals should be viewed as potentially all morally corrupt, but that if the stakes are high, there will be some who feel the need to increase their chances of success because a poor outcome might be ruinous for a range of reasons, including employment prospects, financial security and/or personal pride.

Research suggests that prior to the advent of LLMs, there was some increase in cheating among higher education students exacerbated by the move to online learning and assessment during the pandemic. Whilst I don’t believe we should ignore such actions, what really matters is exploring why students feel the need to behave dishonestly. In my own institution, an academic integrity course starts by asking students why they might not act with integrity, and the results reveal concerns about how to write and how to reference alongside some (dis)honest folk who admit they want good grades!

What this all suggests is not that we should be more fearful of students embracing new technologies to game assessment systems in universities, but that we should look at things a bit more critically. LLMs such as ChatGPT could provide an excellent teaching resource to help with the core activity of undergraduate and postgraduate study: becoming a critical reader, writer and thinker – and it is the last of these three that LLMs will never do.

But when is it cheating?

The nature of dishonesty is complex, but it needs unpacking if we are to determine where to draw the line on what constitutes cheating when an LLM is used to construct an answer to an assessment. Whilst leading a seminar on ChatGPT with a group of around 40 experienced lecturers, I noted a range of views on the nature of cheating: from those who felt it was perfectly fine to use an LLM to draft an initial ‘stimulus’ piece of writing on a topic, to those who believed this would be plagiarism if submitted without significant editing. The word ‘scary’ was used a lot during the seminar and reflects that age-old concern about the rise of the machines: will they take away our jobs?

This is why we, in education, need good education about new technologies and how they can be used to support and enhance learning. The doom mongers may be predicting the end of writing as we know it, and maybe they will be proved right, but it doesn’t automatically mean we are losing something. Who knows what we might learn from chatting and critically engaging – we’re the ones with the brains.


One Response to “What we should really be asking about ChatGPT et al. when it comes to educational assessment”

  • Phuong wrote on 27 April 2023:

    Pen-and-pencil types of assessment/examination can avoid cheating by LLMs. In some cases, it might be helpful for students if they used it as a tool to revise their knowledge.