
Digital Education team blog


Ideas and reflections from UCL's Digital Education team


Archive for the 'Digital Assessment' Category

Updating Our Academic Integrity Resources

By Marieke Guy and i.bowditch, on 10 October 2023

In the ever-evolving landscape of higher education, maintaining academic integrity is paramount. Educational institutions are tasked not only with upholding these standards but also with fostering a culture of academic honesty. At UCL the commitment to academic integrity has led to a revamp of existing resources, driven by a desire to offer the most effective support possible.

We recognise that when it comes to guiding students on academic integrity, a punitive approach falls short. Instead, we want to start with positive framing that taps into the broader motivations of students and positions them as valued contributors to an academic community of practice. The institution does not assume that students inherently understand these practices or that violations should always result in punishment. Rather, we view the key causes of plagiarism as opportunities for learning and growth. For instance, Turnitin, a well-known plagiarism detection service, is seen as a tool to assist students in learning rather than merely as a plagiarism detector.

Review and Refresh

At the end of last year, the Digital Assessment Team carried out an audit of academic integrity resources at UCL, which uncovered the need for a refresh. This need became even more pronounced with the advent of Generative Artificial Intelligence (AI). We have now completed the review and refresh of our academic integrity resources for the academic year.

Turnitin Similarity Checker

One of the longstanding resources, the “Plagiarism and Academic Writing for Students” course, has served UCL for over a decade. This course primarily allows students to check their assignments for plagiarism by generating a similarity report through Turnitin. The assignments are not added to the institutional repository, and the course is reset regularly.

The course has now been streamlined to focus solely on explaining Turnitin’s purpose and guiding students on how to create and use the similarity report. An introduction from Ayanna Prevatt-Goldstein, Head of UCL Academic Communication Centre, has been added to give context on how use of Turnitin relates to good academic practice. To provide a comprehensive experience, an additional section now offers links to other UCL resources related to academic integrity. These are:

  • Academic integrity hub – A student-facing hub area for all guidance on academic integrity including links to information on academic misconduct, academic misconduct panels and Frequently Asked Questions.
  • UCL Academic Communication Centre – The UCL Academic Communication Centre (ACC) supports UCL students to develop their academic language and literacies. We assist students of all language backgrounds, across faculties, at all levels of study, to communicate more effectively in their discipline.

Understanding Academic Integrity Course for Students

UCL has also recently released an updated version of the Understanding Academic Integrity course for students, now hosted on the primary UCL Moodle site: the course previously sat on the UCL Extend platform. This course aims to educate students about all aspects of academic integrity and covers:

  1. How much do I know about academic integrity?
  2. What is academic integrity?
  3. Acknowledging the work of others
  4. Using collaboration positively
  5. Contract cheating
  6. Artificial Intelligence and Academic Integrity
  7. Check your understanding of academic integrity and academic good practice

The revised course content has been built collaboratively with staff and students and incorporates insights from academic integrity and academic writing experts at UCL. It addresses emerging concerns such as the use of Generative AI in academia, and the course features various elements, including short videos, reflective activities, quizzes, and a final certification quiz.

Students can self-enrol on the course and, on completing all required activities and passing the final quiz, will receive a certificate of completion, which can serve as evidence of their commitment to academic integrity and be shared with their tutors.

At the start of the course students are asked to post their responses to a Mentimeter activity asking ‘Why do you think students don’t always act with academic integrity?’. These are the results so far (mid-October 2023, 1011 participants, 2547 votes):


To ensure that academic integrity remains current, UCL has devised a plan for annual course refreshers. Annual refreshers are particularly important in the evolving context of Generative AI. Course content on GenAI and its relation to academic integrity will need to be revised in line with both technological and policy developments in this area.

Course video on Artificial Intelligence and Academic Integrity

Older versions of the course are archived to maintain access to logs if needed for academic misconduct panels. In cases where students may still access the previous Extend version, a notice redirects them to the new version on Moodle.

As UCL continues to evolve its approach to academic integrity, it exemplifies a commitment to not just maintaining standards but enhancing the support and resources available to students. This proactive approach ensures that UCL students are well-equipped to navigate the complexities of academic integrity while upholding the institution’s values of learning and growth.

Managing mark release during the marking and assessment boycott

By Marieke Guy and Zaman Wong, on 13 June 2023

This post gives useful information for admin staff on how to manage mark release and mark upload during the Marking and Assessment boycott.

Using AssessmentUCL/Wiseflow

Step 1: Identifying students who have and have not been marked

1.1 Identify students who have been given a final grade:

Students that have been marked and given a final grade can be identified by Administrators (under the Manager role) by downloading the ‘grade export’ report (image below). The report includes:

  • Student details (candidate number, student ID, names – these columns can be hidden/shown as desired)
  • Students that have submitted / not submitted
  • Students that have been given a final grade (if blank – no grade has been agreed, but marking may have taken place – please see section 1.2)

Guidance to download report.
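Where many assessments are involved, the exported report can be checked programmatically rather than by eye. Below is a minimal sketch in Python, assuming the export is saved as a CSV; the column names used (‘Student ID’, ‘Grade’) are assumptions for illustration and should be adjusted to match the headers in your actual report:

```python
import csv

def split_by_final_grade(report_path):
    """Split a grade export into students with and without a final grade.

    The column names ('Student ID', 'Grade') are assumptions for
    illustration; adjust them to match your actual export headers.
    """
    graded, ungraded = [], []
    with open(report_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # A blank Grade cell means no final grade has been agreed,
            # although individual markers may still have entered marks
            # (see section 1.2).
            if row.get("Grade", "").strip():
                graded.append(row.get("Student ID", ""))
            else:
                ungraded.append(row.get("Student ID", ""))
    return graded, ungraded

graded, ungraded = split_by_final_grade("grade_export.csv")
print(f"{len(graded)} students have a final grade; {len(ungraded)} do not.")
```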


1.2 Identify students that have been marked (by a first or second marker) but not given a final grade

Administrators should add themselves as Reviewers on their assessments, which will allow them to download a grade sheet displaying a list of candidates and any marks that have been submitted by individual markers (including the name of the marker). If you have issues with adding yourself as a Reviewer, please submit a staff query form to request access.

Once you have opened the assessment in the Reviewing tab, you should select the Offline marking option and follow the steps to export the grade sheet:

Offline marking

The downloaded grade sheet will show you a list of candidates and any marks that have been submitted by first or second markers (highlighted in red in image below):

Gradesheet

Please note that if the Grade column is empty, this means that no grades have been finalised and a Reviewer will need to submit a finalised grade for students that have been marked (this will allow administrators to complete the grade export to Portico in Step 3).
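For large cohorts this check can also be scripted. Below is a minimal sketch, assuming the exported grade sheet is a CSV with per-marker mark columns and a final ‘Grade’ column; the column names used here (‘First marker’, ‘Second marker’, ‘Candidate’) are illustrative and should be matched to your actual sheet:

```python
import csv

# Illustrative column names; match these to your exported grade sheet.
MARKER_COLUMNS = ["First marker", "Second marker"]

def awaiting_final_grade(sheet_path):
    """List candidates with marks from at least one marker but no final grade."""
    pending = []
    with open(sheet_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            has_marker_marks = any(row.get(col, "").strip() for col in MARKER_COLUMNS)
            no_final_grade = not row.get("Grade", "").strip()
            if has_marker_marks and no_final_grade:
                pending.append(row.get("Candidate", "unknown"))
    return pending

for candidate in awaiting_final_grade("grade_sheet.csv"):
    print(f"{candidate}: marked, but a Reviewer still needs to finalise the grade")
```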

Guidance

Step 2: Allow students without grades/feedback to be marked after the original marking deadline has passed:

Student grades and feedback are released on the platform under two conditions: the marking end date has arrived, and the ‘Show final grades’ option has been enabled.

To allow remaining students to be marked, there are two methods (option b is preferable but may be time-consuming if dealing with a large number of students that have yet to be marked):

  a) Administrator / Manager can extend the overall marking end date for all students (to allow further marking to take place). Caveat: this will mean that students who already have a final grade will not be able to view it on the platform until the extended marking end date has arrived.

Guidance to Extend overall marking end-date.

  b) Administrator / Manager can extend the individual marking end dates for only those students who have not yet been marked (this will mean students that have already been marked will be able to see their final grades on the platform, while allowing markers to continue marking those that have not).

Guidance to extend individual marking end dates.

Step 3. Grade export to Portico

It is recommended to do this once (when there is a full set of grades); however, the grade export button can be pushed more than once. Caveat: if an administrator pushes the grade export more than once, they may encounter a ‘Fail’ message for students whose grades were previously exported – this error message can be ignored for those students.

Guidance to complete grade export to Portico.

Using Moodle

Student identities for Moodle assignments and Turnitin assignments cannot be revealed and then hidden again. Each activity type has a different process to achieve this, which is detailed in our Partial mark entry miniguide.

If you have any queries, please contact the Digital Education team with the assignment title and url at:
digi-ed@ucl.ac.uk

UCL and Jisc event: Reimagining Assessment and Feedback

By Marieke Guy, on 6 June 2023

Earlier this week UCL hosted a one-day event entitled Reimagining Assessment and Feedback, the second in a series of Jisc events on Demonstrating digital transformation. The event was held in Bentham House, the main building for the UCL Faculty of Laws. The purpose of these events is to share best practice from universities that have made significant advances in developing innovative approaches to taking forward their digital agenda. As with the Jisc framework for digital transformation, the events are designed to showcase and highlight the broad spectrum of activity needed across an institution to effectively support and implement a digital culture.

The event organising team: Simon Birkett (Jisc), Peter Phillips (UCL) and Sandra Lusk (UCL)

The event gave the 50+ delegates the opportunity to hear how UCL has evolved its assessment and feedback practices and processes, and the role technology plays. Here at UCL we have been at the forefront of the shift to digital assessment and have successfully implemented a digital assessment platform for all centrally managed assessments taken remotely. To achieve this we have needed to address other challenges, including assessment design, consistency across programmes, regulations and policy, and enhanced support for professional development.

Opening plenary

The event was opened by Pro-Vice-Provost Education (Student Academic Engagement) Kathryn Woods who talked a little about our wider institutional change programme including the UCL strategic plan 2022-27 consultation and Education framework.

Simon Walker presents on assessment at UCL

Professor Simon Walker (previously Director of Programme Development, UCL, now an educational consultant) and I provided an overview of the UCL assessment journey. We discussed the implementation of Wiseflow/AssessmentUCL and the subsequent challenges we have faced regarding AI and academic integrity. Although we haven’t resolved all the problems, we have encountered numerous challenges and have a valuable story to share. You can see our slides below.

Breakout groups

There were two sets of breakout group sessions on core themes, with lunch slotted in between. Each session featured a UCL facilitator to give an opening introduction, a Jisc scribe to lead the related Padlet board and a UCL student to give the student perspective.

Demonstrating digital transformation – assessment futures

This session considered the potential future of assessment in Higher Education. Participants looked at areas including AI, assessment technology and new ideas and ways of working in assessment. The group discussed a whole range of challenges, from limited understanding of AI technology and capacity constraints to AI false alerts, digital inequality and ethical considerations. The solutions proposed include student co-design, an emphasis on assessment design, oral evaluations and better use of AI for formative assessment.

Demonstrating digital transformation – Academic integrity

Discussions in the academic integrity session

This session considered ways to ensure academic integrity is maintained across the institution through design, education and detection. It considered how policy and regulations need to change in the light of new challenges that technology brings. Much of the discussion covered current practices such as the use of AI proctoring for remote assessments, efforts to establish clear assessment requirements, and conducting fair assessment misconduct panels. The challenges include terminology clarification, legal concerns surrounding AI usage and the implications of more diverse assessment formats for identifying misconduct. Some of the effective strategies identified were additional training for students during the transition into higher education, varied assessment formats, technical approaches such as random question allocation and limited time allocation, and less punitive approaches to academic integrity such as hand-holding through academic practice and referencing requirements.

Demonstrating digital transformation – Institutional change

Kathryn Woods facilitates the breakout session on managing institutional change

This session considered how institutions manage change and encourage new academic practices. The group looked at areas including the framing of change and the balance between cultural and technological change. Some of the main challenges explored were around large cohorts, a diverse student body, the digital skills of academic staff and general change fatigue. Some successful practices highlighted were feedback from externals and industry experts, personalised feedback at scale, external engagement for formative feedback and audio feedback. The support needed to enable this includes surfacing assessment and feedback technologies, integrating professional services into curriculum development teams, and providing timely technical and pedagogic support for staff.

Demonstrating digital transformation – Pedagogy and assessment design

This session considered the full assessment design process, and the focal points and drivers for the different staff involved in making changes. The group looked at areas including what contemporary assessment design looks like: authentic, for social justice, reusable, etc. Interesting practice includes assessment processes that focus on the production and process of assessment, the use of student portfolios for employability, co-designing assessments with employers and utilising creative and authentic assessments with tools like Adobe Creative Suite. The main challenges might be the impact of high-stakes assessment and grades on students, clarifying what is actually being assessed, aligning institutional priorities with assessment innovation and supporting group work assessments. Future support needed could involve utilising the Postgraduate Certificate (PG Cert) programme to address assessment and curriculum design with technology and digital skills.

Demonstrating digital transformation – Larger cohorts and workloads

This session considered how to assess large cohorts from different disciplines in highly modularised programmes. It looked at areas including workload models, interdisciplinary assessment and integrated assessment. There are already examples of interdisciplinary group work, using contribution marks to evaluate individual efforts in group work, implementing peer assessment, utilising multiple-choice questions (MCQs), and employing marking teams for large cohorts. However, these face challenges including PSRB accreditation processes, modularisation of assessments, over-assessment and duplication, and scaling alternative assessment practices. One of the best approaches identified is programme-level assessment strategies and embracing the principle of “less is more” by focusing on quality rather than quantity.

Demonstrating digital transformation – Strategic direction

This session considered how you respond to the drivers of change and go about a co-design process across an institution, by looking at environmental scanning, stakeholder involvement and styles of leadership. The challenges identified involve ensuring continuity while managing future aspirations, considering student demographics and adopting an agile approach to strategy development. Clear communication about assessment and being agile in strategic thinking were identified as practices that work well. The support needed includes access to assessment platforms and curriculum mapping software, partnership support from industry organisations, and collaboration with Jisc and UCISA to advocate for change across the sector.

Panel session

The afternoon panel session on assessment, chaired by Sarah Knight, Head of learning and teaching transformation at Jisc, featured a diverse group of experts representing academia and student engagement, who all provided valuable insights.

Panel session: Mary McHarg, Dr Irene Ctori, Professor Sam Smidt, Dr Ailsa Crum, Marieke Guy, and Sarah Knight

  • Professor Sam Smidt, the Academic Director of King’s Academy, KCL;
  • Dr Irene Ctori, Associate Dean of Education Quality and Student Experience at City, University of London;
  • Mary McHarg, SU Activities and Engagement Sabbatical Officer at UCL;
  • Dr Ailsa Crum, Director of Membership, Quality Enhancement and Standards at QAA;
  • Marieke Guy, Head of Digital Assessment at UCL

Each panellist introduced themselves, explained their roles and organisations, and outlined their current work on assessment. They then shared their key takeaways from the discussions and presentations of the day. These included the need to work together collaboratively as a sector and to look at more fundamental areas, such as curriculum design, as places where change could originate. Some also noted the absence of discussion around feedback, which is interesting given that NSS scores are very dependent on successful approaches here. The panel addressed important questions, including how higher education providers can better support students’ assessment literacy, ways universities can enable staff to effectively use technology for assessment and feedback, methods to engage in dialogue with PSRBs regarding technology in assessments, and predictions for the future of assessment methods in five years’ time. One of the most interesting questions thrown at the panel was what they would do to assessment if they had a magic wand; much of the focus was on the current grading model, along with other areas of potential such as improving assessment for students with adjustments by adding in optionality and better support.

The day concluded with an overview of the support available from Jisc provided by Simon Birkett, Senior Consultant (see slides). Tweets from the day can be accessed using the #HEdigitaltx tag. Many of the attendees were then treated to a bespoke UCL tour led by Steve Rowett, Head of the Digital Education Futures team. The highlight for many was Jeremy Bentham’s auto-icon.

It was great to bring together those working in strategic change across UK Higher Education in the area of assessment and feedback. Clearly there is much work to be done, but a sector-wide understanding and appreciation of the difficulties faced, and a unified approach to ensuring quality of student experience and learning benefit, can only be a good thing.

This article is reposted from the Digital Assessment Team blog.

Panel members

Generative AI: Lifeline for students or threat to traditional assessment?

By Marieke Guy, on 21 April 2023

Our increasingly complex world has made the potential impact of artificial intelligence on education more relevant than ever. Gone are the days when AI’s role in academic assessment required extensive explanation; it has become embedded in our daily lives. This shift has caused a wave of concern in Higher Education as traditional assessment practices risk becoming obsolete.

This post is a version of one that appears on the National Centre for AI blog. It was reframed using ChatGPT-4.

In March, Russell Group university leaders convened to discuss the impact of AI on education and the implications for the sector. The event, chaired by Kathy Armour, Vice-Provost (Education & Student Experience) at UCL, featured a panel of students from various disciplines, sharing their experiences and insights on how AI tools, such as ChatGPT, have transformed their approach to learning.

Student panel on AI and assessment facilitated by Chris Thomson, Jisc. The panel summary was provided by Kathy Armour, Vice-Provost (Education & Student Experience) at UCL

The students’ accounts made it clear that the genie is out of the bottle; AI is now so deeply integrated into their learning experience that it would be futile and dangerous to resist the change. For many, AI has become a “lifechanging” educational companion, offering a level of support that is impossible to ignore. As such, the students argued, returning to traditional exam halls or engaging in an AI detection arms race would be detrimental to their future employability and wellbeing.

It is evident that a collaborative approach between students and educational leaders is necessary to navigate this brave new world.

Prior to the event the students contributed to the drafting of a set of future-proof principles related to AI and assessment, addressing concerns such as relevance, literacy, rigour, transparency, fairness, and human-centred education.

Working with the students to co-design the AI and assessment principles

The students expressed a desire for their education to prepare them for the wider world and the future workplace, necessitating the adoption of AI in learning, teaching, and assessment. Additionally, students and staff must be supported in developing academic skills in relation to AI, ensuring that learning and development opportunities are not missed. The students pointed to friends who were already creating AI-based start-ups.

Transparency and fairness are crucial when AI tools are used in assessment and marking. Students are particularly concerned about the potential for a widening gap between those who can afford AI tools and those who cannot. This raises the question of whether universities should provide paid-for versions of AI tools as part of their standard IT provision.

Moreover, learning, teaching, and assessment must remain human-centred. AI should enhance, not replace, relationships between students and educators, and AI interactions should promote a pedagogy of care. If students rely on AI to bypass required academic work, it is essential to ask why and provide additional support as needed.

This thought-provoking event demonstrated the importance of engaging in open dialogue with students about the role of AI in education and assessment. As Kathy Armour noted, the challenges posed by AI and assessment are not new; they are rooted in longstanding issues of assessment and curriculum design that continue to challenge the sector. Embracing the potential of AI in education can offer a lifeline to students, but it requires a delicate balance between technological innovation and maintaining the integrity of traditional learning experiences. By working together, students and educators can create a path forward that incorporates AI in a way that benefits all.

The event also featured visionary case studies from sector experts on AI: Sue Attewell, Head of edtech and lead at Jisc’s national Centre for AI in tertiary education; Professor Mike Sharples from the Institute of Educational Technology at the Open University; and Michael Veale, Associate Professor and Deputy Vice Dean (Education) in the Faculty of Laws at UCL.

Draft principles

Thanks go to those involved in this work:

Students:

  • Matthew Banner – Postgraduate in the third year of a PhD in Biochemical Engineering, leading on a student-led partnership project considering assessment design and AI.
  • Sophie Bush – Undergraduate student on History and the Philosophy of Science BSc and lead course rep for Science and Technology studies.
  • Megan Fisher – Second-year undergraduate student studying Economics, with chosen modules in Environmental Economics and Algebra.
  • Rachel Lam – First-year undergraduate law student, serves as a student partner on the assessment design and quality review team.
  • Jennifer Seon – In the final year of her part-time master’s programme studying Education and Technology; her dissertation will focus on collaborative problem-solving in assessment. She recently interviewed AI expert Wayne Holmes for a podcast with the UCL AI Society.
  • Bernice Yeo – Postgraduate student taking the MA in Education and Technology. Works as an examiner for the International Baccalaureate.
  • Sopio Zhgenti – Postgraduate student studying Education and Technology at the Institute of Education with special interest in Artificial Intelligence.

Staff:

  • Marieke Guy (Head of Digital Assessment), UCL
  • Zak Liddell (Director of Education & Student Experience, MAPS), UCL
  • Joanne Moles (Head of Assessment Delivery and Platforms), UCL
  • Jennifer Griffiths (Associate Director in the UCL Arena Centre for Research-based Education), UCL
  • Lizzie Vinton (Assessment Regulations and Governance Manager, Academic Services), UCL
  • Chris Thomson (Programme lead for teaching, learning and assessment), Jisc

Support for AUCL assessments

By Marieke Guy, on 1 February 2023

The Digital Assessment Team have updated the support processes for users of the AssessmentUCL (Wiseflow) platform. The new support processes are outlined on the following pages:

  • Staff support during an AssessmentUCL assessment – This page includes a link to the staff query form, which asks for additional information on the assessment and module affected. This information will help the Digital Assessment Team and the Digital Education Support Analysts (DESAs) diagnose and rectify issues more quickly.
  • Student support during your assessment – This page includes a link to the Assessment Query Form, which is used by students for any technical failures that happen during an assessment.

The increased use of online support forms is intended to ensure quicker support during peak assessment periods and to avoid direct communication with individual members of staff, which can cause bottlenecks. Departments or modules interested in assessment redesign and use of the AssessmentUCL platform should initially consult the recommendations for readiness guidance and identify whether their assessment approach is suitable for the platform. If they wish to continue the discussion, they can email assessment-advisory@ucl.ac.uk to organise a meeting with a Digital Assessment Advisor.

Assessment process

The team will be developing a series of AssessmentUCL ‘Blueprints’ over the forthcoming months. These will offer a clear path to using the AssessmentUCL platform for particular assessment approaches (e.g. fully automatically marked multiple-choice question (MCQ) exams, or blind marking). They are intended as a mechanism to enable staff to better self-support, and to use functionality that has been fully tried, tested and documented.

The AssessmentUCL Resource centre continues to be updated with additional guidance and videos. The team also run regular training sessions.

An institutional environment that increases academic integrity

By Marieke Guy, on 25 November 2022

Over the last two years UCL has made major strides in its use, support and understanding of digital assessment at scale. Prior to the pandemic there had been interest in providing a centrally-managed environment for exams and other assessment types. Covid accelerated these plans, resulting in the procurement of the AssessmentUCL platform. The move to digital assessments brings many benefits for users (as evidenced by Jisc and others). Students can be assessed in an environment they are increasingly comfortable with; there are administration benefits, for example through system integrations and streamlined processes; and, pedagogically, digital assessments allow us to experiment with diverse, authentic ways to appraise, and contribute to, learning.

However, at their core, digital assessment environments consist of a set of tools and technologies which can be used well or badly. In a recent blog post, ‘Are digital assessments bad?’, I discussed many of the reservations people have about their use, including issues related to digital equity, technology limitations and academic misconduct. My conclusion was that “these issues are not insignificant, but all of them, perhaps with the exception of a preference for offline, are surmountable. The solutions are complex and require resource and commitment from institutions. They also require a change management approach that recognises both the importance of these issues, and how they impact on individuals. It would be a massive fail to brush them under the carpet.” We in the Digital Assessment Team stand by this conclusion. The move to online assessment during Covid has surfaced a lot of pre-existing issues with ‘traditional’ assessment, as identified by Sally Brown and Kay Sambell, and others.

In this post I’d like to look at one of the identified problem areas: academic integrity. I want to consider the relationship between digital assessments and academic integrity, and then put forward some ideas that could better support it. Is the solution removing digital assessments and insisting on in-person handwritten exams? Or is it something more complex and nuanced, like creating the right institutional environment to support academic integrity?

A very brief history of cheating

Cheating in assessments has been around since the origin of assessments. We can all remember fellow students who wrote on their arms, or stored items in their calculator memory. The reddit Museum of Cheating shares cheating stories and images from old Chinese dynasties to the present day. However, the move to digital assessments and online exams has not only shone a light on existing misconduct, but also presented new possibilities for collusion and cheating. Figures vary across the sector, but one dramatic poll by Alpha Academic Appeal found that 1 in 6 students have cheated over the last year, most of whom were not caught by their institution. And “almost 8 out of 10 believed that it was easier to cheat in online exams than in exam halls”. Clearly this is not acceptable. If we cannot ensure that our degree standards are being upheld, this devalues both the Higher Education sector and the qualifications awarded.

We need to begin by accepting that issues exist, as is often the case with any technological progress. For example, think about online banking: it brings many benefits but also challenges as we adapt to online scams and fraud. Artificial Intelligence (AI) adds additional complexity to this picture. Deep learning tools such as GPT-3 now have the power to write original text that can fool plagiarism checking software, but AI can also help streamline marking processes and personalise learning.

A multi-faceted, multi-stakeholder approach

So what is the answer? Is it moving all students back to in-person assessments and reinstating the sort of exams that we were running previously? Will this ensure that we create exceptional UCL graduates ready to deal with the challenges of the world we live in? Will it prevent all forms of cheating?

UCL has heavily invested in a digital assessment platform and digital assessment approaches. How can we take the learning from the pandemic and the progress we have made in digital provision and build on that for the benefit of students and the university? See Peter Bryant’s recent post ‘…and the way that it ends is that the way it began’. While there are some cases where in-person assessment may be the best option temporarily, long-term we actually hope to take a more progressive and future-proof approach. This requires an institutional conversation around our assessment approaches and how our awards remain relevant and sustainable in an ever-uncertain environment. If we are to stand by our commitments to an inclusive, accessible, developmental and rewarding learning experience for students, will this be achieved by returning to ‘traditional’ practice? How can we support staff by managing workload and processes so that they can focus on learning and assessment that make a difference to students?

Changing an institutional environment or culture isn’t a straightforward process. It encompasses many ideas and methods, and requires a multi-stakeholder commitment.

Within the Digital Assessment Team we have begun to look to Phill Dawson’s academic integrity tiers list for reference. Phill Dawson is Associate Director of the Centre for Research in Assessment and Digital Learning at Deakin University in Australia. In his tier list exercise, Phill asks a question about assessment priorities: if the priority is academic integrity and preventing cheating, what strategies will enable you to achieve this? He often talks about an anti-cheating methodology that involves multiple approaches (the Swiss cheese approach), with programmatic assessment, central teams and student amnesties being the most effective mechanisms.

For reference, tier lists are a popular online classification system, usually used to rank video game characters. Their adoption has increasingly extended to other areas, including education. Within the tier system, S stands for ‘superb’, or the best; A to F are the next levels in decreasing value. You can see the full explanation of Phill Dawson’s tier list, along with research to justify his placement of activities, in his HERDSA keynote talk.

Phill Dawson’s academic integrity tiers list

A possible UCL approach?

At UCL we are already undertaking many of these approaches and more in order to build an institutional environment that increases academic integrity. You could say that we have our own Swiss cheese model. Let me give you a flavour of the type of activities that are already happening.

Assessment design – I have heard it said that you can’t design out cheating, but you can inadvertently design it in. Focusing on assessment design is the first step in building better assessments that discourage students from academic misconduct. Within UCL we are bringing together Arena, the Digital Assessment Team and others to look at assessment design early in the process, often at the programme design level, or at the point a programme or module design workshop is run. The aim is diverse, engaging, well-designed assessments (formative and summative) that are connected to learning outcomes and are often authentic and reflective. The UCL Academic Integrity toolkit supports academics in this design process. We are collecting case studies on our team blog.

Assessment support – We need to build student and staff capability with regard to academic integrity. We plan to rationalise and promote academic integrity resources and skills-building training for students. These resources need to give the right message to our students and be constructive, not punitive. They need to be co-developed in collaboration with faculty and students and be part of an ongoing dialogue. They also need to be accessible to our diverse student body including our international cohort who may join our institution with a different understanding of academic practice. There also needs to be well-communicated clarity around assessment expectations and requirements for students. We are supporting Arena with further training specifically for staff, for example on MCQ and rubric design, and with supporting their students in understanding this complex area. There is also scope for training on misconduct identification and reporting processes.  We are also working with Arena to review Academic Integrity courses for students.

Technical approaches – There is a stated need for assessments that focus on knowledge retention in certain discipline areas, which is why we are piloting the Wiseflow lockdown browser. The pilot has three different threads: (1) a Bring Your Own Device pilot with Pharmacy; (2) a pilot using hired laptops in the ExCeL Centre, likely with modules from the School of European Languages, Culture and Society (SELCS); and (3) a departmental pilot using iPads in the Medical School. In parallel, the Maths Department are using Crowdmark to combine in-person assessments with digital exams. And as a team we are building up our knowledge of detection platforms, such as Turnitin and its stylometry tool Draft Coach, and of the use of AI in relation to assessment.

Better understanding of academic misconduct – In order to tackle academic misconduct we need to understand more about why it takes place, not in a general way but at UCL on particular modules and particular assessments, and with particular cohorts or students. Some of this information can be identified through better analysis of data. For example, one recent initiative is the redevelopment of an assessment visualisation (‘Chart’) tool, which depicts the assessment load and patterns across modules. This can help us identify choking points for students. We are also involved in a mini-project working with the student changemakers team in which we will be running a series of workshops with students looking at the area of academic integrity.
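The ‘Chart’ tool itself is internal, but the idea behind it can be sketched in a few lines. The example below uses invented (module, deadline) data – not the real tool or UCL data – to count assessment deadlines per ISO week and flag weeks where several deadlines converge, the kind of choke point the visualisation is designed to surface:

```python
from collections import Counter
from datetime import date

# Invented example data: (module code, assessment deadline).
deadlines = [
    ("MODL0001", date(2023, 1, 9)),
    ("MODL0002", date(2023, 1, 11)),
    ("MODL0003", date(2023, 1, 12)),
    ("MODL0004", date(2023, 3, 20)),
]

# Count deadlines per ISO (year, week) to surface weeks where they pile up.
load = Counter(d.isocalendar()[:2] for _, d in deadlines)

for (year, week), count in sorted(load.items()):
    flag = " <- potential choke point" if count >= 3 else ""
    print(f"{year} week {week:02d}: {'#' * count} ({count}){flag}")
```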

For discussion

We are unlikely to stamp out all forms of cheating, regardless of where our assessments take place. Thinking we can do this is both naïve and short-sighted.

However, we can work towards creating an institutional environment where misconduct is less likely. In this environment we develop students who are well-supported and fully understand good academic practice and its benefits, for themselves as students, and for the community more widely. They are able to prepare well for their assessments and are not put under unnecessary pressure that doesn’t relate to the learning outcomes of their course. They are provided with well-explained, well-designed, challenging and relevant assessments that engage their interest and build capability and skills. They are supported in understanding the misconduct process and can be honest and open about their academic behaviour throughout their time at UCL. If the assessment is heavily knowledge-based, then it takes place in an environment in which access to outside resources is limited, but it is non-invasive and students are still respected and trusted throughout. An approach like this is compassionate and supports student wellbeing. It is cost-effective, by reducing academic misconduct panels and the need for large-scale in-person assessments. It is progressive and scalable, with useful online audit trails. It should work for academic and administrative staff. The Digital Assessment Team is committed to working with academic teams to realise this vision.

What do others think? How does such a vision support our institutional strategy and could it form part of a UCL teaching and assessment framework?