Digital Education team blog

Ideas and reflections from UCL’s Digital Education team

Archive for the 'e-Assessment' Category

Generative AI: Lifeline for students or threat to traditional assessment?

By Marieke Guy, on 21 April 2023

Our increasingly complex world has made the potential impact of artificial intelligence on education more relevant than ever. Gone are the days when AI’s role in academic assessment required extensive explanation; it has become embedded in our daily lives. This shift has caused a wave of concern in Higher Education as traditional assessment practices risk becoming obsolete.

This post is a version of one that appears on the National Centre for AI blog. It was reframed using ChatGPT-4.

In March, Russell Group university leaders convened to discuss the impact of AI on education and the implications for the sector. The event, chaired by Kathy Armour, Vice-Provost (Education & Student Experience) at UCL, featured a panel of students from various disciplines, sharing their experiences and insights on how AI tools, such as ChatGPT, have transformed their approach to learning.

Student panel on AI and assessment facilitated by Chris Thomson, Jisc. The panel summary was provided by Kathy Armour, Vice-Provost (Education & Student Experience) at UCL

The students’ accounts made it clear that the genie is out of the bottle; AI is now so deeply integrated into their learning experience that it would be futile and dangerous to resist the change. For many, AI has become a “life-changing” educational companion, offering a level of support that is impossible to ignore. As such, the students argued, returning to traditional exam halls or engaging in an AI detection arms race would be detrimental to their future employability and wellbeing.

It is evident that a collaborative approach between students and educational leaders is necessary to navigate this brave new world.

Prior to the event the students contributed to the drafting of a set of future-proof principles related to AI and assessment, addressing concerns such as relevance, literacy, rigour, transparency, fairness, and human-centred education.

Working with the students to co-design the AI and assessment principles

The students expressed a desire for their education to prepare them for the wider world and the future workplace, necessitating the adoption of AI in learning, teaching, and assessment. Additionally, students and staff must be supported in developing academic skills in relation to AI, ensuring that learning and development opportunities are not missed. The students pointed to friends who were already creating AI-based start-ups.

Transparency and fairness are crucial when AI tools are used in assessment and marking. Students are particularly concerned about the potential for a widening gap between those who can afford AI tools and those who cannot. This raises the question of whether universities should provide paid-for versions of AI tools as part of their standard IT provision.

Moreover, learning, teaching, and assessment must remain human-centred. AI should enhance, not replace, relationships between students and educators, and AI interactions should promote a pedagogy of care. If students rely on AI to bypass required academic work, it is essential to ask why and provide additional support as needed.

This thought-provoking event demonstrated the importance of engaging in open dialogue with students about the role of AI in education and assessment. As Kathy Armour noted, the challenges posed by AI and assessment are not new; they are rooted in longstanding issues of assessment and curriculum design that continue to challenge the sector. Embracing the potential of AI in education can offer a lifeline to students, but it requires a delicate balance between technological innovation and maintaining the integrity of traditional learning experiences. By working together, students and educators can create a path forward that incorporates AI in a way that benefits all.

The event also featured visionary case studies from sector experts on AI: Sue Attewell, Head of edtech and lead at Jisc’s National Centre for AI in tertiary education; Professor Mike Sharples from the Institute of Educational Technology at the Open University; and Michael Veale, Associate Professor and Deputy Vice Dean (Education) in the Faculty of Laws at UCL.

Draft principles

Thanks go to those involved in this work:

Students:

  • Matthew Banner – Postgraduate in the third year of a PhD in Biochemical Engineering, leading on a student-led partnership project considering assessment design and AI.
  • Sophie Bush – Undergraduate student on History and the Philosophy of Science BSc and lead course rep for Science and Technology studies.
  • Megan Fisher – Second-year undergraduate student studying Economics, with chosen modules in Environmental Economics and Algebra.
  • Rachel Lam – First-year undergraduate law student, serves as a student partner on the assessment design and quality review team.
  • Jennifer Seon – In the final year of her part-time master’s programme studying Education and Technology; her dissertation will focus on collaborative problem-solving in assessment. She recently interviewed AI expert Wayne Holmes for a podcast with the UCL AI Society.
  • Bernice Yeo – Postgraduate student taking the MA in Education and Technology. Works as an examiner for the International Baccalaureate.
  • Sopio Zhgenti – Postgraduate student studying Education and Technology at the Institute of Education with special interest in Artificial Intelligence.

Staff:

  • Marieke Guy (Head of Digital Assessment), UCL
  • Zak Liddell (Director of Education & Student Experience, MAPS), UCL
  • Joanne Moles (Head of Assessment Delivery and Platforms), UCL
  • Jennifer Griffiths (Associate Director in the UCL Arena Centre for Research-based Education), UCL
  • Lizzie Vinton (Assessment Regulations and Governance Manager, Academic Services), UCL
  • Chris Thomson (Programme lead for teaching, learning and assessment), Jisc

Support for AssessmentUCL assessments

By Marieke Guy, on 1 February 2023

The Digital Assessment Team have updated the support processes for users of the AssessmentUCL (Wiseflow) platform. The new support processes are outlined on the following pages:

  • Staff support during an AssessmentUCL assessment – This page includes a link to the staff query form, which asks for additional information on the assessment and module affected. This information will help the Digital Assessment Team and the Digital Education Support Analysts (DESAs) diagnose and rectify issues more quickly.
  • Student support during your assessment – This page includes a link to the Assessment Query Form, which is used by students for any technical failures that happen during an assessment.

The increased use of online support forms is intended to ensure quicker support during peak assessment periods and to avoid routing queries through individual members of staff, which can cause bottlenecks. Departments or modules interested in assessment redesign and use of the AssessmentUCL platform should initially consult the recommendations for readiness guidance and identify whether their assessment approach is suitable for the platform. If they wish to continue the discussion, they can email assessment-advisory@ucl.ac.uk to organise a meeting with a Digital Assessment Advisor.

Assessment process

The team will be developing a series of AssessmentUCL ‘Blueprints’ over the forthcoming months. These will offer a clear path for using the AssessmentUCL platform for particular assessment approaches (e.g. fully automatically marked multiple-choice questions (MCQs), or blind marking). They are intended as a mechanism to enable staff to better self-support, and to use functionality that has been fully tried, tested and documented.
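To illustrate the kind of assessment such a Blueprint targets, here is a minimal sketch of fully automatic MCQ marking. The question IDs, answer key and percentage scoring rule are invented for illustration; they are not AssessmentUCL’s actual data model.

```python
# Minimal sketch of automatic MCQ marking. The answer key, question IDs
# and percentage scoring rule are hypothetical illustrations, not
# AssessmentUCL's actual data model.

answer_key = {"Q1": "B", "Q2": "D", "Q3": "A"}  # correct option per question

def mark_mcq(responses):
    """Return a percentage score for one student's responses."""
    correct = sum(1 for q, key in answer_key.items() if responses.get(q) == key)
    return 100 * correct / len(answer_key)

print(mark_mcq({"Q1": "B", "Q2": "C", "Q3": "A"}))  # prints 66.66...
```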

The AssessmentUCL Resource centre continues to be updated with additional guidance and videos. The team also run regular training sessions.

An institutional environment that increases academic integrity

By Marieke Guy, on 25 November 2022

Over the last two years UCL has made major strides in its use, support and understanding of digital assessment at scale. Prior to the pandemic there had been interest in providing a centrally-managed environment for exams and other assessment types. Covid accelerated these plans, resulting in the procurement of the AssessmentUCL platform. The move to digital assessments brings many benefits for users (as evidenced by Jisc and others). Students can be assessed in an environment they are increasingly comfortable with; there are administration benefits, for example through system integrations and streamlined processes; and, pedagogically, digital assessments allow us to experiment with diverse, authentic ways to appraise, and contribute to, learning.

However, at their core, digital assessment environments consist of a set of tools and technologies which can be used well or badly. In a recent blog post, ‘Are digital assessments bad?’, I discussed many of the reservations people have about their use, including issues related to digital equity, technology limitations and academic misconduct. My conclusion was that “these issues are not insignificant, but all of them, perhaps with the exception of a preference for offline, are surmountable. The solutions are complex and require resource and commitment from institutions. They also require a change management approach that recognises both the importance of these issues, and how they impact on individuals. It would be a massive fail to brush them under the carpet.” We in the Digital Assessment Team stand by this conclusion. The move to online assessment during Covid has surfaced many pre-existing issues with ‘traditional’ assessment, as identified by Sally Brown, Kay Sambell and others.

In this post I’d like to look at one of the identified problem areas: academic integrity. I want to consider the relationship between digital assessments and academic integrity, and then put forward some ideas that could better support it. Is the solution removing digital assessments and insisting on in-person handwritten exams? Or is it something more complex and nuanced, like creating the right institutional environment to support academic integrity?

A very brief history of cheating

Cheating in assessments has been around since the origin of assessments. We can all remember fellow students who wrote on their arms, or stored items in their calculator memory. The Reddit Museum of Cheating shares cheating stories and images from old Chinese dynasties to the present day. However, the move to digital assessments and online exams has not only shone a light on existing misconduct, but also presented new possibilities for collusion and cheating. Figures vary across the sector, but one dramatic poll by Alpha Academic Appeal found that 1 in 6 students have cheated over the last year, most of whom were not caught by their institution. And “almost 8 out [of] 10 believed that it was easier to cheat in online exams than in exam halls”. Clearly this is not acceptable. If we cannot ensure that our degree standards are being upheld, this devalues both the Higher Education sector and the qualifications awarded.

We need to begin by accepting that issues exist, as is often the case with any technological progress. For example, think about online banking. It brings many benefits but also challenges, as we adapt to online scams and fraud. Artificial Intelligence (AI) adds additional complexity to this picture. Deep learning tools such as GPT-3 now have the power to write original text that can fool plagiarism-checking software, but AI can also help streamline marking processes and personalise learning.

A multi-faceted, multi-stakeholder approach

So what is the answer? Is it moving all students back to in-person assessments and reinstating the sort of exams that we were running previously? Will this ensure that we create exceptional UCL graduates ready to deal with the challenges of the world we live in? Will it prevent all forms of cheating?

UCL has invested heavily in a digital assessment platform and digital assessment approaches. How can we take the learning from the pandemic and the progress we have made in digital provision and build on that for the benefit of students and the university? See Peter Bryant’s recent post ‘…and the way that it ends is that the way it began’. While there are some cases where in-person assessment may be the best option temporarily, long-term we hope to take a more progressive and future-proof approach. This requires an institutional conversation around our assessment approaches and how our awards remain relevant and sustainable in an ever-uncertain environment. If we are to stand by our commitments to an inclusive, accessible, developmental and rewarding learning experience for students, will this be achieved by returning to ‘traditional’ practice? How can we support staff by managing workload and processes so that they can focus on learning and assessment that make a difference to students?

Changing an institutional environment or culture isn’t a straightforward process. It encompasses many ideas and methods, and requires a multi-stakeholder commitment.

Within the Digital Assessment Team we have begun to look to Phill Dawson’s academic integrity tier list for reference. Phill Dawson is Associate Director of the Centre for Research in Assessment and Digital Learning at Deakin University in Australia. In his tier list exercise Phill asks a question about assessment priorities: if the priority is academic integrity and preventing cheating, what strategies will enable you to achieve this? He often talks about an anti-cheating methodology that involves multiple approaches (the Swiss cheese approach), with programmatic assessment, central teams and student amnesties being the most effective mechanisms.

For reference, tier lists are a popular online classification format, usually used to rank video game characters. Their adoption has increasingly extended to other areas, including education. Within the tier system, S stands for ‘superb’, or the best, and A to F are the next levels in decreasing value. You can see the full explanation of Phill Dawson’s tier list, along with the research justifying his placement of activities, in his HERDSA keynote talk.

Phill Dawson’s academic integrity tier list

A possible UCL approach?

At UCL we are already undertaking many of these approaches and more in order to build an institutional environment that increases academic integrity. You could say that we have our own Swiss cheese model. Let me give you a flavour of the type of activities that are already happening.

Assessment design – I have heard it said that you can’t design out cheating, but you can inadvertently design it in. Focusing on assessment design is the first step in building better assessments that discourage students from academic misconduct. Within UCL we are bringing together Arena, the Digital Assessment Team and others to look at assessment design early in the process, often at programme design level, or at the point a programme or module design workshop is run. The aim is diverse, engaging, well-designed assessments (formative and summative) that are connected to learning outcomes and are often authentic and reflective. The UCL Academic Integrity toolkit supports academics in this design process. We are collecting case studies on our team blog.

Assessment support – We need to build student and staff capability with regard to academic integrity. We plan to rationalise and promote academic integrity resources and skills-building training for students. These resources need to give the right message to our students and be constructive, not punitive. They need to be co-developed in collaboration with faculty and students and be part of an ongoing dialogue. They also need to be accessible to our diverse student body, including our international cohort, who may join our institution with a different understanding of academic practice. There also needs to be well-communicated clarity around assessment expectations and requirements for students. We are supporting Arena with further training specifically for staff, for example on MCQ and rubric design, and on supporting students in understanding this complex area. There is also scope for training on misconduct identification and reporting processes. We are also working with Arena to review Academic Integrity courses for students.

Technical approaches – There is a stated need for assessments that focus on knowledge retention in certain discipline areas, which is why we are piloting the Wiseflow lockdown browser. The pilot has three different threads: (1) a Bring Your Own Device pilot with Pharmacy; (2) a pilot using hired laptops in the ExCeL Centre, likely with modules from the School of European Languages, Culture and Society (SELCS); and (3) a departmental pilot using iPads in the Medical School. In parallel, the Maths Department are using Crowdmark to combine in-person assessments with digital exams. And as a team we are building up our knowledge of detection platforms, such as Turnitin and its stylometry tool Draft Coach, and of the use of AI in relation to assessment.

Better understanding of academic misconduct – In order to tackle academic misconduct we need to understand more about why it takes place, not in a general way but at UCL, on particular modules and particular assessments, and with particular cohorts of students. Some of this information can be identified through better analysis of data. For example, one recent initiative is the redevelopment of an assessment visualisation (‘Chart’) tool, which depicts the assessment load and patterns across modules. This can help us identify choke points for students. We are also involved in a mini-project with the student changemakers team in which we will be running a series of workshops with students looking at the area of academic integrity.
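As a purely hypothetical illustration of the kind of analysis behind such a tool (the module codes, dates and structure below are invented, not the Chart tool’s real data), a few lines of Python can surface the weeks in which deadlines pile up:

```python
# Hypothetical sketch of spotting assessment choke points: count how many
# deadlines fall in each ISO week. Module codes and dates are invented.
from collections import Counter
from datetime import date

deadlines = [
    ("ECON0001", date(2023, 3, 13)),
    ("ECON0002", date(2023, 3, 15)),
    ("STAT0003", date(2023, 3, 17)),
    ("HIST0004", date(2023, 4, 24)),
]

load_per_week = Counter(d.isocalendar()[1] for _, d in deadlines)
for week, n in sorted(load_per_week.items()):
    print(f"ISO week {week}: {n} deadline(s)")  # week 11 is the choke point here
```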

For discussion

We are unlikely to stamp out all forms of cheating, regardless of where our assessments take place. Thinking we can do this is both naïve and short-sighted.

However, we can work towards creating an institutional environment where misconduct is less likely. In this environment we develop students who are well-supported and fully understand good academic practice and its benefits, for themselves as students and for the community more widely. They are able to prepare well for their assessments and are not put under unnecessary pressure that doesn’t relate to the learning outcomes of their course. They are provided with well-explained, well-designed, challenging and relevant assessments that engage their interest and build capability and skills. They are supported in understanding the misconduct process and can be honest and open about their academic behaviour throughout their time at UCL. If the assessment is heavily knowledge-based, then it takes place in an environment in which access to outside resources is limited, but it is non-invasive and students are still respected and trusted throughout. An approach like this is compassionate and supports student wellbeing. It is cost-effective, reducing academic misconduct panels and the need for large-scale in-person assessments. It is progressive and scalable, with useful online audit trails. It should work for academic and administrative staff. The Digital Assessment Team is committed to working with academic teams to realise this vision.

What do others think? How does such a vision support our institutional strategy and could it form part of a UCL teaching and assessment framework?

What’s next for the Digital Assessment Team?

By Marieke Guy, on 10 October 2022

In his conversation with Mary Richardson at the UCL Education conference earlier this year, Professor David Boud laid out the unavoidable truth that “students are profoundly affected by how they are assessed, it provides messages to them about what really counts”. He explained that assessment “has to contribute to students learning”, as student learning behaviour is both framed and driven by assessment. In essence, good assessment is imperative in ensuring good education. At UCL we are lucky to have a dedicated Digital Assessment team based within the Digital Education team in ISD. This gives us a real opportunity to improve the assessment journey for both staff and students.

A little history

Back in 2020 the AssessmentUCL Project was established to “procure and deliver an end-to-end digital assessment delivery platform to support online assessments”. The project successfully purchased and implemented Wiseflow, the first part of the platform known as AssessmentUCL. During the first exam period of 2020-21, 1,071 assessments were delivered in 48,742 sittings. This increased to 1,717 assessments and 57,590 sittings in 2021-22. Throughout the process the product team have listened to feedback on experiences from staff and students, and shaped regulations and policy accordingly.

To support the transition to online assessment at UCL, the Digital Assessment team was formed in September 2021, led by Anisa Patel. The new team initially comprised five Digital Assessment Advisors (DAAs), responsible for different faculties, and a Learning Technologist. Over the last year the team has worked incredibly hard. They have developed hands-on training (running 45+ training sessions over the last year) and comprehensive guidance covering the AssessmentUCL platform; they have championed the capabilities of the platform and tested all aspects of functionality. They have also spoken at length with faculty colleagues to understand their assessment needs and requirements. In some sense they have been acting in an ‘ambassador’ role, making sure students, academics and professional staff are heard. This has resulted in positive enhancements for both the platform and wider assessment practice.

Plans for this year

The team continues to evolve. Right now we have three DAAs: Eliot Hoving, Isobel Bowditch and Lene-Marie Kjems; one Learning Technologist: Nadia Hussain; and one Senior Learning Technologist: John Spittles. You can read more about the team on the Digital Education Team site, and about which faculty they are responsible for supporting on the Education support contacts page. I have recently taken on the position of Head of Digital Assessment at UCL, moving on from my previous DAA role.

Digital Education Team. Top row: Eliot Hoving, Marieke Guy, John Spittles. Bottom row: Lene-Marie Kjems, Isobel Bowditch. Nadia Hussain joined after this photo was taken.

In the 2022-23 academic year our approach is two-pronged. Firstly, we plan to ensure scalable and effective support for AssessmentUCL. We will do this through improving our support model and training resources, continuing to push forward improvements to the AssessmentUCL platform and ensuring the effectiveness of current policies and practices. Some of the areas we will be looking at include the use of formative assessment and on-campus digital assessment using a lockdown browser. Secondly, we will look at other areas in which we can support good assessment design and delivery more widely. We want to explore and promote best practice and be leaders in the assessment area. For example, we have begun work on an academic integrity and misconduct project to help us better understand the issue from both staff and student perspectives, and to identify strategies to promote and support academic integrity. Whenever possible we will be working in collaboration with Arena, our academic colleagues, the Central Assessment Teams and the AssessmentUCL product team. It is likely to be a busy year!

We have started a team blog to share our activities and support wider assessment discussion, and also plan to post more regularly on the Digi-Ed blog.

Assessment hackathon event – Collaboratively exploring the digital assessment challenge  

By Anisa Patel, on 28 March 2022

“In a digital era, how can we design assessments differently and better?” 

This is the question that a group of more than 35 key partners in UCL’s teaching and assessment community gathered to consider last week. Hosted jointly by Arena and Digital Education, the group comprised academics, students, the Digital Assessment team, Faculty Learning Technologists, professional services staff and representatives from UNIwise (suppliers of our digital assessment platform, AssessmentUCL).

Attendees were split into teams of mixed disciplines to share their experience of assessment at UCL and bring forward ideas and recommendations on how digital assessments could look in the future. 

The breadth of representation made for a rich and varied discussion and enabled each partner to express their principal areas of focus:

  • From our student representatives we heard a genuine desire both for continued improvement, enabled through assessment feedback rather than marks alone, and for assessments that test modern-day marketplace skills (e.g. distilling information, writing reports).
  • Our academic representatives expressed their overarching concern to do the right thing by our students: seeking to understand and share methods and tools for designing assessments to help students in future life.
  • Our Digital Assessment team focussed on ways to build and share capabilities within faculties, enabling us to connect academics, students and technologists, and on ways to understand and share how technology can support innovative assessment design both now and in the future.
  • Like our students, our Faculty Learning Technologists focussed on the importance of feedback (or “feedforward”) which is easily accessible, timely and meaningful: enabling students to act upon it.  
  • Our Professional Services representatives focused on how to connect people to ensure that fantastic work around assessment design is shared widely and to ensure that academics and faculty teams are all aware of the tools and supporting resources available to them. 
  • Our UNIwise representatives were interested in all points raised: keen to consider how future enhancements to the AssessmentUCL platform might facilitate the continued evolution of digital assessment.

Group discussions were wide-ranging and often raised more questions than answers, but surfaced a clear desire to continue the conversations about the issues raised and to focus on how we share knowledge to maximise the depth of expertise in assessment design across departments.

Next steps 

As a result of the discussion, Arena and the Digital Assessment team will focus their attention on the following key themes over the coming months:  

  1. Enabling assessment design knowledge sharing: Helping academics and others involved in assessment design to understand what is possible and how to achieve it. Ensuring clear information channels and networks are established to enable those involved in assessment design to share experience and learn from others, as well as raising awareness of existing resources and support.
  2. Continuing to improve the marker’s journey: Exploring how to enable more flexibility in marking, for example by allocating all work to all markers so that each can take the next unmarked script off the top of the pile (see the sketch after this list), and ensuring that no marks are lost when marking allocations change before or after marking.
  3. Continuing the conversation: Building on the foundations and links established during this event, we plan to set up a learning lab through which staff and students can continue the discussion around how we can design digital assessments differently and better, using existing mechanisms like the Chart tool, as well as rethinking how assessments are currently delivered to make them more applicable to real-world situations and careers.
  4. Working collaboratively with suppliers and academic colleagues to shape enhancements and design solutions to particular issues: We hope to connect key members in departments with our suppliers UNIwise to workshop particular issues and weigh desirability against current system functionality.
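A minimal sketch of the ‘take the next unmarked script’ idea in point 2 above. This illustrates the desired workflow rather than any existing AssessmentUCL functionality, and the script names are invented:

```python
# Sketch of 'pull the next unmarked script' allocation. Illustrative only:
# this shows the desired workflow, not existing AssessmentUCL functionality.
from collections import deque

unmarked = deque(["script_001", "script_002", "script_003"])

def next_script(marker):
    """Hand the marker the next script nobody has started, if any remain."""
    if unmarked:
        script = unmarked.popleft()
        print(f"{marker} takes {script}")
        return script
    return None

next_script("Marker A")  # Marker A takes script_001
next_script("Marker B")  # Marker B takes script_002
```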

If you would like to join events like this in the future, please let us know by contacting assessments@ucl.ac.uk.

Marking centrally managed exams in 2021

By Steve Rowett, on 22 March 2021

Please note that this page will be updated regularly.


Background

As part of UCL’s continued COVID-19 response, centrally managed examinations for 2021 will be held online. Approximately 19,000 students will undertake over 1,000 exam papers, resulting in about 48,000 submitted pieces of work. These exams are timetabled, and (for the most part) students will submit a PDF document as their response. Students have been provided with their exam timetable and guidance on creating and submitting their documents. The exception to this is some ‘pilot’ examinations that are taking place using other methods on the AssessmentUCL platform, but unless you are within that pilot group, the methods described here will apply.

The move to online 24 hour assessments that replace traditional exams creates a challenge for those who have to grade and mark the work. This blog post updates a similar post from last year with updated guidance, although the process is broadly the same.

Physical exams are being replaced with 24 hour online papers, scheduled through the exam timetabling system. Some papers will be available for students to complete for the full 24 hours, in other cases students ‘start the clock’ themselves to take a shorter timed exam within that 24 hour window.
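To make the second model concrete, a student’s effective cut-off is their own start time plus the exam duration, capped at the end of the 24 hour window. A minimal sketch, with invented times:

```python
# Sketch of the 'start the clock' deadline logic: a student's cut-off is
# their start time plus the exam duration, capped at the end of the 24
# hour window. All times here are invented examples.
from datetime import datetime, timedelta

window_start = datetime(2021, 5, 10, 9, 0)
window_end = window_start + timedelta(hours=24)
exam_duration = timedelta(hours=3)

def personal_deadline(started_at):
    return min(started_at + exam_duration, window_end)

print(personal_deadline(datetime(2021, 5, 10, 14, 0)))  # 2021-05-10 17:00
print(personal_deadline(datetime(2021, 5, 11, 7, 30)))  # capped: 2021-05-11 09:00
```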

We start from a place of two knowns:

  • Students are submitting work as a PDF document to the AssessmentUCL platform during the 24 hour window; and
  • Final grades need to be stored in Portico, our student records system.

But in between those two endpoints, there are many different workflows by which marking can take place. These are set out in UCL’s Academic Manual but encompass a range of choices, particularly in how second marking is completed. One key difference from regular coursework is that this is not about providing feedback to students, but about supporting the marking process, the communication between markers and the required record of the marking process. At the end of the marking process departments will need to ensure that scripts are stored securely but can be accessed by relevant staff as required, much in line with the requirements for paper versions over previous years.

There is no requirement to use a particular platform or method for marking, so there is considerable flexibility for departments to use processes that work best for them. We are suggesting a menu of options which provide a basis for departments to build on if they so choose. We are also running regular training sessions, which are listed at the foot of this document.

The menu options are:

  • Markers review the scripts and mark or annotate them using AssessmentUCL’s annotation and markup tools;
  • Departments can download PDF copies of scripts which can be annotated using PDF annotation software on a computer or tablet device;
  • Markers review the scripts on-screen using AssessmentUCL, but keep a ‘marker file’ of notes and comments on the marking process;
  • Markers print the scripts and mark them, then scan them for storage or keep them for return to the department on paper.

The rest of this post goes into these options in more detail. There is also a growing AssessmentUCL resource centre with detailed guidance on exams, which will be launched shortly and will evolve as the AssessmentUCL platform becomes more widely used across UCL.


Overview of central exam marking

This video provides a short (4 minute) introduction to the methods of marking exam papers in 2021. This video has captions available.


Marking online using AssessmentUCL’s annotation tools

AssessmentUCL provides a web-based interface where comments can be overlaid on a student’s work. A range of second marking options are available to allow comments to be shared with other markers or kept hidden from them. The central examinations team will set up all centrally managed exams based on the papers and information submitted by departments.

The video (24 minutes) below provides a walkthrough of the marking process using the annotation and grading tools in AssessmentUCL. It also shows how module leaders can download PDFs of student papers if they wish to mark using other methods or download marks if they are using AssessmentUCL. This video has captions available.

This video (7 minutes) gives more detailed guidance on ‘section-based marking’ where different markers are marking different questions across the submitted papers. This video has captions available.



Guidance for external examiners

This video provides guidance for external examiners who are using AssessmentUCL to view papers and marks. This video has captions.


Annotation using PDF documents

Where your annotation needs are more sophisticated, or you want to ‘write’ on the paper using a graphics tablet or a tablet and pencil/stylus, this option may suit you better.

Module leads and exams liaison officers can download a ZIP file containing all the submitted work for a given exam. Unlike last year, a student’s candidate number is prefixed onto the filename, and can be included within the document itself, to make identifying the correct student much easier.
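As an illustration, a short script can unpack such a download and index the scripts by candidate number. The ZIP name and the filename convention assumed below (candidate number before the first underscore) are hypothetical, so adjust the split to match the real files:

```python
# Sketch of unpacking a downloaded ZIP and indexing scripts by candidate
# number. The ZIP name and the filename convention (candidate number
# before the first underscore) are assumptions; adjust to the real files.
import zipfile
from pathlib import Path

with zipfile.ZipFile("exam_download.zip") as zf:
    zf.extractall("scripts")

by_candidate = {}
for pdf in Path("scripts").glob("*.pdf"):
    candidate = pdf.name.split("_", 1)[0]  # e.g. "ABCD1" from "ABCD1_paper.pdf"
    by_candidate[candidate] = pdf

print(f"Found scripts for {len(by_candidate)} candidates")
```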

You can then use tools you already have or prefer to use to do your marking. There is more flexibility here, and we will not be able to advise and support every PDF tool available or give precise instructions for every workflow used by departments, but we give some examples here.

Marking on an iPad using OneDrive

Many staff have reported that an iPad with Apple Pencil or an Android tablet with a stylus makes a very effective marking tool. You can use the free Microsoft OneDrive app, or Apple’s built-in Files app if you are using an iPad. Both can connect to your OneDrive account, which could be a very useful way to store your files. An example of this using OneDrive is shown below; the Apple Files version is very similar.

There’s further guidance from Microsoft on each individual annotation tool.

Marking on a PC or Surface Pro using Microsoft Drawboard PDF

Microsoft Drawboard PDF is a very comprehensive annotation tool, but is only available for Windows 10 and is really designed to be used with a Surface Pro or a desktop with a graphics tablet. Dewi Lewis from UCL Chemistry has produced a video illustrating the annotation tools available and how to mark a set of files easily. Drawboard PDF is available free of charge from Microsoft.

Marking on a PC, Mac or Linux machine using a PDF annotation program

Of course there are plenty of third-party tools that support annotating PDF documents. Some require payment to access the annotation facilities (or to save files that have been annotated), but two that do not are Xodo and Foxit PDF.

Things to think about with this approach:

  • Your marking process: if you use double-blind marking you might need to make two copies of the files, one for each marker (see the sketch after this list). If you use check marking then a single copy will suffice.
  • You will need to ensure the files are stored securely and can be accessed by the relevant departmental staff in case of any query. You might share the exam submission files with key contacts such as teaching administrators or directors of teaching.
  • Some of the products listed above have a small charge, as would any stylus or pencil that staff would need. These cannot be supplied centrally, so you may need a process for staff claiming back the costs from departments.
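A minimal sketch of the double-blind point above: making one independent copy of the downloaded scripts per marker. The folder names are illustrative:

```python
# Sketch of preparing for double-blind marking: one independent copy of
# the downloaded scripts per marker. Folder names are illustrative.
import shutil

for marker in ("marker_1", "marker_2"):
    # copytree duplicates the whole scripts folder for this marker
    shutil.copytree("scripts", f"scripts_{marker}", dirs_exist_ok=True)
```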

Using a ‘marker file’

Accessing the students’ scripts is done using AssessmentUCL, which allows all the papers to be viewed online individually or downloaded in one go. A separate document is then kept (either one per script, or one overall) containing the marks and marker comments for each script. If double-blind marking is being used, two such documents, or sets of documents, can be kept in this way, one per marker.
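As an illustration (the column headings and file name are invented, not a UCL template), a marker file could be as simple as one CSV row per script:

```python
# Sketch of a simple 'marker file': one CSV row per script recording the
# mark and comments. Column headings and file name are invented, not a
# UCL template.
import csv

rows = [
    {"candidate": "ABCD1", "mark": 68, "comments": "Strong on Q2; Q4 incomplete."},
    {"candidate": "ABCD2", "mark": 74, "comments": "Excellent throughout."},
]

with open("marker_file_marker1.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["candidate", "mark", "comments"])
    writer.writeheader()
    writer.writerows(rows)
```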


Printing scripts and marking on paper

Although we have moved to online submission this year, colleagues are still welcome to print documents and mark on paper. However, there is no central printing service available for completed scripts to be printed, so this would have to be managed individually or locally by departments.


The evidence about marking online

In this video Dr Mary Richardson, Associate Professor in Educational Assessment at the IOE, gives a guide to how online marking can differ from paper-based marking and offers some tips for those new to online marking. The video has captions.


Training sessions and support

Markers can choose to mark Late Summer Assessment (LSA) exams in the same way as they marked the main round of exams. For a refresher, or if you were not involved in marking centrally managed exams, please refer to the recording of one of Digital Education’s previous training sessions (UCL staff login required) and the slides used in the training session, which can be downloaded.

Throughout the LSA period (16th August to 3rd September), drop-in sessions will run on Tuesdays, Wednesdays and Thursdays, 12-1pm. The drop-in sessions are relevant to markers, moderators, Module Leads and Exams Liaison Officers. Attendees can ask questions, or see a demo of the marking options. You can find the link for these and join immediately.

You can of course contact UCL Digital Education for further help and support.