
Digital Education team blog


Ideas and reflections from UCL's Digital Education team


#LearnHack 7 reflections

By Geraldine Foley, on 8 February 2024

Over the weekend of 26-28 January I helped to facilitate, and took part in, the seventh iteration of #LearnHack.

#LearnHack is a community hackathon organised by an interdisciplinary UCL team. The original event was held in November 2015 in collaboration with UCL Innovation and Enterprise at IDEALondon. The 2024 edition was the first to be run as a hybrid event. It was held over the weekend of 26-28 January in the School of Management department at Canary Wharf, in collaboration with the Faculty of Engineering, Digital Education and UCL Changemakers. Participants came from 12 different UCL departments and included alumni and external guests from Jisc. Everyone was invited to submit project proposals for how to improve UCL, based on pre-agreed themes. This year's themes were AI and Assessment, with overlap between the two.

Being fairly new to UCL, I had not come across this event before, but when I was told about the ethos behind it, which is to empower a community of staff, students, researchers and alumni to tackle challenges collaboratively and creatively, it sounded right up my street. I am a big advocate of playful learning and of creating a safe space for experimentation and failure. I also liked the interdisciplinary approach, which encourages people from all backgrounds to work together and learn from each other. Anyone with a valid UCL email address can submit a project proposal to be worked on over the weekend, and anyone can run a learning session to share their skills or ideas with participants. Everyone is encouraged to attend welcome talks on the Friday evening to hear about the different projects, get to know each other and form teams. Participants then have the weekend to work on their chosen project and take part in learning sessions.

I’m always up for a challenge, so I not only put forward a project proposal and ran a learning session, but I also helped to facilitate the online attendees on the Friday evening and Saturday morning. This meant it was a packed weekend and I got to experience all the different elements of #LearnHack, including joining online on the second day. 

View from UCL School of Management at Canary Wharf.


The venue was amazing, with great views of London, and the School of Management spaces were perfect for collaboration and hybrid events. The learning sessions were great; I particularly enjoyed learning how to use Lumi and GitHub to create and host H5P activities outside of Moodle, so that they can be shared externally. I also found out about the game that ARC had devised for engineers and developers to learn about the issues associated with generative AI, in which players can help prevent, or create, an AI Fiasco.

My own session on making a playful AI chatbot was run online, but many people joined from the room. The session encouraged people to experiment with different types of chatbot and have a go at creating their own. We managed to create some interesting applications in the short time we had, including a bot that accurately answered questions on using Moodle, Zoom and Turnitin. We also explored how a bot's personality can affect users' interactions with it and their perceptions of the accuracy of its responses, and had some interesting discussions about the ethical issues involved in users uploading material to datasets.
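The idea that a bot's personality shapes how users read its answers can be demonstrated even without a generative model. The sketch below is a toy, rule-based illustration only; the session itself used real AI chatbot builders, and all of the names, persona prefixes and canned answers here are invented for this example, not taken from the session.

```python
# A toy, rule-based chatbot with a configurable "personality" prefix.
# Purely illustrative: all keywords, personas and canned answers below
# are invented for this sketch.

RESPONSES = {
    "moodle": "You can find Moodle guidance on the UCL wiki.",
    "zoom": "Zoom meetings can be scheduled from the UCL Zoom portal.",
    "turnitin": "Turnitin generates a similarity report for submitted work.",
}

PERSONAS = {
    "formal": "Thank you for your question. ",
    "playful": "Ooh, great question! ",
}

def reply(message: str, persona: str = "formal") -> str:
    """Return a canned answer, prefixed according to the bot's persona."""
    prefix = PERSONAS.get(persona, "")
    for keyword, answer in RESPONSES.items():
        if keyword in message.lower():
            return prefix + answer
    return prefix + "I'm not sure - try asking about Moodle, Zoom or Turnitin."

print(reply("How do I use Moodle?", persona="playful"))
```

The factual content of the answer is identical across personas; only the framing changes, which is exactly the variable our session participants experimented with when judging how trustworthy a bot's responses felt.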

In between games, food and learning sessions, teams worked on five different projects. I was impressed with all the project teams and the work they managed to produce in such a short space of time. The winning team stood out in particular, as they created a working prototype using ChatGPT. Their project aims to reduce the time that medical science students spend manually searching through articles looking for replicable research. This team now have Student Changemakers funding to create an optimiser that filters through biomedical research papers and extracts quality quantitative methods. It is hoped that the 'protocol optimiser' will streamline workflows for researchers and students looking for suitable lab work. I am looking forward to following the development of their project, and hopefully they will report back at a Changemakers event later in the year.

#LearnHack 7 Feedback on participants ‘best bits’ of the event.

Despite a smaller number of attendees than hoped, feedback from participants was positive, with calls to raise awareness among the student population through promotion in freshers' week and via the careers service. Personally, I had a great time, although next time I wouldn't try to do quite so much: I would either stick to being involved in a project, or to helping to facilitate and run sessions. The Faculty of Engineering has already given the go-ahead for #LearnHack 8, and we are currently exploring the possibility of running some mini #LearnHack events before then, so watch this space for more details, and if you have an idea for a project then get in touch.

The Turnitin Plagiarism plugin tool for Moodle assignments is finally here…

By Janice Kiugu, on 29 March 2019

Digital Education are pleased to announce that the Turnitin Plagiarism plugin tool for Moodle assignments will be available on New Moodle from 2nd April 2019.

This means that assignments submitted via the Moodle assignment tool can now be checked for similarities in text and a Similarity report generated.

Enabling this will ensure parity across assignment types in Moodle (with regard to similarity checking), allowing staff and students to check all pieces of work for similarities in text, not just those submitted via a standard Turnitin assignment.

All staff will be able to enable the setting on Moodle assignments. However, this will be run as a pilot: some (but not all) institutions that have the plugin enabled have reported a few issues which can be resolved but require workarounds, and Digital Education need to ensure that these issues can be resolved and managed. The pilot will run until the Moodle Snapshot is taken on 26th July 2019.

There are a few key things to note:

Does the tool work with group submissions?

Yes, however note that:

  • Only the student who made the submission will be able to view the similarity report and will need to share it with other students in the group.
  • All marking should be done using the Moodle Grading tools to ensure all students in the group have a grade recorded and can see any feedback given.

What impact will the plugin have on existing Moodle Assignments?

None at all. Existing assignments will remain as they are. If the plugin is enabled for assignments that have already been set and submissions made, then no similarity report will be generated. Students would have to resubmit to get a similarity report.

Can I set up a Moodle assignment with Turnitin enabled and grade the work in Turnitin feedback studio?

No, we do not recommend this. Turnitin should only be used for similarity checking and NOT for grading when it is enabled in a Moodle assignment. We have detailed the reasons for this on the wiki guide.

For additional guidance on how to use this tool as a staff member, please refer to the guide, Moodle Assignment with Turnitin integration. If you are a student, please see the student guide.

For specific queries or support, email: digi-ed@ucl.ac.uk

Digital Education Services

New E-Book on Assessment, Feedback and Technology

By Tim Neumann, on 1 November 2017

UCL Digital Education Advisory members contributed to a new Open Access e-book that provides valuable insight into the way technology can enhance assessment and feedback. The book was launched formally on 26th October by Birkbeck College Secretary Keith Harrison, with talks from the editors Leo Havemann (Birkbeck, University of London) and Sarah Sherman (BLE Consortium), three case study authors, and event sponsor Panopto.

Havemann, Leo; Sherman, Sarah (2017): Assessment, Feedback and Technology: Contexts and Case Studies in Bloomsbury. London: Bloomsbury Learning Environment.
View and download from: https://doi.org/10.6084/m9.figshare.5315224.v1

 

The Book


E-Book Cover

The book is a result of a two-year project on e-assessment and feedback run by the Bloomsbury Learning Environment (BLE), a collaboration between five colleges, including the UCL Institute of Education, on issues around digital technology in Higher Education. It contains three research papers which capture snapshots of current practice, and 21 case studies from the BLE partner institutions and a little beyond, thus including practice from wider UCL.

The three papers focus on

  • the use of technology across the assessment lifecycle,
  • the roles played by administrative staff in assessment processes, and
  • technology-supported assessment in distance learning.

The case studies are categorised under the headings:

  • alternative [assessment] tasks and formats,
  • students feeding back,
  • assessing at scale,
  • multimedia approaches, and
  • technical developments.

Seven of the 21 case studies were provided by UCL Digital Education colleagues Jess Gramp, Jo Stroud, Mira Vogel (2), and Tim Neumann (3), reporting on examples of blogging, group assessment, peer feedback, assessment in MOOCs, student presentations at a distance, and the UCL-developed My Feedback Report plugin for Moodle.

 

Why you should read the e-book

Launch Event Photo

BLE E-Book Launch Event

As one of the speakers at the entertaining launch event, I suggested three reasons why everybody involved in Higher Education should read this book, in particular the case studies:

  1. Processes in context:
    The case studies succinctly describe assessment and feedback processes in context, so you can quickly decide whether these processes are transferable to your own situation, and you will get a basic prompt on how to implement the assessment/feedback process.
  2. Problems are highlighted:
    Some case studies don’t shy away from raising issues and difficulties, so you can judge for yourself whether these difficulties represent risks in your context, and how these risks can be managed.
  3. Practical tips:
    All case studies follow the same structure. If you are in a hurry, make sure to read at least the Take Away sections of each case study, which are full of tips and tricks, many of which apply to situations beyond the case study.

Overall, this collection of papers and case studies on assessment and feedback is easily digestible and contributes to an exchange of good practice.

 

View and Download the Book

The e-book is an Open Access publication, freely available via the link above.

For further information, see ble.ac.uk/ebook.html, and view author profiles at ble.ac.uk/ebook_contributors.html

 

About the BLE:
The Bloomsbury Learning Environment is a collaboration between Birkbeck, London School of Hygiene and Tropical Medicine (LSHTM), Royal Veterinary College (RVC), School of Oriental and African Studies (SOAS), UCL Institute of Education (IOE), and the University of London, with a focus on technologies for teaching and learning, including libraries and administration.
See www.ble.ac.uk for more information.

Comparing Moodle Assignment and Turnitin for assessment criteria and feedback

By Mira Vogel, on 8 November 2016

Elodie Douarin (Lecturer in Economics, UCL School of Slavonic and Eastern European Studies) and I have been comparing how assessment criteria can be presented to engage a large cohort of students with feedback in Moodle Assignment and Turnitin Assignment (report now available). We took a mixed-methods approach using a questionnaire, a focus group and screencasts recorded by students as they accessed their feedback and responded to our question prompts. Here are some of our key findings.

Spoiler – we didn’t get a clear steer over which technology is (currently) better – they have different advantages. Students said Moodle seemed “better-made” (which I take to relate to theming issues rather than software architecture ones) while the tutor appreciated the expanded range of feedback available in Moodle 3.1.

Assessment criteria

  • Students need an opportunity to discuss, and ideally practise with, the criteria in advance, so that they and the assessors can reach a shared view of the standards by which their work will be assessed.
  • Students need to know that criteria exist and be supported to use them. Moodle Assignment is good for making rubrics salient, whereas Turnitin requires students to know to click an icon.
  • Students need support to benchmark their own work to the criteria. Moodle or Turnitin rubrics allow assessors to indicate which levels students have achieved. Moreover, Moodle allows a summary comment for each criterion.
  • Since students doubt that assessors refer to the criteria during marking, it is important to make the educational case for criteria (i.e. beyond grading) as a way of reaching a shared understanding about standards, for giving and receiving feedback, and for self/peer assessment.

Feedback

  • The feedback comments most valued by students explain the issue, make links with the assessment criteria, and include advice about what students should do next.
  • Feedback given digitally is legible and easily accessible from any web-connected device.
  • Every mode of feedback should be conspicuously communicated to students, along with suggestions on how to cross-reference the different modes. Some thought should be given to ways of facilitating access to, and interpretation of, all the elements of feedback provided.
  • Students need to know that digital feedback exists and how to access it. A slideshow of screenshots would allow tutors to hide and unhide slides depending on which feedback aspects they are using.

Effort

  • The more feedback is dispersed between different modes, the more effortful it is for students to relate it to their own work and thinking. Where more than one mode is used, there is a need to distinguish between the purpose and content of each kind of feedback, signpost their relationships, and communicate this to students. Turnitin offers some support for cross-referencing between bubble comments and criteria.
  • It would be possible to ask students to indicate on their work which mode (out of a choice of possibilities) they would like assessors to use.
  • The submission of formative work produced with minimal effort may impose a disproportionate burden on markers, who are likely to be commenting on mistakes that students could easily have corrected themselves. Shorter formative assessments, group work, and clearer statements of the benefits of submitting formative work may all help to limit the incidence of low-effort submissions.
  • If individual summary comments have a lot in common, consider releasing them as general feedback for the cohort, spending the saved time on more student-specific comments instead. However, this needs to be signposted clearly to help students cross-reference with their individual feedback.
  • As a group, teaching teams can organise a hands-on session with Digital Education to explore Moodle Assignment and Turnitin from the perspectives of students, markers and administrators. This exposure will help immeasurably with designing efficient, considerate processes and workflows.
  • The kind of ‘community work’ referred to by Bloxham and colleagues (2015) would be an opportunity to reach shared understandings of the roles of students and markers with respect to criteria and feedback, which would in turn help to build confidence in the assessment process.

 

Bloxham, S., den-Outer, B., Hudson, J., Price, M., 2015. Let’s stop the pretence of consistent marking: exploring the multiple limitations of assessment criteria. Assessment & Evaluation in Higher Education 1–16. doi:10.1080/02602938.2015.1024607