Digital Education team blog
  • We support staff and students using technology to enhance education at UCL.

    Here you'll find updates on institutional developments, projects we're involved in, educational technology news, events, case studies and personal experiences (or views!).



    New E-Book on Assessment, Feedback and Technology

    By Tim Neumann, on 1 November 2017

    UCL Digital Education Advisory members contributed to a new Open Access e-book that provides valuable insight into the way technology can enhance assessment and feedback. The book was launched formally on 26th October by Birkbeck College Secretary Keith Harrison, with talks from the editors Leo Havemann (Birkbeck, University of London) and Sarah Sherman (BLE Consortium), three case study authors, and event sponsor Panopto.

    Havemann, L. and Sherman, S. (eds.) (2017). Assessment, Feedback and Technology: Contexts and Case Studies in Bloomsbury. London: Bloomsbury Learning Environment.
    View and download from: https://doi.org/10.6084/m9.figshare.5315224.v1

     

    The Book

    [Image: e-book cover]

    The book is the result of a two-year project on e-assessment and feedback run by the Bloomsbury Learning Environment (BLE), a collaboration between five colleges, including the UCL Institute of Education, on issues around digital technology in Higher Education. It contains three research papers that capture snapshots of current practice, and 21 case studies from the BLE partner institutions and a little beyond, so it also includes practice from the wider UCL.

    The three papers focus on:

    • the use of technology across the assessment lifecycle,
    • the roles played by administrative staff in assessment processes, and
    • technology-supported assessment in distance learning.

    The case studies are categorised under the headings:

    • alternative [assessment] tasks and formats,
    • students feeding back,
    • assessing at scale,
    • multimedia approaches, and
    • technical developments.

    Seven of the 21 case studies were provided by UCL Digital Education colleagues Jess Gramp, Jo Stroud, Mira Vogel (2), and Tim Neumann (3), reporting on examples of blogging, group assessment, peer feedback, assessment in MOOCs, student presentations at a distance, and the UCL-developed My Feedback Report plugin for Moodle.

     

    Why you should read the e-book

    [Image: BLE e-book launch event]

    As one of the speakers at the entertaining launch event, I suggested three reasons why everybody involved in Higher Education should read this book, in particular the case studies:

    1. Processes in context:
      The case studies succinctly describe assessment and feedback processes in context, so you can quickly decide whether these processes are transferable to your own situation, and you will get a basic prompt on how to implement the assessment/feedback process.
    2. Problems are highlighted:
      Some case studies don’t shy away from raising issues and difficulties, so you can judge for yourself whether these difficulties represent risks in your context, and how these risks can be managed.
    3. Practical tips:
      All case studies follow the same structure. If you are in a hurry, make sure to read at least the Take Away sections of each case study, which are full of tips and tricks, many of which apply to situations beyond the case study.

    Overall, this collection of papers and case studies on assessment and feedback is easily digestible and contributes to an exchange of good practice.

     

    View and Download the Book

    The e-book is an Open Access publication, freely available via the DOI link above.

    For further information, see ble.ac.uk/ebook.html, and view author profiles at ble.ac.uk/ebook_contributors.html.

     

    About the BLE:
    The Bloomsbury Learning Environment is a collaboration between Birkbeck, London School of Hygiene and Tropical Medicine (LSHTM), Royal Veterinary College (RVC), School of Oriental and African Studies (SOAS), UCL Institute of Education (IOE), and the University of London, with a focus on technologies for teaching and learning, including libraries and administration.
    See www.ble.ac.uk for more information.

    Comparing Moodle Assignment and Turnitin for assessment criteria and feedback

    By Mira Vogel, on 8 November 2016

    Elodie Douarin (Lecturer in Economics, UCL School of Slavonic and East European Studies) and I have been comparing how assessment criteria can be presented to engage a large cohort of students with feedback in Moodle Assignment and Turnitin Assignment (report now available). We took a mixed-methods approach using a questionnaire, a focus group, and student screencasts recorded as they accessed their feedback and responded to our question prompts. Here are some of our key findings.

    Spoiler: we didn’t get a clear steer on which technology is (currently) better; they have different advantages. Students said Moodle seemed “better-made” (which I take to relate to theming issues rather than software architecture ones), while the tutor appreciated the expanded range of feedback available in Moodle 3.1.

    Assessment criteria

    • Students need an opportunity to discuss, and ideally practise with, the criteria in advance, so that they and the assessors can reach a shared view of the standards by which their work will be assessed.
    • Students need to know that criteria exist and be supported to use them. Moodle Assignment is good for making rubrics salient, whereas Turnitin requires students to know to click an icon.
    • Students need support to benchmark their own work to the criteria. Moodle or Turnitin rubrics allow assessors to indicate which levels students have achieved. Moreover, Moodle allows a summary comment for each criterion.
    • Since students doubt that assessors refer to the criteria during marking, it is important to make the educational case for criteria (i.e. beyond grading) as a way of reaching a shared understanding about standards, for giving and receiving feedback, and for self/peer assessment.

    Feedback

    • The feedback comments most valued by students explain the issue, make links with the assessment criteria, and include advice about what students should do next.
    • Feedback given digitally is legible and easily accessible from any web-connected device.
    • Every mode of feedback should be conspicuously communicated to students, along with suggestions on how to cross-reference the different modes. Some thought should be given to ways of facilitating access to, and interpretation of, all the elements of feedback provided.
    • Students need to know that digital feedback exists and how to access it. A slideshow of screenshots would allow tutors to hide and unhide slides depending on which feedback aspects they are using.

    Effort

    • The more feedback is dispersed between different modes, the more effortful it is for students to relate it to their own work and thinking. Where more than one mode is used, there is a need to distinguish between the purpose and content of each kind of feedback, signpost their relationships, and communicate this to students. Turnitin offers some support for cross-referencing between bubble comments and criteria.
    • It would be possible to ask students to indicate on their work which mode (out of a choice of possibilities) they would like assessors to use.
    • The submission of formative assessment produced with minimal effort may impose a disproportionate burden on markers, who are likely to be commenting on mistakes that students could have corrected easily by themselves. Shorter formative assessments, group work, and clearer statements of the benefits of submitting formative work may all help to limit the incidence of low-effort submissions.
    • If individual summary comments have a lot in common, consider releasing them as general feedback for the cohort, spending the saved time on more student-specific comments instead. However, this needs to be signposted clearly to help students cross-reference with their individual feedback.
    • As a group, teaching teams can organise a hands-on session with Digital Education to explore Moodle Assignment and Turnitin from the perspectives of students, markers and administrators. This exposure will help immeasurably with designing efficient, considerate processes and workflows.
    • The kind of ‘community work’ referred to by Bloxham and colleagues (2015) would be an opportunity to reach shared understandings of the roles of students and markers with respect to criteria and feedback, which would in turn help to build confidence in the assessment process.

     

    Bloxham, S., den-Outer, B., Hudson, J. and Price, M. (2015). Let’s stop the pretence of consistent marking: exploring the multiple limitations of assessment criteria. Assessment & Evaluation in Higher Education, 1–16. doi:10.1080/02602938.2015.1024607