Digital Education team blog

Ideas and reflections from UCL's Digital Education team

HeLF – Electronic Management of Assessment (EMA) – 18th June 2013, Falmer

By Martin Burrow, on 21 June 2013

Some thoughts and notes on Tuesday's meeting.

 

The first presentation was an overview of the HeLF EMA survey. This was a poll of HeLF members about where they thought their institutions currently are and where they expect them to be.

(The slides are available at http://www.slideshare.net/barbaranewland/ and the quantitative data at http://w01.helfcms.wf.ulcc.ac.uk/projects.html.)

It was noted that this type of survey only captures respondents’ ‘best guesses’ about what is going on – more a confirmation of expectations than hard data. The main point to note was that very few institutions had an institution-wide policy on e-assessment. The survey split e-assessment into component parts (e-submission, e-marking, e-feedback and e-return), and it was generally agreed that this was a good thing, because each has its own requirements and challenges.

There was not much mention of the drivers for moving towards EMA, but the predominant factor was student expectations (National Student Survey results were mentioned). There was no great clamour from the staff side, and I did get the feeling this was one of those things being pushed by the techies.

People who were working on implementing EMA were doing some process mapping, both to benchmark what was going on and to inform any policies that were written. The four areas mentioned above were split into constituent steps, and these were mapped to the range of ways and technologies that could be used to complete them, both for ‘as it stands now’ and for ‘where we would like to move to’. This process mapping was generally done on a school-by-school basis. The resulting data looked pretty useful and would definitely be a starting point for anyone wanting to pilot or encourage EMA.

Discussion about institutional policy revolved around the level at which it was appropriate to set it (institution, department, school, etc.), how it should sit on the restrictive/encouraging balance, how IT systems integrate with manual, paper-based systems and, probably easiest of all, how it should deal with IT system failures: fall-back processes, extensions and so on.

There was lots of talk about the difficulties in encouraging e-marking, with plenty of evidence of markers preferring paper-based marking. My personal take is that if you enforce e-submission, e-feedback and e-return, you can leave the marking (notice I didn’t say e-marking) as a ‘black box’ component, left to the personal preference of individual markers – with the caveat that however they choose to mark, their output (grades, feedback, etc.) has to be entered back into the system in electronic format. Ways mentioned to encourage e-marking were the allocation of hardware (iPads, large or second PC monitors) and extended time periods for marking. There was no evidence that any of these had either a large or a widespread effect on the uptake of e-marking.

Other points to note were that students were very keen on marking and feedback within a published rubric/schema system, and that using such a system also eased the burden on the markers’ side. Some institutions (e.g. the University of the Arts) were introducing cross-department, generic marking criteria that could apply across different subjects.

Also, on the wish list side, there was demand from staff and students for a tool where you could see all a student’s feedback for their whole time at the institution, across all courses and submission points.

All in all, it was a nicely informative little session, and well worth attending.

Image from ralenhill on Flickr.

Santa uses Grademark.

By Domi C Sinclair, on 20 December 2012

Have you ever wondered how Santa manages to grade the naughty and nice list so fast? Well, the answer is technology! Just like many academic staff, he uses Grademark, and very efficiently at that.

The text accompanying the video, posted by Turnitin on the video sharing site Vimeo, reads:

‘Every December, millions of children around the world write letters to Santa, explaining how they’ve been good boys and girls and letting him know what they want to see under their trees come December 25th.

Over the years, the number of kids sending him letters skyrocket. His mailbox was flooded and he found himself buried in letters, unable to respond to all of them.

One day, a little elf told Santa about Turnitin—how he could use it to accept submissions from the children, check the letters for originality, give immediate feedback, and even use rubrics to help determine if they’ve been naughty or nice. So he gave it a shot.

Share this video with your colleagues, especially the ones that look like they’ve been in an avalanche of essays.’

Watch the video and see how Santa does it.

How Santa grades millions of Christmas letters

Certainty Based Marking Webinar

6 April 2011

Emeritus Professor Tony Gardner-Medwin gave a Webinar presentation on Wednesday 6th April about using Certainty Based Marking (CBM) for both formative self-tests and summative e-exams.

This type of assessment helps students to understand which areas of a topic they really do know and which areas they need to work on, by asking them to choose, on a three-point scale, how confident they are that their answer is correct.

Questions they answer correctly and with high certainty score the most points, while those they answer correctly with low certainty score fewer points. Questions they get wrong are negatively marked in a similar fashion.

The scoring method is best demonstrated in the following table:

Certainty level:                 No reply   C=1   C=2   C=3
Mark if correct:                     0       1     2     3
Penalty if incorrect (T/F Q):        0       0    -2    -6

When used formatively, students can review their marks and focus on reviewing the material where they were either unsure of an answer or confident of their answer, but incorrect.

Certainty Based Marking can also be used for exams. Evidence has shown that exam results evaluated using CBM closely match (tending to be slightly higher than) the scores the students would have received if traditional correct/incorrect marking had been used. This is easy to compare, because each CBM exam result can be marked in both ways.
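
As a rough illustration of the scheme in the table above, here is a minimal sketch of how a small set of answers could be totalled both under CBM and under traditional correct/incorrect marking. This is not Professor Gardner-Medwin's actual implementation; the function names and the example answers are invented for illustration.

    # Minimal sketch of the CBM scoring scheme from the table above.
    # A correct answer scores 1, 2 or 3 marks depending on the certainty level
    # chosen (C=1, C=2, C=3); an incorrect answer is penalised 0, -2 or -6.
    # No reply scores 0 either way. All names here are illustrative only.

    CBM_MARKS = {1: (1, 0), 2: (2, -2), 3: (3, -6)}  # certainty -> (mark if correct, penalty if wrong)

    def cbm_score(correct, certainty):
        """Score one true/false question under CBM (certainty is None, 1, 2 or 3)."""
        if certainty is None:
            return 0
        mark, penalty = CBM_MARKS[certainty]
        return mark if correct else penalty

    def conventional_score(correct, certainty):
        """The same answer marked simply as right (1) or wrong/no reply (0)."""
        return 1 if correct and certainty is not None else 0

    # A student answers four questions: (was it correct?, certainty chosen).
    answers = [(True, 3), (True, 1), (False, 2), (True, None)]
    print(sum(cbm_score(c, lvl) for c, lvl in answers))           # CBM total: 3 + 1 - 2 + 0 = 2
    print(sum(conventional_score(c, lvl) for c, lvl in answers))  # conventional total: 2

As the post notes, the same set of answers can be totalled both ways, which is what makes the comparison between CBM and traditional marking straightforward.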

Find out more about Certainty Based Marking here: http://www.ucl.ac.uk/lapt

The CBM Webinar presentation will shortly be available here: http://transformingassessment.com

UPDATE: The Webinar and related materials are now available from here: http://www.ucl.ac.uk/~ucgbarg/pubteach.htm

Effective Assessment in a Digital Age

By Jessica Gramp, on 9 February 2011

On February 3rd practitioners from universities in and around the region met in Birmingham to discuss how technology can be used to promote effective learning by looking at good practice in assessment and feedback.

The workshops were based around the principles from the publication Effective Assessment in a Digital Age: A guide to technology-enhanced assessment and feedback.

Some of the ideas that emerged from the workshop activities are summarised here:

  • Set an assessment where group members contribute to a forum as they collect research towards a final outcome.
  • Set an assessment where individuals produce a poster illustrating the information they have sourced in their research.
  • Set formative assessment for complex questions that the majority of students are likely to fail towards the beginning of a course, so they become familiar with learning from their mistakes in a safe and productive way.
  • Review students’ answers to assessments to see which questions many students got wrong and support them in understanding why and how to reach the correct answer.
  • Develop formative assessments that reveal hints to the correct answer and allow students to have another go if they get it wrong initially and when they do get it right (or wrong a number of times) explain the correct answer in detail.
  • Use text-matching technology to enable free-text, short-answer questions, rather than the commonly used multiple-choice question type (a very rough illustration follows this list). Note: to do this effectively can take time and requires large quantities of real student answers to mark accurately, so it may only be viable for large cohorts of students.
  • Use various assessment methods to cater for different learning styles, engage students and allow those who have strengths in some areas to take advantage of these.
  • Assess frequently throughout the term to allow tutors to evaluate students’ progress and steer them in the right direction if they begin to go off track before the final submission. This also allows tutors to distribute the time they spend providing feedback and marking across the term, rather than the marking and feedback process being concentrated at the end.
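
To give a flavour of the text-matching idea above, here is a very rough sketch under simplifying assumptions. Real text-matching tools are far more sophisticated and, as the note in the list says, need large quantities of real student answers to mark accurately; the question, keywords, threshold and function name below are invented for illustration.

    # Very rough sketch of keyword matching for a free-text, short-answer question.
    # The question, expected keywords and threshold are invented for illustration.
    import re

    def mark_short_answer(student_answer, expected_keywords, threshold=0.6):
        """Award the mark if enough of the expected keywords appear in the answer."""
        words = set(re.findall(r"[a-z]+", student_answer.lower()))
        matched = sum(1 for keyword in expected_keywords if keyword in words)
        return matched / len(expected_keywords) >= threshold

    # Example question: "What do plants need for photosynthesis?"
    keywords = ["light", "water", "carbon"]
    answer = "Plants use light energy and water, taking in carbon dioxide from the air."
    print(mark_short_answer(answer, keywords))  # True: all three keywords matched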

The output from the workshops and other useful materials are available here: http://bit.ly/jiscassess

E-assessment 2.0 – making assessment Crisper…

By Fiona Strawbridge, on 15 September 2010

CALT organised a stimulating presentation by Prof Geoffrey Crisp of the University of Adelaide about assessment in the Web 2.0 world. There is much information at http://www.transformingassessment.com, and a similar presentation is on Slideshare.

Crisp calls for much more ‘authentic’ learning and assessment – the need to set big questions; for instance in aeronautical engineering we should set students a task to build a rocket in 3 years. This allows them to see reasons for the smaller things. The tendency with conventional assessment is for everything to become very granular – little learning outcomes are assessed with discrete assessment tasks which don’t encourage students to make connections, and which encourage surface and strategic rather than deep approaches to learning.

Of course, moving away from more traditional forms of assessment entails proving that the alternative works – traditional approaches are very deeply ingrained in the culture of institutions and are not easily challenged. Crisp acknowledged that even in his own institution there is some way to go.

Three points to start with:

1.    Assessment tasks should be worth doing – if students can get answers by copying from the web, asking Google, or guessing, then the task is not worth doing. We need to stop setting tasks which are about information, since information is everywhere.

2.    We should separate out diagnostic assessment from formative assessment. Diagnostic assessment is essential before teaching and can be an excellent way of starting a relationship with students at the outset. The teacher can then build their teaching on students’ current level of understanding.

3.    Think about assessment tasks which result in divergent rather than convergent responses. In the traditional approach we tend to seek convergent responses, in which all students are expected to come up with the same answer, but divergent responses are more authentic. Peer- and self-review approaches can support this.

Bearing this in mind, and drawing on the work of Bobby Elliott (see http://www.scribd.com/doc/461041/Assessment-20), we heard that:

  • Assessment 1.0 is traditional assessment – paper-based, classroom-based, synchronous in time and space, formalised and controlled.
  • Assessment 1.5 is basic computer-assisted assessment – using quizzes which tend to replicate the paper-based experience, and portfolios used mainly as storage for students’ work. Tasks tend to be done alone: competition is encouraged and collaboration is cheating. They tend to encourage a focus on passing the test rather than on gaining knowledge, skills and understanding, and don’t lead to deeper levels of learning (indeed, Elliott argues that factual knowledge is valueless in the era of Wikipedia and Google).
  • Assessment 2.0 is tool-assisted assessment in which students do things using a variety of tools and resources and then simply use the VLE (typically) to submit the results. This kind of assessment is typically authentic, personalised, negotiated, engaging, researched, problem-oriented, collaborative, done anywhere, peer- and self-assessed, and supported by IT tools, especially the open web; it recognises existing skills and assesses deeper levels of learning.

Some nice examples of interactive e-assessment 2.0 design included:

  • Examine a QuickTime VR image of a geological formation, then answer questions based on it – drawing on things students wouldn’t be able to see from a static image.
  • Examine a panograph (a scrolling, zoomable image) of the Bayeux Tapestry and answer questions drawing together different parts, with students selecting evidence from different segments of the tapestry.
  • Interactive spreadsheets – Excel with macros. Students can change certain values and answer questions on the resulting trends in graphs. These can have nested response questions, so that the answer to the second is based on the first (but dependencies need care, so that a wrong move early on doesn’t lead to total failure; see the sketch after this list).
  • Chemical structures using the Molinspiration tool. Students can draw molecular structures using the tool and copy and paste the resulting text string into an answer held in the VLE quiz tool.
  • Problem solving using a tool called IMMEX (‘It Makes You Think’), which tracks how students approach problems. The tutor adds in real, redundant and false information that the students can draw on to solve the problem. They can use it all, but the more failed attempts they make, the fewer marks they get. We saw an archaeology example in which students had to date an artefact.
  • Role plays, which can be done using regular VLE features such as announcements, discussion forums and wikis. Students adopt different personas and enter into discussion and debate through those personas.
  • Scenario-based learning – this is more prescriptive than role play. The recommended tool is Pblinteractive.com.
  • Simulations – the Bized.co.uk site offers a virtual bank and factory. Students can work within Bized, then answer questions in the VLE.
  • Second Life (virtual world) assessment in which the avatar answers questions which go back into Moodle.
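
One common way of handling the dependency issue raised in the interactive-spreadsheets item above is ‘error carried forward’ marking, where a later part is marked against the student’s own earlier answer rather than the model answer. The sketch below only illustrates that general idea, not how the tools demonstrated actually work; the two-part question, tolerance and function name are invented.

    # Sketch of "error carried forward" marking for a nested response question:
    # part (b) depends on the answer to part (a), so part (b) is checked against
    # the student's own part (a) value, and an early slip doesn't lose every mark.
    # The question (simple interest on a deposit) and the tolerance are invented.

    def mark_nested(principal, rate, answer_a, answer_b, tol=0.01):
        """Mark a two-part question: (a) interest after one year, (b) new balance."""
        correct_a = principal * rate                      # model answer to part (a)
        mark_a = 1 if abs(answer_a - correct_a) <= tol else 0

        expected_b = principal + answer_a                 # follow through the student's own (a)
        mark_b = 1 if abs(answer_b - expected_b) <= tol else 0
        return mark_a + mark_b

    # Student gets (a) wrong (80 instead of 100) but carries it forward correctly.
    print(mark_nested(1000, 0.10, answer_a=80, answer_b=1080))  # 1 mark out of 2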

Examples of these and more are available through the http://www.transformingassessment.com/ site – it’s Moodle-based and anyone with a .ac.uk email address can self-register and try out the various tasks. (They also run a series of webinars.)

Crisp argues convincingly for much more authentic and immersive assessment, and for assessments in which process as well as outcome is evaluated – for example, approaches to problem solving, efficiency, ethical considerations and the involvement of others.

A good closing question was whether teachers will be able to construct future assessments themselves, or whether this will become a specialist activity. Is it all going to get too hard for people? There may be a need for more team-based approaches in future.

Useful resources

Boud, D., 2009, Assessment 2020 – Seven propositions for assessment reform in higher education, Available at: http://www.iml.uts.edu.au/assessment-futures/Assessment-2020_propositions_final.pdf

Crisp, G., 2007, The e-Assessment Handbook. Continuum International Publishing Group Ltd

Crisp, G., 2009, Designing and using e-Assessments. HERDSA Guide, Higher Education Research Society of Australasia

Elliott, B., 2008, Assessment 2.0 – Modernising assessment in the age of Web 2.0. Available at: http://www.scribd.com/doc/461041/Assessment-20

Effective Assessment in a Digital Age

By Clive Young, on 13 September 2010

The new JISC guide Effective Assessment in a Digital Age has just been published. Assessment lies at the heart of the learning experience, and this guide draws together recent JISC reports and case studies to explore the relationship between technology-enhanced assessment and feedback practices and meaningful, well-supported learning experiences. Effective Assessment in a Digital Age complements the excellent Effective Practice in a Digital Age, the 2009 JISC guide to learning and teaching with technology, and Effective Practice with e-Assessment (JISC, 2007) by focusing on the potential enhancements to assessment and feedback practices offered by both purpose-designed and more familiar technologies.