HeLF – Electronic Management of Assessment (EMA) – 18th June 2013, Falmer
By Martin Burrow, on 21 June 2013
Some thoughts/notes on Tuesday's meeting.
The first presentation was an overview of the HeLF EMA survey, a poll of HeLF members about where they thought their institutions are now and where they expect them to be.
(Slides are available at http://www.slideshare.net/barbaranewland/ and the quantitative data at http://w01.helfcms.wf.ulcc.ac.uk/projects.html.)
It was noted that this type of survey only captures respondents' ‘best guesses’ about what is going on, so it is more a confirmation of expectations than hard data. The main point to note was that very few institutions had an institution-wide policy on e-assessment. The survey split e-assessment into its component parts (e-submission, e-marking, e-feedback, and e-return), and it was generally agreed that this was a good thing because each part has its own requirements and challenges.
There was not much discussion of the drivers for moving to EMA, but the predominant factor was student expectations (National Student Survey results were mentioned). There was no great clamour from the staff side, and I did get the feeling this is one of those things being pushed by the techies.
People working on implementing EMA were doing process mapping, both to benchmark what was going on and to inform any policies that were written. The four areas mentioned above were split into constituent steps, and these steps were mapped to the range of ways/technologies that could be used to complete them, done both for ‘as it stands now’ and for ‘where we would like to move to’. This process mapping was generally done on a school-by-school basis. The resulting data looked pretty useful and would definitely be a starting point for anyone wanting to pilot or encourage EMA; a rough sketch of what such a mapping might look like is below.
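To make the idea concrete, here is a minimal sketch in Python of how such a process-mapping exercise might be recorded. The four areas come from the survey, but every step and tool name is an invented placeholder, not data from any institution.

    # Each assessment area is broken into steps, and each step records the
    # current practice ("now") and the intended practice ("target").
    # All step and tool names below are illustrative placeholders.
    process_map = {
        "e-submission": {
            "receive work": {"now": "paper hand-in at office", "target": "VLE assignment tool"},
            "log receipt": {"now": "manual spreadsheet", "target": "automatic timestamp"},
        },
        "e-marking": {
            "annotate script": {"now": "pen on paper", "target": "on-screen annotation"},
        },
        "e-feedback": {
            "write comments": {"now": "handwritten cover sheet", "target": "typed rubric comments"},
        },
        "e-return": {
            "return to student": {"now": "collect from office", "target": "download from VLE"},
        },
    }

    # Benchmarking then amounts to listing the steps where practice
    # still has to move from "now" to "target".
    for area, steps in process_map.items():
        for step, tools in steps.items():
            if tools["now"] != tools["target"]:
                print(f"{area} / {step}: {tools['now']} -> {tools['target']}")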
Discussion about institutional policy revolved around the level at which it is appropriate to set policy (institution, department, school, etc.), where it should sit on the restrictive/encouraging balance, how IT systems integrate with manual/paper-based systems, and, probably easiest of all, how it should deal with IT system failures (fall-back processes, extensions, etc.).
There was lots of talk about the difficulties of encouraging e-marking, with plenty of evidence of markers preferring paper-based marking. My personal take is that if you enforce e-submission, e-feedback, and e-return, you can leave the marking (notice I didn't say e-marking) as a ‘black box’ component, up to the personal preference of individual markers, with the caveat that however they choose to mark, their output (grades, feedback, etc.) has to be entered back into the system in electronic format; a sketch of this idea follows below. Ways mentioned to encourage e-marking were the allocation of hardware (iPads, large or second PC monitors) and extended time periods for marking. There was no evidence that any of these had a large or widespread effect on the uptake of e-marking.
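A minimal sketch of the ‘black box’ idea, again in Python: the marking method is a free choice, and the system only checks that the output has come back electronically. The record fields and function name are my own invention for illustration.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class MarkingRecord:
        student_id: str
        method: str                  # free choice: "paper", "on-screen", "iPad", ...
        grade: Optional[int] = None  # must end up in the system electronically
        feedback: Optional[str] = None

    def ready_for_e_return(record: MarkingRecord) -> bool:
        # The system does not care how the marking was done, only that
        # the grade and feedback exist in electronic form for e-return.
        return record.grade is not None and bool(record.feedback)

    # Marked on paper, but the output was typed back in, so e-return can go ahead.
    record = MarkingRecord("s123", method="paper", grade=68, feedback="Good analysis.")
    print(ready_for_e_return(record))  # True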
Other points to note were that students were very keen on marking/feedback against a published rubric/schema, and that using such a system also eased the burden on the markers' side. Some institutions (University of the Arts) were introducing cross-department, generic marking criteria that could apply across different subjects.
Also, on the wish-list side, there was demand from both staff and students for a tool where you could see all of a student's feedback for their whole time at the institution, across all courses and submission points.
All in all, it was a nicely informative little session, well worth being present at.
Image from ralenhill on Flickr.