
Digital Education team blog


Ideas and reflections from UCL's Digital Education team


E-assessment 2.0 – making assessment Crisper…

By Fiona Strawbridge, on 15 September 2010

CALT organised a stimulating presentation by Prof Geoffrey Crisp of the University of Adelaide about assessment in the Web 2.0 world. There is much information at http://www.transformingassessment.com, and a similar presentation is on SlideShare.

Crisp calls for much more ‘authentic’ learning and assessment – the need to set big questions; for instance, in aeronautical engineering we might set students the task of building a rocket over three years. This allows them to see the reasons for the smaller things. The tendency with conventional assessment is for everything to become very granular: little learning outcomes are assessed with discrete assessment tasks which don’t encourage students to make connections, and which encourage surface and strategic rather than deep approaches to learning.

Of course, moving away from more traditional forms of assessment entails proving that the alternative works – traditional approaches are very deeply ingrained in the culture of institutions and are not easily challenged. Crisp acknowledged that even in his own institution there is some way to go.

Three points to start with:

1. Assessment tasks should be worth doing – if students can get answers by copying from the web, asking Google, or guessing, then the task is not worth doing. We need to stop setting tasks which are merely about information, since information is everywhere.

2. We should separate diagnostic assessment from formative assessment. Diagnostic assessment is essential before teaching and can be an excellent way of starting a relationship with students at the outset. The teacher can then build their teaching on students’ current level of understanding.

3. Think about assessment tasks which result in divergent rather than convergent responses. In the traditional approach we tend to seek convergent responses, in which all students are expected to come up with the same answer, but divergent responses are more authentic. Peer- and self-review approaches can support this.

Bearing this in mind, and drawing on the work of Bobby Elliott (see http://www.scribd.com/doc/461041/Assessment-20), we heard that:

  • Assessment 1.0 is traditional assessment – paper-based, classroom-based, synchronous in time and space, formalised and controlled.
  • Assessment 1.5 is basic computer-assisted assessment – quizzes which tend to replicate the paper-based experience, and portfolios used mainly as storage for students’ work. Tasks tend to be done alone – competition is encouraged and collaboration is cheating. They tend to encourage a focus on passing the test rather than on gaining knowledge, skills and understanding, and don’t lead to deeper levels of learning (indeed, Elliott argues that factual knowledge is valueless in the era of Wikipedia and Google).
  • Assessment 2.0 is tool-assisted assessment in which students do things using a variety of tools and resources and then simply use the VLE (typically) to submit the results. This kind of assessment is typically authentic, personalised, negotiated, engaging, researched, problem-oriented, collaborative, done anywhere, peer- and self-assessed, and supported by IT tools, especially the open web; it recognises existing skills and assesses deeper levels of learning.

Some nice examples of interactive e-assessment 2.0 design included:

  • Examine a QuickTime VR image of a geological formation, then answer questions based on it – drawing on things that wouldn’t be visible in a static image.
  • Examine a panograph (a scrollable, zoomable image) of the Bayeux Tapestry and answer questions drawing together different parts – students select evidence from different segments of the tapestry.
  • Interactive spreadsheets – Excel with macros. Students can change certain inputs and answer questions on the resulting trends in graphs. These can include nested response questions, where the answer to the second depends on the first (though care is needed with dependencies, so that a wrong move early on doesn’t lead to total failure).
  • Chemical structures using the Molinspiration tool. Students draw molecular structures with the tool and copy and paste the resulting text string into an answer held in the VLE quiz tool.
  • Problem solving using a tool called IMMEX (‘It Makes You Think’), which tracks how students approach problems. The tutor adds real, redundant and false information that students can draw on to solve the problem. They can use it all, but the more failed attempts they make the fewer marks they get. We saw an archaeology example in which students had to date an artefact.
  • Role plays, which can be done using regular VLE features such as announcements, discussion forums and wikis. Students adopt different personas and enter into discussion and debate through those personas.
  • Scenario-based learning – more prescriptive than role play. The recommended tool is Pblinteractive.com.
  • Simulations – the Bized.co.uk site offers a virtual bank and factory. Students can work within Bized and then answer questions in the VLE.
  • Second Life (virtual world) assessment, in which the avatar answers questions which go back into Moodle.
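The caution above about nested response questions – a wrong early answer cascading into total failure – is often handled by ‘error carried forward’ marking: each later part is checked not only against the model answer but also against the answer that follows from the student’s own earlier response. A minimal sketch in Python (the function names, tolerance and two-part example are illustrative, not taken from any particular VLE):

```python
# Error-carried-forward marking for a two-part nested question.
# A wrong part 1 does not automatically fail part 2: part 2 also
# earns credit if it is consistent with the student's own part 1.

def within(value, target, tol=0.01):
    """True if value is within a relative tolerance of target."""
    return abs(value - target) <= tol * abs(target)

def mark_nested(part1_answer, part2_answer, part1_model, part2_from):
    """Mark two dependent parts; part2_from computes the expected
    part 2 answer from any given part 1 value."""
    marks = 0
    if within(part1_answer, part1_model):
        marks += 1
    # Credit part 2 if consistent with the model answer, OR with
    # the student's own (possibly wrong) part 1 answer.
    if (within(part2_answer, part2_from(part1_model))
            or within(part2_answer, part2_from(part1_answer))):
        marks += 1
    return marks

# Hypothetical example: part 1 asks for a rate; part 2 doubles it.
double = lambda r: 2 * r
print(mark_nested(5.0, 10.0, 5.0, double))  # both parts right -> 2
print(mark_nested(4.0, 8.0, 5.0, double))   # part 1 wrong, part 2 consistent -> 1
```

The design choice is simply that later parts assess the *method*, not a repeat of the earlier numerical answer, which is what the ‘care with dependencies’ point is asking for.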

Examples of these and more are available through the http://www.transformingassessment.com/ site – it’s Moodle-based, and anyone with a .ac.uk email address can self-register and try out the various tasks. (They also run a series of webinars.)

Crisp argues convincingly for much more authentic and immersive assessment, and for assessments in which process as well as outcome is evaluated – for example, approaches to problem solving, efficiency, ethical considerations, and the involvement of others.

A good closing question was whether teachers will be able to construct future assessments themselves, or whether this will become a specialist activity. Is it all going to get too hard for people? There may be a need for more team-based approaches in future.

Useful resources

Boud, D., 2009. Assessment 2020 – Seven propositions for assessment reform in higher education. Available at: http://www.iml.uts.edu.au/assessment-futures/Assessment-2020_propositions_final.pdf

Crisp, G., 2007. The e-Assessment Handbook. Continuum International Publishing Group Ltd.

Crisp, G., 2009. Designing and using e-Assessments. HERDSA Guide, Higher Education Research and Development Society of Australasia.

Elliott, B., 2008. Assessment 2.0 – Modernising assessment in the age of Web 2.0. Available at: http://www.scribd.com/doc/461041/Assessment-20

R&D by asking the world!

By Rod Digges, on 21 January 2010



If you’re very pressed for time and interested in how this posting might help you in your teaching, you can go straight to the end of this article; if you have a minute to spare, read on…

The internet continues to provide interesting new models of working, and although not new (eight years old, in fact), the Innocentive web presence has now established itself firmly in the arena of collaborative R&D.

Being relatively new to UCL but knowing the wealth of talent that exists here, I had assumed that many people would have heard of Innocentive, but conversations with colleagues and academics have so far proved this assumption wrong. So I thought it might be worth circulating some information about the site in the hope that others may find it interesting or useful.

I won’t go into too much detail here about Innocentive, as a visit to the site will allow anyone interested to explore its collaborative models in depth. In a nutshell, the Innocentive website provides a space where companies and institutions faced with particular R&D problems can pose ‘challenges’ to a community of solvers, who are invited to provide innovative solutions. I’m sure this has been done elsewhere, but Innocentive have done it particularly well – their recently announced partnership with the publishers of Nature in the US gives an indication of how well the site is regarded.

In addition to Innocentive’s commercial partnerships, an agreement with the Rockefeller Foundation in 2006 has led to Innocentive providing a space for non-profit ‘seekers’, particularly aimed at providing technology solutions to pressing problems in the developing world – so it’s an interesting mix of for-profit and altruistic challenges.

The Innocentive model gives much food for thought in areas such as (global) knowledge transfer, IP, research collaboration, and the effectiveness of ‘crowdsourcing’ (apparently 50% of solutions come from people whose domain of expertise lies outside that of the ‘perceived’ problem domain).

Whatever you may think of the Innocentive model – brilliant idea or R&D on the cheap – it does provide those who want to integrate problem-based learning into their course materials with a set of ready-made, real-world problems, and for this reason alone it’s worth taking a look.

With students finding it harder to prove themselves and find places after graduating, wouldn’t a UK or European Innocentive be one place where they might do so before graduating?

The question in my mind is: why haven’t we got something as open and innovative as this in Europe or the UK? Maybe we have and I just need a pointer… anybody?