Digital Education team blog

Ideas and reflections from UCL's Digital Education team

Assessment in Higher Education conference, an account

By Mira Vogel, on 25 July 2017

Assessment in Higher Education is a biennial conference which this year was held in Manchester on June 28th and 29th. It is attended by a mix of educators, researchers and educational developers, along with a small number of people with a specific digital education remit of one kind or another (hello Tim Hunt). Here is a summary – it is organised around the speakers, so there are some counter-currents. The abstracts are linked from each paragraph, and for more conversation see the Twitter hashtag.

Jill Barber presented on adaptive comparative judgement – assessment by comparing algorithmically-generated pairs of submissions until saturation is reached. This is found to be easier than judging on a scale, allows peer assessment, and its reliability compares favourably with expert judgement. I can throw in a link to a fairly recent presentation on ACJ by Richard Kimbell (Goldsmiths), including a useful Q&A part which considers matters of extrapolating grades, finding grade boundaries, and giving feedback. The question of whether it helps students understand the criteria is an interesting one. At UCL we could deploy this for formative, but not credit-bearing, assessment – here’s a platform which I think is still free. Jill helpfully made a demonstration of the platform she used available – username: PharmEd19, password: Pharmacy17.
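
For the technically minded, here is a rough sketch (mine, not Jill’s platform, with the adaptive pair-selection step left out and all names made up for illustration) of how paired judgements can be turned into a rank order with a simple Bradley-Terry style calculation:

```python
# Illustrative only: turn paired "winner beats loser" judgements into a rank
# order using a simple Bradley-Terry fixed-point update. This is not the
# platform demonstrated at the conference; adaptive pair selection (choosing
# which pair each judge sees next) is omitted.
from collections import defaultdict

def rank_by_comparative_judgement(judgements, iterations=200):
    """judgements: list of (winner_id, loser_id) pairs from judges.
    Returns submission ids ordered from strongest to weakest."""
    ids = {sub for pair in judgements for sub in pair}
    strength = {sub: 1.0 for sub in ids}          # initial ability estimates
    wins = defaultdict(int)
    for winner, _ in judgements:
        wins[winner] += 1
    for _ in range(iterations):
        new = {}
        for sub in ids:
            # Sum 1 / (own strength + opponent strength) over every judgement
            # this submission was involved in.
            denom = sum(
                1.0 / (strength[sub] + strength[winner if loser == sub else loser])
                for winner, loser in judgements
                if sub in (winner, loser)
            )
            # Small pseudo-win keeps never-winning submissions above zero.
            new[sub] = (wins[sub] + 0.5) / denom if denom else strength[sub]
        strength = new
    return sorted(ids, key=strength.get, reverse=True)

# Example: four judgements over three scripts.
pairs = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B")]
print(rank_by_comparative_judgement(pairs))       # -> ['A', 'B', 'C']
```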

Paul Collins presented on assessing a student-group-authored wiki textbook using Moodle Wiki. His assessment design anticipated many pitfalls of wiki work, such as a tendency to fall back on task specialisation, leading to cooperation rather than collaboration (where members influence each other – and he explained at length why collaboration was desirable in his context), and a reluctance to edit others’ work (which leads to additions which are not woven in). His evaluation asked many interesting questions, which you can read more about in this paper to last year’s International Conference on Engaging Pedagogy. He learned that delegating induction entirely to a learning technologist led students to approach her with queries – this meant that the responses took on a learning technology perspective rather than a subject-oriented one. She also encouraged students to keep a word-processed copy, which led them to draft in Word and paste into Moodle Wiki, losing a lot of the drafting process which the wiki history could have revealed. He recommends letting students know whether you are more interested in the product, the process, or both.

Jan McArthur began her keynote presentation (for slides see the AHE site) on assessment for social justice by arguing that SMART (specific, measurable, agreed-on, realistic, and time-bound) objectives in assessment overlook precisely the kinds of knowledge which are ‘higher’ – that is, reached through inquiry; dynamic, contested or not easily known. She cautioned about over-confidence in rubrics and other procedures. In particular she criticised Turnitin, calling it “instrumentalisation/industrialisation of a pedagogic relationship” which could lead students to change something they were happy with because “Turnitin wasn’t happy with it”, and calling its support for academic writing “a mirage”. I don’t like Turnitin, but felt it was mischaracterised here. I wanted to point out that Turnitin has pivoted away from ‘plagiarism detection’ in recent years, to the extent that it is barely mentioned in the promotional material. The problems arise where it is deployed for policing plagiarism – it doesn’t work well for that. Meanwhile its Feedback Studio is often appreciated by students, especially where assessors give feedback specific to their own work, and comments which link to the assessment criteria. In this respect it has developed in parallel with Moodle Assignment.

Paul Orsmond and Stephen Merry summarised the past 40 years of peer assessment research as an ’80s focus on reliability and validity, a ’90s focus on the nature of the learning, and a more recent focus on the inseparability of identity development and learning – a socio-cultural approach. Here they discussed their interview research, excerpting quotations and interpreting them with reference to peer assessment research. There were so many ideas in the presentation that I am currently awaiting their speaker notes.

David Boud presented his and Philip Dawson’s work on developing students’ evaluative judgement. Their premise is that the world is all about evaluative judgement and that understanding ‘good’ is a prerequisite for producing ‘good’, so it follows that assessment should be oriented to informing students’ judgements rather than “making unilateral decisions about students”. They perceived two aspects of this approach – calibrating quality through exemplars, and using criteria to give feedback – and urged more use of self-assessment, especially for high-stakes work. They also urged starting early, and cautioned against waiting until “students know more”.

Teresa McConlogue, Clare Goudy and Helen Matthews presented on UCL’s review of assessment in a research-intensive university. Large, collegiate, multidisciplinary institutions tend to have very diverse data corresponding to diverse practices, so reviewing is a dual challenge of finding out what is going on and designing interventions to bring about improvements. Over-assessment is widespread, and students often have to undertake the same form of assessment repeatedly. The principles of the review included focusing on structural factors and groups, rather than individuals, and aiming for flexible, workload-neutral interventions. The work will generate improved digital platforms, raised awareness of the pedagogy of assessment design and feedback, and more equitable management of workloads.

David Boud presented his and others’ interim findings from a survey to investigate effective feedback practices at Deakin and Monash. They discovered that by halfway through a semester nearly 90% of students had not had an assessment activity. 70% received no staff feedback on their work before submitting – more were getting it from friends or peers. They also discovered scepticism about feedback – 17% of staff responded that they could not judge whether feedback improved students’ performance, while students tended to be less positive about feedback the closer they were to completion – this has implications for how feedback is given to more advanced undergraduate students. 80% of students recognised that feedback was effective when it changed them. They perceived differences between individualised and personalised feedback. When this project makes its recommendations they will be found on its website.

Sally Jordan, Head of the School of Physical Science at the OU, explained that for many in the assessment community, learning analytics is a dirty word, because if you go in for analytics, why would you need separate assessment points? Yet analytics and assessment are likely to paint very different pictures – which is right? She suggested that, having taken a view of assessment as ‘of’, ‘for’ and ‘as’ learning, the assessment community might consider the imminent possibility of ‘learning as assessment’. This is already happening as ‘stealth assessment’ when students learn with adaptable games.

Denise Whitelock gave the final keynote (slides on the AHE site) asking whether assessment technology is a sheep in wolf’s clothing. She surveyed a career working at the Open University on meaningful automated feedback which contributes to a growth mindset in students (rather than consolidating a fixed mindset). The LISC project aimed to give language learners feedback on sentence translation – immediacy is particularly important in language learning to avoid fossilisation of errors. Another project, Open Mentor, aimed to imbue automated feedback with emotional support, using Bales’ interaction process categories to code feedback comments. The SAFeSEA project generated Open Essayist, which aims to interpret the structure and content of draft essays: it identifies key words, phrases and sentences, picks out the summary, conclusion and discussion, and presents these to the author. If Open Essayist has misinterpreted the ideas in the essay, the onus is on the author to make amendments. How it would handle some more avant-garde essay forms I am not sure – and this also recalls Sally Jordan’s question about how to resolve inevitable differences between machine and human judgement. The second part of the talk set out and gave examples of the qualities of feedback which contributes to a growth mindset.
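
Open Essayist’s methods are far more sophisticated than this, but for a flavour of the key-sentence idea, here is a toy, frequency-based sketch (entirely my own illustration, not the project’s code):

```python
# Toy illustration of key-sentence extraction: score each sentence by how
# frequent its (non-stopword) words are across the whole draft. Not the
# approach Open Essayist actually uses.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it",
             "that", "this", "for", "on", "as", "with", "are", "be"}

def key_sentences(text, top_n=3):
    """Return the top_n sentences whose words are most frequent in the draft."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence):
        tokens = [w for w in re.findall(r"[a-z']+", sentence.lower())
                  if w not in STOPWORDS]
        return sum(freq[w] for w in tokens) / (len(tokens) or 1)

    return sorted(sentences, key=score, reverse=True)[:top_n]

draft = ("Feedback supports learning. Good feedback is specific and timely. "
         "Students value feedback that links to the assessment criteria.")
print(key_sentences(draft, top_n=2))
```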

I presented Elodie Douarin’s and my work on enacting assessment principles with assessment technologies – a project to compare the feedback capabilities of Moodle Assignment and Turnitin Assignment for engaging students with assessment criteria.

More blogging on the conference from Liz Austen, Richard Nelson, and a related webinar on feedback.
