Digital Education team blog
    We support Staff and Students using technology to enhance education at UCL.

    Here you'll find updates on institutional developments, projects we're involved in, updates on educational technology, events, case studies and personal experiences (or views!).


    Archive for the 'Mira’s Mire' Category

    Assessment in Higher Education conference, an account

    By Mira Vogel, on 25 July 2017

    Assessment in Higher Education is a biennial conference which this year was held in Manchester on June 28th and 29th. It is attended by a mix of educators, researchers and educational developers, along with a small number of people with a specific digital education remit of one kind or another (hello Tim Hunt). Here is a summary, organised around the speakers, so there are some counter-currents. The abstracts are linked from each paragraph, and for more conversation see the Twitter hashtag.

    Jill Barber presented on adaptive comparative judgement (ACJ) – assessment by comparing algorithmically-generated pairs of submissions until saturation is reached. Judges find this easier than marking against a scale, it lends itself to peer assessment, and its reliability compares favourably with expert judgement. I can throw in a link to a fairly recent presentation on ACJ by Richard Kimbell (Goldsmiths), including a useful Q&A section which considers extrapolating grades, finding grade boundaries, and giving feedback. The question of whether it helps students understand the criteria is an interesting one. At UCL we could deploy this for formative, but not credit-bearing, assessment – here’s a platform which I think is still free. Jill helpfully made a demonstration of the platform she used available (username: PharmEd19, password: Pharmacy17).
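    For the curious, here is a minimal sketch of the idea – not the pairing or scaling algorithm of the platform Jill demonstrated, and with invented submissions and judgements – showing how a pile of pairwise judgements can be turned into a rank order using a simple Bradley–Terry model.

```python
# Sketch: turning pairwise 'which is better?' judgements into a rank order.
# All submissions and judgements below are invented for illustration.
from collections import defaultdict
from itertools import chain

# Each tuple is one judgement: (winner, loser).
judgements = [
    ("essay_A", "essay_B"), ("essay_A", "essay_C"),
    ("essay_B", "essay_C"), ("essay_B", "essay_D"),
    ("essay_C", "essay_D"), ("essay_D", "essay_A"),
    ("essay_A", "essay_D"), ("essay_C", "essay_B"),
]

items = sorted(set(chain.from_iterable(judgements)))
wins = defaultdict(int)        # total wins per submission
pairs = defaultdict(int)       # number of comparisons per unordered pair
for winner, loser in judgements:
    wins[winner] += 1
    pairs[frozenset((winner, loser))] += 1

# Estimate a 'quality' parameter per submission with the Bradley-Terry
# minorise-maximise update, then read the rank order off the estimates.
strength = {i: 1.0 for i in items}
for _ in range(200):
    updated = {}
    for i in items:
        denom = sum(
            pairs[frozenset((i, j))] / (strength[i] + strength[j])
            for j in items
            if j != i and pairs[frozenset((i, j))] > 0
        )
        updated[i] = wins[i] / denom if denom else strength[i]
    total = sum(updated.values())
    strength = {i: v / total for i, v in updated.items()}

for rank, item in enumerate(sorted(items, key=strength.get, reverse=True), 1):
    print(f"{rank}. {item}  (estimated quality {strength[item]:.3f})")
```

    Real ACJ platforms also choose which pairs to show each judge adaptively and report a reliability statistic, but the ranking step works along these lines.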

    Paul Collins presented on assessing a student-group-authored wiki textbook using Moodle wiki. His assessment design anticipated many pitfalls of wiki work, such as the tendency to fall back on task specialisation, leading to cooperation rather than collaboration (where members influence each other – and he explained at length why collaboration was desirable in his context), and reluctance to edit others’ work (which leads to additions which are not woven in). His evaluation asked many interesting questions which you can read more about in this paper to last year’s International Conference on Engaging Pedagogy. He learned that delegating induction entirely to a learning technologist led students to approach her with queries – this meant that the responses took on a learning technology perspective rather than a subject-oriented one. She also encouraged students to keep a word-processed copy, which led them to draft in Word and paste into Moodle Wiki, losing a lot of the drafting process which the wiki history could have revealed. He recommends letting students know whether you are more interested in the product, or the process, or both.

    Jan McArthur began her keynote presentation (for slides see the AHE site) on assessment for social justice by arguing that SMART (specific, measurable, agreed-on, realistic, and time-bound) objectives in assessment overlook precisely the kinds of knowledge which are ‘higher’ – that is, reached through inquiry; dynamic, contested or not easily known. She cautioned about over-confidence in rubrics and other procedures. In particular she criticised Turnitin, calling it “instrumentalisation/industrialisation of a pedagogic relationship” which could lead students to change something they were happy with because “Turnitin wasn’t happy with it”, and calling its support for academic writing “a mirage”. I don’t like Turnitin, but felt it was mischaracterised here. I wanted to point out that Turnitin has pivoted away from ‘plagiarism detection’ in recent years, to the extent that it is barely mentioned in the promotional material. The problems are where it is deployed for policing plagiarism – it doesn’t work well for that. Meanwhile its Feedback Studio is often appreciated by students, especially where assessors give feedback specific to their own work, and comments which link to the assessment criteria. In this respect it has developed in parallel with Moodle Assignment.

    Paul Orsmond and Stephen Merry summarised the past 40 years of peer assessment research as an ’80s focus on reliability and validity, a ’90s focus on the nature of the learning, and a more recent focus on the inseparability of identity development and learning – a socio-cultural approach. They went on to discuss their interview research, excerpting quotations and interpreting them with reference to the peer assessment literature. There were so many ideas in the presentation that I am currently awaiting their speaker notes.

    David Boud presented his and Philip Dawson’s work on developing students’ evaluative judgement. Their premise is that the world is all about evaluative judgement and that understanding ‘good’ is a prerequisite for producing ‘good’, so it follows that assessment should be oriented to informing students’ judgements rather than “making unilateral decisions about students”. They perceived two aspects of this approach – calibrating quality through exemplars, and using criteria to give feedback – and urged more use of self-assessment, especially for high-stakes work. They also urged starting early, and cautioned against waiting until “students know more”.

    Teresa McConlogue, Clare Goudy and Helen Matthews presented on UCL’s review of assessment in a research-intensive university. Large, collegiate, multidisciplinary institutions tend to have very diverse data corresponding to diverse practices, so reviewing is a dual challenge of finding out what is going on and designing interventions to bring about improvements. Over-assessment is widespread, and students often have to undertake the same form of assessment repeatedly. The principles of the review included focusing on structural factors and groups, rather than individuals, and aiming for flexible, workload-neutral interventions. The work will generate improved digital platforms, raised awareness of the pedagogy of assessment design and feedback, and equitable management of workloads.

    David Boud presented his and others’ interim findings from a survey to investigate effective feedback practices at Deakin and Monash. They discovered that by halfway through a semester nearly 90% of students had not had an assessment activity. 70% received no staff feedback on their work before submitting – more were getting it from friends or peers. They also discovered scepticism about feedback: 17% of staff responded that they could not judge whether feedback improved students’ performance, while students tended to be less positive about feedback the closer they were to completion – this has implications for how feedback is given to more advanced undergraduate students. 80% of students recognised that feedback was effective when it changed them. They perceived differences between individualised and personalised feedback. When this project makes its recommendations they will be found on its website.

    Sally Jordan, Head of the School of Physical Science at the OU, explained that for many in the assessment community ‘learning analytics’ is a dirty word: if you go in for analytics, why would you need separate assessment points? Yet analytics and assessment are likely to paint very different pictures – which is right? She suggested that, having taken a view of assessment as ‘of’, ‘for’ and ‘as’ learning, the assessment community might consider the imminent possibility of ‘learning as assessment’. This is already happening as ‘stealth assessment’ when students learn with adaptable games.

    Denise Whitelock gave the final keynote (slides on the AHE site) asking whether assessment technology is a sheep in wolf’s clothing. She surveyed a career working at the Open University on meaningful automated feedback which contributes to a growth mindset in students (rather than consolidating a fixed mindset). The LISC project aimed to give language learners feedback on sentence translation – immediacy is particularly important in language learning to avoid fossilisation of errors. Another project, Open Mentor, aimed to imbue automated feedback with emotional support, using Bales’ interaction process categories to code feedback comments. The SAFeSEA project generated Open Essayist, which aims to interpret the structure and content of draft essays: it identifies key words, phrases and sentences, identifies the summary, conclusion and discussion, and presents these to the author. If Open Essayist has misinterpreted the ideas in the essay, the onus is on the author to make amendments. How it would handle some more avant-garde essay forms I am not sure – and this also recalls Sally Jordan’s question about how to resolve inevitable differences between machine and human judgement. The second part of the talk set out, and gave examples of, the qualities of feedback which contribute to a growth mindset.
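    To make the general idea of surfacing ‘key sentences’ concrete, here is a toy sketch – emphatically not Open Essayist’s method, which is far more sophisticated – that scores each sentence of an invented three-sentence ‘essay’ by the frequency of its content words across the whole text.

```python
# Toy illustration of automatic key-sentence identification: sentences whose
# content words recur across the essay score higher. Not a real essay tool.
import re
from collections import Counter

# Invented three-sentence 'essay' for illustration.
essay = (
    "Assessment shapes how students study. "
    "Feedback on assessment can change how students think about their own work. "
    "The weather in Manchester was pleasant on the day of the conference."
)

STOPWORDS = {"the", "of", "on", "in", "was", "how", "can", "their", "own", "about", "a", "to"}

# Split into sentences, count content words across the whole text,
# then score each sentence by the average frequency of its content words.
sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", essay) if s.strip()]
words = [w for w in re.findall(r"[a-z]+", essay.lower()) if w not in STOPWORDS]
freq = Counter(words)

def key_score(sentence: str) -> float:
    tokens = [w for w in re.findall(r"[a-z]+", sentence.lower()) if w not in STOPWORDS]
    return sum(freq[w] for w in tokens) / max(len(tokens), 1)

for s in sorted(sentences, key=key_score, reverse=True):
    print(f"{key_score(s):.2f}  {s}")
```

    On this invented text the two on-topic sentences outrank the off-topic one – which also hints at why an avant-garde essay form could confuse this kind of analysis.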

    I presented Elodie Douarin’s and my work on enacting assessment principles with assessment technologies – a project to compare the feedback capabilities of Moodle Assignment and Turnitin Assignment for engaging students with assessment criteria.

    More blogging on the conference from Liz Austen and Richard Nelson, and a related webinar on feedback.

    Fake news and Wikidata

    By Mira Vogel, on 20 February 2017

    James Martin Charlton, Head of the Media Department at Middlesex University and co-host of today’s Wikimedia Education Summit, framed Wikimedia as a defence against the fake news currently spread and popularised by dominant search engine algorithms. Fake news undermines knowledge as power and renders societies easily manipulable. This is one reason several programme leaders I work with – one of whom was at the event – have expressed interest in incorporating Wikimedia into their curricula. (Wikimedia is the collection of projects of which Wikipedia is the best known, but which also includes Wikivoyage, Wikisource and Wikimedia Commons).

    Broadly there are two aspects to Wikimedia in education. One is the content – for example, the articles in Wikipedia, the media in Wikimedia Commons, the texts in Wikisource. All of this content is in the public domain, available to use freely in our projects and subject to correction and improvement by that public. The other aspect is process. Contributing to Wikimedia can qualify as higher education when students are tasked with, say, digesting complex or technical information for a non-expert Wikipedia readership, or negotiating changes to an article which has an existing community of editors, or contributing an audio recording which they later use in a project they publish under an open licence. More recently, Wikidata has emerged as a major presence on the linked and open data scene. I want to focus on Wikidata because it seems very promising as an approach to engaging students in the structured data which is increasingly shaping our world.

    Wikidata is conceived as the central data storage for the aforementioned Wikimedia projects. Unlike Wikipedia, Wikidata can be read by machines as well as humans, which means it can be queried. So if you wish – as we did today – to see at a glance the notable alumni of a given university, you can. Today we gave a little back to our hosts by contributing an ‘Educated at’ value to a number of alumni whose Wikidata entries lacked it. This enabled those people to be picked up by a Wikidata query and visualised. But institutions tend to merge or change their names, so I added a ‘Followed by’ attribute to the Wikidata entry for Hornsey College of Art (which merged into Middlesex Polytechnic), allowing the query to be refined to include Hornsey alumni too. I also visualised UCL’s notable alumni as a timeline (crowded – zoom out!) and a map. The timeline platform is called Histropedia and is the work of Navino Evans. It is available to all and – thinking public engagement – is reputedly a very good way to visualise research data without needing to hire somebody in.

    So far so good. But is it correct? I dare say it’s at least slightly incorrect, and more than slightly incomplete. Yes, I’d have to mend it, or get it mended, at source. But that state of affairs is pretty normal, as anyone involved in learning analytics understands. And can’t Wikidata be sabotaged? Yes – and because the data is linked, any sabotage would have potentially far reaching effects – so there will need to be defences such as limiting the ability to make mass edits, or edit entries which are both disputed and ‘hot’. But the point is, if I can grasp the SPARQL query language (which is said to be pretty straightforward and, being related to SQL, a transferable skill) then – without an intermediary – I can generate information which I can check, and triangulate against other information to reach a judgement. How does this play out in practice? Here’s Oxford University Wikimedian in Residence Martin Poulter with an account of how he queried Wikidata’s biographical data about UK MPs and US Senators to find out – and, importantly, visualise – where they were educated, and what occupation they’ve had (153 cricketers!?).
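    If you want to try this sort of query yourself, here is a minimal sketch that sends a SPARQL query to the public Wikidata Query Service from Python. The property P69 is ‘educated at’; the item ID Q193196 is assumed to be UCL’s, so check it on wikidata.org before trusting the results.

```python
# Sketch: querying Wikidata for people 'educated at' an institution via the
# public SPARQL endpoint. P69 is the 'educated at' property; Q193196 is
# assumed to be UCL's item ID -- verify it on wikidata.org before relying on it.
import requests

ENDPOINT = "https://query.wikidata.org/sparql"

query = """
SELECT ?person ?personLabel WHERE {
  ?person wdt:P69 wd:Q193196 .                     # educated at <institution>
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
LIMIT 20
"""

response = requests.get(
    ENDPOINT,
    params={"query": query, "format": "json"},
    headers={"User-Agent": "wikidata-demo/0.1 (educational example)"},
)
response.raise_for_status()

# Print the English labels of the first few matching people.
for row in response.json()["results"]["bindings"]:
    print(row["personLabel"]["value"])
```

    The SPARQL itself is only three lines; swapping in a different item ID, or adding a second property such as occupation, is how queries like Martin Poulter’s are built up.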

    So, say I want to master the SPARQL query language? Thanks to Ewan McAndrew, Wikimedian in Residence at the University of Edinburgh, there’s a SPARQL query video featuring Navino Evans on Edinburgh’s Wikimedia in Residence media channel.

    Which brings me to the beginning, when Melissa Highton set out the benefits Wikimedians have brought to Edinburgh University, where she is Assistant Principal. These benefits include building digital capabilities, public engagement for researchers, and addressing the gender gap in Wikimedia representation, demonstrating to Athena SWAN assessors that the institution is addressing structural barriers to women contributing in science and technology. Here’s Melissa’s talk in full. Bodleian Library Web and Digital Media Manager Liz McCarthy made a similarly strong case – they have had to stop advertising their Wikimedian in Residence’s services since so many Oxford University researchers have woken up to Wikimedia’s public engagement potential.

    We also heard from Wikimedians with educational ideas: tutor Stefan Lutschinger on designing Wikimedia assignments, and the students who presented on their work in his Publishing Cultures module. There were also parallel sessions. You can follow the Wikimedia Education Summit tweets at .

    Comparing Moodle Assignment and Turnitin for assessment criteria and feedback

    By Mira Vogel, on 8 November 2016

    Elodie Douarin (Lecturer in Economics, UCL School of Slavonic and East European Studies) and I have been comparing how assessment criteria can be presented to engage a large cohort of students with feedback in Moodle Assignment and Turnitin Assignment (report now available). We took a mixed-methods approach using a questionnaire, a focus group, and screencasts recorded as students accessed their feedback and responded to our question prompts. Here are some of our key findings.

    Spoiler – we didn’t get a clear steer over which technology is (currently) better – they have different advantages. Students said Moodle seemed “better-made” (which I take to relate to theming issues rather than software architecture ones) while the tutor appreciated the expanded range of feedback available in Moodle 3.1.

    Assessment criteria

    • Students need an opportunity to discuss, and ideally practice with, the criteria in advance, so that they and the assessors can reach a shared view of the standards by which their work will be assessed.
    • Students need to know that criteria exist and be supported to use them. Moodle Assignment is good for making rubrics salient, whereas Turnitin requires students to know to click an icon.
    • Students need support to benchmark their own work to the criteria. Moodle or Turnitin rubrics allow assessors to indicate which levels students have achieved. Moreover, Moodle allows a summary comment for each criterion.
    • Since students doubt that assessors refer to the criteria during marking, it is important to make the educational case for criteria (i.e. beyond grading) as a way of reaching a shared understanding about standards, for giving and receiving feedback, and for self/peer assessment.

    Feedback

    • The feedback comments most valued by students explain the issue, make links with the assessment criteria, and include advice about what students should do next.
    • Feedback given digitally is legible and easily accessible from any web-connected device.
    • Every mode of feedback should be conspicuously communicated to students, along with suggestions on how to cross-reference these different modes. Some thought should be given to ways to facilitate access to, and interpretation of, all the elements of feedback provided.
    • Students need to know that digital feedback exists and how to access it. A slideshow of screenshots would allow tutors to hide and unhide slides depending on which feedback aspects they are using.

    Effort

    • The more feedback is dispersed between different modes, the more effortful it is for students to relate it to their own work and thinking. Where more than one mode is used, there is a need to distinguish between the purpose and content of each kind of feedback, signpost their relationships, and communicate this to students. Turnitin offers some support for cross referencing between bubble comments and criteria.
    • It would be possible to ask students to indicate on their work which mode (out of a choice of possibilities) they would like assessors to use.
    • The submission of formative assessment produced with minimal effort may impose a disproportionate burden on markers, who are likely to be commenting on mistakes that students could easily have corrected themselves. Shorter formative assessments, group work, and clearer statements of the benefits of submitting formative work may all help to limit the incidence of low-effort submissions.
    • If individual summary comments have a lot in common, consider releasing them as general feedback for the cohort, spending the saved time on more student-specific comments instead. However, this needs to be signposted clearly to help students cross-reference with their individual feedback.
    • As a group, teaching teams can organise a hands-on session with Digital Education to explore Moodle Assignment and Turnitin from the perspectives of students, markers and administrators. This exposure will help immeasurably with designing efficient, considerate processes and workflows.
    • The kind of ‘community work’ referred to by Bloxham and colleagues (2015) would be an opportunity to reach shared understandings of the roles of students and markers with respect to criteria and feedback, which would in turn help to build confidence in the assessment process.

     

    Bloxham, S., den-Outer, B., Hudson, J., Price, M., 2015. Let’s stop the pretence of consistent marking: exploring the multiple limitations of assessment criteria. Assessment & Evaluation in Higher Education 1–16. doi:10.1080/02602938.2015.1024607

     

    Authentic multimodal assessments

    By Mira Vogel, on 7 October 2016

    Cross-posted to the Connected Curriculum Fellows blog.

    My Connected Curriculum Fellowship project explores current practice with Connected Curriculum dimension 5 – ‘Students learn to produce outputs – assessments directed at an audience’. My emphasis is on assessing students’ digital (including digitised) multimodal outputs for an audience. What does ‘multimodal’ mean? Modes can be thought of as styles of communication – register and voice, for example – while media can be thought of as its fabric. In practice, though, the line between the two is quite blurry (Kress, 2012). This work will look at multimodal assessment from the following angles.

    What kinds of digital multimodal outputs are students producing at UCL, and using which media? The theoretical specificity of verbal media, such as the essay or the talk, explains their dominance in academia. Some multimodal forms, such as documentaries, are recognised as (potentially) academic, while others are straightforwardly authentic, such as curation students producing online exhibitions. At the margins are works which raise dilemmas about academic validity, such as fan fiction submitted for the From Codex To Kindle module, or the Internet Cultures student who blogged as a dog.

    How are students supported to conceptualise their audiences? DePalma and Alexander (2015) observe that students who are used to writing for one or two academic markers may struggle with the complex notions of audience called for by an expanded range of rhetorical resources. The 2016 Making History convenor has pointed out that students admitted to UCL on the strength of their essays may find the transition to multimodal assessment unsettling and question its validity. I hope to explore tutor and student perspectives here, with a focus on how the tasks are introduced to students. I will maintain awareness of the Liberating the Curriculum emphasis on diverse audiences. I will also explore matters of consent and intellectual property, and ask what happens to the outputs once the assessment is complete.

    What approaches are taken to assessing multimodal work? A 2006 survey (Anderson et al.) reported several assessment challenges for markers, including separating rhetorical from aesthetic effects, the diversity of skills, technologies and interpretations involved, and balancing credit between effort and quality where the output may be unpolished. Adsanatham (2012) describes how his students generated more complex criteria than he could have alone, helping “enrich our ever-evolving understanding and learning of technology and literacies”. DePalma and Alexander (2015) discuss written commentaries or reflective pieces as companions to students’ multimodal submissions. Finding out about the practices of staff and students across UCL promises to illuminate possibilities, questions, contrasts and dilemmas.

    I plan to identify participants by drawing on my and colleagues’ networks, the Teaching and Learning Portal, and calls via appropriate channels. Building on previous work, I hope to collect screen-capture recordings, based on question prompts, in which students explain their work and tutors explain how they marked it. These kinds of recordings provide very rich data but, anticipating difficulties obtaining consent to publish these, I also plan to transcribe and analyse them using NVivo to produce a written report. I aim to produce a collection of examples of multimodal work, practical suggestions for managing the trickier areas of assessment, and ideas for supporting students in their activities. I will ask participants to validate these outputs.

    Would you like to get involved? Contact Mira Vogel.

    References

    Adsanatham, C. 2012. Integrating Assessment and Instruction: Using Student-Generated Grading Criteria to Evaluate Multimodal Digital Projects. Computers and Composition 29(2): 152–174.

    Anderson, D., Atkins, A., Ball, C., et al. 2006. Integrating Multimodality into Composition Curricula: Survey Methodology and Results from a CCCC Research Grant. Composition Studies 34(2). http://www.uc.edu/journals/composition-studies/issues/archives/fall2006-34-2.html.

    DePalma, M.J., and Alexander, K.P. 2015. A Bag Full of Snakes: Negotiating the Challenges of Multimodal Composition. Computers and Composition 37: 182–200.

    Kress, G., and Selander, S. 2012. Multimodal Design, Learning and Cultures of Recognition. The Internet and Higher Education 15(4): 265–268.

    Vogel, M., Kador, T., Smith, F., Potter, J. 2016. Considering new media in scholarly assessment. UCL Teaching and Learning Conference. 19 April 2016. Institute of Education, UCL, London, UK. https://www.ucl.ac.uk/teaching-learning/events/conference/2016/UCLTL2016Abstracts; https://goo.gl/nqygUH

    Wikipedia Course Leaders’ event

    By Mira Vogel, on 15 August 2016

    Wikimedia UK held a Wikipedia Course Leaders event on the afternoon of July 19th. The meeting brought together academics who use Wikipedia in their modules, Wikipedians in Residence, and other Wikipedia and higher education enthusiasts (like me) to exchange practice and think about some of the challenges of setting assessed work in an environment which is very much alive and out in the world.

    As you can imagine, we were all in agreement about the potential of Wikipedia in our respective disciplines, which included Applied Human Geography, Psychology, Law, World Christianity, and Research Methods for Film. As you can see from the notes we took, we discussed colleagues’ and students’ reservations, tensions and intersections between Wikimedia and institutional agendas, relationships between students and other Wikipedians, assessment which is fair and well-supported, and Wikipedia tools for keeping track of students. There are plenty of ideas, solutions, and examples of good and interesting practice. There is a new and developing Wikimedia page for UK universities.

    If you are interested in using Wikipedia to give your students the experience of public writing on the Web and contributing within a global community of interest, there is plenty of support.

    ELESIG London 3rd Meeting – Evaluation By Numbers

    By Mira Vogel, on 13 July 2016

    The third ELESIG London event, ‘Evaluation By Numbers’, ran for two hours on July 7th. Building on the successful format of our last meeting, we invited two presenters on the theme of ‘proto-analytics’ – an important aspect of institutional readiness for learning analytics which empowers individuals to work with their own log data and come up with theories about what to do next. There were 15 participants with a range of experiences and interests, including artificial intelligence, ethics, stats and data visualisation, and a range of priorities including academic research, academic development, data security and real-time data analysis.
    After a convivial round of introductions there was a talk from Michele Milner, Head of the Centre for Excellence in Teaching and Learning at the University of East London, titled Empowering Staff And Students. UEL is determined to avoid data-driven decision-making; its investigations had confirmed a lack of enthusiasm and a wariness on the part of most staff about working with log data. This is normal in the sector and probably attributable to a combination of inexperience and overwork. The UEL project had several strands. One was attendance monitoring feeding into a student engagement metric with more predictive power, including the correlation between engagement (operationalised as, for example, library and VLE access and data from the tablets students are issued) and achievement. This feeds a student retention app, along with demographic weightings. Turnitin and Panopto (lecture capture) data have so far been elusive, but UEL is persisting on the basis that these gross measures do correlate.
    The project gave academic departments a way to visualise retention as an overall red-amber-green rating, and to simulate the expected effects of different interventions. The feedback they received from academics was broadly positive but short of enthusiastic, with good questions about cut-off dates, workload allocation, and the nature and timing of interventions. Focus groups with students revealed low awareness of data collection, that students weren’t particularly keen to see the data, and that if presented with it they would prefer bar charts by date rather than comparisons with other students. We discussed the ethics of data collection, including the possibility of students opting in to or out of opening up their anonymised data set.
    Our next speaker was Andreas Konstantinidis from King’s College London, on Utilising Moodle Logs (slides). He attributes the low number of educators currently working with VLE data to the limitations of the logs. In Moodle’s case this is particularly to do with limited filtering, and the exclusion of some potentially important data, including Book pages and links within Labels. To address this, he and his colleague Cat Grafton worked on some macros to allow individual academics to import and visualise logs downloaded from their VLE (KEATS) in an MS Excel spreadsheet.
    To dodge death by data avarice they first had to consider which data to include, deciding on the following:
    • Mean session length does not firmly correspond to anything, but the fluctuations are interesting.
    • Bounce rate indicates students are having difficulty finding what they need.
    • Time of use, combining two or more filters, can inform plans about when to schedule events or release materials.
    • The top and bottom 10 students engaging with Moodle, and the top and bottom resources used – this data can be an ice-breaker for discussing reasons and support.
    • IP addresses may reveal where students are gathering, e.g. a certain IT room, which in turn may inform decisions about where to reach students.
    King’s have made KEATS Analytics available to all (it includes a workbook), and you can download it from http://tinyurl.com/ELESIG-LA. It currently supports Moodle 2.6 and 2.8, with 3.x coming soon. At UCL we’re on 2.8 for only the next few weeks, so if you want to work with KEATS Analytics, here’s some guidance for downloading your logs now.
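    If you would rather poke at the raw export yourself, here is a minimal sketch of the kind of counting KEATS Analytics automates – not its actual implementation – assuming a Moodle course log downloaded as CSV. The file name is hypothetical, and the column names (‘Time’, ‘User full name’, ‘Event context’) follow the standard log report export, so check them against your own download and adjust.

```python
# Sketch: simple counts over a Moodle log exported as CSV. The file name is
# hypothetical; column names match the standard log report export in recent
# Moodle versions but may differ in yours -- adjust to suit.
import pandas as pd

log = pd.read_csv("moodle_course_log.csv")

# Parse the timestamp; dayfirst=True suits the UK-style dates Moodle exports.
log["Time"] = pd.to_datetime(log["Time"], dayfirst=True)

# Activity by hour of day -- useful when deciding when to release materials.
by_hour = log["Time"].dt.hour.value_counts().sort_index()
print("Events by hour of day:\n", by_hour)

# Most and least visited resources/activities (the 'Event context' column).
context_counts = log["Event context"].value_counts()
print("Top 10 contexts:\n", context_counts.head(10))
print("Bottom 10 contexts:\n", context_counts.tail(10))

# Most and least active students -- a conversation starter, not a verdict.
user_counts = log["User full name"].value_counts()
print("Top 10 users:\n", user_counts.head(10))
print("Bottom 10 users:\n", user_counts.tail(10))
```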
    As Michele (quoting Eliot) said, “Hell is a place where nothing connects with nothing”. Although it is not always fit to use immediately, data abounds – so what we’re looking for now are good pedagogical questions which data can help to answer. I’ve found Anna Lea Dyckhoff’s meta-analysis of tutors’ action research questions helpful. To empower individuals and build data capabilities in an era of potentially data-driven decision-making, a good start might be to address these questions in short worksheets which take colleagues who aren’t statisticians through statistical analysis of their data. If you are good with data and its role in educational decision-making, please get in touch.
    A participant pointed us to a series of podcasts from Jisc around the ethical and legal issues of learning analytics. Richard Treves has a write-up of the event and my co-organiser Leo Havemann has collected the tweets. For a report on the current state of play with learning analytics, see Sclater and colleagues’ April 2016 review of UK and International Practice. Sam Ahern mentioned there are still places on a 28th July data visualisation workshop being run by the Software Sustainability Institute.
    To receive communications from us, including details of our next ELESIG London meeting, please sign up to the ELESIG London group on Ning. It’s free and open to all with an interest in educational evaluation.

    KEATS Analytics screenshot