
Digital Education team blog


Ideas and reflections from UCL's Digital Education team


Archive for the ‘Mira’s Mire’ Category

Joint Faculty Best Practice Event on Digital Education, February 2018

By Mira Vogel, on 1 March 2018

Arne Hofmann and Helen Matthews from the UCL Joint Faculty (Arts and Humanities and Social and Historical Science) have hit on a successful format for a practice sharing session. Speakers make brief presentations and then disperse to ‘stations’ around the room so that participants can circulate and discuss. At the end of the event is a plenary discussion.

The third event in this series had a digital education focus. Sanjay Karia who heads up IT for SLASH kindly contributed display screens for the stations. I matchmade colleagues in Digital Education with the presenters based on interest; they made notes of the conversations (using MS Teams as recommended by IT for SLASH) and I largely owe this blogpost to them.

The splendid presenters and their presentations, some of which include links to examples of student work:

  • Mark Lake (Senior Lecturer, Archaeology) described undergraduate students blogging for ARCL3097 Archaeology in the World – a particularly gutsy initiative given it was a compulsory module for final-year undergraduates in the NSS zone [Mark’s slides as PDF].
  • Riitta Valijarvi (Senior Teaching Fellow, Finnish) talked about her Wikipedia Translatathon, a cultural and linguistic event marking the centenary of Finnish independence which brought together students, Finnish or Finnish-speaking staff from UCL, members of the public, and Wikimedia UK staff.
  • Jakob Stougaard-Nielsen (Senior Lecturer, SELCS) discussed students producing digital objects for the Qualitative Thinking module of the BASc [Jakob’s slides as PDF];
  • Jacky Derrick (Deputy Module Convenor, History) described how first year undergraduate student groups produce web sites together on a subject which gets them engaging with London;
  • Nick Grindle and Jesper Hansen (Senior Teaching Fellows, Arena Centre) reviewed their experiences organising peer feedback via the fearsome-looking but actually wonderful Moodle Workshop activity [Nick’s slides as PDF];
  • Maria Sibiryakova (Senior Teaching Fellow, Russian) talked about how the multimedia discussion app VoiceThread can advance the four skills of language learning [Maria’s slides as PDF];
  • Jonathan Holmes (Professor of Physical Geography) and Nick Mann (Learning Resources Coordinator, Geography) on designing digital multiple choice exams [Jonathan’s and Nick’s slides as PDF];
  • Clive Young (Digital Education Advisory Team Leader) on meeting the UCL minimum quality standard known as ‘the UCL E-Learning Baseline’.

Here are some of the themes from the event.

How should students be inducted to new technical platforms? For some cohorts this was hardly an issue, and staff soon felt comfortable abandoning the training session at the beginning of the module in favour of a drop-in as the deadline approached. However, there are disciplinary differences and not all groups can be guaranteed to have somebody particularly comfortable with using technologies, so the drop-ins are important. Based on my own experience inducting some large cohorts to Mahara, if it’s done at all then it’s best done when the students have some vision of what they want to do there – i.e. not at the very start of the module, and close enough to the deadline that there is no hiatus between the induction and putting the knowledge to use. In addition, students seemed to know less about copyright and intellectual property than about the technologies, so some modules had incorporated sessions on those.

How do we assess digital multimodal work? Formative assessment was considered very worthwhile, especially where students were new to the activity. Currently there is sometimes a criterion related to appropriate use of the mode or format, such as “use of text formatting and good quality images and/or multimedia which clearly enhance the text”. Often there is an element of writing in the work which would be run through Turnitin according to departmental policy. I think it is probably fair to say that (like most of the sector) we are in transition to explicitly recognising the distinctive qualities of digital multimodal composition. I have seen how, in many cases, new and potentially challenging practices need to be eased through teaching committees by anchoring them to the accepted standards and criteria – at least for the first few iterations. With time and experience comes new awareness and recognition of distinctive practices which work well in a given context. Jakob’s slides are particularly detailed on this – the BASc have been giving this kind of thing consideration from day one.

Where should digital multimodal work be positioned in the curriculum? There was a general sense that modularisation tends to isolate digital activities within programmes. This could lead either to their not being built upon (where they happened early) or to their academic validity being questioned (where they happened later). Support includes showing students exemplars of blogs and creating opportunities for them to carry out guided marking to help them grasp standards and apply the assessment criteria to their own work.

What if students question the academic validity of a digital activity? Where new forms of digital assessment are introduced later in a programme, expect students to query whether it is really necessary for their degree. The challenge, summarised by Mark, is to pre-emptively “tackle student perception” by advocating for the activity in terms of student learning and success. Archaeology in the World saw their evaluation questionnaire results slowly improve as the tutors learned to advocate for the activity, and students came to recognise it as useful.

When can students’ work be public? In cases like the Wikipedia event, the work is born public. In other cases this is something to be negotiated with students – but there is often groundwork to do beforehand. Students need guidance to use media that is itself licensed to be made public. Where the work happens in groups, licensing their work needs to be a joint and unanimous decision with a take-down policy.

How can different skills levels be accommodated? The intermediate Russian language students were at different levels, which meant that multimedia production such as recordings of poetry read aloud helped them practise speaking (one of the four skills of modern language learning), and the individualised recorded feedback they were given helped them with listening (another of the skills). VoiceThread brought a privacy and timeliness to the feedback which had not previously existed; without timely correction, students risk embedding their mistakes. Another approach to different skills levels is to create groups of students on the assumption that they will either sustain each other in acquiring the skills or divide the labour according to skills, and a third is to give extra guidance to students who need it (as with the Finnish-English Wikipedia Translatathon).

How can the new practice be made to work first time? When the Arena Centre pioneered large scale use of the Moodle Workshop activity for peer feedback, they worked closely with Digital Education – we made those early deadlines our own, and together we prepared for different contingencies. As well as working closely with Digital Education, Geography subjected their digital examination to a number of rigorous checks involving academic and professional services colleagues, students, and internal and external examiners. Digital Education has produced the Baseline to support the quality aspects – these are not intuitive. One participant remarked to me later that he had been sceptical, bordering on resentful, of the Baseline until he started working through it, at which point he realised how useful it is.

What do students get out of the digital side of things? Some indicative comments from Jakob’s students: “learnt to consider digital content in a very different way”; “through creating a Digital Object rather than a traditional essay, I was able to engage with our topic at a much deeper level”, and “I have also developed transferable skills”. Mark received correspondence that the activity “really made me think and synthesise in a new way”. Nick’s and Jesper’s Arena participants have been very positive about giving and receiving peer feedback.

~~~

There are a few things I’d change about how I organised the event. One is that either it should be extended by half an hour (to two hours) or else the number of speakers should be reduced. As it was, we overran and I was very sad to have to cut off a very interesting plenary discussion just as colleagues were beginning to really want to talk with each other. Another is that teaching languages has a distinct set of needs which justify a dedicated event. I might also consider asking the presenters to circulate rather than the participants (though I can see pros and cons there).

That aside, it was a lively, spontaneous, humorous, sophisticated event which balanced different sets of needs – educational and disciplinary, colleagues’ and students’. It is so often the case that when colleagues have the opportunity to seek each other out based on mutual interest, the fruits soon make themselves evident. One participant told me he went from the event straight to his department’s Staff Student Consultative Committee where he proposed an idea which was accepted. “That’s impact”, he said.

What I saw at ALTC 2017

By Mira Vogel, on 8 September 2017

I’ve been at ALTC, the Association for Learning Technology Conference 2017. To come, a harder piece to write where I make sense of it all – but for now I’m going to summarise each session I attended, mainly because I really enjoyed hearing from everyone else about what they went to. Incidentally, the keynotes and all of the sessions which took place in the largest room are available to watch on ALT’s YouTube channel (where there will hopefully be a playlist in due course).

Day 1

Bonnie Stewart, a keynote speaker from a non-traditional background, spoke about the exclusions which ensue from only planning for norms. Among the many insights she shared was Ronald Heifetz’s distinction between problems which technology can solve and problems which require humans to adapt their behaviour.

Helen Walmsley-Smith introduced eDAT, a tool for analysing the content of online learning activity designs. The resulting data can then be analysed alongside feedback and retention data, allowing a learning design to be evaluated and successful types in different contexts to be identified. eDAT is freely available. There are early signs that interactivity is related to improved retention.
Emma Mayhew and Vicki Holmes from Reading described the shift from paper-based to digital assessment processes, part of a major programme of EMA (electronic management of assessment) funding. With eight academic and student secondees, they aim to improve each part of the cycle, from better awareness at the ‘Setting’ stage to better monitoring of progress at the ‘Reflection’ stage. They found that the idea of ‘consistency’ was problematic and might refer to satisfaction rather than practices. Their review of other institutions found that the most successful outcomes were in institutions which consulted carefully.
Peter Alston (Liverpool) discussed how ‘the academy’ does not mean the same thing to everyone when it discusses e-assessment, highlighting the differences between professional services and academic perspectives. He adopted Whitchurch’s (2008) ‘third space’ approach, examining the contestation, reconciliation and reconstruction (Whitchurch 2010) around practices, rules, regulations and language.
Why are the rates of e-submission and feedback at the University of Essex so high? Ben Steeples looked back at a decade of electronic submission and feedback on a platform built in-house, which designed out a number of problems affecting other platforms. Maintaining the in-house system costs £75k a year, but the integrations with e.g. calendar and student records are excellent and the service is very reliable. They expect to develop analytics. I love hearing from in-house developers making large strategically important institutional systems which work well.
Daniel Roberts and Tunde Varga-Atkins discussed the minimum standards (‘hygiene factors’) for Liverpool’s VLE, and the development of an evaluation model involving students which could be used with other initiatives. Students are a transient presence who can be hard to reach; approaches to involving them in the evaluation included recruiting them as auditors and running focus groups. Between staff and students at Liverpool there was little mutual recognition of the respective effort which goes into using the VLE.
In one of the stand-out sessions for me, Simon Thomson and Lawrie Phipps summarised Jisc’s #Codesign16 consultation on needs for a next-generation digital learning environment. There was a sense that the tools drive the pedagogy, that they exist to control the academy, and that administration processes were de facto more important than education. Jisc found that students were using laptops and phones almost equally (only 40% used a tablet). Students arrive at university networked, but the VLE currently stands alone without interfacing with those networks. At Leeds Beckett, PULSE (Personalised User Learning and Social Environment) set out to address this by letting individuals connect spaces where they had existing relationships, allowing them to post once and selectively release to multiple places. The data within PULSE is entirely owned by students; when they leave, they can take it with them. Unsurprisingly, students expressed no strong desire to integrate personal tools with uni platforms – as ever, educators need to design use of PULSE into the curriculum. However, the VLE vendor would not give access to the APIs to allow the kind of integration this would require.
Helen Beetham and Ellen Lessner introduced video accounts of learning digitally from 12 students, not all of whom loved technology. The institutional technologies do not come out well in Jisc’s ‘Student digital experience tracker 2017’, but we have no idea whether that is to do with the task design, the support for new ways of learning, or the technologies themselves. Find resources at bit.ly/ALTC17digijourneys.
Carina Dolch asked whether students are getting used to learning technology. She described the massification and diversification of Germany’s higher education system, and how students’ media usage was changing over time. A survey of 3666 students confirmed an increase in time spent online since 2012. However – and this is hard to explain – the frequency of text media use has been decreasing, as has the use of both general tools (search engines, Skype, etc.) and e-learning tools and services (Moocs, lecture recordings, etc.). Non-traditional students tend to use technologies functionally tied to their institution, whereas traditional students tended to use technologies more recreationally. Students expressed reluctance to be at the forefront of innovations, and there were more active decisions to be offline.

Day 2

I loved Sian Bayne’s keynote about anonymity. She used the demise of Yik Yak, the anonymous hyperlocal networking app, to talk about campus networks and privacy. Yik Yak’s high point in the download chart was 2014. In 2016 they withdrew anonymity, which is reflected in a plunge in usage at Edinburgh. Yik Yak restored anonymity shortly before closing in 2017, to no particular regret in the media; it had not been able to use personal data to finance itself. Moral panics about anonymous social media served platform capitalism by demanding that everyone be reachable and accountable. Edinburgh students discussed student life (including mental health), sex and dating, with some academic and political issues. Most students found it a kind and supportive network. Anonymity studies note the ‘psychic numbing’ which allows most social media users to join up their accounts in the interests of living an “effective life”, inuring them to the risks of surveillance capitalism. Some users resist surveillance by cloaking their identity – however this seems over-reliant on other users not cloaking theirs, since otherwise the enterprise, relying as it does on personal data, inevitably folds. I can’t see any other way to escape platform capitalism than to organise sustainable resourcing for open platforms such as Mastodon and Diaspora.
Fotios Mispoulos took a University of Liverpool instructor’s perspective on the effectiveness of learner-to-learner interactions. Most of the research into learner-to-learner interactions happened in the 1990s and found improved satisfaction and outcomes, though there are some counter-findings. As usual the particulars of the task design, year group etc. were glossed over, so we may be trying to compare apples and bananas.
Vicki Holmes and Adam Bailey talked about introducing Blackboard Collaborate Ultra (which we have at UCL) for web meetings at Reading. I thought their approach was very good: to clarify purposes and promote commitment they asked for formal expressions of interest, then ran workshops with selected colleagues to build confidence and technical readiness (headphones, the right web browser). These workshops refined designs for meetings around placement support, sessions between campuses, assessment support tutorials, and pre-session workshops, among other purposes. Participants from Politics, Finance and Careers observed positive outcomes. Recommendations include avoiding simply lecturing since students disengage quickly, designing interactions carefully (rather than expecting them to happen), developing the distinct presentation techniques, and preparing students (again around technical readiness and role). 87% of students felt it was appropriate to their learning.
Beth Snowden and Bronwen Swinnerton presented on rethinking lectures in three redesigned tiered theatres at the University of Leeds. Each ‘pod’ has a mic, top-lighting, and a wired-in thinkpad device which can be used to send responses and also to present via the data projector. One lecturer observed that students who had chatted to each other were more likely to chat with him and to ask questions. Another doubted he could continue referring to the session as a ‘lecture’. Responses to the evaluation survey suggested that the average time spent listening to the lecturer was 49%, which was assumed to be less than in the other lecture theatres. Just over half of staff felt that the new lecture theatres created extra work, but more felt they were a positive development. Future evaluation will focus on educational uses.
[See YouTube University of Leeds “upgrade of teaching spaces”]
Catherine Naamani looked at the impact of space design on collaborative approaches at the University of South Wales. The flexible spaces had colour-coded chairs round triangular tables, each with its own screen which students could present to using an app, and which the tutor could access. The more confident groups gained more tutor attention while the least engaged groups tended to be international students, so more group-to-group activity needed to be designed. Staff tended to identify training needs with the technology, but not developmental needs around educational approaches to using that technology.
In another stand-out session, digital education strategists and academics at their respective institutions – Kyriaki Agnostopoulou, Don Passey, Neil Morris and Amber Thomas – looked at the evidence bases and business cases for digital education. Amber noted that the academic, administrative and technical strands of an organisation don’t speak to each other until the top of the organisation. How do digital education workers influence their organisations’ strategies? There are four distinct origins of evidence: technology affordances, uses, outcomes and impact. The former kinds of evidence can be provided through qualitative case studies, the latter through quantitative independent control-group studies. Case studies are abundant, but far rarer are studies which show evidence of impact over time. Amber urged us to learn the language of ITIL and Prince 2 to “understand them as much as you want them to understand you”. Return on investment, laying out true costs (staff time, supply costs, simultaneous users), use cases (and edge cases), capital and recurrent spend, strategic alignment, gains (educational, efficiency and PR), options appraisals, sustainability and scalability, and risk analyses are a way to be ready for management critique of any idea. Neil Morris (Leeds) took the view that using evidence is the most powerful way of making change. Making the academic case first gets the idea talked about.
Online submission continues to outstrip e-marking at the University of Nottingham. Helen Whitehead introduced ‘Escape from paper mountain‘, an educational development escape game through which staff would understand how to use an online marking environment [see ALT Winter Conference]. The scenario is an assessor who has completed his marking but then disappeared; the mission is to find his marking and get it to the Exam Board in 60 minutes. The puzzles, to be solved in groups, are all localised, sometimes even at the subject-specific level. There are plenty of materials at yammer.com/escapehe.
Kamakshi Rajagopal from the Open University of the Netherlands ran a workshop on practical measures to break out of online echo chambers (aka filter bubbles) – networks of people from similar backgrounds and strata of society – in the context of an egocentric, personally and intentionally created personal learning network. One group came up with the idea of a ‘Challenge me’ or ‘Forget me’ button to be able to serve yourself different feeds.

Day 3

(The amount of notes reflects the amount of sleep).
Peter Goodyear’s keynote was very good. He talked about designing physical spaces for digital learning, which he called ‘multidimensional chess’. He introduced these as apprentice spaces where students learn to participate in valued practices. While STEM subjects require a lot of physical infrastructure, arts, humanities and social sciences require cognitive structures to learn to use knowledge and work with others. Designers reduce complexity by concentrating on what learners will do in the spaces. The activities themselves are not designable, but the guides and scaffolds are. Active learning risks cognitive overload due to the mechanics of the tasks – the instructions, navigating the task. The activity-centred analysis and design framework sets out how to mitigate this. Find the slides at petergoodyear.net.
John Traxler described initial thoughts about an Erasmus+ project to empower refugee learners from the Middle East and North Africa through digital literacy. Few Moocs are oriented to refugees, and those which are depend on the availability of volunteers. Engaging in a Mooc obviously depends on digital access and capabilities. Other challenges include language, expectations and cultural assumptions. Digital literacy can be interpreted as employability skills, or alternatively with a more liberal, individualistic definition to do with self-expression. The group is very hard to reach, so it is hard to carry out a valid needs assessment. The project is moonlite.
Lubna Alharbi talked about using emotion analysis to investigate the lecturer-student relationship in a fully online setting. Emotions which interfere with learning include isolation and loneliness arising from lack of interaction. To motivate students it is very important for the tutor to interpret and react to emotions. The International Survey on Emotional Antecedents and Reactions (ISEAR) dataset consists of sentences related to different emotions; the Synesketch tool was also mentioned.
In another stand-out session, Khaled Abuhlfaia asked how the usability of learning technologies affects learners. In usability research, usability is conceived as effectiveness, efficiency, learnability, memorability, error handling and satisfaction. The literature review was very well reported; he found that there is far more evidence about the effectiveness, efficiency and satisfaction dimensions (mostly from questionnaires and interviews), while the other dimensions, though important, have been neglected.
Academic course leaders choose textbooks in a climate of acute student worries about living costs (not to mention the huge debts they graduate with). Viv Rolfe, David Kernohan and Martin Weller compared open textbook use in the UK and the US. In the US open textbook use has been driven by student debt – and in the UK nearly 50% of students graduating in 2015 had debt worries.
Ian McNicoll talked about the learning technologist role as a ‘fleshy interface’ between educators (who view LTs as techies), techies (who view LTs as quasi-academics), students (who view LTs as helpdesk staff) and the institution (which views LTs as strategic enablers).
John Tepper and Alaa Bafail discussed ways to calibrate designs for learning activities in STEM subjects. These are currently tied to outcomes statements, where outcomes are constructivist – teachers create a learning environment supportive of learning activities appropriate to the outcomes. Quality was operationalised as student satisfaction, which I thought might be problematic since it does not itself relate to outcomes. I also wondered about the role of context for each activity, e.g. demographic differences and level, which I missed in the talk. The presenters took a systems approach to evaluating quality, through which designs which elicited high student satisfaction were surfaced. Anyone interested in designing educational activities will probably be interested in Learning Designer, which was mentioned in the talk, is really good, and is still being maintained. It’s increasingly rare for software developers to talk at ALTC, so it was good to hear about this. I found this talk fascinating and baffling in equal measure, but thoroughly intriguing.
Sam Ahern discussed learning analytics as a tool for supporting student wellbeing. One fifth of all adults surveyed by the NHS have a long-term common mental health problem, with variation between demographic groups. The number reporting mental health problems on entry has jumped 220% as student numbers have climbed. Poor mental health manifests as behaviour change around attendance, meeting deadlines, self-care and signs of frustration. Certain online behaviours can predict depressive episodes.

Assessment in Higher Education conference, an account

By Mira Vogel, on 25 July 2017

Assessment in Higher Education is a biennial conference which this year was held in Manchester on June 28th and 29th. It is attended by a mix of educators, researchers and educational developers, along with a small number of people with a specific digital education remit of one kind or another (hello Tim Hunt). Here is a summary – it’s organised around the speakers so there are some counter-currents. The abstracts are linked from each paragraph, and for more conversation see the Twitter hashtag.

Jill Barber presented on adaptive comparative judgement – assessment by comparing different algorithmically-generated pairs of submissions until saturation is reached. This is found to be easier than judging on a scale, allows peer assessment, and its reliability bears up favourably against expert judgement. I can throw in a link to a fairly recent presentation on ACJ by Richard Kimbell (Goldsmiths), including a useful Q&A part which considers matters of extrapolating grades, finding grade boundaries, and giving feedback. The question of whether it helps students understand the criteria is an interesting one. At UCL we could deploy this for formative, but not credit-bearing, assessment – here’s a platform which I think is still free. Jill helpfully made a demonstration of the platform she used available – username: PharmEd19, password: Pharmacy17.
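To make the mechanics concrete, here is a minimal sketch of the general idea in Python. It is illustrative only, not the platform Jill demonstrated: real ACJ implementations typically choose pairs adaptively and fit a Bradley-Terry model to the judgements, whereas this toy pairs scripts with similar running scores and applies a simple Elo-style update. Every name and parameter below is invented.

```python
import random

def update_elo(ratings, winner, loser, k=32.0):
    """Nudge two scripts' ratings after one pairwise judgement (Elo-style)."""
    # Expected probability that the winner would beat the loser, given current ratings.
    expected = 1 / (1 + 10 ** ((ratings[loser] - ratings[winner]) / 400))
    ratings[winner] += k * (1 - expected)
    ratings[loser] -= k * (1 - expected)

def rank_scripts(scripts, judge, rounds=10):
    """Estimate a rank order from repeated pairwise judgements.

    `judge(a, b)` returns whichever of the two scripts it considers better.
    The 'adaptive' step: each round pairs scripts whose current ratings are
    closest, which is where such systems gain most information per judgement.
    """
    ratings = {s: 1000.0 for s in scripts}
    for _ in range(rounds):
        ordered = sorted(scripts, key=ratings.get)
        for a, b in zip(ordered[::2], ordered[1::2]):
            winner = judge(a, b)
            update_elo(ratings, winner, b if winner == a else a)
    return sorted(scripts, key=ratings.get, reverse=True)

# Toy usage: a 'judge' who reliably prefers the script with higher hidden quality.
quality = {f"script{i}": random.random() for i in range(8)}
judge = lambda a, b: a if quality[a] >= quality[b] else b
print(rank_scripts(list(quality), judge))
```

With a consistent judge the recovered order converges on the hidden quality order after a few rounds, which is essentially the 'saturation' point mentioned above.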

Paul Collins presented on assessing a student-group-authored wiki textbook using Moodle wiki. His assessment design anticipated many pitfalls of wiki work, such as the tendency to fall back on task specialisation, leading to cooperation rather than collaboration (where members influence each other – and he explained at length why collaboration was desirable in his context), and reluctance to edit others’ work (which leads to additions which are not woven in). His evaluation asked many interesting questions which you can read more about in this paper to last year’s International Conference on Engaging Pedagogy. He learned that delegating induction entirely to a learning technologist led students to approach her with queries – this meant that the responses took on a learning technology perspective rather than a subject-oriented one. She also encouraged students to keep a word-processed copy, which led them to draft in Word and paste into Moodle Wiki, losing a lot of the drafting process which the wiki history could have revealed. He recommends letting students know whether you are more interested in the product, or the process, or both.

Jan McArthur began her keynote presentation (for slides see the AHE site) on assessment for social justice by arguing that SMART (specific, measurable, agreed-on, realistic, and time-bound) objectives in assessment overlook precisely the kinds of knowledge which are ‘higher’ – that is, reached through inquiry; dynamic, contested or not easily known. She cautioned about over-confidence in rubrics and other procedures. In particular she criticised Turnitin, calling it an “instrumentalisation/industrialisation of a pedagogic relationship” which could lead students to change something they were happy with because “Turnitin wasn’t happy with it”, and calling its support for academic writing “a mirage”. I don’t like Turnitin, but felt it was mischaracterised here. I wanted to point out that Turnitin has pivoted away from ‘plagiarism detection’ in recent years, to the extent that it is barely mentioned in the promotional material. The problems are where it is deployed for policing plagiarism – it doesn’t work well for that. Meanwhile its Feedback Studio is often appreciated by students, especially where assessors give feedback specific to their own work, and comments which link to the assessment criteria. In this respect it has developed in parallel with Moodle Assignment.

Paul Orsmond and Stephen Merry summarised the past 40 years of peer assessment research as ’80s focus on reliability and validity, ’90s focus on the nature of the learning, and a more recent focus on the inseparability of identity development and learning – a socio-cultural approach. Here they discussed their interview research, excerpting quotations and interpreting them with reference to peer assessment research. There were so many ideas in the presentation I am currently awaiting their speaker notes.

David Boud presented his and Philip Dawson’s work on developing students’ evaluative judgement. Their premise is that the world is all about evaluative judgement and that understanding ‘good’ is a prerequisite for producing ‘good’, so it follows that assessment should be oriented to informing students’ judgements rather than “making unilateral decisions about students”. They perceived two aspects of this approach – calibrating quality through exemplars, and using criteria to give feedback – and urged more use of self-assessment, especially for high-stakes work. They also urged starting early, and cautioned against waiting until “students know more”.

Teresa McConlogue, Clare Goudy and Helen Matthews presented on UCL’s review of assessment in a research-intensive university. Large, collegiate, multidisciplinary institutions tend to have very diverse data corresponding to diverse practices, so reviewing is a dual challenge of finding out what is going on and designing interventions to bring about improvements. Over-assessment is widespread, and often students have to undertake the same form of assessment repeatedly. The principles of the review included focusing on structural factors and groups, rather than individuals, and aiming for flexible, workload-neutral interventions. The work will generate improved digital platforms, raised awareness of pedagogy of assessment design and feedback, and equitable management of workloads.

David Boud presented his and others’ interim findings from a survey to investigate effective feedback practices at Deakin and Monash. They discovered that by half way through a semester nearly 90% of students had not had an assessment activity. 70% received no staff feedback on their work before submitting – more were getting it from friends or peers. They also discovered scepticism about feedback – 17% of staff responded that they could not judge whether feedback improved students’ performance, while students tended to be less positive about feedback the closer they were to completion; this has implications for how feedback is given to more advanced undergraduate students. 80% of students recognised that feedback was effective when it changed them. They perceived differences between individualised and personalised feedback. When this project makes its recommendations they will be found on its website.

Sally Jordan, Head of the School of Physical Science at the OU, explained that for many in the assessment community, learning analytics is a dirty word, because if you go in for analytics, why would you need separate assessment points? Yet analytics and assessment are likely to paint very different pictures – which is right? She suggested that, having taken a view of assessment as ‘of’, ‘for’ and ‘as’ learning, the assessment community might consider the imminent possibility of ‘learning as assessment’. This is already happening as ‘stealth assessment’ when students learn with adaptable games.

Denise Whitelock gave the final keynote (slides on the AHE site) asking whether assessment technology is a sheep in wolf’s clothing. She surveyed a career working at the Open University on meaningful automated feedback which contributes to a growth mindset in students (rather than consolidating a fixed mindset). The LISC project aimed to give language learners feedback on sentence translation – immediacy is particularly important in language learning to avoid fossilisation of errors. Another project, Open Mentor, aimed to imbue automated feedback with emotional support using Bales’ interaction process categories to code feedback comments. The SAFeSEA project generated Open Essayist which aims to interpret the structure and content of draft essays, identifies key words, phrases and sentences, identifies summary, conclusion and discussion, and presents these to the author. If Open Essayist has misinterpreted the ideas in the essay, the onus is on the author to make amendments. How it would handle some more avant-garde essay forms I am not sure – and this also recalls Sally Jordan’s question about how to resolve inevitable differences between machine and human judgement. The second part of the talk set out and gave examples of the qualities of feedback which contributes to a growth mindset.
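As a flavour of the kind of text analysis involved, here is a toy sketch of extractive key-sentence identification in Python. It is emphatically not Open Essayist’s actual method (which I have not seen); it simply scores each sentence by the overall frequency of its content words, and the stopword list and example text are invented for illustration.

```python
import re
from collections import Counter

# A deliberately tiny stopword list; real systems use much richer language models.
STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "it",
             "that", "this", "can", "be", "they", "before"}

def key_sentences(essay, n=2):
    """Return the n sentences whose content words are most frequent in the essay."""
    sentences = re.split(r"(?<=[.!?])\s+", essay.strip())
    words = [w for w in re.findall(r"[a-z']+", essay.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence):
        tokens = [w for w in re.findall(r"[a-z']+", sentence.lower())
                  if w not in STOPWORDS]
        # Average corpus-wide frequency of the sentence's content words.
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    return sorted(sentences, key=score, reverse=True)[:n]

draft = ("Feedback supports learning. Automated feedback can be immediate. "
         "Immediate feedback helps students correct errors before they fossilise.")
print(key_sentences(draft, n=1))
```

Even this crude approach surfaces the sentence that pulls the draft’s recurring terms together, which hints at why presenting key sentences back to an author can prompt useful revision.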

I presented Elodie Douarin’s and my work on enacting assessment principles with assessment technologies – a project to compare the feedback capabilities of Moodle Assignment and Turnitin Assignment for engaging students with assessment criteria.

More blogging on the conference from Liz Austen and Richard Nelson, plus a related webinar on feedback.

Fake news and Wikidata

By Mira Vogel, on 20 February 2017

James Martin Charlton, Head of the Media Department at Middlesex University and co-host of today’s Wikimedia Education Summit, framed Wikimedia as a defence against the fake news currently spread and popularised by dominant search engine algorithms. Fake news undermines knowledge as power and renders societies easily manipulable. This is one reason several programme leaders I work with – one of whom was at the event – have expressed interest in incorporating Wikimedia into their curricula. (Wikimedia is the collection of projects of which Wikipedia is the best known, but which also includes Wikivoyage, Wikisource and Wikimedia Commons).

Broadly there are two aspects to Wikimedia in education. One is the content – for example, the articles in Wikipedia, the media in Wikimedia Commons, the textbooks in Wikibooks. All of this content is in the public domain, available to use freely in our projects and subject to correction and improvement by that public. The other aspect is process. Contributing to Wikimedia can qualify as higher education when students are tasked with, say, digesting complex or technical information for a non-expert Wikipedia readership, or negotiating changes to an article which has an existing community of editors, or contributing an audio-recording which they later use in a project they publish under an open licence. More recently, Wikidata has emerged as a major presence on the linked and open data scene. I want to focus on Wikidata because it seems very promising as an approach to engaging students in the structured data which is increasingly shaping our world.

Wikidata is conceived as the central data storage for the aforementioned Wikimedia projects. Unlike Wikipedia, Wikidata can be read by machines as well as humans, which means it can be queried. So if you wish – as we did today – to see at a glance the notable alumni from a given university, you can. Today we gave a little back to our hosts by contributing an ‘Educated at’ value to a number of alumni who lacked it on Wikidata. This enabled those people to be picked up by a Wikidata query and visualised. But institutions tend to merge or change their names, so I added a ‘Followed by’ attribute to the Wikidata entry for Hornsey College of Art (which merged into Middlesex Polytechnic), allowing the query to be refined to include Hornsey alumni too. I also visualised UCL’s notable alumni as a timeline (crowded – zoom out!) and a map. The timeline platform is called Histropedia and is the work of Navino Evans. It is available to all and – thinking public engagement – is reputedly a very good way to visualise research data without needing to hire somebody in.

So far so good. But is it correct? I dare say it’s at least slightly incorrect, and more than slightly incomplete. Yes, I’d have to mend it, or get it mended, at source. But that state of affairs is pretty normal, as anyone involved in learning analytics understands. And can’t Wikidata be sabotaged? Yes – and because the data is linked, any sabotage would have potentially far reaching effects – so there will need to be defences such as limiting the ability to make mass edits, or edit entries which are both disputed and ‘hot’. But the point is, if I can grasp the SPARQL query language (which is said to be pretty straightforward and, being related to SQL, a transferable skill) then – without an intermediary – I can generate information which I can check, and triangulate against other information to reach a judgement. How does this play out in practice? Here’s Oxford University Wikimedian in Residence Martin Poulter with an account of how he queried Wikidata’s biographical data about UK MPs and US Senators to find out – and, importantly, visualise – where they were educated, and what occupation they’ve had (153 cricketers!?).
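To give a flavour of what such a query looks like in practice, here is a minimal Python sketch that sends a SPARQL query to the Wikidata Query Service. The identifiers are my own additions rather than anything from the event: P69 is Wikidata’s ‘educated at’ property, and Q193196 is, as I understand it, the item for UCL – verify both before relying on them.

```python
import requests

ENDPOINT = "https://query.wikidata.org/sparql"

# People with an 'educated at' (P69) statement pointing at UCL
# (Q193196, as I understand it - verify the QID before relying on it).
QUERY = """
SELECT ?person ?personLabel WHERE {
  ?person wdt:P69 wd:Q193196 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 20
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY, "format": "json"},
    # The query service asks clients to identify themselves with a User-Agent.
    headers={"User-Agent": "wikidata-alumni-sketch/0.1 (illustrative example)"},
)
response.raise_for_status()

for row in response.json()["results"]["bindings"]:
    print(row["personLabel"]["value"])
```

The same SPARQL pasted into query.wikidata.org gives you the service’s built-in display options, including the map and timeline visualisations mentioned above.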

So, say I want to master the SPARQL query language? Thanks to Ewan McAndrew, Wikimedian in Residence at the University of Edinburgh, there’s a SPARQL query video featuring Navino Evans on Edinburgh’s Wikimedia in Residence media channel.

Which brings me to the beginning, when Melissa Highton set out the benefits Wikimedians have brought to Edinburgh University, where she is Assistant Principal. These benefits include building digital capabilities, public engagement for researchers, and addressing the gender gap in Wikimedia representation, demonstrating to Athena SWAN assessors that the institution is addressing structural barriers to women contributing in science and technology. Here’s Melissa’s talk in full. Bodleian Library Web and Digital Media Manager Liz McCarthy made a similarly strong case – they have had to stop advertising their Wikimedian in Residence’s services since so many Oxford University researchers have woken up to Wikimedia’s public engagement potential.

We also heard from Wikimedians with educational ideas, tutor Stefan Lutschinger on designing Wikimedia assignments, and the students who presented on their work in his Publishing Cultures module – and there were parallel sessions. You can follow the Wikimedia Education Summit tweets at .

Comparing Moodle Assignment and Turnitin for assessment criteria and feedback

By Mira Vogel, on 8 November 2016

Elodie Douarin (Lecturer in Economics, UCL School of Slavonic and Eastern European Studies) and I have been comparing how assessment criteria can be presented to engage a large cohort of students with feedback in Moodle Assignment and Turnitin Assignment (report now available). We took a mixed-methods approach using a questionnaire, a focus group and student screencasts recorded as they accessed their feedback and responded to our question prompts. Here are some of our key findings.

Spoiler – we didn’t get a clear steer over which technology is (currently) better – they have different advantages. Students said Moodle seemed “better-made” (which I take to relate to theming issues rather than software architecture ones) while the tutor appreciated the expanded range of feedback available in Moodle 3.1.

Assessment criteria

  • Students need an opportunity to discuss, and ideally practice with, the criteria in advance, so that they and the assessors can reach a shared view of the standards by which their work will be assessed.
  • Students need to know that criteria exist and be supported to use them. Moodle Assignment is good for making rubrics salient, whereas Turnitin requires students to know to click an icon.
  • Students need support to benchmark their own work to the criteria. Moodle or Turnitin rubrics allow assessors to indicate which levels students have achieved. Moreover, Moodle allows a summary comment for each criterion.
  • Since students doubt that assessors refer to the criteria during marking, it is important to make the educational case for criteria (i.e. beyond grading) as a way of reaching a shared understanding about standards, for giving and receiving feedback, and for self/peer assessment.

Feedback

  • The feedback comments most valued by students explain the issue, make links with the assessment criteria, and include advice about what students should do next.
  • Feedback given digitally is legible and easily accessible from any web-connected device.
  • Every mode of feedback should be conspicuously communicated to students, and suggestions on how to cross-reference these different modes should be provided. Some thought should be given to ways to facilitate access to and interpretation of all the elements of feedback provided.
  • Students need to know that digital feedback exists and how to access it. A slideshow of screenshots would allow tutors to hide and unhide slides depending on which feedback aspects they are using.

Effort

  • The more feedback is dispersed between different modes, the more effortful it is for students to relate it to their own work and thinking. Where more than one mode is used, there is a need to distinguish between the purpose and content of each kind of feedback, signpost their relationships, and communicate this to students. Turnitin offers some support for cross referencing between bubble comments and criteria.
  • It would be possible to ask students to indicate on their work which mode (out of a choice of possibilities) they would like assessors to use.
  • The submission of formative assessment produced with minimal effort may impose a disproportionate burden on markers, who are likely to be commenting on mistakes that students could easily have corrected themselves. Shorter formative assessments, group work, and clearer statements of the benefits of submitting formative work may all help limit the incidence of low-effort submissions.
  • If individual summary comments have a lot in common, consider releasing them as general feedback for the cohort, spending the saved time on more student-specific comments instead. However, this needs to be signposted clearly to help students cross-reference with their individual feedback.
  • As a group, teaching teams can organise a hands-on session with Digital Education to explore Moodle Assignment and Turnitin from the perspectives of students, markers and administrators. This exposure will help immeasurably with designing efficient, considerate processes and workflows.
  • The kind of ‘community work’ referred to by Bloxham and colleagues (2015) would be an opportunity to reach shared understandings of the roles of students and markers with respect to criteria and feedback, which would in turn help to build confidence in the assessment process.


Bloxham, S., den-Outer, B., Hudson, J., Price, M., 2015. Let’s stop the pretence of consistent marking: exploring the multiple limitations of assessment criteria. Assessment & Evaluation in Higher Education 1–16. doi:10.1080/02602938.2015.1024607


Authentic multimodal assessments

By Mira Vogel, on 7 October 2016

Cross-posted to the Connected Curriculum Fellows blog.

My Connected Curriculum Fellowship project explores current practice with Connected Curriculum dimension 5 – ‘Students learn to produce outputs – assessments directed at an audience’. My emphasis is on assessing students’ digital (including digitised) multimodal outputs for an audience. What does ‘multimodal’ mean? Modes can be thought of as styles of communication – register and voice, for example – while media can be thought of as its fabric. In practice, though, the line between the two is quite blurry (Kress, 2012). This work will look at multimodal assessment from the following angles.

What kinds of digital multimodal outputs are students producing at UCL, and using which media? The theoretic specificity of verbal media, such as the essay or talk, explains their dominance in academia. Some multimodal forms, such as documentaries, are recognised as (potentially) academic, while others are straightforwardly authentic, such as curation students producing online exhibitions. At the margins are works which bring dilemmas about academic validity, such as fan fiction submitted for the From Codex To Kindle module, or the Internet Cultures student who blogged as a dog.

How are students supported to conceptualise their audiences? DePalma and Alexander (2015) observe that students who are used to writing for one or two academic markers may struggle with the complex notions of audience called for by an expanded range of rhetorical resources. The 2016 Making History convenor has pointed out that students admitted to UCL on the strength of their essays may find the transition to multimodal assessment unsettling and question its validity. I hope to explore tutor and student perspectives here with a focus on how the tasks are introduced to students. I will maintain awareness of the Liberating the Curriculum emphasis on diverse audiences. I will also explore matters of consent and intellectual property, and ask what happens to the outputs once the assessment is complete.

What approaches are taken to assessing multimodal work? A 2006 survey (Anderson et al.) reported several assessment challenges for markers, including the separation of rhetorical from aesthetic effects, the diversity of skills, technologies and interpretations, and balancing credit between effort and quality where the output may be unpolished. Adsanatham (2012) describes how his students generated more complex criteria than he could have alone, helping “enrich our ever-evolving understanding and learning of technology and literacies”. DePalma and Alexander (2015) discuss written commentaries or reflective pieces as companions to students’ multimodal submissions. Finding out about the practices of staff and students across UCL promises to illuminate possibilities, questions, contrasts and dilemmas.

I plan to identify participants by drawing on my and colleagues’ networks, the Teaching and Learning Portal, and calls via appropriate channels. Building on previous work, I hope to collect screen-capture recordings, based on question prompts, in which students explain their work and tutors explain how they marked it. These kinds of recordings provide very rich data but, anticipating difficulties obtaining consent to publish these, I also plan to transcribe and analyse them using NVivo to produce a written report. I aim to produce a collection of examples of multimodal work, practical suggestions for managing the trickier areas of assessment, and ideas for supporting students in their activities. I will ask participants to validate these outputs.

Would you like to get involved? Contact Mira Vogel.

References

Adsanatham, C. 2012. Integrating Assessment and Instruction: Using Student-Generated Grading Criteria to Evaluate Multimodal Digital Projects. Computers and Composition 29(2): 152–174.

Anderson, D., Atkins, A., Ball, C., et al. 2006. Integrating Multimodality into Composition Curricula: Survey Methodology and Results from a CCCC Research Grant. Composition Studies 34(2). http://www.uc.edu/journals/composition-studies/issues/archives/fall2006-34-2.html.

DePalma, M.J., and Alexander, K.P. 2015. A Bag Full of Snakes: Negotiating the Challenges of Multimodal Composition. Computers and Composition 37: 182–200.

Kress, G. and Selander, S. 2012. Multimodal Design, Learning and Cultures of Recognition. The Internet and Higher Education 15(4): 265–268.

Vogel, M., Kador, T., Smith, F., Potter, J. 2016. Considering new media in scholarly assessment. UCL Teaching and Learning Conference. 19 April 2016. Institute of Education, UCL, London, UK. https://www.ucl.ac.uk/teaching-learning/events/conference/2016/UCLTL2016Abstracts; https://goo.gl/nqygUH