Digital Education team blog
  • We support Staff and Students using technology to enhance education at UCL.

    Here you'll find updates on institutional developments, projects we're involved in, updates on educational technology, events, case studies and personal experiences (or views!).


    Archive for the 'Mira’s Mire' Category

    Fake news and Wikidata

    By Mira Vogel, on 20 February 2017

    James Martin Charlton, Head of the Media Department at Middlesex University and co-host of today’s Wikimedia Education Summit, framed Wikimedia as a defence against the fake news currently spread and popularised by dominant search engine algorithms. Fake news undermines knowledge as power and renders societies easily manipulable. This is one reason several programme leaders I work with – one of whom was at the event – have expressed interest in incorporating Wikimedia into their curricula. (Wikimedia is the collection of projects of which Wikipedia is the best known, but which also includes Wikivoyage, Wikisource and Wikimedia Commons).

    Broadly there are two aspects to Wikimedia in education. One is the content – for example, the articles in Wikipedia, the media in Wikimedia Commons, the source texts in Wikisource. All of this content is openly licensed: free to use in our projects and open to correction and improvement by anyone. The other aspect is process. Contributing to Wikimedia can qualify as higher education when students are tasked with, say, digesting complex or technical information for a non-expert Wikipedia readership, or negotiating changes to an article which has an existing community of editors, or contributing an audio recording which they later use in a project they publish under an open licence. More recently, Wikidata has emerged as a major presence on the linked and open data scene. I want to focus on Wikidata because it seems very promising as an approach to engaging students with the structured data which is increasingly shaping our world.

    Wikidata is conceived as the central data storage for the aforementioned Wikimedia projects. Unlike Wikipedia, Wikidata can be read by machines as well as humans, which means it can be queried. So if you – as we did today – wish to see at a glance the notable alumni of a given university, you can. Today we gave a little back to our hosts by contributing an ‘Educated at’ value to a number of alumni whose Wikidata entries lacked it. This enabled those people to be picked up by a Wikidata query and visualised. But institutions tend to merge or change their names, so I added a ‘Followed by’ attribute to the Wikidata entry for Hornsey College of Art (which merged into Middlesex Polytechnic), allowing the query to be refined to include Hornsey alumni too. I also visualised UCL’s notable alumni as a timeline (crowded – zoom out!) and a map. The timeline platform is called Histropedia and is the work of Navino Evans. It is available to all and – thinking public engagement – is reputedly a very good way to visualise research data without needing to hire somebody in.
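    To make this concrete, here is a minimal sketch – my own, not the query we ran at the event – of how such a Wikidata query can be sent to the public query service from Python. It lists people recorded with an ‘Educated at’ (P69) value for a given institution; the Q-identifier is assumed to be UCL’s and should be checked on wikidata.org before use.

```python
# Minimal sketch (assumptions flagged): list people whose Wikidata entry
# has an 'Educated at' (P69) value for a given institution.
# Q193196 is assumed to be UCL's item - verify on wikidata.org.
import requests

ENDPOINT = "https://query.wikidata.org/sparql"

QUERY = """
SELECT ?person ?personLabel WHERE {
  ?person wdt:P69 wd:Q193196 .    # educated at <institution>
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 100
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "wikidata-alumni-sketch/0.1 (example)"},
)
response.raise_for_status()

for row in response.json()["results"]["bindings"]:
    print(row["personLabel"]["value"])
```

    The same JSON results could then be fed into a timeline or map, which is essentially what Histropedia does for you.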

    So far so good. But is it correct? I dare say it’s at least slightly incorrect, and more than slightly incomplete. Yes, I’d have to mend it, or get it mended, at source. But that state of affairs is pretty normal, as anyone involved in learning analytics understands. And can’t Wikidata be sabotaged? Yes – and because the data is linked, any sabotage would have potentially far-reaching effects – so there will need to be defences such as limiting the ability to make mass edits, or to edit entries which are both disputed and ‘hot’. But the point is, if I can grasp the SPARQL query language (which is said to be pretty straightforward and, being related to SQL, a transferable skill) then – without an intermediary – I can generate information which I can check, and triangulate against other information to reach a judgement. How does this play out in practice? Here’s Oxford University Wikimedian in Residence Martin Poulter with an account of how he queried Wikidata’s biographical data about UK MPs and US Senators to find out – and, importantly, visualise – where they were educated, and what occupations they’ve had (153 cricketers!?).
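    In the same spirit – though these are not Martin Poulter’s actual queries – a small variation on the sketch above aggregates rather than lists: it counts the recorded occupations (P106) of people educated at a given (again illustrative) institution, which is the kind of query that surfaces figures like the 153 cricketers.

```python
# Hedged sketch of an aggregate query: count occupations (P106) among
# people with an 'Educated at' (P69) value. The Q-identifier is illustrative.
import requests

QUERY = """
SELECT ?occupationLabel (COUNT(DISTINCT ?person) AS ?people) WHERE {
  ?person wdt:P69 wd:Q193196 ;      # educated at <institution>
          wdt:P106 ?occupation .    # occupation
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
GROUP BY ?occupation ?occupationLabel
ORDER BY DESC(?people)
LIMIT 20
"""

response = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "wikidata-occupations-sketch/0.1 (example)"},
)
response.raise_for_status()

for row in response.json()["results"]["bindings"]:
    print(row["occupationLabel"]["value"], row["people"]["value"])
```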

    So, what if I want to master the SPARQL query language myself? Thanks to Ewan McAndrew, Wikimedian in Residence at the University of Edinburgh, there’s a SPARQL query video featuring Navino Evans on Edinburgh’s Wikimedian in Residence media channel.

    Which brings me to the beginning, when Melissa Highton set out the benefits Wikimedians have brought to Edinburgh University, where she is Assistant Principal. These benefits include building digital capabilities, public engagement for researchers, and narrowing the gender gap in Wikimedia representation, which demonstrates to Athena SWAN assessors that the institution is addressing structural barriers to women contributing in science and technology. Here’s Melissa’s talk in full. Bodleian Library Web and Digital Media Manager Liz McCarthy made a similarly strong case – they have had to stop advertising their Wikimedian in Residence’s services because so many Oxford University researchers have woken up to Wikimedia’s public engagement potential.

    We also heard from Wikimedians with educational ideas: tutor Stefan Lutschinger spoke on designing Wikimedia assignments, and students presented their work in his Publishing Cultures module – and there were parallel sessions. You can follow the Wikimedia Education Summit tweets at .

    Comparing Moodle Assignment and Turnitin for assessment criteria and feedback

    By Mira Vogel, on 8 November 2016

    Elodie Douarin (Lecturer in Economics, UCL School of Slavonic and Eastern European Studies) and I have been comparing how assessment criteria can be presented to engage a large cohort of students with feedback in Moodle Assignment and Turnitin Assignment (report now available). We took a mixed-methods approach using a questionnaire, a focus group, and screencasts recorded as students accessed their feedback and responded to our question prompts. Here are some of our key findings.

    Spoiler – we didn’t get a clear steer over which technology is (currently) better – they have different advantages. Students said Moodle seemed “better-made” (which I take to relate to theming issues rather than software architecture ones) while the tutor appreciated the expanded range of feedback available in Moodle 3.1.

    Assessment criteria

    • Students need an opportunity to discuss, and ideally practise with, the criteria in advance, so that they and the assessors can reach a shared view of the standards by which their work will be assessed.
    • Students need to know that criteria exist and be supported to use them. Moodle Assignment is good for making rubrics salient, whereas Turnitin requires students to know to click an icon.
    • Students need support to benchmark their own work against the criteria. Moodle or Turnitin rubrics allow assessors to indicate which levels students have achieved. Moreover, Moodle allows a summary comment for each criterion.
    • Since students doubt that assessors refer to the criteria during marking, it is important to make the educational case for criteria (i.e. beyond grading) as a way of reaching a shared understanding about standards, for giving and receiving feedback, and for self/peer assessment.

    Feedback

    • The feedback comments most valued by students explain the issue, make links with the assessment criteria, and include advice about what students should do next.
    • Feedback given digitally is legible and easily accessible from any web-connected device.
    • Every mode of feedback should be conspicuously communicated to students, along with suggestions on how to cross-reference the different modes. Some thought should be given to ways to facilitate access to, and interpretation of, all the elements of feedback provided.
    • Students need to know that digital feedback exists and how to access it. A slideshow of screenshots would allow tutors to hide and unhide slides depending on which feedback aspects they are using.

    Effort

    • The more feedback is dispersed between different modes, the more effortful it is for students to relate it to their own work and thinking. Where more than one mode is used, there is a need to distinguish between the purpose and content of each kind of feedback, signpost their relationships, and communicate this to students. Turnitin offers some support for cross-referencing between bubble comments and criteria.
    • It would be possible to ask students to indicate on their work which mode (out of a choice of possibilities) they would like assessors to use.
    • The submission of formative assessment produced with minimal effort may impose a disproportionate burden on markers, who are likely to be commenting on mistakes that students could easily have corrected themselves. Shorter formative assessments, group work, and clearer statements of the benefits of submitting formative work may all help to limit the incidence of low-effort submissions.
    • If individual summary comments have a lot in common, consider releasing them as general feedback for the cohort, spending the saved time on more student-specific comments instead. However, this needs to be signposted clearly to help students cross-reference with their individual feedback.
    • As a group, teaching teams can organise a hands-on session with Digital Education to explore Moodle Assignment and Turnitin from the perspectives of students, markers and administrators. This exposure will help immeasurably with designing efficient, considerate processes and workflows.
    • The kind of ‘community work’ referred to by Bloxham and colleagues (2015) would be an opportunity to reach shared understandings of the roles of students and markers with respect to criteria and feedback, which would in turn help to build confidence in the assessment process.

     

    Bloxham, S., den-Outer, B., Hudson, J., Price, M., 2015. Let’s stop the pretence of consistent marking: exploring the multiple limitations of assessment criteria. Assessment & Evaluation in Higher Education 1–16. doi:10.1080/02602938.2015.1024607

     

    Authentic multimodal assessments

    By Mira Vogel, on 7 October 2016

    Cross-posted to the Connected Curriculum Fellows blog.

    My Connected Curriculum Fellowship project explores current practice with Connected Curriculum dimension 5 – ‘Students learn to produce outputs – assessments directed at an audience’. My emphasis is on assessing students’ digital (including digitised) multimodal outputs for an audience. What does ‘multimodal’ mean? Modes can be thought of as styles of communication – register and voice, for example – while media can be thought of as its fabric. In practice, though, the line between the two is quite blurry (Kress, 2012). This work will look at multimodal assessment from the following angles.

    What kinds of digital multimodal outputs are students producing at UCL, and using which media? The theoretical specificity of verbal media, such as the essay or the talk, explains their dominance in academia. Some multimodal forms, such as documentaries, are recognised as (potentially) academic, while others are straightforwardly authentic, such as curation students producing online exhibitions. At the margins are works which bring dilemmas about academic validity, such as fan fiction submitted for the From Codex To Kindle module, or the Internet Cultures student who blogged as a dog.

    How are students supported to conceptualise their audiences? DePalma and Alexander (2015) observe that students who are used to writing for one or two academic markers may struggle with the complex notions of audience called for by an expanded range of rhetorical resources. The 2016 Making History convenor has pointed out that students admitted to UCL on the strength of their essays may find the transition to multimodal assessment unsettling and question its validity. I hope to explore tutor and student perspectives here, with a focus on how the tasks are introduced to students. I will maintain awareness of the Liberating the Curriculum emphasis on diverse audiences. I will also explore matters of consent and intellectual property, and ask what happens to the outputs once the assessment is complete.

    What approaches are taken to assessing multimodal work? A 2006 survey (Anderson et al.) reported several assessment challenges for markers, including separating rhetorical from aesthetic effects, the diversity of skills, technologies and interpretations involved, and balancing credit between effort and quality where the output may be unpolished. Adsanatham (2012) describes how his students generated more complex criteria than he could have alone, helping “enrich our ever-evolving understanding and learning of technology and literacies”. DePalma and Alexander (2015) discuss written commentaries or reflective pieces as companions to students’ multimodal submissions. Finding out about the practices of staff and students across UCL promises to illuminate possibilities, questions, contrasts and dilemmas.

    I plan to identify participants by drawing on my and colleagues’ networks, the Teaching and Learning Portal, and calls via appropriate channels. Building on previous work, I hope to collect screen-capture recordings, based on question prompts, in which students explain their work and tutors explain how they marked it. These kinds of recordings provide very rich data but, anticipating difficulties obtaining consent to publish these, I also plan to transcribe and analyse them using NVivo to produce a written report. I aim to produce a collection of examples of multimodal work, practical suggestions for managing the trickier areas of assessment, and ideas for supporting students in their activities. I will ask participants to validate these outputs.

    Would you like to get involved? Contact Mira Vogel.

    References

    Adsanatham, C. 2012. Integrating Assessment and Instruction: Using Student-Generated Grading Criteria to Evaluate Multimodal Digital Projects. Computers and Composition 29(2): 152–174.

    Anderson, D., Atkins, A., Ball, C., et al. 2006. Integrating Multimodality into Composition Curricula: Survey Methodology and Results from a CCCC Research Grant. Composition Studies 34(2). http://www.uc.edu/journals/composition-studies/issues/archives/fall2006-34-2.html.

    DePalma, M.J., and Alexander, K.P. 2015. A Bag Full of Snakes: Negotiating the Challenges of Multimodal Composition. Computers and Composition 37: 182–200.

    Kress, G. and Selander, S. 2012. Multimodal Design, Learning and Cultures of Recognition. The Internet and Higher Education 15(4): 265–268.

    Vogel, M., Kador, T., Smith, F., Potter, J. 2016. Considering new media in scholarly assessment. UCL Teaching and Learning Conference. 19 April 2016. Institute of Education, UCL, London, UK. https://www.ucl.ac.uk/teaching-learning/events/conference/2016/UCLTL2016Abstracts; https://goo.gl/nqygUH

    Wikipedia Course Leaders’ event

    By Mira Vogel, on 15 August 2016

    Wikimedia UK held a Wikipedia Course Leaders event on the afternoon of July 19th. The meeting brought together academics who use Wikipedia in their modules, Wikipedians in Residence, and other Wikipedia and higher education enthusiasts (like me) to share practice and think about some of the challenges of assessing work in an environment which is very much alive and out in the world.

    As you can imagine, we were all in agreement about the potential of Wikipedia in our respective disciplines, which included Applied Human Geography, Psychology, Law, World Christianity, and Research Methods for Film. As you can see from the notes we took, we discussed colleagues’ and students’ reservations, tensions and intersections between Wikimedia and institutional agendas, relationships between students and other Wikipedians, assessment which is fair and well-supported, and Wikipedia tools for keeping track of students. There are plenty of ideas, solutions, and examples of good and interesting practice. There is a new and developing Wikimedia page for UK universities.

    If you are interested in using Wikipedia to give your students the experience of public writing on the Web and contributing within a global community of interest, there is plenty of support.

    ELESIG London 3rd Meeting – Evaluation By Numbers

    By Mira Vogel, on 13 July 2016

    The third ELESIG London event, ‘Evaluation By Numbers’, was a two-hour event on July 7th. Building on the successful format of our last meeting, we invited two presenters on the theme of ‘proto-analytics’ – an important aspect of institutional readiness for learning analytics which empowers individuals to work with their own log data and come up with theories about what to do next. There were 15 participants with a range of experiences and interests, including artificial intelligence, ethics, statistics and data visualisation, and a range of priorities including academic research, academic development, data security and real-time data analysis.
    After a convivial round of introductions there was a talk from Michele Milner, Head of the Centre for Excellence in Teaching and Learning at the University of East London, titled Empowering Staff And Students. Determined to avoid purely data-driven decision-making, UEL began with investigations which confirmed that most staff were unenthusiastic about, and wary of, working with log data. This is normal in the sector and probably attributable to a combination of inexperience and overwork. The UEL project had different strands. One was attendance monitoring, feeding into a student engagement metric with more predictive power, based on correlations between engagement (operationalised as, for example, library and VLE access and data from the tablets students are issued) and achievement. This in turn feeds a student retention app, along with demographic weightings. Turnitin and Panopto (lecture capture) data have so far been elusive, but UEL is persisting on the basis that these gross measures do correlate with achievement.
    The project gave academic departments a way to visualise retention as an overall red-amber-green rating, and to simulate the expected effects of different interventions. The feedback they received from academics was broadly positive but short of enthused, with good questions about cut-off dates, workload allocation, and the nature and timing of interventions. Focus groups with students revealed low awareness of data collection, that students weren’t particularly keen to see the data, and that if presented with it they would prefer bar charts by date rather than comparisons with other students. We discussed the ethics of data collection, including the possibility of students opting in to, or out of, opening up their anonymised data set.
    Our next speaker was Andreas Konstantinidis from King’s College London, on Utilising Moodle Logs (slides). He attributes the low number of educators currently working with VLE data to the limitations of the logs. In Moodle’s case this is particularly to do with limited filtering, and the exclusion of some potentially important data, including Book pages and links within Labels. To address this, he and his colleague Cat Grafton worked on some macros to allow individual academics to import and visualise logs downloaded from their VLE (KEATS) in an MS Excel spreadsheet.
    To dodge death by data avarice they first had to consider which data to include, deciding on the following. Mean session length does not firmly correspond to anything, but the fluctuations are interesting. A high bounce rate indicates students are having difficulty finding what they need. Time of use, combining two or more filters, can inform plans about when to schedule events or release materials. You can also see the top and bottom 10 students engaging with Moodle, and the most and least used resources – this data can be an ice-breaker for discussing reasons and support. IP addresses may reveal where students are gathering, e.g. a certain IT room, which in turn may inform decisions about where to reach students.
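    As a rough illustration of the kinds of metric Andreas described – a sketch only, not the KEATS Analytics workbook itself – the following Python works from a Moodle log export saved as CSV. The column names (‘Time’, ‘User full name’) are assumed from a standard Moodle 2.x download, and the 30-minute session gap is a common heuristic rather than anything Moodle defines.

```python
# Sketch: a few of the metrics described above, computed from a Moodle
# log export saved as CSV. Column names are assumed and may need adjusting.
import pandas as pd

logs = pd.read_csv("moodle_logs.csv")
logs["Time"] = pd.to_datetime(logs["Time"], dayfirst=True)

# Time of use: how many logged events fall in each hour of the day.
events_by_hour = logs["Time"].dt.hour.value_counts().sort_index()
print(events_by_hour)

# Top and bottom 10 students by overall activity.
activity = logs["User full name"].value_counts()
print(activity.head(10))
print(activity.tail(10))

# Mean session length per user, treating a gap of 30+ minutes of
# inactivity as the start of a new session (a heuristic, not a Moodle rule).
def mean_session_minutes(times, gap_minutes=30):
    times = times.sort_values()
    new_session = times.diff() > pd.Timedelta(minutes=gap_minutes)
    lengths = times.groupby(new_session.cumsum()).agg(lambda t: t.max() - t.min())
    return lengths.mean().total_seconds() / 60

print(logs.groupby("User full name")["Time"].apply(mean_session_minutes))
```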
    King’s have made KEATS Analytics available to all (includes a workbook), and you can download it from http://tinyurl.com/ELESIG-LA. It currently supports Moodle 2.6 and 2.8, with 3.x coming soon. At UCL we’re on 2.8 only for the next few weeks, so if you want to work with KEATS Analytics here’s some guidance for downloading your logs now.
    As Michele (quoting Eliot) said, “Hell is a place where nothing connects with nothing”. Although it is not always fit to use immediately, data abounds – so what we’re looking for now are good pedagogical questions which data can help to answer. I’ve found Anna Lea Dyckhoff’s meta-analysis of tutors’ action research questions helpful. To empower individuals and build data capabilities in an era of potentially data-driven decision-making, a good start might be to address these questions in short worksheets which take colleagues who aren’t statisticians through statistical analysis of their data. If you are good with data and its role in educational decision-making, please get in touch.
    A participant pointed us to a series of podcasts from Jisc around the ethical and legal issues of learning analytics. Richard Treves has a write-up of the event and my co-organiser Leo Havemann has collected the tweets. For a report on the current state of play with learning analytics, see Sclater and colleagues’ April 2016 review of UK and International Practice. Sam Ahern mentioned there are still places on a 28th July data visualisation workshop being run by the Software Sustainability Institute.
    To receive communications from us, including details of our next ELESIG London meeting, please sign up to the ELESIG London group on Ning. It’s free and open to all with an interest in educational evaluation.

    KEATS Analytics screenshot

     

    An even better peer feedback experience with the Moodle Workshop activity

    By Mira Vogel, on 21 December 2015

    This is the third and final post in a series about using the Moodle Workshop activity for peer feedback, in which I’ll briefly summarise how we acted on recommendations from the second iteration which in turn built on feedback from the first go. The purpose is to interpret pedagogical considerations as Moodle activity settings.
    To refresh your memories, the setting is the UCL Arena Teaching Association Programme in which postgraduate students, divided into three cognate cohorts, give and receive peer feedback on case studies they are preparing for their Higher Education Academy Associate Fellowship application. Since the activity was peer feedback only, we weren’t exploiting the numeric grades, tutor grades, or grade weighting capabilities of Moodle Workshop on this occasion.
    At the point we last reported on Moodle Workshop there were a number of recommendations. Below I revisit those and summarise the actions we took and their consequences.

    Improve signposting from the Moodle course area front page, and maybe the title of the Workshop itself, so students know what to do and when.

    We changed the title to a friendly imperative: “Write a mini case study, give peer feedback”. That is how the link to it now appears on the Moodle page.

    Instructions: let students know how many reviews they are expected to do; let them know if they should expect variety in how the submissions display.

    Noting that participants may need to click or scroll for important information, we used the instructions fields for submissions and for assessment to set out what they should expect to see and do, and how. The instructions for Submission included the word count, how to submit, and that their names would appear with their submission. The instructions for Assessment then covered how to find their allocation, a rough word count for the feedback, and that peer markers’ names would appear with their feedback (see below for more on anonymity). The Conclusion explained how to find both the original submission and the feedback on it.
    In the second iteration some submissions had been attachments while others had been typed directly into Moodle. This time we set attachments to zero, instead requiring all participants to paste their case studies directly into Moodle. We hoped that the resulting display of each submission and its assessment on the same page would help with finding the submission and with cross-referencing. Later it emerged that there were mixed feelings about this: one participant reported difficulties with footnotes and another said he would have preferred a separate document so he could arrange the windows in relation to each other, rather than scrolling. In future we may allow attachments, and include a line in the instructions prompting participants to look for an attachment if they can’t see the submission directly in Moodle.
    Since the participants were entirely new to the activity, we knew we would need to give more frequent prompts and guidance than if they were familiar with it. Over the two weeks we sent out four News Forum posts in total, at fixed times in relation to the two deadlines. The first launched the activity, let participants know where to find it, and reminded them about the submission deadline; the second, a couple of days before the submission deadline, explained that the deadline was hard and let them know how and when to find the work they had been allocated to give feedback on; the third reminded them of the assessment deadline; the fourth let them know where and when to find the feedback they had been given. When asked whether these emails had been helpful or a nuisance, the resounding response was that they had been useful. Again, if students had been familiar with the process, we would have expected to take a much lighter touch with the encouragement and reminders, but first times are usually more effort.

    Consider including an example case study & feedback for reference.

    We linked to one rather than including it within the activity (which is possible) but some participants missed the link. There is a good case for including it within the activity (with or without the feedback). Since this is a low-stakes, voluntary activity, we would not oblige participants to carry out a practice assessment.

    Address the issue that, due to some non-participation during the Assessment phase, some students gave more feedback than they received.

    In our reminder News Forum emails we explicitly reminded students of their role in making sure every participant received feedback. In one cohort this had a very positive effect with participants who didn’t make the deadline (which is hard for reasons mentioned elsewhere) using email to give feedback on their allocated work. We know that, especially with non-compulsory activities and especially if there is a long time between submitting, giving feedback and receiving feedback, students will need email prompts to remind them what to do and when.

    We originally had a single comments field but will now structure the peer review with some questions aligned to the relevant parts of the criteria.

    Feedback givers had three question prompts to which they responded in free text fields.

    Decide about anonymity – should both submissions and reviews be anonymous, or one or the other, or neither? Also to consider – we could also change Permissions after it’s complete (or even while it’s running) to allow students to access the dashboard and see all the case studies and all the feedback.

    We decided to even things out by making both the submissions and reviews attributable, achieving this by changing the permissions for that Moodle Workshop activity before it ran. We used the instructions for submissions and assessment to flag this to participants.
    A lead tutor for one of the cohorts had been avoiding using Moodle Workshop because she felt it was too private an exchange between a participant and their few reviewees. We addressed this after the closure of the activity by proposing to participants that we release all case studies and their feedback to everyone in the cohort (again by changing the permissions for that Moodle Workshop activity). We gave them a chance to raise objections in private, but after receiving none we went ahead with the release. We have not yet checked the logs to see whether this access has been taken up.

    Other considerations.

    Previously we evaluated the peer feedback activity with a questionnaire, but this time we didn’t have the opportunity for that. We did however have the opportunity to discuss the experience with one of the groups. This dialogue affirmed the decisions we’d taken. Participants were positive about repeating the activity, so we duly ran it again after the next session. They also said that they preferred to receive feedback from peers in their cognate cohort, so we maintained the existing Moodle Groupings (Moodle Groups would also work if the cohorts had the same deadline date, but ours didn’t, which is why we had three separate Moodle Workshop instances with Groupings applied).
    The staff valued the activity but felt that without support from ELE they would have struggled to make it work. ELE is responding by writing some contextual guidance for that particular activity, including a reassuring checklist.