Digital Education team blog
  • We support Staff and Students using technology to enhance education at UCL.

Here you'll find updates on institutional developments, projects we're involved in, educational technology news, events, case studies and personal experiences (or views!).


    Archive for the 'Evaluation' Category

    Comparing Moodle Assignment and Turnitin for assessment criteria and feedback

    By Mira Vogel, on 8 November 2016

Elodie Douarin (Lecturer in Economics, UCL School of Slavonic and Eastern European Studies) and I have been comparing how assessment criteria can be presented to engage a large cohort of students with feedback in Moodle Assignment and Turnitin Assignment (report now available). We took a mixed methods approach using a questionnaire, a focus group and student screencasts as they accessed their feedback and responded to our question prompts. Here are some of our key findings.

    Spoiler – we didn’t get a clear steer over which technology is (currently) better – they have different advantages. Students said Moodle seemed “better-made” (which I take to relate to theming issues rather than software architecture ones) while the tutor appreciated the expanded range of feedback available in Moodle 3.1.

    Assessment criteria

• Students need an opportunity to discuss, and ideally practise with, the criteria in advance, so that they and the assessors can reach a shared view of the standards by which their work will be assessed.
    • Students need to know that criteria exist and be supported to use them. Moodle Assignment is good for making rubrics salient, whereas Turnitin requires students to know to click an icon.
• Students need support to benchmark their own work against the criteria. Moodle or Turnitin rubrics allow assessors to indicate which levels students have achieved. Moreover, Moodle allows a summary comment for each criterion.
    • Since students doubt that assessors refer to the criteria during marking, it is important to make the educational case for criteria (i.e. beyond grading) as a way of reaching a shared understanding about standards, for giving and receiving feedback, and for self/peer assessment.

    Feedback

    • The feedback comments most valued by students explain the issue, make links with the assessment criteria, and include advice about what students should do next.
• Feedback given digitally is legible and easily accessible from any web-connected device.
• Every mode of feedback should be conspicuously communicated to students, along with suggestions on how to cross-reference these different modes. Some thought should be given to ways to facilitate access to, and interpretation of, all the elements of feedback provided.
    • Students need to know that digital feedback exists and how to access it. A slideshow of screenshots would allow tutors to hide and unhide slides depending on which feedback aspects they are using.

    Effort

    • The more feedback is dispersed between different modes, the more effortful it is for students to relate it to their own work and thinking. Where more than one mode is used, there is a need to distinguish between the purpose and content of each kind of feedback, signpost their relationships, and communicate this to students. Turnitin offers some support for cross referencing between bubble comments and criteria.
    • It would be possible to ask students to indicate on their work which mode (out of a choice of possibilities) they would like assessors to use.
• The submission of formative assessment produced with minimal effort may impose a disproportionate burden on markers, who are likely to be commenting on mistakes that students could easily have corrected themselves. Shorter formative assessments, group work and clearer statements of the benefits of submitting formative work may all help to limit the incidence of low-effort submissions.
    • If individual summary comments have a lot in common, consider releasing them as general feedback for the cohort, spending the saved time on more student-specific comments instead. However, this needs to be signposted clearly to help students cross-reference with their individual feedback.
    • As a group, teaching teams can organise a hands-on session with Digital Education to explore Moodle Assignment and Turnitin from the perspectives of students, markers and administrators. This exposure will help immeasurably with designing efficient, considerate processes and workflows.
    • The kind of ‘community work’ referred to by Bloxham and colleagues (2015) would be an opportunity to reach shared understandings of the roles of students and markers with respect to criteria and feedback, which would in turn help to build confidence in the assessment process.

     

    Bloxham, S., den-Outer, B., Hudson, J., Price, M., 2015. Let’s stop the pretence of consistent marking: exploring the multiple limitations of assessment criteria. Assessment & Evaluation in Higher Education 1–16. doi:10.1080/02602938.2015.1024607

     

    What IT Directors care about

    By Fiona Strawbridge, on 30 October 2016

I heard about the Campus Computing survey for the first time at Educause 2016 – but this survey has been around since 1990, before, I suspect, the term e-learning had even been coined. It is a survey of CIOs’ (IT Directors’) perspectives on e-learning, amongst other things, and I was intrigued to find out what they thought, so I went to hear about it from Casey (Kenneth) Green, the Founding Director of CampusComputing.net. I haven’t managed to find the actual survey report, so what follows is a bit patchy, but in essence, CIOs have ‘great faith in the benefits of e-learning’, but learning analytics keeps them up at night.

    Their top five priorities are:

    1. hiring and retaining skilled staff;
    2. assisting academics with e-learning;
    3. the network and data security;
    4. providing adequate user support;
    5. leveraging IT resources to support student success.

    The trouble with learning analytics:

    CIOs are consistently bothered about their institutions’ ability to deliver learning analytics capabilities and cited concerns with:

    • the infrastructure to deliver them;
    • effectiveness of investment to date;
• sense of satisfaction with what has been delivered.

    There was a general sense that their ‘reach exceeded their grasp’ in this area.

    What we do vs what we buy:

An interesting observation was that CIOs rated services and facilities that are bought in or outsourced more highly than those developed in house – ‘what we buy works better than what we do’ – which is perhaps unsurprising, but rather depressing. The service that CIOs were happiest with was wifi!

    If I manage to get a link to the report or presentation I will link to it here.

    What (American) Students Want

    By Fiona Strawbridge, on 30 October 2016

[Infographic: ECAR student survey – https://library.educause.edu/resources/2016/6/~/media/files/library/2016/10/eig1605.pdf]

    One motivation for enduring the jet lag and culture shock of a long haul conference is the chance to find out what the big issues are in a different HE environment; Educause is a very good opportunity to do that as it reports on a number of surveys in the world’s largest higher education sector.

    So, at this year’s Educause in LA, I went to sessions reporting the results of two very different surveys. One – the ECAR (Educause Center for Analysis & Research) Student Survey – asks students themselves about their attitudes to, experiences of and preferences for using technology in HE – a bit like a tech-focused NSS. The second – CampusComputing.net – surveys IT Directors’ views on e-learning; this seemed, to me, to be a rather odd perspective (why ask CIOs and not heads of e-learning who are closer to the area?).  This post looks at the ECAR student view. To find out what the directors want I’ve written a more sketchy post…

The student survey was completed by a staggering 71,641 students from 183 institutions in 37 states and from 12 countries. The survey is a good benchmarking tool for participating institutions – they are able to compare their results against those from other institutions. Christopher Brooks and Jeffrey Pomerantz from Educause presented a whistle-stop tour of the main findings. The full report is at the survey hub, and the infographic shown on the right is a nice summary. There weren’t too many surprises; in a nutshell, students own a lot of devices, and they view them as very important for their learning.

    Their devices

In terms of devices, 93% own laptops and a further 3% plan to purchase one, and almost all say they are very or extremely important for their studies. 96% own smartphones. Tablet ownership is much lower at 57%, and students rated them as less important to their studies than their smartphones. 61% of students have two or three devices, and 33% own four or more. Challenging for wifi, as we know…

    Techiness

    ECAR looked at techiness (sic) as measured by students’

1. disposition to technology (sceptic vs cheerleader, technophobe vs technophile, etc.);
2. attitude (distraction vs enhancement, discontented vs contented, etc.); and
3. actual usage of technology (peripheral vs central, never vs always connected, etc.).

    Since 2014 all three measures have increased – so students are more techie now, and men are more techie than women. As I said, no great surprises.

    Students’ experiences of technology

We were told that there was good news about students’ experiences of technology – 80% rated their overall technology experience as good or excellent. Now, it strikes me that if our score for question 17 in the National Student Survey, which asks about technology, had been this low (we score 87%) we’d be very seriously concerned – but of course the questions are different, so a direct comparison isn’t valid. A good question, though, is what is actually meant by “students’ experience of technology”. We were told that the main determinants were wifi in halls of residence and on campus, ease of login, having skilled academics and students’ own attitudes to technology; it also helped if the technology used in class was perceived as relevant to their career.

    Technology in teaching

Around 69% of students said that their teachers had adequate technical skills. More than half reported that technology was being used to share materials (61%) and collaborate (57%). There was less use of technology to encourage critical thinking (49%), and only 34% of students said they were encouraged to use their own technology in the classroom.

82% of students reported preferring a blended learning environment over a fully online or fully offline one. Since 2013, the percentage of students who don’t want any online education has halved from around 22% to 11%. The number wanting a fully online experience has dropped slightly, but the number wanting a ‘nearly fully online’ experience has increased; the number wanting a more traditionally blended approach is stable at around 60%. Those who have previous experience of fully online courses are more likely to want a more fully online experience, and women were more likely than men to want to learn online – it was suggested that this was due to a reluctance to speak up in a face-to-face environment.

    Students found technology helped them with engagement with academics, with one another, and with content. There were some other interesting demographic effects. Women, first generation students, and non-white students were more likely to say that technology had a positive impact on the efficacy of their learning – it empowered them; it was helpful for communication, for helping them with basic terminology, and for getting swift feedback from others. It was found to enrich the learning experience in many ways.

    And finally, students want more:

    • Lecture capture – this mirrors experience at UCL
    • Free, supplemental online content
• Search tools to find references – this has digital literacy implications, as such tools already exist, so perhaps students are simply unaware of them.

    But, I guess, not more engaging or challenging online learning experiences. Ah well…

    Introducing the new E-Learning Baseline

    By Jessica Gramp, on 7 June 2016

The UCL E-Learning Baseline is now available as a printable colour booklet. This can be downloaded from the UCL E-Learning Baseline wiki page: http://bit.ly/UCLELearningBaseline

    The 2016 version is a product of merging the UCL Moodle Baseline with the Student Minimum Entitlement to On-Line Support from the Institute of Education.

    The Digital Education Advisory team will be distributing printed copies to E-Learning Champions and Teaching Administrators for use in departments.

    Please could you also distribute this to your own networks to help us communicate the new guidelines to all staff.

    Support is available to help staff apply this to their Moodle course templates via digi-ed@ucl.ac.uk.

We are also working on a number of ideas to help people understand the Baseline (via a myth-busting quiz) and a way for people to show their courses are Baseline (or Baseline+) compliant by way of a colleague-endorsed badge.

See ‘What’s new?’ for a quick overview of what has changed since the 2013 Baseline.

     

    Reflections before UCL’s first Mooc

    By Matt Jenner, on 26 February 2016

Why We Post: Anthropology of Social Media

UCL’s first Mooc – Why We Post: The Anthropology of Social Media – launches on Monday on FutureLearn. It’s not actually our first Mooc – it’s not even one Mooc, it’s nine! Eight other versions are simultaneously launching on UCLeXtend in the following languages: Chinese, English, Italian, Hindi, Portuguese, Spanish, Tamil and Turkish. If that’s not enough, we seem to have quite a few under the banner of UCL:

(quite a few of these deserve the title of ‘first’ – but who’s counting…)

Extended Learning Landscape – UCL 2015

UCL is unusual in some of these respects – we have multiple platforms which form part of our Extended Learning Landscape. This maps out areas of activity such as CPD, short courses, Moocs, Public Engagement, Summer Schools (and many more) and tries to understand how we can utilise digital education / e-learning with these (and what happens when we do).

     

    Justification for Moocs

We’ve not yet launched our first Mooc (apparently), but we already need to develop a mid-term plan so we can do more. Can we justify the ones we’ve done so far? Well, a strong evaluation will certainly help, but we also need an answer to the most pertinent pending question:

    How much did all this cost and was it worth it? 

It’s a really good question, one we started asking a while ago, and still the answer feels no better than educated guesswork. Internally we’re working on merging a Costing and Pricing tool (not published, sorry) and the IoE / UCL Knowledge Lab Course Resource Appraisal Modeller (CRAM) tool. The goal is to have a tool which takes the design of a Mooc and outputs a realistic cost. It’s pretty close already – but we need to feed in some localisations from our internal Costing and Pricing tool, such as estates costs, staff wages, Full Economic Costings, digital infrastructure, support and so on. The real cost of all this is important. But the value? Well…
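As a purely illustrative sketch of the kind of calculation such a tool would automate – this is not the Costing and Pricing tool or CRAM, and every rate and figure below is invented – a Mooc cost estimate might simply combine per-activity staff time, media spend and an overhead uplift:

```python
# Illustrative only: a toy Mooc cost estimate, not the UCL Costing and Pricing
# tool or the CRAM tool. All rates, multipliers and hours are invented.

STAFF_DAY_RATE = 350        # assumed loaded cost per staff day (GBP)
OVERHEAD_MULTIPLIER = 1.25  # assumed uplift for estates, infrastructure and support

def estimate_mooc_cost(staff_days_by_activity, media_production_cost):
    """Combine staff time, media spend and an overhead uplift into a single figure."""
    staff_cost = sum(staff_days_by_activity.values()) * STAFF_DAY_RATE
    return (staff_cost + media_production_cost) * OVERHEAD_MULTIPLIER

# Example: a hypothetical four-week course design
design = {"planning": 10, "writing_steps": 25, "filming_support": 8, "facilitation": 12}
print(f"Estimated cost: £{estimate_mooc_cost(design, media_production_cost=15000):,.0f}")
```

A real version would break staff time down by role and salary band and pull in Full Economic Costing rates – which is exactly the localisation work described above.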

    Evaluation

We’ve had a lot of ideas and thoughts about evaluation; what is the value of running Moocs for the university? It feels right to mention public engagement, the spirit of giving back and developing really good resources that people can enjoy. There’s the golden carrot of student recruitment being dangled, but I can’t see that balancing any Profit/Loss sheets. I do not think it’s about pedagogical innovation – let’s get real here: most Moocs are still a bulk of organised expert videos and text. I don’t think this does a disservice to our Moocs, or those of others; I’d wager that people really like organised expert videos and text (YouTube and Wikipedia being stable Top 10 Global Websites hints at this). But there are other reasons – building Moocs is a new way to engage a lot of people with your topic of interest. Dilution of the common corpus of subjects is a good thing; these courses are open to anyone who can access them. The next logical step is subjects of fascination: niche, specialist, bespoke – all apply to the future of Moocs.

For evaluation, some obvious things to measure are listed below (with a rough sketch of how they might be recorded after the list):

• Time people spend on developing the Mooc – we’ve got a breakdown document which tries to list each part of making / running a Mooc so we can estimate time spent.
    • Money spent on media production – this one tends to be easy
    • Registration, survey, quiz, platform usage and associated learner data
    • Feedback from course teams on their experience
    • Outcomes from running a Mooc (book chapters, conference talks, awards won, research instigated)
    • Teaching and learning augmentation (i.e. using the Mooc in a course/module/programme)
    • Developing digital learning objects which can be shared / re-used
    • Student recruitment from the Mooc
    • Pathways to impact – for research-informed Moocs (and we’re working on refining what this means)
    • How much we enjoyed the process – this does matter!
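
To make that list concrete, here is one hypothetical way a course team might record those measures consistently across Moocs. The field names are mine, not an agreed UCL or FutureLearn schema – a sketch only:

```python
# Hypothetical record for the evaluation measures listed above; field names
# are illustrative, not an agreed UCL or FutureLearn schema.
from dataclasses import dataclass, field

@dataclass
class MoocEvaluationRecord:
    course_name: str
    staff_hours_by_task: dict          # e.g. {"filming": 40, "facilitation": 60}
    media_production_spend: float      # GBP
    registrations: int
    survey_responses: int
    completion_rate: float             # proportion of active learners finishing
    team_feedback: str                 # free-text reflections from the course team
    outcomes: list = field(default_factory=list)            # talks, chapters, awards, research
    reused_in_modules: list = field(default_factory=list)   # taught courses reusing the content
    enjoyment_rating: int = 0          # because, as above, this matters

record = MoocEvaluationRecord(
    course_name="Example Mooc",
    staff_hours_by_task={"filming": 40, "facilitation": 60},
    media_production_spend=12000.0,
    registrations=8000,
    survey_responses=450,
    completion_rate=0.12,
    team_feedback="Enjoyed filming; underestimated facilitation time.",
)
```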

    Developing a Mooc – lessons learned

    Communication

Designing a course for FutureLearn involves a lot of communication, both internally and with external partners – mostly our partner manager at FutureLearn, but there are others too. This mostly means a serious number of emails – 1503 (so far) to be exact. How? If I knew, I’d be rich or loaded with oodles of time. It’s another New Year’s resolution: stop, think – do you really need to send / read / keep that email? Likely not! I tried to get us on Trello early so as to avoid this, but I didn’t do so well, and as the number of people involved grew, adding all these people to a humungous Trello board just seemed, well, unlikely. Email: I shall understand you one day, but for now, I surrender.

    Making videos

From a bystander’s viewpoint I think the course teams all enjoyed making their videos (see the final evaluation point). The Why We Post team had years to make their videos in situ during their research across the world. This is a great opportunity to capture real people in their own context; I don’t think video gets much better than this. They had permission from the outset to use the videos for educational purposes (good call) and wove them right into the fabric of the course – and you can tell. Making Babies in the 21st Century has captured some of the best minds in the field of reproduction; Dan Reisel (lead educator) knows the people he wants, he’s well connected, and he has captured and collated experts in the field – a unique and challenging achievement. Tim Shakespeare, of The Many Faces of Dementia, was keen to capture three core groups for his course: people with dementia, their carers / family, and the experts who are working to improve the lives of people with dementia. This triangle of people makes it a rounded experience for any learner: you’ll connect with at least one of these groups. Genius.

    Also:

    • Audio matters the most – bad audio = not watching
    • Explain and show concepts – use the visual element of video to show what you mean, not a chin waggling around
    • Keep it short – it’s not an attention span issue, it’s an ideal course structuring exercise.
    • Show your face – people still want to see who’s talking at some point
• Do not record what can be read – it’s slower to listen than it is to read; if your video can be replaced with an article, you may want to replace it.
    • Captions and transcripts are important – do as many as you can. Bonus: videos can then be translated.

    Using third party works

This remains as tricky as it has ever been. Moocs are murky (commercial? educational? for-profit?) but you’ll need to ask permission for every single third-party piece of work you want to use. Best advice: try not to, or be prepared to get no response! Images are the worst – it’s a challenge to find lots of great images that you’re allowed to use, and a course without images isn’t very visually compelling. Set aside some time for this.

    Designing social courses that can also be skim-read

FutureLearn, in particular, is a socially-oriented learning platform – you’ll need to design a course around peer-to-peer discussion. Some of this is about crossing thresholds – you’re trying to teach people something important, and enabling rich discussion will help. You’re also trying to keep them engaged – so you can’t ask for a deep, thoughtful intervention every two minutes. Find the balance between asking important questions – raising provocative points and enjoying the fruits of the discussion – and the reality of ‘respond if you want’ discussion prompts.

    Connect course teams together

While they might not hold one another’s hair when things get rough, the course teams will benefit from sharing their experiences with one another. We’ve held monthly meetings since the beginning, encouraging each team to attend and share their updates and challenges, show content, see examples from other courses and generally make it a more social experience. Some did share their Dropboxes with one another – which I hadn’t expected, but I am enjoying the level of transparency. I am guilty of thinking at scale at the moment, so while I was guiding and pseudo ‘project-managing’ the courses, I was keen to promote independence and agency within the course teams. It’s their course, they’ll be the ones working into the night on it, and I can’t have them relying on me and my dreaded inbox. The outcome is that they build their own ideas and shape them in their own style; maybe we’re lucky, but this is important. We do intervene at critical stages, recommending approaches and methods as appropriate.

    Plan, design and then build

    Few online learning environments make good drafting tools. We encouraged a three-stage development process:

1. Proposals, expanded into Excel-based outline documents covering each week, the headline for each step/component and critical elements like discussion starters.
    2. Design in documents – Word/Google Docs (whatever) – expand each week; what’s in each step. Great for editorial and refinement.
    3. Build in the platform.

The reason for this is that outlines are usually quick to fix when there’s a glaring structural omission or error. The document-based design then means content can be written and refined, and steps planned out, in a loose, familiar tool. Finally, the platform needs to be played with and understood, and then the documents translated into real courses. It’s not a rigid process, and some courses had an ABC (Arena Blended Connected) curriculum design stage, just to be sure the storyboard of the course made sense.
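
As a purely illustrative example of what a stage-one outline might capture – the structure and headings below are mine, not a UCL or FutureLearn template – the skeleton could be as simple as:

```python
# Hypothetical stage-one outline skeleton; fields are illustrative, not a UCL or
# FutureLearn template. Stage two would expand each step into full prose in
# Word/Google Docs before anything is built in the platform.
course_outline = {
    "title": "Example Mooc",
    "weeks": [
        {
            "week": 1,
            "theme": "Introductions and key concepts",
            "steps": [
                {"type": "video", "headline": "Welcome from the lead educator"},
                {"type": "article", "headline": "What this course covers"},
                {"type": "discussion", "headline": "Where are you joining from?",
                 "discussion_starter": "Introduce yourself and say what you hope to learn."},
            ],
        },
        # ...one entry per week, quick to reorder before any content is written
    ],
}

# A glaring structural omission (e.g. a week with no discussion step) is easy to
# spot and fix at this level, which is the point of outlining first.
for week in course_outline["weeks"]:
    has_discussion = any(step["type"] == "discussion" for step in week["steps"])
    print(f"Week {week['week']}: {len(week['steps'])} steps, discussion: {has_discussion}")
```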

    Overall

• It’s hard work – for the course teams – and you can almost guarantee they’ll underestimate the amount of time needed.
    • The value shows once you go live and people start registering, sharing early comments on the Week 0 discussion areas.
• These courses look good and work well as examples for others, whether Moocs or credit-bearing blended/online courses.
• Courses don’t need to be big – 1–2 hours a week for 2–4 weeks is enough. I’d like to see more smaller Moocs.
    • Integrating your Moocs into taught programmes, modules, CPD courses makes a lot of sense

As a final observation before we go live on Monday with the first course, Why We Post: The Anthropology of Social Media, there was one thing that caught my eye early on:

Every course team leader for our Moocs is primarily a researcher, and their Moocs are produced, largely, from their research activity. UCL is research intensive, so this isn’t too crazy, but we’re also running an institutional initiative, the Connected Curriculum, which is designed to fully integrate research and teaching. The Digital Education team is keen to see how we can build e-learning into research from the outset. This leads us to a new project at UCL entitled Pathways to Impact: Research Outputs as Digital Education (ROADE), where we’re exploring the origins and value of e-learning objects and courses that grow out of research dissemination. More soon on that one – but our Mooc activity really instigated this work.

    Coming soon – I hope – Reflections after UCL’s first Mooc 🙂 

     

    Online learning and the No Significant Difference phenomenon

    By Mira Vogel, on 20 August 2015

When asked for evidence of the effectiveness of digital education I often find it hard to respond, even though this is one of the best questions you can ask about it. Partly this is because digital education is not a single intervention but a portmanteau of different applications interacting with the circumstances and practices of staff and students – in other words, it’s situated. Another reason is that evaluation by practitioners tends not to be well resourced or rewarded, leading to a lack of well-designed and well-reported evaluation studies to synthesise into theory. For these reasons I was interested to see a paper by Tuan Nguyen titled ‘The effectiveness of online learning: beyond no significant difference and future horizons’ in the latest issue of the Journal of Online Learning and Teaching. Concerned with the generalisability of research which compares ‘online’ to ‘traditional’ education, it offers a critique and proposes improvements.

    Nguyen directs attention to nosignificantdifference.org, a site which indicates that 92% of distance or online education is at least as effective or better than what he terms ‘traditional’ i.e. in-person, campus-based education. He proceeds to examine this statistic, raising questions about the studies included and a range of biases within them.

Because the studies include a variety of interventions in a variety of contexts, it is impossible to define an essence of ‘online learning’ (and the same is presumably true for ‘traditional learning’). From this it follows that no constant effect is found for online learning; most of the studies had mixed results attributed to heterogeneity effects. For example, one found that synchronous work favoured traditional students whereas asynchronous work favoured online students. Another found that, as we might expect, its results were moderated by race/ethnicity, sex and ability. One interesting finding was that fixed timetabling can enable traditional students to spend more time-on-task than online students, with correspondingly better outcomes. Another was that improvements in distance learning may only be identifiable if we exclude what Nguyen tentatively calls ‘first-generation online courses’ from the studies.

A number of the studies contradict each other, leading some researchers to argue that much of the variation in observed learning outcomes is due to research methodology. Where the researcher was also responsible for running the course there was concern about vested interests in the results of the evaluation. The validity of quasi-experimental studies is threatened by confounding effects, such as students from a control group being able to use friends’ accounts to access the intervention. One major methodological concern is endogenous selection bias: where students self-select their learning format rather than being randomly assigned, there are indications that the online students are more able and confident, which in turn may mask the effectiveness of the traditional format. Also related to sampling, most data comes from undergraduate courses, and Nguyen wonders whether graduate students, with their more independent learning skills, might fare better with online courses.
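
To see how that selection bias can hide a real difference, here is a small, purely illustrative simulation – not from Nguyen’s paper, and every number in it is invented – in which more able students opt into the online format. A naive comparison of group means then masks, and here even reverses, the assumed advantage of the traditional format:

```python
# Toy simulation of endogenous selection bias (all numbers are invented).
# Abler students self-select into the online format, so a naive comparison of
# group means can hide, or even reverse, a real advantage of the traditional format.
import math
import random

random.seed(1)
TRUE_TRADITIONAL_EFFECT = 0.3   # assumed benefit of the traditional format

def simulate_student():
    ability = random.gauss(0, 1)
    # Self-selection: higher-ability students are more likely to choose online.
    goes_online = random.random() < 1 / (1 + math.exp(-2 * ability))
    noise = random.gauss(0, 0.5)
    outcome = ability + (0 if goes_online else TRUE_TRADITIONAL_EFFECT) + noise
    return goes_online, outcome

students = [simulate_student() for _ in range(20000)]
mean = lambda xs: sum(xs) / len(xs)
online = [o for g, o in students if g]
traditional = [o for g, o in students if not g]

# The naive gap comes out negative (online looks better) even though the
# assumed true effect favours the traditional format by +0.3.
print(f"Naive gap (traditional - online): {mean(traditional) - mean(online):+.2f}")
```

In this toy model, random assignment (or controlling for prior ability) would recover the assumed effect, which is exactly why the self-selected comparisons Nguyen reviews are so hard to interpret.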

Lest all of this feed cynicism about bothering to evaluate at all, it is worth remembering that only evaluation research can empower good decisions about where to put our resources and energies. What this paper indicates is that it is possible to design out, or control for, some of the confounding factors it raises. Nguyen makes a couple of suggestions for the ongoing research agenda. The first he terms the “ever ubiquitous” more-research-needed approach to investigating heterogeneity effects.

    “In particular, there needs to be a focus on the factors that have been observed to have an impact on the effectiveness of online education: self-selection bias, blended instruction, active engagement with the materials, formative assessment, varied materials and repeatable low-stake practice, collaborative learning communities, student maturity, independent learning skills, synchronous and asynchronous work, and student characteristics.”

He points out a number of circumstances which are under the direct control of the teaching team, such as opportunities for low-stakes practice, occasions for synchronous and asynchronous engagement, and varied materials, which are relatively straightforward to adjust and relate to student outcomes. He also suggests how to approach weighting and measuring these. Inevitably, thoughts turn to individualising student learning, and it is this, particularly in the form of adaptive learning software, that Nguyen proposes as the most likely way out of the No Significant Difference doldrums. Determining the most effective pathways for different students in different courses promises to inform those courses’ ongoing designs. This approach puts big data in the service of individualisation based on student behaviour or attributes.

This dual emphasis of Nguyen’s research agenda avoids an excessively data-oriented approach. When evaluation becomes diverted into trying to relate clicks to test scores, not only are some subject areas under-researched but benefits of online environments are liable to be conceived in narrowed terms of the extent to which they yield enough data to individualise student pathways. This in itself is an operational purpose which overlooks the educational qualities of environments as design spaces in which educators author, exercise professional judgment, and intervene contingently.

I had a bit of a reverie about vast repositories of educational data such as LearnSphere and the dangers of allowing them to over-determine teaching (though I don’t wish to diminish their opportunities, either). I wished I had completed Ryan Baker’s Big Data in Education Mooc on EdX (this will run again, though whether I’ll be equal to the maths is another question). I wondered if the funding squeeze might conceivably lead us to adopt paradoxically homogeneous approaches to coping with the heterogeneity of students, where everyone draws similar conclusions from the data and acts on it in similar ways, perhaps buying off-the-shelf black-box algorithmic solutions from increasingly monopolistic providers. Then I wondered if I was indulging dystopian flights of fancy, because in order for click-by-click data to inform the learning activity design you need to triangulate it with something less circumstantial – you need to know the whys as well as the whats and the whens. Click data may provide circumstantial evidence about what does or doesn’t work, but on its own it can’t propose solutions.

Speculating about solutions is a luxury – using A/B testing on students may be allowed in Moocs and other courses where nobody’s paying, but it’s a more fraught matter in established higher education cohorts. Moreover Moocs are currently outside many institutions’ quality frameworks and this is probably why their evaluation questions often seem concerned with engagement rather than learning. Which is to say that Mooc evaluations which are mainly click and test data-oriented may have limited light to shed outside those Mooc contexts.

    Evaluating online learning is difficult because evaluating learning is difficult. To use click data and test scores in a way which avoids unnecessary trial and error, we will need to carry out qualitative studies. Nguyen’s two approaches should be treated as symbiotic.


    Video HT Bonnie Stewart.

Nguyen, T. (2015). The effectiveness of online learning: beyond no significant difference and future horizons. Journal of Online Learning and Teaching, 11(2). Retrieved from http://jolt.merlot.org/Vol11no2/Nguyen_0615.pdf