Digital Education team blog

Ideas and reflections from UCL's Digital Education team

Archive for the 'Mira’s Mire' Category

Wikipedia Course Leaders’ event

By Mira Vogel, on 15 August 2016

Wikimedia UK held a Wikipedia Course Leaders event on the afternoon of July 19th. The meeting brought together academics who use Wikipedia in their modules, Wikipedians in Residence, and other Wikipedia and higher education enthusiasts (like me) to share practice and think about some of the challenges of setting assessed work in an environment which is very much alive and out in the world.

As you can imagine, we were all in agreement about the potential of Wikipedia in our respective disciplines, which included Applied Human Geography, Psychology, Law, World Christianity, and Research Methods for Film. As you can see from the notes we took, we discussed colleagues’ and students’ reservations, tensions and intersections between Wikimedia and institutional agendas, relationships between students and other Wikipedians, assessment which is fair and well-supported, and Wikipedia tools for keeping track of students. There are plenty of ideas, solutions, and examples of good and interesting practice. There is a new and developing Wikimedia page for UK universities.

If you are interested in using Wikipedia to give your students the experience of public writing on the Web and contributing within a global community of interest, there is plenty of support.

ELESIG London 3rd Meeting – Evaluation By Numbers

By Mira Vogel, on 13 July 2016

The third ELESIG London event, 'Evaluation By Numbers', was a two-hour meeting on July 7th. Building on the successful format of our last meeting, we invited two presenters on the theme of 'proto-analytics' – an important aspect of institutional readiness for learning analytics which empowers individuals to work with their own log data to come up with theories about what to do next. There were 15 participants with a range of experiences and interests, including artificial intelligence, ethics, stats and data visualisation, and a range of priorities including academic research, academic development, data security, and real-time data analysis.
After a convivial round of introductions there was a talk from Michele Milner, Head of the Centre for Excellence in Teaching and Learning at the University of East London, titled Empowering Staff And Students. UEL is determined to avoid data-driven decision making, and its investigations had confirmed a lack of enthusiasm and a wariness on the part of most staff about working with log data. This is normal in the sector and probably attributable to a combination of inexperience and overwork. The UEL project had several strands. One was attendance monitoring, feeding into a student engagement metric with more predictive power, including correlation between engagement (operationalised as, for example, library and VLE access, and data from the tablets students are issued) and achievement. This feeds a student retention app, along with demographic weightings. Turnitin and Panopto (lecture capture) data have so far been elusive, but UEL is persisting on the basis that these gross measures do correlate.
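Under the hood this kind of work tends to come down to very simple calculations. As a rough illustration only (this is not UEL's pipeline; the file name, column names and choice of Pearson's r are all my own invented assumptions), correlating a few engagement measures with marks might look like this:

```python
# Rough sketch only: correlate some engagement measures with achievement.
# The CSV, its column names and the use of Pearson's r are assumptions.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("engagement.csv")  # hypothetical: one row per student

for measure in ["library_visits", "vle_logins", "tablet_hours"]:
    r, p = pearsonr(df[measure], df["module_mark"])
    print(f"{measure}: r = {r:.2f} (p = {p:.3f})")
```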
The project gave academic departments a way to visualise retention as an overall red-amber-green rating, and to simulate the expected effects of different interventions. The feedback they received from academics was broadly positive but short of enthused, with good questions about cut-off dates, workload allocation, and the nature and timing of interventions. Focus groups with students revealed that there was low awareness of data collection, that students weren't particularly keen to see the data, and that if presented with it they would prefer bar charts by date rather than comparisons with other students. We discussed the ethics of data collection, including the possibility of students opting in or out of opening up their anonymised data set.
Our next speaker was Andreas Konstantinidis from King's College London, on Utilising Moodle Logs (slides). He attributes the low number of educators currently working with VLE data to the limitations of the logs. In Moodle's case this is particularly to do with limited filtering, and the exclusion of some potentially important data including Book pages and links within Labels. To address this, he and his colleague Cat Grafton worked on some macros which allow individual academics to import and visualise logs downloaded from their VLE (KEATS) in an MS Excel spreadsheet.
To dodge death by data avarice they first had to consider which data to include, deciding on the following. Mean session length does not firmly correspond to anything, but the fluctuations are interesting. A high bounce rate indicates students are having difficulty finding what they need. Time of use, combining two or more filters, can inform plans about when to schedule events or release materials. You can also see the top and bottom 10 students engaging with Moodle, and the top and bottom resources used – this data can be an icebreaker for discussing reasons and support. IP addresses may reveal where students are gathering, e.g. a certain IT room, which in turn may inform decisions about where to reach students.
King's has made KEATS Analytics available to all (workbook included), and you can download it from http://tinyurl.com/ELESIG-LA. It currently supports Moodle 2.6 and 2.8, with 3.X coming soon. At UCL we're on 2.8 only for the next few weeks, so if you want to work with KEATS Analytics, here's some guidance for downloading your logs now.
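If you'd rather poke at the raw download yourself before (or instead of) opening it in Excel, a couple of the measures above are straightforward to reproduce. The sketch below is only an illustration, not part of KEATS Analytics: the column names ("Time", "User full name"), the date format and the 30-minute session cut-off are assumptions which vary between Moodle versions and exports, so check them against your own log file.

```python
# Illustrative sketch (not KEATS Analytics itself) of computing mean session
# length, bounce rate and time of use from a downloaded Moodle log.
# Column names, date format and the session cut-off are assumptions.
import pandas as pd

SESSION_GAP = pd.Timedelta(minutes=30)  # assumed gap that ends a session

logs = pd.read_csv("moodle_logs.csv")
logs["Time"] = pd.to_datetime(logs["Time"], dayfirst=True)
logs = logs.sort_values(["User full name", "Time"])

# A new session starts wherever the gap since the same user's previous event
# exceeds the cut-off (or at the user's first event).
gap = logs.groupby("User full name")["Time"].diff()
logs["session"] = (gap.isna() | (gap > SESSION_GAP)).cumsum()

sessions = logs.groupby("session").agg(
    user=("User full name", "first"),
    start=("Time", "min"),
    end=("Time", "max"),
    events=("Time", "size"),
)

print("Mean session length:", (sessions["end"] - sessions["start"]).mean())
print("Bounce rate:", (sessions["events"] == 1).mean())  # single-event sessions
print("Events by hour of day:")
print(logs["Time"].dt.hour.value_counts().sort_index())
```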
As Michele (quoting Eliot) said, “Hell is a place where nothing connects with nothing”. Although it is not always fit to use immediately, data abounds – so what we’re looking for now are good pedagogical questions which data can help to answer. I’ve found Anna Lea Dyckhoff’s meta-analysis of tutors’ action research questions helpful. To empower individuals and build data capabilities in an era of potentially data-driven decision-making, a good start might be to address these questions in short worksheets which take colleagues who aren’t statisticians through statistical analysis of their data. If you are good with data and its role in educational decision-making, please get in touch.
A participant pointed us to a series of podcasts from Jisc on the ethical and legal issues of learning analytics. Richard Treves has a write-up of the event and my co-organiser Leo Havemann has collected the tweets. For a report on the current state of play with learning analytics, see Sclater and colleagues' April 2016 review of UK and international practice. Sam Ahern mentioned there are still places on a 28th July data visualisation workshop being run by the Software Sustainability Institute.
To receive communications from us, including details of our next ELESIG London meeting, please sign up to the ELESIG London group on Ning. It’s free and open to all with an interest in educational evaluation.

[Screenshot: KEATS Analytics]

An even better peer feedback experience with the Moodle Workshop activity

By Mira Vogel, on 21 December 2015

This is the third and final post in a series about using the Moodle Workshop activity for peer feedback, in which I'll briefly summarise how we acted on the recommendations from the second iteration, which in turn built on feedback from the first. The purpose is to show how pedagogical considerations translate into Moodle activity settings.
To refresh your memories, the setting is the UCL Arena Teaching Association Programme in which postgraduate students, divided into three cognate cohorts, give and receive peer feedback on case studies they are preparing for their Higher Education Academy Associate Fellowship application. Since the activity was peer feedback only, we weren’t exploiting the numeric grades, tutor grades, or grade weighting capabilities of Moodle Workshop on this occasion.
At the point we last reported on Moodle Workshop there were a number of recommendations. Below I revisit those and summarise the actions we took and their consequences.

Improve signposting from the Moodle course area front page, and maybe the title of the Workshop itself, so students know what to do and when.

We changed the title to a friendly imperative: “Write a mini case study, give peer feedback”. That is how the link to it now appears on the Moodle page.

Instructions: let students know how many reviews they are expected to do; let them know if they should expect variety in how the submissions display.

Noting that participants may need to click or scroll for important information, we used the instructions fields for submissions and for assessment to set out what they should expect to see and do, and how. In instructions for Submission this included word count, how to submit, and that their names would appear with their submission. Then the instructions for Assessment included how to find the allocation, a rough word count for feedback, and that peer markers’ names would appear with their feedback (see below for more on anonymity). The Conclusion included how to find both the original submission and the feedback on it.
In the second iteration some submissions had been attachments while others had been typed directly into Moodle. This time we set attachments to zero, instead requiring all participants to paste their case studies directly into Moodle. We hoped that the resulting display of each submission and its assessment on the same page would help with finding the submission and with cross-referencing. Later it emerged that there were mixed feelings about this: one participant reported difficulties with footnotes and another said he would have preferred a separate document so he could arrange the windows in relation to each other, rather than scrolling. In future we may allow attachments, and include a line in the instructions prompting participants to look for an attachment if they can't see the submission directly in Moodle.
Since the participants were entirely new to the activity, we knew we would need to give more frequent prompts and guidance than if they were familiar with it. Over the two weeks we sent out four News Forum posts in total at fixed times in relation to the two deadlines. The first launched the activity, let participants know where to find it, and reminded them about the submission deadline; the second, a couple of days before the submission deadline, explained that the deadline was hard and let them know how and when to find the work they had been allocated to give feedback on; the third reminded them of the assessment deadline; the fourth let them know where and when to find the feedback they had been given. When asked whether these emails had been helpful or a nuisance, the resounding response was that they had been useful. Again, if students had been familiar with the process, we would have expected to take a much lighter touch on the encouragement and reminders, but first times are usually more effort.
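For what it's worth, a reminder schedule like that is easy to plan in advance once the two deadlines are fixed. A trivial sketch follows; the dates and offsets below are placeholders for illustration, not the ones we actually used.

```python
# Illustrative only: generate reminder dates relative to the two Workshop
# deadlines. The dates and offsets are placeholders, not our real schedule.
from datetime import date, timedelta

submission_deadline = date(2015, 12, 7)   # placeholder
assessment_deadline = date(2015, 12, 14)  # placeholder

reminders = [
    ("Launch: where to find the activity", submission_deadline - timedelta(days=7)),
    ("Submission deadline is hard; how to find your allocation", submission_deadline - timedelta(days=2)),
    ("Assessment deadline reminder", assessment_deadline - timedelta(days=2)),
    ("Where and when to find your feedback", assessment_deadline + timedelta(days=1)),
]

for subject, when in reminders:
    print(when.isoformat(), "-", subject)
```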

Consider including an example case study & feedback for reference.

We linked to one rather than including it within the activity (which is possible) but some participants missed the link. There is a good case for including it within the activity (with or without the feedback). Since this is a low-stakes, voluntary activity, we would not oblige participants to carry out a practice assessment.

Address the issue that, due to some non-participation during the Assessment phase, some students gave more feedback than they received.

In our reminder News Forum emails we explicitly reminded students of their role in making sure every participant received feedback. In one cohort this had a very positive effect with participants who didn’t make the deadline (which is hard for reasons mentioned elsewhere) using email to give feedback on their allocated work. We know that, especially with non-compulsory activities and especially if there is a long time between submitting, giving feedback and receiving feedback, students will need email prompts to remind them what to do and when.

We originally had a single comments field but will now structure the peer review with some questions aligned to the relevant parts of the criteria.

Feedback givers had three question prompts to which they responded in free text fields.

Decide about anonymity – should both submissions and reviews be anonymous, or one or the other, or neither? Also to consider – we could also change Permissions after it’s complete (or even while it’s running) to allow students to access the dashboard and see all the case studies and all the feedback.

We decided to even things out by making both the submissions and reviews attributable, achieving this by changing the permissions for that Moodle Workshop activity before it ran. We used the instructions for submissions and assessment to flag this to participants.
A lead tutor for one of the cohorts had been avoiding using Moodle Workshop because she felt it was too private, confined to a participant and their few reviewees. We addressed this after the closure of the activity by proposing to participants that we release all case studies and their feedback to all participants in the cohort (again by changing the permissions for that Moodle Workshop activity). We gave them a chance to raise objections in private, but after receiving none we went ahead with the release. We have not yet checked the logs to see whether this access has been exploited.

Other considerations.

Previously we evaluated the peer feedback activity with a questionnaire, but this time we didn’t have the opportunity for that. We did however have the opportunity to discuss the experience with one of the groups. This dialogue affirmed the decisions we’d taken. Participants were positive about repeating the activity, so we duly ran it again after the next session. They also said that they preferred to receive feedback from peers in their cognate cohort, so we maintained the existing Moodle Groupings (Moodle Groups would also work if the cohorts had the same deadline date, but ours didn’t, which is why we had three separate Moodle Workshop instances with Groupings applied).
The staff valued the activity but felt that without support from ELE they would have struggled to make it work. ELE is responding by writing some contextual guidance for that particular activity, including a reassuring checklist.

Online learning and the No Significant Difference phenomenon

By Mira Vogel, on 20 August 2015

When asked for evidence of the effectiveness of digital education I often find it hard to respond, even though this is one of the best questions you can ask about it. One reason is that digital education is not a single intervention but a portmanteau of different applications interacting with the circumstances and practices of staff and students – in other words, it's situated. Another is that evaluation by practitioners tends not to be well resourced or rewarded, leading to a lack of well-designed and well-reported evaluation studies to synthesise into theory. For these reasons I was interested to see a paper by Tuan Nguyen titled 'The effectiveness of online learning: beyond no significant difference and future horizons' in the latest issue of the Journal of Online Learning and Teaching. Concerned with the generalisability of research which compares 'online' to 'traditional' education, it offers a critique and proposes improvements.

Nguyen directs attention to nosignificantdifference.org, a site whose catalogue of studies indicates that 92% of distance or online education is at least as effective as, or better than, what he terms 'traditional' i.e. in-person, campus-based education. He proceeds to examine this statistic, raising questions about the studies included and a range of biases within them.

Because the studies include a variety of interventions in a variety of contexts, it is impossible to define an essence of 'online learning' (and the same is presumably true for 'traditional learning'). From this it follows that no constant effect is found for online learning; most of the studies had mixed results attributed to heterogeneity effects. For example, one found that synchronous work favoured traditional students whereas asynchronous work favoured online students. Another found that, as we might expect, its results were moderated by race/ethnicity, sex and ability. One interesting finding was that fixed timetabling can enable traditional students to spend more time-on-task than online students, with correspondingly better outcomes. Another was that improvements in distance learning may only be identifiable if we exclude what Nguyen tentatively calls 'first-generation online courses' from the studies.

A number of the studies contradict each other, leading some researchers to argue that much of the variation in observed learning outcomes is due to research methodology. Where the researcher was also responsible for running the course there was concern about vested interests in the results of the evaluation. The validity of quasi-experimental studies is threatened by confounding effects, such as students from a control group being able to use friends' accounts to access the intervention. One major methodological concern is endogenous selection bias: where students self-select their learning format rather than being randomly assigned, there are indications that the online students are more able and confident, which in turn may mask the effectiveness of the traditional format. Also related to sampling, most data comes from undergraduate courses, and Nguyen wonders whether graduate students with independent learning skills might fare better with online courses.
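Endogenous selection bias is worth pausing on, because it is easy to see how it works with made-up numbers. In the toy simulation below (entirely illustrative, nothing to do with Nguyen's data), the online format adds nothing at all to attainment, yet because abler students opt into it more often, a naive comparison of group means flatters 'online':

```python
# Toy simulation of endogenous selection bias. All numbers are invented.
# The format has zero true effect, but abler students self-select into
# 'online', so a naive comparison of group means makes online look better.
import random

random.seed(1)

online_scores, traditional_scores = [], []
for _ in range(10_000):
    ability = random.gauss(0, 1)
    chooses_online = random.random() < (0.7 if ability > 0 else 0.5)
    score = 60 + 10 * ability + random.gauss(0, 5)  # no format term at all
    (online_scores if chooses_online else traditional_scores).append(score)

def mean(xs):
    return sum(xs) / len(xs)

print("Naive 'online' mean:     ", round(mean(online_scores), 1))
print("Naive 'traditional' mean:", round(mean(traditional_scores), 1))
```

Randomisation, or at least controlling for prior attainment, is what better study designs are trying to buy back.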

Lest all of this feed cynicism about bothering to evaluate at all, it is worth remembering that only evaluation research can empower good decisions about where to put our resources and energies. What this paper indicates is that it is possible to design out, or control for, some of the confounding factors it raises. Nguyen makes a couple of suggestions for the ongoing research agenda. The first he terms the "ever ubiquitous" more-research-needed approach to investigating heterogeneity effects.

“In particular, there needs to be a focus on the factors that have been observed to have an impact on the effectiveness of online education: self-selection bias, blended instruction, active engagement with the materials, formative assessment, varied materials and repeatable low-stake practice, collaborative learning communities, student maturity, independent learning skills, synchronous and asynchronous work, and student characteristics.”

He points out a number of circumstances which are under the direct control of the teaching team, such as opportunities for low-stakes practice, occasions for synchronous and asynchronous engagement, and varied materials, which are relatively straightforward to adjust and relate to student outcomes. He also suggests how to approach weighting and measuring these. Inevitably, thoughts turn to individualising student learning, and it is this, particularly in the form of adaptive learning software, that Nguyen proposes as the most likely way out of the No Significant Difference doldrums. Determining the most effective pathways for different students in different courses promises to inform those courses' ongoing designs. This approach puts big data in the service of individualisation based on student behaviour or attributes.

This dual emphasis of Nguyen's research agenda avoids an excessively data-oriented approach. When evaluation becomes diverted into trying to relate clicks to test scores, not only are some subject areas under-researched, but the benefits of online environments are liable to be conceived in the narrow terms of whether they yield enough data to individualise student pathways. This in itself is an operational purpose which overlooks the educational qualities of environments as design spaces in which educators author, exercise professional judgment, and intervene contingently.

I had a bit of a reverie about vast repositories of educational data such as LearnSphere and the dangers of allowing them to over-determine teaching (though I don't wish to diminish their opportunities, either). I wished I had completed Ryan Baker's Big Data in Education Mooc on EdX (this will run again, though whether I'll be equal to the maths is another question). I wondered if the funding squeeze might conceivably lead us to adopt paradoxically homogeneous approaches to coping with the heterogeneity of students, where everyone draws similar conclusions from the data and acts on them in similar ways, perhaps buying off-the-shelf black-box algorithmic solutions from increasingly monopolistic providers.

Then I wondered if I was indulging dystopian flights of fancy, because in order for click-by-click data to inform learning activity design you need to triangulate it with something less circumstantial – you need to know the whys as well as the whats and the whens. Click data may provide circumstantial evidence about what does or doesn't work, but on its own it can't propose solutions. Speculating about solutions is a luxury – using A/B testing on students may be allowed in Moocs and other courses where nobody's paying, but it's a more fraught matter in established higher education cohorts. Moreover, Moocs are currently outside many institutions' quality frameworks, and this is probably why their evaluation questions often seem concerned with engagement rather than learning. Which is to say that Mooc evaluations which are mainly click- and test-data-oriented may have limited light to shed outside those Mooc contexts.

Evaluating online learning is difficult because evaluating learning is difficult. To use click data and test scores in a way which avoids unnecessary trial and error, we will need to carry out qualitative studies. Nguyen’s two approaches should be treated as symbiotic.


Video HT Bonnie Stewart.

Nguyen, T. (2015). The effectiveness of online learning: beyond no significant difference and future horizons. Journal of Online Learning and Teaching, 11(2). Retrieved from http://jolt.merlot.org/Vol11no2/Nguyen_0615.pdf

 

You said, we did – how student feedback influenced a Moodle space

By Mira Vogel, on 21 July 2015

CALT staff and ELE worked together to incorporate Arena student feedback into reworked Moodle spaces. The report below is organised around before-and-after screenshots, explains the changes, sets out a house style, and concludes with a checklist.

When we next ask students for feedback, we hope to find improvements in orientation, organisation and communication. We also hope that the Moodle work, in-person activities and individual study will integrate even better.

Conciseness, consistency, glanceability, signposts and instructions were the most important things to come out of this work.

[Embedded report]

Jisc Learning and Teaching Experts Group, June 2015

By Mira Vogel, on 23 June 2015

Jisc convenes the Learning and Teaching Experts Group three times a year; originally it comprised project fundholders from the E-Learning Programme, but it is now more open. This meeting – the 35th – had sessions on the student experience, leadership, and students as partners, all with a digital focus.

Helen Beetham introduced a new NUS benchmarking tool for the student digital experience (not yet released, but see their existing benchmarking tools), and further work on a digital capabilities framework for staff. Each table critiqued one of eleven areas of the tool, and contributed ideas to a twelfth on ‘Digital Wellbeing’.

There followed a series of shorter presentations, including two senior managers describing their respective institutions' digital strategies and approaches to supporting digital leadership, along with staff at Reading College who presented on their use of Google, their ethos of 'pass it on' for digital know-how, and how staff can indicate that they are happy to be observed (by hanging a green or red coat hanger on the door of their teaching room – paradoxically and unsurprisingly the green one was redundant because everybody got the message and used it). In case anybody remained unconvinced that there is any urgency to this, Neil Witt (another senior participant) tweeted a recent House of Lords report, Make or Break: The UK's Digital Future [pdf]. He thinks that building digital capabilities across an institution will require an HR strategy.

During lunch I talked with Ron Mitchell about Xerte, the open source suite for authoring interactive digital content, and made a note to ask for a pilot installation. I failed to find the roof garden (consulting the floor guide later, it's close to the bottom of the building) and fretted about a very large fish in a very small tank on reception. Then came a session on cultures of partnership, with a panel of students and staff in student-facing roles. Like the previous session, this was full of tantalising ideas, such as staff being able to choose a student or staff colleague to observe their teaching, and Dan Derricot from Lincoln University starting to think of student engagement as a ladder in which the course evaluation form sits lower than, say, creating new opportunities. Partnership culture depends on visibility; at first staff need to take a lot of initiative, but as students see other students' work, they are more likely to step forward with ideas of their own. Eric Stoller tweeted this interesting-looking paper theorising student involvement. Jisc has a network of Change Agents and (separately) there is a new journal of Educational Innovation, Partnership and Change with a call for papers.

Finally, the members' showcase. I attended Lina Petrakieva's session on assessing students' digital stories at Glasgow Caledonian. They had to deliberate about similar things to us, namely whether to require the students to use a common platform (they did) and whether to change the assessment criteria in recognition of the new modes of expression (they did). I caught the end of a talk from Lisette Toetenel at the Open University about setting up a network to share designs for learning.

Participants used the Twitter hashtag #JiscExperts15 mostly to amplify the event, but a few conversations sparked – including this one on helping champions, and the moment when James Kieft (a runner-up for last year's Learning Technologist of the Year) from Reading College dropped the bombshell / reminded us that they'd turned off their Moodle in 2014 and moved to Google applications. This set quite a few people off – not for reasons of rent-seeking and fear of change, though I'm sure we all need to check for that, but because of business models, orientation, and the risk of abruptly-retired services. (It also gave other people a frisson of liberation.) I should reassure (?) at this point that there are no plans to turn off UCL Moodle. Then somebody asked what the purpose of learning technologists would be in the VLE-less future, but the session ended before another round of "What is a learning technologist today?" could get underway. Sometimes I think of these (what we're currently calling) digital education professional services roles as midwifery; sometimes I think of them as more specialised educational design roles in waiting, until the 'digital' becomes more taken for granted. As long as education isn't served up pre-programmed or decided centrally, the roles are likely to endure in some evolving form.

Thanks to Jisc and all contributors for a stimulating day.