E-Learning Environments team blog
  • ELE Group
    We support Staff and Students using technology to enhance teaching & learning.

Here you'll find updates on developments at UCL, links & events, as well as case studies and personal experiences. Let us know if you have any ideas you want to share!


    Moodle Snapshot Lockdown – 1st September 2015

    By Domi C Sinclair, on 27 August 2015

Please note that from 1st September 2015 the Moodle Snapshot for 2014-15 will become read-only for all users.

Until then, you can continue to make changes to any courses on which you have tutor or course administrator rights.

It is important to note that any deletions or removals made within the Moodle Snapshot are permanent and cannot be reversed.

    Should you have any questions please see the Snapshot page in the Moodle Resource Centre wiki or contact E-Learning Environments.


    Lynda.com is now available offline on your computer

    By Jessica Gramp, on 25 August 2015

You might have noticed a new addition to the buttons on Lynda.com courses over the past week. This new feature allows you to view content offline on your desktop via a downloadable application. When you are viewing a course in Lynda – as opposed to a single video or playlist – you will see a [View Offline] button above the video, as shown in the screenshot below. When you click on this you will be prompted to download the desktop app for your computer system.

    Download courses to watch offline on your desktop or laptop by following 3 simple steps:

1. Download the native Mac or Windows app (by clicking ‘View Offline’ on any Lynda.com course).
2. Log in with just one click (this checks that you are already logged in to Lynda.com via your web browser).
3. Select the ‘View Offline’ button on any Lynda.com course page to add courses to the Lynda.com Offline App.
[Screenshot: Offline viewing]

Once the app is installed, you can click ‘1-click login’; this will take you to Lynda.com in your web browser, where you should see a message saying “Successfully connected!” – provided you were still logged in to Lynda.com.

[Screenshot: “Successfully connected!” message]

From the app you can easily add courses by clicking the [Add courses] button in the top right corner of the page. This will open Lynda.com in your web browser, where you can use the same [View Offline] button you clicked before to add courses to your Lynda.com Offline App.

[Screenshot: Lynda Offline App]

    You may need to allow your web browser to launch an external application (as shown below):

[Screenshot: Chrome popup for launching an external app]


    You can try this new offline viewing feature out for yourself on your computer.

To get started, log in (with your UCL credentials) via www.ucl.ac.uk/lynda.

    Online learning and the No Significant Difference phenomenon

    By Mira Vogel, on 20 August 2015

When asked for evidence of the effectiveness of digital education I often find it hard to respond, even though this is one of the best questions you can ask about it. Partly this is because digital education is not a single intervention but a portmanteau of different applications interacting with the circumstances and practices of staff and students – in other words, it’s situated. Another reason is that evaluation by practitioners tends not to be well resourced or rewarded, leading to a lack of well-designed and well-reported evaluation studies to synthesise into theory. For these reasons I was interested to see a paper by Tuan Nguyen titled ‘The effectiveness of online learning: beyond no significant difference and future horizons’ in the latest issue of the Journal of Online Learning and Teaching. Concerned with the generalisability of research which compares ‘online’ to ‘traditional’ education, it offers a critique and proposes improvements.

    Nguyen directs attention to nosignificantdifference.org, a site which indicates that 92% of distance or online education is at least as effective or better than what he terms ‘traditional’ i.e. in-person, campus-based education. He proceeds to examine this statistic, raising questions about the studies included and a range of biases within them.

Because the studies include a variety of interventions in a variety of contexts, it is impossible to define an essence of ‘online learning’ (and the same is presumably true for ‘traditional learning’). From this it follows that no constant effect is found for online learning; most of the studies had mixed results attributed to heterogeneity effects. For example, one found that synchronous work favoured traditional students whereas asynchronous work favoured online students. Another found that, as we might expect, its results were moderated by race/ethnicity, sex and ability. One interesting finding was that fixed timetabling can enable traditional students to spend more time-on-task than online students, with correspondingly better outcomes. Another was that improvements in distance learning may only be identifiable if we exclude what Nguyen tentatively calls ‘first-generation online courses’ from the studies.

A number of the studies contradict each other, leading some researchers to argue that much of the variation in observed learning outcomes is due to research methodology. Where the researcher was also responsible for running the course there was concern about vested interests in the results of the evaluation. The validity of quasi-experimental studies is threatened by confounding effects such as students from a control group being able to use friends’ accounts to access the intervention. One major methodological concern is endogenous selection bias: where students self-select their learning format rather than being randomly assigned, there are indications that the online students are more able and confident, which in turn may mask the effectiveness of the traditional format. Also related to sampling, most data comes from undergraduate courses; Nguyen wonders whether graduate students with independent learning skills might fare better with online courses.
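
To make the endogenous selection point concrete, here is a minimal, hypothetical simulation (my own sketch, not from Nguyen’s paper): ability drives both the choice of format and the score, so the online group can look better on average even when the online format itself has a small negative effect.

```python
import random

random.seed(1)

# Hypothetical numbers, chosen only for illustration: ability dominates the
# score, and the online format itself has a small *negative* effect (-2).
def outcome(ability, online):
    return 60 + 30 * ability - (2 if online else 0) + random.gauss(0, 5)

abilities = [random.random() for _ in range(10_000)]  # ability in [0, 1]
# Endogenous selection: the more able a student, the likelier they choose online.
chooses_online = [random.random() < a for a in abilities]

online = [outcome(a, True) for a, o in zip(abilities, chooses_online) if o]
campus = [outcome(a, False) for a, o in zip(abilities, chooses_online) if not o]

mean = lambda xs: sum(xs) / len(xs)
print(f"online mean score: {mean(online):.1f}")  # ~78, despite the negative effect
print(f"campus mean score: {mean(campus):.1f}")  # ~70
```

Replacing the self-selection line with random assignment (say, a coin flip per student) recovers the true, slightly negative effect of the online format – which is exactly why the lack of random assignment in these studies matters.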

    Lest all of this feed cynicism about bothering to evaluate at all, only evaluation research can empower good decisions about where to put our resources and energies. What this paper indicates is that it is possible to design out or control for some of the confounding factors it raises. Nguyen makes a couple of suggestions for the ongoing research agenda. The first he terms the “ever ubiquitous” more-research-needed approach to investigating heterogeneity effects.

    “In particular, there needs to be a focus on the factors that have been observed to have an impact on the effectiveness of online education: self-selection bias, blended instruction, active engagement with the materials, formative assessment, varied materials and repeatable low-stake practice, collaborative learning communities, student maturity, independent learning skills, synchronous and asynchronous work, and student characteristics.”

He points out a number of circumstances which are under the direct control of the teaching team, such as opportunities for low-stakes practice, occasions for synchronous and asynchronous engagement, and varied materials, which are relatively straightforward to adjust and relate to student outcomes. He also suggests how to approach weighting and measuring these. Inevitably, thoughts turn to individualising student learning and it is this, particularly in the form of adaptive learning software, that Nguyen proposes as the most likely way out of the No Significant Difference doldrums. Determining the most effective pathways for different students in different courses promises to inform those courses’ ongoing designs. This approach puts big data in the service of individualisation based on student behaviour or attributes.

This dual emphasis of Nguyen’s research agenda avoids an excessively data-oriented approach. When evaluation becomes diverted into trying to relate clicks to test scores, not only are some subject areas under-researched but benefits of online environments are liable to be conceived in narrowed terms of the extent to which they yield enough data to individualise student pathways. This in itself is an operational purpose which overlooks the educational qualities of environments as design spaces in which educators author, exercise professional judgment, and intervene contingently.

I had a bit of a reverie about vast repositories of educational data such as LearnSphere and the dangers of allowing them to over-determine teaching (though I don’t wish to diminish their opportunities, either). I wished I had completed Ryan Baker’s Big Data in Education Mooc on EdX (this will run again, though whether I’ll be equal to the maths is another question). I wondered if the funding squeeze might conceivably lead us to adopt paradoxically homogeneous approaches to coping with the heterogeneity of students, where everyone draws similar conclusions from the data and acts on it in similar ways, perhaps buying off-the-shelf black-box algorithmic solutions from increasingly monopolistic providers.

Then I wondered if I was indulging dystopian flights of fancy, because in order for click-by-click data to inform the learning activity design you need to triangulate it with something less circumstantial – you need to know the whys as well as the whats and the whens. Click data may provide circumstantial evidence about what does or doesn’t work, but on its own it can’t propose solutions. Speculating about solutions is a luxury – using A/B testing on students may be allowed in Moocs and other courses where nobody’s paying, but it’s a more fraught matter in established higher education cohorts. Moreover Moocs are currently outside many institutions’ quality frameworks and this is probably why their evaluation questions often seem concerned with engagement rather than learning. Which is to say that Mooc evaluations which are mainly click and test data-oriented may have limited light to shed outside those Mooc contexts.

    Evaluating online learning is difficult because evaluating learning is difficult. To use click data and test scores in a way which avoids unnecessary trial and error, we will need to carry out qualitative studies. Nguyen’s two approaches should be treated as symbiotic.


    Video HT Bonnie Stewart.

Nguyen, T. (2015). The effectiveness of online learning: beyond no significant difference and future horizons. Journal of Online Learning and Teaching, 11(2). Retrieved from http://jolt.merlot.org/Vol11no2/Nguyen_0615.pdf


Lecturecast archiving completed

    By Domi C Sinclair, on 10 August 2015

    The Lecturecast archiving process has now been successfully completed. This means Lecturecast can be accessed again via https://lecturecast.ucl.ac.uk/

    During the archiving process all recordings were moved to the ‘archive’ category in Lecturecast and are now NOT available for viewing.

    If you want to make archived content available for students, then you will need to move it back into the available category. The Lecturecast Resource Centre wiki has instructions on how to make content available from the archive.

    Recordings will be kept in the archive category until 2 years after their creation date, at which point they will be deleted. This is an important part of managing the storage system for Lecturecast.

You can read more about why we carry out this process in the retention and archiving policy in the Lecturecast Resource Centre wiki.


    Thoughts from AAEEBL 2015

    By Domi C Sinclair, on 6 August 2015

Last week I was fortunate enough to attend and present at AAEEBL 2015 in Boston, Massachusetts. You might be wondering what AAEEBL stands for and what this event was all about, especially if you have never heard of it before. The Association for Authentic, Experiential and Evidence-Based Learning focuses on the use of portfolios at its annual conference. In fact, one of the key points to come out of the conference was a consensus that as a community we should stop referring to ‘e-portfolios’ (or ‘eportfolios’, depending on your preference), a label which is distracting and in many cases superfluous. Instead it is time we just talk about portfolios and focus on the pedagogy. This conference was very much focused on the pedagogy, and in most cases the tool used was almost irrelevant to the presentation. In education it is far too easy to get caught up in our own silos, whether that is a department-based silo or a tool-based silo. When we stop and look to the outside we can often find valuable input we would otherwise have missed.

Collaboration was also a key theme of the conference. Making a portfolio effective involves everyone working together. It involves tutors and students having a clear dialogue about what is expected in the portfolio. It can also benefit from peer-to-peer collaboration, whether that is academics helping one another out with creative ideas/support or students giving each other tips and feedback. Of course it can also require working with the E-Learning team, and here at UCL we are always happy to offer advice or support around any platform, including portfolios. Currently we use the Mahara platform at UCL; you might have heard of it as MyPortfolio, which is the name we use for our installation. If you’d like to find out more about MyPortfolio you can go directly to the platform at https://myportfolio.ucl.ac.uk or visit the MyPortfolio Resource Centre in the wiki – although please note this site is currently under maintenance and being updated.

The final key theme I’d like to highlight is badges. There were a number of presentations and a keynote on the use of badges with portfolios. This seems like a natural fit, as portfolios are a great way of collecting evidence for a badge. A badge in turn is a nice way to recognise competencies or skills that might not otherwise be acknowledged by assessment criteria or formal credit. The MacArthur Foundation have produced a video which explains the basics of what a badge is, if you are still unsure.

    At UCL we have done some pilots with badges and we’d be happy to talk to anyone about this if they wish to get in touch.

If you’d like to get a wider overview of the conversations from AAEEBL then please see my Storify, which collects my tweets and the best of the other tweets from the event.

    You can also see my presentation on utilizing the (portfolio) community: https://youtu.be/wcFBsON_-6Q.

    Should you have any questions then please contact the E-Learning Environments team.

    Turnitin downtime 15 August 2015

    By Domi C Sinclair, on 4 August 2015

We have had the following communication from Turnitin regarding global maintenance of their service:

    Turnitin services will be mostly unavailable during a scheduled maintenance period on Saturday, August 15, 2015 from 7 AM to 11 AM U.S. Pacific Time (see local time: http://go.turnitin.com/e/45292/o5gahfa/3vbcwk/441055800).

An announcement will appear to users within Turnitin in advance of when the system will be unavailable for the scheduled maintenance. The maintenance will affect Turnitin and TurnitinUK users, as well as those using a Learning Management System.

    Instructors are encouraged to modify assignment due dates either before or at least several hours after the scheduled maintenance window.

This means Turnitin will be unavailable for UCL users (whether accessed via Moodle or directly) on 15th August from 15:00 to 19:00 BST. Please ensure you do not have any deadlines scheduled during this period.
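
For anyone who wants to double-check the conversion (or work it out for another timezone), here is a small illustrative sketch in Python – my own, not an official Turnitin tool:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library in Python 3.9+

# Convert the maintenance window from U.S. Pacific Time to UK time.
pacific = ZoneInfo("America/Los_Angeles")
london = ZoneInfo("Europe/London")

start = datetime(2015, 8, 15, 7, 0, tzinfo=pacific)
end = datetime(2015, 8, 15, 11, 0, tzinfo=pacific)

print(start.astimezone(london).strftime("%H:%M %Z"))  # 15:00 BST
print(end.astimezone(london).strftime("%H:%M %Z"))    # 19:00 BST
```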