E-Learning Environments team blog
  • ELE Group
    We support Staff and Students using technology to enhance teaching & learning.

    Here you'll find updates on developments at UCL, links & events as well as case studies and personal experiences. Let us know if you have any ideas you want to share!


    Archive for the 'Learning designs' Category

    How can e-learning help with student feedback?

    By Clive Young, on 20 June 2014

I attended a teaching and learning meeting in one of our academic departments recently when someone asked if they could use technology to improve their feedback to students. Four possibilities sprang to mind.

Online marking – As an associate lecturer at the OU I have to mark online: feedback via standard forms and document mark-up (i.e. comments in Word) is obligatory. After several years I now have a ‘bank’ (personal collection) of comments I can draw on to quickly provide rich, personalised feedback. Moreover, the OU uses ‘rubrics’ (marking schemes) to structure feedback and make sure it is aligned to the learning outcomes. This improves the efficiency and effectiveness of marking, and honestly I’m not sure I could manage without this approach now. Here at UCL many colleagues use Moodle Assignments to let markers bulk upload annotations on files of student work, and to set up rubrics and other structured marksheets. GradeMark, part of Turnitin, has a particularly convenient marking environment. Its unique selling point is customisable sets of frequently made comments, applied by drag and drop alongside general online comments. These comment ‘banks’ are available at a click, can be shared across programmes and can themselves be linked to rubric structures.

Self-assessment – Perennial favourites in UCL student surveys are diagnostic and self-assessment quizzes, usually developed in Moodle’s Quiz, which despite its frivolous name is actually a very sophisticated assessment and feedback tool. Moodle quizzes offer different question types, including multiple choice, gap fill and drag-and-drop, all of which can automate giving feedback based on the settings tutors choose. For example, feedback could be given immediately after a question is answered, or deferred until the quiz is completed. Students appreciate the chance to check progress and get focused feedback based on the answer they chose. While writing good questions and feedback takes thought and care, technically quizzes are comparatively easy to set up in Moodle. The trick is to provide good, differentiated feedback, linking to remedial or additional materials that the student can look at straight away. In Moodle these links could be to documents, items on the electronic reading lists, Lecturecast recordings, YouTube videos and so on, as well as simple texts and images. Questions can be imported from Word using a template, allowing rapid quiz authoring without an internet connection, and even the Matlab GUI has been used to automatically generate mathematical question banks for later import. As an alternative, UCL has also had some success using PeerWise, which enables students to design their own multiple-choice questions (MCQs).
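As an illustration of this kind of automated authoring, here is a minimal Python sketch that generates a small bank of arithmetic MCQs with differentiated feedback in Moodle’s GIFT text format (one of Moodle’s standard question import routes). All question content, feedback strings and the output file name are invented for the example.

```python
import random

def gift_mcq(title, stem, correct, wrongs, right_fb, wrong_fb):
    """Format one multiple-choice question in GIFT syntax: '=' marks the
    correct answer, '~' a distractor, and '#' introduces per-answer
    feedback. GIFT's special characters (= ~ # { } :) are avoided in the
    generated text."""
    options = [f"={correct}#{right_fb}"] + [f"~{w}#{wrong_fb}" for w in wrongs]
    random.shuffle(options)
    return f"::{title}:: {stem} {{\n  " + "\n  ".join(options) + "\n}\n"

# Build a small bank of arithmetic questions with differentiated feedback.
questions = []
for i in range(1, 6):
    a, b = random.randint(2, 12), random.randint(2, 12)
    answer = a * b
    distractors = {answer + a, answer - b, a + b} - {answer}
    questions.append(gift_mcq(
        title=f"Multiplication Q{i}",
        stem=f"What is {a} x {b}?",
        correct=answer,
        wrongs=sorted(distractors),
        right_fb="Well done.",
        wrong_fb=f"Check your working; the answer is {answer}.",
    ))

# The resulting file can be imported via Moodle's question bank import
# page, choosing the GIFT format.
with open("question_bank.gift", "w") as f:
    f.write("\n".join(questions))
```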

Audio and video feedback – One interesting feature of GradeMark is the facility to provide students with audio feedback. Staff at UCL have been experimenting with audio feedback for several years, adding live audio comments to text documents or forum posts, for example. The rationale is that the feedback is richer, more personal, more expressive of different emphasis, and there is more of it for the amount of time spent. Since it also tends to be less concerned with particularities of grammar, spelling and so on, some markers may want to combine it with word-processed annotations. An extension of this is to make a single recording giving general feedback to an entire cohort on a given piece of work. A further extension is to create simple narrated screencasts using Lecturecast personal capture (desktop recording) to record worked examples for generic assessment and exam feedback. This approach has been tried in at least one department with positive results.

    Peer assessment and MyPortfolio – Technology can of course provide whole new ways to enable group assessment and the development of rich personal portfolios, for example using the increasingly popular UCL portfolio and presentation environment MyPortfolio.  For an excellent introduction, have a look at the recent UCL case study Making history with iPads, peer assessment and MyPortfolio.


    Image “Got Feedback?” by Alan Levine https://flic.kr/p/nKPbtE

    Digital Literacies special interest group (SIG) meeting – November 2013

    By Jessica Gramp, on 28 November 2013

Fifteen academic and support staff from across UCL met for the first UCL Digital Literacies special interest group (SIG) on Wednesday 27th November. Jessica Gramp, from E-Learning Environments, delivered a presentation prepared in collaboration with Hana Mori, giving the Jisc definition of digital literacies.

We’re not sure about the term – some find it demeaning. A better term than Digital Literacies is clearly needed, one that doesn’t offend or imply a deficit. There’s also a need to differentiate between kinds of digital literacy. Some areas that have been used at other institutions include: digital identity; managing studies; working in teams; using other people’s content responsibly; and digitally enhancing job prospects. There was a general consensus that digital literacies need to be embedded, not tacked on as a separate thing to do.


    Chronogogy – Time-led learning design examples

    By Matt Jenner, on 15 November 2013

I recently blogged about a concept called chronogogy: a time-led principle of learning design whose importance I’m trying to fathom. My approach is to keep blogging about it and wait until someone picks me up on it. Worst case, I put some ideas out in the public domain with associated keywords etc. Please forgive me.

    An example of chronogogically misinformed learning design

A blended learning programme makes good use of f2f seminars. Knowing the seminar takes at least an hour to get really interesting, the teacher prefers to use online discussion forums to seed the initial discussions and weed out any quick misconceptions. Using a set reading list, the intention is for students to read before the session, be provoked to think about the topics raised and address preliminary points in an online discussion. The f2f seminars are on Tuesdays and students have a week to go online and contribute. This schedule is repeated a few times during the twelve-week module.

The problem is, only a handful of students ever post online, and others complain that there’s ‘not enough time’ to do the task each week. The teacher has considered making the tasks fortnightly, but this isn’t really ideal either, as some students may slip behind, especially when the exercise is repeated during the module.

The argument in my previous post was that if the planning of an activity doesn’t correlate well with the activity patterns of its users, it may increase the chance of disengagement.

    Example learner 1

     

| Day       | Tues     | Wed | Thurs | Fri | Sat           | Sun            | Mon                  | Tues            |
| Learner 1 | Task set | –   | –     | –   | Reading start | Reading finish | Contributes to forum | Attends seminar |

     

If a reading is set on Tuesday and completed by Sunday, the learner may only start considering their discussion points on Sunday or Monday night. This completes the task before Tuesday’s session, but does it make good use of the task?

    Example learner 2

     

| Day       | Tues     | Wed           | Thurs | Fri            | Sat                  | Sun          | Mon                  | Tues            |
| Learner 2 | Task set | Reading start | –     | Reading finish | Contributes to forum | Visits forum | Contributes to forum | Attends seminar |

The reading is set on Tuesday and completed by Friday; the learner even posts to the forum on Saturday. When they come back to the forum on Sunday, there’s not much there. They return on Monday and can respond to Learner 1’s points, but it could be too late to really fire up a discussion. The seminar is the next day, Tuesday, which could increase the chance of discussion points being saved for that instead, as the online discussion may not seem worth adding to.

These are two simplistic examples, but they raise further questions:

    • Q: Can these two students ever have a valuable forum discussion?
• Q: If this was scaled up, would the night before the seminar provide enough time for a useful online discussion?
• Q: If a Learner 3 had read the material immediately and posted on the Wednesday, what would’ve been the outcome?

Any students posting earlier in the seven-day period may be faced with the silence of others still reading, while postings made late in the week risk being lost over the weekend, when fewer visitors log on. Therefore, unless people are active immediately after the seminar (i.e. read and post in the first day or two), any online discussion takes place on Monday – the day before the seminar.

A lot of assumptions are made in this example, obviously, but it could happen.
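To make the reasoning concrete, here is a toy simulation in Python (all weights and cohort figures invented): if most learners’ first posts arrive late in the week, the window in which the whole cohort is on the forum together shrinks to almost nothing before the Tuesday seminar.

```python
import random

# Toy model: day 0 is the Tuesday the task is set; day 7 the next seminar.
# Invented weights, echoing Learner 1 above: most reading finishes late.
FINISH_DAY_WEIGHTS = {2: 0.10, 3: 0.15, 4: 0.15, 5: 0.30, 6: 0.30}

def first_post_day():
    """A learner's first forum post lands the day after finishing reading."""
    finish = random.choices(list(FINISH_DAY_WEIGHTS),
                            weights=list(FINISH_DAY_WEIGHTS.values()))[0]
    return finish + 1

def shared_window(cohort_size=20, trials=10_000):
    """Average number of days between the last learner's first post and the
    seminar (day 7), i.e. how long the whole cohort overlaps on the forum."""
    total = 0
    for _ in range(trials):
        last_arrival = max(first_post_day() for _ in range(cohort_size))
        total += max(0, 7 - last_arrival)
    return total / trials

print(f"Average shared discussion window: {shared_window():.2f} days")
```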

    Development/expansion

    If this example were true, and it helps if you can believe it is for a moment, then what steps could be taken to encourage the discussion to start earlier?

One thought could be to move the seminar to later in the week, say Thursday or Friday. Observing learners’ behaviour ‘out of class’ (offline and online) could give insight into the planning of sessions and activities. In the classic school sense, students are given a piece of homework and they fit it in whenever suits them. However, if that work is collaborative, i.e. working in a group or contributing towards a shared discussion, then the timing of their activity needs to align with the group, and with the timings known to be most effective.

    Time-informed planning

Muffins got tired waiting for fellow students to reply to his post.

Knowing study habits and preferences for offline and online study could make a difference here. If the teacher had given the students a different time window over the week, it might have altered contributions to the task. Data in the previous post indicate that learners access educational environments more during the week than at the weekend. An activity set on Friday and expected for Monday seems unfair on two levels: a weekend is an important time for a break, and weekends are not as busy as weekdays for online activity.

If weekly access follows a pattern, then online tasks could be planned around an understanding of that pattern, and this may affect the outcome.
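As a sketch of what that understanding might look like in practice, the following assumes a CSV export of VLE access logs with a unix-timestamp column named ‘time’ (the file name and column name are illustrative, not a fixed Moodle export format) and prints a crude weekday histogram:

```python
import csv
from collections import Counter
from datetime import datetime

def accesses_by_weekday(path):
    """Count log rows per weekday. Assumes each row has a unix-timestamp
    column named 'time'; adapt to the real export format."""
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            weekday = datetime.fromtimestamp(int(row["time"])).strftime("%A")
            counts[weekday] += 1
    return counts

counts = accesses_by_weekday("moodle_log_export.csv")  # illustrative name
days = ["Monday", "Tuesday", "Wednesday", "Thursday",
        "Friday", "Saturday", "Sunday"]
peak = max(counts.values(), default=1)
for day in days:
    n = counts.get(day, 0)
    # Crude text histogram - enough to spot the weekday/weekend troughs.
    print(f"{day:>9}: {'#' * round(40 * n / peak)} {n}")
```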

    Does time-informed learning design make a difference?

There’s only one way to know, really, and that’s to perform an experiment around a hypothesis. The examples above were based on a group/cohort discussion and made a lot of assumptions, but they provide a basis on which I want to conduct some further research.

    Time-based instruction and learning. Is activity design overlooked?

In the examples, the teacher is making an assumption that their students will ‘find the time’. This is perfectly acceptable, but students may better perform ‘time-finding’ when they are also wrapped into a strong schedule, or structure, for their studies. Traditionally this is bound to the restrictions of timetabling and room access, teachers’ duties and the learners’ schedules (plus any other factors). But with online learning (or blended learning) the timetabling or time-planning duty is displaced into a new environment. This online space is marketed as open, personalised, in-your-own-time – all of which is very positive. However, it also comes with the negative aspect of self-organisation and could, possibly, be a little too loosely defined. Perhaps especially so when it’s no longer personal, but group or cohort based.

There’s no intention here of mandating when learners should be online – that’s certainly not the point. In the first instance it’s about being aware of when they might be online, planning better around that, and seeing whether this is even ‘a thing to factor in’.

Chronology is the study of time. Time online is a stranger concept than time face-to-face: an f2f session lasts roughly an hour or two, whereas online it could amount to the same time, but not in one chunk. Fragmentation, openness and flexibility are all key components – learners can come and go whenever they like, and our logs showing how many UK connections are made to UCL Moodle at 3-5am show this quite clearly.

Chronogogy is just a little branding for the idea that instructional design, i.e. the planning and building of activities for online learning, may need to factor time into the design process. This isn’t simply to say ‘time is important’, but that understanding more about users’ access patterns, especially (though not only) in online educational environments, could influence the timing and design of online activities. This could directly affect the student and teacher experience. It naturally feeds back into f2f sessions too, where considering chronogogy helps ensure that the blended components properly support the rest of the course.

Time-led instructional design, or chronogogically informed learning design, could become ever more important for fully online courses that rely heavily on user-to-user interaction as a foundation of the student experience – for example the Open University, which relies heavily on discussion forums, or MOOCs, where learner-to-learner interaction is the only viable form.

Most online courses would state that student interaction is on the critical path to success. From credit-bearing courses to MOOCs, considering chronogogy within the course structure can inform design decisions early in the development process. This would be important when considering:

    • Planned discussions
    • Release of new materials
    • Synchronous activities
• Engagement prompts

In another example, MOOCs (in 2013) seem to attract a range of learners. Some are fully engaged, participate in all the activities, review all the resources and earn completion certificates. Others do less than this, lurking in the shadows as some may say, but still have a perfectly satisfactory experience. Research is being performed into these engagement patterns, and much talk of increasing retention – for both MOOC and distance learning engagement/attrition – has sparked within educational and political circles.

One factor to consider here is how you encourage activity in a large and disparate group. The fourth point above, engagement prompts, is a way of enticing learners back to the online environment. Something needs to bring them back, and this may be something simple like an email from the course lead. Data may suggest that sending this on a Saturday could have a very different result than sending it on a Tuesday.

    Engagement prompts as the carrot, or stick?

Among the many areas still to explore: if learners were less active over the weekends, for example, would prompting them to action via an engagement prompt provide a positive or negative return? This could be addressed via an experiment.
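A minimal sketch of such an experiment might randomise which weekday each learner receives the prompt, then compare the share who return within a few days per arm. The two stub functions below stand in for whatever mail-out and log-checking machinery a real course would have (both hypothetical here):

```python
import random
from statistics import mean

def send_prompt(learner, weekday):
    """Stub: in practice, schedule the prompt email for the given weekday."""
    pass

def returned_within(learner, days):
    """Stub: in practice, check the VLE logs; here, a coin flip."""
    return random.random() < 0.5

def run_experiment(learners, arms=("Tuesday", "Saturday"), window_days=3):
    # Randomly assign each learner to a send-day arm.
    assignment = {learner: random.choice(arms) for learner in learners}
    for learner, weekday in assignment.items():
        send_prompt(learner, weekday)
    # Compare the proportion returning within the window, per arm.
    results = {arm: [] for arm in arms}
    for learner, weekday in assignment.items():
        results[weekday].append(returned_within(learner, window_days))
    return {arm: mean(r) for arm, r in results.items() if r}

print(run_experiment([f"learner{i}" for i in range(200)]))
```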

    Concluding thoughts

I seem interested in this field, but I wonder about its true value. I’d be keen to hear your thoughts. Some things for me to consider are:

• If there are peaks and troughs in access – what would happen if these could be levelled out?
• How could further research be conducted (live or archived data sets)?
    • Have I missed something in the theory of learning design that is based on time-led instruction?
• I wonder what learners would think of this – canvass them for their opinions.
• Could I make a small modification to Moodle to record engagement and forum-posting data, creating a more focused data set?
    • Am I mad?

     

     

    New UCL Moodle baseline

    By Jessica Gramp, on 12 November 2013

The UCL Moodle Baseline that was approved by Academic Committee in June 2009 has now been updated after wide consultation on current best practice at UCL. The aim of the Baseline is to provide guidelines for staff to follow when developing Moodle courses, so that UCL students have a consistently good e-learning experience. The guidelines are intended to be advisory rather than prescriptive or restrictive, and the recommendations may be covered by a combination of module, programme and departmental courses.

    Changes include the addition of a course usage statement explaining how students are expected to use their Moodle course. A communications statement is also now a requirement, in order to explain to students how they are expected to communicate with staff, and how often they can expect staff to respond. It is now a recommendation for staff to add (and encourage their students to add) a profile photograph or unique image, to make it easier to identify contributors in forums and other learning activities.

    New guidelines for including assessment detail and Turnitin guidance have been added for those who use these technologies.

    See the new UCL Moodle Baseline v2

    Find out more about this and other e-learning news in the monthly UCL E-Learning Champions’ Newsletter.

    Engagement! A tale of two MOOCs

    By Clive Young, on 13 October 2013

What is the real educational experience of MOOC students? Some people seem to take strong positions on MOOCs without actually having completed one, after just ‘dipping in’. I felt this was not quite enough to judge what MOOC learning is about, so back in August I signed up for two MOOCs running almost concurrently. Both were on the Coursera platform and both – coincidentally – were from Wesleyan University. Modernism and Postmodernism is 14 weeks long and still running; Social Psychology, at a sprightly – and more normal – six weeks, finished recently. I had actually completed a Coursera MOOC at the beginning of the year, but as it was on a familiar subject I considered that taking subjects I knew little about would give me a more ‘authentic’ learner experience.

I thought it was important to avoid ‘dip-in-ism’, so I committed to completing both, even paying $40 to go on the Signature Track for the first one. This means Coursera verifies my identity when I submit assignments, both by typing pattern and by face recognition. To set up face recognition I initially held my passport in front of the laptop camera and it scanned my photo. For typing recognition a short phrase is tapped out; Coursera now knows what a dismal typist I am.

Both courses were based around an hour or so of weekly video lectures, but despite being out of the same stable they turned out to be very different in design.

Modernism and Postmodernism was/is perhaps the more ‘conventional’. Each week there were four to six short video lectures and a couple of original texts as assigned readings. That was it. The videos featured Wesleyan president and star lecturer Prof Michael Roth. Most were professionally shot, though sometimes interspersed with lecture-capture-style clips from some of his classes. What was unexpected here was that the quality of the video – although nice – was largely immaterial. The power and engagement lay simply in Prof Roth’s remarkable narrative, essentially the story of modern Western thought since the Enlightenment as expressed in the works of Kant, Rousseau, Marx, Darwin, Flaubert, Baudelaire, Woolf and so on – not really a ‘grand narrative’ but a compelling intellectual bricolage. I was genuinely gripped by the story Roth was telling, and sometimes just read the transcripts (much quicker) when I was too busy to watch the video. The eight assessments, 800-word essays, were peer-marked, and the twenty or so assignments I have looked at so far in the course are of quite a high academic standard. The peer-marking approach is astonishingly valuable, by the way, as the other students usually present the material in a very different way, challenging and reviewing my understanding of that part of the course.

Social Psychology used video differently. The video of the lecturer was slightly more ‘amateurish’, but the editing was far more sophisticated. Great effort had been taken to get permission to show and edit in some remarkable clips of experiments (including the infamous Stanford Prison Experiment), TED talks, interviews with psychologists and some public broadcasting documentaries. This was supported by chapter-length PDF extracts from major textbooks and reprints of papers. Together this was an astonishingly rich learning resource, the best I have seen on any online course, including many paid-for ones. As in the other course, the tutor voice of Prof Scott Plous was very clear and engaging, but his written assignments were more diverse: reactions to an online survey, analysis of a website and the ‘day of compassion’. The assignments – also peer-marked – were less good than in the Modernism course, but improved as the ‘drop-ins’ dropped out. The final assignments I read on compassion, from students in India, the Philippines and so on, were genuinely moving. The idea that MOOCs encourage a superficial form of learning is misplaced, at least in this case. Participants had evidently reflected, sometimes quite deeply, on the sometimes challenging material.

Engagement and interaction

In neither course did I especially follow the discussion threads; they were too fragmented. Social Psychology, for example, had 200,000 enrolments, 7,000 forum posts in the first week and about 8,000 students still active at the end. How can you have a ‘conversation’ in that environment? It made me wonder if ‘interaction’, the much-vaunted goal of many online courses, is slightly overrated. Much more motivating to me as a student was the strength of the narrative, the storyline – a bit like reading a good book, in fact. Video proved an excellent way of getting that narrative across, and the assignments in both courses made sure I assimilated at least some of the content and provided an important time frame to ensure I ‘kept up’. This ‘interaction light’ approach seemed to be in contrast with the Open University courses I have done, and indeed tutor on. These are deliberately designed around a series of regular interactions with fellow students and tutors and, being written by a teaching team, have a far less imposing narrative personality. Maybe in the MOOC environment, where ‘classical’ online interaction is necessarily weaker, design may need to focus not simply on interaction but on engagement, and a strong personal narrative may often be a key element. Just ‘dipping into’ a MOOC may completely miss this most important aspect.

     

    Just how good is your online course?

    By Clive Young, on 25 September 2013

One of the perennial problems for both academic colleagues and learning technologists is trying to judge the educational value of online courses. Especially in blended learning, the online ‘course’ is often just one component of a broader learner experience, and its role can really only be understood in the context of how it supports or extends ‘live’ activities. Thus what looks to a learning technologist like an unsophisticated ‘list of links’ in Moodle may actually support a rich classroom-led enquiry-based learning activity. It is hard to tell without speaking to the lecturer (or students) involved.

Nevertheless, for modules which are wholly online or make heavy use of technology, a consensus has emerged as to what components are necessary for a ‘good’ course. One very practical example of this is the Blackboard Exemplary Course Program Rubric, which has gradually developed into a kind of sector standard since it was established in 2000, back then under the WebCT flag. The eight-page rubric actually supports Blackboard’s Catalyst course competition (only open to Blackboard users, of course!), but the document can also be read as a platform-neutral checklist of good design, as applicable to Moodle as it is to Blackboard. Using the rubric, course designers can evaluate how well their own course conforms to ‘best practices’ in four areas: Course Design, Interaction and Collaboration, Assessment and Learner Support. Each area is broken down into separate criteria, each rated on a scale from ‘incomplete’ to ‘exemplary’ (a simple self-review sketch follows the list below).

    • Course Design covers how clear the course goals and objectives are, the way the content is presented and any use of media, how learning design encourages students to be engaged in ‘higher order’ thinking and generally how the VLE is used to help student engagement.
• Interaction and Collaboration includes communication strategies (an aspect so important we are considering including it in the UCL Moodle baseline), how a sense of learner community is developed, and ‘logistics’, i.e. the quality and expectations of interaction.
    • Assessment is essentially about how assessment design aligns with the learning outcomes, the expectations on students and any opportunities for self assessment.
    • Learner support highlights the importance of orientation to the course and the VLE, clarity around the instructor role, links to institutional policies, accessibility and the role of feedback.
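As a rough illustration of checklist-style use, the Python sketch below encodes the four areas as a simple data structure and averages self-assigned 0–3 ratings per area. The sub-criteria paraphrase the summaries above, and the two intermediate scale labels are illustrative (the rubric’s own scale runs from ‘incomplete’ to ‘exemplary’).

```python
# Rough self-review helper based on the four rubric areas described above.
SCALE = ["incomplete", "developing", "accomplished", "exemplary"]

AREAS = {
    "Course Design": ["Goals and objectives", "Content presentation",
                      "Higher-order engagement", "Use of the VLE"],
    "Interaction and Collaboration": ["Communication strategies",
                                      "Learner community", "Logistics"],
    "Assessment": ["Alignment with outcomes", "Expectations on students",
                   "Self-assessment opportunities"],
    "Learner Support": ["Orientation", "Instructor role",
                        "Institutional policies and accessibility",
                        "Role of feedback"],
}

def review(scores):
    """Print an average 0-3 rating per area; unrated criteria count as 0."""
    for area, criteria in AREAS.items():
        avg = sum(scores.get(c, 0) for c in criteria) / len(criteria)
        print(f"{area}: {avg:.1f}/3 ({SCALE[round(avg)]})")

# Example self-review with a handful of criteria rated.
review({"Goals and objectives": 3, "Communication strategies": 1,
        "Self-assessment opportunities": 2})
```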

In short, this is a very useful checklist for people already running or currently designing programmes with a high online component, and well worth a look. Using a checklist does not guarantee an ‘exemplary’ student experience; it is simply a way to ensure that what are nowadays commonly regarded as critical components of success are fully considered in course design and planning. Some of the sections may need some ‘interpretation’ or localisation – and that is hopefully where E-Learning Environments can help!