Digital Education team blog

Ideas and reflections from UCL's Digital Education team

Archive for the 'Evaluation' Category

Reflections before UCL’s first Mooc

By Matt Jenner, on 26 February 2016

Why We Post: Anthropology of Social Media

UCL’s first Mooc – Why We Post: The Anthropology of Social Media – launches on Monday on FutureLearn. It’s not actually our first Mooc – and it’s not even one Mooc, it’s nine! Eight other versions are launching simultaneously on UCLeXtend in the following languages: Chinese, English, Italian, Hindi, Portuguese, Spanish, Tamil and Turkish. If that’s not enough, we seem to have quite a few under the banner of UCL:

(quite a few of these deserve the title of ‘first’ – but who’s counting…)

Extended Learning Landscape – UCL 2015

UCL is unusual in some of these respects – we have multiple platforms which form part of our Extended Learning Landscape. This maps out areas of activity such as CPD, short courses, Moocs, public engagement and summer schools (and many more) and tries to understand how we can use digital education / e-learning with these (and what happens when we do).

 

Justification for Moocs

We’ve not yet launched our first Mooc (apparently), but we already need to develop a mid-term plan so we can do more. Can we justify the ones we’ve done so far? Well, a strong evaluation will certainly help, but we also need an answer to the most pertinent pending question:

How much did all this cost and was it worth it? 

It’s a really good question, one we started asking a while ago, and still the answer feels no better than educated guesswork. Internally we’re working on merging a Costing and Pricing tool (not published, sorry) and the IoE / UCL Knowledge Lab Course Resource Appraisal Modeller (CRAM) tool. The goal is to have a tool which takes the design of a Mooc and outputs a realistic cost. It’s pretty close already – but we need to feed in some localisations from our internal Costing and Pricing tool such as Estates cost, staff wages, Full Economic Costings, digital infrastructure, support etc. The real cost of all this is important. But the value? Well…
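
As a very rough illustration of the kind of calculation such a tool would automate, here is a minimal sketch in Python. The cost categories, rates and figures are invented for the example – they are not drawn from our Costing and Pricing tool or from CRAM.

```python
# A minimal, illustrative sketch of a Mooc costing calculation. All category
# names, rates and figures are invented for the example; a real tool (such as
# the Costing and Pricing / CRAM merge described above) would draw them from
# institutional data.

def estimate_mooc_cost(staff_days, day_rate, media_production,
                       platform_and_support, overhead_rate=0.3):
    """Return a rough total cost for designing and running a Mooc.

    staff_days           -- estimated person-days across the course team
    day_rate             -- loaded daily staff cost (wages plus on-costs)
    media_production     -- video/audio production spend
    platform_and_support -- digital infrastructure and learner support
    overhead_rate        -- uplift for estates, Full Economic Costing, etc.
    """
    direct = staff_days * day_rate + media_production + platform_and_support
    return direct * (1 + overhead_rate)

# Example: 60 person-days at £350/day, £15,000 of media, £5,000 platform/support.
print(f"Estimated cost: £{estimate_mooc_cost(60, 350, 15000, 5000):,.0f}")
```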

Evaluation

We’ve had a lot of ideas and thoughts about evaluation: what is the value of running Moocs for the university? It feels right to mention public engagement, the spirit of giving back and developing really good resources that people can enjoy. There’s the golden carrot being dangled of student recruitment, but I can’t see that balancing any profit/loss sheets. I do not think it’s about pedagogical innovation; let’s get real here: most Moocs are still a bulk of organised expert videos and text. I don’t think this does a disservice to our Moocs, or those of others – I’d wager that people really like organised expert videos and text (YouTube and Wikipedia being stable Top 10 global websites hints at this). But there are other reasons – building Moocs is a new way to engage a lot of people with your topic of interest. Wider diffusion of the common corpus of subjects is a good thing; these courses are open to anyone who can access them. The next logical step is subjects of fascination – niche, specialist, bespoke – all apply to the future of Moocs.

For evaluation, some obvious things to measure are listed below (with a rough sketch of how these might be recorded together after the list):

  • Time people spend on developing the Mooc – we’ve got a breakdown document which tries to list each part of making / running a Mooc so we can estimate the time spent.
  • Money spent on media production – this one tends to be easy
  • Registration, survey, quiz, platform usage and associated learner data
  • Feedback from course teams on their experience
  • Outcomes from running a Mooc (book chapters, conference talks, awards won, research instigated)
  • Teaching and learning augmentation (i.e. using the Mooc in a course/module/programme)
  • Developing digital learning objects which can be shared / re-used
  • Student recruitment from the Mooc
  • Pathways to impact – for research-informed Moocs (and we’re working on refining what this means)
  • How much we enjoyed the process – this does matter!
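
To make these measures easier to compare across courses, they could be gathered into a common record per Mooc. The sketch below is illustrative only – the field names and example values are ours, not part of any existing UCL reporting tool.

```python
# A sketch of a per-Mooc evaluation record covering the measures listed above.
# Field names and example values are illustrative only.
from dataclasses import dataclass, field

@dataclass
class MoocEvaluation:
    title: str
    development_days: float                  # time spent making/running the Mooc
    media_spend_gbp: float                   # money spent on media production
    registrations: int                       # platform and learner data
    survey_responses: int
    team_feedback: str                       # course team's experience, summarised
    outcomes: list = field(default_factory=list)  # chapters, talks, awards, research
    reused_in_teaching: bool = False         # used in a course/module/programme?
    recruited_students: int = 0              # recruitment traced back to the Mooc
    enjoyment_rating: int = 0                # 1-5; this does matter!

record = MoocEvaluation(
    title="Why We Post: The Anthropology of Social Media",
    development_days=120, media_spend_gbp=20000,
    registrations=0, survey_responses=0,
    team_feedback="to be collected after the first run",
)
print(record.title, "-", record.development_days, "person-days (illustrative)")
```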

Developing a Mooc – lessons learned

Communication

Designing a course for FutureLearn involves a lot of communication, both internally and with external partners – mostly our partner manager at FutureLearn, but there are others too. This mostly means a serious number of emails – 1,503 (so far) to be exact. How? If I knew, I’d be rich or loaded with oodles of time. It’s another new year’s resolution: stop; think: do you really need to send / read / keep that email? Likely not! I tried to get us on Trello early so as to avoid this, but I didn’t do so well, and as the number of people involved grew, adding them all to a humungous Trello board just seemed, well, unlikely. Email: I shall understand you one day, but for now, I surrender.

Making videos

From a bystander’s viewpoint I think the course teams all enjoyed making their videos (see the final evaluation point). The Why We Post team had years to make their videos in situ during their research across the world. This is a great opportunity to capture real people in their own context; I don’t think video gets much better than this. They had permission from the outset to use the video for educational purposes (good call) and wove it right into the fabric of the course – and you can tell. Making Babies in the 21st Century has captured some of the best minds in the field of reproduction; Dan Reisel (lead educator) knows the people he wants, is well connected and has captured and collated experts in the field – a unique and challenging achievement. Tim Shakespeare, lead educator of The Many Faces of Dementia, was keen to capture three core groups for his course: people with dementia, their carers and family, and the experts working to improve the lives of people with dementia. This triangle of people makes it a rounded experience for any learner; you’ll connect with at least one of these groups. Genius.

Also:

  • Audio matters the most – bad audio = not watching
  • Explain and show concepts – use the visual element of video to show what you mean, not a chin waggling around
  • Keep it short – it’s not about attention spans; it’s a useful course-structuring exercise.
  • Show your face – people still want to see who’s talking at some point
  • Do not record what can be read – it’s slower to listen than it is to read; if your video can be replaced with an article, you may want to replace it.
  • Captions and transcripts are important – do as many as you can. Bonus: videos can then be translated.

Using third party works

This remains as tricky as it ever has been. Moocs are murky (commercial? educational? for-profit?), but you’ll need to ask permission for every single piece of third-party work you want to use. Best advice: try not to, or be prepared to get no response! Images are the worst: it’s a challenge to find lots of great images that you’re allowed to use, and a course without images isn’t very visually compelling. Set aside some time for this.

Designing social courses that can also be skim-read

FutureLearn, in particular, is a socially oriented learning platform – you’ll need to design a course around peer-to-peer discussion. Some of this is about crossing thresholds – you’re trying to teach learners something important, and enabling rich discussion will help. You’re also trying to keep them engaged – so you can’t ask for a deep, thoughtful intervention every two minutes. Find the balance between asking important questions – raising provocative points and enjoying the fruits of the discussion – and the reality of ‘respond if you want’ discussion prompts.

Connect course teams together

While they might not hold one another’s hair when things get rough, the course teams will benefit from sharing their experiences with one another. We’ve held monthly meetings since the beginning, encouraging each team to attend and share their updates and challenges, show content, see examples from other courses and generally make it a more social experience. Some did share their Dropboxes with one another – which I hadn’t expected, but I’m enjoying the level of transparency. I am guilty of thinking at scale at the moment, so while I was guiding and pseudo project-managing the courses, I was keen to promote independence and agency within the course teams. It’s their course, and they’ll be the ones working into the night on it; I can’t have them relying on me and my dreaded inbox. The outcome is that they build their own ideas and shape them in their own style; maybe we’re lucky, but this is important. We do intervene at critical stages, recommending approaches and methods as appropriate.

Plan, design and then build

Few online learning environments make good drafting tools. We encouraged a three-stage development process:

  1. Proposals, expanded into Excel-based outlines. These cover each week, the headline for each step/component and critical elements like discussion starters.
  2. Design in documents – Word/Google Docs (whatever) – expanding each week into what goes in each step. Great for editorial review and refinement.
  3. Build in the platform.

The reason for this is that the outlines are usually quick to fix when there’s a glaring structural omission or error. The document-based design then means content can be written and refined, and steps planned out, in a loose, familiar tool. Finally, the platform needs to be played with and understood, and the documents translated into real courses. It’s not a rigid process, and some courses had an ABC (Arena Blended Connected) curriculum design stage, just to be sure the storyboard of the course made sense.

Overall

  • It’s hard work – for the course teams – and you can almost guarantee they’ll underestimate the amount of time needed.
  • The value shows once you go live and people start registering, sharing early comments on the Week 0 discussion areas.
  • These courses look good and work well as examples for others, whether Moocs or credit-bearing blended/online courses.
  • Courses don’t need to be big – 1–2 hours a week over 2–4 weeks is enough. I’d like to see more smaller Moocs.
  • Integrating your Moocs into taught programmes, modules, CPD courses makes a lot of sense

As a final observation before we go live with the first course (Why We Post: The Anthropology of Social Media) on Monday, there was one thing that caught my eye early on:

Every course team leader for our Moocs is primarily a researcher, and their Moocs are produced largely from their research activity. UCL is research intensive, so this isn’t too crazy, but we’re also running an institutional initiative, the Connected Curriculum, which is designed to fully integrate research and teaching. The Digital Education team is keen to see how we build e-learning into research from the outset. This leads us to a new UCL project entitled Pathways to Impact: Research Outputs as Digital Education (ROADE), where we’re exploring the origins and value of research dissemination through e-learning objects and courses. More soon on that one – but our Mooc activity has really kick-started this work.

Coming soon – I hope – Reflections after UCL’s first Mooc 🙂 

 

Online learning and the No Significant Difference phenomenon

By Mira Vogel, on 20 August 2015

When asked for evidence of the effectiveness of digital education I often find it hard to respond, even though this is one of the best questions you can ask about it. Partly this is because digital education is not a single intervention but a portmanteau of different applications interacting with the circumstances and practices of staff and students – in other words, it’s situated. Another reason is that evaluation by practitioners tends not to be well resourced or rewarded, leading to a lack of well-designed and well-reported evaluation studies to synthesise into theory. For these reasons I was interested to see a paper by Tuan Nguyen titled ‘The effectiveness of online learning: beyond no significant difference and future horizons’ in the latest issue of the Journal of Online Learning and Teaching. Concerned with the generalisability of research which compares ‘online’ to ‘traditional’ education, it offers critique and proposes improvements.

Nguyen directs attention to nosignificantdifference.org, a site which indicates that 92% of distance or online education is at least as effective as, or better than, what he terms ‘traditional’ (i.e. in-person, campus-based) education. He proceeds to examine this statistic, raising questions about the studies included and a range of biases within them.

Because the studies include a variety of interventions in a variety of contexts, it is impossible to define an essence of ‘online learning’ (and the same is presumably true for ‘traditional learning’). From this it follows that no constant effect is found for online learning; most of the studies had mixed results attributed to heterogeneity effects. For example, one found that synchronous work favoured traditional students whereas asynchronous work favoured online students. Another found that, as we might expect, its results were moderated by race/ethnicity, sex and ability. One interesting finding was that fixed timetabling can enable traditional students to spend more time on task than online students, with correspondingly better outcomes. Another was that improvements in distance learning may only be identifiable if we exclude what Nguyen tentatively calls ‘first-generation online courses’ from the studies.

A number of the studies contradict each other, leading some researchers to argue that much of the variation in observed learning outcomes is due to research methodology. Where the researcher was also responsible for running the course, there was concern about vested interests in the results of the evaluation. The validity of quasi-experimental studies is threatened by confounding effects, such as students from a control group being able to use friends’ accounts to access the intervention. One major methodological concern is endogenous selection bias: where students self-select their learning format rather than being randomly assigned, there are indications that the online students are more able and confident, which in turn may mask the effectiveness of the traditional format. Also related to sampling, most data comes from undergraduate courses; Nguyen wonders whether graduate students with stronger independent learning skills might fare better with online courses.
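
To see concretely how self-selection can hide a real difference, here is a small simulated example (our own illustration, not data or code from Nguyen’s paper): stronger students disproportionately choose the online format, so a naive comparison of average scores understates – here, even reverses – a genuine advantage of the traditional format, while controlling for ability recovers it.

```python
# Toy simulation of endogenous selection bias (our own illustration, not data
# from Nguyen's paper). The traditional format genuinely adds 5 points here,
# but stronger students disproportionately choose the online format.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
ability = rng.normal(0, 1, n)

# Self-selection: higher-ability students are more likely to study online.
p_online = 1 / (1 + np.exp(-2 * ability))
online = rng.random(n) < p_online
traditional = ~online

# Outcomes: a true +5 effect of the traditional format, plus ability and noise.
score = 60 + 5 * traditional + 8 * ability + rng.normal(0, 5, n)

naive_gap = score[traditional].mean() - score[online].mean()
print(f"Naive traditional-minus-online gap: {naive_gap:+.1f}")  # far below +5

# A simple regression that controls for ability recovers roughly +5.
X = np.column_stack([np.ones(n), traditional.astype(float), ability])
coef, *_ = np.linalg.lstsq(X, score, rcond=None)
print(f"Effect of traditional format, controlling for ability: {coef[1]:+.1f}")
```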

Lest all of this feed cynicism about bothering to evaluate at all, it is worth remembering that only evaluation research can empower good decisions about where to put our resources and energies. What this paper indicates is that it is possible to design out, or control for, some of the confounding factors it raises. Nguyen makes a couple of suggestions for the ongoing research agenda. The first he terms the “ever ubiquitous” more-research-needed approach to investigating heterogeneity effects.

“In particular, there needs to be a focus on the factors that have been observed to have an impact on the effectiveness of online education: self-selection bias, blended instruction, active engagement with the materials, formative assessment, varied materials and repeatable low-stake practice, collaborative learning communities, student maturity, independent learning skills, synchronous and asynchronous work, and student characteristics.”

He points out a number of circumstances which are under the direct control of the teaching team – such as opportunities for low-stakes practice, occasions for synchronous and asynchronous engagement, and varied materials – which are relatively straightforward to adjust and relate to student outcomes. He also suggests how to approach weighting and measuring these. Inevitably, thoughts turn to individualising student learning, and it is this, particularly in the form of adaptive learning software, that Nguyen proposes as the most likely way out of the No Significant Difference doldrums. Determining the most effective pathways for different students in different courses promises to inform those courses’ ongoing designs. This approach puts big data in the service of individualisation based on student behaviour or attributes.

This dual emphasis of Nguyen’s research agenda avoids an excessively data-oriented approach. When evaluation becomes diverted into trying to relate clicks to test scores, not only are some subject areas under-researched, but the benefits of online environments are liable to be conceived in the narrowed terms of the extent to which they yield enough data to individualise student pathways. This in itself is an operational purpose which overlooks the educational qualities of environments as design spaces in which educators author, exercise professional judgment, and intervene contingently.

I had a bit of a reverie about vast repositories of educational data such as LearnSphere and the dangers of allowing them to over-determine teaching (though I don’t wish to diminish their opportunities, either). I wished I had completed Ryan Baker’s Big Data in Education Mooc on EdX (this will run again, though whether I’ll be equal to the maths is another question). I wondered if the funding squeeze might conceivably lead us to adopt paradoxically homogeneous approaches to coping with the heterogeneity of students, where everyone draws similar conclusions from the data and acts on them in similar ways, perhaps buying off-the-shelf, black-box algorithmic solutions from increasingly monopolistic providers.

Then I wondered if I was indulging dystopian flights of fancy, because in order for click-by-click data to inform learning activity design you need to triangulate it with something less circumstantial – you need to know the whys as well as the whats and the whens. Click data may provide circumstantial evidence about what does or doesn’t work, but on its own it can’t propose solutions. Speculating about solutions is a luxury – A/B testing on students may be allowed in Moocs and other courses where nobody’s paying, but it’s a more fraught matter for established higher education cohorts. Moreover, Moocs are currently outside many institutions’ quality frameworks, and this is probably why their evaluation questions often seem concerned with engagement rather than learning. Which is to say that Mooc evaluations which are mainly click- and test-data-oriented may have limited light to shed outside those Mooc contexts.

Evaluating online learning is difficult because evaluating learning is difficult. To use click data and test scores in a way which avoids unnecessary trial and error, we will need to carry out qualitative studies. Nguyen’s two approaches should be treated as symbiotic.


Video HT Bonnie Stewart.

Nguyen, T. (2015). The effectiveness of online learning: beyond no significant difference and future horizons. Journal of Online Learning and Teaching, 11(2). Retrieved from http://jolt.merlot.org/Vol11no2/Nguyen_0615.pdf

 

ABC (Arena Blended Connected) curriculum design

By Natasa Perovic, on 9 April 2015

(For latest news about ABC LD, visit ABC LD blog)

The ABC curriculum design method is a ninety-minute hands-on workshop for module (and programme) teams. This rapid-design method starts with your normal module (or programme) documentation and will help you create a visual ‘storyboard’. A storyboard lays out the type and sequence of learning activities required to meet the module’s learning outcomes, and how these will be assessed. ABC is particularly useful for new programmes or those changing to an online or a more blended format.

The method uses an effective and engaging paper card-based approach based on research from the JISC* and UCL IoE**. Six common types of learning activities are represented by six cards. These types are acquisition, inquiry, practice, production, discussion and collaboration.


The team starts by writing a very short ‘catalogue’ description of the module to highlight its unique aspects. The rough proportion of each learning type (e.g. how much practice, or collaboration) is agreed, along with the envisaged blend of face-to-face and online.


Next the team plans the distribution of each learning type by arranging the postcard-sized cards along the timeline of the module. With this outline agreed, participants turn over the cards. Each card lists online and conventional activities associated with its learning type, and the team can pick from this list and add their own.

workshop team selecting activities

The type and range of learner activities soon become clear, and the cards often suggest new approaches. The aim of this process is not to advocate any ‘ideal’ mix but to stimulate a structured conversation among the team.

Participants then look for opportunities for formative and summative assessment linked to the activities, and ensure these are aligned to the module’s learning outcomes.


 

The final stage is a review: the team checks whether the balance of activities and the blend have changed, then agrees and photographs the new storyboard.

The storyboard can then be used to develop detailed student documentation or to outline a Moodle course (a module in Moodle).
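
For teams who want to keep working with the storyboard after the workshop, it can also be captured digitally so the balance of learning types and the face-to-face/online blend can be checked against what was agreed at the outset. The snippet below is our own illustrative sketch, not an official ABC tool.

```python
# A sketch of an ABC storyboard captured digitally (our own illustration, not
# an official ABC tool). Each card is a learning type placed in a week and
# flagged as online or face-to-face, so the team can check the balance and
# blend against what was agreed at the start of the workshop.
from collections import Counter

LEARNING_TYPES = {"acquisition", "inquiry", "practice",
                  "production", "discussion", "collaboration"}

storyboard = [
    # (week, learning type, online?)
    (1, "acquisition", True), (1, "discussion", True),
    (2, "inquiry", False),    (2, "practice", True),
    (3, "collaboration", True), (3, "production", False),
]

assert all(card[1] in LEARNING_TYPES for card in storyboard)

balance = Counter(card[1] for card in storyboard)
online_share = sum(card[2] for card in storyboard) / len(storyboard)

print("Balance of learning types:", dict(balance))
print(f"Proportion online: {online_share:.0%}")
```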

 


The ABC team is developing a programme-level version based on the Connected Curriculum principles.

Participants’ thoughts about ABC curriculum design workshop:

 

For questions and workshops, contact Clive and Nataša.

 


References:

*Viewpoints project JISC

**UCL IoE: Laurillard, D. (2012). Teaching as a Design Science: Building Pedagogical Patterns for Learning and Technology. New York and London: Routledge.

 

Aloha ELESIG London

By Mira Vogel, on 31 March 2015

A summary of the first meeting of the London regional group of the Evaluation of Learners’ Experiences of E-Learning national special interest group, a.k.a. ELESIG (and breathe). It took place on Tuesday 24th March, 11.00am–1.00pm, at Birkbeck, University of London. The talks weren’t recorded, but you can find slides on the ELESIG London Group discussion forum.

Eileen Kennedy presented a case study on the UCL Institute of Education’s ‘What future for education’ Mooc. The Mooc had a repeating weekly structure: a reflection task, a recorded interview, open-access readings, posting to a Padlet wall on a theme (‘Where do you learn?’, for example), a Google Hangout, and a review and reflection (the latter was a main way for the Mooc team to gather feedback). Eileen’s study of the learner experience aimed to find out whether the design of the Mooc could enable a dialogic educational experience at scale, and whether the learning led students to interrogate their prior assumptions. The end-of-Mooc survey yielded some appreciation for most of the elements of the Mooc, but the real-time hangouts were hard to join. Respondents wanted external validation of their learning in the form of a statement of accomplishment and a peer grading system they were confident was rigorous. To supplement this survey data, the evaluation team mapped their findings to Laurillard’s conversational framework in a matrix of elements covering what the learners did, the justification for including that type of element in this situation, the specific role of the element in the Mooc, and the evidence collected or needed. We discussed ways to make the rationale of the course design more explicit to students to help them identify hinge points in their learning. The yearning for attention and recognition raised the matter of the relationship between Mooc providers and learners, and the role of caring. We noted that the Mooc is destined to be packaged up as an on-demand Mooc, which seems to be part of a global trend in response to a lack of resource to run courses live.

Ghazaleh Cousin presented on an evaluation of the Panopto lecture capture service at Imperial. Beyond the basic Panopto reports about who accessed which recording and for how long, questions included whether viewing is associated with differences in students’ results, which sessions are most popular, and which days are most popular. Since Panopto’s data is currently quite limited, Imperial are contributing feature requests. We discussed whether students who perform better are watching the videos more. To address this, videos could be made which discourage students from fixating on memorising explanations. We touched only briefly on methods – the team did not have immediate opportunities to arrange questionnaires and interviews, and opted to make sense of the Panopto data as a way to generate deeper questions. At the more challenging methodological end, there was interest in comparing learning from lecture recordings to learning from lecture graphics or lecture pedagogies.

Damien Darcy presented on uses of video at Birkbeck. Before Birkbeck’s Panopto roll-out, use of video at Birkbeck was sporadic, professional or slightly Blair Witchy, and it wasn’t clear how to record a lecture. Video was treated in a technocentric way isolated from educational concerns of assessment or student engagement. Damien carried out an exploratory study with the Law department, as large scale Panopto users, with a methodology he referred to as ‘guerilla ethnography’. His questions were: was it working, was it used (properly) by staff, how were students using it? He confirmed that decontextualised training doesn’t carry across to the rigours of the lecture hall, and superstitions about how technologies work persist. He related a sense of control, pride and ownership to increasing proficiency. Panopto data showed that peak viewing was often immediately after the lecture, and there were signs that if the lecture wasn’t up quickly it wouldn’t get watched. Watching was often social, often while doing other things, and was predictably uneven with spikes at particular points and particular times related to assessment. As video was normalised student expectations became more exacting, with requests for consistent tagging and titles and the inclusion of an overview. To contain their video initiative, Organisational Psychology had initiated a dialogue with students about what to record – i.e. not everything – and what to leave as ephemeral. Damien’s next steps would be to find out more about student reactions and perceptions, lecturer motivations, and how the identity of the lecture is changing. Methods would include surveys, focus groups, and a range of ethnographic studies looking at changes to the identity of lecture and lecturer. Questions would be informed by Panopto data.

We then discussed next steps for ELESIG London – in no particular order:

  • Case-making for resourcing evaluation activities.
  • Understanding and negotiating institutional barriers to evaluation.
  • How to take the findings from an evaluation and create narratives of impact.
  • Micro-evaluation possibilities: what kinds of evaluation can you do if you have only been given ten minutes? One day? Ten days? As you go along?
  • Methods masterclasses including ethnography and data wrangling
  • Can learning experiences be designed so that a change the evaluation identifies in students can be related to a specific aspect of course design or learning?
  • Incorporating evaluation into developing new programmes.
  • Should the group have outputs?
  • Can we improve the generalisability of findings by coordinating our evaluation activities across institutions?
  • Not encroaching on other London e-learning groups such as the M25LTG – keeping focus on evaluation (e.g. methods, data, analysis, interpretation, politics and strategic importance).
  • Twitter rota for the national ELESIG account by region rather than by individual.

The coordinators (Leo Havemann and Mira Vogel) will be incorporating these ideas into plans for the next meeting in summer.

If you are interested in attending or keeping up with ELESIG London goings-on or you’d like to contact a coordinator, then join the London Group on Ning.

Image credit: IMG_5505 by Oliver Hine, 2009. Work found at https://www.flickr.com/photos/27718575@N07/4117063692/ (https://creativecommons.org/licenses/by-nc-nd/2.0/)

A good peer review experience with Moodle Workshop

By Mira Vogel, on 18 March 2015

Update Dec 2015: there are now three posts on our refinements to this peer feedback activity: one, two, and three.

Readers have been begging for news of how it went with the Moodle Workshop activity from this post.

Workshop is an activity in Moodle which allows staff to set up a peer assessment or (in our case) peer review. Workshop collects student work, automatically allocates reviewers, allows the review to be scaffolded with questions, imposes deadlines on the submission and assessment phase, provides a dashboard so staff can follow progress, and allows staff to assess the reviews/assessments as well as the submissions.
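
For anyone curious how automatic allocation can guarantee that nobody reviews their own work while everyone gives and receives the same number of reviews, the sketch below shows one common approach – a shuffled round robin. It is an illustration of the idea only, not Moodle’s actual implementation.

```python
# Illustrative sketch of round-robin peer-review allocation (not Moodle's
# actual implementation). After shuffling, each student reviews the next
# `reviews_each` students around the circle, so nobody reviews their own
# work and everyone gives and receives the same number of reviews.
import random

def allocate_reviewers(students, reviews_each=2, seed=None):
    if reviews_each >= len(students):
        raise ValueError("Need more students than reviews per student")
    order = list(students)
    random.Random(seed).shuffle(order)
    n = len(order)
    return {order[i]: [order[(i + k) % n] for k in range(1, reviews_each + 1)]
            for i in range(n)}

allocation = allocate_reviewers(["Ana", "Ben", "Chi", "Dev", "Eve"], seed=42)
for reviewer, reviewees in allocation.items():
    print(f"{reviewer} reviews {', '.join(reviewees)}")
```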

However, except for some intrepid pioneers, it is almost never seen in the wild.

The reason for that is partly the daunting number and nature of the settings – there are several pitfalls to avoid which aren’t obvious on first pass – but also the fact that, because it is a process, you can’t easily see a demo, and running a test instance is pretty time-consuming. If people try once and it doesn’t work well, they rarely try again.

Well look no further – CALT and ELE have it working well now and can support you with your own peer review.

What happened?

Students on the UCL Arena Teaching Associate Programme reviewed each other’s case studies. Twenty-two then completed a short evaluation questionnaire in which they rated their experience of giving and receiving feedback on a five-point scale and commented on their responses. The students were from two groups with different tutors running the peer review activity. A third group leader chose to run the peer review on Moodle Forum, since it would allow students to easily see each other’s case studies and feedback.

The students reported that giving feedback went well (21 respondents):

Pie chart: satisfaction with reviewing work

This indicates that the measures we took – see previous post – to address disorientation and participation were successful. In particular we were better aware of where the description, instructions for submission, instructions for assessment, and concluding comments would display, and put the relevant information into each.

Receiving feedback also went well (22 respondents) though with a slightly bigger spread in both directions:

Pie chart: satisfaction with receiving reviews

 

Students appreciated:

  • Feedback on their work.
  • Insights about their own work from considering others’ work.
  • Being able to edit their submission in advance of the deadline.
  • The improved instructions letting them know what to do, when and where.

Staff appreciated:

This hasn’t been formally evaluated, but from informal conversations I know that the two group leaders appreciate Moodle taking on the grunt work of allocation. However, this depends on setting a hard deadline with no late submissions (otherwise staff have to keep checking for late submissions and allocating those manually) and one of the leaders was less comfortable with this than the other. Neither found it too onerous to write diary notes to send reminders and alerts to students to move the activity along – in any case this manual messaging will hopefully become unnecessary with the arrival of Moodle Events in the coming upgrade.

For next time:

  • Improve signposting from the Moodle course area front page, and maybe the title of the Workshop itself, so students know what to do and when.
  • Instructions: let students know how many reviews they are expected to do; let them know if they should expect variety in how the submissions display – in our case some were attachments while others were typed directly into Moodle (we may want to set attachments to zero); include word count guidance in the instructions for submission and assessment.
  • Consider including an example case study & review for reference (Workshop allows this).
  • Address the issue that, due to some non-participation during the Assessment phase, some students gave more feedback than they received.
  • We originally had a single comments field but will now structure the peer review with some questions aligned to the relevant parts of the criteria.
  • Decide about anonymity – should both submissions and reviews be anonymous, or one or the other, or neither? These can be configured via the Workshop’s Permissions. Let students know who can see what.
  • Also to consider: we could change Permissions after the activity is complete (or even while it’s running) to allow students to access the dashboard and see all the case studies and all the feedback.

Have you had a good experience with Moodle Workshop? What made it work for you?