
Digital Education team blog


Ideas and reflections from UCL's Digital Education team


Archive for the ‘Mira’s Mire’ Category

Students’ intellectual property, open nitty gritty

By Mira Vogel, on 19 May 2015

What happened when staff on one module encouraged students to openly license the online products of their assessed group work?

Object Lessons is a module on the Bachelor of Arts and Sciences at UCL. In keeping with its object-based nature and emphasis on inquiry and collaboration, part of the assessment is a group research project to produce a media-rich online exhibition. Because the exhibitions are lovely and shine a light on multimodal assessment, the teaching team are frequently approached by colleagues across UCL with requests to view them. In considering how to get students’ permission for this, Leonie Hannan (now at QUB), Helen Chatterjee and I quickly realised a few things. One, highlighted by an exchange with UCL’s Copyright specialist Chris Holland, was that the nature of the permission was hard to define and therefore hard to get consent for, so we needed to shift the emphasis away from staff and the nuances of their possible use scenarios, and onto the status of the work itself. Another was that since the work was the product of a group and could not be decomposed into individual contributions without breaking the whole, consent would need to be unanimous. Then there was the question of administrative overhead related to obtaining consent and actually implementing what students had consented to – potentially quite onerous. And finally the matter presented us with some opportunities we shouldn’t miss, namely to model taking intellectual property seriously and to engage students in key questions about contemporary practices.

We came up with four alternative ways for students to license their work ranging incrementally from open to private. We called these:

1. Open;
2. Publish;
3. Show;
4. Private.

You can read definitions of each alternative in the document ‘Your groupwork project – requesting consent for future use scenarios’ which we produced to introduce them to students. As part of their work students were required to discuss these, reach a unanimous decision on one, and implement it by publishing the exhibition (in full, selectively, or not at all) and providing an intellectual property notice on its front page. That way staff would not have to collect consent forms or gate-keep access.

Before we released it to students I circulated the guidance to two Jiscmail discussion groups (Open Educational Resources and Association for Learning Technology) and worked in some of their suggestions. A requirement that students include a statement within the work itself reduces the administrative overhead and, we hoped, would be more future-proof than staff collecting, checking off and filing paper records. While making it clear that students would not be at any disadvantage if they chose not to open their work, we also took a clear position in favour of Creative Commons licensing – the most open of our alternatives – since, as well as flexibility and convenience, it would potentially lend the work more discoverability and exposure.

What did the students choose? In the first iteration, out of ten groups:

  • Five opted for Open. Between them they used three different varieties of Creative Commons licence, and one submitted their work to Jorum;
  • Two opted for Publish;
  • None opted for Show;
  • Three opted for Private (including one which didn’t make a statement; since the group kept the work hidden this defaults to Private).

We haven’t yet approached the students to ask about their decision-making processes, but from informal conversations and reading some of the intellectual property statements we know that there are different reasons why half the students decided not to make their work open. One was the presence of elements which were not themselves open, and therefore could not be opened in turn. From evaluations of a number of other modules, we know that the students were not generally all that enthusiastic about the platform they were asked to use for their exhibition (Mahara, which is serviceable but vanishingly rare outside educational settings). This may have contributed to another factor, which was that not all group members felt the work reflected well on them individually.

Then there’s the matter of deciding to revoke consent, which is something individual students can do at any time. In the context of group work we decided that if any group member later wants to reduce the openness of the work, their preference overrides those of the other members. That doesn’t work in reverse though – a student can’t increase openness without the consent of all other group members. So here we are privileging individuals who want to close work, although we do encourage them to consider instead simply ending their association with it. We have yet to see how this state of affairs plays out, and that may take quite a while. But so far it seems stable and viable.
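
To make the asymmetry concrete, here is a minimal Python sketch of the rule, using our four licence levels. It is purely illustrative – the function names are our own invention and nothing like this runs in any real system; in practice decisions are recorded in the exhibitions’ intellectual property notices, not in code.

```python
# The four alternatives from the consent document, ordered most open to most closed.
LEVELS = ["Open", "Publish", "Show", "Private"]

def reduce_openness(current: str, requested: str) -> str:
    """Any single group member may move the work towards Private."""
    if LEVELS.index(requested) > LEVELS.index(current):
        return requested  # more restrictive, so it takes effect immediately
    return current

def increase_openness(current: str, requested: str, votes: list) -> str:
    """Opening the work further requires every member's consent."""
    if LEVELS.index(requested) < LEVELS.index(current) and all(votes):
        return requested
    return current
```

For example, `reduce_openness("Open", "Private")` takes effect on one member’s say-so, while `increase_openness("Publish", "Open", [True, False, True])` leaves the level at Publish because consent was not unanimous.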

We would be very interested in your views, suggestions and any experiences you have had with this kind of thing – please do comment below.

Particular thanks to Pat Lockley and Javiera Atenas for their input.

Image source: MicroAssist, 2012. Brass tacks. Work found at https://www.flickr.com/photos/microassist/7136725313/. Licensed as CC BY-SA.

The UCL Teaching Administrators Conference 2015

By Mira Vogel, on 18 May 2015

UCL’s Teaching Administrators Conference is an event dedicated to supporting learning and teaching (in the professional services sense) which this year took place on 23rd April. UCL people can access conference materials including recorded plenaries, slides, &c from the dedicated section of the Teaching Administrators Forum Moodle space.

In his opening plenary the Provost Michael Arthur explained the early impact of UCL’s 2034 strategy. Three initiatives related to students’ experience were students undertaking Changemakers projects to shape their curricula, the Arena programme of professional development opportunities convened by CALT, and the Connected Curriculum with its throughline of research-based education. The Provost envisages teaching administrators “transporting the best ideas from students into the academic community” and helping to enact the Student Information Strategy. As such TAs would be “important if not powerful”. Asked about career progression opportunities within the current structure, he emphasised that TAs would need to be prepared to move departments. On the mission-critical Quality Assurance Exercise he had a clear message: “We’ve been complacent and I don’t think our processes are tight enough”.

I led a repeating session to find out more about workflows with e-assessment platforms in departments, with a view to feeding into the Jisc Electronic Management of Assessment project. Participants drew and wrote on an A3 sheet with a light structure. The idea was to give an occasion for reviewing the processes, to see what was happening in other departments, and to ask questions. Hopefully it was useful – there was certainly a lot of discussion. From this sample of 13 artefacts (some participants worked in pairs) it’s immediately clear that Turnitin use outweighs Moodle Assignment use and that hardly any other digital platforms are used beyond those two; about half the departments were collecting hard copy alongside digital submissions, with a corresponding amount of paper-based marking but also some digitisation of paper-based feedback to pass back to students. Difficulties with blind second marking interfere with the uptake of e-assessment.

That left me time to attend one parallel session, Simon To and Tom Flynn on the Organisation and Management Benchmarking Tool co-developed by the NUS and the Association of University Administrators. The tool can be used by UCL’s Student Academic Representatives (StARs, numbering around 800 these days). As well as helping identify priorities, it can also help to highlight trade-offs implicit in balancing organisation and management needs. The discussion raised many interesting questions. What happens when staff and student needs are mutually exclusive? How can institutions explain decisions to students? Which decisions do students want explained? Is there a gap in skills and knowledge? What can data from, say, FAQ web pages tell us about student needs? What is a good balance between being reactive and pre-emptive? For each of the framework’s 10 principles there are descriptors along a continuum from ‘Underdeveloped’ to ‘Outstanding’. Plenty of suggestions were made in the course of the discussion.

Other sessions included an e-learning update, managing an exam board, personal and professional development for staff, and several more.

Carroll Graham (University of Technology, Sydney) and Julie-Anne Regan (Academic Development Advisor, University of Chester) presented their research into what Australian and UK professional services staff understand to be their contribution to student outcomes. Their participants ranked their contribution to behaviours which promote positive student outcomes, i.e. retention, persistence and success. There was clear agreement across countries that the most important contribution is institutional behaviours, environments and processes which are welcoming and efficient – this unpacks into responses to student inquiries and the quality of the institutional environment. However, there wasn’t much consensus, either within or between cohorts, on the other propositions, which the researchers put down to the diversity of professional roles compared to academic roles.

Wendy Appleby gave a lively talk on Student & Registry Services, with some illuminating organisational charts and the news that ‘flexible working’ may now be referred to as ‘agile working’, which sounds pretty dynamic. SRS will be heavily involved in the Student Information Strategy, the Student Centre and the QAA Higher Education Review. She reiterated SRS’ intention to work more closely with Teaching Administrators, where “‘closely’ means training rather than missives”.

Anthony Smith, Vice Provost for Education and Student Affairs, closed the conference by thanking TAs for being front-line, key, and “the face of UCL”. 2034 envisions the integration of undergraduate research to take students to the edge of knowledge. Translating these high-minded words into things departments can actually do will demand realism and creativity on the part of staff. The main elements of this plan are people, places, and practice – in other words, promotion of staff, work to improve learning spaces and partnership with the student body. The internal consultation for UCL’s education strategy has been launched.

Well done Stefanie Anyadi and the conference committee for another well run, diverse, stimulating and sociable event. Looking forward to the next one.

Teaching Administrators chat over lunch

Assessment born digital – Sian Bayne at UCL

By Mira Vogel, on 12 May 2015

Sian Bayne is Professor of Digital Education in the School of Education at the University of Edinburgh. She convenes the Digital Cultures and Education research group and teaches on the MSc in Digital Education, a fully-online course. At an earlier ELE Assessment & Feedback Special Interest Group (link for UCL people), Tony McNeill from SELCS – a graduate of that MSc – recommended we invite Sian to talk about assessment in a digital age, she kindly accepted, and Anthony Smith (UCL’s Vice Provost Education & Student Affairs) chaired the event. The abstract:

“The study and production of text is a defining academic activity, yet the way in which texts are shaped and shared in internet spaces presents an intriguing set of challenges to teachers and learners. Pedagogic work with the new generation of web artefacts requires us to work within a textual domain which is unstable, multilinear, driven by a visual logic and informed by authorship practices which are multimodal, public and sometimes collective. How can we critically approach these new writing spaces, as learners, teachers and scholars? Drawing on experience of conducting such assessment within a large, online Masters programme, the talk will demonstrate how assignments born digital can be rich, critical and creative. It will also consider how as teachers we can manage, mark and organise for these assessment forms.”

Sian’s MSc students have a range of digital skills. As a fully-online course contact is crucial, so students are required to blog frequently for a term, privately by default but shared if preferred, receiving individual feedback in the form of comments on posts. This is necessarily labour-intensive for the teaching team since it is intended to replicate the one-to-one tutorial within the blog space, as far as possible. To build students’ confidence and skills with multimodal presentation they’re set a number of formative tasks in advance of higher-stakes assessment – for example to rework a passage from Plato’s Phaedrus.

For high-stakes assessment students have a choice – they can submit work in established essay form, or they can instead work on digital artefacts out on the Web. Where these are public they can bring new and exhilarating kinds of attention, sometimes from the thinkers whose work they are referencing. Increasing numbers of students are choosing this multimodal alternative (a side effect is that the public nature of the work also raises the profile of the MSc).

Proposals to assess beyond the essay often prompt questions about the appropriateness of other modes for academic communication – as one person asked during the discussion, don’t images and music fall within a cultural domain apart from academia, an emotional realm of implicit meaning and taste – isn’t it more art than scholarly communication? Sian emphasised that multimodal assessment shouldn’t be treated as a special case, and that the MSc assessment criteria are conventional and shared with other postgraduate courses in Edinburgh. Moreover the student work we saw was sophisticated. A student used a screen capture of his explorations in Google Earth and Google Streetview, rhetorical forms attuned to the content of his work on flaneurship. To pose questions about the meaning of originality in a copy-paste age, another fabricated a plagiarised essay with each section linked to its source, juxtaposed with an essay on the same subject which adhered to established norms of academic integrity.

There was a question about whether assessment criteria conceived with text in mind could adequately comprehend the sensuality and interpretive ambiguity of multimodal work. Sian observed that the MSc assessors were alive to their burden of responsibility to interpret the work. There is a single holistic mark rather than a breakdown by criteria, and there is moderation and sometimes third marking. Trust between marker and student is important; assessing this kind of work depends on tutors and students knowing each other and building a relationship. Sian explained that students are asked to propose their own assessment criteria in addition to the regulated ones. There may be much to learn from assessment practices in the visual arts when assessing multimodal work in the humanities and social sciences. There was a discussion about the role of images – it was clear that they needed to be doing rhetorical work, and students who simply used them illustratively or ornamentally tended to be marked down.

On more than one occasion Sian observed that “text is not being toppled”. Digital modes aren’t taking over; it’s more a case of what exceeds, rather than what comes after, ‘the essay’. Programmes and institutions who are doing this now are the ones which are willing to experiment.

If you’re at UCL and want to experiment with multimodal assessment, E-Learning Environments looks forward to working with you. Contact your school’s E-Learning Facilitator to discuss – Jessica Gramp (BEAMS), Natasa Perovic (SLMS), and Mira Vogel (SLASH). At UCL there are plenty of precedents, including Making History (History Department), Internet Cultures (Institute of Education), Digital anthropology, the BEng, and an object-based learning module called Object Lessons (more on the latter to come). See also Laura Gibbs from the University of Oklahoma in a short conversation with Howard Rheingold about how her students retell old stories in new ways.

When UCL students edit Wikipedia

By Mira Vogel, on 15 April 2015

A presentation by Rocío Baños Pinero (Deputy Director, Centre for Translation Studies), Raya Sharbain (Year 2 undergraduate, Management Science and Innovation) and Mira Vogel (E-Learning Environments) for the UCL Teaching and Learning Conference, 2015. Here’s the abstract, with the presentation graphics embedded below and, in case you can’t see them, a PDF version.

See also the UCL Women’s Health Translatathon write-up.

Aloha ELESIG London

By Mira Vogel, on 31 March 2015

A summary of the first meeting of the London regional group of the Evaluation of Learners’ Experiences of E-Learning national special interest group, a.k.a. ELESIG (and breathe). It took place on Tuesday 24th March, 11.00am–1.00pm, at Birkbeck, University of London. The talks weren’t recorded but you can find slides on the ELESIG London Group discussion forum.

Eileen Kennedy presented a case study on the UCL Institute of Education’s ‘What future for education’ Mooc. The Mooc had a repeating weekly structure of reflection task, a recorded interview, open access readings, posting to a Padlet wall on a theme (‘Where do you learn?’ for example), a Google Hangout, and a review & reflection (the latter was a main way for the Mooc team to gather feedback). Eileen’s study of the learner experience aimed to find out whether the design of the Mooc could enable a dialogic educational experience, at scale, and whether the learning led students to interrogate their prior assumptions. The end-of-Mooc survey yielded some appreciation for most of the elements of the Mooc, but the real-time hangouts were hard to join. Respondents wanted external validation of their learning in the form of a statement of accomplishment and a peer grading system they were confident was rigorous. To supplement this survey data, the evaluation team mapped their findings to Laurillard’s conversational framework, producing a matrix of elements including what the learners did, the justification for including each type of element in this situation, the element’s specific role in the Mooc, and the evidence collected or needed. We discussed ways to make the rationale of the course design more explicit to students to help them identify hinge points in their learning. The yearning for attention and recognition raised the matter of the relationship between Mooc providers and learners, and the role of caring. We noted that the Mooc is destined to be packaged up as an on-demand Mooc, which seems to be part of a global trend in response to a lack of resources to run it.

Ghazaleh Cousin presented on an evaluation of the Panopto lecture capture service at Imperial. Beyond the basic Panopto reports about who accessed which recording and for how long, questions include whether viewing is associated with differences in students’ results, which sessions are most popular, and which days are most popular. Since Panopto’s data is currently quite limited, Imperial are contributing feature requests. We discussed whether students who perform better simply watch the videos more; to address this, videos could be made which discourage students from fixating on memorising explanations. We touched only briefly on methods – the team did not have immediate opportunities to arrange questionnaires and interviews, and opted to make sense of the Panopto data as a way to generate deeper questions. At the more challenging methodological end, there was interest in comparing learning from lecture recordings to learning from lecture graphics or lecture pedagogies.

Damien Darcy presented on uses of video at Birkbeck. Before Birkbeck’s Panopto roll-out, use of video at Birkbeck was sporadic, professional or slightly Blair Witchy, and it wasn’t clear how to record a lecture. Video was treated in a technocentric way isolated from educational concerns of assessment or student engagement. Damien carried out an exploratory study with the Law department, as large scale Panopto users, with a methodology he referred to as ‘guerilla ethnography’. His questions were: was it working, was it used (properly) by staff, how were students using it? He confirmed that decontextualised training doesn’t carry across to the rigours of the lecture hall, and superstitions about how technologies work persist. He related a sense of control, pride and ownership to increasing proficiency. Panopto data showed that peak viewing was often immediately after the lecture, and there were signs that if the lecture wasn’t up quickly it wouldn’t get watched. Watching was often social, often while doing other things, and was predictably uneven with spikes at particular points and particular times related to assessment. As video was normalised student expectations became more exacting, with requests for consistent tagging and titles and the inclusion of an overview. To contain their video initiative, Organisational Psychology had initiated a dialogue with students about what to record – i.e. not everything – and what to leave as ephemeral. Damien’s next steps would be to find out more about student reactions and perceptions, lecturer motivations, and how the identity of the lecture is changing. Methods would include surveys, focus groups, and a range of ethnographic studies looking at changes to the identity of lecture and lecturer. Questions would be informed by Panopto data.

We then discussed next steps for ELESIG London – in no particular order:

  • Case-making for resourcing evaluation activities.
  • Understanding and negotiating institutional barriers to evaluation.
  • How to take the findings from an evaluation and create narratives of impact.
  • Micro-evaluation possibilities: what kinds of evaluation can you do if you have only been given ten minutes? One day? Ten days? As you go along?
  • Methods masterclasses, including ethnography and data wrangling.
  • Can learning experiences be designed so it becomes possible to relate a change the evaluation identifies in students to a specific aspect of course design or learning?
  • Incorporating evaluation into developing new programmes.
  • Should the group have outputs?
  • Can we improve the generalisability of findings by coordinating our evaluation activities across institutions?
  • Not encroaching on other London e-learning groups such as the M25LTG – keeping focus on evaluation (e.g. methods, data, analysis, interpretation, politics and strategic importance).
  • Twitter rota for the national ELESIG account by region rather than by individual.

The coordinators (Leo Havemann and Mira Vogel) will be incorporating these ideas into plans for the next meeting in summer.

If you are interested in attending or keeping up with ELESIG London goings-on or you’d like to contact a coordinator, then join the London Group on Ning.

Image credit: IMG_5505 by Oliver Hine, 2009. Work found at https://www.flickr.com/photos/27718575@N07/4117063692/ (https://creativecommons.org/licenses/by-nc-nd/2.0/)

A good peer review experience with Moodle Workshop

By Mira Vogel, on 18 March 2015

Update Dec 2015: there are now three posts on our refinements to this peer feedback activity: one, two, and three.

Readers have been begging for news of how it went with the Moodle Workshop activity from this post.

Workshop is an activity in Moodle which allows staff to set up a peer assessment or (in our case) peer review. Workshop collects student work, automatically allocates reviewers, allows the review to be scaffolded with questions, imposes deadlines on the submission and assessment phase, provides a dashboard so staff can follow progress, and allows staff to assess the reviews/assessments as well as the submissions.
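
The automatic allocation step can be pictured as a simple rotation over a shuffled list of participants: each student reviews the next few submissions in the circle, so nobody reviews their own work and everyone gives and receives the same number of reviews. The Python sketch below is purely illustrative – it is not Moodle’s actual scheduler, which also copes with groups, non-submitters and manual overrides.

```python
import random

def allocate_reviewers(students, reviews_per_student=2, seed=None):
    """Circular allocation: after shuffling, each student reviews the
    submissions of the next `reviews_per_student` students in the circle,
    so no one reviews their own work and the load is perfectly even."""
    order = list(students)
    random.Random(seed).shuffle(order)  # randomise pairings between runs
    n = len(order)
    allocation = {s: [] for s in order}  # reviewer -> students they review
    for i, reviewer in enumerate(order):
        for offset in range(1, reviews_per_student + 1):
            allocation[reviewer].append(order[(i + offset) % n])
    return allocation
```

One design point worth noting: because the rotation is symmetric, any student who drops out of the assessment phase breaks the balance, which is how non-participation can leave some students having given more feedback than they received.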

However, except for some intrepid pioneers, it is almost never seen in the wild.

The reason for that is partly the daunting number and nature of the settings – there are several pitfalls to avoid which aren’t obvious on first pass – but also the fact that, because Workshop is a process, you can’t easily see a demo, and running a test instance is pretty time-consuming. If people try once and it doesn’t work well they rarely try again.

Well look no further – CALT and ELE have it working well now and can support you with your own peer review.

What happened?

Students on the UCL Arena Teaching Associate Programme reviewed each other’s case studies. Twenty-two then completed a short evaluation questionnaire in which they rated their experience of giving and receiving feedback on a five-point scale and commented on their responses. The students were from two groups with different tutors running the peer review activity. A third group leader chose to run the peer review on Moodle Forum since it would allow students to easily see each other’s case studies and feedback.

The students reported that giving feedback went well (21 respondents):

Pie chart: satisfaction with reviewing work

This indicates that the measures we took – see previous post – to address disorientation and participation were successful. In particular we were better aware of where the description, instructions for submission, instructions for assessment, and concluding comments would display, and put the relevant information into each.

Receiving feedback also went well (22 respondents) though with a slightly bigger spread in both directions:

Pie chart: satisfaction with receiving reviews


Students appreciated:

  • Feedback on their work.
  • Insights about their own work from considering others’ work.
  • Being able to edit their submission in advance of the deadline.
  • The improved instructions letting them know what to do, when and where.

Staff appreciated:

This hasn’t been formally evaluated, but from informal conversations I know that the two group leaders appreciate Moodle taking on the grunt work of allocation. However, this depends on setting a hard deadline with no late submissions (otherwise staff have to keep checking for late submissions and allocating those manually) and one of the leaders was less comfortable with this than the other. Neither found it too onerous to write diary notes to send reminders and alerts to students to move the activity along – in any case this manual messaging will hopefully become unnecessary with the arrival of Moodle Events in the coming upgrade.

For next time:

  • Improve signposting from the Moodle course area front page, and maybe the title of the Workshop itself, so students know what to do and when.
  • Instructions: let students know how many reviews they are expected to do; let them know if they should expect variety in how the submissions display – in our case some were attachments while others were typed directly into Moodle (we may want to set attachments to zero); include word count guidance in the instructions for submission and assessment.
  • Consider including an example case study & review for reference (Workshop allows this).
  • Address the issue that, due to some non-participation during the Assessment phase, some students gave more feedback than they received.
  • We originally had a single comments field but will now structure the peer review with some questions aligned to the relevant parts of the criteria.
  • Decide about anonymity – should both submissions and reviews be anonymous, or one or the other, or neither? These can be configured via the Workshop’s Permissions. Let students know who can see what.
  • Also to consider – we could also change Permissions after it’s complete (or even while it’s running) to allow students to access the dashboard and see all the case studies and all the feedback.

Have you had a good experience with Moodle Workshop? What made it work for you?