Digital Education team blog
  • We support Staff and Students using technology to enhance education at UCL.

    Here you'll find updates on institutional developments, projects we're involved in, updates on educational technology, events, case studies and personal experiences (or views!).

    Subscribe to our elearning newsletters.


    Archive for the 'Learning analytics' Category

    A tale of two cities, or five

    By Samantha Ahern, on 7 April 2017

March was a very busy month: 8 events over 31 days in 5 different cities, one of which was on a different continent. My poor suitcase did suffer a little, but I also got to do exciting things for engineers, such as travelling on different types of aircraft, crossing a 150m suspension bridge and hearing about pit-stop optimisation in Formula 1 racing. The events themselves were a combination of seminars, workshops and conferences; some academic, others more industry-focused.

    The main event: LAK’17, 13-17 March, Simon Fraser University, Vancouver

This was the seventh meeting of the International Learning Analytics and Knowledge (LAK) Conference, organised by the Society for Learning Analytics Research (SoLAR). Conference website: http://educ-lak17.educ.sfu.ca/

On the 13th and 14th March I had the opportunity to be involved with the LAK17 Hackathon (GitHub: https://github.com/LAK-Hackathon/LAK17Hackathon), where I worked with colleagues from the University of British Columbia (UBC), the University of Wollongong (UoW) in Australia and Jisc. During the Hackathon we created Tableau dashboards to visualise staff and student interactions with courses in a VLE (the Tableau workbook can be viewed and downloaded from https://public.tableau.com/profile/alison.myers3113#!/vizhome/LAKHackathonv1/Student). The staff pages focus on identifying the content and activities incorporated in a course, when it was created, who created, edited or added it, and the usage of that content or activity. The student pages focus on how students interact with and navigate through courses. For the hackathon we used dummy data: the initial small files were hand-crafted, but the larger files were generated by Jisc’s Michael Webb (https://github.com/jiscdev/lakhak). I am hoping to use these dashboards to gain some initial understanding of the structure of UCL’s Moodle courses for taught modules and of how students interact with them.
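To give a flavour of what the hand-crafted dummy files looked like in spirit, here is a minimal sketch of a dummy VLE activity-log generator. All field names, actions and resource types are my own illustrative assumptions, not the actual schema used at the hackathon or in Michael Webb’s generator.

```python
import csv
import random
from datetime import datetime, timedelta

# Illustrative vocab for a dummy VLE log; the real schema will differ.
ACTIONS = ["viewed", "submitted", "posted", "downloaded"]
RESOURCES = ["forum", "quiz", "assignment", "page", "file"]

def generate_events(n_students=50, n_courses=5, n_events=1000, seed=42):
    """Generate a reproducible list of fake VLE interaction events."""
    rng = random.Random(seed)
    start = datetime(2017, 1, 9)  # arbitrary term start date
    events = []
    for _ in range(n_events):
        events.append({
            "student_id": f"S{rng.randrange(n_students):04d}",
            "course_id": f"C{rng.randrange(n_courses):02d}",
            "resource": rng.choice(RESOURCES),
            "action": rng.choice(ACTIONS),
            # random timestamp within a ten-week window
            "timestamp": (start + timedelta(minutes=rng.randrange(60 * 24 * 70))).isoformat(),
        })
    return events

def write_csv(path, events):
    """Write the events to CSV, e.g. for loading into Tableau."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(events[0]))
        writer.writeheader()
        writer.writerows(events)
```

Seeding the generator keeps the dummy data reproducible, which matters when several people are building dashboards against the same file.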

The main conference ran from 15th to 17th March, with inspirational keynotes from Dr Sanna Järvelä (University of Oulu, Finland), Dr Timothy McKay (University of Michigan) and Dr Sidney D’Mello (University of Notre Dame). The overall conference theme was Understanding, Informing and Improving Learning with Data. Concurrent session talks were organised by sub-theme; these included Modelling Student Behaviour, Understanding Discourse and LA Ethics, and the talks on ethics were so popular that there was barely even standing room. Many of the papers presented still focused on LA research, but there is a growing number of implementations. During the conference SoLAR launched the Handbook of Learning Analytics; hard copies were available to preview, but it will primarily be freely available as an electronic download. For more information please see https://solaresearch.org/hla-17/.

    Data Fest Data Summit #Data Changes Everything, 23rd & 24th March, Edinburgh and Big Data Innovation Summit, 30th & 31st March, London

Of these two events, the one I probably enjoyed the most was the Edinburgh Data Summit, part of Data Fest (http://www.datafest.global/), a week-long series of activities organised by the Data Innovation Lab. The Data Summit had a nice buzz about it, helped along by the hosts Phil Tetlow (Director and Chief Architect, Three Steps Left) on Day 1 and Georgie Barratt (Presenter, The Gadget Show) on Day 2. Social good was a key theme of the event, with talks from NHS Scotland and Transport for London, and humanitarian applications of data science were discussed in talks from Nuria Oliver of Vodafone on the Data-Pop Alliance and Natalia Adler of Unicef on Data Collaboratives. The Big Data Innovation Summit (https://theinnovationenterprise.com/summits/big-data-innovation-summit-london-2018) also featured a number of public sector talks, from HMRC, the Department for Work and Pensions and Camden Borough Council. A highlight was Camden’s approach to open data.

The key message from both events was that data science is not magic; there is no alchemy. Exploratory data analysis is great and has its place, but the main function of data science is to support the decision-making process, and in order to do this you need to understand the business questions you are trying to answer. This is echoed in Step 4, Institutional Aims, of Jisc’s Effective Learning Analytics on-boarding guide (https://analytics.jiscinvolve.org/wp/on-boarding/step-4/).

    And everything else

    Other events attended focused on collaborative working in the sciences (http://sciencetogether.online/), testing and validation of computational science (https://camfort.github.io/tvcs2017/) and some more theoretical probability days (http://www.lancaster.ac.uk/maths/probability_days_2017/).

    In summary it was a slightly exhausting but very informative month. Many of the ideas percolated at these events will find their way into the work I am undertaking in Digital Education over the next few months.

    Rebooting Learning for the Digital Age (report)

    By Clive Young, on 10 February 2017

The HE ‘think tank’ the Higher Education Policy Institute (HEPI) has just published Rebooting Learning for the Digital Age (PDF, 58pp), written by three Jisc leaders: Sarah Davies, Joel Mullan and Paul Feldman. The report reviews best practice around the world to show how technology is benefiting universities and students through better teaching and learning, improved retention rates and lower costs, and through a list of seven recommendations calls on universities to embrace new technology to meet the various challenges faced by the sector.

While the actual approach is maybe less ‘reboot’ and more ‘refocus’, the report is an astute summary of the main issues and opportunities surrounding digital education in UK HE. It is more grounded than, for example, the OU’s Innovating Pedagogy 2016 report, and provides a useful benchmark against which an institution such as UCL can gauge progress.

A range of UK and international case studies indicates how digital initiatives can be used to improve student satisfaction and boost outcomes, retention and employability while still managing costs (so-called ‘win-win’ methods). However, this inevitably requires strong leadership and the development of suitably skilled staff.

Two underpinning themes are threaded through the report: learning design and learning analytics. On the first of these, the report comments that “when ‘designed in’ as part of the overall pedagogic approach, technology can be used to enable great teaching and improve student outcomes”, and the first recommendation is that “higher education institutions should ensure that the effective use of technology for learning and teaching is built into curriculum design processes”. UCL has been particularly active in this area with ABC Learning Design, a bespoke rapid-development method that has already been very successful. The second recommendation identifies a real need: “UK HE should develop an evidence and knowledge base on what works in technology-enhanced learning to help universities, faculties and course teams make informed decisions”, plus mechanisms to share and discuss practice.

Learning analytics, which correlates patterns of student activity with learning outcomes and offers staff the opportunity to identify disengaged and underachieving students, is the second main theme of the report. The next two recommendations suggest universities adopt learning analytics and research how the resulting big datasets can be harnessed to provide new insights into teaching and learning. Digital Education has of course been looking into this, e.g. in From Bricks to Clicks: the potential for learning analytics and 8th Jisc Learning Analytics Network. Steve Rowett’s second post links the two themes of the report, and last year the Open University published The impact of 151 learning designs on student satisfaction and performance: social learning (analytics) matters, showing the remarkable potential of this combined approach.

The third section of the report provides a useful reflection on the potential role of technology-enhanced learning in the Teaching Excellence Framework (TEF). It recommends that “digital technology should be recognised as a key tool for HEIs responding to the TEF” and that providers “should be expected to include information on how they are improving teaching through the use of digital technology in their submissions to the TEF”. Recognising the risk involved in new methods and the sometimes conservative attitudes of students, it adds: “The Department for Education (DfE) and the TEF panel must ensure the TEF does not act as a barrier against institutions innovating with technology-enhanced approaches”.

The final two recommendations reinforce the institutional prerequisites mentioned above. To realise the opportunity of digital education, “HEIs should ensure the digital agenda is being led at senior levels – and should embed digital capabilities into recruitment, staff development, appraisal, reward and recognition”, and finally “academic leads for learning and teaching should embrace technology-enhanced learning and the digital environment and recognise the relationship with other aspects of learning and teaching”.

    Walking in a data wonderland

    By Samantha Ahern, on 9 January 2017

    So where do we begin? Straight down the rabbit hole or some contextual rambling?

    The contextual rambling.

I have recently been thinking about the logic puzzles, syllogisms, of Charles Dodgson and the literary work of his alter ego Lewis Carroll – Alice’s Adventures in Wonderland. This, and discussions with my colleague Dr Steve Rowett, led me to explore Anastasia Salter’s project Alice in Dataland (http://aliceindataland.net/). Alice in Dataland is an experiment in critical making, an exploration guided by the question: “Why does Alice in Wonderland endure as a metaphor for experiencing media?”

    Down the rabbit hole.

Exploring Anastasia’s project has generated a question of my own: what if data is Alice and data analysis is Wonderland?

It has been noted that each new representation of Alice has shown her in a new and different way, and it has been argued that these changes have added to our interpretation. Is this also true of our analysis of data, or do we see “different truths” through the different lenses of our analysis? In other words, do our analyses of data add to understanding by providing insight, or do we alter the narrative told by the data through how we choose to analyse or visualise it?

In August 2016 the Node (http://thenode.biologists.com/barbarplots/photo/) reported on the Kickstarter campaign #BarBarPlots!, the focus of which was how to avoid misleading representations of statistical data. This follows on from a 2015 ban on null hypothesis significance testing procedures by the journal Basic and Applied Social Psychology, which was discussed in an article by the Royal Statistical Society (https://www.statslife.org.uk/features/2114-journal-s-ban-on-null-hypothesis-significance-testing-reactions-from-the-statistical-arena).
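The #BarBarPlots argument is easy to demonstrate numerically: a bar chart of group means can make two radically different samples look identical. A small sketch with made-up illustrative data:

```python
import statistics

# Two illustrative samples with identical means but very different
# distributions: a bar chart showing only the two means would make
# them look the same, hiding that one group is bimodal.
group_a = [5.0] * 10             # tightly clustered around 5
group_b = [1.0] * 5 + [9.0] * 5  # bimodal, nothing near 5

mean_a = statistics.mean(group_a)  # 5.0
mean_b = statistics.mean(group_b)  # 5.0
sd_a = statistics.stdev(group_a)   # 0.0
sd_b = statistics.stdev(group_b)   # roughly 4.2
```

Plotting the raw points (or a box plot) rather than bars of means immediately reveals the difference the bar chart conceals.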

Do these analyses constitute re-imaginings of the data? Do they, like the use of Photoshop and other media tools described by Salter to re-imagine Wonderland, remediate reality through a different lens?

Data is collected over time to create user profiles and, potentially in learning analytics, to create identities through narratives (data analysis and visualisation); as Salter notes, “it is through narrative that we create and re-create selfhood” (Bruner, Jerome. Making Stories: Law, Literature, Life. Harvard University Press, 2003). Are these generated identities subject to a “defamiliarization of perception”, threatened by time as newly received data alters our models and the story told? I am not sure, but it is an interesting thought.


    Data dialogues – new data, old data and respect

    By Samantha Ahern, on 19 December 2016

    Somewhat unsurprisingly, some would say, over the last two weeks I have been preoccupied with data.

    More specifically, the notion of data having a life of its own.

This was the key theme of Prof. Charlotte Roueché’s talk at the Science & Engineering South event The Data Dialogue – At War with Data, held at King’s College on the 7th December. Citing a number of examples of data reuse, such as archaeological maps produced by the British Armed Forces and aerial photographs of Aleppo taken in 1916 for military use now serving as an archaeological record, she argued that data develop a life of their own. This means that we need to make sure the data we collect is of the best possible quality and well curated; it should meet the FAIR principles: Findable, Accessible, Interoperable and Re-usable. However, once we have released our data into the wild, we will never truly know how it will be used and by whom. Unfortunately, history has shown that not all re-use is benign.

This then begs the question: how open should our open data be? Is there a case for not disclosing some data if you know it could do harm – for example, in the current political climate, the exact location of archaeological sites of religious significance?

This ties in to the two main themes – respect and value – of the National Statistician John Pullinger’s talk at The Turing Institute event Stats, Decision Making and Privacy on 5th December.

The key thing about respect is that data is about people and entities; this should never be forgotten. People’s relationships with, and perceptions of, the organisations that collect and process their data vary, and as data analysts and scientists we should understand and respect this. That means being alive to what privacy means to individuals and entities, and to the context in which it is being discussed; caring about the security of the data and demonstrating this through good practices; and thinking about what we should do, not just what we could do, with the data available to us. This is very pertinent with the rise in the use of machine learning tools and techniques within data science.

This last point links into the second theme, value. Data is valuable: it enables us to make better, more informed decisions and is a critical resource. However, a balance needs to be struck between extracting value from the data and respect. So, is there a need to change the way in which we think about our data analysis processes?

Dr Cynthia Dwork, in her talk on Privacy-Preserving Data Analysis (at The Turing Institute event Stats, Decision Making and Privacy), noted that statistics are inherently not private, with aggregate statistics destroying privacy. Echoing John Pullinger’s talk, Dr Dwork raised the question ‘What is the nature of the protection we wish to provide?’ It is also important to understand who is a threat to our data and why. A move towards differential privacy (https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf) was proposed: when an analysis is undertaken in this way, the outcome is essentially equally likely irrespective of whether any individual joins, or refrains from joining, the data set. However, this would require a completely different way of working.
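The standard textbook route to this guarantee is the Laplace mechanism: add noise calibrated to how much one individual can change the answer. The sketch below is a generic illustration of that idea, not the specific scheme discussed in the talk, and the function names are my own.

```python
import math
import random

def private_count(true_count, epsilon, rng):
    """Return a noisy count; smaller epsilon means stronger privacy."""
    # A counting query has sensitivity 1: one person joining or leaving
    # the data set changes the count by at most 1, so Laplace noise with
    # scale 1/epsilon masks any individual's presence.
    scale = 1.0 / epsilon
    # Sample Laplace(0, scale) by inverse transform from a uniform draw.
    u = rng.random() - 0.5  # uniform on [-0.5, 0.5)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise
```

Averaged over many releases the noisy counts centre on the true value, so aggregate utility survives, but no single release reveals whether any one individual is in the data, which is exactly the ‘essentially equally likely’ property described above.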

We’ve all heard the old adage of ‘lies, damned lies and statistics’; a key factor in making sure this is not the case is the presentation of the data. We need to ensure that the data is correctly understood and correctly interpreted: start from where your audience is, and think carefully about your choice of words and visualisations. We also need to help our audiences become more data literate. But to underpin good analysis and communication we need to invest in skills and develop a good data infrastructure.

    Support the Royal Statistical Society’s Data Manifesto: http://www.rss.org.uk/Images/PDF/influencing-change/2016/RSS_Data%20Manifesto_2016_Online.pdf

    and in the words of John Pullinger ‘step up, step forward and step on the gas’!


    Innovating Pedagogy 2016 report

    By Clive Young, on 2 December 2016

Innovating Pedagogy 2016 is the fifth annual report from the Open University (this year in collaboration with the Learning Sciences Lab at the National Institute of Education, Singapore) highlighting new forms of teaching, learning and assessment, with an aim to “guide educators and policy makers”.

The report proposes ten innovations that are “already in currency but have not yet had a profound influence on education” – in other words, they are at an early phase of the Gartner Hype Cycle. Whether any will become, in the current idiom, ‘normalised’ remains to be seen, and some scepticism is advisable. However, as I noted when the 2015 version was published, such reports often frame the discussion around technology in education, even if initially only at the level of “buzz-word bingo” for enthusiasts.

The current list, “in an approximate order of immediacy and timescale to widespread implementation”, is:

    • Learning through social media – Using social media to offer long-term learning opportunities
    • Productive failure – Drawing on experience to gain deeper understanding
    • Teachback – Learning by explaining what we have been taught
    • Design thinking – Applying design methods in order to solve problems
    • Learning from the crowd – Using the public as a source of knowledge and opinion
    • Learning through video games – Making learning fun, interactive and stimulating
    • Formative analytics – Developing analytics that help learners to reflect and improve
    • Learning for the future – Preparing students for work and life in an unpredictable future
    • Translanguaging – Enriching learning through the use of multiple languages
    • Blockchain for learning – Storing, validating and trading educational reputation

As usual the report is a fascinating mix of familiar ideas and novel concepts; it gives a quick overview of why each may be important and includes handy links to further reading if you are interested.

    8th Jisc Learning Analytics Network

    By Stephen Rowett, on 7 November 2016

The Open University was the venue for the 8th Jisc Learning Analytics Network. I’d not been there before, and it was slightly eerie to see a clearly recognisable university campus without the exciting, if slightly claustrophobic, atmosphere that thousands of students provide. I won’t report on everything, but will give some highlights most relevant to me. There’s more from Niall Sclater on the Jisc Learning Analytics blog.

    The day kicked off with Paul Bailey and Michael Webb giving an update on Jisc’s progress. Referring back to their earlier aims they commented that things were going pretty much to plan, but the term ‘learner monitoring’ has thankfully been discarded. Their early work on legal and ethical issues set the tone carefully and has been a solid base.

Perhaps more clearly than I’ve seen before, Jisc have set their goal as nothing less than sector transformation. By collecting and analysing data across the sector, they believe they can gain insights that no one institution could alone. Jisc will provide the central infrastructure, including a powerful learning records warehouse, along with some standardised data transformation tools, to provide basic predictive and alert functionality. They will also manage a procurement framework for institutions that want more sophistication.

The learning records warehouse is a biggie here – currently with 12 institutions on board and around 200 million lines of activity. Both Moodle and Blackboard have plug-ins to feed live data in, and code exists for manipulating historic data into the right formats for it.
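Jisc’s warehouse ingests activity data as xAPI statements, so those plug-ins are essentially mapping VLE log rows into that actor/verb/object shape. Here is a hypothetical sketch of that kind of transformation; the field names, URIs and verb choices are illustrative assumptions of mine, not the actual plug-in schema.

```python
# Map a Moodle-style log row to a minimal xAPI-like statement dict.
# Verb URIs below are examples of the registry-style identifiers xAPI
# uses; a real plug-in would follow an agreed recipe for each event type.
VERB_MAP = {
    "viewed": "http://id.tincanapi.com/verb/viewed",
    "submitted": "http://adlnet.gov/expapi/verbs/completed",
}
DEFAULT_VERB = "http://adlnet.gov/expapi/verbs/experienced"

def to_statement(log_row):
    """Build an actor/verb/object statement from one activity log row."""
    return {
        "actor": {"account": {"name": log_row["user_id"],
                              "homePage": "https://vle.example.ac.uk"}},
        "verb": {"id": VERB_MAP.get(log_row["action"], DEFAULT_VERB)},
        "object": {"id": f"https://vle.example.ac.uk/mod/{log_row['module']}/{log_row['item_id']}"},
        "timestamp": log_row["time"],
    }

row = {"user_id": "u123", "action": "viewed",
       "module": "quiz", "item_id": "42", "time": "2016-11-07T10:00:00Z"}
statement = to_statement(row)
```

The appeal of a shared statement format is exactly the sector-level point above: once every institution’s activity looks like this, the same warehouse and the same analytics can run over all of it.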

Paul and Michael launched a new on-boarding guide for institutions at https://analytics.jiscinvolve.org/wp/on-boarding – a 20-step checklist for getting ready for learning analytics. Step 1 is pretty easy though, so anyone can get started!

Bart Rienties from the Open University showed again how important learning analytics is to the OU and how powerfully they can use it. Mapping all of the activities students undertake into seven different categories (assimilative; finding and handling information; communication; productive; experiential; interactive/adaptive; assessment) gives dashboards that allow course designers to visualise their courses. Add in opportunities for workshops and discussion and you have a great way of encouraging thinking about course design.

Interestingly, Bart reported that there was no correlation between retention and satisfaction: happy students fail and unhappy students pass, and vice versa. Which begs the question – do we design courses for maximum retention, or for maximum satisfaction, because we can’t have both!

Andrew Cormack, Chief Regulatory Advisor at Jisc, gave an update on legal horizons. The new General Data Protection Regulation (GDPR) is already on the statute books in the UK but comes into force on 25 May 2018. For a complex issue, his presentation was wonderfully straightforward. I shall try to explain more, but you can read Andrew’s own version at http://www.learning-analytics.info/journals/index.php/JLA/article/view/4554 [I am not a lawyer, so please do your own due diligence].

Much of the change in this new legislation involves the role of consent, which is downplayed somewhat in favour of accountability. The logic runs thus:

    • We have a VLE that collects lots of data for its primary purpose – providing staff and students with teaching and learning activities.
• We have a secondary purpose for this data, which is improving our education design and helping and supporting learners, and we make this explicit upfront. We might also state things that we won’t do, such as selling the data to third parties.
• We must balance the legitimate interest we have in using the data collected against any risks the data subject might face from its use. Note that this risk does not need to be zero in order for us to go ahead.
• Andrew distinguished between improvements (that which is general and impersonal, e.g. the way a course is designed or when we schedule classes) and interventions (which go to an individual student to suggest a change in behaviour). The latter need informed consent; the former can be based on legitimate interest. He also suggested that consent is better asked for later in the day, when you know the precise purpose for the consent.
• So, for example, in a learning analytics project we might only obtain consent at the first point where we intervene with a given student. This might be an email inviting them to discuss their progress with the institution, with the act of doing so giving consent at the same time.

    You can follow Andrew as @Janet_LegReg if you want to keep up with the latest info.

    Thanks to Jisc for another really good event, and apologies to those I haven’t written about – there was a lot to take in!