Update on Jisc Learning Analytics
By Stephen Rowett, on 23 October 2015
On Monday 19th October I attended the 4th Jisc Learning Analytics Network meeting in Bradford. Jisc have been running their learning analytics R&D project for just over a year now, and their plans are really starting to take shape. The aim is to provide a learning analytics service for the HE/FE/skills sector that – at least at a basic level – institutions can take off the shelf and start using. There are also likely to be more sophisticated premium offerings from the vendors involved. The components of this solution are becoming well defined, and I report on these below.
The architecture of their system looks quite complex, but focuses on different elements for storing, processing and displaying back data to various audiences. For many of the components they are working with multiple vendors to offer alternative options.
Already produced are a number of documents including a state of play review, literature review and code of practice – all available from http://analytics.jiscinvolve.org/wp/
Now the technical systems are in development. Michael Webb from Jisc focused on the systems for student and staff use rather than back-end processing. These include:
The staff dashboard or ‘student insight tool’
This is currently 98% ready and due to be launched in November 2015. The tool has a simple purpose: to predict withdrawals and dropouts among the student cohort. The system is pre-loaded with three years of historical student data, which it analyses to identify the factors that increase the probability of withdrawal. It then applies this model to current trends.
The interface starts at programme level and allows drill-down to modules and then to individual students, giving a ‘withdrawal risk score’ for each. The types of data that the system can handle are:
- enrolment and demographic data (essentially things the student can do nothing about): previous exam results, education history, school/college, demographic data
- engagement (things the student can change): library/campus visit activity, books or journals used, VLE data, attendance data, student society membership, submission time before deadlines
- academic performance (the outcomes): coursework and exam performance.
Current estimates are that the predictions are about 70% accurate, although this will need to be validated by early adopter institutions and may improve as the models are developed further.
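Jisc has not published the details of its model, but a risk score of this kind is often produced by something like a logistic regression over engagement and performance features. The sketch below is purely illustrative – the feature names, weights and bias are invented, not Jisc's trained model – but it shows how such inputs can be combined into a 0–1 ‘withdrawal risk score’.

```python
import math

# Hypothetical feature weights (invented for illustration, not Jisc's model).
# Negative weights reduce withdrawal risk; positive weights increase it.
WEIGHTS = {
    "vle_logins_per_week": -0.15,
    "library_visits_per_week": -0.10,
    "late_submissions": 0.40,
    "avg_coursework_mark": -0.03,
}
BIAS = 1.5

def withdrawal_risk(student: dict) -> float:
    """Return a 0-1 'withdrawal risk score' via a logistic function."""
    z = BIAS + sum(WEIGHTS[k] * student.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

engaged = {"vle_logins_per_week": 12, "library_visits_per_week": 3,
           "late_submissions": 0, "avg_coursework_mark": 68}
at_risk = {"vle_logins_per_week": 1, "library_visits_per_week": 0,
           "late_submissions": 4, "avg_coursework_mark": 42}

print(round(withdrawal_risk(engaged), 2))
print(round(withdrawal_risk(at_risk), 2))
```

With these made-up weights the disengaged student scores much higher than the engaged one; in the real service the weights would be learnt from the three years of historical data.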
Alert and Intervention system (the ‘Student Success Planner’)
Currently 70% ready, expected to launch in Q1 2016. This system provides alerts to nominated contacts (typically a student’s personal tutor or welfare officer) based on pre-determined criteria. The alert is reviewed and any appropriate intervention can then be taken. It is expected that around 10 criteria would be sufficient for most cases. At the moment the focus is on intervening to support students in difficulty, but it could also be used to offer reward or praise mechanisms for behaviour likely to lead to successful outcomes.
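A criteria-driven alert system of this sort can be sketched as a small set of rules evaluated against each student's recent activity. The criteria and field names below are hypothetical examples in the spirit of the Student Success Planner, not the actual rules Jisc will ship.

```python
# Hypothetical alert criteria: each rule pairs a human-readable label with a
# condition on the student's recent activity record.
CRITERIA = [
    ("no VLE login in 14 days", lambda s: s["days_since_vle_login"] > 14),
    ("attendance below 50%", lambda s: s["attendance_pct"] < 50),
    ("two or more missed deadlines", lambda s: s["missed_deadlines"] >= 2),
]

def alerts_for(student: dict) -> list:
    """Return the alert messages to send to the nominated contact."""
    return [label for label, triggered in CRITERIA if triggered(student)]

student = {"days_since_vle_login": 20, "attendance_pct": 45,
           "missed_deadlines": 1}
print(alerts_for(student))
```

The nominated contact would then review each triggered alert before deciding on an intervention, which matches the human-in-the-loop design described above.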
Learning records warehouse (‘Learning Locker’)
Nearly complete, launch imminent. This back-end system stores learner activity in a simple but powerful form: X verb Y at time Z. For example, ‘Paul visited library’ or ‘Irrum completed coursework’. This provides the data used to build the predictive models.
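The ‘X verb Y at time Z’ form is the actor–verb–object statement structure of the xAPI (Tin Can) specification, which Learning Locker implements as a Learning Record Store. A statement for the ‘Irrum completed coursework’ example might look like the following (the names and VLE URL are illustrative; the verb ID is from the standard ADL vocabulary):

```python
import json

# An xAPI-style statement of the form 'X verb Y at time Z', as stored in a
# Learning Record Store such as Learning Locker. Actor and object details
# here are invented for illustration.
statement = {
    "actor": {"name": "Irrum", "mbox": "mailto:irrum@example.ac.uk"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-GB": "completed"},
    },
    "object": {
        "id": "https://vle.example.ac.uk/coursework/123",
        "definition": {"name": {"en-GB": "Coursework 1"}},
    },
    "timestamp": "2015-10-19T14:30:00Z",
}

print(json.dumps(statement, indent=2))
```

Because every system writes activity in this one shared shape, the warehouse can pool VLE, library and attendance data without bespoke integration for each source.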
Student app

Currently still being designed. Borrowing concepts from fitness apps, the student app provides feedback (“you are in the top 25%” etc.) and encouragement, allowing students to set targets and monitor progress (“I will complete 10 hours reading this week”). It also allows self-declared data, such as demographics or even general happiness, to be added.
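The “you are in the top 25%” style of feedback is simply a percentile rank of the student's activity against the cohort. A minimal sketch, with invented numbers for the cohort's weekly reading hours:

```python
# Percentile-style feedback as used in fitness apps: rank one student's
# weekly reading hours against the cohort (all figures invented).
def top_percent(cohort_hours, my_hours):
    """Percentage of the cohort whose hours are at or below mine."""
    at_or_below = sum(1 for h in cohort_hours if h <= my_hours)
    return round(100 * at_or_below / len(cohort_hours))

cohort = [2, 4, 5, 6, 7, 8, 9, 10, 11, 12]
rank = top_percent(cohort, 10)
print(f"You are in the top {100 - rank}% for reading this week")
```

The same comparison against a self-declared target (“10 hours this week”) would drive the progress-monitoring side of the app.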
The presentations from the event will shortly be on the Jisc Learning Analytics blog, and they have also developed some Moodle courses with more information – self register and find the LA-prefixed courses.
My thoughts were that Jisc is moving quickly here, and these products are really taking shape. However, much of the day focused on the legal, cultural and ethical issues around learning analytics; the technical readiness of our institutions to integrate so many different data sources; and the (perhaps) high levels of variability in the types of models that would best predict outcomes in very different institutions. The technology looks fairly straightforward, but every institution that wishes to adopt these tools will need to choose the focus and rules of engagement that work best for them and their students.
Update: Notes and presentations from this event are now available, along with a fuller description of other presentations and activities during the day.