IOE Blog

Expert opinion from IOE, UCL's Faculty of Education and Society

Ofsted's use of Artificial Intelligence: how smart is it to automate risk assessments?

By Blog Editor, IOE Digital, on 9 February 2018

Melanie Ehren.
Ofsted has come under attack for its collaboration with the Behavioural Insights Team (BIT) on using machine learning to identify failing schools. According to several sources (the BBC and Matthew Reynolds), BIT has been trialling machine learning models that crunch through publicly available data to help automate Ofsted’s decisions on whether a school is potentially performing inadequately. The algorithms use information on the number of children on free school meals, how much teachers are paid, the number of teachers for each subject, and particular words and sentiments in reviews of schools submitted by parents on the Ofsted-run website Parent View.
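Neither Ofsted nor BIT has published the model itself, but the ingredients described above (tabular school data plus the words and sentiments of Parent View comments) map onto a fairly standard supervised-learning pipeline. The sketch below, in Python, is purely illustrative: the column names, the toy data, the ‘later judged inadequate’ label and the choice of a gradient-boosted classifier with a bag-of-words text feature are my own assumptions, not Ofsted’s or BIT’s actual model.

    # Illustrative sketch only: a generic "school at risk" classifier built from the
    # kinds of inputs described above (free school meals, teacher pay, staffing,
    # Parent View text). Column names, data and model choice are all assumptions.
    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler

    # Hypothetical training data: one row per school, labelled with a past
    # inspection outcome (1 = later judged inadequate, 0 = not).
    schools = pd.DataFrame({
        "pct_free_school_meals": [12.0, 41.5, 8.3, 33.0],
        "mean_teacher_pay": [38200, 34100, 41000, 35600],
        "teachers_per_subject": [2.1, 1.3, 2.8, 1.5],
        "parent_view_comments": [
            "Supportive staff, my child is thriving",
            "Constant supply teachers and poor communication",
            "Excellent leadership and a broad curriculum",
            "Behaviour has got worse and classes are too large",
        ],
        "later_judged_inadequate": [0, 1, 0, 1],
    })

    numeric_cols = ["pct_free_school_meals", "mean_teacher_pay", "teachers_per_subject"]

    features = ColumnTransformer([
        ("numeric", StandardScaler(), numeric_cols),
        # A bag-of-words view of parent comments stands in for the "words and
        # sentiments" signal mentioned above.
        ("text", TfidfVectorizer(min_df=1), "parent_view_comments"),
    ])

    model = Pipeline([
        ("features", features),
        ("classifier", GradientBoostingClassifier(random_state=0)),
    ])

    X = schools.drop(columns="later_judged_inadequate")
    y = schools["later_judged_inadequate"]
    model.fit(X, y)

    # The output is a risk score per school, not an inspection judgement.
    risk_scores = model.predict_proba(X)[:, 1]
    print(risk_scores)

The point of such a model is simply to rank schools by estimated risk; how that ranking is then used is where Ofsted draws a firm line, as the quote below makes clear.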
As Ofsted’s head of risk assessment (Paul Moore) explains:
‘For a number of years Ofsted have risk assessed maintained schools and academies. This risk assessment is used to help put inspection resource into schools where it is most needed. It’s important to note that it has never been used to pre-judge inspection grades. The risk assessment model has evolved over the years, as inspection frameworks and accountability measures have changed.
The latest risk assessment development work is a continuation of our aim to continually improve our models. It influences how we plan our inspections, but inspectors are not given the findings of the risk assessment to avoid it having an influence on the inspection itself, which is based on the evidence they gather on-site. Over recent months Ofsted have been investigating whether a machine learning approach to predicting school decline could be a useful risk assessment development. The Behavioural Insight Team have been assisting us in upskilling our staff, and providing expert advice on machine learning techniques’.
Paul is keen to emphasise that the role of risk assessment stops at the point that Ofsted selects which inspections to carry out. Once an inspector arrives at a school, for example, all the focus is on the evidence that they are able to gather in person and in dialogue with the school’s leaders—the risk assessment plays no further part.
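To make that separation concrete, and continuing the illustrative sketch above (with the same caveats: the function, the school names and the capacity figure are assumed for illustration, not drawn from Ofsted’s actual process), the workflow might look roughly like this: rank schools by assessed risk to decide which to inspect first, and pass on only the resulting list, never the scores.

    # Illustrative continuation of the sketch above: risk scores set inspection
    # priority, but the scores themselves are never handed to the inspection team.
    def plan_inspections(school_names, risk_scores, capacity):
        """Return the schools to inspect this cycle, highest assessed risk first."""
        ranked = sorted(zip(school_names, risk_scores), key=lambda pair: pair[1], reverse=True)
        # Only the school names leave the risk-assessment stage; inspectors judge
        # on the evidence they gather in person, not on the model's output.
        return [name for name, _ in ranked[:capacity]]

    schedule = plan_inspections(
        school_names=["School A", "School B", "School C", "School D"],
        risk_scores=risk_scores,  # scores from the sketch above
        capacity=2,
    )
    print(schedule)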
Automating risk assessments seems an intelligent approach to using scarce inspection resources more efficiently. But the recent critique raises the question of whether this combined use of data and human judgement is a genuine example of ‘intelligent accountability’. Crooks (2006) offers an extensive description of ‘intelligent accountability’, characterising it as a system which:

  • preserves and enhances trust among key participants
  • involves participants in the process
  • offers participants a strong sense of professional responsibility and initiative
  • encourages deep, worthwhile responses
  • provides well-founded and effective feedback to support good decision-making and
  • leaves the majority of educators more enthusiastic and motivated in their work.

The focus on data and machine learning particularly supports the ‘decision-making’ part of this description when risk assessments are used to inform inspection scheduling. Data and risk assessments, however, don’t tell us much about how accountability can motivate people to learn and improve. Mechanistic assessments are generally far removed from the people who have to learn from them, and are therefore not the most motivating or effective ways to support learning and improvement. Ofsted clearly understands this point when it insists that inspectors make the ultimate decision on a school’s quality, based on the evidence they collect on-site.
Ofsted’s emphasis on the human factor in making decisions on school quality rightfully recognises that schools are not factories where children with high learning outcomes are produced according to prescribed, standardised scripts. At its best, learning is co-produced by children, teachers and parents who are engaged and committed to learning and teaching and have the opportunity to explore and nurture individual talents and interests. Risk assessments, however, help ensure that a basic standard can be safeguarded with limited resources. But it is ultimately the interaction between schools and inspectors about actual achievements that will generate new ideas for improvement and lead to real learning.
 


4 Responses to “Ofsted's use of Artificial Intelligence: how smart is it to automate risk assessments?”

  • 1
    John Mountford wrote on 9 February 2018:

    I offer the following views as a retired primary headteacher/Ofsted inspector and as the grandparent of a year 7 child.
    In answer to the question, "how smart is it to automate risk assessments" as part of Ofsted inspections, my response is: not as proposed here.
    As an aside from my main concern, I have an issue with the use of “particular words and sentiments in reviews of schools submitted by parents on the Ofsted-run website Parent View” to determine “whether a school is potentially performing inadequately”. As is understood by both schools and inspectors, this source of evidence can be biased and, in extreme cases, potentially manipulated. This can come about where an individual or, more likely, a group of parents seeks to influence inspectors in a particular direction. In the human sphere this is understood and corrective measures can be taken. I fear that artificial intelligence may not respond appropriately to such tactics.
    Paul Moore informs us that currently inspectors are not given the risk assessment evidence prior to inspection to avoid “having an influence on the inspection itself”. This is as it should be, but
    “Over recent months Ofsted have been investigating whether a machine learning approach to predicting school decline could be a useful risk assessment development.”
    Melanie, it is this remark from Paul Moore that I find especially worrisome. As anyone with experience of managing a school or working in one will recognise, highly complex issues underpin their day-to-day functioning and therefore contribute to their possible future performance. I am very unhappy with the suggestion that “predicting school decline” could be the next silver bullet in the arms-race of school accountability.
    This is what I believe the main focus of this article ought to be.

  • 2
    Mr & Mrs Gist wrote on 9 February 2018:

    Hmmmm. This seems to take a lot of the intuition out of the process. I am not convinced.

  • 3
    @TeacherToolkit wrote on 12 April 2018:

    An excellent overview. I’m perturbed by this methodology. It seems a step backwards from reducing teacher workload and improving recruitment/retention. The key question OfSTED must answer is: do they want to make judgements about schools operating in different contexts, or do they not? https://wp.me/p6wlje-myT

  • 4
    Can Machine Learning Predict A School Inspection? | TeacherToolkit wrote on 12 April 2018:

    […] UCL quotes in this blog, Ofsted’s head of risk assessment (Paul Moore) says: ‘For a number of years Ofsted have risk […]
