Slow down you move too fast, you got to make the meaning (sic) last
By Blog Editor, IOE Digital, on 4 January 2018
‘Slow’ human intelligence must be valued more holistically if we are to really benefit from the power of AI, says Rose Luckin.
The end of 2017 brought some worrying observations about the progress of Artificial Intelligence with respect to UK education. They illustrated that many people are far too willing to equate speed and reduced cost with success. We are in danger of missing what really matters in education; in danger of missing the meaning of what education should be about. This is dangerous for education and for the progress of learners and educators of all ages.
First came the publication of the first report from the Behavioural Insights Team (BIT) data science team; this stressed the value of speed. Second came the 19th evidence session of the House of Lords select committee on AI, which focused on AI and education. This revealed the potential for machine learning AI to reduce the cost of delivering the current school curriculum, and at the same time reduce the value of human intelligence.
The BIT report marks the first anniversary of the data science team and is the demonstration of its raison d’être and the value of data science for policy. In the report, the authors suggest that data science is “hugely beneficial” as a government tool and that the challenge faced by a government wanting to use data science in policy is to “find ways to quickly identify problems that can benefit from data science, bring data together, and deliver practical insights quickly.” Looking for ways in which data science can benefit policy making sounds reasonable, laudable even, but is ‘speed’ the best mechanism for the evaluation of success?
The educational policy highlight of the report is that we are going to be better able to identify failing schools. The report authors promise more accurate school inspections, because: “machine learning models can identify six times as many schools that are inadequate compared to random inspections.” The disappointing truth is that I agree with the BIT data science team about the likelihood of being able to identify so-called ‘failing schools’ easily using machine learning. Decisions about whether or not a school is failing are now based on a clearly identifiable set of criteria that can be extracted from available data sets about school performance. This type of work is easy for machine learning AI, which can process massive data sets far more quickly and accurately than we humans can.
But, is this really the best use of data science and machine learning AI that we can identify for education policy at the moment?
It seems to me a sad day for education when the key problem that we want to identify within our education system is failure! And I worry that the ease and speed with which our current inspection metrics and rationale make this possible are not good indicators of the successful application of data science or AI. However, the problem here is greater than merely a motivation that is driven by speed. There are two other BIG problems with this type of use of machine learning AI:
- The identification of failing schools by machine learning AI can only ever be as good as the data it processes, and I think we must reasonably question whether that data about schools is rich enough to capture what really matters when deciding who is failing;
- The machine learning AI cannot in any way explain any decision that it makes, a factor that will be painfully apparent when the machine learning AI makes unexpected decisions, as it may well do.
If we use these two problems to compare machine learning AI and our slow and inconsistent human intelligence, we can easily see where real differences of quality exist. Firstly, a human inspector may notice something about a school that is important in making any judgement about the quality of its provision, but that is not revealed in the limited data set processed by the machine learning AI. Perhaps, for example, a human inspector notices the enthusiasm with which the pupils rush into class, or the subtle emotional support provided by teachers and teaching assistants to pupils who are most in need, and whose positive response to this support is not part of the data captured for the machine learning AI.
Secondly, human inspectors are able to explain why they have made their decisions about the quality of the educational provision of the school they are inspecting. The fact that machine learning AI cannot explain itself is an enormous limitation on its usefulness, particularly in areas such as education where we expect explanations.
When I learned of the areas that the House of Lords select committee on AI wanted to probe in their 19th evidence session, I was slightly concerned, because it revealed the extent to which AI is equated with computer science. AI is concerned with increasing our understanding of how the intelligent human mind works by investigating the problem of designing machines that have intelligent abilities, as well as being about using computer science to build these intelligent machines to address problems. Fortunately, their Lordships are a human-intelligent group and their questioning roamed beyond computer science and highlighted some of the more troubling aspects of the current proliferation of machine learning AI when it comes to education. This interchange with Baroness Bakewell sums up one of the issues nicely:
Baroness Bakewell: I wonder whether you are not in danger of creating your own silos. The most inspirational teacher is a completely integrated human being with an enthusiasm for everything. It seems to me that we need those teachers, and we need them to be well briefed. I can see a place for this exact use of AI in marking, correcting, talking, language correction and so on. That is one silo. The other is the emotional intelligence that interprets it, but do they have to be at odds? Can we not use both?
Professor Rosemary Luckin: You can, absolutely. The key thing here is that we can use AI to deliver a knowledge-based curriculum. That is no problem; we have systems that can do that perfectly well. In fact, the current knowledge-based curriculum is based on the same psychology and models of memory that AI systems were originally built on. We let the AI get on with that, which means that the human teacher can do all the rest of the stuff that you were talking about—the integrated, holistic approach to learning—and take students beyond what they are doing now. If we do not do that, we are basically dooming these students to trying to do what the AI is doing but not being able to do it as quickly or as well.
This little interaction highlights the fact that the nature of our current UK curriculum lends itself nicely to automation. This automation will be much cheaper than the current use of human teachers to deliver the knowledge-based curriculum. However, if anyone succumbs to the temptation of the potential cost savings here, the results will be nothing short of catastrophic for learners and teachers. Education must increasingly be about so much more than what can be delivered by clever technology, because:
- This is where human teachers excel;
- This is what learners need, because it cannot be automated and will therefore be at a premium for employment in the AI augmented workplace of the future;
- AI will learn the knowledge-based curriculum faster and more accurately than any human learner, so we humans must be able to do more, much more than this.
I hope that our decision makers are smart enough to appreciate the importance of human intelligence, and that they will not be tempted by the huge cost savings that machine learning AI could deliver for education, if we continue to focus so exclusively on a curriculum and methodology that values what AI can achieve so easily.
A transcript of this session is available here.