Dr Hugh Morrison (Queen’s University Belfast [retired])
Given that Rose Luckin is professor of “learner-centred design” at UCL, one would expect her to have a strong appreciation of what the word “learning” means. It is not clear from her article that she does. Professor Luckin seems resigned to the fact that teachers must change and embrace a role for Artificial Intelligence in the classroom. According to Luckin, this acceptance of AI will enable teachers to influence how its various products are deployed in teaching and learning. Her sense of resignation is clear in the title of her piece: “AI is coming: use it or lose to it.” The headline writer at the TES goes further, seeming to suggest that teachers should yield a substantial part of their current remit to machines: “When knowledge isn’t power. Why teachers need to focus on the things machines can’t teach.”
Alas, both Professor Luckin and the TES seem entirely unaware that a “category error” lurks at the core of the AI project, an error which, once recognised, should protect the teaching profession from the over-reaching claims made for neural nets, deep learning and artificial intelligence.
Anyone familiar with the research of one of the giants of machine learning, the computer scientist Judea Pearl, will know that artificial intelligence, as currently conceived, has profound and intractable difficulties. (Pearl describes AI as little more than curve-fitting.) By way of illustration, consider a concept which should be close to the hearts of both Luckin and the TES, namely, “learning.” If any profession can lay claim to expertise concerning the nature of learning, it is teachers. From Professor Luckin’s TES article, I suspect she is unaware that AI suffers from a category error in respect of the concept “learning,” an error first identified by Aristotle, which goes by the name of the “mereological fallacy.”
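Pearl’s “curve-fitting” characterisation can be made concrete. The sketch below, a minimal illustration with made-up data, shows that what a neural net does when it “learns” is adjust numerical parameters to reduce an error on example data; here the net is stripped to its simplest case, a straight line with two parameters fitted by gradient descent:

```python
# A minimal sketch of Pearl's point: the whole of this model's "learning"
# is the adjustment of two numbers (w, b) to reduce a squared-error loss.
# The data and learning rate are illustrative assumptions, not from any
# real system.

def fit_line(points, lr=0.01, steps=5000):
    """Fit y = w*x + b to (x, y) pairs by gradient descent on squared error."""
    w, b = 0.0, 0.0
    n = len(points)
    for _ in range(steps):
        # Gradients of mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in points) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in points) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Data generated from y = 3x + 1; the fitted parameters recover the curve.
data = [(x, 3 * x + 1) for x in range(-5, 6)]
w, b = fit_line(data)  # w converges to about 3.0, b to about 1.0
```

A deep network differs from this only in scale: millions of parameters and a more elaborate curve, but the same procedure of error-driven parameter adjustment, which is why Pearl calls it curve-fitting rather than learning in the human sense.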
Those computer scientists who work in the field of so-called “deep learning” claim to model the learning that occurs in the brain using extremely complex neural nets. Watch any YouTube presentation in which an AI enthusiast lectures on the structures underpinning neural nets and you will likely hear the claim that learning and thinking are (neural) activities in the brain. However, it is nonsense to suggest that learning or thinking are processes located in the brain.
Popular science publications routinely refer to brains “learning”, “thinking”, “processing information,” “creating meaning,” “perceiving patterns” and so on. Now where is the scientific evidence for these claims? There are no laboratory demonstrations of brains learning or thinking. Such activities are carried out by human beings, not their brains. Needless to say, no one would dispute that without a functioning brain an individual couldn’t learn or think, but it does not follow that the individual’s brain is doing the thinking or learning.
While it is clear that learning would be impossible without a properly functioning brain, the claim that brains can learn or that learning takes place in the brain ought to be supported by scientific evidence. There isn’t any. To mistakenly attribute properties to the brain which are, in fact, properties of the human being is to fall prey to the “mereological fallacy” where mereology is concerned with part/whole relations.
To ascribe psychological predicates – such as “learn” and “think” – to the brain is simply nonsensical. If the human brain could learn or think, “This would be astonishing, and we should want to hear more. We should want to know what the evidence for this remarkable discovery was” (Bennett & Hacker, 2003, p. 71). “Psychological predicates are predicates that apply essentially to the whole animal, not its parts. It is not the eye (let alone the brain) that sees, but we see with our eyes (and we do not see with our brains, although without a brain functioning normally in respect of the visual system, we would not see)” (Bennett & Hacker, 2003, pp. 72-73).
“We know what it is for human beings to experience things, to see things, to know or believe things, to make decisions … But do we know what it is for a brain to see …for a brain to have experiences, to know or believe something? Do we have any conception of what it would be like for a brain to make a decision? … These are all attributes of human beings. Is it a new discovery that brains also engage in such human activities?” (Bennett & Hacker, 2003, p. 70).
“It is our contention that this application of psychological predicates to the brain makes no sense. It is not that as a matter of fact brains do not think, … rather, it makes no sense to ascribe such predicates or their negations to the brain. … just as sticks and stones are not awake, but they are not asleep either” (Bennett & Hacker, 2003, p. 72).
If one casts one’s mind back through the many, many ill-conceived fads visited upon a long-suffering teaching profession, one may recall the “brain-based learning” movement. Proponents of brain-based learning were constantly drawing the attention of mathematics teachers, for example, to the illuminated area of the brain devoted to the learning of mathematics. A more careful, conservative approach which eschews hype would be to say that this area of the brain is “lit up” when the person learns mathematics. Bennett & Hacker (2007, p. 143) demonstrate how careful science avoids the hype which characterises popular accounts of the functioning of the brain: “All his brain can show is what goes on there while he is thinking; all fMRI scanners can show is which parts of his brain are metabolizing more oxygen than others when the patient in the scanner is thinking.”
Luckin proposes the following: “To ensure their place in the schools of the future, educators need to move on from a knowledge-based curriculum that could soon become automatable through AI.” Rather than urging yet further radical professional change on already innovation-fatigued teachers, she should be protecting schools from the over-hyped claims of the AI industry. Luckin’s radical suggestion for the future of the teaching profession reveals a lamentable grasp of the fundamental concepts “learning” and “knowledge”: “It is not that the knowledge-based curriculum is wrong per se, the problem is that it is wrong for the 21st century. Because now that we can build AI systems that can learn well-defined knowledge so effectively, it’s probably not very wise to continue to develop the human intelligence of our students to achieve this main goal.”
The key words in this quotation are: “we can build AI systems that can learn well-defined knowledge.” Surely the central aim of AI is to design machines which can “learn” and “know” in the same way as human beings learn and know? I have already established that, for human beings, learning is not an activity of the mind or brain. What of Luckin’s claim that machines can have access to knowledge? Wittgenstein teaches that “The grammar of the word ‘knows’ is … closely related to the word ‘understands’” (PI, §150). To know or understand is not to have access to inner states of the mind or brain; knowing and understanding are best thought of as capacities. Rowlands (2003, p. 5) writes: “Thus, according to Wittgenstein, to … understand something by a sign is not to be the subject of an inner state or process. Rather, it is to possess a capacity: the capacity to adjust one’s usage of the sign to bring it into line with custom or practice. And this connects … understanding with structures that are external to the subject of this … understanding.”
According to Wittgenstein, human knowledge is best construed as a capacity rather than an inner actuality. An AI machine capable of knowing or understanding the concept “molecule,” say, as a human being does, would have to be capable of adjusting its use of the concept “molecule” so that it accords with the established use of that concept in physics, biology, and so on. In short, a machine capable of non-collusively agreeing with the human practices which surround it! Moreover, these human practices lie outside the computer.
I disagree with the headline on the front page of the TES; the invaluable mathematical knowledge I acquired from my teachers and lecturers allows me to confirm Judea Pearl’s claim that deep learning algorithms amount to little more than mathematical curve-fitting, and machines capable of knowing, thinking, learning and understanding are a fantasy. My mathematical knowledge protects me from hype. Pace the front page of the TES, knowledge is power.
The teaching profession would be well advised to give AI a wide berth. AI research conducted at Cambridge and Stanford universities has been described as “incredibly ethically questionable” by Professor Alexander Todorov, who warns that “developments in artificial intelligence and machine learning have enabled scientific racism to enter a new era” (see The Guardian 07.07.18). I will leave the last word to mathematician David Sumpter (2018, p. 226). He reports on a Future of Life Institute meeting: “Despite the panel’s conviction that AI is on its way, my scepticism increased as I watched them talk. I had spent the last year of my life dissecting algorithms used within the companies these guys lead and, from what I have seen, I simply couldn’t understand where they think this intelligence is going to come from. I had found very little in the algorithms they are developing to suggest that human-like intelligence is on its way. As far as I could see, this panel, consisting of the who’s-who of the tech industry, wasn’t taking the question seriously. They were enjoying the speculation, but it wasn’t science. It was pure entertainment.”
Bennett, M.R., & Hacker, P.M.S. (2003). Philosophical foundations of neuroscience. Oxford: Blackwell Publishing.
Bennett, M.R., & Hacker, P.M.S. (2007). The conceptual presuppositions of cognitive neuroscience. In M.R. Bennett, D. Dennett, P.M.S. Hacker, & J. Searle, Neuroscience and philosophy (pp. 127-162). New York: Columbia University Press.
Rowlands, M. (2003). Externalism. Ithaca: McGill-Queen’s University Press.
Sumpter, D. (2018). Outnumbered. London: Bloomsbury Sigma.
Wittgenstein, L. (1953). Philosophical investigations (G.E.M. Anscombe & R. Rhees, Eds.; G.E.M. Anscombe, Trans.). Oxford: Blackwell.