Why the Ofqual/CCEA proposal for using teacher judgement to grade the 2020 GCSE/A level examinations is indefensible.



The claim made in this essay is that the academic literature clearly indicates that teachers’ capacity to predict their pupils’ grades falls far below acceptable levels. Furthermore, the evidence that teachers can rank-order their pupils within a grade is scant to non-existent. Indeed, the Awarding Bodies could not substantiate the claim that any of their examinations rank-orders pupils on the construct it purports to measure. It follows that the only defensible solution is to provide two measures per examination: (i) a teacher-predicted grade (without an associated rank order); and (ii) a test that could be used, if the pupil so decides, to override the teacher prediction. Where a pupil cannot take the test, he or she must accept the teacher-predicted grade. There is no credible evidence in the literature that some “standardization” algorithm (which has yet to be detailed by Ofqual or CCEA) can be mobilized to correct for any excesses of teacher judgement.

The perils of expert prediction of all types

As far back as 1954 Paul Meehl (Clinical versus Statistical Prediction: A theoretical analysis and a review of the evidence) analysed the ability of a range of teachers to predict measures of academic success, and found scant evidence that this could meet acceptable standards. Meehl’s book ranged far beyond teachers’ predictions of grades to consider, for example, expert predictions of an individual’s probability of violating parole, predictions of success in pilot training, predictions of criminal recidivism, and so on. In his book Thinking, Fast and Slow the Nobel Laureate Daniel Kahneman (2011, p. 225) endorsed Meehl’s findings and stressed that the range of studies demonstrating the limitations of experts’ abilities to predict the future had expanded greatly since Meehl’s book was published:

“Another reason for the inferiority of expert judgement is that humans are incorrigibly inconsistent in making summary judgements of complex information. When asked to evaluate the same information twice, they frequently give different answers. The extent of the inconsistency is often a matter of real concern. Experienced radiologists who evaluate chest X-rays as “normal” or “abnormal” contradict themselves 20% of the time when they see the same picture on separate occasions. A study of 101 independent auditors who were asked to evaluate the reliability of internal corporate audits revealed a similar degree of inconsistency. A review of 41 separate studies of the reliability of judgements made by auditors, psychologists, pathologists, organizational managers, and other professionals suggests that this level of inconsistency is typical, even when a case is re-evaluated within a few minutes. Unreliable judgements cannot be valid predictors of anything.”

The perils of predicting within-grade rank order

Ofqual and CCEA are requiring teachers to rank-order their pupils according to their achievement in mathematics, English, biology, and so on. However, no GCSE or A level examination designed by the Awarding Bodies can itself perform this feat. Rank-ordering candidates on the construct “achievement in geography,” for example, is a validity issue, and the Awarding Bodies have a very, very poor record in this area.

In 1991 an expert on the work of the examination boards, Robert Wood, summarized his conclusions in the book Assessment and Testing: A survey of research, commissioned by the University of Cambridge Local Examinations Syndicate (UCLES). On pages 147–151 he wrote:

“If an examining board were to be asked point blank about the validities of its offerings or, more to the point, what steps it takes to validate the grades it awards, what might it say? … The examining boards have been lucky not to have been engaged in validity argument. … Nevertheless, the extent of the boards’ neglect of validity is plain to see once attention is focused. Whenever boards make claims that they are measuring the ability to make clear reasoned judgements, or the ability to form conclusions (both examples from IGCSE and Economics), they have a responsibility to at least attempt a validation of the measures. … The boards know so little about what they are assessing that if, for instance, it were to be said that teachers assess ability … rather than achievement, the boards would be in no position to defend themselves. … As long as examination boards make claims that they are assessing this or that ability or skill, they are vulnerable to challenge from disgruntled individuals.”

The claim that a GCSE or A level examination rank-orders candidates on some appropriate construct would require the Awarding Bodies to use Structural Equation Modelling to compute three fit indices: the root mean-square residual (RMR), the adjusted goodness-of-fit index (AGFI), and chi-squared divided by degrees of freedom. To claim a rank order, these three statistics would have to be shown to satisfy the relevant inequalities. Can it be reasonable to ask teachers to predict something that is beyond the capabilities of the GCSE and A level examinations themselves?
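By way of illustration, benchmarks of roughly the following form are commonly quoted in the Structural Equation Modelling literature; the exact cut-off values vary from author to author and are given here only as an illustrative assumption, not as the Awarding Bodies’ (non-existent) criteria:

```latex
\frac{\chi^2}{df} < 2, \qquad \mathrm{AGFI} > 0.90, \qquad \mathrm{RMR} < 0.05
```

An examination claiming to rank-order candidates on a construct would need to demonstrate that its fitted measurement model satisfies inequalities of this kind.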

The resolution

Needless to say, staff at Ofqual and CCEA are mandated to provide young people with grades that are as error-free as possible. They should take heed of Paul Meehl’s counsel in respect of teachers’ capacities to anticipate the future: “When one is dealing with human lives and life opportunities, it is immoral to adopt a mode of decision-making which has been demonstrated repeatedly to be … inferior.” If teacher judgement (omitting the requirement to rank-order) is to be used to forecast grades, pupils must also be offered speedy access to a public examination which protects them from the well-documented vagaries of teacher prediction.

Stephen Elliott





Lyra McKee

This is the sort of honest communication between the generations that has the potential to do more good than the wasted hundreds of millions spent on conflict resolution ever could.


I’ve waited some time before putting pen to paper.

The death of a young woman, not much older than my daughter, is hard.

Murder is harder still.

When I got the news, at two in the morning, I did not sleep again that night.

I was introduced to Lyra about five years ago. There could scarcely have been two less similar people.

I met this small, owlish, slightly diffident girl, in a Victoria Square coffee shop. She met a grumpy old man, with issues and a background. She had many difficulties with technology, which we laughed about. She was softly spoken, and I’m slightly deaf.

I was hoping that she could introduce me to contacts that might progress my enquiries into the murders of my parents. This she did.

Despite the disparity in our ages and in our experience of the world, she dispensed sage advice about me and my predicament. She was…


Why AI will never expose the “mind’s inner workings”



The text of a letter submitted to the New Scientist in reply to an article by Timothy Revell on a claim that mind-reading devices can access your thoughts and dreams using AI.

As usual, there has been no acknowledgement, response or publication by the New Scientist.

Timothy Revell’s article Thoughts Laid Bare (29 September, p. 28) illustrates a worrying tendency of AI enthusiasts to over-hype the capabilities of their algorithms. The article suggests that AI offers the possibility of the “ultimate privacy breach” by gaining access to “one of the only things we can keep to ourselves,” namely, “the thoughts in our heads.”

Niels Bohr counselled that the hallmark of science is not experiment or even quantification, but “unambiguous communication.” AI has much to learn from this great physicist. When one scans an individual’s brain one does not thereby gain any access whatsoever to that individual’s thoughts; brains are in the head while thoughts are not. The brain isn’t doing the thinking. As far back as 1877, G H Lewes cautioned: “It is the man and not the brain that thinks.” To quote Peter Hacker, what neuroscientists show us “is merely a computer-generated image of increased oxygenation in select areas of the brain” of the thinking individual. Needless to say, one cannot think without an appropriately functioning brain, but thinking is not located in the brain; no analysis of neural activity will give insight into thoughts, because thinking is neither an activity of the mind nor of the brain.

In ascribing thoughts to the brain or the mind (rather than to the individual) AI falls prey to a fallacy that can be traced all the way back to Aristotle: the “mereological fallacy.”

Dr Hugh Morrison, The Queen’s University, Belfast (retired)


The incoherence of Professor Boaler’s “Visual Mathematics”



Dr Hugh Morrison (The Queen’s University of Belfast [retired])


Professor Jo Boaler’s case for a new approach to teaching and learning in mathematics is an incoherent mix of dubious mathematical reasoning and neuroscience.  Boaler’s (2016, p. 1) claim that her “visual mathematics” approach satisfies “an urgent need for change in the way mathematics is offered to learners”[1] is outlined in her TEDx Stanford presentation entitled How you can be good at math, and other surprising facts about learning.  Her recent visit to Scotland confirms that her visual approach is now being urged upon that country’s teachers.  This short essay is designed to alert teachers everywhere to the dangers of replacing traditional approaches to pedagogy with Professor Boaler’s confused reasoning.

Boaler’s case for her visual mathematics is illustrated using a sequence of patterns, each comprising a number of squares (see her TEDx Stanford talk for a very engaging outline of her analysis): the first pattern (n = 1) has four squares, the second (n = 2) has nine squares, the third (n = 3) has sixteen squares, and so on.  (The reader will, no doubt, recognise the three numbers 4, 9 and 16 as “square numbers” because 2² = 4, 3² = 9 and 4² = 16.)  The pupil is asked to continue the patterns in the same way and find the general rule of which these three patterns are instances.  According to Boaler’s TEDx Stanford presentation, the general rule which generates the number of squares (4, 9, 16, and so on) in the sequence of patterns is, needless to say: number of squares = (n + 1)².  This isn’t difficult to verify.  Substituting n = 1 in this rule gives 4, substituting n = 2 gives 9, and substituting n = 3 gives 16, and so on.

Every mathematics teacher in England, Wales and Northern Ireland with experience of GCSE mathematics coursework – now abandoned in the UK after decades of effort to promote and assess “discovery learning” – will recognise Professor Boaler’s illustrative example of visual mathematics as one of the GCSE “growing squares” tasks.  Indeed, one could be forgiven for thinking that Boaler’s visual mathematics is little more than her UK experience of discovery learning, with a pinch of neuroscience.  Once identified, the (n + 1)² rule can then be used to continue the sequence of patterns onwards, yielding:

4, 9, 16, 25, 36, 49, …






There can be little doubt that mathematical activities such as the “growing squares” task serve to enrich the mathematical experience of children by teaching the principles of problem-solving, and facilitating collaborative learning.  However, the case I want to advance in this essay is that it is nonsensical to argue, as Boaler does, that such activities can ever challenge established, “traditional” approaches to the teaching and learning of mathematics.  Traditional learning is always prior to discovery learning; without the framework laid down by the traditional teacher (the so-called “fiduciary” framework), discovery learning is impossible.  Boaler’s error is to have ignored Polanyi’s (1958, p. 266) warning: “No intelligence, however critical or original, can operate outside such a fiduciary framework.”[2]   Boaler’s visual mathematics can never replace the traditional approach to teaching and learning.

Michael Polanyi

Professor Boaler seems unaware of a problem first identified by the great mathematician Leibniz, namely, that a finite number of examples always underdetermines the rule which generates these examples.  Boaler focuses on the rule (n + 1)2 as the answer, but there is an infinity of such answers.  Anscombe (1985, pp. 342 – 343) presents the Leibniz argument using the even numbers: “[A]lthough an intelligence tester may suppose that there is only one possible continuation to the sequence 2, 4, 6, 8, …, mathematical and philosophical sophisticates know that an indefinite number of rules (even rules stated in terms of mathematical functions as conventional as polynomials) are compatible with any such finite initial segment.  So if the tester urges me to respond, after 2, 4, 6, 8, with the unique appropriate next number, the proper response is that no such unique number exists. … The intelligence tester has arbitrarily fixed on one answer as the correct one.”[3]

Gottfried Leibniz

In her TEDx Stanford presentation, Boaler presents her pupils with three instances of a rule (the first pattern has 4 squares, the second has 9 squares, and the third has 16 squares) and implies that the brain (for Professor Boaler the brain is pivotal to appreciating how children learn mathematics) should, after a process of understanding, arrive at the conclusion that the rule is:

number of squares = (n + 1)²

However, there is an infinite number of alternative rules which begin with the numbers 4, 9, 16, but diverge thereafter.  These can be characterised as follows:

number of squares = (n + 1)² + a(n – 1)(n – 2)(n – 3)

where a can take an infinite number of values.

For example, a = 0.5 generates the sequence: 4, 9, 16, 28, …

and a = 5 generates the sequence: 4, 9, 16, 55, …

and a = 12.5 generates the sequence: 4, 9, 16, 100, …

and a = 100 generates the sequence: 4, 9, 16, 625, …

and a = 2000 generates the sequence: 4, 9, 16, 12025, …
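The divergence is easy to check numerically. A minimal sketch in Python (the function name is illustrative) confirms that every member of this family reproduces the three given patterns before going its own way:

```python
def squares(n, a=0):
    """Number of squares in the nth pattern under the family of rules
    (n + 1)^2 + a(n - 1)(n - 2)(n - 3); a = 0 gives Boaler's rule."""
    return (n + 1) ** 2 + a * (n - 1) * (n - 2) * (n - 3)

# Every value of a agrees with the three given patterns (4, 9, 16)...
for a in (0, 0.5, 5, 12.5, 100, 2000):
    assert [squares(n, a) for n in (1, 2, 3)] == [4, 9, 16]

# ...but the rules diverge from the fourth pattern onwards.
print([squares(4, a) for a in (0, 0.5, 5, 12.5, 100, 2000)])
# → [25, 28.0, 55, 100.0, 625, 12025]
```

The cubic term vanishes at n = 1, 2 and 3, which is precisely why no finite set of examples can single out one rule.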

In all these cases, the pupil can protest that he or she went on in the same way.  It is tempting to suggest that the pupil’s way diverges from Professor Boaler’s because he or she had a different rule in mind, or should I say, in brain.  Indeed, what of the pupil who simply repeats the pattern, arriving at the sequence: 4, 9, 16, 4, 9, 16, 4, 9, 16, … ?  Hasn’t this pupil gone on in the same way as indicated by the three initial examples?


Clearly, one rarely, if ever, comes across a pupil who would propose one of these alternatives to the (n + 1)2 rule in real classrooms, but Boaler’s thesis is that understanding is a rational process in the brain.  By what neural mechanism would the rational brain select one rule – Boaler’s (n + 1)2 rule – from an indefinite number of alternatives?  How, in a finite time period, does the brain brand all of these alternative rules (indefinite in number) as somehow incorrect, and settle on the (n + 1)2 rule as the correct rule?  What possible criterion does the brain use to distinguish correct from incorrect?


The role of the brain in mathematics is central to Boaler’s research.  It is tempting therefore to think that the “visual” learner, in understanding the problem, attaches an interpretation (something represented in the brain) to the three examples which constitute the statement of the problem, namely, a pattern of four squares, followed by a pattern of nine squares, followed by a pattern of sixteen squares.  If a pupil responds by suggesting that the next pattern is made up of 55 squares, for example (the a = 5 sequence has 55 for its fourth term), Professor Boaler will treat this as a mistake (after all, the “correct” answer is 25).


But nothing in the statement of the problem rules out the answer “55” because it could be argued that the pupil has merely interpreted the statement of the problem in a way which is at odds with Professor Boaler’s.  Of course, the pupil’s interpretation accords perfectly with the three examples used in the statement of the problem.  The pupil has responded correctly to the instruction to continue the sequence of shapes in the same way.  What makes Professor Boaler’s interpretation correct and the pupil’s incorrect?  Indeed, any answer whatsoever to the question “how many squares are in the fourth pattern?” will be correct on some interpretation.  Wright (2001, p. 98) captures this intractable situation in the words: “Finite behaviour cannot constrain its interpretation to within uniqueness.”[4]  This is at the core of the case made by Leibniz.  It would seem that if understanding in mathematics is construed as an activity of the mind or brain, then the notions of a “correct” and an “incorrect” answer are rendered meaningless!


Did anyone at Professor Boaler’s TEDx Stanford talk, or at her Scotland talk, spot this profound error in her reasoning?  Thousands of papers and many hundreds of books have been written about Ludwig Wittgenstein’s resolution of what has been called the “rule-following paradox.”  Furthermore, Wittgenstein’s resolution is unlikely to be to Professor Boaler’s liking, for the solution emphasises the traditional classroom in which children are trained to adhere to established mathematical practices, and Wittgenstein makes no mention of the brain.  According to Wittgenstein, we are forced to conclude that children go to school to acquire a “framework” or “background,” which they grow to accept without question and within which they can be creative.  This framework constrains the pupil’s creativity in order that he or she can be understood by peers and teachers, but, it never determines the pupil’s subsequent response to any particular problem.


If the object of analysis is the pupil treated as a separate individual with a brain/mind, divorced from the framework of mathematical practices that the pupil takes on trust from authority figures at school and beyond, one gets incoherent nonsense.  Understanding is not an activity, state or process of the brain or mind; understanding is a capacity.  This is the error at the heart of Boaler’s analysis: her model omits the framework of mathematical customs and practices which the pupil has come to accept (as common sense, one might say) through his or her training at school.  According to Scruton (1981, p. 291), “All attempts to understand the human mind in isolation from the social practices through which it finds expression”[5] are doomed to fail.
Ludwig Wittgenstein

In Zettel (§419) Wittgenstein cautions: “Any explanation has its foundations in training. (Educators ought to remember this).”[6]  Because she omits the all-important, long-established mathematical customs and practices in which the pupil participates in the traditional classroom, and treats the pupil as separately analysable, Boaler is forced to accept an indefinite number of different answers as the correct answer for the number of squares in the 4th pattern, for example, and she must conclude that it is meaningless for pupils to seek the correct rule.

It is instructive to identify the source of the incoherence in Boaler’s “visual mathematics.”  She has a confused grasp of the notion of understanding in general and in mathematics in particular.  Her 2016 paper with Chen, Williams and Cordero has the title: Seeing as understanding: The importance of visual mathematics for our brain and learning.[7]  For Boaler, understanding consists in an inner state or inner process in the head, that state or process being the source of the pupil’s subsequent behaviour.  Nothing could be further from the truth: Rowlands (2003, p. 5) writes:

Thus, according to Wittgenstein, to … understand something by a sign is not to be the subject of an inner state or process.  Rather, it is to possess a capacity: the capacity to adjust one’s use of the sign to bring it into line with custom or practice.  And this connects … understanding with structures that are external to the subject.[8]

Note the absence of any mention of the brain in Wittgenstein’s resolution of the rule-following paradox.

This is why the vast majority of children respond to the “growing squares” problem with the answer: number of squares = (n + 1)².  It is custom and practice in mathematics to respond in this way.  The conundrum identified by Leibniz (see above) is also resolved.  If understanding the even numbers is a matter of adjusting one’s behaviour to accord with the established mathematical practice in respect of these numbers, then there is only one unique answer we ought to give, namely, “10.”  In §6.21 of his Remarks on the Foundations of Mathematics, Wittgenstein writes: “The application of the concept ‘following a rule’ presupposes a custom,”[9] and McGinn (1984, p. 39) defines custom as follows: “A custom, like a habit, is something that gets established, not through the deliverances of reason, but on the basis of what we might call a tradition.”[10]  Boaler et al. (2016, p. 5) must appreciate that if students were not “made to memorise math facts, and plough through worksheets of numbers,”[11] in the traditional classroom, mathematical rules would not so much as exist.



[1] Boaler, J., Chen, L., Williams, C., & Cordero, M. (2016).  Seeing as understanding: The importance of visual mathematics for our brain and learning.  Journal of Applied & Computing Mathematics, 5(5), 1-6.

[2] Polanyi, M. (1958).  Personal knowledge.  Chicago: University of Chicago Press.

[3] Anscombe, G.E.M. (1985).  Wittgenstein on rules and private language.  Ethics, 95, 342-352.

[4] Wright, C. (2001).  Rails to infinity.  Cambridge, MA: Harvard University Press.

[5] Scruton, R. (1981).  A short history of modern philosophy.  London: Taylor and Francis.

[6] Wittgenstein, L. (1967).  Zettel.  Oxford: Blackwell

[7] Boaler, J., Chen, L., Williams, C., & Cordero, M. (2016).  Seeing as understanding: The importance of visual mathematics for our brain and learning.  Journal of Applied & Computing Mathematics, 5(5), 1-6.

[8] Rowlands, M. (2003).  Externalism.  Ithaca: McGill-Queen’s University Press.

[9] Wittgenstein, L. (1956).  Remarks on the foundations of mathematics.  Cambridge MA: MIT Press.

[10] McGinn, C. (1984).  Wittgenstein on meaning.  Oxford: Blackwell.

[11] Boaler, J., Chen, L., Williams, C., & Cordero, M. (2016).  Seeing as understanding: The importance of visual mathematics for our brain and learning.  Journal of Applied & Computing Mathematics, 5(5), 1-6.

Response to Professor Luckin’s TES (29.06.2018) article: “AI is coming: use it or lose it.”





Alan Turing

Dr Hugh Morrison (Queen’s University Belfast [retired])


Hilary Putnam


Jerry Fodor


Jerome Bruner

Given that Rose Luckin is professor of “learner-centred design” at UCL, one would expect that she has a strong appreciation of the meaning of the word “learning.”  This isn’t clear from her article.  Professor Luckin seems resigned to the fact that teachers must change and embrace a role for Artificial Intelligence in the classroom.  According to Luckin, this acceptance of AI will enable teachers to influence how its various products will be deployed in teaching and learning.  Professor Luckin’s sense of resignation is clear in the title of her piece: “AI is coming: use it or lose it.”  The headline writer at the TES goes further, seeming to suggest that teachers should yield a substantial part of their current remit to machines: “When knowledge isn’t power.  Why teachers need to focus on the things machines can’t teach.”


Alas, both Professor Luckin and the TES seem totally unaware that a “category error” lurks at the core of the AI project, a category error which should be deployed to protect the teaching profession from the impact of neural nets, deep learning and artificial intelligence.


Anyone familiar with the research of one of the giants of machine learning, the computer scientist Judea Pearl, will know that artificial intelligence, as currently conceived, has profound and intractable difficulties.  (Pearl describes AI as little more than curve-fitting.)  By way of illustration, consider a concept which should be close to the hearts of both Luckin and the TES, namely, “learning.”  If any profession can lay claim to expertise concerning the nature of learning, it is teachers.  From Professor Luckin’s TES article, I suspect she is unaware that AI suffers from a category error in respect of the concept “learning,” an error first identified by Aristotle, which goes by the name of the “mereological fallacy.”


Judea Pearl

Those computer scientists who work in the field of so-called “deep learning” claim to model the learning that occurs in the brain using extremely complex neural nets.  Look at any YouTube presentation in which an AI enthusiast lectures on the structures underpinning neural nets and you will likely hear the claim that learning and thinking are (neural) activities in the brain.  However, it transpires that it is nonsense to suggest that learning or thinking are processes located in the brain.

Popular science publications routinely refer to brains “learning”, “thinking”, “processing information,” “creating meaning,” “perceiving patterns” and so on.  Now where is the scientific evidence for these claims?  There are no laboratory demonstrations of brains learning or thinking.  Such activities are carried out by human beings, not their brains.  Needless to say, no one would dispute that without a functioning brain an individual couldn’t learn or think, but it does not follow that the individual’s brain is doing the thinking or learning.

While it is clear that learning would be impossible without a properly functioning brain, the claim that brains can learn or that learning takes place in the brain ought to be supported by scientific evidence.  There isn’t any.  To mistakenly attribute properties to the brain which are, in fact, properties of the human being is to fall prey to the “mereological fallacy” where mereology is concerned with part/whole relations.

To ascribe psychological predicates – such as “learn” and “think” – to the brain is simply nonsensical.  If the human brain could learn or think, “This would be astonishing, and we should want to hear more.  We should want to know what the evidence for this remarkable discovery was” (Bennett & Hacker, 2003, p. 71)[1].  “Psychological predicates are predicates that apply essentially to the whole animal, not its parts.  It is not the eye (let alone the brain) that sees, but we see with our eyes (and we do not see with our brains, although without a brain functioning normally in respect of the visual system, we would not see)” (Bennett & Hacker, 2003, pp. 72-73)[2].

“We know what it is for human beings to experience things, to see things, to know or believe things, to make decisions … But do we know what it is for a brain to see …for a brain to have experiences, to know or believe something?  Do we have any conception of what it would be like for a brain to make a decision? … These are all attributes of human beings.  Is it a new discovery that brains also engage in such human activities?” (Bennett & Hacker, 2003, p. 70)[3].

“It is our contention that this application of psychological predicates to the brain makes no sense.  It is not that as a matter of fact brains do not think, … rather, it makes no sense to ascribe such predicates or their negations to the brain. … just as sticks and stones are not awake, but they are not asleep either” (Bennett & Hacker, 2003, p. 72)[4].

If one casts one’s mind back through the many, many ill-conceived fads visited upon a long-suffering teaching profession, one may recall the “brain-based learning” movement.  Proponents of brain-based learning were constantly drawing the attention of mathematics teachers, for example, to the illuminated area of the brain devoted to the learning of mathematics.  A more careful, conservative approach which eschews hype would be to say that this area of the brain is “lit up” when the person learns mathematics.  Bennett & Hacker (2007, p. 143) demonstrate how careful science avoids the hype which characterises popular accounts of the functioning of the brain: “All his brain can show is what goes on there while he is thinking; all fMRI scanners can show is which parts of his brain are metabolizing more oxygen than others when the patient in the scanner is thinking.”[5]

Luckin proposes the following: “To ensure their place in the schools of future, educators need to move on from a knowledge-based curriculum that could soon become automatable through AI.”  Rather than urging yet further radical professional change on already innovation-fatigued teachers, she should be protecting schools from the over-hyped claims of the AI industry.  Luckin’s radical suggestion for the future of the teaching profession reveals a lamentable grasp of the fundamental concepts “learning” and “knowledge”: “It is not that the knowledge-based curriculum is wrong per se, the problem is that it is wrong for the 21st century.  Because now that we can build AI systems that can learn well-defined knowledge so effectively, it’s probably not very wise to continue to develop the human intelligence of our students to achieve this main goal.”

The key words in this quotation are: “we can now build AI systems that can learn well-defined knowledge.”  Surely the central aim of AI is to design machines which can “learn” and “know” in the same way as human beings learn and know?  I have already established that for human beings, learning is not an activity of the mind/brain.  What about Luckin’s claim that machines can have access to knowledge?  Wittgenstein teaches that “The grammar of the word ‘knows’ is … closely related to the word ‘understands’” (PI, §150)[6].  To know or understand is not to have access to inner states of the mind or brain; knowing and understanding are best thought of as capacities.  Rowlands (2003, p. 5) writes: “Thus, according to Wittgenstein, to … understand something by a sign is not to be the subject of an inner state or process.  Rather, it is to possess a capacity: the capacity to adjust one’s usage of the sign to bring it into line with custom or practice.  And this connects … understanding with structures that are external to the subject of this … understanding.”[7]

According to Wittgenstein, human knowledge is best construed as a capacity rather than an inner actuality.  An AI machine capable of knowing or understanding the concept “molecule,” say, as a human being does, would have to be capable of adjusting its use of the concept “molecule” so that it accords with the established use of that concept in physics, biology, and so on.  In short, a machine capable of non-collusively agreeing with the human practices which surround it!  Moreover, these human practices lie outside the computer.

I disagree with the headline on the front page of the TES; the invaluable mathematical knowledge I acquired from my teachers and lecturers allows me to confirm Judea Pearl’s claim that deep learning algorithms amount to little more than mathematical curve-fitting, and machines capable of knowing, thinking, learning and understanding are a fantasy.  My mathematical knowledge protects me from hype.  Pace the front page of the TES, knowledge is power.

David Sumpter, Outnumbered (ISBN 978-1-4729-4741-3)

The teaching profession would be well advised to give AI a wide berth.  AI research conducted at Cambridge and Stanford universities has been described as “incredibly ethically questionable” by Professor Alexander Todorov, who warns that “developments in artificial intelligence and machine learning have enabled scientific racism to enter a new era” (see The Guardian 07.07.18).  I will leave the last word to mathematician David Sumpter (2018, p. 226).  He reports on a Future of Life Institute meeting: “Despite the panel’s conviction that AI is on its way, my scepticism increased as I watched them talk.  I had spent the last year of my life dissecting algorithms used within the companies these guys lead and, from what I have seen, I simply couldn’t understand where they think this intelligence is going to come from.  I had found very little in the algorithms they are developing to suggest that human-like intelligence is on its way.  As far as I could see, this panel, consisting of the who’s-who of the tech industry, wasn’t taking the question seriously.  They were enjoying the speculation, but it wasn’t science.  It was pure entertainment.”[8]

[1] Bennett, M.R., & Hacker, P.M.S. (2003).  Philosophical foundations of neuroscience.  Oxford: Blackwell Publishing.

[2] Ibid.

[3] Ibid.

[4] Ibid.

[5] Bennett, M.R. & Hacker, P.M.S. (2007).  The conceptual presuppositions of cognitive neuroscience.  In M.R. Bennett, D. Dennett, P.M.S. Hacker, & J. Searle, Neuroscience and philosophy (pp. 127-162).  New York: Columbia University Press.

[6] Wittgenstein, L. (1953).  Philosophical investigations.  G.E.M. Anscombe, & R. Rhees (Eds.), G.E.M. Anscombe (Tr.).  Oxford: Blackwell.

[7] Rowlands, M. (2003).  Externalism.  Ithaca: McGill-Queen’s University Press.

[8] Sumpter, D. (2018).  Outnumbered.  London: Bloomsbury Sigma.


Language rights in Mother Ireland



The Decline of the Celtic Languages

The Northern Ireland Assembly’s failure to deliver “language rights” is a constant refrain of Sinn Fein politicians. One such right must be: “the right of every parent to exclude their child from all aspects of the teaching and assessment of Irish in school.”

The Irish language now has a unique place in the history of teaching and assessment in that thousands of Irish parents have sought out psychologists to confirm that their child has a disability which precludes them from sitting examinations in Irish. This purported disability appears to be unique to Irish in that many of the children suffering from it nevertheless present for the relevant examinations in French, German, Spanish etc. According to the banner headline on the front page of The Irish Times (14.06.2018): “The Department of Education is reviewing the granting of exemptions from studying Irish amid evidence that thousands of students who secure them are sitting exams in foreign languages.” The fact that “[s]tudents or their parents are required to submit a psychologist’s report in order to secure an exemption on the grounds of disability” may give the impression that negative attitudes to the teaching and assessment of Irish are a new development, or are the preserve of the rich. Alas, history reveals that strong negative attitudes to learning Irish in school transcend class in Ireland.


Writing in The Decline of the Celtic Languages, Victor Durkacz notes: “Every interest, claimed the Society for Educating the Poor in Ireland, ‘every ambition, every means of advancement and hope of profit for the peasantry, depend upon their acquisition of English.’ Thus the Commissioners of National Education could truthfully claim that at no time had the parents shown an inclination for the schools to cultivate Irish. On the contrary, they had demonstrated their anxiety that their children should learn English.”

The fundamental flaw in the University of Cambridge’s psychographics algorithm: private traits are not predictable from Facebook “likes.”



Dr Hugh Morrison (The Queen’s University of Belfast [retired])

Despite the fact that research carried out at the University of Cambridge’s Psychometrics Centre is at the very heart of the Cambridge Analytica debate, this research seems to have been subject to little or no serious scrutiny.  This is puzzling given that an analysis of the Apply Magic Sauce algorithm, developed at the University, could settle, once and for all, whether tools developed at Cambridge could have influenced the Brexit or Trump votes.  The impression has been created that the Cambridge academics, armed only with an individual’s Facebook “likes,” could somehow use this Apply Magic Sauce algorithm to peer into the mind of that individual.

The Cambridge academics (erroneously) portray psychological constructs such as personality and intelligence as “inner”, “private” traits, somehow hidden in mind.  It is claimed, however, that provided one has access to an individual’s Facebook likes, one can use the Apply Magic Sauce algorithm to represent numerically that individual’s five-trait personality profile, together with his or her intelligence.  It is easy to see how the title of a paper by Michal Kosinski, David Stillwell & Thore Graepel (2013)[1], published in PNAS – “Private traits and attributes are predictable from digital records of human behaviour” – gives the impression that tools developed at Cambridge can somehow “reveal” an individual’s personality/intelligence, given his or her digital footprint.  It will be argued in this essay that no scientific basis exists for this claim.

Nina Burleigh, writing in Newsweek (18.06.2017), paints a picture of a dystopian future in which algorithms can be used to infer psychological profiles by stealth: “Big Data, artificial intelligence and algorithms designed and manipulated by strategists like the folks at Cambridge have turned our world into a Panopticon, the 19th century circular prison designed so that guards, without moving, could observe every inmate every minute of every day.”  The Cambridge Analytica whistleblower, Christopher Wylie, goes further in the Observer newspaper of 18.03.2018.  According to Wylie, because “personality traits could be a precursor to political behaviour,” (p. 10) then the tools developed at Cambridge could represent a “psychological warfare mind**** tool” (p. 9) capable of disrupting the democratic process itself.

This brief essay asks the important question, “is there a shred of scientific truth in the claim that one can exploit the link between inner mental states (personality traits, intelligence etc.) and voting intention, in order to nudge an individual’s behaviour in the polling booth?”  The unequivocal response is “No,” because the Cambridge academics’ claim to have developed an algorithm capable of inferring personality from Facebook likes has no scientific basis.

The Apply Magic Sauce algorithm claims to predict (and quantify) personality traits.  This cannot be so for a very simple reason: such traits do not exist.  This we know from the extensive rule-following literature (see, for example, Crispin Wright’s 2001 book, “Rails to Infinity”[2]).  Dynamic intrinsic attributes are the preserve of Newtonian mechanics, and are not available to psychologists.  Personality is not an intrinsic attribute of a person; it is a joint property of the individual and the instrument used to measure it.

One of the towering figures of 20th century thought, Ludwig Wittgenstein (1958, p. 143) dismissed unequivocally the notion of inner mental traits posited by the Cambridge team: “There is a kind of disease of thinking which always looks for (and finds) what would be called a mental state from which all our acts spring as from a reservoir.”[3]  One cannot capture an individual’s personality in a number because personality isn’t a property of the individual.  Moreover, it is inconceivable that the Cambridge academics were not aware that their interpretation of personality as a quantifiable trait was entirely at odds with serious scholarship in their own discipline.

Ross and Nisbett’s classic textbook The Person and the Situation[4] is a staple of undergraduate psychology.  This book argues that personality is so entangled with the context in which it is expressed that it is meaningless to conceive of personality as some free-standing, quantifiable inner state.  One of psychology’s most respected thinkers, Jerome Kagan, has identified the tendency of some psychologists to picture psychological attributes as traits hidden in mind as a practice which undermines their profession.

On page xvii of Psychology’s Ghosts: The Crisis in the Profession and the Way Back[5], Kagan “concludes that ‘agents in a context’ should replace the current, restricted focus on stable properties of individuals that, like their eye colour, are presumably available for expression in all settings.”  Professor Kagan points out that measurement in psychology is not a matter of checking up on traits which already exist in mind.  Kagan (1998, p. 16)[6] writes: “Most investigators who study ‘anxiety’ or ‘fear’ use answers on a standard questionnaire or responses to an interviewer to decide which of their subjects are anxious or fearful.  A smaller number of scientists asks close friends or relatives of each subject to evaluate how anxious that person is.  A still smaller group measures the heart rate, blood pressure, galvanic skin response, or salivary cortisol level of their subjects.  Unfortunately, these three sources of information rarely agree.”

Kagan is making the point that in order to communicate unambiguously (the hallmark of science) one cannot omit the measuring instrument.  A psychological predicate has definite properties only relative to a specified measuring tool; one cannot attribute a definite value to such a predicate construed as a free-standing property of a person.  Kagan (1998, p. 77) cautions: “Modern physicists appreciate that light can behave as a wave or a particle depending on the method of measurement.  But some contemporary psychologists write as if that maxim did not apply to consciousness, intelligence or fear.”  This poses a fundamental problem for the Cambridge project because personalities predicted from Facebook likes involve no interaction with an appropriate measuring tool.  In these circumstances ascribing a definite personality makes no sense.

In his popular text The Tipping Point,[7] Malcolm Gladwell depicts the incoherence which results from divorcing psychological predicates from the contexts in which they are expressed and treating them as traits.  Indeed, psychology has a name for the error which afflicts the Cambridge algorithm: “The Fundamental Attribution Error.”  According to the Fundamental Attribution Error, behaviour is not determined by some theoretical mental trait within the individual; rather, behaviour can only be understood by examining the interaction between the individual and the situation in which the behaviour occurs.  Finally, even if one were to accept that psychological predicates can be represented as latent traits, such models break down at the level of the individual (see Borsboom, Mellenbergh & van Heerden, 2003, p. 217)[8], presenting intractable problems for any “personalisation machine” capable of “microtargeting.”

Turning to the claim that one can measure intelligence from Facebook likes, one finds the Cambridge algorithm at odds with one of the central tenets of quantum mechanics (see below).  The reader will be aware of the decades-old debate: do intelligence tests measure some inner, hidden mental state which psychologists call “intelligence”, or do they merely measure the ability of the testee to answer a series of questions on something psychologists call an “intelligence test”?

Psychology was aware – as far back as the 1930s – of the profound difficulties associated with interpreting intelligence as a trait: “He [Carl Brigham] recognized that a test score could not be reified as an entity inside a person’s head: ‘Most psychologists working in the test field have been guilty of a naming fallacy which easily enables them to slide mysteriously from the score in the test to the hypothetical faculty suggested by the name of the test. Thus, they speak of sensory discrimination, perception, memory, intelligence, and the like while reference is to a certain objective test situation’” (Gould, 1996, p. 262)[9].

The distinguished American physicist, David Mermin (1993, p. 1)[10] notes: “When you measure IQ are you learning something about an inherent quality of a person called “intelligence,” or are you merely acquiring information about how the person responds to something you have fancifully called an IQ test?  Until the advent of quantum theory in 1925 physicists were above such concerns.  But since then, with the discovery that experiments at the atomic level necessarily disturb the objects of investigation, precisely such reservations have been built into the foundations of physics.”  In modern physics, momentum is not an intrinsic property of an electron; rather, it is a property of the electron’s interaction with the measuring instrument.

The introduction to Werner Heisenberg’s 1989 book[11] contains a single sentence with profound implications for the Apply Magic Sauce algorithm: “[T]he reality is in the observations [interactions], not in the electron.”  The lesson for Apply Magic Sauce is clear: one can only speak meaningfully about an individual’s intelligence when that person interacts with an intelligence test.  Intelligence is not an inner trait and the Cambridge algorithm can no more predict intelligence than it can predict personality.

The Cambridge academics’ central claim for their algorithm is that it can predict an individual’s personality without requiring that person to take a personality test.  Alas, removing the measuring instrument actually deprives any references to personality or intelligence of their very meaning.  Those who feared that academics at Cambridge had actually developed an algorithm which could derive an individual’s intimate traits from his or her digital footprint, without requiring that person to take the relevant test, can take comfort from the words of the physicist Asher Peres: “Unperformed experiments have no results.”[12]  In summary, it seems inconceivable that an algorithm with the conceptual difficulties of Apply Magic Sauce could be deployed to nudge voting intention in any predetermined direction.



[1] Kosinski, M., Stillwell, D., & Graepel, T. (2013).  Private traits and attributes are predictable from digital records of human behaviour.  Proceedings of the National Academy of Sciences of the United States of America, 110 (15), 5802-5805.

[2] Wright, C. (2001).  Rails to infinity.  Cambridge, MA: Harvard University Press.

[3] Wittgenstein, L. (1958).  The blue and brown books.  Oxford: Blackwell.

[4] Ross, L, & Nisbett, R.E. (1991).  The person and the situation.  Philadelphia: Temple University Press.

[5] Kagan, J. (2012).  Psychology’s ghosts.  New Haven: Yale University Press.

[6] Kagan, J. (1998).  Three seductive ideas.  Cambridge, MA: Harvard University Press.

[7] Gladwell, M. (2000).  The tipping point.  Boston: Little, Brown & Company.

[8] Borsboom, D., Mellenbergh, G.J., & van Heerden, J. (2003).  The theoretical status of latent variables.  Psychological Review, 110 (2), 203-219.

[9] Gould, S.J. (1996).  The mismeasure of man.  London: Penguin Books.

[10] Mermin, D. (1993).  Lecture given at the British Association Annual Science Festival.  London: British Association for the Advancement of Science.

[11] Heisenberg, W. (1989).  Physics and philosophy.  London: Penguin Books.

[12] Peres, A. (1978).  Unperformed experiments have no results. American Journal of Physics, 46, 745-747.

Big Data, Big Lies, Big Payday



Why the “psychographic” techniques which underpin the Big Data ideas of Michal Kosinski and Alexander Nix have no scientific foundation.


Dr Hugh Morrison (The Queen’s University of Belfast [retired])


“There is a general disease of thinking which always looks for (and finds) what would be called a mental state from which all our acts spring as from a reservoir.”  Ludwig Wittgenstein

The data mining techniques of the software company Cambridge Analytica have come in for particular scrutiny in the aftermath of the 2016 American presidential election.  In recent days, two names in particular have become associated with using the Facebook “likes” of individuals to gain access to their personality: Michal Kosinski, a former Operations Director at the Cambridge University Psychometrics Centre, and Alexander Nix of the company Cambridge Analytica.  While there is clearly much that divides these two men, they appear to have at least one belief in common, namely, that “Big Data” techniques make it possible to access the personality profile of an individual from his or her Facebook likes.  I will make the case that this claim has no scientific merit.  “Psychographics,” as it has come to be known, has no basis whatever in science.


It is important to investigate the claims made by those who use data mining to infer personality.  Aside from the eye-watering sums of money involved, the general public should be aware of the profound limitations of this use of Big Data.  The June 16, 2017 issue of Newsweek captures the degree to which we should all be concerned: “Big Data, artificial intelligence and algorithms designed and manipulated by strategists like the folks at Cambridge [Analytica] have turned our world into a Panopticon, the 19th century circular prison designed so that guards, without moving, could observe every inmate every minute of every day.”


The central claim advanced by those who advocate the use of Big Data to uncover information about personality is that an individual’s Facebook likes can be used to infer something about that individual’s personality.  This claim has no scientific basis for a very simple reason: personality is not a property of the individual which can be represented numerically.  Personality is a joint property of the individual and the context in which it is manifest.  Personality isn’t a trait which the individual somehow carries from context to context.  Rather, personality varies with context: a child may be extrovert at home, but quiet and reserved in the classroom.  She may be extrovert in the company of the children who live next door, but introvert when interacting with strangers in unfamiliar settings.


The research documented in Malcolm Gladwell’s best-seller The Tipping Point leaves one in very little doubt that – pace Kosinski and Nix – personality cannot be a trait, ascribable to the individual, and amenable to quantification.  On page 186 of his book Surprise, Uncertainty, and Mental Structures, the distinguished Harvard psychologist Jerome Kagan writes: “Some men are loyal to their wives and affectionate with their children but disloyal and hostile in their relations with colleagues at work.”  On page 188 he cautions: “[C]onclusions about personality that are based only on questionnaires or interviews have a meaning that is as limited as Ptolemy’s conclusions about the cosmos based on the reports of observers staring at the sky without telescopes.”  Every undergraduate physicist learns (from the teaching of Niels Bohr) that the mind is not a carrier of definite states which determine behaviour, but a carrier of potentiality which cannot be represented by real numbers.


In short, in order to communicate unambiguously (the hallmark of science) one must describe the context in which a particular facet of personality is manifest.  Data, no matter how “big”, is powerless to capture the complex interactions etc. that comprise human situations; we must rely on language in any attempt to represent context.  On page 135 of Werner Heisenberg’s book Physics and Beyond, the following advice appears: “For if we want to say anything at all about nature – and what else does science try to do? – we must somehow pass from mathematical to everyday language.”  The lesson for psychologists who use questionnaires to measure personality is that it is meaningful to speak about someone’s personality only if one details the questionnaire; personality is a relational attribute rather than an attribute intrinsic to the person.


In an article published in Proceedings of the National Academy of Sciences, Van Bavel, Mende-Siedlecki, Brady and Reinero demonstrate that the centrality of context (or environment) is something well-known to all undergraduate psychologists.  “Indeed, the insight that behaviour is a function of both the person and the environment – elegantly captured by Lewin’s equation: B = f(P, E) –  has shaped the direction of social psychological research for more than half a century.  During that time, psychologists and other social scientists have paid considerable attention to the influence of context on the individual and have found extensive evidence that contextual factors alter human behaviour.”  If personality is a joint property of the person and the context in which it is manifest, then unambiguous communication demands that a description of the context must be integral to any attempt to represent personality.


Finally, one only appreciates the incoherence of psychographics when one notes the intellectual standing of those whose writings oppose the thinking of Kosinski and Nix.  Three thinkers who stand out among those who argue that psychological attributes cannot be separated from the context in which they are manifest are the Nobel laureate Herbert A. Simon and two of the 20th century’s greatest intellectuals: the father of quantum theory, Niels Bohr, and the philosopher Ludwig Wittgenstein.


All three reject the notion that one can ignore context and treat behaviour as wholly analysable in terms of traits and inner processes (and therefore quantifiable).  Indeed, psychology itself has a name for the error at the heart of psychographics, a name familiar to all undergraduate psychologists.  Gerd Gigerenzer of the Max Planck Institute writes: “The tendency to explain behaviour internally without analysing the environment is known as the ‘fundamental attribution error.’”


First, Herbert Simon uses a scissors metaphor to indicate the degree to which a psychological attribute cannot be disentangled from the context in which it is manifest.  Simon writes: “Human rational behaviour is shaped by a scissors whose blades are the structure of the environment and the computational capabilities of the actor.”


Secondly, Niels Bohr – in his Discussion with Einstein on Epistemological Problems in Atomic Physics – uses quantum complementarity to argue that first-person ascriptions [the contribution of the individual] and third-person ascriptions [the contribution of the environment] of psychological attributes form an “indivisible whole.”


Finally, on page 143 of his Blue and Brown Books, Wittgenstein highlights the error at the heart of the psychographics project: “There is a general disease of thinking which always looks for (and finds) what would be called a mental state from which all our acts spring as from a reservoir.”

“Research intensity” – fake news in higher education




One of the greatest advances in modern physics – the detection of gravitational waves first postulated, a century ago, by Einstein in his general theory of relativity – was made by physicists at the Laser Interferometer Gravitational-Wave Observatory (LIGO).  To paraphrase Richard Feynman, LIGO’s measurement precision can be expressed as follows: if you were to measure the distance between Earth and the nearest star with this precision, your measurement would be exact to the thickness of a human hair.  Such incredible accuracy would more than justify LIGO’s £150 million construction costs as a feat of engineering alone.
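The paraphrase can be sanity-checked with rough, assumed round figures (a strain sensitivity of about one part in 10²¹, Proxima Centauri at roughly 4 × 10¹⁶ metres, a human hair at about 10⁻⁴ metres) rather than LIGO’s published specifications:

```python
# Back-of-envelope check of the Feynman-style paraphrase.
# All three figures below are assumed approximations, not LIGO specifications.
strain = 1e-21          # LIGO detects length changes of ~1 part in 10^21
nearest_star = 4.0e16   # distance to Proxima Centauri, in metres (approx.)
hair = 1e-4             # thickness of a human hair, in metres (approx.)

implied_error = strain * nearest_star   # error implied over that distance
print(implied_error)                    # 4e-05 metres: under a hair's breadth
```

On these figures the implied error over the Earth–Proxima distance is a few hundredths of a millimetre, so the hair's-breadth comparison holds up.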



It is instructive to contrast the measurement properties of the UK’s Research Excellence Framework (REF), which aims to rank-order the research quality of UK universities, with those of LIGO.  While the REF league table has no discernible measurement properties whatsoever, its cost far exceeds LIGO’s construction costs, coming in at a staggering one quarter of a billion pounds.


A recent issue of the Times Higher Education (1 – 7 February 2018) included a booklet published by the Queen’s University of Belfast which illustrates the extremes to which universities are prepared to go in using highly questionable data derived from REF ranks for the purposes of self-promotion.  Page 5 of the Queen’s booklet consists of a single statement.  At the centre of a black A4 page the words “Ranked in the top 10 in the UK for research intensity” (in white print and large font) sit in isolation.  (In a much smaller font the university attributes this ranking to the Times Higher Education.)  Pages 6 to 9 offer pen portraits of nine “world-leading academics” employed by Queen’s University.  The university has used this “research intensity” claim to market the university ever since the publication of the 2014 REF.  One cannot browse the university’s website without encountering the claim at every turn.  It has appeared on university billboards, in promotional materials and was central to the university’s ubiquitous claim: “we are exceptional.”  Why did no one notice that research intensity is a meaningless concept?



In the Research Excellence Framework, the research quality of journal articles, books etc. is assessed and reported on a four-point scale (five-point if one includes the ‘unclassified’ category).  The scale is ordinal in the sense that a 3* article is deemed superior to one rated 1* or 2*, and inferior to an article rated 4*.  Any appeal to arithmetic is impermissible because an article rated 4* is not four times the quality of a 1* article; a 2* article is not two-thirds the quality of a 3* article, and so on.  The rules of arithmetic do not apply.  (Needless to say, the challenge of assessing research quality would remain unchanged if the numbers were abandoned for the grades A, B, C and D.)  These whole-number ratings are then used by the Times Higher Education to compute a university’s all-important “research intensity” (reported to two-decimal-place accuracy) using simple arithmetic.  But, as every sixth-form statistician knows, arithmetical operations are not meaningful when applied to an ordinal scale.
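The point can be illustrated with a minimal sketch (the output profiles below are hypothetical): because the star labels are ordinal, any order-preserving assignment of numbers to them is equally legitimate, yet different assignments can reverse an average-based league table.

```python
# Hypothetical illustration: averaging ordinal grades is not meaningful,
# because an order-preserving change of scoring can reverse the ranking.

def mean_score(scores, ratings):
    """Average the numbers assigned to a list of ordinal star ratings."""
    return sum(scores[r] for r in ratings) / len(ratings)

uni_a = ["4*", "1*", "1*", "1*"]   # one outstanding output, three weak ones
uni_b = ["3*", "3*", "3*", "3*"]   # uniformly good outputs

linear = {"1*": 1, "2*": 2, "3*": 3, "4*": 4}    # the usual coding
convex = {"1*": 1, "2*": 2, "3*": 4, "4*": 16}   # equally order-preserving

# Under the usual coding B outranks A; under the convex coding the
# league-table order reverses, though the underlying grades are unchanged.
print(mean_score(linear, uni_a), mean_score(linear, uni_b))  # 1.75 3.0
print(mean_score(convex, uni_a), mean_score(convex, uni_b))  # 4.75 4.0
```

Since nothing in an ordinal scale privileges one coding over the other, any ranking built on such averages is an artefact of an arbitrary choice.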



How can it be that no world-class scientist at Queen’s has pointed out to those charged with marketing the university that “research intensity” is a highly questionable measure?  Are we to believe that no UK scientist has written to the Times Higher Education pointing out the magazine’s error?  One gets the clear impression from those charged with marketing Queen’s University that the Belfast campus is crammed to the rafters with “world-leading” scientists.  How could any scientist worthy of the label endorse the notion of “research intensity” when his or her sixth-form mathematics training would identify the notion as nonsensical?  In particular, why have none of the university’s world-leading academics, listed on pages 6 through 9 of the Queen’s promotional booklet, questioned the nonsensical claim printed on page 5?


The Queen’s University of Belfast and the Times Higher Education must clarify their positions on this matter.

Plagiarism: One law for pupils and another for teachers




Stephen Elliott – Chair: Parental Alliance for Choice in Education


In the closing days of January 2018 it was revealed that Dermot Mullan, headteacher at Our Lady and St Patrick’s College in Belfast, had been accused of plagiarising the work of another teacher.  Mr Mullan immediately confessed to the offence and that, it would seem, is to be the end of the matter.  His Board of Governors made no comment, the Catholic Church made no comment, and – most concerning of all – Northern Ireland’s General Teaching Council remained silent.  This silence is puzzling given that Mr Mullan heads a school which makes much of its lofty Catholic principles.  How does a plagiarist urge honesty and integrity on pupils in general (and pupils taking GCSE and GCE examinations, in particular)?  How does Mr Mullan discipline a pupil suspected of copying the coursework of another pupil?  Surely the parents of the culprit will detect a double standard here: one rule for the children and another for their principal.



Dr Neill Morton pictured with Professor Tony Gallagher at QUB graduation.

The existence of a disturbing double standard is nowhere better illustrated than in the intervention of Neill Morton, the self-styled “emeritus” headmaster of Portora Royal School.  Despite being Chair of Northern Ireland’s examinations body, the Council for the Curriculum, Examinations and Assessment (CCEA), Dr Morton appeared on BBC Northern Ireland’s television programme Newsline on Monday 29th January 2018 to assure the public that the whole issue of Mr Mullan’s plagiarism was overblown.  This clearly demonstrates one law for pupils taking examinations and another for their teachers: if Dr Morton’s view of Mr Mullan’s indiscretion were applied to pupils, then the entire concept of public examinations would collapse.  In short, Dr Morton’s comments on Mr Mullan’s plagiarism should immediately disqualify him from any public office concerned with public examinations.

Dr Morton’s failure to condemn Mr Mullan’s activities outright is even more surprising given that he has recently completed a Doctorate in Education at The Queen’s University of Belfast.  A glance at that university’s website or a random walk through its McClay library will quickly reveal the seriousness with which it views plagiarism.

When pupils are charged with plagiarism the consequences can be drastic: their grades can be deleted, they may be expelled, and the pupil whose work was plagiarised may fall under suspicion.  One doesn’t seem to encounter the same clarity of decision-making when it comes to settling the fate of a highly-salaried headteacher like Mr Mullan.  One encounters the same imbalance in respect of university students and their teachers: one can spend many hours searching for a well-defined Queen’s policy on staff accused of appropriating the work of other academics.


The claims advanced here deserve a response.  It is completely unacceptable that Dr Morton’s judgement of Mr Mullan’s plagiarism is entirely at odds with the treatment of examination candidates guilty of the same offence.  How must the parents of children judged to have plagiarised in an assessment have reacted to CCEA’s Education Chair making little of a headteacher facing the same charge?  Why have the Governors of Mr Mullan’s school not made a statement?  Why is the Catholic Church silent on what is a failure in morality in a person charged with leading by example?  Finally, why are teachers, pupils and parents yet to hear a word from Northern Ireland’s General Teaching Council?