Plagiarism: One law for pupils and another for teachers


Stephen Elliott – Chair: Parental Alliance for Choice in Education

 

In the closing days of January 2018 it was revealed that Dermot Mullan, headteacher at Our Lady and St Patrick’s College in Belfast, had been accused of plagiarising the work of another teacher. Mr Mullan immediately confessed to the offence and that, it would seem, is to be the end of the matter. His Board of Governors made no comment, the Catholic Church made no comment, and – most concerning of all – Northern Ireland’s General Teaching Council remained silent. This silence is puzzling given that Mr Mullan heads a school which makes much of its lofty Catholic principles. How does a plagiarist urge honesty and integrity on pupils in general (and pupils taking GCSE and GCE examinations in particular)? How does Mr Mullan discipline a pupil suspected of copying the coursework of another pupil? Surely the parents of the culprit will detect a double standard here: there seems to be one rule for the children and another for their principal.

 

 

Dr Neill Morton pictured with Professor Tony Gallagher at QUB graduation.

The existence of a disturbing double standard is nowhere better illustrated than in the intervention of Neill Morton, the self-styled “emeritus” headmaster of Portora Royal School. Despite being Education Chair of Northern Ireland’s Council for the Curriculum, Examinations and Assessment (CCEA), Dr Morton appeared on BBC Northern Ireland’s Newsline programme on Monday 29th January 2018 to assure the public that the whole issue of Mr Mullan’s plagiarism was overblown. This clearly demonstrates one law for pupils taking examinations and another for their teachers: if Dr Morton’s view of Mr Mullan’s indiscretion were applied to pupils, the entire concept of public examinations would collapse. In short, Dr Morton’s comments on Mr Mullan’s plagiarism should immediately disqualify him from any public office concerned with public examinations.

Dr Morton’s failure to condemn Mr Mullan’s activities outright is even more surprising given that he has recently completed a Doctorate in Education at The Queen’s University of Belfast.  A glance at that university’s website or a random walk through its McClay library will quickly reveal the seriousness with which it views plagiarism.

When pupils are charged with plagiarism, the consequences can be drastic: their grades can be deleted, they may be expelled, and the pupil whose work was plagiarised may fall under suspicion. One does not encounter the same clarity of decision-making when it comes to settling the fate of a highly salaried headteacher like Mr Mullan. One encounters the same imbalance in respect of university students and their teachers: one can spend many hours searching for a well-defined Queen’s policy on staff accused of appropriating the work of other academics.

The claims advanced here deserve a response.  It is completely unacceptable that Dr Morton’s judgement of Mr Mullan’s plagiarism is entirely at odds with the treatment of examination candidates guilty of the same offence.  How must the parents of children judged to have plagiarised in an assessment have reacted to CCEA’s Education Chair making little of a headteacher facing the same charge?  Why have the Governors of Mr Mullan’s school not made a statement?  Why is the Catholic Church silent on what is a failure in morality in a person charged with leading by example?  Finally, why are teachers, pupils and parents yet to hear a word from Northern Ireland’s General Teaching Council?


The AQE CEA and GL Assessment Test Results: Advice to parents: 2018


Undoubtedly, thanks mainly to media pressure, the results of the 2017/18 transfer tests will be the subject of conversations in families all over Northern Ireland this weekend and for months beyond. The Parental Alliance for Choice in Education wishes to congratulate all pupils who took the tests and hopes that every pupil is offered a place in the school of their choice. Unfortunately, as with any competition for a limited number of places, not everyone will be able to avoid disappointment.

Breen Wilson transfer test 2004

Politicians, teacher unions and school principals are determined to end testing for transfer at 11

Perhaps parents and guardians should offer an expression of thanks for the provision of these “unofficial” or “unregulated” tests. Without the dedication, commitment, psychometric expertise and adherence to available international standards shown by the test providers, all pupils would be attending comprehensive schools. That was the expressed aim and intention of successive Education Ministers and remains the aim of the Department of Education – a point which is particularly relevant given the collapse of the Executive and Assembly. Indeed, it is remarkable how little support for academic selection to grammar schools can be found in the media. This stands in stark contrast to the commercial greed of newspapers promoting the publication of school league tables, transfer test practice booklets (while AQE provide all past papers at no cost) and, just this week, a transfer test guide suggesting, albeit inaccurately, the scores that will get your child a place in a particular school.

As has been widely reported, the number of applications for the AQE and GL Assessment tests has continued to grow. In the academic year 2016/17, 14,491 test entries were received. This resulted in 11,570 applications to grammar schools for the 8,743 places available, so roughly three out of four applications (8,743 ÷ 11,570 ≈ 76%) succeeded. This year will be similar.
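For readers who want to check the arithmetic for themselves, a minimal sketch (in Python) is given below; the 2016/17 figures are simply those quoted in the paragraph above.

```python
# Sketch of the "roughly three out of four" calculation, using the 2016/17
# figures quoted in the paragraph above.
test_entries = 14_491   # AQE / GL Assessment test entries, 2016/17
applications = 11_570   # subsequent applications to grammar schools
places = 8_743          # grammar school places available

success_rate = places / applications
print(f"{success_rate:.1%}")  # ≈ 75.6%, i.e. roughly 3 in 4 applications succeeded
```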

BT PPTC single test

Victoria College, Belfast has operated the questionable and non-transparent practice of dualling (accepting applications from pupils who have taken both tests, or either), yet in this Belfast Telegraph article the principal, Patricia Slevin, proposes a single test. The dualling practice will undoubtedly have created misclassifications, with many pupils denied a place in the school through the error of assuming that two different tests, measuring different constructs, can simply be merged into one.

[Image: “Single Test Impossible” scanned document, January 2017]

 

 

SingleTest Robinson 2012

 

The Parental Alliance has sought engagement with both AQE and GL Assessment, the test providers. GL Assessment refuse to engage, citing their commercial contract with their customer, the PPTC. Since much is made of the fact that GL Assessment tests are free to pupils, who pays GL Assessment’s charge for providing the multiple-choice, computer-scanned and computer-marked test? This raises the question of why this high-stakes transfer test remains shrouded in secrecy. Recognised international standards suggest that pupils and their parents should be provided with exemplars of the questions likely to appear on tests. Neither the PPTC nor GL Assessment meets that standard; indeed, no past paper from GL Assessment has ever been published. The media have conspicuously failed to seek answers on this issue. Every year BBCNI broadcasts a package on results day, and invariably it comes from a school using the AQE test. No questions have been raised by the media about the dualling tables and their origin.

PPTC Practice Papers

There is no shortage of commercial practice papers available to purchase; note the term “PPTC-style”. All AQE past papers are made available to primary schools at no charge.

Another major distinction between the two tests is that pupils taking the GL Assessment test have only one attempt at the examination. The time required for familiarisation, practice and the actual assessments in English and Maths exceeds that for many GCSE examinations.

Pupils taking AQE have three opportunities, allowing for a possible “off day” due to testing anxiety.

Details of GLA tests

 

Concerns have been raised this year about the use of content from the work of Charles Dickens in the PPTC GL Assessment English paper. Most pupils would have difficulty distinguishing between the author of fourteen and a half novels and the contemporary magician pictured below. Charles Dickens was famously said to have been paid by the word published. The version of David Copperfield featured has 745 pages of text. The two passages are contrasted below for parents to discuss.

 

[Images: front and back covers of the David Copperfield edition, and the magician David Copperfield]

 

[Image: page 228 of David Copperfield]

 

 

Contrast the above passage (randomly selected from the 745 pages of the book pictured above) with the prose passage taken from an AQE 2017 paper. AQE tests are always unique and never repeated.

AQE 2017 Prose

 

The quest for, and motivation behind, a single transfer test must be critically examined by parents. In whose interests has the project been adopted? When the CCEA Transfer Test was ended by Caitriona Ruane, without a replacement examination in place, the prospect of compulsory comprehensive post-primary schools loomed. A single (one-provider) test was offered by AQE. This was quickly rejected by those mainly representing Catholic grammars. To be clear, the single test project is a manufactured crisis, firmly in the hands of politicians, civil servants and school principals. A former DUP Education Minister kept the project alive by inviting Peter Tymms of Durham University to report on the matter. Tymms has a history with Northern Ireland primary school pupils via the now abandoned InCAS assessments used in primary schools (see the blog’s search facility for earlier articles).

The report from Peter Tymms was published by the Northern Ireland Executive Office close to the last day of the collapsed Assembly.

Concerns raised with the AQE joint CEO, Stephen Connolly, about entering any process proposed by the Department of Education to work on a single test were met with a promise to express further reservations. It is understood that Stephen Connolly subsequently continued to meet with DENI officials.


Another difficulty for parents is the fact that many grammar schools are not using academic selection for all of their pupils. Read the admission criteria carefully before applying to schools: it may be the case that your child is denied a place in favour of a pupil who did not take the tests. In the graphic below it is clear that Royal School Dungannon, Royal School Armagh and Sullivan Upper do not select 100% of their pupils by academic testing. Strathearn School uses bands rather than a rank order of marks, so it is impossible to reassure a child receiving results today that their score will secure them a place.

 

[Image: Belfast Telegraph AQE/GL Assessment 2018 admissions table]

 

The problem is even more acute when the dualling schools are examined. The obvious issue of integrated schools pretending to be grammars is matched only by those Catholic grammar schools which no longer use academic selection.

[Image: Belfast Telegraph dualling schools table, 2018]

Lagan and Slemish are not grammar schools. They were permitted by the anti-selection DENI to use 11-plus tests to select 35% of their pupils.

 

Wallace High School in Lisburn, another grammar school which uses bands to report test scores, selects only 87% of its year 8 intake by academic selection. The minimum score reported is 101. Wallace High admits 170 pupils to year 8, so roughly 22 pupils (13% of 170) gain places without the use of academic selection.
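A minimal sketch (in Python) of the arithmetic behind these Wallace High figures is shown below; the 87% and the intake of 170 are the numbers quoted above.

```python
# Sketch of the Wallace High arithmetic quoted above.
intake = 170            # year 8 places
selected_share = 0.87   # proportion admitted by academic selection

non_selected = round(intake * (1 - selected_share))
print(non_selected)     # ≈ 22 pupils admitted without academic selection
```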

During October 2017 Wallace High School attracted attention for restricting the number of primary 7 pupils allowed to sit AQE tests at the school. It became clear that this was not a matter of physical capacity but of the willingness of teachers to make themselves available on Saturday mornings.

https://www.lisburntoday.co.uk/news/schools-are-criticised-over-their-handling-of-aqe-test-situation-1-8249218

Lisburn Star

 


Proof that there is a profound error at the heart of Carol Dweck’s Mindsets Project


Dr Hugh Morrison (The Queen’s University of Belfast [retired])

drhmorrison@gmail.com

The Brookings Institution recently published a study – entitled New Evidence that Students’ Beliefs about their Brains Drive Learning, by Susana Claro & Susanna Loeb – which drew heavily on the ideas of Stanford University Professor Carol Dweck. The article appears to take Dweck’s research on so-called “Mindsets” at face value.

This essay offers a proof that Dweck’s Mindset concept is simply wrong.

https://www.brookings.edu/research/new-evidence-that-students-beliefs-about-their-brains-drive-learning/

The central psychological concept in Dweck’s work is “belief”; according to Dweck, Mindsets are nothing more than beliefs. Dweck never addresses, in any detail, the nature of belief. She simply accepts the common-sense view that beliefs are entities carried in the minds/brains of individuals. Despite a complete absence of evidence in support of her claim that beliefs/mindsets are carried in the heads of individuals, Dweck has become something of a celebrity in psychological and educational circles; she has given TED talks, and millions of dollars have been spent implementing her ideas in classrooms across the world.

It isn’t difficult to see why Dweck makes no attempt to analyse the psychological predicate around which all her research revolves.  If Dweck were to consult a standard text such as Peter Hacker’s The Intellectual Powers: A Study of Human Nature (published in 2013 by Wiley) she would find sixty pages of carefully-considered analysis directly addressing the notion of belief.  Alas, nothing contained in these pages supports her claim that beliefs are mental states or processes in the mind/brain of the individual.

Needless to say, such a claim is bound to have enormous popular support; when all is said and done, where else would one expect to find beliefs but in the head of the believer?  Alas, however, this appealing notion turns out to be wrong; beliefs are not objects in the mind/brain.  For example, on page 227 of Hacker’s book she would find the conclusion: “believing is not a state of mind,” and the title of section 8 on page 230 is “Why believing something cannot be a state of the brain.”  If beliefs cannot be construed as the property of the mind or brain of the individual who “has” them, then Dweck’s reasoning is vitiated.

In respect of the study published by Brookings, the findings are invalidated if one cannot ascribe a definite mindset to everyone who took part. The purpose of the remainder of this essay is to offer a proof that Dweck is wrong about beliefs; mindsets, as Dweck construes them, do not exist. The author hopes that the Brookings Institution will invite Professor Dweck to identify any shortcomings in the proof set out in the paragraphs below.

The proof centres on what Ludwig Wittgenstein referred to as “Moore’s Paradox.”  The philosopher G.E. Moore identified statements such as “It is raining, and I don’t believe it is raining” or “I believe it is raining and it isn’t raining,” as patently absurd.  The first part of these two conjunctions clearly contradicts the second part.  Indeed, all statements of the form “I believe p, and p is not true” are absurd according to Moore.

However, if one accepts Dweck’s claim that beliefs are carried in the mind/brain of the individual, the conjunction is perfectly intelligible because no contradiction exists between its two parts. Why? Because, on Dweck’s reasoning, they refer to entirely different things: “it is raining” refers to the weather (something in the “outer” world), while “I don’t believe it is raining” refers to a mental state (something “inside” the head of the believer).

Carol Dweck’s only escape from this paradox is to accept that a belief is not an inner state or process in the mind/brain.  The following brief extract from Wittgenstein’s 1944 letter to Moore captures the depth of the conceptual flaw at the core of the Mindset thesis:

I should like to tell you how glad I am that you read us a paper yesterday.  It seems to me that the most important point was the “absurdity” of the assertion “There is a fire in this room and I don’t believe there is.” … If I ask someone “Is there a fire in the next room?” and he answers, “I believe there is” I can’t say: “Don’t be irrelevant.  I asked you about the fire, not about your state of mind!”


Why Randomised Controlled Trials in education are a waste of money.


 

Ludwig Wittgenstein

 

The National Foundation for Educational Research (NFER), via its Education Trials Unit, has issued a challenge to Professor Stephen Gorard (Durham University) regarding his book on Randomised Controlled Trials in education. Unfortunately, both parties have the matter completely wrong, as the writings of Ludwig Wittgenstein make clear.

Wittgenstein Brown Book

The mathematics behind randomised trials only works if the attribute being studied is first-person/third-person symmetric, like the properties of the plants in Fisher’s Cambridge garden. However, attributes of interest in education are, with few exceptions, first-person/third-person asymmetric. So-called evidence-based education research is nonsense, pure and simple, and those who fund it are wasting public money. In his “Brown Book” Wittgenstein described the error of confusing symmetry with asymmetry in the ascription of attributes (via criteria) as “a disease of thought.”
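To make the target of this criticism concrete, the sketch below (Python, with invented data) illustrates the standard logic of an education RCT – random assignment followed by a comparison of group means – which is the step that, on the argument above, presupposes first-person/third-person symmetry in the attribute being measured. It is an illustration only, not a report of any real trial.

```python
# Illustrative sketch of RCT logic: randomise pupils to two groups, then
# estimate the intervention's effect as the difference in group means.
# All data here are invented for illustration.
import random
import statistics

random.seed(1)
baseline_scores = [random.gauss(50, 10) for _ in range(200)]  # scores without the intervention

random.shuffle(baseline_scores)                  # random assignment
treatment, control = baseline_scores[:100], baseline_scores[100:]

assumed_effect = 3.0                             # assumed boost from the intervention (illustrative)
treatment_scores = [s + assumed_effect for s in treatment]

effect_estimate = statistics.mean(treatment_scores) - statistics.mean(control)
print(round(effect_estimate, 2))                 # recovers roughly the assumed effect
```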

 

To understand the full extent of NFER’s misunderstanding of the limitations of RCTs in education, watching Ben Styles’ performance in the YouTube video referenced below may be illustrative.

 

Academic selection is not a problem for poor children, but the curriculum is, and policy-makers are responsible.


It’s the Curriculum, Stupid: Why the Executive Office ILiAD Report was a Waste of Public Money

Darwin 3

“If the misery of our poor be caused not by the laws of nature, but by our institutions, great is our sin.” – Charles Darwin

Introduction

This essay concerns the Investigating Links in Achievement and Deprivation (ILiAD) report, volume three of which was placed in the public domain on September 5th 2017 by the Northern Ireland Executive Office (formerly OFMDFM). The research was commissioned from Queen’s University Belfast in 2012, with Professor Ruth Leitch as principal investigator. In the next section, I will argue that the focus on the role academic selection plays in explaining the attainment gap between rich and poor is unjustified. I will identify the real culprit, namely the curriculum, and demonstrate that my analysis is confirmed by the international literature. Furthermore, given the curricula vitae of the Queen’s University academics, why was £0.3 million awarded to individuals without detailed knowledge of the literature on curriculum, and with little or no direct experience of the practicalities of the classroom?

 

Iliad vol 3

Where are the missing Volumes 1 and 2, including the literature review?

 

In the final section, I demonstrate that my claim that the culprit is the curriculum (a Revised curriculum foisted on every child in Northern Ireland many years ago), rather than academic selection, is confirmed by the international literature.  Given the sheer scale and quality of this international research – a highly visible century-long literature review and the most sophisticated and extensive experiment in the history of education – how could the Queen’s academics, even with their limited expertise, not have known about the consequences of skills-based curricula like the Northern Ireland Revised curriculum for the life chances of the poor?  Why were those who funded the ILiAD project not aware that the solution, for which they paid £0.3 million from the public purse, already existed?

When is a School of Education not a School of Education?

The BBC’s presentation (on both radio and television) of volume three of the ILiAD Report gave the clear impression that academic selection was a culprit, if not the culprit, in the attainment gap which separates rich and poor schoolchildren in Northern Ireland.  The ILiAD report (a product of The Queen’s University of Belfast’s “School of Education”) has always been surrounded by controversy both in respect of its eye-watering cost to the public purse, and the fact that it has been embargoed by the Executive Office for so long.

Given this background, it is intriguing that no one at the BBC thought to question the motives of the civil servant(s) who triggered the publication of ILiAD now. Moreover, William Crawley, host of the BBC Radio Ulster Talkback programme, afforded Tony Gallagher, professor at the School of Education, the opportunity to interpret the ILiAD findings pretty much unopposed. William Crawley failed to raise with Professor Gallagher how the ILiAD report could be construed as a School of Education report when none of the Queen’s authors has a background in education, let alone experience in the classroom.

The authors comprised two psychologists, one anthropologist, one “built environment” academic, one sociologist, and a researcher working in conflict resolution/transformation.  It is interesting to note, in passing, that of the many professors of education employed by the School of Education, none have experience of the post-primary classroom.  William Crawley’s claim on a previous Talkback programme that Tony Gallagher had expertise in three of the fundamental facets of education is bound to puzzle those who are aware that the professor has never even been employed by a school.

Economist Scotland

But here is where the BBC, Professor Gallagher and the authors of the ILiAD report are most at fault. The conclusions of the ILiAD report, and the central message of the BBC’s coverage, are undermined by this simplest of counterexamples: Scotland has a similar attainment gap to Northern Ireland (see The Economist, 27.08.2016), but that gap cannot be accounted for by academic selection because Scotland doesn’t have grammar schools. Moreover, this simple counterexample points the way to the real source of the attainment gap – something Scotland and Northern Ireland have in common – a progressivist curriculum. The Economist study attributes Scotland’s difficulties not to selection but to its progressivist curriculum.

We have now arrived at an explanation of the attainment gap which accords with all the extant high quality international research but is entirely at odds with the reasoning set out in ILiAD: Northern Ireland and Scotland share a common problem in that policy-makers (supported by “educationalists”) in both countries have adopted models of curriculum known to be damaging to the life chances of the poor.  The true culprits are not the grammar schools but former CCEA leaders Gavin Boyd and Carmel Gallagher who developed a progressivist “Revised” curriculum which required schools to adopt incoherent notions such as “learning to learn,” Assessment for Learning and so-called thinking skills.

 

Gallagher Trojan Horse

 

It’s the curriculum, stupid

OFMDFM (now The Executive Office) made a profound mistake when it supported the ILiAD approach to the problem of the achievement gap between rich and poor.  OFMDFM funded ILiAD to investigate a question that had already been answered in what has been described as the largest educational experiment ever conducted: Project Follow Through.

It is important for the reader to appreciate how Project Follow Through dwarfs the ILiAD project both in scope and ambition.  The generalizability of Project Follow Through’s findings is far beyond anything ILiAD could offer. Project Follow Through focused on the classroom and sought to identify the teaching method that would raise the academic standards of the poor to middle class levels.  The most successful method?  Traditional teaching (so-called “direct instruction” (DI)) stood head and shoulders above all other teaching techniques.

Ian Ayres summarises Project Follow Through – which carried a price tag of a fraction of a billion dollars, a lot of money in the 1960s and 70s – as follows: “Concerned that ‘poor children tend to do poorly in school,’ the USA’s Office of Education and the Office of Economic Opportunity sought to determine what types of education models could best break this cycle of failure.  The result was Project Follow Through, an ambitious effort that studied 79,000 children in 180 low-income communities for 20 years.”

Traditional teaching methods (DI) outperformed all their rivals in getting disadvantaged children to perform at middle-class standards. Richard Nadler writes: “When the testing was over, students in DI classrooms had been placed first in reading, first in math, first in spelling, and first in language.  No other model came close.” Independent evaluations were subsequently carried out by the American Federation of Teachers and by the American Institutes for Research, with the same conclusions. The message for Northern Ireland is simple: if one wants to address the ill-effects of poverty, scrap the Revised Curriculum.

Curriculum models categorised as “learning-to-learn” performed very poorly in Project Follow Through. Given that Northern Ireland’s Revised Curriculum has “learning-to-learn” principles at its core, the explanation for the underachievement of disadvantaged children (Catholic, Protestant, whatever) isn’t hard to find. The missing link between poverty and underachievement could have been identified without giving a penny of public money to “educationalists” at Queen’s. Individuals like Gavin Boyd – who now commands one of the highest salaries in the public sector as head of the new Education Authority – have bequeathed to children who live in poverty a dysfunctional and damaging curriculum.

Jeanne Chall Academic Achievement

In her 2000 book The Academic Achievement Challenge, the distinguished Harvard academic Jeanne Chall conducted a detailed study of a century of research on the effective teaching of disadvantaged children, finding no evidence for the efficacy of methods which depart from traditional teacher-centred teaching. On page 171 Chall writes: “Whenever the students were identified as coming from families of low socioeconomic status, they achieved at higher levels when they received a more formal, traditional education.  Overall, while the traditional, teacher-centred approach produced higher achievement than the progressive, student-centred approach among all students, its effects were even stronger for those students who were less well prepared.  The teacher-centred approach was also more effective for students with learning disabilities at all social levels.  Overall, the research showed that at-risk students at all social levels achieved better academically when given a more traditional education.”

On page 182, Chall draws this conclusion from a century of published evidence: “The major conclusion of my study in this book is that a traditional, teacher-centred approach to education generally results in higher academic achievement than a progressive student-centred approach.  This is particularly so among students who are less well prepared for academic learning – poor children and those with learning difficulties at all social and economic levels.”


Why Dweck’s “Mindsets” concept should have no place in schools: large-scale evidence from UK public examinations


Psychologist Carol Dweck invoked her research on children’s beliefs to explain the attainment gap between boys and girls.  A number of small scale studies have been published which divide pupils into two broad categories according to their beliefs: (i) “incremental theorists” who believe their mathematical ability can be developed with effort, and (ii) “entity theorists” who believe that mathematical ability is fixed.  It is then argued that boys, in general, tend to be incremental theorists, while girls in general – and “bright” girls in particular – tend to be entity theorists.  Dweck’s writings leave the reader in no doubt as to which belief system is superior; incremental theorists will always have the edge on their peers who subscribe to an entity theory of ability.  When mathematics becomes challenging, girls will always lag behind boys.  However, it is revealed in what follows that large-scale public examination data can be used to stand Dweck’s thesis on its head.

The following passages are from Dweck’s Self-theories: their role in motivation, personality and development, published by Psychology Press.  These passages illustrate how she traces underachievement among “bright” girls later in their school career (when mathematics questions become really challenging) to their having developed harmful entity-focused beliefs during their grade school mathematics education (when the subject lacks real challenge).

“In our research we have found that the students with the most striking history of success are often the most, rather than the least, vulnerable.  In grade school they are far and away the highest achievers. … These bright girls may look very confident and well put together as they go about their academic work, and teachers probably do not think of them as vulnerable.  This is because these girls can easily master what is asked of them in grade school.  Yet as we will see, they are a group that does not want challenge. … And when they are presented with challenge or obstacles, they are a group that readily blames their ability and falls into a helpless pattern” (p. 53).

“We described earlier how bright girls are the highest achieving group in grade school: They earn the highest grades, they exceed bright boys in reading achievement, and they equal bright boys in math achievement.  Moreover, teachers agree that these girls are the stars.  Nevertheless, when we and others look at bright girls in our studies, we find that they are the group with the greatest vulnerability to helplessness.  They are more likely than boys to hold an entity theory of their intelligence. … Moreover, when school begins to get more challenging, as it does in junior high school, these girls traditionally have begun to fall behind their male counterparts, especially in math and science achievement … These motivational patterns, then, can help us understand why girls, who are the stars in grade school, have not traditionally been the stars in the later world of achievement” (pp. 123-124).

“[W]hen these girls hit junior high or high school, these same desires can cause problems.  The work (especially the math) suddenly gets harder, and immediate mastery is often not possible. … Quite the reverse may happen for boys” (p. 147).

Dweck cites a range of small-scale studies to bolster her thesis that pupils’ beliefs can play a central role in explaining the “paradox of bright girls” (p. 123). Alas, Dweck’s reasoning is contradicted by the public examination system in Northern Ireland. For many years bright girls have outperformed boys at almost every level in mathematics. The mathematics involved is highly challenging, incorporating differential and integral calculus, the solution of polynomial equations, complex variables, the application of advanced trigonometry, trajectory problems, motion with variable acceleration, and so on. The examinations are conducted under carefully supervised conditions over a period of up to six hours. Examination boards are charged with ensuring that pupils and their teachers do not have sight of the examination paper prior to their entry to the examination hall. No analysis of the grades of the thousands of pupils who take these mathematics papers supports Dweck’s reasoning.

Dweck’s explanation of the achievement gap between boys and girls in the USA therefore fails completely when tested on a large scale in Northern Ireland. If this were an isolated refutation of Dweck’s worldview, one could be generous and interpret it as demonstrating that her self-theory/mindset research holds up in some cultures but not in others. (Even this conservative reading would rule out the mindset approach as a psychological principle.) However, the most likely explanation of the fact that the Northern Ireland data contradict Dweck’s thinking is that her reasoning – involving “brainology” and “putting” subjects “into” one mindset or another – is plainly incoherent. For a careful analysis of the deep conceptual flaws in Dweck’s research, see The flaw in Dweck & Boaler’s Mindset research on this blog.

 

Stephen Elliott

 

Time for Ofqual to take back control


The GCSE Grade 5 controversy: why it’s time for Ofqual to “take back control”


Dr Hugh Morrison, The Queen’s University of Belfast (retired)  drhmorrison@gmail.com

 The GCSE Grade 5 Controversy

A highly unusual feature of the new numbered GCSE grade scale is the claim that the new grade 5 will somehow reflect the standards of educational jurisdictions ranked near the top of international league tables.  Given the controversy surrounding such tables it will be possible, for the first time, to raise profound technical concerns about a particular grade on the GCSE grade scale.  Moreover, it will be difficult to make the case that grade 5 has any technical merit if Pisa ranks have any role in its determination.  Pisa is the acronym for the Paris-based “Programme for International Student Assessment” and a glance at the Times Educational Supplement of 26.07.2013 will reveal that Pisa league tables are fraught with technical difficulties.

 Concerns about Item Response Theory

The methodology which underpins Pisa ranks is called “Item Response Theory” (IRT). IRT software claims to estimate the ability of individuals from their responses to test items. However, while the claim that ability is some inner state or “trait” of the individual from which his or her test responses flow – the so-called reservoir model – is central to IRT, the claim is rarely supported by evidence. There is very good reason to reject the inner-state approach. Consider, for example, the child solving simple problems in arithmetic. To explain this everyday behaviour, it transpires that one must invoke inner states with talismanic properties: the state must be timeless, infinite and future-anticipating!
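For readers unfamiliar with IRT, the sketch below shows the simplest member of the family, the Rasch (one-parameter logistic) model, in which every examinee is assigned a single latent “ability” number θ – precisely the reservoir picture criticised above. The code is a minimal illustration in Python and does not reproduce Pisa’s actual scaling procedure.

```python
# Minimal Rasch-model sketch: the probability of a correct response is a
# logistic function of (ability - item difficulty). Numbers are illustrative.
import math

def rasch_probability(theta: float, difficulty: float) -> float:
    """P(correct) under the Rasch model for ability theta and item difficulty."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

# An examinee assigned "ability" 0.5, facing items of increasing difficulty:
for b in (-1.0, 0.0, 1.0):
    print(f"difficulty {b:+.1f}: P(correct) = {rasch_probability(0.5, b):.2f}")
```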

Rasch

Great caution is needed when using the word “ability” – while test evidence can justify us in saying that an individual has ability, that same evidence can never be used to justify the claim that the word “ability” refers to an inner (quantifiable) state of that individual.  Ability is not an intrinsic property of an individual; rather, it is a property of the interaction between individual and test items.  The individual’s responses to the test items are an inseparable part of that ability.  Indeed, divorced from a measurement context, ability is indefinite.  Individuals have definite ability only relative to a measurement context; even here it is incorrect to suggest that individuals have a quantifiable entity called “ability.”  Abandoning IRT’s appealingly simple picture of ability as an inner (quantifiable) state that individuals carry about with them, renders IRT untenable.  Ability is a two-faceted entity governed by first-person/third-person asymmetry: while we ascribe ability to ourselves without criteria, criteria are an essential prerequisite when ascribing ability to others.

The picture central to all IRT modelling – that ability is something intrinsic to the individual which is definite (and quantifiable) at all times – is rejected by the Nobel laureate Herbert Simon and by two giants of 20th century thought, the physicist Niels Bohr and the philosopher Ludwig Wittgenstein. Indeed, Wittgenstein described the rationale which underpins IRT modelling – that test responses can be explained by appealing to inner processes – as a “general disease of thinking.” Psychologists have a name for this error; Gerd Gigerenzer of the Max Planck Institute writes: “The tendency to explain behaviour internally without analysing the environment is known as the ‘Fundamental Attribution Error’.”

Niels Bohr

The criticisms levelled by Bohr and Wittgenstein are particularly damaging because IRT modellers construe ability as something inner which can be measured.  Few philosophers can match Wittgenstein’s contribution to our understanding of what can be said about the “inner”; and few scientists can match Bohr’s contribution to our understanding of measurement, particularly when the object of that measurement lies beyond direct experience.  (Bohr is listed among the top ten physicists of all time in recognition of his research on the quantum measurement problem.)  Both Bohr and Wittgenstein are concerned with the same fundamental question: how can one communicate unambiguously about aspects of reality which are beyond the direct experience of the measurer?  Just as Bohr rejected entirely the existence of definite states within the atom, Wittgenstein also rejected any claim to inner mental states; potentiality replaces actuality for both men.

For the duration of his professional life, Bohr maintained that quantum attributes have a “deep going” relation to psychological attributes in that neither can be represented as quantifiable states hidden in some inner realm.  We will always be limited to talking about ability; we will never be able to answer the question “what is ability?” let alone quantify someone’s ability.  Bohr believed that “Our task is not to penetrate into the essence of things, the meaning of which we don’t know anyway, but rather to develop concepts which allow us to talk in a productive way about phenomena in nature. …The task of physics is not to find out how nature is, but to find out what we can say about nature. … For if we want to say anything at all about nature – and what else does science try to do? – we must somehow pass from mathematical to everyday language” [italics added].

Given that IRT software is designed to measure ability, it may surprise readers that the claim that ability can be construed as a quantifiable inner state is rarely defended in IRT textbooks and journal articles.  In their article “Five decades of item response modelling,” Goldstein and Wood trace the beginnings of IRT to a paper written in 1943 by Derrick Lawley.  They note: “Lawley, a statistician, was not concerned with unpacking what ‘ability’ might mean.”  Little has changed in the interim.

Why Ofqual must protect GCSE pupils from the OECD’s “sophisticated processes”

These profound conceptual difficulties with the model which underpins Pisa rankings must surely undermine the OECD’s claim that one can rank order countries for the quality of their education systems.  In a detailed analysis of the 2006 Pisa rankings, the eminent statistician Svend Kreiner revealed that “Most people don’t know that half of the students taking part in Pisa [2006] do not respond to any reading item at all.  Despite that, Pisa assigns reading scores to these children.”  Given such revelations, why are governments, the media and the general public not more sceptical about Pisa rankings?  Kreiner offers the following explanation: “One of the problems that everybody has with Pisa is that they don’t want to discuss things with people criticising or asking questions concerning the results.  They didn’t want to talk with me at all.  I am sure it is because they can’t defend themselves.”

Given the depth of the conceptual problems which afflict IRT and, as a consequence, Pisa rankings, it seems to me foolhardy in the extreme to predicate the new GCSE grade 5 on Pisa rankings.  Ofqual have announced that grade 5 will be “broadly in line with what the best available evidence tells us is the average PISA performance in countries such as Finland, Canada, the Netherlands and Switzerland.”  In addition to Ofqual, the Department for Education and Tim Oates, director of research at Cambridge Assessment, appear to endorse a role for Pisa in UK public examinations.  The Department for Education have produced a report – PISA 2009 Study: How big is the gap? – which creates the impression that “gaps” between England and high performing Pisa countries can be represented on a GCSE grade scale designed for reporting achievement rather than ability.

Finally, the director of research at Cambridge Assessment asserts: “I am more optimistic … than most other analysts, I don’t see too many problems in these kinds of international comparisons.”  Indeed, Mr Oates believes that UK assessment has much to learn from involving Pisa staff directly in solving the grade 5 problem: “If we want to do it formally then we ought to have discussions with OECD. … OECD have some pretty sophisticated processes of equating tests which contain different items in different national settings.”  There is an immediate problem with this claim.  Since the psychometric definition of equity begins with the words “for every group of examinees of identical ability …”, equity itself is founded on the erroneous assumption that ability can be quantified.

For the first time in the history of public examinations in the UK the technical fidelity of a GCSE grade will be linked to Pisa methodology.  Given the concerns surrounding IRT, is it not time for Ofqual to distance itself from the claim that the grade 5 standard is somehow invested with properties which allow it to track international standards in the upper reaches of the Pisa league tables?  The recent introduction of the rather vague term “strong pass” smacks of desperation; couldn’t a grade 6 also be deemed a strong pass?  Why not stop digging, sever the link with Pisa, and simply interpret grade 5 as nothing more than the grade representing a standard somewhere between grade 4 and grade 6?

The new GCSE grade 5: what Ofqual refuse to tell the public


The new GCSE grade 5 and the Fundamental Attribution Error

Dr Hugh Morrison, The Queen’s University of Belfast (retired)  drhmorrison@gmail.com

Hilda Ogden

Holding the new GCSE grade 5 up to ridicule

A highly unusual feature of the new numbered GCSE grade scale is the claim that the new grade 5 will somehow reflect the standards of educational jurisdictions ranked near the top of international league tables.  Given the controversy surrounding such tables it will be possible, for the first time, to raise profound technical concerns about a particular grade on the GCSE grade scale.  Moreover, it will be impossible to make the case that grade 5 has any technical merit if (as seems likely) Pisa ranks have any role in its determination.  Pisa is the acronym for the OECD’s “Programme for International Student Assessment” and a glance at the Times Educational Supplement of 26.07.2013 will reveal that Pisa league tables are fraught with technical difficulties.

The distinguished statistician Svend Kreiner, of the University of Copenhagen, who has carried out a detailed investigation of the Pisa model, concluded: “the best we can say about Pisa rankings is that they are useless.” The British mathematician Tony Gardiner, of Birmingham University, has referred to Pisa claims as “snake oil.” In the Times Educational Supplement piece, I argued that the model used by Pisa is flawed because, in order to explain a child’s ability to do simple arithmetic, for example, one must posit exotic inner states which are infinite, timeless and which somehow anticipate every arithmetical problem the child will subsequently encounter in a lifetime. These impossible inner states arise because Pisa models treat “ability” as a state rather than a capacity. How has Pisa managed to survive all these years given such damaging and unequivocal criticism? Its secret is that it appears to enjoy a relationship with Government and the media which, in effect, insulates it from its critics. Kreiner writes: “One of the problems that everybody has with Pisa is that they don’t want to discuss things with people criticising or asking questions concerning the results.  They didn’t want to talk to me at all.  I am sure it is because they can’t defend themselves.”

For the first time in the history of British examinations, a simple argument that anyone can understand can be deployed to undermine the technical fidelity of a particular examination grade. Mixing the measurement of achievement with the measurement of ability exposes the new grade 5 to ridicule. If grade 5 is to be predicated on Pisa rankings, then profound validity shortcomings in respect of the rankings will have implications for grade 5. Consider the arrangement of balls on a snooker table before a game begins. Specifying the configuration of the 22 balls requires 44 numbers (two coordinates per ball, with the front and side rails serving as coordinate axes). While the arrangement of balls on a snooker table cannot be summarised in fewer than 44 numbers, Pisa claims to represent the state of mathematics education in the USA – with its almost 100,000 schools – in a single number. It would seem that what cannot be achieved for the location of simple little resin balls is nevertheless possible when the entity being “measured” is the mathematical attainment of millions of complex, intentional beings.
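A minimal sketch (in Python, with invented coordinates) of the snooker-table point is given below: describing even the opening layout takes an x and a y for each of the 22 balls, 44 numbers in all, whereas Pisa offers a single number for an entire country’s mathematics education.

```python
# Illustrative sketch: even a snooker table's opening layout needs 44 numbers.
# Ball positions here are invented; only the count matters.
import random

random.seed(0)
balls = ["cue"] + [f"red{i}" for i in range(1, 16)] + [
    "yellow", "green", "brown", "blue", "pink", "black"]

# Two coordinates per ball, measured from the side and front rails (metres).
layout = {ball: (round(random.uniform(0.0, 3.57), 2),
                 round(random.uniform(0.0, 1.78), 2)) for ball in balls}

numbers_needed = sum(len(position) for position in layout.values())
print(len(balls), numbers_needed)   # 22 balls, 44 numbers
```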

The Nobel laureate Sir Peter Medawar labelled such claims “unnatural science.”  Citing the research of John R. Philip, he notes that the properties of a simple particle of soil cannot be captured in a single number: “the physical properties and field behaviour of soil depend on particle size and shape, porosity, hydrogen ion concentration, material flora and water content and hygroscopy.  No single figure can embody itself in a constellation of values of all these variables.”  Once again, what is impossible for a tiny particle of soil taken from the shoe of one of the many millions of pupils who attend school in America, is nevertheless possible when the entity being “measured” is the combined mathematical attainment of a continent’s schoolchildren.

The problem with the new GCSE grade 5: a detailed critique

The OECD has now taken the bold step of analysing measures of “happiness,” “well-being” and “anxiety” for individual countries (see, for example, ‘New Pisa happiness table,’ Times Educational Supplement 19.04.2017). In these tables “life satisfaction,” for example, is measured to two-decimal-place accuracy. This raises the question: can complex constructs such as happiness or anxiety really be represented by a number such as 7.26? For two giants of 20th century thought – the philosopher Ludwig Wittgenstein and the father of quantum physics, Niels Bohr – the answer is an unequivocal “no.” The fundamental flaws in Pisa’s approach to measuring happiness will serve to illustrate the folly of linking a particular GCSE grade to Pisa methodology.

Once again, surely common sense itself dictates that constructs such as happiness, anxiety and well-being cannot be captured in a single number?  In his book Three Seductive Ideas, the Harvard psychologist Jerome Kagan draws on the writings of Bohr and Wittgenstein to argue that measures of constructs such as happiness cannot be attributed to individuals and cannot be represented as numbers because such measures are context-dependent.  He writes: “The first premise is that the unit of analysis … must be a person in a context, rather than an isolated characteristic of that person.”  Wittgenstein and Bohr (independently) arrived at the conclusion that what is measured cannot be separated from the measurement context.  It follows that when an individual’s happiness is being measured, a complete description of the measuring tool must appear in the measurement statement because the measuring tool helps define what the measurer means by the word happiness.

Kagan rejects the practice of reporting the measurement of complex psychological constructs using numbers: “The contrasting view, held by Whitehead [co-author of the Principia Mathematica] and Wittgenstein, insists that every description should refer to … the circumstances of the observation.”  The reason for including a description of the measuring instrument isn’t difficult to see.  Kagan points out that “Most investigators who study “anxiety” or “fear” use answers on a standard questionnaire or responses to an interview to decide which of their subjects are anxious or fearful.  A smaller number of scientists ask close friends or relatives of each subject to evaluate how anxious the person is.  A still smaller group measures the heart rate, blood pressure, galvanic skin response, or salivary level of subjects.”  Alas, all these methodologies yield very different “measures” of the anxiety or fear of the subject.

Kagan therefore argues that a change in the measuring tool means a change in the reported measurement; one must include a description of the measuring instrument in order to “communicate unambiguously,” as Bohr expressed it.  One can never simply write “happiness = 4.29” (as in Pisa tables) because there is no such thing as a context-independent measure of happiness.  We have no idea what happiness is as a thing-in-itself.  Kagan notes the implications for psychologists of the measurement principles set out by Niels Bohr: “Modern physicists appreciate that light can behave as a wave or a particle depending on the method of measurement.  But some contemporary psychologists write as if that maxim did not apply to consciousness, intelligence, or fear.”

According to Bohr, when one reports psychological measurements, the requirement to describe the measurement situation means that ordinary language must replace numbers.  This invalidates the entire Pisa project.  Werner Heisenberg summarised his mentor’s teachings: “If we want to say anything at all about nature – and what else does science try to do – we must pass from mathematical to everyday language.”  The consequence of accepting this counsel is clear: one cannot rank-order descriptions.

(To simplify matters somewhat, while numbers function perfectly well when observing the motion of a tennis ball or a star, the psychologist cannot observe directly the pupil’s happiness.  Bohr argued that there is “a deep-going analogy” between measurement in quantum physics and measurement in psychology because both are concerned with measuring constructs which transcend the limits of ordinary experience.  According to Bohr, because the physicist, like the psychologist (in respect of attempts to measure happiness), cannot observe electrons and photons directly, “physics concerns what we can say about nature,” and numbers, therefore, must give way to ordinary language.)

The arguments advanced above apply, without modification, to Pisa’s core activity of measuring pupil ability.  A simple thought experiment (first reported in the Times Educational Supplement of 26.07.2013) makes this clear.  Suppose that a pupil is awarded a perfect score in a GCSE mathematics examination.  It seems sensible to conjecture that if Einstein were alive, he too would secure a perfect score on this mathematics paper.  Given the title on the front page of the examination paper, one has the clear sense that the examination measures ability in mathematics.  Is one therefore justified in saying that Einstein and the pupil have the same mathematical ability?

This paradoxical outcome results from the erroneous treatment of mathematical ability as something entirely divorced from the questions which make up the examination paper (the measurement context).  It is clear that the pupil’s mathematical achievements are dwarfed by Einstein’s; to ascribe equal ability to Einstein and the pupil is to communicate ambiguously.  To avoid the paradox one simply has to detail the measurement circumstances in any report of attainment and say: “Einstein and the pupil have the same mathematical ability relative to this particular GCSE mathematics paper.”  By including a description of the measuring instrument one is, in effect, making clear the restrictive meaning which attaches to the word “mathematics” as it is being used here; school mathematics omits whole areas of the discipline familiar to Einstein such as non-Euclidean geometry, tensor analysis, vector field theory, Newtonian mechanics, and so on.  As with the measurement of happiness, when one factors in a description of the measuring instrument, the paradox dissolves away.

Pace Pisa, ability is not an intrinsic property of the person.  Rather, it is a joint property of the person and the measuring tool.  Ability is the property of an interaction.  Alas for Pisa, the move from numbers to language also dissolves away that organisation’s much-lauded rank orders.  Little wonder that Wittgenstein described the reasoning which underpins the statistical model (Item Response Theory) at the heart of the Pisa rankings as “a disease of thought.”  For the first time, the many profound conceptual difficulties of the Pisa league table now become difficulties for a grade on the GCSE grade scale.  Why would anyone agree to predicate a perfectly respectable grade scale on a ranking system with such profound shortcomings?
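
For readers unfamiliar with Item Response Theory, the sketch below (in Python, with invented numbers used purely for illustration) shows the general form of the one-parameter Rasch model of the kind on which Pisa’s scaling is broadly based.  The point at issue is the single parameter theta: the model writes the pupil’s “ability” as one context-free number, standing apart from any description of the questions actually asked – precisely the separation of the person from the measuring instrument that the argument above rejects.

```python
import math

def rasch_probability(theta, b):
    """Probability that a pupil of 'ability' theta answers an item of
    difficulty b correctly, under the one-parameter (Rasch) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Invented, purely illustrative figures: one pupil, three items.
theta = 0.8                        # the single number assigned to the pupil
item_difficulties = [-1.0, 0.5, 2.0]

for b in item_difficulties:
    p = rasch_probability(theta, b)
    print(f"item difficulty {b:+.1f}: P(correct) = {p:.2f}")

# Note that theta is estimated and then reported on its own, detached from
# any description of the items: the model treats 'ability' as an intrinsic,
# context-free property of the pupil.
```

The item difficulties do appear in the model, but only as a means of estimating theta; once the scaling is complete, theta alone is carried forward into the league tables, stripped of any reference to the measurement context.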

An article published in 2016 in the USA’s Proceedings of the National Academy of Sciences by Van Bavel, Mende-Siedlecki, Brady and Reinero serves to emphasise the degree to which Pisa thinking is isolated even within psychology: “Indeed, the insight that behaviour is a function of both the person and the environment – elegantly captured by Lewin’s equation: B = f(P, E) – has shaped the direction of social psychological research for more than half a century.  During that time, psychologists and other social scientists have paid considerable attention to the influence of context on the individual and have found extensive evidence that contextual factors alter human behaviour.”

If ability is a joint property of the person and the context in which that ability is manifest, then unambiguous communication demands that a description of the context must be integral to any attempt to represent an individual’s ability.  Mainstream psychology rejects the notion that one can ignore context and treat behaviour as wholly analysable in terms of traits and inner processes.  Indeed, psychology itself has a name for the error which afflicts the Pisa ranking model.  Gerd Gigerenzer of the Max Planck Institute writes: “The tendency to explain behaviour internally without analysing the environment is known as the ‘fundamental attribution error.’”

Three thinkers stand out among those who argue that ability measures cannot be separated from the context in which they are manifest: the Nobel laureate Herbert A. Simon, and two of the 20th century’s greatest intellectuals – the father of quantum theory, Niels Bohr, and the philosopher Ludwig Wittgenstein.  First, Simon uses a scissors metaphor to indicate the degree to which an attribute like ability cannot be disentangled from the context in which it is manifest.  (Pursuing questions such as “which blade of the scissors cuts the cloth?” does little to explain how scissors cut; there is little value in seeking to understand the whole – the cutting action – in terms of its parts – the unique contribution of each blade.)  Simon writes: “Human rational behaviour is shaped by a scissors whose blades are the structure of the environment and the computational capabilities of the actor.”

Secondly, Niels Bohr – in his Discussion with Einstein on Epistemological Problems in Atomic Physics – uses quantum “complementarity” to argue that first-person ascriptions [the contribution of the individual] and third-person ascriptions [the contribution of the environment] of psychological attributes form an “indivisible whole.”  Finally, on page 143 of his Blue and Brown Books, Wittgenstein highlights the error at the heart of the Pisa project: “There is a general disease of thinking which always looks for (and finds) what would be called a mental state from which all our acts spring as from a reservoir.”

Conclusions

The arguments set out above have serious implications for the technical fidelity of the new GCSE grade 5.  The more the general public learn about the modelling which underpins Pisa, the more their faith in the new GCSE grade scale will be undermined.  (For example, Kreiner reveals in the Times Educational Supplement piece cited earlier that “Most people don’t know that half of the students taking part in Pisa [2006] do not respond to any reading item at all.  Despite that, Pisa assigns reading scores to these children.”)

The fact that a switch from numbers to language entirely invalidates the practice of ranking countries according to the efficacy of their education systems has profound implications for the validity of inferences made in respect of the new GCSE grade 5.  Given the assertion that grade 5 is designed to reflect the academic standards of high-performing educational jurisdictions, as identified by their Pisa ranks, what possible justification can be offered for assigning a privileged role to grade 5 in school performance tables?

To date, the profound conceptual difficulties which attend Pisa ranks have not impacted directly on the life chances of particular children in this country.  This would change if individual pupils failing to reach the grade 5 standard were construed as having fallen short of international standards (whatever that means).  If one accepts the reasoning of Simon, Wittgenstein and Bohr, grade 5 can represent nothing more than a standard somewhere between grade 4 and grade 6.  Any attempt to accord it special status, thereby giving it a central role in the EBacc and/or performance tables, for example, risks exposing the new GCSE grading scale to ridicule.

Why the Belfast Telegraph and the Irish News must correct their claim that St Dominic’s High School is Northern Ireland’s top grammar school.


In May 2016 the Belfast Telegraph published a league table of Northern Ireland grammar schools, based on the Advanced Level grades achieved by grammar school pupils in the school year 2014/2015.  The Belfast Telegraph’s editor failed to reply to correspondence asserting that fundamental errors in the design of her newspaper’s league table could result in unfair reputational damage to schools (see “Why the Belfast Telegraph and Irish News must set the record straight on grammar school league tables”).  More recently, the Irish News published a similarly designed league table based on the 2015/2016 examinations.  This table has precisely the same design fault as the Belfast Telegraph table.  Once again, there is potential for reputational damage to a significant number of grammar schools.

 

It isn’t difficult to spot the error in the tables.  In assigning ranks to the various grammar schools, grades C, B, A and A* are treated as equal in value.  This is clearly wrong: a grade B represents a higher standard than a grade C, a grade A a higher standard than a grade B, and a grade A* a higher standard than an A.  Any instrument which treats a grade C the same as a grade A* simply cannot claim to measure academic excellence.  The following scenario illustrates just how unfit for purpose these league tables are as tools for identifying high-performing grammar schools: if every middle-sixth pupil in every subject across the entire grammar school estate were simultaneously to improve from C standard to A* standard, the tables published in these two newspapers would be powerless to detect any change whatsoever in standards.
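
To make the design fault concrete, here is a minimal sketch in Python using two invented schools and invented grade counts (the figures are illustrative assumptions, not data from any published table).  It compares the newspapers’ equal-weighting approach, in which every grade from C to A* counts the same, with a simple Grade Point Average in which each grade is weighted by its value.

```python
# Invented schools and invented grade counts, purely for illustration.
# School X has many passes, mostly at grade C; School Y has slightly
# fewer passes, but most of them at A*.
grades = {
    "School X": {"A*": 5,   "A": 15, "B": 40, "C": 130, "D": 10},
    "School Y": {"A*": 120, "A": 50, "B": 10, "C": 0,   "D": 20},
}

points = {"A*": 4, "A": 3, "B": 2, "C": 1, "D": 0}   # one possible GPA weighting

for school, counts in grades.items():
    entries = sum(counts.values())
    # Newspaper method: every grade from C up to A* counts equally.
    a_star_to_c = sum(n for g, n in counts.items() if g in ("A*", "A", "B", "C"))
    pct = 100.0 * a_star_to_c / entries
    # Grade Point Average: each grade is weighted by its value.
    gpa = sum(points[g] * n for g, n in counts.items()) / entries
    print(f"{school}: A*-C rate = {pct:.0f}%, GPA = {gpa:.2f}")

# School X "wins" on the A*-C rate (95% against 90%) yet trails badly on
# GPA (1.38 against 3.25), so the two methods reverse the rank order.
# Note also that if every one of School X's C grades were upgraded to A*,
# its A*-C rate would not move at all, while its GPA would rise sharply:
# exactly the improvement the newspapers' tables cannot detect.
```

A Grade Point Average is, of course, only one way of respecting the ordering of the grades; any weighting that places A* above A, A above B and B above C would expose the same flaw.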

 

Furthermore, both newspapers are, in effect, relying on highly questionable analysis to cast doubt on the quality of teaching and learning in all “non-Catholic” schools.  Here are a few quotations from Rebecca Black’s Belfast Telegraph piece: “It is impossible not to be impressed at the consistently high performances of our top Catholic schools;” “By contrast, some of the best known non-Catholic grammars have slipped below the Northern Ireland average;” “If someone could bottle the ethos for success in these (Catholic) schools, they could run the world;” “Sean Rafferty, principal of St Louis Grammar, makes a very salient point in today’s Belfast Telegraph in calling for the Department of Education to examine what makes the top Catholic schools so successful, to learn the lessons and spread that magic across the school estate.”

 

Is the Belfast Telegraph not guilty of promulgating what has been labelled “fake news”?  Based on highly dubious reasoning, Rebecca Black is distorting the debate on the relative efficacy of Catholic and non-Catholic education in Northern Ireland.  For Ms Black, the superiority of St Dominic’s High School (the school ranked first in both the Belfast Telegraph and the Irish News league tables) over, for example, Friends School Lisburn (ranked 12th) is explained by the Catholic ethos of the former.  However, when a Grade Point Average approach is used to compute the ranks, the order is reversed and Friends (a “non-Catholic” school) emerges as the superior school.  The simplistic hypothesis that Catholic education is superior to that offered in non-Catholic schools goes up in smoke when proper account is taken of the ordinal nature of examination grades: C, B, A and A*.