The cruel exams algorithm has laid bare the unfairness at the heart of our schools

What children know and too many politicians seem not to: a few years ago, the psychologists Alex Shaw and Kristina Olson ran an experiment in which they told young children about two boys, Dan and Mark, who had cleaned up their room and were to be rewarded with rubbers (why rubbers should be seen as a reward I don’t know). However, there were five rubbers, so they could not be divided equally between the two boys. What should they do? The vast majority of children thought that one rubber should be thrown away, so there could be an even split between Dan and Mark. But when the children heard that “Dan did more work than Mark”, they were quite comfortable giving three to Dan and two to Mark.

The children, in other words, had a deep commitment to fairness – anyone who has children will know that their favourite cry is “but that’s not fair!” – but they also recognised that the meaning of fairness could change depending on context. If Dan worked harder than Mark, it was only fair that he received more of the goodies; fairness did not always require an equal division of the rewards.

Fairness is, of course, a key concern not just for children but for politics too. Unlike the children, though, many politicians seem not to recognise that the meaning of fairness depends on context, that there are different ways of being fair and that we have to choose between them, depending on our broader political aims.

Consider the examination results fiasco. The original set of results, created by an algorithm designed by the exam authorities in the four nations, was manifestly unfair, penalising exceptional pupils in historically disadvantaged schools while giving a statistical leg-up to poorly performing students in high-achieving ones.

Outrage over the unfairness has led to the abandonment of the algorithmic scores and their replacement with unmoderated teacher assessments. But this, too, is unfair. Not only do teachers’ assessments tend to be overgenerous when compared with actual exam results but, left unmoderated, they penalise those pupils whose teachers were stricter in their marking.

Then there is the question of grade inflation. This year’s grades are so much better than those of previous years that they may be unfair to both past and future students who must compete with them.

All the methods, in other words, are fair from certain perspectives and unfair from others. The question that the exam authorities and the politicians needed to answer was not: “How do we create a fair assessment system to replace the exams?” but: “What kind of fairness do we want and what kinds of unfairness are we willing to tolerate?”

Fairness is not a thing in itself but is defined by one’s wider political vision. A utilitarian, committed to the notion of the greatest good for the greatest number, has a different understanding of fairness from an Aristotelian who believes, in the words of Aristotle’s Politics, that “persons who are equal should have assigned to them equal things”. Fairness to a free-market libertarian, for whom the market is best placed to distribute goods equitably, is different from fairness to a socialist, whose starting point is social need.

Politicians and policymakers have, however, increasingly embraced a technocratic view of fairness, adopting the pretence that science or statistics can objectively define what it is to be fair. The problem with this approach, as the Royal Statistical Society observed, is that an algorithm “is not simply a technically obvious and neutral procedure” but “embeds a range of judgments and choices”. The results of an algorithm depend on what it is asked to do and what data it is fed.

In the schools fiasco, the exam authorities, such as Ofqual in England and the Scottish Qualifications Authority, were apparently told that the primary concern was to prevent grade inflation. Once politicians had made that choice, blaming the algorithm for producing the wrong political answer was little more than a refusal to accept responsibility for their own judgment.

Algorithms are, as the writer and broadcaster Timandra Harkness puts it, “prejudice engines”. The data with which they are fed is inevitably tainted by the prejudices and biases of the human world. Unchecked, that taint feeds into the results they produce. And where algorithms make predictions, those prejudices and biases are projected into the future.

The reason the exam algorithms penalised pupils from disadvantaged schools is that this is the algorithm built into real life. The education system has long served to thwart the ambitions of working-class pupils and to ease the path of more privileged ones. The results debacle is but a sharper expression of what happens year after year.

It’s not just with algorithms that we see the problem of political judgments being passed off as objective decisions. Throughout the summer, ministers have justified their pandemic policies by claiming that “we’re following the science”. Scientific data and modelling can help us understand the consequences of different political decisions, but they cannot tell us which decision is socially or morally preferable. Is it better to prevent grade inflation or to reward students who have done better than historically expected? Do the benefits of opening schools outweigh the risks of further spreading coronavirus? These are not purely empirical questions; they require political judgment too.

“People in this country have had enough of experts,” claimed Michael Gove during the Brexit referendum campaign. No government would seem to be more in tune with that sentiment than Boris Johnson’s administration. Yet his is also a government that shirks responsibility for its own decisions by pretending that political questions are really technical ones to be settled by experts. Perhaps what Gove meant was: “We’ve had enough of experts except when they can provide us with an alibi for political misjudgments.”

Kenan Malik is an Observer columnist
