Bias in ‘the science’ on coronavirus? Britain has been here before

It became the defining moment of the BSE crisis. In an attempt to persuade the public that it was perfectly safe to eat beef, the then agriculture minister, John Gummer, fed his four-year-old daughter a burger on camera in 1990. Six years later, the government admitted there was a link between eating infected beef and the brain disease vCJD, though thankfully the numbers affected were relatively small. Nevertheless, the scandal eroded public trust in government public health messaging, with knock-on effects on parents’ perception of official advice during the MMR scare a few years later.

The Gummer moment should serve as a warning to politicians about what happens if they seek to patronise the public with assurances that are not backed up by science. Yet there are already several candidates for the coronavirus equivalent. Will it be Boris Johnson boasting of “shaking hands with everybody” on 3 March? His claim on 12 March that banning large events would have little effect? Or his assurances that the government will avoid another national lockdown?

The public inquiry into the BSE scandal resulted in a 16-volume report that called for greater transparency in the production and use of scientific advice, and for the public to be treated like grownups who can understand uncertainty. Twenty years later, those lessons appear to have been forgotten.

One of the questions already being asked is why the government took so long to implement a full lockdown. Two narratives emerged from the government’s science advisers in the early days: that it would be wrong to suppress the spread of Covid-19 altogether, as this would prevent “herd immunity” building up, and that the public would only tolerate a lockdown for so long. In a programme for Radio 4 this week, I looked at where this pseudo-scientific idea of “behavioural fatigue” came from: none of the behavioural scientists I interviewed – including those who sit on Sage’s behavioural science subcommittee – knew. Similar questions remain about exactly how the idea of pursuing herd immunity emerged.

One thing is clear. The government’s science advisers are repeating the mistakes of the BSE crisis – mistaking an absence of evidence of risk or benefit for evidence that there is no risk or benefit. The deputy chief medical officer, Jenny Harries, asserted back in March that mask wearing was “not a good idea” for members of the public because it could increase the risk of contracting the virus. The developing evidence base on masks contradicts this. Others have made the same error: the World Health Organization has for months insisted that Covid-19 is transmitted through droplets when people cough or sneeze, rather than through much finer aerosols that can linger in the air for longer. But there was little certainty around this, and new evidence suggests that aerosol transmission may indeed be an issue.

At the root of all this is the nature of scientific knowledge. It is rarely black and white, and this is never more true than with a new disease, what the science historian Lorraine Daston has dubbed an “empirical ground zero”. That ground zero entails a high degree of uncertainty, which paves the way for human bias. The paradox of science is that while it aspires to peel away bias and leave knowledge that is pure and true, it is practised by human beings who are as subject to biases as the rest of us.

As understanding of the problem of bias in science has grown, there has been much soul-searching about how to reduce it by improving the way research gets reviewed and scrutinised. But there has been much less focus on how to eliminate bias from the production of scientific advice for government.

A brilliant scientist will not necessarily be a brilliant scientific adviser. An adviser needs to be able to communicate findings outside their area of expertise, to juggle contradictory views, and to act as a conduit between academia and high politics. And there are structural incentives towards bias in scientific advice. Politicians suffer from confirmation bias – looking for evidence that supports their worldview. Boris Johnson – a self-professed libertarian aware of the economic risks of Brexit – would hardly have been keen to shut down the economy; little wonder the flawed ideas of behavioural fatigue and herd immunity got traction. Politicians also like certainty: despite Brexiteer scepticism about experts, this government has used the mantra that it is “following the science” as a shield for its political decision making.

But scientific advisers can also be complicit. Peter Lunn, a behavioural scientist who advised the Irish government there was no evidence for behavioural fatigue, told me that in his experience, scientists tend to be more strident in communicating evidence if they know it will chime with what politicians want to hear. This is why there needs to be more transparency about the processes through which scientific advice feeds into decision making.

The role of Sage versus the role of the chief scientific adviser, at present Patrick Vallance, is too fuzzily defined. Sage was set up as an ad hoc group with a rotating cast of scientists, yet is being held collectively accountable in a way that does not reflect this status. Its minutes are now being published, but do not adequately express dissenting opinion. We know too little about how the chief scientific adviser, who chairs Sage, funnels its range of opinions into government. It would create much clearer lines of accountability if Vallance’s advice, drawing on Sage input, were published as such. But the only way to get a sense of that advice in real time has been when he stands at a Downing Street podium, a troubling politicisation of the independent scientist’s role. This is even more troubling given reports that the chief nursing officer was dropped from press conference lineups after she refused to back Dominic Cummings over his breach of lockdown guidelines.

The lack of clarity also fails to protect scientific advisers from accusations of bias. For instance, back in March, Vallance – who says his job is to speak scientific truth to power – was the frontrunner for chief executive of UK Research and Innovation, a £7bn-a-year science funding body, an appointment that Cummings, obsessed with science policy, would likely have had influence over. This would be a tricky position for anyone to negotiate, regardless of their integrity, and greater transparency would have insulated Vallance from the accusation that it might have affected his ability to deliver difficult messages.

Certainty comes across as authoritative, even when it is anything but. It is not just the government’s advisers who fall prey to this; some of its strongest scientific critics have also downplayed uncertainty. There are those who are keen to pin 100% of the blame on the politicians, and those who would rather it was all laid at the door of their scientific advisers. But I suspect any inquiry will produce lessons eerily similar to those of the BSE report. Back then, the absence of systems to protect scientific advice from bias was partly to blame. It seems not much has changed.

• Sonia Sodha is a Guardian columnist
