'People will forgive you for being wrong, but they will never forgive you for being right - especially if events prove you right while proving them wrong.' Thomas Sowell
Showing posts with label lies. Show all posts
Wednesday, 27 January 2021
Covid lies cost lives – we have a duty to clamp down on them
George Monbiot in The Guardian
Why do we value lies more than lives? We know that certain falsehoods kill people. Some of those who believe such claims as “coronavirus doesn’t exist”, “it’s not the virus that makes people ill but 5G”, or “vaccines are used to inject us with microchips” fail to take precautions or refuse to be vaccinated, then contract and spread the virus. Yet we allow these lies to proliferate.
We have a right to speak freely. We also have a right to life. When malicious disinformation – claims that are known to be both false and dangerous – can spread without restraint, these two values collide head-on. One of them must give way, and the one we have chosen to sacrifice is human life. We treat free speech as sacred, but life as negotiable. When governments fail to ban outright lies that endanger people’s lives, I believe they make the wrong choice.
Any control by governments of what we may say is dangerous, especially when the government, like ours, has authoritarian tendencies. But the absence of control is also dangerous. In theory, we recognise that there are necessary limits to free speech: almost everyone agrees that we should not be free to shout “fire!” in a crowded theatre, because people are likely to be trampled to death. Well, people are being trampled to death by these lies. Surely the line has been crossed?
Those who demand absolute freedom of speech often talk about “the marketplace of ideas”. But in a marketplace, you are forbidden to make false claims about your product. You cannot pass one thing off as another. You cannot sell shares on a false prospectus. You are legally prohibited from making money by lying to your customers. In other words, in the marketplace there are limits to free speech. So where, in the marketplace of ideas, are the trading standards? Who regulates the weights and measures? Who checks the prospectus? We protect money from lies more carefully than we protect human life.
I believe that only the spreading of the most dangerous falsehoods, like those mentioned in the first paragraph, should be prohibited. A possible template is the Cancer Act, which bans people from advertising cures or treatments for cancer. A ban on the worst Covid lies should be time-limited, running for perhaps six months. I would like to see an expert committee, similar to the Scientific Advisory Group for Emergencies (Sage), identifying claims that present a genuine danger to life and proposing their temporary prohibition to parliament.
While this measure would apply only to the most extreme cases, we should be far more alert to the dangers of misinformation in general. Even though it states that the pundits it names are not deliberately spreading false information, the new Anti-Virus site www.covidfaq.co might help to tip the balance against people such as Allison Pearson, Peter Hitchens and Sunetra Gupta, who have made such public headway with their misleading claims about the pandemic.
But how did these claims become so prominent? They achieved traction only because they were given a massive platform in the media, particularly in the Telegraph, the Mail and – above all – the house journal of unscientific gibberish, the Spectator. Their most influential outlet is the BBC. The BBC has an unerring instinct for misjudging where debate about a matter of science lies. It thrills to the sound of noisy, ill-informed contrarians. As the conservationist Stephen Barlow argues, science denial is destroying our societies and threatening the survival of life on Earth. Yet it is treated by the media as a form of entertainment. The bigger the idiot, the greater the airtime.
Interestingly, all but one of the journalists mentioned on the Anti-Virus site also have a long track record of downplaying and, in some cases, denying climate breakdown. Peter Hitchens, for example, has dismissed not only human-made global heating, but the greenhouse effect itself. Today, climate denial has mostly dissipated in this country, perhaps because the BBC has at last stopped treating climate change as a matter of controversy, and Channel 4 no longer makes films claiming that climate science is a scam. The broadcasters kept this disinformation alive, just as the BBC, still providing a platform for misleading claims this month, sustains falsehoods about the pandemic.
Ironies abound, however. One of the founders of the admirable Anti-Virus site is Sam Bowman, a senior fellow at the Adam Smith Institute (ASI). This is an opaquely funded lobby group with a long history of misleading claims about science that often seem to align with its ideology or the interests of its funders. For example, it has downplayed the dangers of tobacco smoke, and argued against smoking bans in pubs and plain packaging for cigarettes. In 2013, the Observer revealed that it had been taking money from tobacco companies. Bowman himself, echoing arguments made by the tobacco industry, has called for the “lifting [of] all EU-wide regulations on cigarette packaging” on the grounds of “civil liberties”. He has also railed against government funding for public health messages about the dangers of smoking.
Some of the ASI’s past claims about climate science – such as statements that the planet is “failing to warm” and that climate science is becoming “completely and utterly discredited” – are as idiotic as the claims about the pandemic that Bowman rightly exposes. The ASI’s Neoliberal Manifesto, published in 2019, maintains, among other howlers, that “fewer people are malnourished than ever before”. In reality, malnutrition has been rising since 2014. If Bowman is serious about being a defender of science, perhaps he could call out some of the falsehoods spread by his own organisation.
Lobby groups funded by plutocrats and corporations are responsible for much of the misinformation that saturates public life. The launch of the Great Barrington Declaration, for example, which champions herd immunity through mass infection with the help of discredited claims, was hosted – physically and online – by the American Institute for Economic Research. This institute has received money from the Charles Koch Foundation, and takes a wide range of anti-environmental positions.
It’s not surprising that we have an inveterate liar as prime minister: this government has emerged from a culture of rightwing misinformation, weaponised by thinktanks and lobby groups. False claims are big business: rich people and organisations will pay handsomely for others to spread them. Some of those whom the BBC used to “balance” climate scientists in its debates were professional liars paid by fossil-fuel companies.
Over the past 30 years, I have watched this business model spread like a virus through public life. Perhaps it is futile to call for a government of liars to regulate lies. But while conspiracy theorists make a killing from their false claims, we should at least name the standards that a good society would set, even if we can’t trust the current government to uphold them.
Wednesday, 3 July 2019
After urging land reform I now know the brute power of our billionaire press
A report I helped publish has led to attacks and flat-out falsehoods in the rightwing media. It’s clear whose interests they serve, writes George Monbiot in The Guardian
‘As their crucial role in promoting Nigel Farage, Brexit and Boris Johnson suggests, the newspapers are as powerful as ever.’ Photograph: Christopher Pledger
All billionaires want the same thing – a world that works for them. For many, this means a world in which they are scarcely taxed and scarcely regulated; where labour is cheap and the planet can be used as a dustbin; where they can flit between tax havens and secrecy regimes, using the Earth’s surface as a speculative gaming board, extracting profits and dumping costs. The world that works for them works against us.
So how, in nominal democracies, do they get what they want? They fund political parties and lobby groups, set up fake grassroots (Astroturf) campaigns and finance social media ads. But above all, they buy newspapers and television stations. The widespread hope and expectation a few years ago was that, in the internet age, news controlled by billionaires would be replaced by news controlled by the people: social media would break their grip. But social media is instead dominated by stories the billionaire press generates. As their crucial role in promoting Nigel Farage, Brexit and Boris Johnson suggests, the newspapers are as powerful as ever.
They use this power not only to promote the billionaires’ favoured people and ideas, but also to shut down change before it happens. They deploy their attack dogs to take down anyone who challenges the programme. It is one thing to know this. It is another to experience it. A month ago I and six others published a report commissioned by the Labour party called Land for the Many. It proposed a set of policies that would be of immense benefit to the great majority of Britain’s people: ensuring that everyone has a good, affordable home; improving public amenities; shifting tax from ordinary people towards the immensely rich; protecting the living world; and enhancing public control over the decisions that affect our lives. We showed how the billionaires and other oligarchs could be put back in their boxes.
The result has been four extraordinary weeks of attacks in the Mail, Express, Sun, Times and Telegraph. Our contention that oligarchic power is rooted in the ownership and control of land has been amply vindicated by the response of oligarchic power.
Some of these reports peddle flat-out falsehoods. A week ago the Mail on Sunday claimed that our report recommends a capital gains tax on people’s main homes. This “spiteful raid that will horrify millions” ensures “we will soon be joining the likes of China, Cuba, Laos and Vietnam in becoming one of the world’s few Marxist-Leninist states”. This claim was picked up, and often embellished, by all the other rightwing papers. The policy proved, the Telegraph said, that “keeping a hard-left Labour party out of office is not an academic ideological ambition but a deadly serious matter for millions of voters”. Boris Johnson, Philip Hammond and several other senior Tories weighed in, attacking our “mad” proposal.
But we made no such recommendation. We considered the idea, listed its possible advantages and drawbacks, then specifically rejected it. As they say in these papers, you couldn’t make it up. But they have.
There were dozens of other falsehoods: apparently we have proposed a “garden tax”; we intend to add “an extra £374 a year on top of what the typical household pays in council tax” (no such figure is mentioned in our report); and inspectors will be sent to people’s homes to investigate their bedrooms.
Dozens of reports claim that our proposals are “plans” hatched by Jeremy Corbyn: “Jeremy Corbyn’s garden tax bombshell”; “Jeremy Corbyn is planning a huge tax raid”; “Corbyn’s war on homeowners”. Though Corbyn is aware of our report, he has played no role in it. What it contains are not his plans but our independent policy suggestions, none of which has yet been adopted by Labour. The press response gives me an inkling of what it must be like to walk in his shoes, as I see my name (and his) attached to lurid schemes I’ve never heard of, and associated with Robert Mugabe, Nicolás Maduro and the Soviet Union. Not one of the many journalists who wrote these articles has contacted any of the authors of the report. Yet they harvested lengthy quotes denouncing us from senior Conservatives.
The common factor in all these articles is their conflation of the interests of the ultra-rich with the interests of the middle classes. While our proposals take aim at the oligarchs, and would improve the prospects of the great majority, they are presented as an attack on ordinary people. Progressive taxation, the protection of public space and good homes for all should strike terror into your heart.
We’ve lodged a complaint to the press regulator, Ipso, about one of the worst examples, and we might make others. But to pursue them all would be a full-time job (we wrote the report unpaid, in our own time). The simple truth is that we are being outgunned by the brute power of billionaires. And the same can be said for democracy.
It is easy to see why political parties have become so cautious and why, as a result, the UK is stuck with outmoded institutions and policies, and succumbs to ever more extreme and regressive forms of taxation and control. Labour has so far held its nerve – and this makes its current leadership remarkable. It has not allowed itself to be bullied by the billionaire press.
The old threat has not abated – it has intensified. If a newspaper is owned by a billionaire, be suspicious of every word you read in it. Check its sources, question its claims. And withhold your support from any party that allows itself to be bullied or – worse – guided by their agenda. Stand in solidarity with those who resist it.
Wednesday, 17 April 2019
'Calling bullshit': the college class on how not to be duped by the news
Professors at the University of Washington say the course provides the most useful skill college can offer, writes James McWilliams in The Guardian
Academia being what it is (a place where everything is contested), there has been considerable debate over what exactly qualifies as bullshit. Most of that debate centers on the question of intention. Is bullshit considered bullshit if the deception was unintentionally presented? West and Bergstrom think that it is. They write, “Whether or not that usage is appropriate, we feel that the verb phrase calling bullshit definitely applies to falsehoods irrespective of the intentions of the author or speaker.”
The reason for the class’s existence comes down to a simple and somewhat alarming reality: even the most educated and savvy consumer of information is easily misled in today’s complex information ecosystem. Calling Bullshit is not dedicated to teaching students that Fox News promotes “fake news” or that National Enquirer headlines are fallacious. Instead, the class operates under the assumption that the structures through which today’s endless information comes to the consumer – algorithms, data graphics, info analytics, peer-reviewed publications – are in many ways as full of bullshit as the fake news we easily recognize as bogus. One scientist that West and Bergstrom cite in their syllabus goes so far as to say that, due to the fact that journals are prone to only publish positive results, “most published scientific results are probably false”.
Why smart people are more likely to believe fake news
A case in point is a 2016 article called Automated Inferences on Criminality Using Face Images. In it, the authors present an algorithm that can supposedly teach a machine to determine criminality with 90% accuracy based solely on a person’s headshot. Their core assumption is that, unlike humans, a machine is relatively free of emotion and bias. West and Bergstrom call bullshit, sending students to explore the sample of photos used to represent criminals in the experiment: all them are of convictedcriminals. The professors claim that “it seems less plausible to us that facial features are associated with criminal tendencies than it is that they are correlated with juries’ decisions to convict”. Conclusion: the algorithm is more correlated with facial characteristics that make a person convictable than a set of criminal inclinations.
By teaching ways to find misinformation in the venues many of us consider pristine realms of expertise – peer-reviewed journals such as Nature, reports by the National Institutes of Health, TED Talks – West and Bergstrom highlight the ultimate paradox of the information age: more and more knowledge is making us less and less reasonable.
‘Even the most educated and savvy consumer of information is easily misled in today’s complex information ecosystem.’ Photograph: Ritchie B Tongo/EPA
As we gather more data for mathematical models to better analyze, for example, the shrinking gap between elite male and female runners, we remain as prone as ever to misusing that data to achieve erroneous results. West and Bergstrom cite a 2004 Nature article in which the authors use linear regression to trace the closing gap between men and women’s running times, concluding that women will outpace men in the year 2156. To take down this kind of bullshit, the professors introduce the idea of reductio ad absurdum, which in this case would make the year 2636 far more interesting than 2156, as it’s then that, if the Nature study is right, “times of less than zero will be recorded”.
West and Bergstrom first offered the class in January of 2017 with modest expectations. “We would have been happy if a couple of our colleagues and friends would have said: ‘Cool idea, we should pass that along,’” West says. But within months the course had made national – and then international – news. “We have never guessed that it would get this kind of a response.”
To say that a nerve has been touched would be an understatement. After posting their website online, West and Bergstrom were swamped with emails and media requests from all over the world. Glowing press reports of the class’s ambitions contributed to the growing sense that something seismic in higher education was under way.
The professors were especially pleased by the interest shown among other universities – and even high schools – in modeling a course after their syllabus. Soon the Knight Foundation provided $50,000 for West and Bergstrom to help high school kids, librarians, journalists, and the general public become competent bullshit detectors.
In 1945, when Harvard University defined for the nation the role of higher education with its report on General Education in a Free Society, it stressed as its main goal “the continuance of the liberal and humane tradition”. The assumption, which now seems quaint, was that knowledge, which came from information, was the basis of character development.
Calling Bullshit, which provides the tools for every American (the lectures and readings are all online) to disrupt the foundation of even the most trusted source of information, reveals how profoundly difficult endless information has made the task of achieving that humane tradition. How the necessary shift from conveying wisdom to debunking it will play out is anyone’s guess, but if West and Bergstrom get their way – and it seems that they are – it will mean calling a lot of bullshit before we get to the business of becoming better citizens.
‘Our world is saturated with bullshit,’ the professors say. ‘This is our attempt to fight back.’ Photograph: Leland Bobbe/Getty Images/Image Source
To prepare themselves for future success in the American workforce, today’s college students are increasingly choosing courses in business, biomedical science, engineering, computer science, and various health-related disciplines.
These classes are bound to help undergraduates capitalize on the “college payoff”, but chances are good that none of them comes with a promise of this magnitude: “We will be astonished if these skills [learned in this course] do not turn out to be the most useful and most broadly applicable of those that you acquire during the course of your college education.”
Sound like bullshit? If so, there’s no better way to detect it than to consider the class that makes the claim. Calling Bullshit: Data Reasoning in a Digital World, designed and co-taught by the University of Washington professors Jevin West and Carl Bergstrom, begins with a premise so obvious we barely lend it the attention it deserves: “Our world is saturated with bullshit.” And so, every week for 12 weeks, the professors expose “one specific facet of bullshit”, doing so in the explicit spirit of resistance. “This is,” they explain, “our attempt to fight back.”
The problem of bullshit transcends political bounds, the class teaches. The proliferation of bullshit, according to West and Bergstrom, is “not a matter of left- or rightwing ideology; both sides of the aisle have proven themselves facile at creating and spreading bullshit. Rather (and at the risk of grandiose language) adequate bullshit detection strikes us as essential to the survival of liberal democracy.” They make it a point to stress that they began to work on the syllabus for this class back in 2015 – it’s not, they clarify, “a swipe at the Trump administration”.
There has been considerable debate over what exactly qualifies as bullshit
Academia being what it is (a place where everything is contested), there has been considerable debate over what exactly qualifies as bullshit. Most of that debate centers on the question of intention. Is bullshit still bullshit if the deception was unintentional? West and Bergstrom think that it is. They write, “Whether or not that usage is appropriate, we feel that the verb phrase calling bullshit definitely applies to falsehoods irrespective of the intentions of the author or speaker.”
The reason for the class’s existence comes down to a simple and somewhat alarming reality: even the most educated and savvy consumer of information is easily misled in today’s complex information ecosystem. Calling Bullshit is not dedicated to teaching students that Fox News promotes “fake news” or that National Enquirer headlines are fallacious. Instead, the class operates under the assumption that the structures through which today’s endless information comes to the consumer – algorithms, data graphics, info analytics, peer-reviewed publications – are in many ways as full of bullshit as the fake news we easily recognize as bogus. One scientist whom West and Bergstrom cite in their syllabus goes so far as to say that, because journals tend to publish only positive results, “most published scientific results are probably false”.
A case in point is a 2016 article called Automated Inferences on Criminality Using Face Images. In it, the authors present an algorithm that can supposedly teach a machine to determine criminality with 90% accuracy based solely on a person’s headshot. Their core assumption is that, unlike humans, a machine is relatively free of emotion and bias. West and Bergstrom call bullshit, sending students to explore the sample of photos used to represent criminals in the experiment: all of them are of convicted criminals. The professors claim that “it seems less plausible to us that facial features are associated with criminal tendencies than it is that they are correlated with juries’ decisions to convict”. Conclusion: the algorithm picks up the facial characteristics that make a person likely to be convicted, not a set of criminal inclinations.
By teaching ways to find misinformation in the venues many of us consider pristine realms of expertise – peer-reviewed journals such as Nature, reports by the National Institutes of Health, TED Talks – West and Bergstrom highlight the ultimate paradox of the information age: more and more knowledge is making us less and less reasonable.
‘Even the most educated and savvy consumer of information is easily misled in today’s complex information ecosystem.’ Photograph: Ritchie B Tongo/EPA
As we gather more data for mathematical models to better analyze, for example, the shrinking gap between elite male and female runners, we remain as prone as ever to misusing that data to achieve erroneous results. West and Bergstrom cite a 2004 Nature article in which the authors use linear regression to trace the closing gap between men’s and women’s running times, concluding that women will outpace men in the year 2156. To take down this kind of bullshit, the professors introduce the idea of reductio ad absurdum, which in this case would make the year 2636 far more interesting than 2156, as it’s then that, if the Nature study is right, “times of less than zero will be recorded”.
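The reductio is easy to reproduce. The sketch below fits a straight line to winning times and extrapolates far past the data; the years and times are illustrative stand-ins, not the actual dataset from the 2004 Nature article.

```python
# Fit a line to (hypothetical) women's 100 m winning times and extrapolate.

def linear_fit(xs, ys):
    # ordinary least squares for a line y = slope * x + intercept
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

years = [1928, 1948, 1968, 1988, 2008]
women_100m = [12.2, 11.9, 11.1, 10.5, 10.8]   # illustrative times, in seconds

slope, intercept = linear_fit(years, women_100m)

def predict(year):
    return slope * year + intercept

# Pushed to the year 2636, the same line that "proves" women will overtake
# men also predicts a sprint time below zero seconds -- the absurdity that
# exposes the extrapolation as bullshit.
print(predict(2636))
```

Any model that fits a downward-sloping line to bounded data will eventually produce this kind of impossibility; the question is only how far out you have to look.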
West and Bergstrom first offered the class in January of 2017 with modest expectations. “We would have been happy if a couple of our colleagues and friends would have said: ‘Cool idea, we should pass that along,’” West says. But within months the course had made national – and then international – news. “We never would have guessed that it would get this kind of a response.”
To say that a nerve has been touched would be an understatement. After posting their website online, West and Bergstrom were swamped with emails and media requests from all over the world. Glowing press reports of the class’s ambitions contributed to the growing sense that something seismic in higher education was under way.
The professors were especially pleased by the interest shown among other universities – and even high schools – in modeling a course after their syllabus. Soon the Knight Foundation provided $50,000 for West and Bergstrom to help high school kids, librarians, journalists, and the general public become competent bullshit detectors.
In 1945, when Harvard University defined for the nation the role of higher education with its report on General Education in a Free Society, it stressed as its main goal “the continuance of the liberal and humane tradition”. The assumption, which now seems quaint, was that knowledge, which came from information, was the basis of character development.
Calling Bullshit, which provides the tools for every American (the lectures and readings are all online) to disrupt the foundation of even the most trusted source of information, reveals how profoundly difficult endless information has made the task of achieving that humane tradition. How the necessary shift from conveying wisdom to debunking it will play out is anyone’s guess, but if West and Bergstrom get their way – and it seems that they will – it will mean calling a lot of bullshit before we get to the business of becoming better citizens.
Tuesday, 7 February 2017
The hi-tech war on science fraud
Stephen Buranyi in The Guardian
One morning last summer, a German psychologist named Mathias Kauff woke up to find that he had been reprimanded by a robot. In an email, a computer program named Statcheck informed him that a 2013 paper he had published on multiculturalism and prejudice appeared to contain a number of incorrect calculations – which the program had catalogued and then posted on the internet for anyone to see. The problems turned out to be minor – just a few rounding errors – but the experience left Kauff feeling rattled. “At first I was a bit frightened,” he said. “I felt a bit exposed.”
Kauff wasn’t alone. Statcheck had read some 50,000 published psychology papers and checked the maths behind every statistical result it encountered. In the space of 24 hours, virtually every academic active in the field in the past two decades had received an email from the program, informing them that their work had been reviewed. Nothing like this had ever been seen before: a massive, open, retroactive evaluation of scientific literature, conducted entirely by computer.
Statcheck’s method was relatively simple, more like the mathematical equivalent of a spellchecker than a thoughtful review, but some scientists saw it as a new form of scrutiny and suspicion, portending a future in which the objective authority of peer review would be undermined by unaccountable and uncredentialed critics.
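Statcheck itself is an R package; the sketch below is only an analogous stdlib-only consistency check, here for a reported z statistic, to show what a "mathematical spellchecker" amounts to: recompute the p-value implied by the reported test statistic and flag a mismatch.

```python
import math

def check_z_test(z_value, reported_p, tol=0.01, two_tailed=True):
    """Recompute the p-value implied by a z statistic and compare it
    to the p-value the paper reports."""
    # two-tailed p for a z statistic: 2 * P(Z > |z|) = erfc(|z| / sqrt(2))
    recomputed = math.erfc(abs(z_value) / math.sqrt(2))
    if not two_tailed:
        recomputed /= 2
    consistent = abs(recomputed - reported_p) <= tol
    return consistent, recomputed

# A paper reporting z = 1.96, p = .05 checks out; the same z with a
# reported p = .20 would be flagged as internally inconsistent.
print(check_z_test(1.96, 0.05))
print(check_z_test(1.96, 0.20))
```

Run over every statistic extracted from tens of thousands of papers, a check this mechanical is enough to surface the rounding errors and transcription slips that Statcheck catalogued.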
Susan Fiske, the former head of the Association for Psychological Science, wrote an op-ed accusing “self-appointed data police” of pioneering a new “form of harassment”. The German Psychological Society issued a statement condemning the unauthorised use of Statcheck. The intensity of the reaction suggested that many were afraid that the program was not just attributing mere statistical errors, but some impropriety, to the scientists.
The man behind all this controversy was a 25-year-old Dutch scientist named Chris Hartgerink, based at Tilburg University’s Meta-Research Center, which studies bias and error in science. Statcheck was the brainchild of Hartgerink’s colleague Michèle Nuijten, who had used the program to conduct a 2015 study that demonstrated that about half of all papers in psychology journals contained a statistical error. Nuijten’s study was written up in Nature as a valuable contribution to the growing literature acknowledging bias and error in science – but she had not published an inventory of the specific errors it had detected, or the authors who had committed them. The real flashpoint came months later, when Hartgerink modified Statcheck with some code of his own devising, which catalogued the individual errors and posted them online – sparking uproar across the scientific community.
Hartgerink is one of only a handful of researchers in the world who work full-time on the problem of scientific fraud – and he is perfectly happy to upset his peers. “The scientific system as we know it is pretty screwed up,” he told me last autumn. Sitting in the offices of the Meta-Research Center, which look out on to Tilburg’s grey, mid-century campus, he added: “I’ve known for years that I want to help improve it.” Hartgerink approaches his work with a professorial seriousness – his office is bare, except for a pile of statistics textbooks and an equation-filled whiteboard – and he is appealingly earnest about his aims. His conversations tend to rapidly ascend to great heights, as if they were balloons released from his hands – the simplest things soon become grand questions of ethics, or privacy, or the future of science.
“Statcheck is a good example of what is now possible,” he said. The top priority, for Hartgerink, is something much more grave than correcting simple statistical miscalculations. He is now proposing to deploy a similar program that will uncover fake or manipulated results – which he believes are far more prevalent than most scientists would like to admit.
When it comes to fraud – or in the more neutral terms he prefers, “scientific misconduct” – Hartgerink is aware that he is venturing into sensitive territory. “It is not something people enjoy talking about,” he told me, with a weary grin. Despite its professed commitment to self-correction, science is a discipline that relies mainly on a culture of mutual trust and good faith to stay clean. Talking about its faults can feel like a kind of heresy. In 1981, when a young Al Gore led a congressional inquiry into a spate of recent cases of scientific fraud in biomedicine, the historian Daniel Kevles observed that “for Gore and for many others, fraud in the biomedical sciences was akin to pederasty among priests”.
The comparison is apt. The exposure of fraud directly threatens the special claim science has on truth, which relies on the belief that its methods are purely rational and objective. As the congressmen warned scientists during the hearings, “each and every case of fraud serves to undermine the public’s trust in the research enterprise of our nation”.
But three decades later, scientists still have only the most crude estimates of how much fraud actually exists. The current accepted standard is a 2009 study by the Stanford researcher Daniele Fanelli that collated the results of 21 previous surveys given to scientists in various fields about research misconduct. The studies, which depended entirely on scientists honestly reporting their own misconduct, concluded that about 2% of scientists had falsified data at some point in their career.
If Fanelli’s estimate is correct, it seems likely that thousands of scientists are getting away with misconduct each year. Fraud – including outright fabrication, plagiarism and self-plagiarism – accounts for the majority of retracted scientific articles. But, according to RetractionWatch, which catalogues papers that have been withdrawn from the scientific literature, only 684 were retracted in 2015, while more than 800,000 new papers were published. If even just a few of the suggested 2% of scientific fraudsters – which, relying on self-reporting, is itself probably a conservative estimate – are active in any given year, the vast majority are going totally undetected. “Reviewers and editors, other gatekeepers – they’re not looking for potential problems,” Hartgerink said.
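The scale of the detection gap follows from simple arithmetic on the figures quoted above. Note the 2% figure is a career-level self-report rate crudely applied per year here, so this is an order-of-magnitude illustration, not an estimate.

```python
# Back-of-envelope arithmetic using the figures cited in the text.
papers_per_year = 800_000     # new papers published in 2015
retractions_per_year = 684    # retractions logged by RetractionWatch in 2015
suspect_share = 0.02          # Fanelli's self-reported falsification rate

suspect_papers = papers_per_year * suspect_share
detected_fraction = retractions_per_year / suspect_papers

print(suspect_papers)        # 16000.0
print(detected_fraction)     # well under one in twenty
```

Even on these crude assumptions, retractions account for only a few percent of the papers the self-report rate would implicate, which is the gap Hartgerink is pointing at.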
But if none of the traditional authorities in science are going to address the problem, Hartgerink believes that there is another way. If a program similar to Statcheck can be trained to detect the traces of manipulated data, and then make those results public, the scientific community can decide for itself whether a given study should still be regarded as trustworthy.
Hartgerink’s university, which sits at the western edge of Tilburg, a small, quiet city in the southern Netherlands, seems an unlikely place to try to close this hole in the scientific process. The university is best known for its economics and business courses and does not have traditional lab facilities. But Tilburg was also the site of one of the biggest scientific scandals in living memory – and no one knows better than Hartgerink and his colleagues just how devastating individual cases of fraud can be.
In September 2010, the School of Social and Behavioral Science at Tilburg University appointed Diederik Stapel, a promising young social psychologist, as its new dean. Stapel was already popular with students for his warm manner, and with the faculty for his easy command of scientific literature and his enthusiasm for collaboration. He would often offer to help his colleagues, and sometimes even his students, by conducting surveys and gathering data for them.
As dean, Stapel appeared to reward his colleagues’ faith in him almost immediately. In April 2011 he published a paper in Science, the first study the small university had ever landed in that prestigious journal. Stapel’s research focused on what psychologists call “priming”: the idea that small stimuli can affect our behaviour in unnoticed but significant ways. “Could being discriminated against depend on such seemingly trivial matters as garbage on the streets?” Stapel’s paper in Science asked. He proceeded to show that white commuters at the Utrecht railway station tended to sit further away from visible minorities when the station was dirty. Similarly, Stapel found that white people were more likely to give negative answers on a quiz about minorities if they were interviewed on a dirty street, rather than a clean one.
Stapel had a knack for devising and executing such clever studies, cutting through messy problems to extract clean data. Since becoming a professor a decade earlier, he had published more than 100 papers, showing, among other things, that beauty product advertisements, regardless of context, prompted women to think about themselves more negatively, and that judges who had been primed to think about concepts of impartial justice were less likely to make racially motivated decisions.
His findings regularly reached the public through the media. The idea that huge, intractable social issues such as sexism and racism could be affected in such simple ways had a powerful intuitive appeal, and hinted at the possibility of equally simple, elegant solutions. If anything united Stapel’s diverse interests, it was this Gladwellian bent. His studies were often featured in the popular press, including the Los Angeles Times and New York Times, and he was a regular guest on Dutch television programmes.
But as Stapel’s reputation skyrocketed, a small group of colleagues and students began to view him with suspicion. “It was too good to be true,” a professor who was working at Tilburg at the time told me. (The professor, whom I will call Joseph Robin, asked to remain anonymous so that he could frankly discuss his role in exposing Stapel.) “All of his experiments worked. That just doesn’t happen.”
A student of Stapel’s had mentioned to Robin in 2010 that some of Stapel’s data looked strange, so that autumn, shortly after Stapel was made Dean, Robin proposed a collaboration with him, hoping to see his methods first-hand. Stapel agreed, and the data he returned a few months later, according to Robin, “looked crazy. It was internally inconsistent in weird ways; completely unlike any real data I had ever seen.” Meanwhile, as the student helped get hold of more datasets from Stapel’s former students and collaborators, the evidence mounted: more “weird data”, and identical sets of numbers copied directly from one study to another.
In August 2011, the whistleblowers took their findings to the head of the department, Marcel Zeelenberg, who confronted Stapel with the evidence. At first, Stapel denied the charges, but just days later he admitted what his accusers suspected: he had never interviewed any commuters at the railway station, no women had been shown beauty advertisements and no judges had been surveyed about impartial justice and racism.
Stapel hadn’t just tinkered with numbers, he had made most of them up entirely, producing entire datasets at home in his kitchen after his wife and children had gone to bed. His method was an inversion of the proper scientific method: he started by deciding what result he wanted and then worked backwards, filling out the individual “data” points he was supposed to be collecting.
On 7 September 2011, the university revealed that Stapel had been suspended. The media initially speculated that there might have been an issue with his latest study – announced just days earlier, showing that meat-eaters were more selfish and less sociable – but the problem went much deeper. Stapel’s students and colleagues were about to learn that his enviable skill with data was, in fact, a sham, and his golden reputation, as well as nearly a decade of results that they had used in their own work, were built on lies.
Chris Hartgerink was studying late at the library when he heard the news. The extent of Stapel’s fraud wasn’t clear by then, but it was big. Hartgerink, who was then an undergraduate in the Tilburg psychology programme, felt a sudden disorientation, a sense that something solid and integral had been lost. Stapel had been a mentor to him, hiring him as a research assistant and giving him constant encouragement. “This is a guy who inspired me to actually become enthusiastic about research,” Hartgerink told me. “When that reason drops out, what remains, you know?”
Hartgerink wasn’t alone; the whole university was stunned. “It was a really difficult time,” said one student who had helped expose Stapel. “You saw these people on a daily basis who were so proud of their work, and you know it’s just based on a lie.” Even after Stapel resigned, the media coverage was relentless. Reporters roamed the campus – first from the Dutch press, and then, as the story got bigger, from all over the world.
On 9 September, just two days after Stapel was suspended, the university convened an ad-hoc investigative committee of current and former faculty. To help determine the true extent of Stapel’s fraud, the committee turned to Marcel van Assen, a statistician and psychologist in the department. At the time, Van Assen was growing bored with his current research, and the idea of investigating the former dean sounded like fun to him. Van Assen had never much liked Stapel, believing that he relied more on the force of his personality than reason when running the department. “Some people believe him charismatic,” Van Assen told me. “I am less sensitive to it.”
Van Assen – who is 44, tall and rangy, with a mop of greying, curly hair – approaches his work with relentless, unsentimental practicality. When speaking, he maintains an amused, half-smile, as if he is joking. He once told me that to fix the problems in psychology, it might be simpler to toss out 150 years of research and start again; I’m still not sure whether or not he was serious.
To prove misconduct, Van Assen said, you must be a pitbull: biting deeper and deeper, clamping down not just on the papers, but the datasets behind them, the research methods, the collaborators – using everything available to bring down the target. He spent a year breaking down the 45 studies Stapel produced at Tilburg and cataloguing their individual aberrations, noting where the effect size – a standard measure of the difference between the two groups in an experiment – seemed suspiciously large, where sequences of numbers were copied, where variables were too closely related, or where variables that should have moved in tandem instead appeared adrift.
The committee released its final report in October 2012 and, based largely on its conclusions, 55 of Stapel’s publications were officially retracted by the journals that had published them. Stapel also returned his PhD to the University of Amsterdam. He is, by any measure, one of the biggest scientific frauds of all time. (RetractionWatch has him third on their all-time retraction leaderboard.) The committee also had harsh words for Stapel’s colleagues, concluding that “from the bottom to the top, there was a general neglect of fundamental scientific standards”. “It was a real blow to the faculty,” Jacques Hagenaars, a former professor of methodology at Tilburg, who served on the committee, told me.
By extending some of the blame to the methods and attitudes of the scientists around Stapel, the committee situated the case within a larger problem that was attracting attention at the time, which has come to be known as the “replication crisis”. For the past decade, the scientific community has been grappling with the discovery that many published results cannot be reproduced independently by other scientists – in spite of the traditional safeguards of publishing and peer-review – because the original studies were marred by some combination of unchecked bias and human error.
After the committee disbanded, Van Assen found himself fascinated by the way science is susceptible to error, bias, and outright fraud. Investigating Stapel had been exciting, and he had no interest in returning to his old work. Van Assen had also found a like mind, a new professor at Tilburg named Jelte Wicherts, who had a long history working on bias in science and who shared his attitude of upbeat cynicism about the problems in their field. “We simply agree, there are findings out there that cannot be trusted,” Van Assen said. They began planning a new sort of research group: one that would investigate the very practice of science.
Van Assen does not like assigning Stapel too much credit for the creation of the Meta-Research Center, which hired its first students in late 2012, but there is an undeniable symmetry: he and Wicherts have created, in Stapel’s old department, a platform to investigate the sort of “sloppy science” and misconduct that very department had been condemned for.
Hartgerink joined the group in 2013. “For many people, certainly for me, Stapel launched an existential crisis in science,” he said. After Stapel’s fraud was exposed, Hartgerink struggled to find “what could be trusted” in his chosen field. He began to notice how easy it was for scientists to subjectively interpret data – or manipulate it. For a brief time he considered abandoning a future in research and joining the police.
There are probably several very famous papers that have fake data, and very famous people who have done it
Van Assen, who Hartgerink met through a statistics course, helped put him on another path. Hartgerink learned that a growing number of scientists in every field were coming to agree that the most urgent task for their profession was to establish what results and methods could still be trusted – and that many of these people had begun to investigate the unpredictable human factors that, knowingly or not, knocked science off its course. What was more, he could be a part of it. Van Assen offered Hartgerink a place in his yet-unnamed research group. All of the current projects were on errors or general bias, but Van Assen proposed they go out and work closer to the fringes, developing methods that could detect fake data in published scientific literature.
“I’m not normally an expressive person,” Hartgerink told me. “But I said: ‘Hell, yes. Let’s do that.’”
Hartgerink and Van Assen believe not only that most scientific fraud goes undetected, but that the true rate of misconduct is far higher than 2%. “We cannot trust self-reports,” Van Assen told me. “If you ask people, ‘At the conference, did you cheat on your fiancee?’ – people will very likely not admit this.”
Uri Simonsohn, a psychology professor at University of Pennsylvania’s Wharton School who gained notoriety as a “data vigilante” for exposing two serious cases of fraud in his field in 2012, believes that as much as 5% of all published research contains fraudulent data. “It’s not only in the periphery, it’s not only in the journals people don’t read,” he told me. “There are probably several very famous papers that have fake data, and very famous people who have done it.”
But as long as it remains undiscovered, there is a tendency for scientists to dismiss fraud in favour of more widely documented – and less seedy – issues. Even Arturo Casadevall, an American microbiologist who has published extensively on the rate, distribution, and detection of fraud in science, told me that despite his personal interest in the topic, my time would be better served investigating the broader issues driving the replication crisis. Fraud, he said, was “probably a relatively minor problem in terms of the overall level of science”.
This way of thinking goes back at least as far as scientists have been grappling with high-profile cases of misconduct. In 1983, Peter Medawar, the British immunologist and Nobel laureate, wrote in the London Review of Books: “The number of dishonest scientists cannot, of course, be known, but even if they were common enough to justify scary talk of ‘tips of icebergs’, they have not been so numerous as to prevent science’s having become the most successful enterprise (in terms of the fulfilment of declared ambitions) that human beings have ever engaged upon.”
From this perspective, as long as science continues doing what it does well – as long as genes are sequenced and chemicals classified and diseases reliably identified and treated – then fraud will remain a minor concern. But while this may be true in the long run, it may also be dangerously complacent. Furthermore, scientific misconduct can cause serious harm, as, for instance, in the case of patients treated by Paolo Macchiarini, a doctor at Karolinska Institute in Sweden who allegedly misrepresented the effectiveness of an experimental surgical procedure he had developed. Macchiarini is currently being investigated by a Swedish prosecutor after several of the patients who received the procedure later died.
Even in the more mundane business of day-to-day research, scientists are constantly building on past work, relying on its solidity to underpin their own theories. If misconduct really is as widespread as Hartgerink and Van Assen think, then false results are strewn across scientific literature, like unexploded mines that threaten any new structure built over them. At the very least, if science is truly invested in its ideal of self-correction, it seems essential to know the extent of the problem.
But there is little motivation within the scientific community to ramp up efforts to detect fraud. Part of this has to do with the way the field is organised. Science isn’t a traditional hierarchy, but a loose confederation of research groups, institutions, and professional organisations. Universities are clearly central to the scientific enterprise, but they are not in the business of evaluating scientific results, and as long as fraud doesn’t become public they have little incentive to go after it. There is also the widespread but questionable perception within the scientific community that measures are already in place to preclude fraud. When Gore and his fellow congressmen held their hearings 35 years ago, witnesses routinely insisted that science had a variety of self-correcting mechanisms, such as peer-review and replication. But, as the science journalists William Broad and Nicholas Wade pointed out at the time, the vast majority of cases of fraud are actually exposed by whistleblowers, and that holds true to this day.
And so the enormous task of keeping science honest is left to individual scientists in the hope that they will police themselves, and each other. “Not only is it not sustainable,” said Simonsohn, “it doesn’t even work. You only catch the most obvious fakers, and only a small share of them.” There is also the problem of relying on whistleblowers, who face the thankless and emotionally draining prospect of accusing their own colleagues of fraud. (“It’s like saying someone is a paedophile,” one of the students at Tilburg told me.) Neither Simonsohn nor any of the Tilburg whistleblowers I interviewed said they would come forward again. “There is no way we as a field can deal with fraud like this,” the student said. “There has to be a better way.”
In the winter of 2013, soon after Hartgerink began working with Van Assen, they began to investigate another social psychology researcher who they noticed was reporting suspiciously large effect sizes, one of the “tells” that doomed Stapel. When they requested that the researcher provide additional data to verify her results, she stalled – claiming that she was undergoing treatment for stomach cancer. Months later, she informed them that she had deleted all the data in question. But instead of contacting the researcher’s co-authors for copies of the data, or digging deeper into her previous work, they opted to let it go.
They had been thoroughly stonewalled, and they knew that trying to prosecute individual cases of fraud – the “pitbull” approach that Van Assen had taken when investigating Stapel – would never expose more than a handful of dishonest scientists. What they needed was a way to analyse vast quantities of data in search of signs of manipulation or error, which could then be flagged for public inspection without necessarily accusing the individual scientists of deliberate misconduct. After all, putting a fence around a minefield has many of the same benefits as clearing it, with none of the tricky business of digging up the mines.
As Van Assen had earlier argued in a letter to the journal Nature, the traditional approach to investigating other scientists was needlessly fraught – since it combined the messy task of proving that a researcher had intended to commit fraud with a much simpler technical problem: whether the data underlying their results was valid. The two issues, he argued, could be separated.
Scientists can commit fraud in a multitude of ways. In 1974, the American immunologist William Summerlin famously tried to pass off a patch of mouse skin darkened with permanent marker pen as a successful interspecies skin graft. But most instances are more mundane: the majority of fraud cases in recent years have emerged from scientists either falsifying images – deliberately mislabelling scans and micrographs – or fabricating or altering their recorded data. And scientists have used statistical tests to scrutinise each other’s data since at least the 1930s, when Ronald Fisher, the father of biostatistics, used a basic chi-squared test to suggest that Gregor Mendel, the father of genetics, had cherrypicked some of his data.
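Fisher’s complaint can be reproduced in a few lines of arithmetic. As a minimal sketch, the Python below applies a chi-squared goodness-of-fit test to Mendel’s published counts for round versus wrinkled seeds (5,474 and 1,850), tested against the predicted 3:1 ratio:

```python
import math

def chi_squared_3_to_1(observed_a, observed_b):
    """Chi-squared goodness-of-fit against a 3:1 Mendelian ratio (1 degree of freedom)."""
    total = observed_a + observed_b
    expected_a, expected_b = 0.75 * total, 0.25 * total
    chi2 = ((observed_a - expected_a) ** 2 / expected_a
            + (observed_b - expected_b) ** 2 / expected_b)
    # Survival function of a chi-squared variable with 1 df: P(X > x) = erfc(sqrt(x / 2))
    p_value = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p_value

# Mendel's reported counts for round vs wrinkled pea seeds
chi2, p = chi_squared_3_to_1(5474, 1850)
print(f"chi2 = {chi2:.3f}, p = {p:.2f}")  # chi2 ≈ 0.263, p ≈ 0.61
```

For this single experiment the fit is unremarkable; Fisher’s argument was that, aggregated across all of Mendel’s experiments, the fits were collectively too good to be plausible chance.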
In 2014, Hartgerink and Van Assen started to sort through the variety of tests used in ad-hoc investigations of fraud in order to determine which were powerful and versatile enough to reliably detect statistical anomalies across a wide range of fields. After narrowing down a promising arsenal of tests, they hit a tougher problem. To prove that their methods work, Hartgerink and Van Assen have to show they can reliably distinguish false from real data. But research misconduct is relatively uncharted territory. Only a handful of cases come to light each year – a dismally small sample size – so it’s hard to get an idea of what constitutes “normal” fake data, what its features and particular quirks are. Hartgerink devised a workaround, challenging other academics to produce simple fake datasets, a sort of game to see if they could come up with data that looked real enough to fool the statistical tests, with an Amazon gift card as a prize.
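To give a flavour of what such tests look for (this is a classic anomaly check used in misconduct investigations generally, not necessarily one from Hartgerink and Van Assen’s own arsenal): the terminal digits of genuinely measured values tend to be close to uniformly distributed, while invented numbers often over-use “round” endings. A minimal Python sketch:

```python
from collections import Counter

def terminal_digit_chi2(values):
    """Chi-squared statistic testing whether the last digits of a list of
    integers are uniformly distributed over 0-9 (9 degrees of freedom)."""
    counts = Counter(v % 10 for v in values)
    n = len(values)
    expected = n / 10
    return sum((counts.get(d, 0) - expected) ** 2 / expected for d in range(10))

# Invented data often ends only in 0 or 5; uniform last digits score near zero
fabricated = [120, 125, 130, 135, 140, 145, 150, 155, 160, 165] * 5
print(terminal_digit_chi2(fabricated))        # 200.0, far above the ~16.9
                                              # critical value for 9 df at p = .05
print(terminal_digit_chi2(list(range(100))))  # 0.0, perfectly uniform
```

A single statistic like this proves nothing on its own, which is exactly why Hartgerink and Van Assen needed to establish which tests were reliable before deploying them at scale.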
By 2015, the Meta-Research group had expanded to seven researchers, and Hartgerink was helping his colleagues with a separate error-detection project that would become Statcheck. He was pleased with the study that Michèle Nuijten published that autumn, which used Statcheck to show that something like half of all published psychology papers appeared to contain calculation errors, but as he tinkered with the program and the database of psychology papers they had assembled, he found himself increasingly uneasy about what he saw as the closed and secretive culture of science.
When scientists publish papers in journals, they release only the data they wish to share. Critical evaluation of the results by other scientists – peer review – takes place in secret and the discussion is not released publicly. Once a paper is published, all comments, concerns, and retractions must go through the editors of the journal before they reach the public. There are good, or at least defensible, arguments for all of this. But Hartgerink is part of an increasingly vocal group that believes that the closed nature of science, with authority resting in the hands of specific gatekeepers – journals, universities, and funders – is harmful, and that a more open approach would better serve the scientific method.
Hartgerink realised that with a few adjustments to Statcheck, he could make public all the statistical errors it had exposed. He hoped that this would shift the conversation away from talk of broad, representative results – such as the proportion of studies that contained errors – and towards a discussion of the individual papers and their mistakes. The critique would be complete, exhaustive, and in the public domain, where the authors could address it; everyone else could draw their own conclusions.
In August 2016, with his colleagues’ blessing, he posted the full set of Statcheck results publicly on the anonymous science message board PubPeer. At first there was praise on Twitter and science blogs, which skew young and progressive – and then, condemnations, largely from older scientists, who feared an intrusive new world of public blaming and shaming. In December, after everyone had weighed in, Nature, a bellwether of mainstream scientific thought for more than a century, cautiously supported a future of automated scientific scrutiny in an editorial that addressed the Statcheck controversy without explicitly naming it. Its conclusion seemed to endorse Hartgerink’s approach, that “criticism itself must be embraced”.
In the same month, the Office of Research Integrity (ORI), an obscure branch of the US National Institutes of Health, awarded Hartgerink a small grant – about $100,000 – to pursue new projects investigating misconduct, including the completion of his program to detect fabricated data. For Hartgerink and Van Assen, who had not received any outside funding for their research, it felt like vindication.
Yet change in science comes slowly, if at all, Van Assen reminded me. The current push for more open and accountable science, of which they are a part, has “only really existed since 2011”, he said. It has captured an outsize share of the science media’s attention, and set laudable goals, but it remains a small, fragile outpost of true believers within the vast scientific enterprise. “I have the impression that many scientists in this group think that things are going to change,” Van Assen said. “Chris, Michèle, they are quite optimistic. I think that’s bias. They talk to each other all the time.”
When I asked Hartgerink what it would take to totally eradicate fraud from the scientific process, he suggested that scientists make all of their data public; register the intentions of their work before conducting experiments, to prevent post-hoc reasoning; and have their results checked by algorithms during and after the publishing process.
To any working scientist – currently enjoying nearly unprecedented privacy and freedom for a profession that is in large part publicly funded – Hartgerink’s vision would be an unimaginably draconian scientific surveillance state. For his part, Hartgerink believes the preservation of public trust in science requires nothing less – but in the meantime, he intends to pursue this ideal without the explicit consent of the entire scientific community, by investigating published papers and making the results available to the public.
Even scientists who have done similar work uncovering fraud have reservations about Van Assen and Hartgerink’s approach. In January, I met with Dr John Carlisle and Dr Steve Yentis at an anaesthetics conference that took place in London, near Westminster Abbey. In 2012, Yentis, then the editor of the journal Anaesthesia, asked Carlisle to investigate data from a researcher named Yoshitaka Fujii, who the community suspected was falsifying clinical trials. In time, Carlisle demonstrated that 168 of Fujii’s trials contained dubious statistical results. Yentis and the other journal editors contacted Fujii’s employers, who launched a full investigation. Fujii currently sits at the top of the RetractionWatch leaderboard with 183 retracted studies. By sheer numbers he is the biggest scientific fraud in recorded history.
Carlisle, who, like Van Assen, found that he enjoyed the detective work (“it takes a certain personality, or personality disorder”, he said), showed me his latest project, a larger-scale analysis of the rate of suspicious clinical trial results across multiple fields of medicine. He and Yentis discussed their desire to automate these statistical tests – which, in theory, would look a lot like what Hartgerink and Van Assen are developing – but they have no plans to make the results public; instead they envision that journal editors might use the tests to screen incoming articles for signs of possible misconduct.
“It is an incredibly difficult balance,” said Yentis. “You’re saying to a person, ‘I think you’re a liar.’ We have to decide how many fraudulent papers are worth one false accusation. How many is too many?”
With the introduction of programs such as Statcheck, and the growing desire to conduct as much of the critical conversation as possible in public view, Yentis expects a stormy reckoning with those very questions. “That’s a big debate that hasn’t happened,” he said, “and it’s because we simply haven’t had the tools.”
For all their dispassionate distance, when Hartgerink and Van Assen say that they are simply identifying data that “cannot be trusted”, they mean flagging papers and authors that fail their tests. And, as they learned with Statcheck, for many scientists, that will be indistinguishable from an accusation of deceit. When Hartgerink eventually deploys his fraud-detection program, it will flag up some very real instances of fraud, as well as many unintentional errors and false positives – and present all of the results in a messy pile for the scientific community to sort out. Simonsohn called it “a bit like leaving a loaded gun on a playground”.
When I put this question to Van Assen, he told me it was certain that some scientists would be angered or offended by having their work and its possible errors exposed and discussed. He didn’t want to make anyone feel bad, he said – but he didn’t feel bad about it. Science should be about transparency, criticism, and truth.
“The problem, also with scientists, is that people think they are important, they think they have a special purpose in life,” he said. “Maybe you too. But that’s a human bias. I think when you look at it objectively, individuals don’t matter at all. We should only look at what is good for science and society.”
Statcheck’s method was relatively simple, more like the mathematical equivalent of a spellchecker than a thoughtful review, but some scientists saw it as a new form of scrutiny and suspicion, portending a future in which the objective authority of peer review would be undermined by unaccountable and uncredentialed critics.
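The actual Statcheck is an R package that parses APA-formatted test reports (t, F, r, chi-squared, z) and recomputes each p-value from the reported statistic and degrees of freedom. The toy Python sketch below, my own illustration restricted to z-tests so it needs only the standard library, shows the spellchecker-like logic:

```python
import math
import re

# Matches APA-style z-test reports such as "z = 2.20, p = .028"
Z_REPORT = re.compile(r"z\s*=\s*(-?\d+\.?\d*)\s*,\s*p\s*=\s*(\.\d+)")

def check_z_reports(text, tolerance=0.005):
    """Recompute the two-tailed p for each reported z and flag mismatches."""
    findings = []
    for match in Z_REPORT.finditer(text):
        z, reported_p = float(match.group(1)), float(match.group(2))
        recomputed_p = math.erfc(abs(z) / math.sqrt(2))  # two-tailed normal p
        consistent = abs(recomputed_p - reported_p) <= tolerance
        findings.append((match.group(0), round(recomputed_p, 3), consistent))
    return findings

print(check_z_reports("We found an effect, z = 2.20, p = .028."))  # consistent
print(check_z_reports("A strong effect, z = 1.20, p = .012."))     # flagged
```

Like a spellchecker, this catches inconsistencies mechanically, and says nothing about whether a mismatch is a typo, a rounding slip, or something worse.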
Susan Fiske, the former head of the Association for Psychological Science, wrote an op-ed accusing “self-appointed data police” of pioneering a new “form of harassment”. The German Psychological Society issued a statement condemning the unauthorised use of Statcheck. The intensity of the reaction suggested that many were afraid that the program was not just attributing mere statistical errors, but some impropriety, to the scientists.
The man behind all this controversy was a 25-year-old Dutch scientist named Chris Hartgerink, based at Tilburg University’s Meta-Research Center, which studies bias and error in science. Statcheck was the brainchild of Hartgerink’s colleague Michèle Nuijten, who had used the program to conduct a 2015 study that demonstrated that about half of all papers in psychology journals contained a statistical error. Nuijten’s study was written up in Nature as a valuable contribution to the growing literature acknowledging bias and error in science – but she had not published an inventory of the specific errors it had detected, or the authors who had committed them. The real flashpoint came months later, when Hartgerink modified Statcheck with some code of his own devising, which catalogued the individual errors and posted them online – sparking uproar across the scientific community.
Hartgerink is one of only a handful of researchers in the world who work full-time on the problem of scientific fraud – and he is perfectly happy to upset his peers. “The scientific system as we know it is pretty screwed up,” he told me last autumn. Sitting in the offices of the Meta-Research Center, which look out on to Tilburg’s grey, mid-century campus, he added: “I’ve known for years that I want to help improve it.” Hartgerink approaches his work with a professorial seriousness – his office is bare, except for a pile of statistics textbooks and an equation-filled whiteboard – and he is appealingly earnest about his aims. His conversations tend to rapidly ascend to great heights, as if they were balloons released from his hands – the simplest things soon become grand questions of ethics, or privacy, or the future of science.
“Statcheck is a good example of what is now possible,” he said. The top priority, for Hartgerink, is something much more grave than correcting simple statistical miscalculations. He is now proposing to deploy a similar program that will uncover fake or manipulated results – which he believes are far more prevalent than most scientists would like to admit.
When it comes to fraud – or in the more neutral terms he prefers, “scientific misconduct” – Hartgerink is aware that he is venturing into sensitive territory. “It is not something people enjoy talking about,” he told me, with a weary grin. Despite its professed commitment to self-correction, science is a discipline that relies mainly on a culture of mutual trust and good faith to stay clean. Talking about its faults can feel like a kind of heresy. In 1981, when a young Al Gore led a congressional inquiry into a spate of recent cases of scientific fraud in biomedicine, the historian Daniel Kevles observed that “for Gore and for many others, fraud in the biomedical sciences was akin to pederasty among priests”.
The comparison is apt. The exposure of fraud directly threatens the special claim science has on truth, which relies on the belief that its methods are purely rational and objective. As the congressmen warned scientists during the hearings, “each and every case of fraud serves to undermine the public’s trust in the research enterprise of our nation”.
But three decades later, scientists still have only the most crude estimates of how much fraud actually exists. The current accepted standard is a 2009 study by the Stanford researcher Daniele Fanelli that collated the results of 21 previous surveys given to scientists in various fields about research misconduct. The studies, which depended entirely on scientists honestly reporting their own misconduct, concluded that about 2% of scientists had falsified data at some point in their career.
If Fanelli’s estimate is correct, it seems likely that thousands of scientists are getting away with misconduct each year. Fraud – including outright fabrication, plagiarism and self-plagiarism – accounts for the majority of retracted scientific articles. But, according to RetractionWatch, which catalogues papers that have been withdrawn from the scientific literature, only 684 were retracted in 2015, while more than 800,000 new papers were published. If even just a few of the suggested 2% of scientific fraudsters – which, relying on self-reporting, is itself probably a conservative estimate – are active in any given year, the vast majority are going totally undetected. “Reviewers and editors, other gatekeepers – they’re not looking for potential problems,” Hartgerink said.
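The arithmetic of that gap is stark. A back-of-envelope sketch, using the figures above; the assumed shares of fraudulent papers are purely illustrative, and since retractions also cover non-fraud reasons, the computed detection rates are upper bounds:

```python
# Figures from the text: retractions vs new papers in 2015
papers_2015 = 800_000
retractions_2015 = 684

# Illustrative assumptions for the share of papers containing fraudulent data
for assumed_fraud_share in (0.005, 0.01, 0.02):
    fraudulent_papers = papers_2015 * assumed_fraud_share
    detection_rate = retractions_2015 / fraudulent_papers
    print(f"{assumed_fraud_share:.1%} fraudulent -> "
          f"{fraudulent_papers:,.0f} papers, "
          f"detection rate at most {detection_rate:.1%}")
```

Even under the most conservative of these assumptions, well over 80% of fraudulent papers would be going undetected in a given year.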
But if none of the traditional authorities in science are going to address the problem, Hartgerink believes that there is another way. If a program similar to Statcheck can be trained to detect the traces of manipulated data, and then make those results public, the scientific community can decide for itself whether a given study should still be regarded as trustworthy.
Hartgerink’s university, which sits at the western edge of Tilburg, a small, quiet city in the southern Netherlands, seems an unlikely place to try and correct this hole in the scientific process. The university is best known for its economics and business courses and does not have traditional lab facilities. But Tilburg was also the site of one of the biggest scientific scandals in living memory – and no one knows better than Hartgerink and his colleagues just how devastating individual cases of fraud can be.
In September 2010, the School of Social and Behavioral Science at Tilburg University appointed Diederik Stapel, a promising young social psychologist, as its new dean. Stapel was already popular with students for his warm manner, and with the faculty for his easy command of scientific literature and his enthusiasm for collaboration. He would often offer to help his colleagues, and sometimes even his students, by conducting surveys and gathering data for them.
As dean, Stapel appeared to reward his colleagues’ faith in him almost immediately. In April 2011 he published a paper in Science, the first study the small university had ever landed in that prestigious journal. Stapel’s research focused on what psychologists call “priming”: the idea that small stimuli can affect our behaviour in unnoticed but significant ways. “Could being discriminated against depend on such seemingly trivial matters as garbage on the streets?” Stapel’s paper in Science asked. He proceeded to show that white commuters at the Utrecht railway station tended to sit further away from visible minorities when the station was dirty. Similarly, Stapel found that white people were more likely to give negative answers on a quiz about minorities if they were interviewed on a dirty street, rather than a clean one.
Stapel had a knack for devising and executing such clever studies, cutting through messy problems to extract clean data. Since becoming a professor a decade earlier, he had published more than 100 papers, showing, among other things, that beauty product advertisements, regardless of context, prompted women to think about themselves more negatively, and that judges who had been primed to think about concepts of impartial justice were less likely to make racially motivated decisions.
His findings regularly reached the public through the media. The idea that huge, intractable social issues such as sexism and racism could be affected in such simple ways had a powerful intuitive appeal, and hinted at the possibility of equally simple, elegant solutions. If anything united Stapel’s diverse interests, it was this Gladwellian bent. His studies were often featured in the popular press, including the Los Angeles Times and New York Times, and he was a regular guest on Dutch television programmes.
But as Stapel’s reputation skyrocketed, a small group of colleagues and students began to view him with suspicion. “It was too good to be true,” a professor who was working at Tilburg at the time told me. (The professor, whom I will call Joseph Robin, asked to remain anonymous so that he could frankly discuss his role in exposing Stapel.) “All of his experiments worked. That just doesn’t happen.”
A student of Stapel’s had mentioned to Robin in 2010 that some of Stapel’s data looked strange, so that autumn, shortly after Stapel was made dean, Robin proposed a collaboration with him, hoping to see his methods first-hand. Stapel agreed, and the data he returned a few months later, according to Robin, “looked crazy. It was internally inconsistent in weird ways; completely unlike any real data I had ever seen.” Meanwhile, as the student helped get hold of more datasets from Stapel’s former students and collaborators, the evidence mounted: more “weird data”, and identical sets of numbers copied directly from one study to another.
In August 2011, the whistleblowers took their findings to the head of the department, Marcel Zeelenberg, who confronted Stapel with the evidence. At first, Stapel denied the charges, but just days later he admitted what his accusers suspected: he had never interviewed any commuters at the railway station, no women had been shown beauty advertisements and no judges had been surveyed about impartial justice and racism.
Stapel hadn’t just tinkered with numbers, he had made most of them up entirely, producing entire datasets at home in his kitchen after his wife and children had gone to bed. His method was an inversion of the proper scientific method: he started by deciding what result he wanted and then worked backwards, filling out the individual “data” points he was supposed to be collecting.
On 7 September 2011, the university revealed that Stapel had been suspended. The media initially speculated that there might have been an issue with his latest study – announced just days earlier, showing that meat-eaters were more selfish and less sociable – but the problem went much deeper. Stapel’s students and colleagues were about to learn that his enviable skill with data was, in fact, a sham, and his golden reputation, as well as nearly a decade of results that they had used in their own work, were built on lies.
Chris Hartgerink was studying late at the library when he heard the news. The extent of Stapel’s fraud wasn’t clear by then, but it was big. Hartgerink, who was then an undergraduate in the Tilburg psychology programme, felt a sudden disorientation, a sense that something solid and integral had been lost. Stapel had been a mentor to him, hiring him as a research assistant and giving him constant encouragement. “This is a guy who inspired me to actually become enthusiastic about research,” Hartgerink told me. “When that reason drops out, what remains, you know?”
Hartgerink wasn’t alone; the whole university was stunned. “It was a really difficult time,” said one student who had helped expose Stapel. “You saw these people on a daily basis who were so proud of their work, and you know it’s just based on a lie.” Even after Stapel resigned, the media coverage was relentless. Reporters roamed the campus – first from the Dutch press, and then, as the story got bigger, from all over the world.
On 9 September, just two days after Stapel was suspended, the university convened an ad-hoc investigative committee of current and former faculty. To help determine the true extent of Stapel’s fraud, the committee turned to Marcel van Assen, a statistician and psychologist in the department. At the time, Van Assen was growing bored with his current research, and the idea of investigating the former dean sounded like fun to him. Van Assen had never much liked Stapel, believing that he relied more on the force of his personality than reason when running the department. “Some people believe him charismatic,” Van Assen told me. “I am less sensitive to it.”
Van Assen – who is 44, tall and rangy, with a mop of greying, curly hair – approaches his work with relentless, unsentimental practicality. When speaking, he maintains an amused, half-smile, as if he is joking. He once told me that to fix the problems in psychology, it might be simpler to toss out 150 years of research and start again; I’m still not sure whether or not he was serious.
To prove misconduct, Van Assen said, you must be a pitbull: biting deeper and deeper, clamping down not just on the papers, but the datasets behind them, the research methods, the collaborators – using everything available to bring down the target. He spent a year breaking down the 45 studies Stapel produced at Tilburg and cataloguing their individual aberrations, noting where the effect size – a standard measure of the difference between the two groups in an experiment – seemed suspiciously large, where sequences of numbers were copied, where variables were too closely related, or where variables that should have moved in tandem instead appeared adrift.
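The effect size Van Assen describes is commonly reported as Cohen’s d: the difference between the two group means divided by their pooled standard deviation. A minimal sketch (the data here is invented for illustration, not drawn from Stapel’s papers):

```python
import math

def cohens_d(group_a, group_b):
    """Standardised mean difference between two groups, using the pooled SD."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b)
                          / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

treatment = [5.1, 5.4, 4.9, 5.6, 5.2]  # suspiciously tight, well-separated groups
control = [4.2, 4.5, 4.1, 4.4, 4.3]
print(round(cohens_d(treatment, control), 2))  # 4.25
```

A d above 4, as in this toy example, is far beyond the modest effects typical of social psychology, and it is exactly that kind of implausibly clean separation that counts as a “tell”.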
The committee released its final report in October 2012 and, based largely on its conclusions, 55 of Stapel’s publications were officially retracted by the journals that had published them. Stapel also returned his PhD to the University of Amsterdam. He is, by any measure, one of the biggest scientific frauds of all time. (RetractionWatch has him third on their all-time retraction leaderboard.) The committee also had harsh words for Stapel’s colleagues, concluding that “from the bottom to the top, there was a general neglect of fundamental scientific standards”. “It was a real blow to the faculty,” Jacques Hagenaars, a former professor of methodology at Tilburg, who served on the committee, told me.
By extending some of the blame to the methods and attitudes of the scientists around Stapel, the committee situated the case within a larger problem that was attracting attention at the time, which has come to be known as the “replication crisis”. For the past decade, the scientific community has been grappling with the discovery that many published results cannot be reproduced independently by other scientists – in spite of the traditional safeguards of publishing and peer-review – because the original studies were marred by some combination of unchecked bias and human error.
After the committee disbanded, Van Assen found himself fascinated by the way science is susceptible to error, bias, and outright fraud. Investigating Stapel had been exciting, and he had no interest in returning to his old work. Van Assen had also found a like mind, a new professor at Tilburg named Jelte Wicherts, who had a long history working on bias in science and who shared his attitude of upbeat cynicism about the problems in their field. “We simply agree, there are findings out there that cannot be trusted,” Van Assen said. They began planning a new sort of research group: one that would investigate the very practice of science.
Van Assen does not like assigning Stapel too much credit for the creation of the Meta-Research Center, which hired its first students in late 2012, but there is an undeniable symmetry: he and Wicherts have created, in Stapel’s old department, a platform to investigate the sort of “sloppy science” and misconduct that very department had been condemned for.
Hartgerink joined the group in 2013. “For many people, certainly for me, Stapel launched an existential crisis in science,” he said. After Stapel’s fraud was exposed, Hartgerink struggled to find “what could be trusted” in his chosen field. He began to notice how easy it was for scientists to subjectively interpret data – or manipulate it. For a brief time he considered abandoning a future in research and joining the police.
Van Assen, whom Hartgerink met through a statistics course, helped put him on another path. Hartgerink learned that a growing number of scientists in every field were coming to agree that the most urgent task for their profession was to establish what results and methods could still be trusted – and that many of these people had begun to investigate the unpredictable human factors that, knowingly or not, knocked science off its course. What was more, he could be a part of it. Van Assen offered Hartgerink a place in his yet-unnamed research group. All of the current projects were on errors or general bias, but Van Assen proposed they go out and work closer to the fringes, developing methods that could detect fake data in published scientific literature.
“I’m not normally an expressive person,” Hartgerink told me. “But I said: ‘Hell, yes. Let’s do that.’”
Hartgerink and Van Assen believe not only that most scientific fraud goes undetected, but that the true rate of misconduct is far higher than 2%. “We cannot trust self-reports,” Van Assen told me. “If you ask people, ‘At the conference, did you cheat on your fiancee?’ – people will very likely not admit this.”
Uri Simonsohn, a psychology professor at the University of Pennsylvania’s Wharton School who gained notoriety as a “data vigilante” for exposing two serious cases of fraud in his field in 2012, believes that as much as 5% of all published research contains fraudulent data. “It’s not only in the periphery, it’s not only in the journals people don’t read,” he told me. “There are probably several very famous papers that have fake data, and very famous people who have done it.”
But as long as it remains undiscovered, there is a tendency for scientists to dismiss fraud in favour of more widely documented – and less seedy – issues. Even Arturo Casadevall, an American microbiologist who has published extensively on the rate, distribution, and detection of fraud in science, told me that despite his personal interest in the topic, my time would be better served investigating the broader issues driving the replication crisis. Fraud, he said, was “probably a relatively minor problem in terms of the overall level of science”.
This way of thinking goes back at least as far as scientists have been grappling with high-profile cases of misconduct. In 1983, Peter Medawar, the British immunologist and Nobel laureate, wrote in the London Review of Books: “The number of dishonest scientists cannot, of course, be known, but even if they were common enough to justify scary talk of ‘tips of icebergs’, they have not been so numerous as to prevent science’s having become the most successful enterprise (in terms of the fulfilment of declared ambitions) that human beings have ever engaged upon.”
From this perspective, as long as science continues doing what it does well – as long as genes are sequenced and chemicals classified and diseases reliably identified and treated – then fraud will remain a minor concern. But while this may be true in the long run, it may also be dangerously complacent. Furthermore, scientific misconduct can cause serious harm, as, for instance, in the case of patients treated by Paolo Macchiarini, a doctor at Karolinska Institute in Sweden who allegedly misrepresented the effectiveness of an experimental surgical procedure he had developed. Macchiarini is currently being investigated by a Swedish prosecutor after several of the patients who received the procedure later died.
As Van Assen had earlier argued in a letter to the journal Nature, the traditional approach to investigating other scientists was needlessly fraught – since it combined the messy task of proving that a researcher had intended to commit fraud with a much simpler technical problem: whether the data underlying their results was valid. The two issues, he argued, could be separated.
Scientists can commit fraud in a multitude of ways. In 1974, the American immunologist William Summerlin famously tried to pass off a patch of skin on a mouse darkened with permanent marker pen as a successful interspecies skin graft. But most instances are more mundane: the majority of fraud cases in recent years have emerged from scientists either falsifying images – deliberately mislabelling scans and micrographs – or fabricating or altering their recorded data. And scientists have used statistical tests to scrutinise each other’s data since at least the 1930s, when Ronald Fisher, the father of biostatistics, used a basic chi-squared test to suggest that Gregor Mendel, the father of genetics, had cherrypicked some of his data.
In 2014, Hartgerink and Van Assen started to sort through the variety of tests used in ad-hoc investigations of fraud in order to determine which were powerful and versatile enough to reliably detect statistical anomalies across a wide range of fields. After narrowing down a promising arsenal of tests, they hit a tougher problem. To prove that their methods work, Hartgerink and Van Assen have to show they can reliably distinguish false from real data. But research misconduct is relatively uncharted territory. Only a handful of cases come to light each year – a dismally small sample size – so it’s hard to get an idea of what constitutes “normal” fake data, what its features and particular quirks are. Hartgerink devised a workaround, challenging other academics to produce simple fake datasets, a sort of game to see if they could come up with data that looked real enough to fool the statistical tests, with an Amazon gift card as a prize.
By 2015, the Meta-Research group had expanded to seven researchers, and Hartgerink was helping his colleagues with a separate error-detection project that would become Statcheck. He was pleased with the study that Michèle Nuijten published that autumn, which used Statcheck to show that something like half of all published psychology papers appeared to contain calculation errors, but as he tinkered with the program and the database of psychology papers they had assembled, he found himself increasingly uneasy about what he saw as the closed and secretive culture of science.
When scientists publish papers in journals, they release only the data they wish to share. Critical evaluation of the results by other scientists – peer review – takes place in secret and the discussion is not released publicly. Once a paper is published, all comments, concerns, and retractions must go through the editors of the journal before they reach the public. There are good, or at least defensible, arguments for all of this. But Hartgerink is part of an increasingly vocal group that believes that the closed nature of science, with authority resting in the hands of specific gatekeepers – journals, universities, and funders – is harmful, and that a more open approach would better serve the scientific method.
Hartgerink realised that with a few adjustments to Statcheck, he could make public all the statistical errors it had exposed. He hoped that this would shift the conversation away from talk of broad, representative results – such as the proportion of studies that contained errors – and towards a discussion of the individual papers and their mistakes. The critique would be complete, exhaustive, and in the public domain, where the authors could address it; everyone else could draw their own conclusions.
In August 2016, with his colleagues’ blessing, he posted the full set of Statcheck results publicly on the anonymous science message board PubPeer. At first there was praise on Twitter and science blogs, which skew young and progressive – and then, condemnations, largely from older scientists, who feared an intrusive new world of public blaming and shaming. In December, after everyone had weighed in, Nature, a bellwether of mainstream scientific thought for more than a century, cautiously supported a future of automated scientific scrutiny in an editorial that addressed the Statcheck controversy without explicitly naming it. Its conclusion seemed to endorse Hartgerink’s approach, that “criticism itself must be embraced”.
In the same month, the Office of Research Integrity (ORI), an obscure branch of the US National Institutes of Health, awarded Hartgerink a small grant – about $100,000 – to pursue new projects investigating misconduct, including the completion of his program to detect fabricated data. For Hartgerink and Van Assen, who had not received any outside funding for their research, it felt like vindication.
Yet change in science comes slowly, if at all, Van Assen reminded me. The current push for more open and accountable science, of which they are a part, has “only really existed since 2011”, he said. It has captured an outsize share of the science media’s attention, and set laudable goals, but it remains a small, fragile outpost of true believers within the vast scientific enterprise. “I have the impression that many scientists in this group think that things are going to change,” Van Assen said. “Chris, Michèle, they are quite optimistic. I think that’s bias. They talk to each other all the time.”
When I asked Hartgerink what it would take to totally eradicate fraud from the scientific process, he suggested that scientists make all of their data public; register the intentions of their work before conducting experiments, to prevent post-hoc reasoning; and have their results checked by algorithms during and after the publishing process.
To any working scientist – currently enjoying nearly unprecedented privacy and freedom for a profession that is in large part publicly funded – Hartgerink’s vision would be an unimaginably draconian scientific surveillance state. For his part, Hartgerink believes the preservation of public trust in science requires nothing less – but in the meantime, he intends to pursue this ideal without the explicit consent of the entire scientific community, by investigating published papers and making the results available to the public.
Even scientists who have done similar work uncovering fraud have reservations about Van Assen and Hartgerink’s approach. In January, I met with Dr John Carlisle and Dr Steve Yentis at an anaesthetics conference that took place in London, near Westminster Abbey. In 2012, Yentis, then the editor of the journal Anaesthesia, asked Carlisle to investigate data from a researcher named Yoshitaka Fujii, who the community suspected was falsifying clinical trials. In time, Carlisle demonstrated that 168 of Fujii’s trials contained dubious statistical results. Yentis and the other journal editors contacted Fujii’s employers, who launched a full investigation. Fujii currently sits at the top of the RetractionWatch leaderboard with 183 retracted studies. By sheer numbers he is the biggest scientific fraudster in recorded history.
Carlisle, who, like Van Assen, found that he enjoyed the detective work (“it takes a certain personality, or personality disorder”, he said), showed me his latest project, a larger-scale analysis of the rate of suspicious clinical trial results across multiple fields of medicine. He and Yentis discussed their desire to automate these statistical tests – which, in theory, would look a lot like what Hartgerink and Van Assen are developing – but they have no plans to make the results public; instead they envision that journal editors might use the tests to screen incoming articles for signs of possible misconduct.
“It is an incredibly difficult balance,” said Yentis, “you’re saying to a person, ‘I think you’re a liar.’ We have to decide how many fraudulent papers are worth one false accusation. How many is too many?”
With the introduction of programs such as Statcheck, and the growing desire to conduct as much of the critical conversation as possible in public view, Yentis expects a stormy reckoning with those very questions. “That’s a big debate that hasn’t happened,” he said, “and it’s because we simply haven’t had the tools.”
For all their dispassionate distance, when Hartgerink and Van Assen say that they are simply identifying data that “cannot be trusted”, they mean flagging papers and authors that fail their tests. And, as they learned with Statcheck, for many scientists, that will be indistinguishable from an accusation of deceit. When Hartgerink eventually deploys his fraud-detection program, it will flag up some very real instances of fraud, as well as many unintentional errors and false positives – and present all of the results in a messy pile for the scientific community to sort out. Simonsohn called it “a bit like leaving a loaded gun on a playground”.
When I put this question to Van Assen, he told me it was certain that some scientists would be angered or offended by having their work and its possible errors exposed and discussed. He didn’t want to make anyone feel bad, he said – but he didn’t feel bad about it. Science should be about transparency, criticism, and truth.
“The problem, also with scientists, is that people think they are important, they think they have a special purpose in life,” he said. “Maybe you too. But that’s a human bias. I think when you look at it objectively, individuals don’t matter at all. We should only look at what is good for science and society.”
Sunday, 15 January 2017
Time to hold our lying leaders to account
Nick Cohen in The Guardian
Post-truth politics isn’t a coherent description of the world but a cry of despair. Propositions have not stopped being right or wrong just because of the invention of Facebook. Whatever the authoritarian cults who rage across Twitter say to the contrary, the Earth still goes round the sun and two plus two still equals four.
“Everything is relative. Stories are being made up all the time. There is no such thing as the truth,” cried Anthony Grayling. But unless the professor has abandoned every philosophical principle he has held, what Grayling and millions like him mean is something like this. Donald Trump, Boris Johnson, and other liars the like of which they cannot remember, have made fantastical promises to their electorates. They said they could build a wall and make Mexico pay for it or make Britain richer by crashing her out of the EU.
But instead of laughing at their transparent falsehoods or being insulted at being taken for fools, blocs of voters have handed them victory. Evidence could not shake them. Common sense could not reach them. Surely, their gullibility shows we have arrived in a new dystopia. You can see why they got that way. Trump is clear that the checks and balances that restrained power in the old world will not apply to him. His refusal to release his tax returns shows it. The Russian dissident Garry Kasparov put the urgent case for transparency best when he said Trump has criticised Republicans, Democrats, the pope, the CIA, FBI, Nato, Meryl Streep… everyone and anyone “except Vladimir Putin”.
What gives here? And more to the point, who’s on the take? I see an ideological affinity between Russian autocracy, the western far left and the western populist right: they band together against the common enemy of liberal democracy. But it has always been reasonable to ask whether the traditional inducements of sex and money have tightened Putin’s grip on Trump.
You could lay this canard to rest by publishing your tax returns, American journalists told their president-elect. You must know the American public wants to see them.
The public doesn’t care, Trump replied. I went into an election refusing to release my tax returns and “I won.” So now I can do what I want.
His spokeswoman, Kellyanne Conway, who could work for a Russian propaganda channel when she’s thrown out of politics, uses the same logic when asked whether it is “presidential” for her master to lie so often and so blatantly. “He’s the president-elect, so that’s presidential behaviour.”
The British are experiencing their own version of Trumpish triumphalism. In our case, too, the answer to every hard question is a brute proclamation of power. Are you seriously going to take us out of the single market? Leave won. And the customs union? Leave won. What about EU citizens here? Leave won. And British citizens there? Leave won.
Fighting back should be easy – if you cannot expose charlatans such as Trump and Johnson, you should step aside and make way for people who can. But a terrible uncertainty grips opposition politics across the English-speaking world. Trump’s victory strikes me as a far greater cause for self-doubt than Brexit. Because we never had to endure invasion by Hitler or Stalin, or government by Greek colonels or Spanish falangists, the British did not have the same emotional attachment to an EU that freed the rest of Europe from a terrible past.
Even if, as I do, you regard the decision to leave as a monumental blunder, it is not, given Britain’s lucky history, inexplicable. Trump’s victory, by contrast, overturns truths that western liberals felt to be self-evident. You cannot abuse women and ethnic minorities. You cannot lie in your every second utterance. If you do, the media will expose and destroy you.
I can’t find a better way of illustrating the demoralising change in the weather than by referring you to Alan Ryan’s history of western political thought, On Politics. I don’t mean to criticise Ryan. He has produced a vast and brilliant book that stands comparison with Bertrand Russell’s History of Western Philosophy. But unlike Russell, who was gloriously waspish and prejudiced, Ryan is a careful writer and his rare opinionated judgments are all the more authoritative for that.
In 2013 he, like nearly every serious person, could say with absolute certainty that, despite its legion of faults, the 21st century was better than the 20th. For instance, Ryan explained, Governor George Wallace’s infamous battle cry of the 1950s – “I will never be out-niggered”, after he had been beaten by a politician who was even more of a racist than he was – “would today instantly terminate his career”.
Yet in 2016, Trump echoed Wallace and far from seeing his career terminated became president of the United States, an office that Wallace never came near, incidentally. After that, I can understand why the disoriented talk about a post-truth world, but it remains a sign of their trauma rather than a description of our times.
It is as dangerous to overestimate the importance of technological change as to underestimate it. There was no web in 1968, and US broadcasters had to be accurate and impartial. The old world of 20th-century technology did not, however, stop George Wallace winning millions of white, working-class votes when he ran for president as an open white supremacist. Wallace was beaten by Richard Nixon, a closet racist and crook.
When his crimes caught up with him, Nixon declared that he could not be prosecuted because “when the president does it, that means it is not illegal”, a line that Conway might have written for him.
Post-truth world or not, a Republican abolition of Obamacare will still leave white, working-class Americans who voted for Trump to rot without decent treatment, a hard Brexit will still hurt the British working class more than their rightwing leaders, the Earth will still go round the sun, and two plus two will still equal four.
To pretend that we are living in a culture without historical precedent is to make modernity an excuse for the abnegation of political responsibility. The question for the Anglo-Saxon opposition is not how to cope with a world where truth has suddenly become as hard to find as Trump’s tax returns. It is the same question that has faced every opposition in the history of democracy: how can we make the powerful pay for the lies they have fed to the masses?
Sunday, 14 August 2016
From Donald Trump to the Brexit campaign, outrageous untruths are almost a matter of course. How did we reach the point where ‘falsehood flies’?
Steven Poole in The Guardian
Champion of ‘free speech’: Donald Trump at a campaign rally last week. Photograph: Evan Vucci/AP
Not so long ago it was “soundbites” that were thought to be corrupting political debate by reducing complex ideas to slogans. The 1988 US presidential election was called the “soundbite election” by some commentators, the most famous example being George HW Bush’s promise: “Read my lips: No new taxes.” (Two years later Bush agreed to a bipartisan budget that did increase taxes.) It was a mysteriously brilliant piece of verbal engineering. Why would you have to read Bush’s lips when you could hear what he was saying on the TV? But the surprising image and arresting rhythm made it stick.
Soundbites and slogans (“Take Back Control”) still work. Trump, too, has a conventional campaign slogan: “Make America Great Again”. (Great how, exactly? By doing what? Don’t ask.) But he gets most publicity for his antic, apparently off-the-cuff remarks that rhetorically perform an absence of rhetoric. His real genius might be read as a satirical absolutism about the first amendment. If speech is genuinely free, there should be no consequences to speech whatsoever. And, to the mystification of the commenting class, this is what Trump repeatedly finds to be the case.
After the media furore surrounding Trump’s claim that Obama founded Isis, he tweeted: “THEY DON’T GET SARCASM?” Thus he rows back from any outrageous claim dreamed up by a brain that works like a cleverly programmed internet meme-generator. “I don’t know,” he says, all innocence, “that’s what some people are saying.” (No one was before he did.) Yet the idea Obama is the founder of Isis will stick in at least some voters’ minds come polling day, as will the imaginary Mexican wall (though they will probably have forgotten its “big beautiful door”) – just as the “£350m a week for the NHS” promise did for many Leave voters.
Trump is not a perversion of the tradition of political campaigning; he is the logical culmination of it. It doesn’t matter what you say, if it helps you get elected. Trump is not a liar, exactly, but a bullshitter. According to the canonical definition by the philosopher Harry Frankfurt, a liar still cares about the truth because he wants to conceal it from you. A bullshitter, on the other hand, simply doesn’t care what is true at all.
Trump is merely the most energetic current exploiter of a fact that modern politicians have long known: the media is broken, and you can mercilessly exploit its flaws to your own benefit. (That, after all, is what “spin doctors” are for.) If you repeat a lie often enough, then that claim becomes the story, and it’s what most people remember. And a structural confusion between “impartiality” and “balance” undermines the mission to inform of institutions such as the BBC. To be impartial would be to point out untruths wherever they come from. But to be “balanced” is to have a three-way between a presenter and two economists on opposite sides of some question. Never mind that one economist represents the views of 95% of the profession and the other is an ideologically blinkered outlier: the structure of the interview itself implies to the audience that the arguments are evenly divided.
In the age of social media, moreover, dubious political claims are packed into atomised fragments and attract thousands of enthusiastic retweets, while the people who help to redistribute them are unlikely ever to see a rebuttal that comes later or in someone else’s timeline. We’ve all moved on.
Social media is less a conversation than it is a virtually distributed riot of “happy firing” (a term for the celebratory shooting of assault rifles into the sky). That lies can go viral more quickly than the truth is another old observation. In 1710 Jonathan Swift wrote: “Falsehood flies, and the Truth comes limping after it.” But what is certain is that Twitter and Facebook now help it fly faster and further than ever before.
Nigel Farage proved the power of powerful slogans and images during the EU referendum campaign. Photograph: Philip Toscano/PA
Because attention is the currency of social media, public figures are incentivised to use outrage to vie for visibility, which further coarsens the public discourse – as when the American shock-journo Ann Coulter lately defended Trump by calling him a “victim of media rape” who is being blamed for “wearing a short skirt”. Any such outburst these days, along with the wave of overt post-Brexit racism in Britain, may be defended as a healthy refusal to kowtow to “political correctness”, a term that originally denoted the careful use of language so as not to needlessly upset people, and now just means common decency.
What, then, is to be done? The modern bullshitting demagogue succeeds because he says arresting and often amusing things that cut through the anomie of those who feel left behind by politics as usual. Exquisitely reasoned liberal conversation is exactly what turns those voters off. Lately it has been notable that Hillary Clinton, not previously considered the wittiest person in US politics, has used an impressive array of scripted zingers to put down her opponent. What the bullshitters do so well is define the rules of the game, so perhaps their opponents will have to play it at least to this extent, while trying to keep the moral high ground by still caring about what is true and what isn’t.
It’s not an edifying thought, but if the insurgent right is to have its Trumps and its Farages, maybe the centre and the left need their own versions too.
A Vote Leave battle bus, rebranded outside parliament in London by Greenpeace last month. Photograph: Jack Taylor/Getty Images
Donald Trump announced last week that Barack Obama was the “founder of Isis” and its “most valuable player”. Earlier he had hinted that gun activists might want to assassinate Hillary Clinton to prevent her appointing liberal justices to the Supreme Court. In Britain, meanwhile, calls for the moderation of violent political language after the death of Jo Cox have not resulted in much reduction of the gleeful talk of “stabbings” and “traitors”, and did not discourage Nigel Farage from exulting that the Brexit vote had been won “without a shot being fired”. In what some call an era of “post-truth politics”, public discourse seems more abusive and angry, and further from the ideal of reasoned conversation about social goods, than ever before. Is our political language broken?
Well, people have been complaining about the corruption of political language since political language existed. Confucius warned that a ruler should use the correct names for things, or social catastrophe would result. Orwell lamented that political language in his time was “designed to make lies sound truthful and murder respectable, and to give an appearance of solidity to pure wind”. And the era of the “war on terror” gave rise to a whole new constellation of what I call Unspeak: carefully engineered phrases designed to smuggle in a biased point of view and shut down thought and argument – like “war on terror” itself.
Nor is flat-out lying in politics anything new. There is a marvellous 18th-century pamphlet usually attributed to John Arbuthnot, friend of Swift and Pope and founder of the Scriblerus club. It describes a yet-to-be-written book, The Art of Political Lying, in which the author will show “that the People have a Right to private Truth from their Neighbours ... but that they have no Right at all to Political Truth”.
The coinage “post-truth politics”, indeed, implies that there was once a golden age of politics in which its elevated practitioners spoke nothing but perfect truth. The sun never dawned on such a day. But perhaps what feels new to us now is the shamelessness of the lying, and the barefaced repetition of a lie that has already been publicly debunked. Arbuthnot cautioned that the same lie should not be “obstinately insisted upon”, but he did not live to see this strategy work so brilliantly during the EU referendum, with the Leave campaign’s claim that we sent £350m a week to the EU.
Shameless, too, was the haste with which this lie, having done its work, was disowned on the morning of the referendum result. It was a “mistake”, muttered Nigel Farage, before carefully lowering his snout back into the EU trough that continues to pay his MEP’s salary. This rather called to mind Paul Wolfowitz’s candid admission that the issue of Saddam’s alleged WMD was chosen as the justification for the Iraq war “for bureaucratic reasons”. The surprise, perhaps, is that you can show how the magic trick works, and people still believe it next time.
Champion of ‘free speech’: Donald Trump at a campaign rally last week. Photograph: Evan Vucci/AP
Not so long ago it was “soundbites” that were thought to be corrupting political debate by reducing complex ideas to slogans. The 1988 US presidential election was called the “soundbite election” by some commentators, the most famous example being George HW Bush’s promise: “Read my lips: No new taxes.” (Two years later Bush agreed to a bipartisan budget that did increase taxes.) It was a mysteriously brilliant piece of verbal engineering. Why would you have to read Bush’s lips when you could hear what he was saying on the TV? But the surprising image and arresting rhythm made it stick.
Soundbites and slogans (“Take Back Control”) still work. Trump, too, has a conventional campaign slogan: “Make America Great Again”. (Great how, exactly? By doing what? Don’t ask.) But he gets most publicity for his antic, apparently off-the-cuff remarks that rhetorically perform an absence of rhetoric. His real genius might be read as a satirical absolutism about the first amendment. If speech is genuinely free, there should be no consequences to speech whatsoever. And, to the mystification of the commenting class, this is what Trump repeatedly finds to be the case.
After the media furore surrounding Trump’s claim that Obama founded Isis, he tweeted: “THEY DON’T GET SARCASM?” Thus he rows back from any outrageous claim dreamed up by a brain that works like a cleverly programmed internet meme-generator. “I don’t know,” he says, all innocence, “that’s what some people are saying.” (No one was before he did.) Yet the idea that Obama founded Isis will stick in at least some voters’ minds come polling day, as will the imaginary Mexican wall (though they will probably have forgotten its “big beautiful door”) – just as the “£350m a week for the NHS” promise did for many Leave voters.
Trump is not a perversion of the tradition of political campaigning; he is the logical culmination of it. It doesn’t matter what you say, if it helps you get elected. Trump is not a liar, exactly, but a bullshitter. According to the canonical definition by the philosopher Harry Frankfurt, a liar still cares about the truth because he wants to conceal it from you. A bullshitter, on the other hand, simply doesn’t care what is true at all.
Trump is merely the most energetic current exploiter of a fact that modern politicians have long known: the media is broken, and you can mercilessly exploit its flaws to your own benefit. (That, after all, is what “spin doctors” are for.) If you repeat a lie often enough, then that claim becomes the story, and it’s what most people remember. And a structural confusion between “impartiality” and “balance” undermines the informative mission of institutions such as the BBC. To be impartial would be to point out untruths wherever they come from. But to be “balanced” is to have a three-way between a presenter and two economists on opposite sides of some question. Never mind that one economist represents the views of 95% of the profession and the other is an ideologically blinkered outlier: the structure of the interview itself implies to the audience that the arguments are evenly divided.
In the age of social media, moreover, dubious political claims are packed into atomised fragments and attract thousands of enthusiastic retweets, while the people who help to redistribute them are unlikely ever to see a rebuttal that comes later or in someone else’s timeline. We’ve all moved on.
Social media is less a conversation than it is a virtually distributed riot of “happy firing” (a term for the celebratory shooting of assault rifles into the sky). That lies can go viral more quickly than the truth is another old observation. In 1710 Jonathan Swift wrote: “Falsehood flies, and the Truth comes limping after it.” But what is certain is that Twitter and Facebook now help it fly faster and further than ever before.
Nigel Farage proved the power of powerful slogans and images during the EU referendum campaign. Photograph: Philip Toscano/PA
Because attention is the currency of social media, public figures are incentivised to use outrage to vie for visibility, which further coarsens the public discourse – as when the American shock-journo Ann Coulter lately defended Trump by calling him a “victim of media rape” who is being blamed for “wearing a short skirt”. Any such outburst these days, along with the wave of overt post-Brexit racism in Britain, may be defended as a healthy refusal to kowtow to “political correctness”, a term that originally denoted the careful use of language so as not to needlessly upset people, and now just means common decency.
What, then, is to be done? The modern bullshitting demagogue succeeds because he says arresting and often amusing things that cut through the anomie of those who feel left behind by politics as usual. Exquisitely reasoned liberal conversation is exactly what turns those voters off. Lately it has been notable that Hillary Clinton, not previously considered the wittiest person in US politics, has used an impressive array of scripted zingers to put down her opponent. What the bullshitters do so well is define the rules of the game, so perhaps their opponents will have to play it at least to this extent, while trying to keep the moral high ground by still caring about what is true and what isn’t.
It’s not an edifying thought, but if the insurgent right is to have its Trumps and its Farages, maybe the centre and the left need their own versions too.
Monday, 18 July 2016
A nine-point guide to spotting a dodgy statistic
Boris Johnson did not remove the £350m figure from the Leave campaign bus even after it had been described as ‘misleading’. Photograph: Stefan Rousseau/PA
David Spiegelhalter in The Guardian
I love numbers. They allow us to get a sense of magnitude, to measure change, to put claims in context. But despite their bold and confident exterior, numbers are delicate things and that’s why it upsets me when they are abused. And since there’s been a fair amount of number abuse going on recently, it seems a good time to have a look at the classic ways in which politicians and spin doctors meddle with statistics.
Every statistician is familiar with the tedious “Lies, damned lies, and statistics” gibe, but the economist, writer and presenter of Radio 4’s More or Less, Tim Harford, has identified the habit of some politicians as not so much lying – to lie means having some knowledge of the truth – as “bullshitting”: a carefree disregard of whether the number is appropriate or not.
So here, with some help from the UK fact-checking organisation Full Fact, is a nine-point guide to what’s really going on.
Use a real number, but change its meaning
There’s almost always some basis for numbers that get quoted, but it’s often rather different from what is claimed. Take, for example, the famous £350m, as in the “We send the EU £350m a week” claim plastered over the big red Brexit campaign bus. This is a true National Statistic (see Table 9.9 of the ONS Pink Book 2015), but, in the words of Sir Andrew Dilnot, chair of the UK Statistics Authority watchdog, it “is not an amount of money that the UK pays to the EU”. In fact, the UK’s net contribution is more like £250m a week when Britain’s rebate is taken into account – and much of that is returned in the form of agricultural subsidies and grants to poorer UK regions, reducing the figure to £136m. Sir Andrew expressed disappointment that this “misleading” claim was being made by Brexit campaigners but this ticking-off still did not get the bus repainted.
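The three figures quoted above fit together as simple subtraction. A sketch of the arithmetic in Python, where the rebate and returned-funds amounts are back-calculated from the article's £250m and £136m figures (they are illustrative round numbers, not official ONS line items):

```python
# Rough weekly figures in GBP millions, as discussed in the text above.
gross_contribution = 350   # the headline "we send the EU £350m a week"
rebate = 100               # UK rebate, deducted before any money is sent
returned_funds = 114       # subsidies and grants flowing back to the UK

net_sent = gross_contribution - rebate   # what actually leaves the UK
net_cost = net_sent - returned_funds     # net weekly cost after returns

print(net_sent)   # 250
print(net_cost)   # 136
```

The point of the sketch is that the £350m is a real statistic whose meaning has been changed: the sum that could plausibly be "taken back" is less than half the headline figure.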
George Osborne quoted the Treasury’s projection of £4,300 as the cost per household of leaving the EU. Photograph: Matt Cardy/Getty Images
Make the number look big (but not too big)
Why did the Leave campaign frame the amount of money as “£350m per week”, rather than the equivalent “£19bn a year”? They probably realised that, once numbers get large, say above 10m, they all start seeming the same – all those extra zeros have diminishing emotional impact. Billions, schmillions, it’s just a Big Number.
Of course they could have gone the other way and said “£50m a day”, but then people might have realised that this is equivalent to around a packet of crisps each, which does not sound so impressive.
George Osborne, on the other hand, preferred to quote the Treasury’s projection of the potential cost of leaving the EU as £4,300 per household per year, rather than as the equivalent £120bn for the whole country. Presumably he was trying to make the numbers seem relevant, but perhaps he would have been better off framing the projected cost as “£2.5bn a week” so as to provide a direct comparison with the Leave campaign’s £350m. It probably would not have made any difference: the weighty 200-page Treasury report is on course to become a classic example of ignored statistics.
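The reframings discussed above are all the same quantities at different scales; a few lines of arithmetic make the equivalences explicit (the population of roughly 65 million and household count of roughly 27 million are my own rough assumptions, used only for illustration):

```python
# The Leave campaign's figure, reframed.
weekly = 350e6                    # £350m per week
yearly = weekly * 52              # ≈ £18.2bn, i.e. the "£19bn a year" framing
daily = weekly / 7                # £50m a day
per_person_daily = daily / 65e6   # ≈ £0.77 each -- "a packet of crisps"

# The Treasury's projection, reframed.
treasury_yearly = 120e9                        # £120bn for the whole country
per_household = treasury_yearly / 27e6         # ≈ £4,400 per household per year
per_week_total = treasury_yearly / 52          # ≈ £2.3bn a week
```

Nothing changes except the denominator, yet "£350m a week" and "a packet of crisps a day" produce very different emotional responses to the same sum.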
Recent studies confirmed higher death rates at weekends, but showed no relationship to weekend staffing levels. Photograph: Peter Byrne/PA
Casually imply causation from correlation
In July 2015 Jeremy Hunt said: “Around 6,000 people lose their lives every year because we do not have a proper seven-day service in hospitals….” and by February 2016 this had increased to “11,000 excess deaths because we do not staff our hospitals properly at weekends”. These categorical claims that weekend staffing was responsible for increased weekend death rates were widely criticised at the time, particularly by the people who had done the actual research. Recent studies have confirmed higher death rates at weekends, but these showed no relationship to weekend staffing levels.
Choose your definitions carefully
On 17 December 2014, Tom Blenkinsop MP said, “Today, there are 2,500 fewer nurses in our NHS than in May 2010”, while on the same day David Cameron claimed “Today, actually, there are new figures out on the NHS… there are 3,000 more nurses under this government.” Surely one must be wrong?
But Mr Blenkinsop compared the number of people working as nurses between September 2010 and September 2014, while Cameron used the full-time-equivalent number of nurses, health visitors and midwives between the start of the government in May 2010 and September 2014. So they were both, in their own particular way, right.
Use total numbers rather than proportions (or whichever way suits your argument)
In the final three months of 2014, less than 93% of attendances at Accident and Emergency units were seen within four hours, the lowest proportion for 10 years. And yet Jeremy Hunt managed to tweet that “More patients than ever being seen in less than four hours”. Which, strictly speaking, was correct, but only because more people were attending A&E than ever before. Similarly, when it comes to employment, an increasing population means that the number of employed can go up even when the employment rate goes down. Full Fact has shown how the political parties play “indicator hop”, picking whichever measure currently supports their argument.
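The A&E "indicator hop" is easy to reproduce: if attendance grows fast enough, the absolute number of patients seen within four hours can rise even as the proportion falls. A toy illustration with invented attendance figures:

```python
# Hypothetical quarterly figures: attendance grows, performance worsens.
last_year = {"attendances": 5_000_000, "seen_in_4h_rate": 0.95}
this_year = {"attendances": 5_600_000, "seen_in_4h_rate": 0.93}

seen_last = last_year["attendances"] * last_year["seen_in_4h_rate"]  # 4.75m
seen_this = this_year["attendances"] * this_year["seen_in_4h_rate"]  # ~5.21m

# Both claims are simultaneously true:
assert seen_this > seen_last                  # "more patients than ever seen"
assert this_year["seen_in_4h_rate"] < last_year["seen_in_4h_rate"]  # rate fell
```

Which of the two true claims gets quoted depends entirely on which one suits the speaker's argument.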
Don’t provide any relevant context
Last September shadow home secretary Andy Burnham declared that “crime is going up”, and when pressed pointed to the police recording more violent and sexual offences than the previous year. But police-recorded crime data were de-designated as “official” statistics by the UK Statistics Authority in 2014 as they were so unreliable: they depend strongly on what the public choose to report, and how the police choose to record it.
Instead the Crime Survey for England and Wales is the official source of data, as it records crimes that are not reported to the police. And the Crime Survey shows a steady reduction in crime for more than 20 years, and no evidence of an increase in violent and sexual offences last year.
Exaggerate the importance of a possibly illusory change
Next time you hear a politician boasting that unemployment has dropped by 30,000 over the previous quarter, just remember that this is an estimate based on a survey. And that estimate has a margin of error of ±80,000, meaning that unemployment may well have gone down, but it may have gone up – the best we can say is that it hasn’t changed very much, but that hardly makes a speech. And to be fair, the politician probably has no idea that this is an estimate and not a head count.
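A margin of error of ±80,000 on a reported change of 30,000 means the plausible range straddles zero; a minimal check:

```python
estimate = -30_000   # reported quarterly change in unemployment (a fall)
margin = 80_000      # survey margin of error (half-width of the interval)

# The interval of changes consistent with the survey data:
ci = (estimate - margin, estimate + margin)
contains_zero = ci[0] <= 0 <= ci[1]

print(ci, contains_zero)   # (-110000, 50000) True
```

Because zero sits comfortably inside the interval, the data are consistent with no change at all, or even with a rise in unemployment, which is exactly why the headline "drop of 30,000" is a possibly illusory change.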
Serious youth crime has actually declined, but that’s not because of TKAP. Photograph: Action Press / Rex Features
Prematurely announce the success of a policy initiative using unofficial selected data
In June 2008, just a year after the start of the Tackling Knives Action Programme (TKAP), No 10 got the Home Office to issue a press release saying “the number of teenagers admitted to hospital for knife or sharp instrument wounding in nine… police force areas fell by 27% according to new figures published today”. But this used unchecked unofficial data, and was against the explicit advice of official statisticians. They got publicity, but also a serious telling-off from the UK Statistics Authority which accused No 10 of making an announcement that was “corrosive of public trust in official statistics”. The final conclusion about the TKAP was that serious youth violence had declined in the country, but no more in TKAP areas than elsewhere.
If all else fails, just make the numbers up
Last November, Donald Trump tweeted a recycled image that included the claim that “Whites killed by blacks – 81%”, citing “Crime Statistics Bureau – San Francisco”. The US fact-checking site Politifact identified this as completely fabricated – the “Bureau” did not exist, and the true figure is around 15%. When confronted with this, Trump shrugged and said, “Am I going to check every statistic?”
Not all politicians are so cavalier with statistics, and of course it’s completely reasonable for them to appeal to our feelings and values. But there are some serial offenders who conscript innocent numbers, purely to provide rhetorical flourish to their arguments.
We deserve to have statistical evidence presented in a fair and balanced way, and it’s only by public scrutiny and exposure that anything will ever change. There are noble efforts to dam the flood of naughty numbers. The BBC’s More or Less team take apart dodgy data, organisations such as Full Fact and Channel 4’s FactCheck expose flagrant abuses, the UK Statistics Authority write admonishing letters. The Royal Statistical Society offers statistical training for MPs, and the House of Commons library publishes a Statistical Literacy Guide: how to spot spin and inappropriate use of statistics.
They are all doing great work, but the shabby statistics keep on coming. Maybe these nine points can provide a checklist, or even the basis for a competition – how many points can your favourite minister score? In my angrier moments I feel that number abuse should be made a criminal offence. But that’s a law unlikely to be passed by politicians.
David Spiegelhalter is the Winton Professor of the Public Understanding of Risk at the University of Cambridge and president-elect of the Royal Statistical Society