
Tuesday 29 August 2023

A level Economics: How to Improve Economic Forecasting

 Nicholas Gruen in The FT 


Today’s four-day weather forecasts are as accurate as one-day forecasts were 30 years ago. Economic forecasts, on the other hand, aren’t noticeably better than they were then. Former Federal Reserve chair Ben Bernanke should ponder this in his forthcoming review of the Bank of England’s forecasting.

There’s growing evidence that we can improve. But myopia and complacency get in the way. Myopia is an issue because economists think technical expertise is the essence of good forecasting when, actually, two things matter more: forecasters’ understanding of the limits of their expertise and their judgment in handling those limits. 

Enter Philip Tetlock, whose 2005 book on geopolitical forecasting showed how little experts added to forecasting done by informed non-experts. To compare forecasts between the two groups, he forced participants to drop their vague weasel words — “probably”, “can’t be ruled out” — and specify exactly what they were forecasting and with what probability.  

That started sorting the sheep from the goats. The simple “point forecasts” provided by economists — such as “growth will be 3.0 per cent” — are doubly unhelpful in this regard. They’re silent about what success looks like. If I have forecast 3.0 per cent growth and actual growth comes in at 3.2 per cent — did I succeed or fail? Such predictions also don’t tell us how confident the forecaster is. 

By contrast, “a 70 per cent chance of rain” specifies a clear event with a precise estimation of the weather forecaster’s confidence. Having rigorously specified the rules of the game, Tetlock has since shown how what he calls “superforecasting” is possible and how diverse teams of superforecasters do even better.  
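As a concrete illustration of the difference (not from Gruen’s article): once a forecast names a specific event and a probability, it can be scored with a standard rule such as the Brier score. The sketch below is a minimal example with invented forecasts and outcomes.

```python
# Minimal sketch (not from the article): scoring probability forecasts
# with the Brier score -- the mean squared gap between each stated
# probability and what actually happened (1 = event occurred, 0 = not).
# Lower is better. All forecasts and outcomes below are invented.

def brier_score(probs, outcomes):
    """Average squared error of stated probabilities against outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

outcomes = [1, 0, 1, 0, 1]             # what actually happened
careful  = [0.6, 0.4, 0.7, 0.3, 0.6]   # hedged, well-calibrated probabilities
certain  = [1.0, 0.0, 1.0, 1.0, 0.0]   # always 100 per cent sure, twice wrong

print(f"careful forecaster: {brier_score(careful, outcomes):.3f}")  # 0.132
print(f"certain forecaster: {brier_score(certain, outcomes):.3f}")  # 0.400
```

A bare point forecast such as “growth will be 3.0 per cent” names neither a resolvable event nor a probability, so it cannot be scored this way at all.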

What qualities does Tetlock see in superforecasters? As well as mastering necessary formal techniques, they’re open-minded, careful, curious and self-critical — in other words, they’re not complacent. Aware, like Socrates, of how little they know, they’re constantly seeking to learn — from unfolding events and from colleagues. 

Superforecasters actively resist the pull to groupthink, which is never far away in most organisations — or indeed, in the profession of economics as a whole, as practitioners compensate for their ignorance by keeping close to the herd. The global financial crisis is just one example of an event that economists collectively failed to warn the world about. 

There are just five pages referencing superforecasting on the entire Bank of England website — though that’s more than other central banks. 

Bernanke could recommend that we finally set about the search for economic superforecasters. He should also propose that the BoE lead the world by open sourcing economic forecasting.  

In this scenario, all models used would be released fully documented and a “prediction tournament” would focus on the key forecasts. Outsiders would be encouraged to enter the tournament — offering their own forecasts, their own models and their own reconfiguration or re-parameterisation of the BoE’s models. Prizes could be offered for the best teams and the best schools and universities.  

The BoE’s forecasting team(s) should also compete. The BoE could then release its official forecasts using the work it has the most confidence in, whether it is that of its own team(s), outsiders or some hybrid option. Over time, we’d be able to identify which ones were consistently better.  

Using this formula, I predict that the Bank of England’s official forecasts would find their way towards the top of the class — in the UK, and the world.

Tuesday 1 June 2021

If the Wuhan lab-leak hypothesis is true, expect a political earthquake

 Thomas Frank in The Guardian


‘My own complacency on the matter was dynamited by the lab-leak essay that ran in the Bulletin of the Atomic Scientists earlier this month.’ Photograph: Thomas Peter/Reuters
 

There was a time when the Covid pandemic seemed to confirm so many of our assumptions. It cast down the people we regarded as villains. It raised up those we thought were heroes. It prospered people who could shift easily to working from home even as it problematized the lives of those Trump voters living in the old economy.

Like all plagues, Covid often felt like the hand of God on earth, scourging the people for their sins against higher learning and visibly sorting the righteous from the unmasked wicked. “Respect science,” admonished our yard signs. And lo! Covid came and forced us to do so, elevating our scientists to the highest seats of social authority, from where they banned assembly, commerce, and all the rest.

We cast blame so innocently in those days. We scolded at will. We knew who was right and we shook our heads to behold those in the wrong playing in their swimming pools and on the beach. It made perfect sense to us that Donald Trump, a politician we despised, could not grasp the situation, that he suggested people inject bleach, and that he was personally responsible for more than one super-spreading event. Reality itself punished leaders like him who refused to bow to expertise. The prestige news media even figured out a way to blame the worst death tolls on a system of organized ignorance they called “populism.”

But these days the consensus doesn’t consense quite as well as it used to. Now the media is filled with disturbing stories suggesting that Covid might have come — not from “populism” at all, but from a laboratory screw-up in Wuhan, China. You can feel the moral convulsions beginning as the question sets in: What if science itself is in some way culpable for all this?

*

I am no expert on epidemics. Like everyone else I know, I spent the pandemic doing as I was told. A few months ago I even tried to talk a Fox News viewer out of believing in the lab-leak theory of Covid’s origins. The reason I did that is because the newspapers I read and the TV shows I watched had assured me on many occasions that the lab-leak theory wasn’t true, that it was a racist conspiracy theory, that only deluded Trumpists believed it, that it got infinite pants-on-fire ratings from the fact-checkers, and because (despite all my cynicism) I am the sort who has always trusted the mainstream news media.

My own complacency on the matter was dynamited by the lab-leak essay that ran in the Bulletin of the Atomic Scientists earlier this month; a few weeks later everyone from Doctor Fauci to President Biden is acknowledging that the lab-accident hypothesis might have some merit. We don’t know the real answer yet, and we probably will never know, but this is the moment to anticipate what such a finding might ultimately mean. What if this crazy story turns out to be true?

The answer is that this is the kind of thing that could obliterate the faith of millions. The last global disaster, the financial crisis of 2008, smashed people’s trust in the institutions of capitalism, in the myths of free trade and the New Economy, and eventually in the elites who ran both American political parties. 

In the years since (and for complicated reasons), liberal leaders have labored to remake themselves into defenders of professional rectitude and established legitimacy in nearly every field. In reaction to the fool Trump, liberalism made a sort of cult out of science, expertise, the university system, executive-branch “norms,” the “intelligence community,” the State Department, NGOs, the legacy news media, and the hierarchy of credentialed achievement in general.

Now here we are in the waning days of Disastrous Global Crisis #2. Covid is of course worse by many orders of magnitude than the mortgage meltdown — it has killed millions and ruined lives and disrupted the world economy far more extensively. Should it turn out that scientists and experts and NGOs, etc. are villains rather than heroes of this story, we may very well see the expert-worshiping values of modern liberalism go up in a fireball of public anger.

Consider the details of the story as we have learned them in the last few weeks:

  • Lab leaks happen. They aren’t the result of conspiracies: “a lab accident is an accident,” as Nathan Robinson points out; they happen all the time, in this country and in others, and people die from them.
  • There is evidence that the lab in question, which studies bat coronaviruses, may have been conducting what is called “gain of function” research, a dangerous innovation in which diseases are deliberately made more virulent. By the way, right-wingers didn’t dream up “gain of function”: all the cool virologists have been doing it (in this country and in others) even as the squares have been warning against it for years.
  • There are strong hints that some of the bat-virus research at the Wuhan lab was funded in part by the American national-medical establishment — which is to say, the lab-leak hypothesis doesn’t implicate China alone.
  • There seem to have been astonishing conflicts of interest among the people assigned to get to the bottom of it all, and (as we know from Enron and the housing bubble) conflicts of interest are always what trip up the well-credentialed professionals whom liberals insist we must all heed, honor, and obey.
  • The news media, in its zealous policing of the boundaries of the permissible, insisted that Russiagate was ever so true but that the lab-leak hypothesis was false false false, and woe unto anyone who dared disagree. Reporters gulped down whatever line was most flattering to the experts they were quoting and then insisted that it was 100% right and absolutely incontrovertible — that anything else was only unhinged Trumpist folly, that democracy dies when unbelievers get to speak, and so on.
  • The social media monopolies actually censored posts about the lab-leak hypothesis. Of course they did! Because we’re at war with misinformation, you know, and people need to be brought back to the true and correct faith — as agreed upon by experts.
*

“Let us pray, now, for science,” intoned a New York Times columnist back at the beginning of the Covid pandemic. The title of his article laid down the foundational faith of Trump-era liberalism: “Coronavirus is What You Get When You Ignore Science.”

Ten months later, at the end of a scary article about the history of “gain of function” research and its possible role in the still ongoing Covid pandemic, Nicholson Baker wrote as follows: “This may be the great scientific meta-experiment of the 21st century. Could a world full of scientists do all kinds of reckless recombinant things with viral diseases for many years and successfully avoid a serious outbreak? The hypothesis was that, yes, it was doable. The risk was worth taking. There would be no pandemic.”

Except there was. If it does indeed turn out that the lab-leak hypothesis is the right explanation for how it began — that the common people of the world have been forced into a real-life lab experiment, at tremendous cost — there is a moral earthquake on the way.

Because if the hypothesis is right, it will soon start to dawn on people that our mistake was not insufficient reverence for scientists, or inadequate respect for expertise, or not enough censorship on Facebook. It was a failure to think critically about all of the above, to understand that there is no such thing as absolute expertise. Think of all the disasters of recent years: economic neoliberalism, destructive trade policies, the Iraq War, the housing bubble, banks that are “too big to fail,” mortgage-backed securities, the Hillary Clinton campaign of 2016 — all of these disasters brought to you by the total, self-assured unanimity of the highly educated people who are supposed to know what they’re doing, plus the total complacency of the highly educated people who are supposed to be supervising them.

Then again, maybe I am wrong to roll out all this speculation. Maybe the lab-leak hypothesis will be convincingly disproven. I certainly hope it is.

But even if it inches closer to being confirmed, we can guess what the next turn of the narrative will be. It was a “perfect storm,” the experts will say. Who coulda known? And besides (they will say), the origins of the pandemic don’t matter any more. Go back to sleep.

Sunday 21 March 2021

DECODING DENIALISM

Nadeem F. Paracha in The Dawn

Illustration by Abro


On November 12, 2009, the New York Times (NYT) ran a video report on its website. In it, the NYT reporter Adam B. Ellick interviewed some Pakistani pop stars to gauge how lifestyle liberals were being affected by the spectre of so-called ‘Talibanisation’ in Pakistan. To his surprise, almost every single pop artiste he managed to engage refused to believe that there were men willing to blow themselves up in public in the name of faith.

It wasn’t an outright denial, as such, but the interviewed pop acts went to great lengths to ‘prove’ that the attacks were being carried out at the behest of the US, and that those who were being called ‘terrorists’ were simply fighting for their rights. Ellick’s surprise was understandable. Between 2007 and 2009, hundreds of people had already been killed in Pakistan by suicide bombers.

But it wasn’t just these ‘confused’ lifestyle liberals who chose to look elsewhere for answers when the answer was right in front of them. Unregulated talk shows on TV news channels were constantly providing space to men who would spin the most ludicrous narratives that presented the terrorists as ‘misunderstood brothers.’

From 2007 till 2014, terrorist attacks and assassinations were a daily occurrence. Security personnel, politicians, men, women and children were slaughtered. Within hours, the cacophony of inarticulate noises on the electronic media would drown out these tragedies. The bottom line of almost every such ‘debate’ was always, ‘ye hum mein se nahin’ [these (terrorists) are not from among us]. In fact, there was also a song released with this as its title and ‘message.’

The perpetrators of the attacks were turned into intangible, invisible entities, like characters of urban myths that belong to a different realm. The fact was that they were very much among us, for all to see, even though most Pakistanis chose not to. 

Just before the 2013 elections, the website of an English daily ran a poll on the foremost problems facing Pakistan. The poll mentioned unemployment, corruption, inflation and street crimes, but there was no mention of terrorism even though, by 2013, thousands had been killed in terrorist attacks.

So how does one explain this curious refusal to acknowledge a terrifying reality that was operating in plain sight? In an August 3, 2018 essay for The Guardian, Keith Kahn-Harris writes that individual self-deception becomes a problem when it turns into ‘public dogma.’ It then becomes what is called ‘denialism.’

The American science journalist and author Michael Specter, in his book Denialism, explains it to mean an entire segment of society, when struggling with trauma, turning away from reality in favour of a more comfortable lie. Psychologists have often explained denial as a coping mechanism that humans use in times of stress. But they also warn that if denial establishes itself as a constant disposition in an individual or society, it starts to inhibit the ability to resolve the source of the stress.

Denialism, as a social condition, is understood by sociologists as an undeclared ‘ism’, adhered to by certain segments of a society whose rhetoric and actions in this context can impact a country’s political, social and even economic fortunes.

In the January 2009 issue of the European Journal of Public Health, Pascal Diethelm and Martin McKee write that the denialism process employs five main characteristics. Even though Diethelm and McKee were more focused on the emergence of denialism in the face of evidence in scientific fields of research, I will paraphrase four out of the five stated characteristics to explore denialism in the context of extremist violence in Pakistan from 2007 till 2017.

The first characteristic: the deniers have their own interpretation of the same evidence.
In early 2013, when a study showed that 1,652 people had been killed in 2012 alone in Pakistan because of terrorism, an ‘analyst’ on a news channel falsely claimed that these figures included those killed during street crimes and ‘revenge murders.’ Another gentleman insisted that the figures were concocted by foreign-funded NGOs ‘to give Pakistan and Islam a bad name.’

This brings us to denialism’s second characteristic: The use of fake experts. These are individuals who purport to be experts in a particular area but whose views are entirely inconsistent with established knowledge. During the peak years of terrorist activity in the country, self-appointed ‘political experts’ and ‘religious scholars’ were a common sight on TV channels. Their ‘expert opinions’ were heavily tilted towards presenting the terrorists as either ‘misunderstood brothers’ or people fighting to impose a truly Islamic system in Pakistan. Many such experts suddenly vanished from TV screens after the intensification of the military operation against militants in 2015. Some were even booked for hate speech.

The third characteristic is about selectivity, drawing on isolated opinions or highlighting flaws in the weakest opinions to discredit entire facts. In October 2012, when extremists attempted to assassinate a teenage schoolgirl, Malala Yousafzai, a sympathiser of the extremists on TV justified the assassination attempt by mentioning ‘similar incidents’ that he discovered in some obscure books of religious traditions. Within months Malala became the villain, even among some of the most ‘educated’ Pakistanis. When the nuclear physicist and intellectual Dr Pervez Hoodbhoy exhibited his disgust over this, he was not only accused of being ‘anti-Islam’, but his credibility as a scientist too was questioned.

The fourth characteristic is about misrepresenting the opposing argument to make it easier to refute. For example, when terrorists were wreaking havoc in Pakistan, the arguments of those seeking to investigate the issue beyond conspiracy theories and unabashed apologias were deliberately misconstrued as being criticisms of religious faith.

Today we are seeing all this returning. But this time, ‘experts’ are appearing on TV pointing out conspiracies and twisting facts about the Covid-19 pandemic and vaccines. They are also offering their expert opinions on events such as the Aurat March and, in the process, whipping up a dangerous moral panic.

It seems not much was learned from society’s collective denial during the peak years of terrorism, or from how it delayed a timely response that might have saved hundreds of innocent lives.

Friday 15 January 2021

Conspiracy theorists destroy a rational society: resist them

John Thornhill in The FT

Buzz Aldrin’s reaction to the conspiracy theorist who told him the moon landings never happened was understandable, if not excusable. The astronaut punched him in the face. 

Few things in life are more tiresome than engaging with cranks who refuse to accept evidence that disproves their conspiratorial beliefs — even if violence is not the recommended response. It might be easier to dismiss such conspiracy theorists as harmless eccentrics. But while that is tempting, it is in many cases wrong. 

As we have seen during the Covid-19 pandemic and in the mob assault on the US Congress last week, conspiracy theories can infect the real world — with lethal effect. Our response to the pandemic will be undermined if the anti-vaxxer movement persuades enough people not to take a vaccine. Democracies will not endure if lots of voters refuse to accept certified election results. We need to rebut unproven conspiracy theories. But how? 

The first thing to acknowledge is that scepticism is a virtue and critical scrutiny is essential. Governments and corporations do conspire to do bad things. The powerful must be effectively held to account. The US-led war against Iraq in 2003, to destroy weapons of mass destruction that never existed, is a prime example.  

The second is to re-emphasise the importance of experts, while accepting there is sometimes a spectrum of expert opinion. Societies have to base decisions on experts’ views in many fields, such as medicine and climate change, otherwise there is no point in having a debate. Dismissing the views of experts, as Michael Gove famously did during the Brexit referendum campaign, is to erode the foundations of a rational society. No sane passenger would board an aeroplane flown by an unqualified pilot.  

In extreme cases, societies may well decide that conspiracy theories are so harmful that they must suppress them. In Germany, for example, Holocaust denial is a crime. Social media platforms that do not delete such content within 24 hours of it being flagged are fined. 

In Sweden, the government is even establishing a national psychological defence agency to combat disinformation. A study published this week by the Oxford Internet Institute found “computational propaganda” is now being spread in 81 countries. 

Viewing conspiracy theories as political propaganda is the most useful way to understand them, according to Quassim Cassam, a philosophy professor at Warwick university who has written a book on the subject. In his view, many conspiracy theories support an implicit or explicit ideological goal: opposition to gun control, anti-Semitism or hostility to the federal government, for example. What matters to the conspiracy theorists is not whether their theories are true, but whether they are seductive. 

So, as with propaganda, conspiracy theories must be as relentlessly opposed as they are propagated. 

That poses a particular problem when someone as powerful as the US president is the one shouting the theories. Amid huge controversy, Twitter and Facebook have suspended Donald Trump’s accounts. But Prof Cassam says: “Trump is a mega disinformation factory. You can de-platform him and address the supply side. But you still need to address the demand side.” 

On that front, schools and universities should do more to help students discriminate fact from fiction. Behavioural scientists say it is more effective to “pre-bunk” a conspiracy theory — by enabling people to dismiss it immediately — than debunk it later. But debunking serves a purpose, too. 

As of 2019, there were 188 fact-checking sites in more than 60 countries. Their ability to inject facts into any debate can help sway those who are curious about conspiracy theories, even if they cannot convince true believers. 

Under intense public pressure, social media platforms are also increasingly filtering out harmful content and nudging users towards credible sources of information, such as medical bodies’ advice on Covid. 

Some activists have even argued for “cognitive infiltration” of extremist groups, suggesting that government agents should intervene in online chat rooms to puncture conspiracy theories. That may work in China but is only likely to backfire in western democracies, igniting an explosion of new conspiracy theories. 

Ultimately, we cannot reason people out of beliefs that they have not reasoned themselves into. But we can, and should, punish those who profit from harmful irrationality. There is a tried-and-tested method of countering politicians who peddle and exploit conspiracy theories: vote them out of office.

Thursday 25 June 2020

60 is the new 80 thanks to Corona

 Patti Waldmeir in The FT

“Better be safe than sorry.” I have never believed that. 


I have lived my first 65 years often turning a blind eye to risk. I lived in China for eight years, enduring some of the worst industrial pollution on earth, despite having asthma. I risked damaging the lungs of my then small children by raising them in a place where their school often locked them in air-purified classrooms to protect them from the smog. 

Before that, I lived for 20 years in Africa, refusing to boil water in areas where it needed boiling, eating bushmeat at roadside stalls — not to mention the escapades that I got up to as a young woman in the pre-Aids era. 

But now, as I peer over the precipice into life as a senior citizen, coronavirus has finally introduced me to the concept of risk. Part of it is the whole “60 is the new 80” paradigm that the pandemic has forced on us — but most of it is that, whether I like it or not, I fit squarely in the category of “at risk” for severe illness or death if I catch Covid-19. 

I have diabetes, asthma and am finishing my 65th year. I don’t live in a nursing home, a jail, a monastery or a convent (as does one close friend with Covid-19), but according to the US Centers for Disease Control and Prevention (CDC), I still qualify as high risk because of my underlying conditions and age. 

So what do I — and people like me, I am far from alone — do now that the world is reopening without us? I’ve got some big decisions to make in the next few days. My youngest child is moving back to our flat outside Chicago after a month living elsewhere: does one of us need to be locked in the bedroom? Do I have to eat on the balcony for two weeks? 

There is no shortage of people, not least President Donald Trump, telling me that all this is simple: vulnerable people should just stay home. But what if they live with other people? What if those people have jobs? And what about our dogs? Our two old mutts are overdue for a rabies shot because the vet was only seeing emergencies. Is it safe for me to take them in now? Can my kids go to the dentist, and then come home to live at close quarters with me? 

I asked several medical experts these questions, and they all offered versions of “we haven’t got a clue”. Robert Gabbay, incoming chief scientific and medical officer of the American Diabetes Association, was the most helpful: “Individuals with diabetes are all in the higher-risk category but even within that category, those who are older and with co-morbidities are at more risk — and control of blood glucose seems to matter. 

“You are probably somewhere in the middle” of the high-risk category, he decided. My diabetes is well controlled and I don’t have many other illnesses. “But your age is a factor,” he added. Up to now, I’ve thought I was in the “60 is the new 40” crowd: now I know there is no such crowd.

The head of the Illinois Department of Public Health underlined this at the weekend when she gave her personal list of Covid dos and don’ts, including don’t visit a parent who is over 65 with pre-existing conditions for at least a year, or until there is a cure. Dr Ngozi Ezike also said she would not attend a wedding or a dinner party for a year and would avoid indoor restaurants for three months to a year — despite the fact that Chicago’s indoor restaurants reopen on Friday. 

I turned to the CDC, which initially said it would issue new guidance for “at risk” people last week, but didn’t. This would be the same CDC that I trusted when it said not to wear a mask — though 1.3 billion people in China were masking up. Today China, which is 100 times larger by population than my home state of Illinois, has less than three-quarters as many total pandemic deaths. (Yes, I know China has been accused of undercounting cases, but so has the US.) Masks aren’t the only reason; but they are enough of a reason to erode my trust in what the CDC thinks I should do now. 

It doesn’t help that the CDC website lists “moderate to severe asthma” as one of the primary risk factors for poor coronavirus outcomes — while the American Academy of Allergy Asthma and Immunology says “there are no published data to support this determination”, adding that there is “no evidence” that those with asthma are more at risk. Who’s right? 

I need to know: this weekend is the one-year anniversary of the death of my eldest sibling. I’ve chosen not to make the trip to visit his grave in Michigan. Next month, I turn 65, and I want to spend that day with my 89-year-old father: should we rent a camper van, so we don’t infect his household? I thought about a porta potty for the journey, since public toilets are apparently a coronavirus hotspot. When I started searching for “female urination devices” online, I knew it was time to ditch this new “better safe than sorry” persona I’ve assumed under lockdown. 

Maybe it’s time to remind myself of a fact that I once knew: that life is a risky business, and there is only so much I can do about that. I’ll die when it’s my time — probably not a day before or after, coronavirus or no coronavirus.

Monday 8 June 2020

We often accuse the right of distorting science. But the left changed the coronavirus narrative overnight

Racism is a health crisis. But poverty is too – yet progressives blithely accepted the costs of throwing millions of people like George Floyd out of work, writes Thomas Chatterton Williams in The Guardian


 
‘Less than two weeks ago, the enlightened position was to exercise extreme caution. Many of us went further, taking to social media to shame others for insufficient social distancing.’ Photograph: Devon Ravine/AP


When I reflect back on the extraordinary year of 2020 – from, I hope, some safer, saner vantage – one of the two defining images in my mind will be the surreal figure of the Grim Reaper stalking the blazing Florida shoreline, scythe in hand, warning the sunbathing masses of imminent death and granting interviews to reporters. The other will be a prostrate George Floyd, whose excruciating Memorial Day execution sparked a global protest movement against racism and police violence.

Less than two weeks after Floyd’s killing, the American death toll from the novel coronavirus has surpassed 100,000. Rates of infection, domestically and worldwide, are rising. But one of the few things it seems possible to say without qualification is that the country has indeed reopened. For 13 days straight, in cities across the nation, tens of thousands of men and women have massed in tight-knit proximity, with and without personal protective equipment, often clashing with armed forces, chanting, singing and inevitably increasing the chances of the spread of contagion.

Scenes of outright pandemonium unfold daily. Anyone claiming to have a precise understanding of what is happening, and what the likely risks and consequences may be, should be regarded with the utmost skepticism. We are all living in a techno-dystopian fantasy, the internet-connected portals we rely on rendering the world in all its granular detail and absurdity like Borges’s Aleph. Yet we know very little about what it is we are watching.

I open my laptop and glimpse a rider on horseback galloping through the Chicago streets like Ras the Destroyer in Ralph Ellison’s Invisible Man; I scroll down further and find myself in Los Angeles, as the professional basketball star JR Smith pummels a scrawny anarchist who smashed his car window. I keep going and encounter a mixed group of business owners in Van Nuys risking their lives to defend their businesses from rampaging looters; the black community members trying to help them are swiftly rounded up by police officers who mistake them for the criminals. In Buffalo, a 75-year-old white man approaches a police phalanx and is immediately thrown to the pavement; blood spills from his ear as the police continue to march over him. Looming behind all of this chaos is a reality-TV president giddily tweeting exhortations to mass murder, only venturing out of his bunker to teargas peaceful protesters and stage propaganda pictures.


But this virus – for which we may never even find a vaccine – knows and respects none of this socio-political context. Its killing trajectory isn’t rational, emotional, or ethical – only mathematical. And just as two plus two is four, when a flood comes, low-lying areas get hit the hardest. Relatively poor, densely clustered populations with underlying conditions suffer disproportionately in any environment in which Covid-19 flourishes. Since the virus made landfall in the US, it has killed at least 20,000 black Americans.

After two and a half months of death, confinement, and unemployment figures dwarfing even the Great Depression, we have now entered the stage of competing urgencies where there are zero perfect options. Police brutality is a different if metaphorical epidemic in an America slouching toward authoritarianism. Catalyzed by the spectacle of Floyd’s reprehensible death, it is clear that the emergency in Minneapolis passes my own and many people’s threshold for justifying the risk of contagion.

But poverty is also a public health crisis. George Floyd wasn’t merely killed for being black – he was also killed for being poor. He died over a counterfeit banknote. Poverty destroys Americans every day by means of confrontations with the law, disease, pollution, violence and despair. Yet even as the coronavirus lockdown threw 40 million Americans out of work – including Floyd himself – many progressives accepted this calamity, sometimes with stunning blitheness, as the necessary cost of guarding against Covid-19.

The new, “correct” narrative about public health – that one kind of crisis has superseded the other – grows shakier as it fans out from Minnesota, across America to as far as London, Amsterdam and Paris – cities that have in recent days seen extraordinary manifestations of public solidarity against both American and local racism, with protesters in the many thousands flooding public spaces.

Consider France, where I live. The country has only just begun reopening after two solid months of one of the world’s severest national quarantines, and in the face of the world’s fifth-highest coronavirus body count. As recently as 11 May, it was mandatory here to carry a fully executed state-administered permission slip on one’s person in order to legally exercise or go shopping. The country has only just begun to flatten the curve of deaths – nearly 30,000 and counting – which have brought its economy to a standstill. Yet even here, in the time it takes to upload a black square to your Instagram profile, those of us who move in progressive circles now find ourselves under significant moral pressure to understand that social distancing is an issue of merely secondary importance.

This feels like gaslighting. Less than two weeks ago, the enlightened position in both Europe and America was to exercise nothing less than extreme caution. Many of us went much further, taking to social media to castigate others for insufficient social distancing or neglecting to wear masks or daring to believe they could maintain some semblance of a normal life during coronavirus. At the end of April, when the state of Georgia moved to end its lockdown, the Atlantic ran an article with the headline “Georgia’s Experiment in Human Sacrifice”. Two weeks ago we shamed people for being in the street; today we shame them for not being in the street.

As a result of lockdowns and quarantines, many millions of people around the world have lost their jobs, depleted their savings, missed funerals of loved ones, postponed cancer screenings and generally put their lives on hold for the indefinite future. They accepted these sacrifices as awful but necessary when confronted by an otherwise unstoppable virus. Was this or wasn’t this all an exercise in futility?

“The risks of congregating during a global pandemic shouldn’t keep people from protesting racism,” NPR suddenly tells us, citing a letter signed by dozens of American public health and disease experts. “White supremacy is a lethal public health issue that predates and contributes to Covid-19,” the letter said. One epidemiologist has gone even further, arguing that the public health risks of not protesting for an end to systemic racism “greatly exceed the harms of the virus”.

The climate-change-denying right is often ridiculed, correctly, for politicizing science. Yet the way the public health narrative around coronavirus has reversed itself overnight seems an awful lot like … politicizing science.

What are we to make of such whiplash-inducing messaging? Merely pointing out the inconsistency in such a polarized landscape feels like an act of heresy. But “‘Your gatherings are a threat, mine aren’t,’ is fundamentally illogical, no matter who says it or for what reason,” as the author of The Death of Expertise, Tom Nichols, put it. “We’ve been told for months to stay as isolated as humanly possible,” Suzy Khimm, an NBC reporter covering Covid-19, noted, but “some of the same public officials and epidemiologists are [now] saying it’s OK to go to mass gatherings – but only certain ones.”

Public health experts – as well as many mainstream commentators, plenty of whom in the beginning of the pandemic were already incoherent about the importance of face masks and stay-at-home orders – have hemorrhaged credibility and authority. This is not merely a short-term problem; it will constitute a crisis of trust going forward, when it may be all the more urgent to convince skeptical masses to submit to an unproven vaccine or to another round of crushing stay-at-home orders. Will anyone still listen?

Seventy years ago Camus showed us that the human condition itself amounts to a plague-like emergency – we are only ever managing our losses, striving for dignity in the process. Risk and safety are relative notions and never strictly objective. However, there is one inconvenient truth that cannot be disputed: more black Americans have been killed by three months of coronavirus than the number who have been killed by cops and vigilantes since the turn of the millennium. We may or may not be willing to accept that brutal calculus, but we are obligated, at the very least, to be honest.

Saturday 28 September 2019

Beware the nuclear con man

Pervez Hoodbhoy in The Dawn


INDIAN leaders of unbridled ambition and meagre wisdom have recently suggested that India might revoke its earlier policy of No First Use (NFU) of nuclear weapons. They should be forgiven. To stay in the public eye, South Asia’s street-smart politicians need to make a lot of noise all the time. Most did not do very well in school and even fewer made it to college or university (and some ended up playing sports there).

Nuclear strategists, on the other hand, are advertised to be academic hotshots. The high-flying ones belong to various think tanks and universities — including prestigious ones in the United States. These so-called experts fill academic journals with thickly referenced research papers, participate in weighty-sounding conferences, and endlessly split hairs on minutiae like the difference between nuclear deterrence versus nuclear dissuasion.

Slyly hinting that NFU has run its course and needs a replacement, several Indian strategists have been openly flirting with a so-called counterforce doctrine — ie the possibility of knocking out Pakistan’s nuclear forces before they are activated. Paid to serve power rather than truth, like the proverbial serpent they whisper ideas into eager official ears. Their academic discourse and heavy language give the impression that they really know what they are talking about. They don’t. In fact, they are clueless.

Here’s why. Every nuclear nation confines its deepest secrets to an extremely tight inner circle. Outsiders — meaning civilians — are excluded from what is critical. They cannot know such crucial details as the chain of nuclear command, geographical dispersal of warheads and delivery vehicles, intelligence on how well the adversary has concealed its nukes, whether warheads are mated or de-mated from delivery vehicles, integrity of communication channels, the efficacy of decoys and countermeasures, and much other vital information that would determine whether a first strike would achieve its objective.

So how do self-important know-nothing strategists — Indian, Pakistani, and American — ensure their salaries will continue reaching their bank accounts? Well, they write papers and therefore have to perfect the art of saying nothing — or perhaps next to nothing — in 5,000 words. Fact: no nuclear strategist knows the threshold of a nuclear war, can predict the sequence of events following a first strike, or persuasively argue whether nuclear hostilities could somehow be wound down. Of course he can guess — just as every Tom, Dick, and Harry can. But guesses are only guesses.

Could it perhaps be better inside a military organisation? War gaming is certainly a compulsory part of an officer’s training and one can feed parameters into a computer set up for simulating the onset and subsequent trajectory of a nuclear conflict. If properly programmed and proper probabilities are inputted, it will output the probabilities of various possible outcomes. But, as in tossing coins, probabilities make sense only when something can be repeated a large number of times. The problem is that nuclear war can happen only once.
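By way of illustration only — this is a toy coin-toss model with invented probabilities, not any actual war-gaming software — the sketch below shows what such a simulation amounts to: the output “probabilities” are just frequencies over many repeated runs, which is exactly what a one-off event cannot supply.

```python
# Toy illustration of the repetition problem described above. The
# scenario and all probabilities are invented for illustration.
import random

def one_run(p_intercept=0.5, p_failure=0.2):
    """One simulated scenario: does a single strike get through and work?"""
    intercepted = random.random() < p_intercept
    failed = random.random() < p_failure
    return (not intercepted) and (not failed)

random.seed(0)
for n in (1, 10, 10_000):
    hits = sum(one_run() for _ in range(n))
    print(f"{n:>6} runs -> estimated 'success' rate {hits / n:.2f}")
# Over 10,000 runs the estimate settles near 0.40 (0.5 * 0.8); over a
# single run it can only be 0 or 1 -- the author's point about events
# that can happen at most once.
```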

That’s bad enough but, in fact, it’s even worse than that. You can give probabilities for missiles to be intercepted or for getting through, and for mechanical and electrical systems to work or fail. But you cannot assign probabilities for humans to act in a particular way during a crisis because that depends on mood, perception, personality and circumstance. Nuclear strategy pretends to be a science but is by no means one. Where has the other party drawn its nuclear red line (the real, not stated, one)? No one knows.

Consider: would one nuke fired at invading Indian tanks from a Pakistani Nasr missile battery elicit zero, one, three, or 30 Indian nukes as retaliation? The Indians say that a single nuke used against them, whether on Pakistani or Indian soil, constitutes a full-blown nuclear attack upon India. Should one believe them? Would panic ensue and cause one or both sides to descend into a totalistic use-them-or-lose-them mode? No one knows.

The nuclearised confrontation between India and Pakistan over Kashmir is best seen as a territorial fight between two street cats. I have had occasion to watch several. You can hear the growls grow louder. These then combine with hissing after which howls and growls get mixed. Sometimes they fight and sometimes not. Since they have only claws and teeth, never do both cats end up dead. But with nuclear weapons two opponents would strictly eliminate each other. In addition, their war would seriously devastate neighbouring countries and poison much of the globe.

The catfight analogy helps illuminate, for example, Defence Minister Rajnath Singh’s statement that continuation of India’s NFU policy depends upon ‘circumstances’. Since he left ‘circumstances’ unspecified, this could cover everything under the sun. Although dozens of articles were published commenting on his statement, in fact it carried exactly zero content. NFU is purely declaratory, impossible to verify and impossible to enforce. Nevertheless the statement was significant — the growling had become a tad louder. Plus, did you hear a slight hiss?

India’s hint at moving away from NFU towards counterforce owes to its increased military advantage over Pakistan. But hubris often paves the way to overconfidence and disaster. As every military commander worth his salt knows, all plans look fine until the battle begins. Last week a ragtag Houthi militia took out 50 per cent of Saudi Arabia’s oil-producing capacity, underscoring how even a relatively ill-equipped force can wreck an adversary bristling with the most advanced weapons that limitless oil dollars could buy. Sellers of snake oil and con men do not deserve anyone’s ears or respect. Whoever advocates a nuclear first strike should be quickly locked up in a mental asylum.

Some years after the Kargil episode, Gen Pervez Musharraf realised that nuclear weapons had brought Pakistan and India to an impasse. He is so far the only leader courageous enough to explicitly acknowledge this and — most importantly — to say aloud that, for better or for worse, mutual fear of nuclear annihilation has etched the LoC in stone. It remains to be seen if other Pakistani and Indian leaders can dare to follow his example. Only then might peace get half a chance.

Friday 1 June 2018

I can make one confident prediction: my forecasts will fail

Tim Harford in The Financial Times 

I am not one of those clever people who claims to have seen the 2008 financial crisis coming, but by this time 10 years ago I could see that the fallout was going to be bad. Banking crises are always damaging, and this was a big one. The depth of the recession and the long-lasting hit to productivity came as no surprise to me. I knew it would happen. 


Or did I? This is the story I tell myself, but if I am honest I do not really know. I did not keep a diary, and so must rely on my memory — which, it turns out, is not a reliable servant. 

In 1972, the psychologists Baruch Fischhoff and Ruth Beyth conducted a survey in which they asked for predictions about Richard Nixon’s imminent presidential visits to China and the Soviet Union. How likely was it that Nixon and Mao Zedong would meet? What were the chances that the US would grant diplomatic recognition to China? Professors Fischhoff and Beyth wanted to know how people would later remember their forecasts. Since their subjects had taken the unusual step of writing down a specific probability for each of 15 outcomes, one might have hoped for accuracy. But no — the subjects flattered themselves hopelessly. The Fischhoff-Beyth paper was titled “I knew it would happen”.

This is a reminder of what a difficult task we face when we try to make big-picture macroeconomic and geopolitical forecasts. To start with, the world is a complicated place, which makes predictions challenging. For many of the subjects that interest us, there is a substantial delay between the forecast and the outcome, and this delayed feedback makes it harder to learn from our successes and failures. Even worse, as Profs Fischhoff and Beyth discovered, we systematically misremember what we once believed. 

Small wonder that forecasters turn to computers for help. We have also known for a long time — since work in the 1950s by the late psychologist Paul Meehl — that simple statistical rules often outperform expert intuition. Meehl’s initial work focused on clinical cases — for example, faced with a patient suffering chest pains, could a two or three-point checklist beat the judgment of an expert doctor? The experts did not fare well. However, Meehl’s rules, like more modern machine learning systems, require data to work. It is all very well for Amazon to forecast what impact a price drop may have on the demand for a book — and some of the most successful hedge funds use algorithmically-driven strategies — but trying to forecast the chance of Italy leaving the eurozone, or Donald Trump’s impeachment, is not as simple. Faced with an unprecedented situation, machines are no better than we are. And they may be worse. 

Much of what we know about forecasting in a complex world, we know from the research of the psychologist Philip Tetlock. In the 1980s, Prof Tetlock began to build on the Fischhoff-Beyth research by soliciting specific and often long-term forecasts from a wide variety of forecasters — initially hundreds. The early results, described in Prof Tetlock’s book Expert Political Judgment, were not encouraging. Yet his idea of evaluating large numbers of forecasters over an extended period of time has blossomed, and some successful forecasters have emerged.

The latest step in this research is a “Hybrid Forecasting Tournament”, sponsored by the US Intelligence Advanced Research Projects Activity, designed to explore ways in which humans and machine learning systems can co-operate to produce better forecasts. We await the results. If the computers do produce some insight, it may be because they can tap into data that we could hardly have imagined using before. Satellite imaging can now track the growth of crops or the stockpiling of commodities such as oil. Computers can guess at human sentiment by analysing web searches for terms such as “job seekers allowance”, mentions of “recession” in news stories, and positive emotions in tweets. 

And there are stranger correlations, too. A study by economists Kasey Buckles, Daniel Hungerman and Steven Lugauer showed that a few quarters before an economic downturn in the US, the rate of conceptions also falls. Conceptions themselves may be deducible by computers tracking sales of pregnancy tests and folic acid. 

Back in 1991, a psychologist named Harold Zullow published research suggesting that the emotional content of songs in the Billboard Hot 100 chart could predict recessions. Hits containing “pessimistic rumination” (“I heard it through the grapevine / Not much longer would you be mine”) tended to predict an economic downturn. 

His successor is a young economist named Hisam Sabouni, who reckons that a computer-aided analysis of Spotify streaming gives him an edge in forecasting stock market movements and consumer sentiment. Will any of this prove useful for forecasting significant economic and political events? Perhaps. But for now, here is an easy way to use a computer to help you forecast: open up a spreadsheet, note down what you believe today, and regularly revisit and reflect. The simplest forecasting tip of all is to keep score.
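A minimal sketch of that last tip, with every entry invented: write each forecast down with a probability on the day you make it, record how it resolved, and let the computer do the scoring so memory cannot flatter you.

```python
# Minimal sketch of a "keep score" forecast journal. Every claim,
# probability and outcome below is invented for illustration.
from datetime import date

journal = [
    {"made": date(2018, 6, 1), "claim": "UK rate rise by year end",      "p": 0.7, "came_true": False},
    {"made": date(2018, 6, 1), "claim": "US recession within 12 months", "p": 0.2, "came_true": False},
    {"made": date(2018, 6, 1), "claim": "Italy leaves euro by 2020",     "p": 0.1, "came_true": False},
    {"made": date(2018, 6, 1), "claim": "oil above $80 by September",    "p": 0.6, "came_true": True},
]

# Brier-style score per resolved forecast: squared gap between the
# stated probability and the outcome (lower is better), then the mean.
scores = [(f["p"] - float(f["came_true"])) ** 2 for f in journal]
print(f"forecasts scored: {len(scores)}")
print(f"average score:    {sum(scores) / len(scores):.3f}")  # 0.175 here
```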

Tuesday 1 May 2018

Should politicians be replaced by experts?

In the age of Trump and Brexit, some people say that democracy is fatally flawed and we should be ruled by ‘those who know best’. Here’s why that’s not very clever. David Runciman in The Guardian

Democracy is tired, vindictive, self-deceiving, paranoid, clumsy and frequently ineffectual. Much of the time it is living on past glories. This sorry state of affairs reflects what we have become. But current democracy is not who we are. It is just a system of government, which we built, and which we could replace. So why don’t we replace it with something better?

This line of argument has grown louder in recent years, as democratic politics has become more unpredictable and, to many, deeply alarming in its outcomes. First Brexit, then Donald Trump, plus the rise of populism and the spread of division, have started a tentative search for plausible alternatives. But the rival systems we see around us have a very limited appeal. The unlovely forms of 21st-century authoritarianism can at best provide only a partial, pragmatic alternative to democracy. The world’s strongmen still pander to public opinion, and in the case of competitive authoritarian regimes such as the ones in Hungary and Turkey, they persist with the rigmarole of elections. From Trump to Recep Tayyip Erdoğan is not much of a leap into a brighter future.

There is a far more dogmatic alternative, which has its roots in the 19th century. Why not ditch the charade of voting altogether? Stop pretending to respect the views of ordinary people – it’s not worth it, since the people keep getting it wrong. Respect the experts instead! This is the truly radical option. So should we try it?

The name for this view of politics is epistocracy: the rule of the knowers. It is directly opposed to democracy, because it argues that the right to participate in political decision-making depends on whether or not you know what you are doing. The basic premise of democracy has always been that it doesn’t matter how much you know: you get a say because you have to live with the consequences of what you do. In ancient Athens, this principle was reflected in the practice of choosing office-holders by lottery. Anyone could do it because everyone – well, everyone who wasn’t a woman, a foreigner, a pauper, a slave or a child – counted as a member of the state. With the exception of jury service in some countries, we don’t choose people at random for important roles any more. But we do uphold the underlying idea by letting citizens vote without checking their suitability for the task.

Critics of democracy – starting with Plato – have always argued that it means rule by the ignorant, or worse, rule by the charlatans that the ignorant people fall for. Living in Cambridge, a passionately pro-European town and home to an elite university, I heard echoes of that argument in the aftermath of the Brexit vote. It was usually uttered sotto voce – you have to be a brave person to come out as an epistocrat in a democratic society – but it was unquestionably there. Behind their hands, very intelligent people muttered to each other that this is what you get if you ask a question that ordinary people don’t understand. Dominic Cummings, the author of the “Take Back Control” slogan that helped win the referendum, found that his critics were not so shy about spelling it out to his face. Brexit happened, they told him, because the wicked people lied to the stupid people. So much for democracy.

To say that democrats want to be ruled by the stupid and the ignorant is unfair. No defender of democracy has ever claimed that stupidity or ignorance are virtues in themselves. But it is true that democracy doesn’t discriminate on the grounds of a lack of knowledge. It considers the ability to think intelligently about difficult questions a secondary consideration. The primary consideration is whether an individual is implicated in the outcome. Democracy asks only that the voters should be around long enough to suffer for their own mistakes.

The question that epistocracy poses is: why don’t we discriminate on the basis of knowledge? What’s so special about letting everyone take part? Behind it lies the intuitively appealing thought that, instead of living with our mistakes, we should do everything in our power to prevent them in the first place – then it wouldn’t matter who has to take responsibility.

This argument has been around for more than 2,000 years. For most of that time, it has been taken very seriously. The consensus until the end of the 19th century was that democracy is usually a bad idea: it is just too risky to put power in the hands of people who don’t know what they are doing. Of course, that was only the consensus among intellectuals. We have little way of knowing what ordinary people thought about the question. Nobody was asking them.

Over the course of the 20th century, the intellectual consensus was turned around. Democracy established itself as the default condition of politics, its virtues far outweighing its weaknesses. Now the events of the 21st century have revived some of the original doubts. Democracies do seem to be doing some fairly stupid things at present. Perhaps no one will be able to live with their mistakes. In the age of Trump, climate change and nuclear weapons, epistocracy has teeth again.

So why don’t we give more weight to the views of the people who are best qualified to evaluate what to do? Before answering that question, it is important to distinguish between epistocracy and something with which it is often confused: technocracy. They are different. Epistocracy means rule by the people who know best. Technocracy is rule by mechanics and engineers. A technocrat is someone who understands how the machinery works.

In November 2011, Greek democracy was suspended and an elected government was replaced by a cabinet of experts, tasked with stabilising the collapsing Greek economy before new elections could be held. This was an experiment in technocracy, however, not epistocracy. The engineers in this case were economists. Even highly qualified economists often haven’t a clue what’s best to do. What they know is how to operate a complex system that they have been instrumental in building – so long as it behaves the way it is meant to. Technocrats are the people who understand what’s best for the machine. But keeping the machine running might be the worst thing we could do. Technocrats won’t help with that question.

Both representative democracy and pragmatic authoritarianism have plenty of space for technocracy. Increasingly, each system has put decision-making capacity in the hands of specially trained experts, particularly when it comes to economic questions. Central bankers wield significant power in a wide variety of political systems around the world. For that reason, technocracy is not really an alternative to democracy. Like populism, it is more of an add-on. What makes epistocracy different is that it prioritises the “right” decision over the technically correct decision. It tries to work out where we should be going. A technocrat can only tell us how we should get there.

How would epistocracy function in practice? The obvious difficulty is knowing who should count as the knowers. There is no formal qualification for being a general expert. It is much easier to identify a suitable technocrat. Technocracy is more like plumbing than philosophy. When Greece went looking for economic experts to sort out its financial mess, it headed to Goldman Sachs and the other big banks, since that is where the technicians were congregated. When a machine goes wrong, the people responsible for fixing it often have their fingerprints all over it already.

Historically, some epistocrats have tackled the problem of identifying who knows best by advocating non-technical qualifications for politics. If there were such a thing as the university of life, that’s where these epistocrats would want political decision-makers to get their higher degrees. But since there is no such university, they often have to make do with cruder tests of competence. The 19th-century philosopher John Stuart Mill argued for a voting system that granted varying numbers of votes to different classes of people depending on what jobs they did. Professionals and other highly educated individuals would get six or more votes each; farmers and traders would get three or four; skilled labourers would get two; unskilled labourers would get one. Mill also pushed hard for women to get the vote, at a time when that was a deeply unfashionable view. He did not do this because he thought women were the equals of men. It was because he thought some women, especially the better educated, were superior to most men. Mill was a big fan of discrimination, so long as it was on the right grounds.

To 21st-century eyes, Mill’s system looks grossly undemocratic. Why should a lawyer get more votes than a labourer? Mill’s answer would be to turn the question on its head: why should a labourer get the same number of votes as a lawyer? Mill was no simple democrat, but he was no technocrat either. Lawyers didn’t qualify for their extra votes because politics placed a special premium on legal expertise. No, lawyers got their extra votes because what’s needed are people who have shown an aptitude for thinking about questions with no easy answers. Mill was trying to stack the system to ensure as many different points of view as possible were represented. A government made up exclusively of economists or legal experts would have horrified him. The labourer still gets a vote. Skilled labourers get two. But even though a task like bricklaying is a skill, it is a narrow one. What was needed was breadth. Mill believed that some points of view carried more weight simply because they had been exposed to more complexity along the way.
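
To make the arithmetic of such a scheme concrete, here is a small sketch in Python. The occupational weights follow the rough scale described above, but the size of the electorate and its candidate preferences are invented purely for illustration; nothing below comes from Mill himself.

```python
# A toy tally under a Mill-style plural-voting scheme, using weights close to
# the scale sketched above (one vote for unskilled labourers, up to six for
# professionals). The electorate and its preferences are entirely made up.
weights = {"unskilled": 1, "skilled": 2, "farmer_trader": 3, "professional": 6}

# (occupational class, number of such voters, candidate they back)
electorate = [
    ("unskilled", 400, "A"),
    ("skilled", 250, "A"),
    ("farmer_trader", 200, "B"),
    ("professional", 150, "B"),
]

def tally(electorate, plural=True):
    """Count votes, either one per person or weighted by occupational class."""
    totals = {}
    for occupation, count, candidate in electorate:
        votes = count * (weights[occupation] if plural else 1)
        totals[candidate] = totals.get(candidate, 0) + votes
    return totals

print("One person, one vote:", tally(electorate, plural=False))  # A wins, 650 to 350
print("Mill-style weighting: ", tally(electorate, plural=True))  # B wins, 1,500 to 900
```

The toy example shows only that weighting can flip an outcome a one-person-one-vote count would settle the other way – which is exactly why the grounds for discrimination mattered so much to Mill.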

Jason Brennan, a very 21st-century philosopher, has tried to revive the epistocratic conception of politics, drawing on thinkers like Mill. In his 2016 book Against Democracy, Brennan insists that many political questions are simply too complex for most voters to comprehend. Worse, the voters are ignorant about how little they know: they lack the ability to judge complexity because they are so attached to simplistic solutions that feel right to them.

Brennan writes: “Suppose the United States had a referendum on whether to allow significantly more immigrants into the country. Knowing whether this is a good idea requires tremendous social scientific knowledge. One needs to know how immigration tends to affect crime rates, domestic wages, immigrants’ welfare, economic growth, tax revenues, welfare expenditures and the like. Most Americans lack this knowledge; in fact, our evidence is that they are systematically mistaken.”

In other words, it’s not just that they don’t know; it’s not even that they don’t know that they don’t know; it’s that they are wrong in ways that reflect their unwavering belief that they are right.

 
Some philosophers advocate exams for voters, to ‘screen out citizens who are badly misinformed’. Photograph: David Jones/PA

Brennan doesn’t have Mill’s faith that we can tell how well-equipped someone is to tackle a complex question by how difficult that person’s job is. There is too much chance and social conditioning involved. He would prefer an actual exam, to “screen out citizens who are badly misinformed or ignorant about the election, or who lack basic social scientific knowledge”. Of course, this just pushes the fundamental problem back a stage without resolving it: who gets to set the exam? Brennan teaches at a university, so he has little faith in the disinterested qualities of most social scientists, who have their own ideologies and incentives. He has also seen students cramming for exams, which can produce its own biases and blind spots. Still, he thinks Mill was right to suggest that the further one advances up the educational ladder, the more votes one should get: five extra votes for finishing high school, another five for a bachelor’s degree, and five more for a graduate degree.

Brennan is under no illusions about how provocative this case is today, 150 years after Mill made it. In the middle of the 19th century, the idea that political status should track social and educational standing was barely contentious; today, it is barely credible. Brennan also has to face the fact that contemporary social science provides plenty of evidence that the educated are just as subject to groupthink as other people, sometimes even more so. The political scientists Larry Bartels and Christopher Achen point this out in their 2016 book Democracy for Realists: “The historical record leaves little doubt that the educated, including the highly educated, have gone wrong in their moral and political thinking as often as everyone else.” Cognitive biases are no respecters of academic qualifications. How many social science graduates would judge the question about immigration according to the demanding tests that Brennan lays out, rather than according to what they would prefer to believe? The irony is that if Brennan’s voter exam were to ask whether the better-educated deserve more votes, the technically correct answer might be no. It would depend on who was marking it.

However, in one respect Brennan insists that the case for epistocracy has grown far stronger since Mill made it. That is because Mill was writing at the dawn of democracy. Mill published his arguments in the run-up to what became the Second Reform Act of 1867, which doubled the size of the franchise in Britain to nearly 2.5 million voters (out of a general population of 30 million). Mill’s case for epistocracy was based on his conviction that over time it would merge into democracy. The labourer who gets one vote today would get more tomorrow, once he had learned how to use his vote wisely. Mill was a great believer in the educative power of democratic participation.

Brennan thinks we now have 100-plus years of evidence that Mill was wrong. Voting is bad for us. It doesn’t make people better informed. If anything, it makes them stupider, because it dignifies their prejudices and ignorance in the name of democracy. “Political participation is not valuable for most people,” Brennan writes. “On the contrary, it does most of us little good and instead tends to stultify and corrupt us. It turns us into civic enemies who have grounds to hate one another.” The trouble with democracy is that it gives us no reason to become better informed. It tells us we are fine as we are. And we’re not.

In the end, Brennan’s argument is more historical than philosophical. If we were unaware of how democracy would turn out, it might make sense to cross our fingers and assume the best of it. But he insists that we do know, and so we have no excuse to keep kidding ourselves. Brennan thinks that we should regard epistocrats like himself as being in the same position as democrats were in the mid-19th century. What he is championing is anathema to many people, as democracy was back then. Still, we took a chance on democracy, waiting to see how it would turn out. Why shouldn’t we take a chance on epistocracy, now we know how the other experiment went? Why do we assume that democracy is the only experiment we are ever allowed to run, even after it has run out of steam?

It’s a serious question, and it gets to how the longevity of democracy has stifled our ability to think about the possibility of something different. What was once a seemingly reckless form of politics has become a byword for caution. And yet there are still good reasons to be cautious about ditching it. Epistocracy remains the reckless idea. There are two dangers in particular.

The first is that we set the bar too high in politics by insisting on looking for the best thing to do. Sometimes it is more important to avoid the worst. Even if democracy is often bad at coming up with the right answers, it is good at unpicking the wrong ones. Moreover, it is good at exposing people who think they always know best. Democratic politics assumes there is no settled answer to any question and it ensures that is the case by allowing everyone a vote, including the ignorant. The randomness of democracy – which remains its essential quality – protects us against getting stuck with truly bad ideas. It means that nothing will last for long, because something else will come along to disrupt it.

Epistocracy is flawed because of the second part of the word rather than the first – this is about power (kratos) as much as it is about knowledge (episteme). Fixing power to knowledge risks creating a monster that can’t be deflected from its course, even when it goes wrong – which it will, since no one and nothing is infallible. Not knowing the right answer is a great defence against people who believe that their knowledge makes them superior.

Brennan’s response to this argument (a version of which is made by David Estlund in his 2007 book Democratic Authority) is to turn it on its head. Since democracy is a form of kratos, too, he says, why aren’t we concerned about protecting individuals from the incompetence of the demos just as much as from the arrogance of the epistocrats? But these are not the same kinds of power. Ignorance and foolishness don’t oppress in the same way that knowledge and wisdom do, precisely because they are incompetent: the demos keeps changing its mind.

The democratic case against epistocracy is a version of the democratic case against pragmatic authoritarianism. You have to ask yourself where you’d rather be when things go wrong. Maybe things will go wrong quicker and more often in a democracy, but that is a different issue. Rather than thinking of democracy as the least worst form of politics, we could think of it as the best when at its worst. It is the difference between Winston Churchill’s famous dictum and a similar one from Alexis de Tocqueville a hundred years earlier that is less well-known but more apposite. More fires get started in a democracy, de Tocqueville said, but more fires get put out, too.

The recklessness of epistocracy is also a function of the historical record that Brennan uses to defend it. A century or more of democratic experience may have uncovered democracy’s failings, but it has also taught us that we can live with them. We are used to the mess and attached to the benefits. Being an epistocrat like Mill before democracy had got going is very different from being one now that democracy is well established. We now know what we know, not just about democracy’s failings, but about our tolerance for its incompetences.

The great German sociologist Max Weber, writing at the turn of the 20th century, took it for granted that universal suffrage was a dangerous idea, because of the way that it empowered the mindless masses. But he argued that once it had been granted, no sane politician should ever think about taking it away: the backlash would be too terrible. The only thing worse than letting everyone vote is telling some people that they no longer qualify. Never mind who sets the exam, who is going to tell us that we’ve failed? Mill was right: democracy comes after epistocracy, not before. You can’t run the experiment in reverse.

The cognitive biases that epistocracy is meant to rescue us from are what will ultimately scupper it. Loss aversion makes it more painful to be deprived of something we have that doesn’t always work than something we don’t have that might. It’s like the old joke. Q: “Do you know the way to Dublin?” A: “Well, I wouldn’t start from here.” How do we get to a better politics? Well, maybe we shouldn’t start from here. But here is where we are.

Still, there must be other ways of trying to inject more wisdom into democratic politics than an exam. This is the 21st century: we have new tools to work with. If many of the problems with democracy derive from the business of politicians hawking for votes at election time, which feeds noise and bile into the decision-making process, perhaps we should try to simulate what people would choose under more sedate and reflective conditions. For instance, it may be possible to extrapolate from what is known about voters’ interests and preferences what they ought to want if they were better able to access the knowledge they needed. We could run mock elections that replicate the input from different points of view, as happens in real elections, but which strip out all the distractions and distortions of democracy in action.

Brennan suggests the following: “We can administer surveys that track citizens’ political preferences and demographic characteristics, while testing their basic objective political knowledge. Once we have this information, we can simulate what would happen if the electorate’s demographics remained unchanged, but all citizens were able to get perfect scores on tests of objective political knowledge. We can determine, with a strong degree of confidence, what ‘We the People’ would want, if only ‘We the People’ understood what we were talking about.”

Democratic dignity – the idea that all citizens should be allowed to express their views and have them taken seriously by politicians – goes out the window under such a system. We are each reduced to data points in a machine-learning exercise. But, according to Brennan, the outcomes should improve.
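
For readers who want to see what such an exercise might involve mechanically, here is a minimal sketch in Python. It assumes a hypothetical survey containing demographic variables, a score on a knowledge test and a stated policy preference; the synthetic data, the logistic-regression model and the "set everyone’s knowledge to the maximum" step are illustrative choices, not Brennan’s actual procedure.

```python
# A toy version of the "simulated enlightened electorate" idea described above.
# The survey data is synthetic and the model is one plausible choice among many.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical survey: demographics, a 0-10 political-knowledge score,
# and whether the respondent supports a given policy.
survey = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "log_income": rng.normal(10.0, 0.5, n),
    "knowledge": rng.integers(0, 11, n),
})
# Invented "true" relationship, used only to generate the synthetic answers.
logit = (0.01 * (survey["age"] - 50)
         - 0.3 * (survey["log_income"] - 10.0)
         + 0.15 * survey["knowledge"])
survey["supports_policy"] = rng.random(n) < 1 / (1 + np.exp(-logit))

X = survey[["age", "log_income", "knowledge"]]
y = survey["supports_policy"]

# Step 1: model stated preferences as a function of demographics and knowledge.
model = LogisticRegression(max_iter=1000).fit(X, y)

# Step 2: hold demographics fixed, set every knowledge score to the maximum,
# and ask the model what the electorate would then support.
X_informed = X.copy()
X_informed["knowledge"] = 10

print(f"Observed support:             {y.mean():.1%}")
print(f"Simulated 'informed' support: {model.predict_proba(X_informed)[:, 1].mean():.1%}")
```

Everything of consequence – which variables count as demographics, what the knowledge test measures, which model links them to preferences – is decided by whoever builds the simulation, which is where the worries in the paragraphs that follow begin to bite.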

In 2017, a US-based digital technology company called Kimera Systems announced that it was close to developing an AI named Nigel, whose job was to help voters know how they should vote in an election, based on what it already knew of their personal preferences. Its creator, Mounir Shita, declared: “Nigel tries to figure out your goals and what reality looks like to you and is constantly assimilating paths to the future to reach your goals. It’s constantly trying to push you in the right direction.”

 
‘Politicians don’t care what we actually want. They care what they can persuade us we want’ … Donald Trump in Michigan last week. Photograph: Chirag Wakaskar/SOPA/Rex/Shutterstock

This is the more personalised version of what Brennan is proposing, with some of the democratic dignity plugged back in. Nigel is not trying to work out what’s best for everyone, only what’s best for you. It accepts your version of reality. Yet Nigel understands that you are incapable of drawing the correct political inferences from your preferences. You need help, from a machine that has seen enough of your personal behaviour to understand what it is you are after. Amazon recommends books you might like. Nigel recommends political parties and policy positions.

Would this be so bad? To many people it instinctively sounds like a parody of democracy because it treats us like confused children. But to Shita it is an enhancement of democracy because it takes our desires seriously. Democratic politicians don’t much care what it is that we actually want. They care what it is they can persuade us we want, so they can better appeal to it. Nigel puts the voter first. At the same time, by protecting us from our own confusion and inattention, Nigel strives to improve our self-understanding. Brennan’s version effectively gives up on Mill’s original idea that voting might be an educative experience. Shita hasn’t given up. Nigel is trying to nudge us along the path to self-knowledge. We might end up learning who we really are.

The fatal flaw with this approach, however, is that we risk learning only who it is we think we are, or who it is we would like to be. Worse, it is who we would like to be now, not who or what we might become in the future. Like focus groups, Nigel provides a snapshot of a set of attitudes at a moment in time. The danger of any system of machine learning is that it produces feedback loops. By restricting the dataset to our past behaviour, Nigel teaches us nothing about what other people think, or even about other ways of seeing the world. Nigel simply mines the archive of our attitudes for the most consistent expression of our identities. If we lean left, we will end up leaning further left. If we lean right, we will end up leaning further right. Social and political division would widen. Nigel is designed to close the circle in our minds.

There are technical fixes for feedback loops. Systems can be adjusted to inject alternative points of view, to notice when data is becoming self-reinforcing or simply to randomise the evidence. We can shake things up to lessen the risk that we get set in our ways. For instance, Nigel could make sure that we visit websites that challenge rather than reinforce our preferences. Alternatively, on Brennan’s model, the aggregation of our preferences could seek to take account of the likelihood that Nigel had exaggerated rather than tempered who we really are. A Nigel of Nigels – a machine that helps other machines to better align their own goals – could try to strip out the distortions from the artificial democracy we have built. After all, Nigel is our servant, not our master. We can always tell him what to do.
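
As a concrete illustration of the simplest of these fixes – occasionally serving material from outside the profile rather than always matching it – here is a short sketch in Python. The scoring rule, the 20% exploration rate and the example articles are assumptions made up for the illustration; none of it describes Nigel or any real recommender.

```python
# A minimal "break the feedback loop" recommender: usually pick the item that
# best matches the user's recorded leanings, but some of the time deliberately
# pick the worst match, so the user keeps seeing other points of view.
import random

def recommend(user_profile, items, explore_rate=0.2):
    """Return one item: the best match most of the time, an outlier otherwise."""
    def score(item):
        # Similarity between the user's leanings and the item's tags
        # (a stand-in for whatever model a real system would use).
        return sum(user_profile.get(tag, 0.0) * weight
                   for tag, weight in item["tags"].items())

    if random.random() < explore_rate:
        return min(items, key=score)   # exploration: challenge the profile
    return max(items, key=score)       # exploitation: reinforce the profile

# Hypothetical example data.
user = {"left": 0.8, "right": 0.1}
articles = [
    {"title": "Why the left is right",    "tags": {"left": 1.0}},
    {"title": "Why the right is right",   "tags": {"right": 1.0}},
    {"title": "A centrist case for both", "tags": {"left": 0.5, "right": 0.5}},
]

for _ in range(5):
    print(recommend(user, articles)["title"])
```

Even this crude version makes the governance point visible: someone has to choose the exploration rate and the scoring rule, and that someone is the engineer, not the voter.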

But that is the other fundamental problem with 21st-century epistocracy: we won’t be the ones telling Nigel what to do. It will be the technicians who have built the system. They are the experts we rely on to rescue us from feedback loops. For this reason, it is hard to see how 21st-century epistocracy can avoid collapsing back into technocracy. When things go wrong, the knowers will be powerless to correct for them. Only the engineers who built the machines have that capacity, which means that it will be the engineers who have the power.

In recent weeks, we have been given a glimpse of what rule by engineers might look like. It is not an authoritarian nightmare of oppression and violence. It is a picture of confusion and obfuscation. The power of engineers never fully comes out into the open, because most people don’t understand what it is they do. The sight of Mark Zuckerberg, perched on his cushion, batting off the ignorant questions of the people’s representatives in Congress is a glimpse of a technocratic future in which democracy meets its match. But this is not a radical alternative to democratic politics. It is simply a distortion of it.


Tuesday 11 July 2017

How economics became a religion

John Rapley in The Guardian



Although Britain has an established church, few of us today pay it much mind. We follow an even more powerful religion, around which we have oriented our lives: economics. Think about it. Economics offers a comprehensive doctrine with a moral code promising adherents salvation in this world; an ideology so compelling that the faithful remake whole societies to conform to its demands. It has its gnostics, mystics and magicians who conjure money out of thin air, using spells such as “derivative” or “structured investment vehicle”. And, like the old religions it has displaced, it has its prophets, reformists, moralists and above all, its high priests who uphold orthodoxy in the face of heresy.

Over time, successive economists slid into the role once filled by the churchmen: giving us guidance on how to reach a promised land of material abundance and endless contentment. For a long time, they seemed to deliver on that promise, succeeding in a way few other religions had ever done, our incomes rising many times over and delivering a cornucopia bursting with new inventions, cures and delights.

This was our heaven, and richly did we reward the economic priesthood, with status, wealth and power to shape our societies according to their vision. At the end of the 20th century, amid an economic boom that saw the western economies become richer than humanity had ever known, economics seemed to have conquered the globe. With nearly every country on the planet adhering to the same free-market playbook, and with university students flocking to do degrees in the subject, economics seemed to be attaining the goal that had eluded every other religious doctrine in history: converting the entire planet to its creed.

Yet if history teaches anything, it’s that whenever economists feel certain that they have found the holy grail of endless peace and prosperity, the end of the present regime is nigh. On the eve of the 1929 Wall Street crash, the American economist Irving Fisher advised people to go out and buy shares; in the 1960s, Keynesian economists said there would never be another recession because they had perfected the tools of demand management.

The 2008 crash was no different. Five years earlier, on 4 January 2003, the Nobel laureate Robert Lucas had delivered a triumphal presidential address to the American Economic Association. Reminding his colleagues that macroeconomics had been born in the depression precisely to try to prevent another such disaster ever recurring, he declared that he and his colleagues had reached their own end of history: “Macroeconomics in this original sense has succeeded,” he instructed the conclave. “Its central problem of depression prevention has been solved.”

No sooner do we persuade ourselves that the economic priesthood has finally broken the old curse than it comes back to haunt us all: pride always goes before a fall. Since the crash of 2008, most of us have watched our living standards decline. Meanwhile, the priesthood seemed to withdraw to the cloisters, bickering over who got it wrong. Not surprisingly, our faith in the “experts” has dissipated.

Hubris, never a particularly good thing, can be especially dangerous in economics, because its scholars don’t just observe the laws of nature; they help make them. If the government, guided by its priesthood, changes the incentive-structure of society to align with the assumption that people behave selfishly, for instance, then lo and behold, people will start to do just that. They are rewarded for doing so and penalised for doing otherwise. If you are educated to believe greed is good, then you will be more likely to live accordingly.

The hubris in economics came not from a moral failing among economists, but from a false conviction: the belief that theirs was a science. It neither is nor can be one, and has always operated more like a church. You just have to look at its history to realise that.

The American Economic Association, to which Robert Lucas gave his address, was created in 1885, just when economics was starting to define itself as a distinct discipline. At its first meeting, the association’s founders proposed a platform that declared: “The conflict of labour and capital has brought to the front a vast number of social problems whose solution is impossible without the united efforts of church, state and science.” It would be a long path from that beginning to the market evangelism of recent decades.

Yet even at that time, such social activism provoked controversy. One of the AEA’s founders, Henry Carter Adams, subsequently delivered an address at Cornell University in which he defended free speech for radicals and accused industrialists of stoking xenophobia to distract workers from their mistreatment. Unknown to him, the New York lumber king and Cornell benefactor Henry Sage was in the audience. As soon as the lecture was done, Sage stormed into the university president’s office and insisted: “This man must go; he is sapping the foundations of our society.” When Adams’s tenure was later blocked, he agreed to moderate his views. Accordingly, the final draft of the AEA platform expunged the reference to laissez-faire economics as being “unsafe in politics and unsound in morals”.

 
‘Economics has always operated more like a church’ … Trinity Church seen from Wall Street. Photograph: Alamy Stock Photo

So was set a pattern that has persisted to this day. Powerful political interests – which historically have included not only rich industrialists, but electorates as well – helped to shape the canon of economics, which was then enforced by its scholarly community.

Once a principle is established as orthodox, its observance is enforced in much the same way that a religious doctrine maintains its integrity: by repressing or simply eschewing heresies. In Purity and Danger, the anthropologist Mary Douglas observed the way taboos functioned to help humans impose order on a seemingly disordered, chaotic world. The premises of conventional economics haven’t functioned all that differently. Robert Lucas once noted approvingly that by the late 20th century, economics had so effectively purged itself of Keynesianism that “the audience start(ed) to whisper and giggle to one another” when anyone expressed a Keynesian idea at a seminar. Such responses served to remind practitioners of the taboos of economics: a gentle nudge to a young academic that such shibboleths might not sound so good before a tenure committee. This preoccupation with order and coherence may be less a function of the method than of its practitioners. Studies of personality traits common to various disciplines have discovered that economics, like engineering, tends to attract people with an unusually strong preference for order, and a distaste for ambiguity.

The irony is that, in its determination to make itself a science that can reach hard and fast conclusions, economics has had to dispense with scientific method at times. For starters, it rests on a set of premises about the world not as it is, but as economists would like it to be. Just as any religious service includes a profession of faith, membership in the priesthood of economics entails certain core convictions about human nature. Among other things, most economists believe that we humans are self-interested, rational, essentially individualistic, and prefer more money to less. These articles of faith are taken as self-evident. Back in the 1930s, the great economist Lionel Robbins described his profession in a way that has stood ever since as a cardinal rule for millions of economists. The field’s basic premises came from “deduction from simple assumptions reflecting very elementary facts of general experience” and as such were “as universal as the laws of mathematics or mechanics, and as little capable of ‘suspension’”.

Deducing laws from premises deemed eternal and beyond question is a time-honoured method. For thousands of years, monks in medieval monasteries built a vast corpus of scholarship doing just that, using a method perfected by Thomas Aquinas known as scholasticism. However, this is not the method used by scientists, who tend to require assumptions to be tested empirically before a theory can be built out of them.
But, economists will maintain, this is precisely what they themselves do – what sets them apart from the monks is that they must still test their hypotheses against the evidence. Well, yes, but this statement is actually more problematic than many mainstream economists may realise. Physicists resolve their debates by looking at the data, upon which they by and large agree. The data used by economists, however, is much more disputed. When, for example, Robert Lucas insisted that Eugene Fama’s efficient-markets hypothesis – which maintains that since a free market collates all available information to traders, the prices it yields can never be wrong – held true despite “a flood of criticism”, he did so with as much conviction and supporting evidence as his fellow economist Robert Shiller had mustered in rejecting the hypothesis. When the Swedish central bank had to decide who would win the 2013 Nobel prize in economics, it was torn between Shiller’s claim that markets frequently got the price wrong and Fama’s insistence that markets always got the price right. Thus it opted to split the difference and gave both men the medal – a bit of Solomonic wisdom that would have elicited howls of laughter had it been a science prize. In economic theory, very often, you believe what you want to believe – and as with any act of faith, your choice of heads or tails will as likely reflect sentimental predisposition as scientific assessment.
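
For readers who want the hypothesis in symbols, a standard textbook gloss (not a quotation from Fama or Shiller) runs as follows: if $\mathcal{I}_t$ is the information available to traders at time $t$ and $r$ the required rate of return, then

$$\mathbb{E}\left[\,p_{t+1} \mid \mathcal{I}_t\,\right] = (1 + r)\,p_t,$$

so expected returns in excess of $r$ are zero and price changes cannot be forecast from anything already in $\mathcal{I}_t$. Shiller’s empirical objection was that actual prices swing far more than plausible revisions to expected fundamentals would justify.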

It’s no mystery why the data used by economists and other social scientists so rarely throws up incontestable answers: it is human data. Unlike people, subatomic particles don’t lie on opinion surveys or change their minds about things. Mindful of that difference, at his own presidential address to the American Economic Association nearly a half-century ago, another Nobel laureate, Wassily Leontief, struck a modest tone. He reminded his audience that the data used by economists differed greatly from that used by physicists or biologists. For the latter, he cautioned, “the magnitude of most parameters is practically constant”, whereas the observations in economics were constantly changing. Data sets had to be regularly updated to remain useful. Some data was just simply bad. Collecting and analysing the data requires civil servants with a high degree of skill and a good deal of time, which less economically developed countries may not have in abundance. So, for example, in 2010 alone, Ghana’s government – which probably has one of the better data-gathering capacities in Africa – recalculated its economic output by 60%. Testing your hypothesis before and after that kind of revision would lead to entirely different results.
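
A toy calculation (with made-up numbers) shows why a revision on that scale matters for anyone testing a hypothesis against the data:

```python
# Illustrative only: how a 60% upward revision to measured GDP, like Ghana's
# 2010 rebasing, changes a headline ratio without anything in the real
# economy changing. All figures are invented.
old_gdp = 100.0          # hypothetical pre-rebasing output, in billions
new_gdp = old_gdp * 1.6  # the same economy, remeasured
public_debt = 55.0       # unchanged in both cases

print(f"Debt-to-GDP before rebasing: {public_debt / old_gdp:.0%}")  # 55%
print(f"Debt-to-GDP after rebasing:  {public_debt / new_gdp:.0%}")  # 34%
```

A hypothesis such as "debt above 50% of GDP slows growth" would be confirmed on the old figures and rejected on the new ones, with nothing in the underlying economy having changed.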

 
‘The data used by economists rarely throws up incontestable answers’ … traders at the New York Stock Exchange in October 2008. Photograph: Spencer Platt/Getty Images

Leontief wanted economists to spend more time getting to know their data, and less time in mathematical modelling. However, as he ruefully admitted, the trend was already going in the opposite direction. Today, the economist who wanders into a village to get a deeper sense of what the data reveals is a rare creature. Once an economic model is ready to be tested, number-crunching ends up being done largely at computers plugged into large databases. It’s not a method that fully satisfies a sceptic. For, just as you can find a quotation in the Bible that will justify almost any behaviour, you can find human data to support almost any statement you want to make about the way the world works.

That’s why ideas in economics can go in and out of fashion. The progress of science is generally linear. As new research confirms or replaces existing theories, one generation builds upon the next. Economics, however, moves in cycles. A given doctrine can rise, fall and then later rise again. That’s because economists don’t confirm their theories in quite the same way physicists do, by just looking at the evidence. Instead, much as happens with preachers who gather a congregation, a school rises by building a following – among both politicians and the wider public.

For example, Milton Friedman was one of the most influential economists of the late 20th century. But he had been around for decades before he got much of a hearing. He might well have remained a marginal figure had it not been that politicians such as Margaret Thatcher and Ronald Reagan were sold on his belief in the virtue of a free market. They sold that idea to the public, got elected, then remade society according to those designs. An economist who gets a following gets a pulpit. Although scientists, in contrast, might appeal to public opinion to boost their careers or attract research funds, outside of pseudo-sciences, they don’t win support for their theories in this way.
However, if you think describing economics as a religion debunks it, you’re wrong. We need economics. It can be – it has been – a force for tremendous good. But only if we keep its purpose in mind, and always remember what it can and can’t do.

The Irish have been known to describe their notionally Catholic land as one where a thin Christian veneer was painted over an ancient paganism. The same might be said of our own adherence to today’s neoliberal orthodoxy, which stresses individual liberty, limited government and the free market. Despite outward observance of a well-entrenched doctrine, we haven’t fully transformed into the economic animals we are meant to be. Like the Christian who attends church but doesn’t always keep the commandments, we behave as economic theory predicts only when it suits us. Contrary to the tenets of orthodox economists, contemporary research suggests that, rather than seeking always to maximise our personal gain, humans still remain reasonably altruistic and selfless. Nor is it clear that the endless accumulation of wealth always makes us happier. And when we do make decisions, especially those to do with matters of principle, we seem not to engage in the sort of rational “utility-maximizing” calculus that orthodox economic models take as a given. The truth is, in much of our daily life we don’t fit the model all that well.


For decades, neoliberal evangelists replied to such objections by saying it was incumbent on us all to adapt to the model, which was held to be immutable – one recalls Bill Clinton’s depiction of neoliberal globalisation, for instance, as a “force of nature”. And yet, in the wake of the 2008 financial crisis and the consequent recession, there has been a turn against globalisation across much of the west. More broadly, there has been a wide repudiation of the “experts”, most notably in the 2016 US election and Brexit referendum.

It would be tempting for anyone who belongs to the “expert” class, and to the priesthood of economics, to dismiss such behaviour as a clash between faith and facts, in which the facts are bound to win in the end. In truth, the clash was between two rival faiths – in effect, two distinct moral tales. So enamoured had the so-called experts become with their scientific authority that they blinded themselves to the fact that their own narrative of scientific progress was embedded in a moral tale. It happened to be a narrative that had a happy ending for those who told it, for it perpetuated the story of their own relatively comfortable position as the reward of life in a meritocratic society that blessed people for their skills and flexibility. That narrative made no room for the losers of this order, whose resentments were derided as being a reflection of their boorish and retrograde character – which is to say, their fundamental vice. The best this moral tale could offer everyone else was incremental adaptation to an order whose caste system had become calcified. For an audience yearning for a happy ending, this was bound to be a tale of woe.

The failure of this grand narrative is not, however, a reason for students of economics to dispense with narratives altogether. Narratives will remain an inescapable part of the human sciences for the simple reason that they are inescapable for humans. It’s funny that so few economists get this, because businesses do. As the Nobel laureates George Akerlof and Robert Shiller write in their recent book, Phishing for Phools, marketers use them all the time, weaving stories in the hopes that we will place ourselves in them and be persuaded to buy what they are selling. Akerlof and Shiller contend that the idea that free markets work perfectly, and the idea that big government is the cause of so many of our problems, are part of a story that is actually misleading people into adjusting their behaviour in order to fit the plot. They thus believe storytelling is a “new variable” for economics, since “the mental frames that underlie people’s decisions” are shaped by the stories they tell themselves.

Economists arguably do their best work when they take the stories we have given them, and advise us on how we can help them to come true. Such agnosticism demands a humility that was lacking in economic orthodoxy in recent years. Nevertheless, economists don’t have to abandon their traditions if they are to overcome the failings of a narrative that has been rejected. Rather they can look within their own history to find a method that avoids the evangelical certainty of orthodoxy.

In his 1971 presidential address to the American Economic Association, Wassily Leontief counselled against the dangers of self-satisfaction. He noted that although economics was starting to ride “the crest of intellectual respectability … an uneasy feeling about the present state of our discipline has been growing in some of us who have watched its unprecedented development over the last three decades”.

Noting that pure theory was making economics more remote from day-to-day reality, he said the problem lay in “the palpable inadequacy of the scientific means” of using mathematical approaches to address mundane concerns. So much time went into model-construction that the assumptions on which the models were based became an afterthought. “But,” he warned – a warning that the sub-prime boom’s fascination with mathematical models, and the bust’s subsequent revelation of their flaws, now reveals to have been prophetic – “it is precisely the empirical validity of these assumptions on which the usefulness of the entire exercise depends.”

Leontief thought that economics departments were increasingly hiring and promoting young economists who wanted to build pure models with little empirical relevance. Even when they did empirical analysis, Leontief said economists seldom took any interest in the meaning or value of their data. He thus called for economists to explore their assumptions and data by conducting social, demographic and anthropological work, and said economics needed to work more closely with other disciplines.


Leontief’s call for humility some 40 years ago stands as a reminder that the same religions that can speak up for human freedom and dignity when in opposition can become obsessed with their rightness and the need to purge others of their wickedness once they attain power. When the church retains its distance from power, and a modest expectation about what it can achieve, it can stir our minds to envision new possibilities and even new worlds. Once economists apply this kind of sceptical scientific method to a human realm in which ultimate reality may never be fully discernible, they will probably find themselves retreating from dogmatism in their claims.

Paradoxically, therefore, as economics becomes more truly scientific, it will become less of a science. Acknowledging these limitations will free it to serve us once more.