
Sunday 29 January 2017

‘Trump makes sense to a grocery store owner’: N. N. Taleb

Suhasini Haidar in The Hindu

Economist-mathematician Nassim Nicholas Taleb contends that there is a global riot against pseudo-experts


After correctly predicting the 2008 economic crisis, the Brexit vote, the U.S. presidential election and other events, Nassim Nicholas Taleb, author of the Incerto series on global uncertainties, which includes The Black Swan: The Impact of the Highly Improbable, is seen as something of a maverick and an oracle. Equally, the economist-mathematician has been criticised for advocating a “dumbing down” of the economic system, and for his defence of U.S. President Donald Trump and of global populist movements. In an interview in Jaipur, Taleb explains why he thinks the world is seeing a “global riot against pseudo-experts”.

I’d like to start by asking about your next book, Skin in the Game, the fifth of the Incerto series. You do something unusual with your books: before you launch, you put chapters out on your website. Why is that?

Putting my work online motivates me to go deeper into a subject. I put it online and it gives some structure to my thought. The only way to judge a book is by something called the Lindy effect, and that is its survival. My books have survived. I noticed that The Black Swan did well because it was picked up early online, long before the launch. I also prefer social media to interviews in the mainstream media as many journalists don’t do their research, and ‘zeitgeist’ updates [Top Ten lists] pass for journalism.

The media is not one organisation or a monolithic entity.

Well, I’m talking about the United States, where I get more credible news from social media than from the mainstream media. But I am very impressed with the Indian media, which seems to present both sides of the story. In the U.S., you only get either the official, bureaucratic or the academic side of the story.

In Skin in the Game, you seem to build on theories from The Black Swan that give a sense of foreboding about the world economy. Do you see another crisis coming?

Oh, absolutely! The last crisis [2008] hasn’t ended yet because they just delayed it. [Barack] Obama is an actor. He looks good, he raises good children, he is respectable. But he didn’t fix the economic system; he put novocaine [local anaesthetic] in the system. He delayed the problem by working with the bankers whom he should have prosecuted. And now we have double the deficit, adjusted for GDP, to create six million jobs, with a massive debt, and the system isn’t cured. We retained zero interest rates, and that hasn’t helped. Basically, we shifted the problem from the private corporates to the government in the U.S. So the system remains very fragile.

You say Obama put novocaine in the system. Will the Trump administration be able to address this?

Of course. The whole mandate he got was because he understood the economic problems. People don’t realise that Obama created inequalities when he distorted the system. You can only get rich if you have assets. What Trump is doing is putting some kind of business sense into the system. You don’t have to be a genius to see what’s wrong. Instead of electing Trump, if you went to the local souk [bazaar] in Aleppo and brought in one of the retail shop owners, he would do the same things Trump is doing, like making a call to Boeing and asking why we are paying so much.

You’re seen as something of an oracle, given that you saw the 2008 economic crash coming, you predicted the Brexit vote, the outcome of the Syrian crisis. You said the Islamic State would benefit if Bashar al-Assad was pushed out and you predicted Trump’s win. How do you explain it?

Not the Islamic State, but al-Qaeda at the time, and I said the U.S. administration was helping fund them. See, you have to have courage to say things others don’t. I was lucky financially in life: I didn’t need to work for a living and could spend all my time thinking. When Trump was running for election, I said what he says makes sense to a grocery store owner. The grocery guy can say Trump is wrong, because he can see where he is wrong. But with Obama, the grocery man can’t understand what he’s saying, so he doesn’t know where Obama is wrong.

Is it a choice between dumbing down versus over-intellectualisation, then?

Exactly. Trump never ran for archbishop, so you never saw anything in his behaviour that was saintly, and that was fine. Whereas Obama behaved like the Archbishop of Canterbury, and was going to do good but people didn’t feel their lives were better. As I said, if it was a shopkeeper from Aleppo, or a grocery store owner in Mumbai, people would have liked them as much as Trump. What he says makes common sense, asking why are we paying so much for this rubbish or why do we need these complex taxes, or why do we want lobbyists. You can call Trump’s plain-speaking what you like. But the way intellectuals treat people who don’t agree with them isn’t good either. I remember I had an academic friend who supported Brexit, and he said he knew what it meant to be a leper in the U.K. It was the same with supporting Trump in the U.S.

But there were valid reasons for people to be worried about Trump too.

Well, if you’re a businessman, for example, what Trump said didn’t bother you. The intellectual class of no more than 2,00,000 [200,000] people in the U.S. doesn’t represent everyone upset with Trump. The real problem is the ‘faux-expert problem’: the faux-expert doesn’t know what he doesn’t know, and assumes he knows what people think. An electrician doesn’t have that problem.

Is the election of Trump part of a global phenomenon? You have commented on the similarity to the election of Narendra Modi in India.

Well, with Trump, Modi, Brexit, and now France, there are some similar problems in those countries. What you are hearing is people getting fed up with the ruling class. This is not fascism. It has nothing to do with fascism. It has to do with the faux-experts problem and a world with too many experts. If we had a different elite, we may not see the same problem.

There are other similarities, to quote from studies of populist movements worldwide: these leaders are majoritarian, they build on resentment, they use social media for direct access to their voters, and they can take radical decisions.

I often say that a mathematician thinks in numbers, a lawyer in laws, and an idiot thinks in words. These words don’t amount to anything. I think you have to draw the conclusion that there is a global riot against pseudo-experts. I saw it with Brexit, and Nigel Farage [leader of the U.K. Independence Party], who was a trader for 15 years, said the problem with the government was that none of them had ever had a proper job. Being a bureaucrat is not a proper job.

As a businessperson, you have a point about experts and pseudo-experts, who you say are ‘left-wing’. How do you explain the other parts of the phenomenon that aren’t economic: the xenophobia, Islamophobia, misogyny, etc.?

I don’t understand how a left-wing person can defend Salafism, or religious extremism. In a democracy, you can allow people to have any view, but they can’t come with a message to destroy democracy. Why should people who come to the West come with a message to finish the West? This is where the discourse goes haywire. So in Yemen, the [Saudi] intervention is good, but the intervention [by Russia] in Aleppo shouldn’t be allowed. I don’t think Trump was racist when he said Mexican criminals shouldn’t be allowed into the U.S.; he was targeting criminals. If you are Naziphobic, you are not against Germans. If I oppose Salafism, I am not an Islamophobe. Obama also deported Mexicans and refused to accept immigrants.

Is anti-globalisation a part of this sentiment?

I am not anti-globalisation, but I am against big global corporations. One of the reasons is what they cost: today, every project sees cost overruns because these projects have to factor in global risks as well. In nature there is an ‘island effect’: the number of species drops significantly when you go from the islands to the mainland. Similarly, when you open up your small economies, you lose some of your ethnicity or diversity. Artisans are being killed by globalisation. Think of the effect on so many artisans who have been put out of work while people are buying wrinkle-free shirts and cheap mobile phones. I’m a localist. The problem is that globalisation comes through large global corporates that are predatory, and so we want to counter its ill-effects.

Where do you see the world moving now? Further right, or will it revert to the centre?


I don’t think it will go left or right, and I don’t know about the short term. But I think in the long term, the world can only survive if it lives like nature does: many smaller units of governance, a collection of super islands with some separation, quick decision-making, and visible implementation. Lots of Switzerlands, that’s what we need. We don’t need leaders; we just need someone at the top who doesn’t mess the system up.

Thursday 20 October 2016

The cult of the expert – and how it collapsed

Led by a class of omnipotent central bankers, experts have gained extraordinary political power. Will a populist backlash shatter their technocratic dream?

Sebastian Mallaby in The Guardian

On Tuesday 16 September 2008, early in the afternoon, a self-effacing professor with a neatly clipped beard sat with the president in the Roosevelt Room of the White House. Flanked by a square-shouldered banker who had recently run Goldman Sachs, the professor was there to tell the elected leader of the world’s most powerful country how to rescue its economy. Following the bankruptcy of one of the nation’s storied investment banks, a global insurance company was now on the brink, but drawing on a lifetime of scholarly research, the professor had resolved to commit $85bn of public funds to stabilising it.

The sum involved was extraordinary: $85bn was more than the US Congress spent annually on transportation, and nearly three times as much as it spent on fighting Aids, a particular priority of the president’s. But the professor encountered no resistance. “Sometimes you have to make the tough decisions,” the president reflected. “If you think this has to be done, you have my blessing.”

Later that same afternoon, Federal Reserve chairman Ben Bernanke, the bearded hero of this tale, showed up on Capitol Hill, at the other end of Pennsylvania Avenue. At the White House, he had at least been on familiar ground: he had spent eight months working there. But now Bernanke appeared in the Senate majority leader’s conference room, where he and his ex-Wall Street comrade, Treasury secretary Hank Paulson, would meet the senior leaders of both chambers of Congress. A quiet, balding, unassuming technocrat confronted the lions of the legislative branch, armed with nothing but his expertise in monetary plumbing.

Bernanke repeated his plan to commit $85bn of public money to the takeover of an insurance company.

“Do you have 85bn?” one sceptical lawmaker demanded.

“I have 800bn,” Bernanke replied evenly – a central bank could conjure as much money as it deemed necessary.

But did the Federal Reserve have the legal right to take this sort of action unilaterally, another lawmaker inquired?

Yes, Bernanke answered: as Fed chairman, he wielded the largest chequebook in the world – and the only counter-signatures required would come from other Fed experts, who were no more elected or accountable than he was. Somehow America’s famous apparatus of democratic checks and balances did not apply to the monetary priesthood. Their authority derived from technocratic virtuosity.

When the history is written of the revolt against experts, September 2008 will be seen as a milestone. The $85bn rescue of the American International Group (AIG) dramatised the power of monetary gurus in all its anti-democratic majesty. The president and Congress could decide to borrow money, or raise it from taxpayers; the Fed could simply create it. And once the AIG rescue had legitimised the broadest possible use of this privilege, the Fed exploited it unflinchingly. Over the course of 2009, it injected a trillion dollars into the economy – a sum equivalent to nearly 30% of the federal budget – via its newly improvised policy of “quantitative easing”. Time magazine anointed Bernanke its person of the year. “The decisions he has made, and those he has yet to make, will shape the path of our prosperity, the direction of our politics and our relationship to the world,” the magazine declared admiringly.

The Fed’s swashbuckling example galvanised central bankers in all the big economies. Soon Europe saw the rise of its own path-shaping monetary chieftain, when Mario Draghi, president of the European Central Bank, defused panic in the eurozone in July 2012 with two magical sentences. “Within our mandate, the ECB is ready to do whatever it takes to preserve the euro,” he vowed, adding, with a twist of Clint Eastwood menace, “And believe me, it will be enough.” For months, Europe’s elected leaders had waffled ineffectually, inviting hedge-fund speculators to test the cohesion of the eurozone. But now Draghi was announcing that he was badder than the baddest hedge-fund goon. Whatever it takes. Believe me.

In the summer of 2013, when Hollywood rolled out its latest Superman film, cartoonists quickly seized upon the obvious gag. Caricatures depicted central-bank chieftains decked out in Superman outfits. One showed Bernanke ripping off his banker’s shirt and tie, exposing that thrilling S emblazoned on his vest. Another showed the bearded hero hurtling through space, red cape fluttering, right arm stretched forward, a powerful fist punching at the void in front of him. “Superman and Federal Reserve chairman Ben Bernanke are both mild-mannered,” a financial columnist deadpanned. “They are both calm, even in the face of global disasters. They are both sometimes said to be from other planets.”

At some point towards the middle of the decade, shortly before the cult of the expert smashed into the populist backlash, the shocking power of central banks came to feel normal. Nobody blinked an eye when Haruhiko Kuroda, the head of Japan’s central bank, created money at a rate that made his western counterparts seem timid. Nobody thought it strange when Britain’s government, perhaps emulating the style of the national football team, conducted a worldwide talent search for the new Bank of England chief. Nobody was surprised when the winner of that contest, the telegenic Canadian Mark Carney, quickly appeared in newspaper cartoons in his own superman outfit. And nobody missed a beat when India’s breathless journalists described Raghuram Rajan, the new head of the Reserve Bank of India, as a “rock star”, or when he was pictured as James Bond in the country’s biggest business newspaper. “Clearly I am not a superman,” Rajan modestly responded.



If Bernanke’s laconic “I have 800bn” moment signalled a new era of central-banking power, Rajan’s “I am not a superman” wisecrack marked its apotheosis. And it was a high watermark for a wider phenomenon as well, for the cult of the central banker was only the most pronounced example of a broader cult that had taken shape over the previous quarter of a century: the cult of the expert. Even before Bernanke rescued the global economy, technocrats of all stripes – business leaders, scientists, foreign and domestic policy wonks – were enthralled by the notion that politicians might defer to the authority of experts armed with facts and rational analysis. Those moments when Bernanke faced down Congress, or when Draghi succeeded where bickering politicians had failed, made it seem possible that this technocratic vision, with its apolitical ideal of government, might actually be realised.

The key to the power of the central bankers – and the envy of all the other experts – lay precisely in their ability to escape political interference. Democratically elected leaders had given them a mission – to vanquish inflation – and then let them get on with it. To public-health experts, climate scientists and other members of the knowledge elite, this was the model of how things should be done. Experts had built Microsoft. Experts were sequencing the genome. Experts were laying fibre-optic cable beneath the great oceans. No senator would have his child’s surgery performed by an amateur. So why would he not entrust experts with the economy?

In 1997, the economist Alan Blinder published an essay in Foreign Affairs, the house journal of the American foreign policy establishment. His title posed a curious question: “Is government too political?”

Four years earlier, Blinder had left Princeton University, his academic home for two decades, to do battle in the public square as a member of President Bill Clinton’s Council of Economic Advisers. The way Blinder saw things, this was a responsibility more than a pleasure: experts had a duty to engage in public debates – otherwise, “the quacks would continue to dominate the pond”, as he had once written. Earnest, idealistic, but with a self-deprecating wit, Blinder was out to save the world from returning to that dark period in the Reagan era when supply-side ideologues ruled the roost and “nonsense was worshipped as gospel”. Written after two years at the White House and another two as vice chairman of the Fed, Blinder’s essay was a reflection on his years of service.

His argument reflected the contrast between his two jobs in Washington. At the White House, he had advised a brainy president on budget policy and much else, but turning policy wisdom into law had often proved impossible. Even when experts from both parties agreed what should be done, vested interests in Congress conspired to frustrate enlightened progress. At the Fed, by contrast, experts were gloriously empowered. They could debate the minutiae of the economy among themselves, then manoeuvre the growth rate this way or that, without deferring to anyone.

To Blinder, it was self-evident that the Fed model was superior – not only for the experts, but also in the eyes of the public. The voters did not want their members of Congress micromanaging technical affairs – polls showed declining trust in politicians, and it was only a small stretch to suggest that citizens wanted their political leaders to delegate as much as possible to experts. “Americans increasingly believe that their elected officials are playing games rather than solving problems,” Blinder wrote. “Political debate has too much ‘spin’ and too little straight talk.” In sum, too much meddling by elected politicians was a turn-off for the voters who elected them. It was a paradoxical contention.

Disaffection with the political mainstream in the America of the 1990s had created a yearning for white-hatted outsiders as potential presidential candidates: the billionaire businessman Ross Perot, who ran in 1992 and 1996; the anti-politician Steve Forbes, whose signature proposal was to radically simplify America’s byzantine tax code. But rather than replace politicians with populist outsiders, whose grasp of public policy was suspect, Blinder advanced an alternative idea: the central-bank model of expert empowerment should be extended to other spheres of governance.

Blinder’s proposal was most clearly illustrated by tax policy. Experts from both political parties agreed that the tax system should be stripped of perverse incentives and loopholes. There was no compelling reason, for example, to encourage companies to finance themselves with debt rather than equity, yet the tax code allowed companies to make interest payments to their creditors tax-free, whereas dividend payments to shareholders were taxed twice over. The nation would be better off if Congress left the experts to fix such glitches rather than allowing politics to frustrate progress. Likewise, environmental targets, which balanced economic growth on the one hand and planetary preservation on the other, were surely best left to the scholars who understood how best to reconcile these duelling imperatives. Politicians who spent more of their time dialing for dollars than thinking carefully about policy were not up to these tasks. Better to hand them off to the technicians in white coats who knew what they were doing.



The call to empower experts, and to keep politics to a minimum, failed to trigger a clear shift in how Washington did business. But it did crystallise the assumptions of the late 1990s and early 2000s – a time when sharp criticisms of gridlock and lobbying were broadly accepted, and technocratic work-arounds to political paralysis were frequently proposed, even if seldom adopted. President Barack Obama’s (unsuccessful) attempt to remove the task of tackling long-term budget challenges from Congress by handing it off to the bipartisan Simpson-Bowles commission was emblematic of this same mood. Equally, elected leaders at least paid lip service to the authority of experts in the government’s various regulatory agencies – the Food and Drug Administration, the Securities and Exchange Commission, and so on. If they nonetheless overruled them for political reasons, it was in the dead of night and with a guilty conscience.

And so, by the turn of the 21st century, a new elite consensus had emerged: democracy had to be managed. The will of the people had its place, but that place had to be defined, and not in an expansive fashion. After all, Bill Clinton and Tony Blair, the two most successful political leaders of the time, had proclaimed their allegiance to a “third way”, which proposed that the grand ideological disputes of the cold war had come to an end. If the clashes of abstractions – communism, socialism, capitalism and so on – were finished, all that remained were practical questions, which were less subjects of political choice and more objects of expert analysis. Indeed, at some tacit, unarticulated level, a dark question lurked in educated minds. If all the isms were wasms, if history was over, what good were politicians?


Federal Reserve chairman Ben Bernanke testifies before Congress in October 2011. Photograph: Jim Lo Scalzo/EPA

For Blinder and many of his contemporaries, the ultimate embodiment of empowered gurudom was Alan Greenspan, the lugubrious figure with a meandering syntax who presided over the Federal Reserve for almost two decades. Greenspan was a technocrat’s technocrat, a walking, talking cauldron of statistics and factoids, and even though his ideological roots were in the libertarian right, his happy collaboration with Democratic experts in the Clinton administration fitted the end-of-history template perfectly. At Greenspan’s retirement in 2006, Blinder and a co-author summed up his extraordinary standing. They proclaimed him “a living legend”. On Wall Street, “financial markets now view Chairman Greenspan’s infallibility more or less as the Chinese once viewed Chairman Mao’s”.

Greenspan was raised during the Great Depression, and for much of his career, such adulation would have been inconceivable – for him or any central banker. Through most of the 20th century, the men who acted as bankers to the bankers were deliberately low-key. They spurned public attention and doubted their own influence. They fully expected that politicians would bully them into trying to stimulate the economy, even at the risk of inflation. In 1965, after the central bank raised interest rates against his wishes, Lyndon Johnson summoned the Fed chairman William McChesney Martin to his Texas ranch and pushed him around the living room, yelling in his face, “Boys are dying in Vietnam, and Bill Martin doesn’t care!” In democracies, evidently, technocratic power had limits.

Through the 1970s and into the 1980s, central-bank experts continued to be tormented. Richard Nixon and his henchmen once smeared Arthur Burns, the Fed chairman, by planting a fictitious story in the press, insinuating that Burns was simultaneously demanding a huge pay rise for himself and a pay freeze for other Americans. Following in this tradition, the Reagan administration frequently denounced the Fed chief, Paul Volcker, and packed the Fed’s board with pro-Reagan loyalists, who ganged up against their chairman.



When Greenspan replaced Volcker in 1987, the same pattern continued at first. The George HW Bush administration tried everything it could to force Greenspan to cut interest rates, to the point that a White House official put it about that the unmarried, 65-year-old Fed chairman reminded him of Norman Bates, the mother-fixated loner in Hitchcock’s Psycho.

And yet, starting with the advent of the Clinton administration, Greenspan effected a magical shift in the prestige of monetary experts. For the last 13 years of his tenure, running from 1993 to 2006, he attained the legendary status that Blinder recognised and celebrated. There were Alan Greenspan postcards, Alan Greenspan cartoons, Alan Greenspan T-shirts, even an Alan Greenspan doll. “How many central bankers does it take to screw in a lightbulb?” asked a joke of the time. “One,” the answer went: “Greenspan holds the bulb and the world revolves around him.” Through quiet force of intellect, Greenspan seemed to control the American economy with the finesse of a master conductor. He was the “Maestro”, one biographer suggested. The New Yorker’s John Cassidy wrote that Greenspan’s oracular pronouncements became “as familiar and as comforting to ordinary Americans as Prozac and The Simpsons, both of which debuted in 1987, the same year President Reagan appointed him to office”.

Greenspan’s sway in Washington stretched far beyond the Fed’s core responsibility, which was to set interest rates. When the Clinton administration wanted to know how much deficit reduction was necessary, it asked Greenspan for a number, at which point that number assumed a talismanic importance, for no other reason than that Greenspan had endorsed it. When Congress wanted to understand how far deficit reduction would bring bond yields down, it demanded an answer from Greenspan, and his answer duly became a key plank of the case for moving towards budget balance. The Clinton adviser Dick Morris summed up economic policy in this period: “You figure out what Greenspan wants, and then you get it to him.”

Greenspan loomed equally large in the US government’s management of a series of emerging market meltdowns in the 1990s. Formally, the responsibility for responding to foreign crises fell mainly to the Treasury, but the Clinton team relied on Greenspan – for ideas and for political backing. With the Republicans controlling Congress, a Democratic president needed a Republican economist to vouch for his plans – to the press, Congress, and even the conservative talk radio host Rush Limbaugh. “Officials at the notoriously reticent Federal Reserve say they have seldom seen anything like it,” the New York Times reported in January 1995, remarking on the Fed chairman’s metamorphosis from monetary technocrat into rescue salesman. In 1999, anticipating the moment when it anointed Ben Bernanke its man of the year, Time put Greenspan on its cover, with smaller images of the Treasury secretary and deputy Treasury secretary flanking him. Greenspan and his sidemen were “economist heroes”, Time lectured its readers. They had “outgrown ideology”.

By the last years of his tenure, Greenspan’s reputation had risen so high that even fellow experts were afraid of him. When he held forth at the regular gatherings of central bank chiefs in Basel, the distinguished figures at the table, titans in their own fields, took notes with the eagerness of undergraduates. So great was Greenspan’s status that he started to seem irreplaceable. As vice-president Al Gore prepared his run for the White House, he pronounced himself Greenspan’s “biggest fan” and rated the chairman’s performance as “outstanding A-plus-plus”. Not to be outdone, the Republican senator John McCain wished the chairman could stay at his post into the afterlife. “I would do like we did in the movie Weekend at Bernie’s,” McCain joked during a Republican presidential primary debate. “I’d prop him up and put a pair of dark glasses on him and keep him as long as we could.”

How did Greenspan achieve this legendary status, creating the template for expert empowerment on which a generation of technocrats sought to build a new philosophy of anti-politics? The question is not merely of historical interest. With experts now in retreat, in the United States, Britain and elsewhere, the story of their rise may hold lessons for the future.

Part of the answer lies in the circumstances that Greenspan inherited. In the United States and elsewhere, central bankers were given space to determine interest rates without political meddling because the existing model had failed. The bullying of central banks by Johnson and Nixon produced the disastrous inflation of the 1970s, with the result that later politicians wanted to be saved from themselves – they stopped harassing central banks, understanding that doing so damaged economic performance and therefore their own reputations. Paul Volcker was a partial beneficiary of this switch: even though some Reagan officials attacked him, others recognised that he must be given the space to drive down inflation. Following Volcker’s tenure, a series of countries, starting with New Zealand, granted formal independence to their central banks. Britain crossed this Rubicon in 1997. In the United States, the Fed’s independence has never been formal. But the climate of opinion on monetary issues offered a measure of protection.

Healthy economic growth was another factor underpinning Greenspan’s exalted status. Globalisation, coupled with the surge of productivity that followed the personal computer revolution, made the 1990s a boom time. The pro-market policies that Greenspan and his fellow experts had long advocated seemed to be delivering the goods, not only in terms of growth but also in falling inequality, lower rates of crime, and lower unemployment for disadvantaged minorities. The legitimacy of experts relies on their presumed ability to deliver progress. In Greenspan’s heyday, experts over-delivered.

Yet these fortunate circumstances are not the whole story. Greenspan amassed more influence and reputation than anyone else because there was something special about him. He was not the sort of expert who wanted to confine politics to its box. To the contrary, he embraced politics, and loved the game. He understood power, and was not afraid to wield it.



Greenspan is regarded as the ultimate geek: obsessed with obscure numbers, convoluted in his speech, awkward in social settings. Yet he was far more worldly than his technocratic manner suggested. He entered public life when he worked for Nixon’s 1968 campaign – not just as an economic adviser, but as a polling analyst. In Nixon’s war room, he allied himself with the future populist presidential candidate Patrick Buchanan, and his memos to Nixon were peppered with ideas on campaign spin and messaging. In 1971, when Nixon went after the Fed chairman, Arthur Burns, Greenspan was recruited to coax Burns into supporting the president. In the mid-1970s, when Greenspan worked in the Gerald Ford administration, he once sneaked into the White House on a weekend to help rewrite a presidential speech, burying an earlier draft penned by a bureaucratic opponent. At the Republican convention in 1980, Greenspan tried to manoeuvre Ford on to Ronald Reagan’s ticket – an outlandish project to get an ex-president to serve as vice president.

Greenspan’s genius was to combine high-calibre expert analysis with raw political methods. He had more muscle than a mere expert and more influence than a mere politician. The combination was especially potent because the first could be a cover for the second: his political influence depended on the perception that he was an expert, and therefore above the fray, and therefore not really political. Unlike politician-politicians, Greenspan’s advice had the ring of objectivity: he was the man who knew the details of the federal budget, the outlook for Wall Street, the political tides as they revealed themselves through polling data. The more complex the problems confronting the president, the more indispensable Greenspan’s expertise became. “He has the best bedside manner I’ve ever seen,” a jealous Ford administration colleague recalled, remarking on Greenspan’s hypnotic effect on his boss. “Extraordinary. That was his favourite word. He’d go in to see Ford and say, ‘Mr President, this is an extraordinarily complex problem.’ And Ford’s eyes would get big and round and start to go around in circles.”

By the time Greenspan became Fed chairman, he was a master of the dark arts of Washington. He went to extraordinary lengths to cultivate allies, fighting through his natural shyness to attend A-list parties, playing tennis with potentially troublesome financial lobbyists, maintaining his contacts on Wall Street, building up his capital by giving valuable counsel to anyone who mattered. Drawing on the advantage of his dual persona, Greenspan offered economic advice to politicians and political advice to economists. When Laura Tyson, an exuberant Berkeley economist, was appointed to chair Bill Clinton’s Council of Economic Advisers, she was flattered to find that the Fed chairman had tips on her speaking style. Too many hand gestures and facial expressions could undermine her credibility, Greenspan observed. The CEA chairwoman should simply present facts, with as little visual commentary as possible.

Greenspan’s critics frequently complained that he was undermining the independence of the Fed by cosying up to politicians. But the critics were 180 degrees wrong: only by building political capital could Greenspan protect the Fed’s prerogatives. Clinton had no natural love for Greenspan: he would sometimes entertain his advisers with a cruel imitation of him – a cheerless old man droning on about inflation. But after a landmark 1993 budget deal and a 1995 bailout of Mexico, Clinton became a firm supporter of the Fed. Greenspan had proved that he had clout. Clinton wanted to be on the right side of him.

The contrast with Greenspan’s predecessor, the rumpled, egg-headed Paul Volcker, is revealing. Volcker lacked Greenspan’s political skills, which is why the Reagan administration succeeded in packing his board with governors who were ready to outvote him. When Greenspan faced a similar prospect, he had the muscle to fight back: in at least one instance, he let his allies in the Senate know that they should block the president’s candidate. Volcker also lacked Greenspan’s facility in dealing with the press – he refused to court public approval and sometimes pretended not to notice a journalist who had been shown into his office to interview him. Greenspan inhabited the opposite extreme: he courted journalists assiduously, opening presents each Christmas at the home of the Wall Street Journal’s Washington bureau chief, Al Hunt, flattering reporters with private interviews even as he berated other Fed governors for leaking to them. It was only fitting that, halfway through his tenure, Greenspan married a journalist whose source he had once been.

The upshot was that Greenspan maximised a form of power that is invaluable to experts. Because journalists admired him, it was dangerous for politicians to pick a fight with the Fed: in any public dispute, the newspaper columnists and talking heads would take Greenspan’s side of the argument. As a result, the long tradition of Fed-bashing ceased almost completely. Every Washington insider understood that Greenspan was too powerful to touch. People who got on the wrong side of him would find their career prospects dim. They would see their intellectual shortcomings exposed. They would find themselves diminished.


Mark Carney, the governor of the Bank of England, in 2015. Photograph: Jonathan Brady/AFP/Getty Images

Of course, the triumph of the expert was bound to be fragile. In democracies, the will of the people can be sidelined only for so long, and 2016 has brought the whirlwind. The Brexit referendum featured Michael Gove’s infamous assertion that “the British people have had enough of experts”. Since the vote, Mark Carney, the Bank of England governor once pictured as superman, has been accused by the government of running dubious monetary experiments that exacerbate inequality – an attack picked up by William Hague, who this week threatened the central bank with the loss of its independence unless it raised interest rates. In the United States, Donald Trump has ripped into intellectuals of all stripes, charging Fed chair Janet Yellen with maintaining a dangerously loose monetary policy in order to help Obama’s poll ratings.

Both Gove and Trump sensed, correctly, that experts were primed for a fall. The inflationary catastrophe sparked by 1970s populism has faded from the public memory, and no longer serves as a cautionary tale. Economies have recovered disappointingly from the 2008 crash – a crash, incidentally, for which Greenspan must share the blame, since he presided over the inflation of the subprime mortgage bubble. What little growth there has been has also passed most people by, since the spoils have been so unequally distributed. If the experts’ legitimacy depends on delivering results, it is hardly surprising that they are on the defensive.

And yet the history of the rise of the experts should remind us of three things. First, the pendulum will swing back, just as it did after the 1970s. The saving grace of anti-expert populists is that they do discredit themselves, simply because policies originating from the gut tend to be lousy. If Donald Trump were to be elected, he would almost certainly cure voters of populism for decades, though the price in the meantime could be frightening. In Britain, which is sliding towards a wreck of a divorce with its most important trading partners, the delusions and confusions of the Brexit camp will probably exact an economic price that will be remembered for a generation.

Second, Alan Blinder had a point: democratic politics is prone to errors and gridlock, and there is much to be said for empowering technocrats. The right balance between democratic accountability and expert input is not impossible to strike: the model of an independent central bank does provide a template. Popularly elected politicians have a mandate to determine the priorities and ambitions of the state, which in turn determine the goals for expert bodies – whether these are central banks, environmental agencies, or the armed forces. But then it behooves the politicians to step back. Democracy is strengthened, not weakened, when it harnesses experts.

Third, however, if the experts want to hasten their comeback, they must study the example of Greenspan’s politicking. It is no use thinking that, in a democracy, facts and analysis are enough to win the day. As the advertising entrepreneur John Kearon has argued, the public has to feel you are correct; the truth has to be sold as well as told; you have to capture the high ground with a brand that is more emotionally compelling than that of your opponents. In this process, as Greenspan’s career demonstrates, the media must be wooed. Enemies must be undermined. And, if you succeed, your face might just appear on a T-shirt.

Two decades ago, in his final and posthumous book, the American cultural critic Christopher Lasch went after contemporary experts. “Elites, who define the issues, have lost touch with the people,” he wrote. “There has always been a privileged class, even in America, but it has never been so dangerously isolated from its surroundings.” These criticisms presciently anticipated the rise of Davos Man – the rootless cosmopolitan elite, unburdened by any sense of obligation to a place of origin, its arrogance enhanced by the conviction that its privilege reflects brains and accomplishment, not luck and inheritance. To survive these inevitable resentments, elites will have to understand that they are not beyond politics – and they will have to demonstrate the skill to earn the public trust, and preserve it by deserving it. Given the alternative, we had better hope that they are up to it.

Tuesday 28 June 2016

Why bad ideas refuse to die

Steven Poole in The Guardian

In January 2016, the rapper B.o.B took to Twitter to tell his fans that the Earth is really flat. “A lot of people are turned off by the phrase ‘flat earth’,” he acknowledged, “but there’s no way u can see all the evidence and not know … grow up.” At length the astrophysicist Neil deGrasse Tyson joined in the conversation, offering friendly corrections to B.o.B’s zany proofs of non-globism, and finishing with a sarcastic compliment: “Being five centuries regressed in your reasoning doesn’t mean we all can’t still like your music.”

Actually, it’s a lot more than five centuries regressed. Contrary to what we often hear, people didn’t think the Earth was flat right up until Columbus sailed to the Americas. In ancient Greece, the philosophers Pythagoras and Parmenides had already recognised that the Earth was spherical. Aristotle pointed out that you could see some stars in Egypt and Cyprus that were not visible at more northerly latitudes, and also that the Earth casts a curved shadow on the moon during a lunar eclipse. The Earth, he concluded with impeccable logic, must be round.

The flat-Earth view was dismissed as simply ridiculous – until very recently, with the resurgence of apparently serious flat-Earthism on the internet. An American named Mark Sargent, formerly a professional videogamer and software consultant, has had millions of views on YouTube for his Flat Earth Clues video series. (“You are living inside a giant enclosed system,” his website warns.) The Flat Earth Society is alive and well, with a thriving website. What is going on?

Many ideas have been brilliantly upgraded or repurposed for the modern age, and their revival seems newly compelling. Some ideas from the past, on the other hand, are just dead wrong and really should have been left to rot. When they reappear, what is rediscovered is a shambling corpse. These are zombie ideas. You can try to kill them, but they just won’t die. And their existence is a big problem for our normal assumptions about how the marketplace of ideas operates.

The phrase “marketplace of ideas” was originally used as a way of defending free speech. Just as traders and customers are free to buy and sell wares in the market, so freedom of speech ensures that people are free to exchange ideas, test them out, and see which ones rise to the top. Just as good consumer products succeed and bad ones fail, so in the marketplace of ideas the truth will win out, and error and dishonesty will disappear.

There is certainly some truth in the thought that competition between ideas is necessary for the advancement of our understanding. But the belief that the best ideas will always succeed is rather like the faith that unregulated financial markets will always produce the best economic outcomes. As the IMF chief Christine Lagarde put this standard wisdom laconically in Davos: “The market sorts things out, eventually.” Maybe so. But while we wait, very bad things might happen.

Zombies don’t occur in physical marketplaces – take technology, for example. No one now buys Betamax video recorders, because that technology has been superseded and has no chance of coming back. (The reason that other old technologies, such as the manual typewriter or the acoustic piano, are still in use is that, according to the preferences of their users, they have not been superseded.) So zombies such as flat-Earthism simply shouldn’t be possible in a well‑functioning marketplace of ideas. And yet – they live. How come?

One clue is provided by economics. It turns out that the marketplace of economic ideas itself is infested with zombies. After the 2008 financial crisis had struck, the Australian economist John Quiggin published an illuminating work called Zombie Economics, describing theories that still somehow shambled around even though they were clearly dead, having been refuted by actual events in the world. An example is the notorious efficient markets hypothesis, which holds, in its strongest form, that “financial markets are the best possible guide to the value of economic assets and therefore to decisions about investment and production”. That, Quiggin argues, simply can’t be right. Not only was the efficient markets hypothesis refuted by the global meltdown of 2007–8, in Quiggin’s view it actually caused it in the first place: the idea “justified, and indeed demanded, financial deregulation, the removal of controls on international capital flows, and a massive expansion of the financial sector. These developments ultimately produced the global financial crisis.”

Even so, an idea will have a good chance of hanging around as a zombie if it benefits some influential group of people. The efficient markets hypothesis is financially beneficial for bankers who want to make deals unencumbered by regulation. A similar point can be made about the privatisation of state-owned industry: it is seldom good for citizens, but is always a cash bonanza for those directly involved.

The marketplace of ideas, indeed, often confers authority through mere repetition – in science as well as in political campaigning. You probably know, for example, that the human tongue has regional sensitivities: sweetness is sensed on the tip, saltiness and sourness on the sides, and bitterness at the back. At some point you’ve seen a scientific tongue map showing this – they appear in cookery books as well as medical textbooks. It’s one of those nice, slightly surprising findings of science that no one questions. And it’s rubbish.

A fantasy map of a flat earth. Photograph: Antar Dayal/Getty Images/Illustration Works

As the eminent professor of biology Stuart Firestein explained in his 2012 book Ignorance: How It Drives Science, the tongue-map myth arose because of a mistranslation of a 1901 German physiology textbook. Regions of the tongue are just “very slightly” more or less sensitive to each of the four basic tastes, but they each can sense all of them. The translation “considerably overstated” the original author’s claims. And yet the mythical tongue map has endured for more than a century.

One of the paradoxes of zombie ideas, though, is that they can have positive social effects. The remedy is not necessarily to suppress them, since even apparently vicious and disingenuous ideas can lead to illuminating rebuttal and productive research. Few would argue that a commercial marketplace needs fraud and faulty products. But in the marketplace of ideas, zombies can actually be useful. Or if not, they can at least make us feel better. That, paradoxically, is what I think the flat-Earthers of today are really offering – comfort.

Today’s rejuvenated flat-Earth philosophy, as promoted by rappers and YouTube videos, is not simply a recrudescence of pre-scientific ignorance. It is, rather, the mother of all conspiracy theories. The point is that everyone who claims the Earth is round is trying to fool you, and keep you in the dark. In that sense, it is a very modern version of an old idea.

As with any conspiracy theory, the flat-Earth idea is introduced by way of a handful of seeming anomalies, things that don’t seem to fit the “official” story. Have you ever wondered, the flat-Earther will ask, why commercial aeroplanes don’t fly over Antarctica? It would, after all, be the most direct route from South Africa to New Zealand, or from Sydney to Buenos Aires – if the Earth were round. But it isn’t. There is no such thing as the South Pole, so flying over Antarctica wouldn’t make any sense. Plus, the Antarctic treaty, signed by the world’s most powerful countries, bans any flights over it, because something very weird is going on there. So begins the conspiracy sell. Well, in fact, some commercial routes do fly over part of the continent of Antarctica. The reason none fly over the South Pole itself is because of aviation rules that require any aircraft taking such a route to have expensive survival equipment for all passengers on board – which would obviously be prohibitive for a passenger jet.

OK, the flat-Earther will say, then what about the fact that photographs taken from mountains or hot-air balloons don’t show any curvature of the horizon? It is perfectly flat – therefore the Earth must be flat. Well, a reasonable person will respond, it looks flat because the Earth, though round, is really very big. But photographs taken from the International Space Station in orbit show a very obviously curved Earth.

And here is where the conspiracy really gets going. To a flat-Earther, any photograph from the International Space Station is just a fake. So too are the famous photographs of the whole round Earth hanging in space that were taken on the Apollo missions. Of course, the Moon landings were faked too. This is a conspiracy theory that swallows other conspiracy theories whole. According to Mark Sargent’s “enclosed world” version of the flat-Earth theory, indeed, space travel had to be faked because there is actually an impermeable solid dome enclosing our flat planet. The US and USSR tried to break through this dome by nuking it in the 1950s: that’s what all those nuclear tests were really about.

Flat-Earthers regard as fake any photographs of the Earth that were taken on the Apollo missions. Photograph: Alamy

The intellectual dynamic here is one of rejection and obfuscation. A lot of ingenuity evidently goes into the elaboration of modern flat-Earth theories to keep them consistent. It is tempting to suppose that some of the leading writers (or, as fans call them, “researchers”) on the topic are cynically having some intellectual fun, but there are also a lot of true believers on the messageboards who find the notion of the “globist” conspiracy somehow comforting and consonant with their idea of how the world works. You might think that the really obvious question here, though, is: what purpose would such an incredibly elaborate and expensive conspiracy serve? What exactly is the point?

It seems to me that the desire to believe such stuff stems from a deranged kind of optimism about the capabilities of human beings. It is a dark view of human nature, to be sure, but it is also rather awe-inspiring to think of secret agencies so single-minded and powerful that they really can fool the world’s population over something so enormous. Even the pro-Brexit activists who warned one another on polling day to mark their crosses with a pen so that MI5 would not be able to erase their votes were in a way expressing a perverse pride in the domination of Britain’s spookocracy. “I literally ran out of new tin hat topics to research and I STILL wouldn’t look at this one without embarrassment,” confesses Sargent on his website, “but every time I glanced at it there was something unresolved, and once I saw the near perfection of the whole plan, I was hooked.” It is rather beautiful. Bonkers, but beautiful. As the much more noxious example of Scientology also demonstrates, it is all too tempting to take science fiction for truth – because narratives always make more sense than reality.

We know that it’s a good habit to question received wisdom. Sometimes, though, healthy scepticism can run over into paranoid cynicism, and giant conspiracies seem oddly consoling. One reason why myths and urban legends hang around so long seems to be that we like simple explanations – such as that immigrants are to blame for crumbling public services – and are inclined to believe them. The “MMR causes autism” scare perpetrated by Andrew Wakefield, for example, had the apparent virtue of naming a concrete cause (vaccination) for a deeply worrying and little-understood syndrome (autism). Years after it was shown that there was nothing to Wakefield’s claims, there is still a strong and growing “anti-vaxxer” movement, particularly in the US, which poses a serious danger to public health. The benefits of immunisation, it seems, have been forgotten.

The yearning for simple explanations also helps to account for the popularity of outlandish conspiracy theories that paint a reassuring picture of all the world’s evils as being attributable to a cabal of supervillains. Maybe a secret society really is running the show – in which case the world at least has a weird kind of coherence. Hence, perhaps, the disappointed amazement among some of those who had not expected their protest votes for Brexit to count.

And what happens when the world of ideas really does operate as a marketplace? It happens to be the case that many prominent climate sceptics have been secretly funded by oil companies. The idea that there is some scientific controversy over whether burning fossil fuels has contributed in large part to the present global warming (there isn’t) is an idea that has been literally bought and sold, and remains extraordinarily successful. That, of course, is just a particularly dramatic example of the way all western democracies have been captured by industry lobbying and party donations, in which friendly consideration of ideas that increase the profits of business is simply purchased, like any other commodity. If the marketplace of ideas worked as advertised, not only would this kind of corruption be absent, it would be impossible in general for ideas to stay rejected for hundreds or thousands of years before eventually being revived. Yet that too has repeatedly happened.

While the return of flat-Earth theories is silly and rather alarming, it also illustrates some real and deep issues about human knowledge. How, after all, do you or I know that the Earth really is round? Essentially, we take it on trust. We may have experienced some common indications of it ourselves, but we accept the explanations of others. The experts all say the Earth is round; we believe them, and get on with our lives. Rejecting the economic consensus that Brexit would be bad for the UK, Michael Gove said that the British public had had enough of experts (or at least of experts who lurked in acronymically named organisations), but the truth is that we all depend on experts for most of what we think we know.

The second issue is that we cannot actually know for sure that the way the world appears to us is not actually the result of some giant conspiracy or deception. The modern flat-Earth theory comes quite close to an even more all-encompassing species of conspiracy theory. As some philosophers have argued, it is not entirely impossible that God created the whole universe, including fossils, ourselves and all our (false) memories, only five minutes ago. Or it might be the case that all my sensory impressions are being fed to my brain by a clever demon intent on deceiving me (Descartes) or by a virtual-reality program controlled by evil sentient artificial intelligences (The Matrix).

The resurgence of flat-Earth theory has also spawned many web pages that employ mathematics, science, and everyday experience to explain why the world actually is round. This is a boon for public education. And we should not give in to the temptation to conclude that belief in a conspiracy is prima facie evidence of stupidity. Evidently, conspiracies really happen. Members of al-Qaida really did conspire in secret to fly planes into the World Trade Center. And, as Edward Snowden revealed, the American and British intelligence services really did conspire in secret to intercept the electronic communications of millions of ordinary citizens. Perhaps the most colourful official conspiracy that we now know of happened in China. When the half-millennium-old Tiananmen Gate was found to be falling down in the 1960s, it was secretly replaced, bit by bit, with an exact replica, in a successful conspiracy that involved nearly 3,000 people who managed to keep it a secret for years.

Indeed, a healthy openness to conspiracy may be said to underlie much honest intellectual inquiry. This is how the physicist Frank Wilczek puts it: “When I was growing up, I loved the idea that great powers and secret meanings lurk behind the appearance of things.” Newton’s grand idea of an invisible force (gravity) running the universe was definitely a cosmological conspiracy theory in this sense. Yes, many conspiracy theories are zombies – but so is the idea that conspiracies never happen.

 
‘When the half-millennium-old Tiananmen Gate was found to be falling down in the 1960s, it was secretly replaced, bit by bit, with an exact replica’ Photograph: Kevin Frayer/Getty Images

Things are better, one assumes, in the rarefied marketplace of scientific ideas. There, the revered scientific journals have rigorous editorial standards. Zombies and other market failures are thereby prevented. Not so fast. Remember the tongue map.

The scientific community operates according to the system of peer review, in which an article submitted to a journal will be sent out by the editor to several anonymous referees who are expert in the field and will give a considered view on whether the paper is worthy of publication, or will be worthy if revised. (In Britain, the Royal Society began to seek such reports in 1832.) The barriers to entry for the best journals in the sciences and humanities mean that – at least in theory – it is impossible to publish clownish, evidence-free hypotheses.

But there are increasing rumblings in the academic world itself that peer review is fundamentally broken – even that it actively suppresses good new ideas while letting through a multitude of very bad ones. “False positives and exaggerated results in peer-reviewed scientific studies have reached epidemic proportions in recent years,” reported Scientific American magazine in 2011. Indeed, the writer of that column, a professor of medicine named John Ioannidis, had previously published a famous paper titled Why Most Published Research Findings Are False. The issues, he noted, are particularly severe in healthcare research, in which conflicts of interest arise because studies are funded by large drug companies, but there is also a big problem in psychology.

Take the widely popularised idea of priming. In 1996, a paper was published claiming that experimental subjects who had been verbally primed to think of old age by being made to think about words such as bingo, Florida, grey, and wrinkles subsequently walked more slowly when they left the laboratory than those who had not been primed. It was a dazzling idea, and led to a flurry of other findings that priming could affect how well you did on a quiz, or how polite you were to a stranger. In recent years, however, researchers have become suspicious, and have not been able to replicate the findings of many of the early studies. This is not definitive proof of falsity, but it does show that publication in a peer-reviewed journal is no guarantee of reliability. Psychology, some argue, is currently going through a crisis in replicability, which Daniel Kahneman has called a looming “train wreck” for the field as a whole.

Could priming be a future zombie idea? Well, most people think it unlikely that all such priming effects will be refuted, since there is now such a wide variety of studies on them. The more interesting problem is to work out what scientists call the idea’s “ecological validity” – that is, how well do the effects translate from the artificial simplicity of the lab situation to the ungovernable messiness of real life? This controversy in psychology just shows science working as it should – being self-correcting. One marketplace-of-ideas problem here, though, is that papers with surprising and socially intriguing results will be described throughout the media, and lauded as definitive evidence in popularising books, as soon as they are published, and long before awkward second questions begin to be asked.

It would be sensible, for a start, to make an apparently trivial rhetorical adjustment: retire the popular phrase “studies show …” and limit ourselves to phrases such as “studies suggest” or “studies indicate”. After all, “showing” strongly implies proving, which is all too rare an activity outside mathematics. Studies can always be reconsidered. That is part of their power.

Nearly every academic inquirer I talked to while researching this subject says that the interface of research with publishing is seriously flawed. Partly because the incentives are all wrong – a “publish or perish” culture rewards academics for quantity of published research over quality. And partly because of the issue of “publication bias”: the studies that get published are the ones that have yielded hoped-for results. Studies that fail to show what they hoped for end up languishing in desk drawers.
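
To get a feel for how damaging publication bias can be, here is a toy simulation – my own sketch, not drawn from any of the studies mentioned – in which every experiment tests an effect that does not exist, and only the “significant” results reach print.

import random
import statistics as stats
from math import sqrt

# Toy model of publication bias: every experiment here tests a null effect,
# yet only "significant" results (p < 0.05) get published.
random.seed(42)
n, trials, published = 30, 1000, 0

for _ in range(trials):
    a = [random.gauss(0, 1) for _ in range(n)]  # control group
    b = [random.gauss(0, 1) for _ in range(n)]  # "treated" group, same distribution
    se = sqrt(stats.variance(a) / n + stats.variance(b) / n)
    z = (stats.mean(a) - stats.mean(b)) / se     # crude two-sample test statistic
    if abs(z) > 1.96:                            # two-sided test at the 5% level
        published += 1

print(published, "of", trials, "null experiments look publishable")
# Expect roughly 50 – and if only those 50 reach print, every published
# finding in this little literature is a false positive.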

One reform suggested by many people to counteract publication bias would be to encourage the publication of more “negative findings” – papers where a hypothesis was not backed up by the experiment performed. One problem, of course, is that such findings are not very exciting. Negative results do not make headlines. (And they sound all the duller for being called “negative findings”, rather than being framed as positive discoveries that some ideas won’t fly.)

The publication-bias issue is even more pressing in the field of medicine, where it is estimated that the results of around half of all trials conducted are never published at all, because their results are negative. “When half the evidence is withheld,” writes the medical researcher Ben Goldacre, “doctors and patients cannot make informed decisions about which treatment is best.” Accordingly, Goldacre has kickstarted a campaigning group named AllTrials to demand that all results be published.

When lives are not directly at stake, however, it might be difficult to publish more negative findings in other areas of science. One idea, floated by the Economist, is that “Journals should allocate space for ‘uninteresting’ work, and grant-givers should set aside money to pay for it.” It sounds splendid, to have a section in journals for tedious results, or maybe an entire journal dedicated to boring and perfectly unsurprising research. But good luck getting anyone to fund it.

The good news, though, is that some of the flaws in the marketplace of scientific ideas might be hidden strengths. It’s true that some people think peer review, at its glacial pace and with its bias towards the existing consensus, works to actively repress new ideas that are challenging to received opinion. Notoriously, for example, the paper that first announced the invention of graphene – a way of arranging carbon in a sheet only a single atom thick – was rejected by Nature in 2004 on the grounds that it was simply “impossible”. But that idea was too impressive to be suppressed; in fact, the authors of the graphene paper had it published in Science magazine only six months later. Most people have faith that very well-grounded results will find their way through the system. Yet it is right that doing so should be difficult. If this marketplace were more liquid and efficient, we would be overwhelmed with speculative nonsense. Even peremptory or aggressive dismissals of new findings have a crucial place in the intellectual ecology. Science would not be so robust a means of investigating the world if it eagerly embraced every shiny new idea that comes along. It has to put on a stern face and say: “Impress me.” Great ideas may well face a lot of necessary resistance, and take a long time to gain traction. And we wouldn’t wish things to be otherwise.

In many ways, then, the marketplace of ideas does not work as advertised: it is not efficient, there are frequent crashes and failures, and dangerous products often win out, to widespread surprise and dismay. It is important to rethink the notion that the best ideas reliably rise to the top: that itself is a zombie idea, which helps entrench powerful interests. Yet even zombie ideas can still be useful when they motivate energetic refutations that advance public education. Yes, we may regret that people often turn to the past to renew an old theory such as flat-Earthism, which really should have stayed dead. But some conspiracies are real, and science is always engaged in trying to uncover the hidden powers behind what we see. The resurrection of zombie ideas, as well as the stubborn rejection of promising new ones, can both be important mechanisms for the advancement of human understanding.

Sunday 26 June 2016

There are liars and then there’s Boris Johnson and Michael Gove

Nick Cohen in The Guardian


The Brexit figureheads had no plan besides exploiting populist fears and dismissing experts who rubbished their thinking


‘Prospered by treating public life as a game’: Boris Johnson leaves his home in Oxfordshire on Saturday. Photograph: Peter Nicholls/Reuters




Where was the champagne at the Vote Leave headquarters? The happy tears and whoops of joy? If you believed Boris Johnson and Michael Gove, the Brexit vote was a moment of national liberation, a day that Nigel Farage said our grateful children would celebrate with an annual bank holiday.

Johnson and Gove had every reason to celebrate. The referendum campaign showed the only arguments that matter now in England are on the right. With the Labour leadership absent without leave and the Liberal Democrats and Greens struggling to be heard, the debate was between David Cameron and George Osborne, defending the status quo, and the radical right, demanding its destruction. Johnson and Gove won a dizzying victory with the potential to change every aspect of national life, from workers’ rights to environmental protection.

Yet they gazed at the press with coffin-lid faces and wept over the prime minister they had destroyed. David Cameron was “brave and principled”, intoned Johnson. “A great prime minister”, muttered Gove. Like Goneril and Regan competing to offer false compliments to Lear, they covered the leader they had doomed with hypocritical praise. No one whoops at a funeral, especially not mourners who are glad to see the back of the deceased. But I saw something beyond hypocrisy in those frozen faces: the fear of journalists who have been found out.

The media do not damn themselves, so I am speaking out of turn when I say that if you think rule by professional politicians is bad wait until journalist politicians take over. Johnson and Gove are the worst journalist politicians you can imagine: pundits who have prospered by treating public life as a game. Here is how they play it. They grab media attention by blaring out a big, dramatic thought. An institution is failing? Close it. A public figure blunders? Sack him. They move from journalism to politics, but carry on as before. When presented with a bureaucratic EU that sends us too many immigrants, they say the answer is simple, as media answers must be. Leave. Now. Then all will be well.

Johnson and Gove carried with them a second feature of unscrupulous journalism: the contempt for practical questions. Never has a revolution in Britain’s position in the world been advocated with such carelessness. The Leave campaign has no plan. And that is not just because there was a shamefully under-explored division between the bulk of Brexit voters who wanted the strong welfare state and solid communities of their youth and the leaders of the campaign who wanted Britain to become an offshore tax haven. Vote Leave did not know how to resolve difficulties with Scotland, Ireland, the refugee camp at Calais, and a thousand other problems, and did not want to know either.

It responded to all who predicted the chaos now engulfing us like an unscrupulous pundit who knows that his living depends on shutting up the experts who gainsay him. For why put the pundit on air, why pay him a penny, if experts can show that everything he says is windy nonsense? The worst journalists, editors and broadcasters know their audiences want entertainment, not expertise. If you doubt me, ask when you last saw panellists on Question Time who knew what they were talking about.

Naturally, Michael Gove, former Times columnist, responded to the thousands of economists who warned he was taking an extraordinary risk with the sneer that will follow him to his grave: “People in this country have had enough of experts.” He’s been saying the same for years.

If sneers won’t work, the worst journalists lie. The Times fired Johnson for lying to its readers. Michael Howard fired Johnson for lying to him. When he’s cornered, Johnson accuses others of his own vices, as unscrupulous journalists always do. Those who question him are the true liars, he blusters, whose testimony cannot be trusted because, as he falsely said of the impeccably honest chairman of the UK Statistics Authority, they are “stooges”.

The Vote Leave campaign followed the tactics of the sleazy columnist to the letter. First, it came out with the big, bold solution: leave. Then it dismissed all who raised well-founded worries with “the country is sick of experts”. Then, like Johnson the journalist, it lied.

I am not going to be over-dainty about mendacity. Politicians, including Remain politicians, lie, as do the rest of us. But not since Suez has the nation’s fate been decided by politicians who knowingly made a straight, shameless, incontrovertible lie the first plank of their campaign. Vote Leave assured the electorate it would reclaim a supposed £350m Brussels takes from us each week. They knew it was a lie. Between them, they promised to spend £111bn on the NHS, cuts to VAT and council tax, higher pensions, a better transport system and replacements for the EU subsidies to the arts, science, farmers and deprived regions. When boring experts said that, far from being rich, we would face a £40bn hole in our public finances, Vote Leave knew how to fight back. In Johnsonian fashion, it said that the truth tellers were corrupt liars in Brussels’ pocket.

Now they have won and what Kipling said of the demagogues of his age applies to Michael Gove, Boris Johnson and Nigel Farage.


I could not dig; I dared not rob:
Therefore I lied to please the mob.
Now all my lies are proved untrue
And I must face the men I slew.
What tale shall serve me here among
Mine angry and defrauded young?



The real division in Britain is not between London and the north, Scotland and Wales or the old and young, but between Johnson, Gove and Farage and the voters they defrauded. What tale will serve them now? On Thursday, they won by promising cuts in immigration. On Friday, Johnson and the Eurosceptic ideologue Dan Hannan said that in all probability the number of foreigners coming here won’t fall. On Thursday, they promised the economy would boom. By Friday, the pound was at a 30-year low and Daily Mail readers holidaying abroad were learning not to believe what they read in the papers. On Thursday, they promised £350m extra a week for the NHS. On Friday, it turns out there are “no guarantees”.

If we could only find a halfway competent opposition, the very populist forces they have exploited and misled so grievously would turn on them. The fear in their eyes shows that they know it.

Sunday 15 May 2016

How Little Do Experts Know – On Ranieri and Leicester, One Media Expert Apologises

In July of last year I may have written an article suggesting that the Italian was likely to get Leicester City relegated from the Premier League

 
Leicester City manager Claudio Ranieri lifts the Premier League trophy. Photograph: Carl Recine/Reuters


Marcus Christenson in The Guardian


No one likes to be wrong. It is much nicer to be right. In life, however, it is not possible to be right all the time. We all try our best but there are times when things go horribly wrong.

I should know. In July last year I sat down to write an article about Claudio Ranieri. The 63-year-old had just been appointed the new manager of Leicester City and I decided, in my capacity as football editor at the Guardian, that I was the right person to write that piece.

I made that decision based on the following: I have lived and worked as a journalist in Italy and have followed Ranieri’s career fairly closely since his early days in management. I also made sure that I spoke to several people in Greece, where Ranieri’s last job before replacing Nigel Pearson at Leicester had ended in disaster, with the team losing against the Faroe Islands and the manager getting sacked.

It was quite clear to me that this was a huge gamble by Leicester and that it was unlikely to end well. And I was hardly the only one to be sceptical. Gary Lineker, the former Leicester striker and now Match of the Day presenter, tweeted “Claudio Ranieri? Really?” and followed it up by saying: “Claudio Ranieri is clearly experienced, but this is an uninspired choice by Leicester. It’s amazing how the same old names keep getting a go on the managerial merry-go-round.”

I started my article by explaining what had gone wrong in Greece (which was several things) before moving on to talk about the rest of his long managerial career, pointing out that he had never won a league title in any country, nor had he stayed at any club for more than two seasons since being in charge of Chelsea at the beginning of the 2000s.

I threw in some light-hearted “lines”, such as the fact that he was the manager in charge of Juventus when they signed Christian Poulsen (not really a Juventus kind of player) and proclaimed that the appointment was “baffling”.

I added: “In some ways, it seems as if the Leicester owners went looking for the anti-Nigel Pearson. Ranieri is not going to call a journalist an ostrich. He is not going to throttle a player during a match. He is not going to tell a supporter to ‘fuck off and die’, no matter how bad the abuse gets.”


Claudio Ranieri instructs his players during Greece’s defeat by the Faroe Islands, the Italian’s last game in charge of the Euro 2004 winners. Photograph: Thanassis Stavrakis/AP

Rather pleased with myself – thinking that I was giving the readers a good insight to the man and the manager – I also put a headline on the piece, which read: “Claudio Ranieri: the anti-Pearson … and the wrong man for Leicester City?”

I did not think much more of the piece until a few months later when Leicester were top of the league and showing all the signs of being capable of staying there.

After a while, the tweets started to appear from people pointing out that I may not have called this one right. As the season wore on, these tweets became more and more frequent, and they have been sent to me after every Leicester win since the turn of the year.

At some point in February I decided to go back and look at the piece again. It made for uncomfortable reading. I had said that describing his spell in charge of Greece as “poor” would be an understatement. I wrote that 11 years after being given the nickname “Tinkerman” because he changed his starting XI so often when in charge of Chelsea, he was still an incorrigible “Tinkerman”.

It gets worse. “Few will back him to succeed but one thing is for sure: he will conduct himself in an honourable and humble way, as he always has done,” the article said. “If Leicester wanted someone nice, they’ve got him. If they wanted someone to keep them in the Premier League, then they may have gone for the wrong guy.”

Ouch. Reading it back again I was faced with a couple of uncomfortable questions, the key one being “who do you think you are, writing such a snobbish piece about a dignified man and a good manager?”

The second question was a bit easier to answer. Was this as bad as the “In defence of Nicklas Bendtner” article I wrote a couple of years ago? (The answer is “no”, by the way; few things come close to an error of judgment of that scale.)

I would like to point out a few things though. I did get – as a very kind colleague pointed out – 50% of that last paragraph right. He clearly is a wonderful human being and when Paolo Bandini spoke to several of his former players recently, one thing stood out: the incredible affection they still feel for this gentle 64-year-old.

All in all, though, there is no point defending the indefensible: I could not have got it more wrong.


At the start of this piece I said that no one likes to be wrong. Well, I was wrong about that too. I’ve enjoyed every minute of being embarrassingly wrong this season. Leicester is the best story that could have happened to football in this country, their triumph giving hope to all of us who want to start a season dreaming that something unthinkable might happen.

So thank you Leicester and thank you Claudio, it’s been quite wonderful.

Saturday 1 March 2014

Can 10,000 hours of practice make you an expert?

By Ben Carter BBC News

A much-touted theory suggests that practising any skill for 10,000 hours is sufficient to make you an expert. No innate talent? Not a problem. You just practise. But is it true?
One man who decided to test it is Dan McLaughlin, 34, a former commercial photographer from Portland, Oregon.
"The idea came in 2009. I was visiting my brother and we decided to play a par three, nine-hole course," he says. "I had never really been on a golf course and went out and shot a 57, which is horrible. It's 30 over par on an easy nine-hole course."
Far from being discouraged by his apparent lack of any natural talent for golf, Dan and his brother started talking about what it would take to become a professional golfer. Dan soon decided he wanted to try.
"When I announced I was going to quit my job, my co-workers started bringing books in and I read Malcolm Gladwell's Outliers, Geoff Colvin's Talent is Overrated and The Talent Code by Daniel Coyle," he says. "These books all had this idea of 10,000 hours in them."
The 10,000-hours concept can be traced back to a 1993 paper written by Anders Ericsson, a professor at the University of Colorado, called The Role of Deliberate Practice in the Acquisition of Expert Performance.
It highlighted the work of a group of psychologists in Berlin, who had studied the practice habits of violin students in childhood, adolescence and adulthood.
All had begun playing at roughly five years of age with similar practice times. However, at age eight, practice times began to diverge. By age 20, the elite performers had averaged more than 10,000 hours of practice each, while the less able performers had only done 4,000 hours of practice.
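A back-of-the-envelope check – my own arithmetic, not the paper's – shows what those totals mean in daily terms:

# Average daily practice implied by the Berlin study's totals,
# assuming playing began at age five and the count runs to age 20.
years = 20 - 5
for label, total_hours in [("elite", 10_000), ("less able", 4_000)]:
    per_day = total_hours / years / 365
    print(f"{label}: about {per_day:.1f} hours a day, averaged over {years} years")
# elite: about 1.8 hours a day; less able: about 0.7
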
The psychologists didn't see any naturally gifted performers emerge and this surprised them. If natural talent had played a role it wouldn't have been unreasonable to expect gifted performers to emerge after, say, 5,000 hours.
Anders Ericsson concluded that "many characteristics once believed to reflect innate talent are actually the result of intense practice extended for a minimum of 10 years".
It is Malcolm Gladwell's hugely popular book, Outliers, that is largely responsible for introducing "the 10,000-hour rule" to a mass audience - it's the name of one of the chapters.
But Ericsson was not pleased. He wrote a rebuttal paper in 2012, called The Danger of Delegating Education to Journalists.
"The 10,000-hour rule was invented by Malcolm Gladwell who stated that, 'Researchers have settled on what they believe is the magic number for true expertise: 10,000 hours.' Gladwell cited our research on expert musicians as a stimulus for his provocative generalisation to a magical number," Ericsson writes.
Ericsson then pointed out that 10,000 was an average, and that many of the best musicians in his study had accumulated "substantially fewer" hours of practice. He underlined, also, that the quality of the practice was important.
"In contrast, Gladwell does not even mention the concept of deliberate practice," Ericsson writes.
Gladwell counters that Ericsson doesn't really think that talent exists.
"When he disagrees with the way I interpreted his work, it's because I disagree with him," he says.
"I think that being very, very good at something requires a big healthy dose of natural talent. And when I talk about the Beatles - they had masses of natural talent. They were born geniuses. Ericsson wouldn't say that.
"Ericsson, if you read some of his writings, is... saying the right kind of practice is sufficient."
Gladwell places himself roughly in the middle of a sliding scale with Ericsson at one end, placing little emphasis on the role of natural talent, and at the other end a writer such as David Epstein, author of The Sports Gene. Epstein is "a bit more of a talent person than me", Gladwell suggests.
One of the difficulties with assessing whether expert-level performance can be obtained just through practice is that most studies are done after the subjects have reached that level.
It would be better to follow the progress of someone with no innate talent in a particular discipline who chooses to complete 10,000 hours of deliberate practice in it.
And we can, thanks to our wannabe professional golfer, Dan McLaughlin.
"I began the plan in April 2010 and I basically putted from one foot and slowly worked away from the hole," he says.
"Eighteen months into it I hit my first driver and now it's approaching four years and I'm about half way. So I'm 5,000 hours into the project. My current handicap is right at a 4.1 and the goal is to get down to a plus handicap [below zero] where I have the skill set to compete in a legitimate PGA tour event."
David Epstein hopes that McLaughlin can reach his goal, but he has some doubts. In the sporting world, innate ability is mandatory, he believes.
A recent study of baseball players, Epstein points out, found that the average player had 20/13 vision as opposed to normal 20/20 vision. What this means is that they can see at 20 feet what a normal person would need to be at 13 feet to see clearly. That gives a hitter an enormous advantage when it comes to striking a ball being thrown towards them at 95mph from 60 feet (or 153km/h from 18m).
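It is worth pausing on how little time that leaves. A simplified calculation – my own, assuming the ball travels the full 60 feet at a constant 95mph – gives the batter well under half a second:

# Flight time of a 95 mph pitch over 60 feet, assuming constant speed.
speed_fps = 95 * 5280 / 3600       # miles per hour -> feet per second (~139 ft/s)
flight_ms = 60 / speed_fps * 1000
print(f"about {flight_ms:.0f} milliseconds from release to plate")
# about 431 milliseconds, in which to read the pitch and start the swing
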
Using an analogy from computing, Epstein says the hardware is someone's visual acuity - or the physiology of their eye that they cannot change - while the software is the set of skills they learn by many, many hours of practice.
"No matter how good their vision is, it's like a laptop with only the hardware - with no programmes on it, it's useless. But once they've downloaded that software, once they have learned those sports-specific skills, the better the hardware is the better the total machine is going to be."
But is there a simpler way to think about all this? Maybe talented people just practise more and try harder at the thing they're already good at - because they enjoy it?
"Imagine being in calculus class on your first day and the teacher being at the board writing an equation, and you look at it and think 'Wow, that's the most beautiful thing I've ever seen,' which some people do," says Gladwell.
"For those people to go home and do two hours of calculus homework is thrilling, whereas for the rest of us it's beyond a chore and more like a nightmare.
"Those that have done the two hours' practice come in the following day and everything is easier than it is for those who didn't enjoy it in the first place and didn't do the two hours' homework."
What Dan McLaughlin is hoping is that what he lacks in innate talent he more than makes up for with his 10,000 hours of deliberate practice.
If Dan's plan goes well he could be mixing it with the likes of Tiger Woods and Rory McIlroy in 2018. If not, he will just be a very good golfer.