Economies are not delivering for most citizens because of weak competition, feeble productivity growth and tax loopholes, writes Martin Wolf in The FT
“While each of our individual companies serves its own corporate purpose, we share a fundamental commitment to all of our stakeholders.”
With this sentence, the US Business Roundtable, which represents the chief executives of 181 of the world’s largest companies, abandoned its longstanding view that “corporations exist principally to serve their shareholders”.
This is certainly a moment. But what does — and should — that moment mean? The answer needs to start with acknowledgment of the fact that something has gone very wrong. Over the past four decades, and especially in the US, the most important country of all, we have observed an unholy trinity of slowing productivity growth, soaring inequality and huge financial shocks.
As Jason Furman of Harvard University and Peter Orszag of Lazard Frères noted in a paper last year: “From 1948 to 1973, real median family income in the US rose 3 per cent annually. At this rate . . . there was a 96 per cent chance that a child would have a higher income than his or her parents. Since 1973, the median family has seen its real income grow only 0.4 per cent annually . . . As a result, 28 per cent of children have lower income than their parents did.”
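It is worth pausing on what those two growth rates mean when compounded. The short Python sketch below is purely illustrative: the 3 per cent and 0.4 per cent rates come from the quotation above, while the 25-year “generation” is an assumed horizon, not a figure from the paper.

```python
# Purely illustrative: compound the two growth rates quoted above
# over an assumed 25-year generation.
def grow(income: float, rate: float, years: int) -> float:
    """Compound `income` at annual `rate` for `years` years."""
    return income * (1 + rate) ** years

start = 100.0                      # index of real median family income
postwar = grow(start, 0.03, 25)    # 1948-73 regime
recent = grow(start, 0.004, 25)    # post-1973 regime

print(f"3.0% a year for 25 years: {postwar:.0f}")   # ~209
print(f"0.4% a year for 25 years: {recent:.0f}")    # ~110
```

At the postwar rate a generation ends up with roughly double its parents’ real income; at the post-1973 rate, with barely a tenth more, which is why the odds of out-earning one’s parents collapsed.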
So why is the economy not delivering? The answer lies, in large part, with the rise of rentier capitalism. In this case “rent” means rewards over and above those required to induce the desired supply of goods, services, land or labour. “Rentier capitalism” means an economy in which market and political power allows privileged individuals and businesses to extract a great deal of such rent from everybody else.
That does not explain every disappointment. As Robert Gordon, professor of social sciences at Northwestern University, argues, fundamental innovation slowed after the mid-20th century. Technology has also created greater reliance on graduates and raised their relative wages, explaining part of the rise of inequality. But the share of the top 1 per cent of US earners in pre-tax income jumped from 11 per cent in 1980 to 20 per cent in 2014. This was not mainly the result of such skill-biased technological change.
If one listens to the political debates in many countries, notably the US and UK, one would conclude that the disappointment is mainly the fault of imports from China or low-wage immigrants, or both. Foreigners are ideal scapegoats. But the notion that rising inequality and slow productivity growth are due to foreigners is simply false.
Every western high-income country trades more with emerging and developing countries today than it did four decades ago. Yet increases in inequality have varied substantially. The outcome depended on how the institutions of the market economy behaved and on domestic policy choices.
Harvard economist Elhanan Helpman ends his overview of a huge academic literature on the topic with the conclusion that “globalisation in the form of foreign trade and offshoring has not been a large contributor to rising inequality. Multiple studies of different events around the world point to this conclusion.”
The shift in the location of much manufacturing, principally to China, may have lowered investment in high-income economies a little. But this effect cannot have been powerful enough to reduce productivity growth significantly. To the contrary, the shift in the global division of labour induced high-income economies to specialise in skill-intensive sectors, where there was more potential for fast productivity growth.
Donald Trump, a naive mercantilist, focuses, instead, on bilateral trade imbalances as a cause of job losses. These deficits reflect bad trade deals, the American president insists. It is true that the US has overall trade deficits, while the EU has surpluses. But their trade policies are quite similar. Trade policies do not explain bilateral balances. Bilateral balances, in turn, do not explain overall balances. The latter are macroeconomic phenomena. Both theory and evidence concur on this.
The economic impact of immigration has also been small, however big the political and cultural “shock of the foreigner” may be. Research strongly suggests that the effect of immigration on the real earnings of the native population and on receiving countries’ fiscal position has been small and frequently positive.
Far more productive than this politically rewarding, but mistaken, focus on the damage done by trade and migration is an examination of contemporary rentier capitalism itself.
Finance plays a key role, with several dimensions. Liberalised finance tends to metastasise, like a cancer. Thus, the financial sector’s ability to create credit and money finances its own activities, incomes and (often illusory) profits.
A 2015 study by Stephen Cecchetti and Enisse Kharroubi for the Bank for International Settlements said “the level of financial development is good only up to a point, after which it becomes a drag on growth, and that a fast-growing financial sector is detrimental to aggregate productivity growth”. When the financial sector grows quickly, they argue, it hires talented people. These then lend against property, because it generates collateral. This is a diversion of talented human resources into unproductive, useless directions.
Again, excessive growth of credit almost always leads to crises, as Carmen Reinhart and Kenneth Rogoff showed in This Time is Different. This is why no modern government dares let the supposedly market-driven financial sector operate unaided and unguided. But that in turn creates huge opportunities to gain from irresponsibility: heads, they win; tails, the rest of us lose. Further crises are guaranteed.
Finance also creates rising inequality. Thomas Philippon of the Stern School of Business and Ariell Reshef of the Paris School of Economics showed that the relative earnings of finance professionals exploded upwards in the 1980s with the deregulation of finance. They estimated that “rents” — earnings over and above those needed to attract people into the industry — accounted for 30-50 per cent of the pay differential between finance professionals and the rest of the private sector.
This explosion of financial activity since 1980 has not raised the growth of productivity. If anything, it has lowered it, especially since the crisis. The same is true of the explosion in pay of corporate management, yet another form of rent extraction. As Deborah Hargreaves, founder of the High Pay Centre, notes, in the UK the ratio of average chief executive pay to that of average workers rose from 48 to one in 1998 to 129 to one in 2016. In the US, the same ratio rose from 42 to one in 1980 to 347 to one in 2017.
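Those ratios imply a remarkably steady compounding of executive pay relative to workers’ pay. A quick check, using only the figures cited above (a sketch, not part of Hargreaves’s analysis):

```python
# Implied compound annual growth of the CEO-to-worker pay ratio,
# using only the figures cited above.
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

uk = cagr(48, 129, 2016 - 1998)   # ~5.6% a year
us = cagr(42, 347, 2017 - 1980)   # ~5.9% a year

print(f"UK ratio growth, 1998-2016: {uk:.1%} a year")
print(f"US ratio growth, 1980-2017: {us:.1%} a year")
```

In both countries the ratio compounds at roughly 6 per cent a year, decade after decade: steady extraction rather than a one-off jump.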
As the US essayist HL Mencken wrote: “For every complex problem, there is an answer that is clear, simple and wrong.” Pay linked to the share price gave management a huge incentive to raise that price, by manipulating earnings or borrowing money to buy the shares. Neither adds value to the company. But they can add a great deal of wealth to management. A related problem with governance is conflicts of interest, notably over independence of auditors.
In sum, personal financial considerations permeate corporate decision-making. As the independent economist Andrew Smithers argues in Productivity and the Bonus Culture, this comes at the expense of corporate investment and so of long-run productivity growth.
A possibly still more fundamental issue is the decline of competition. Mr Furman and Mr Orszag say there is evidence of increased market concentration in the US, a lower rate of entry of new firms and a lower share of young firms in the economy compared with three or four decades ago. Work by the OECD and Oxford Martin School also notes widening gaps in productivity and profit mark-ups between the leading businesses and the rest. This suggests weakening competition and rising monopoly rent. Moreover, a great deal of the increase in inequality arises from radically different rewards for workers with similar skills in different firms: this, too, is a form of rent extraction.
A part of the explanation for weaker competition is “winner-takes-almost-all” markets: superstar individuals and their companies earn monopoly rents, because they can now serve global markets so cheaply. The network externalities — benefits of using a network that others are using — and zero marginal costs of platform monopolies (Facebook, Google, Amazon, Alibaba and Tencent) are the dominant examples.
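A stylised toy model makes the “winner-takes-almost-all” logic concrete. Assume a large fixed cost, a near-zero marginal cost and Metcalfe-style network value; every number below is invented for illustration and describes no particular platform.

```python
# Stylised platform economics: a large fixed cost, a near-zero
# marginal cost, and Metcalfe-style network value. All numbers are
# invented for illustration and describe no particular platform.
F = 1_000_000.0   # assumed fixed cost of building the platform
c = 0.01          # assumed marginal cost of serving one more user

for users in (1_000, 100_000, 10_000_000):
    avg_cost = F / users + c          # falls toward c as users grow
    value_per_user = users            # Metcalfe-style: rises with scale
    print(f"{users:>10,} users: average cost {avg_cost:,.2f}, "
          f"value per user ~ {value_per_user:,}")
```

Average cost falls toward nothing as users multiply, while the value of joining rises with network size, so the largest network tends to take the whole market.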
Another such natural force is the network externalities of agglomerations, stressed by Paul Collier in The Future of Capitalism. Successful metropolitan areas — London, New York, the Bay Area in California — generate powerful feedback loops, attracting and rewarding talented people. This disadvantages businesses and people trapped in left-behind towns. Agglomerations, too, create rents, not just in property prices, but also in earnings.
Yet monopoly rent is not just the product of such natural — albeit worrying — economic forces. It is also the result of policy. In the US, Yale University law professor Robert Bork argued in the 1970s that “consumer welfare” should be the sole objective of antitrust policy. As with shareholder value maximisation, this oversimplified highly complex issues. In this case, it led to complacency about monopoly power, provided prices stayed low. Yet tall trees deprive saplings of the light they need to grow. So, too, may giant companies.
Some might argue, complacently, that the “monopoly rent” we now see in leading economies is largely a sign of the “creative destruction” lauded by the Austrian economist Joseph Schumpeter. In fact, we are not seeing enough creation, destruction or productivity growth to support that view convincingly.
A disreputable aspect of rent-seeking is radical tax avoidance. Corporations (and so also shareholders) benefit from the public goods — security, legal systems, infrastructure, educated workforces and sociopolitical stability — provided by the world’s most powerful liberal democracies. Yet they are also in a perfect position to exploit tax loopholes, especially those companies whose location of production or innovation is difficult to determine.
The biggest challenges within the corporate tax system are tax competition and base erosion and profit shifting. We see the former in falling tax rates. We see the latter in the location of intellectual property in tax havens, in charging tax-deductible debt against profits accruing in higher-tax jurisdictions and in rigging transfer prices within firms.
A 2015 study by the IMF calculated that base erosion and profit shifting reduced long-run annual revenue in OECD countries by about $450bn (1 per cent of gross domestic product) and in non-OECD countries by slightly over $200bn (1.3 per cent of GDP). These are significant figures in the context of a tax that raised an average of only 2.9 per cent of GDP in 2016 in OECD countries and just 2 per cent in the US.
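To see why those figures are significant, compare the estimated loss with what the tax actually raises. A minimal sketch using only the numbers cited above (the US comparison assumes, purely for illustration, a loss of the same 1 per cent of GDP):

```python
# Compare the estimated BEPS revenue loss with what corporate tax
# actually raises. Figures are % of GDP, as cited above; the
# US-specific loss is an assumed 1% of GDP for illustration.
beps_loss_oecd = 1.0    # long-run annual loss, OECD countries
corp_take_oecd = 2.9    # corporate tax raised, OECD average, 2016
corp_take_us = 2.0      # corporate tax raised, US, 2016

print(f"OECD: loss worth {beps_loss_oecd / corp_take_oecd:.0%} "
      "of corporate tax revenue")
print(f"US: a 1%-of-GDP loss would be {1.0 / corp_take_us:.0%} "
      "of corporate tax revenue")
```

On these numbers, profit shifting erodes roughly a third of OECD corporate tax revenue, and potentially half of the US take.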
Brad Setser of the Council on Foreign Relations shows that US corporations report seven times as much profit in small tax havens (Bermuda, the British Caribbean, Ireland, Luxembourg, Netherlands, Singapore and Switzerland) as in six big economies (China, France, Germany, India, Italy and Japan). This is ludicrous. The tax reform under Mr Trump changed essentially nothing. Needless to say, not only US corporations benefit from such loopholes.
In such cases, rents are not merely being exploited. They are being created, through lobbying for distorting and unfair tax loopholes and against needed regulation of mergers, anti-competitive practices, financial misbehaviour, the environment and labour markets. Corporate lobbying overwhelms the interests of ordinary citizens. Indeed, some studies suggest that the wishes of ordinary people count for next to nothing in policymaking.
Not least, as some western economies have become more Latin American in their distribution of incomes, their politics have also become more Latin American. Some of the new populists are considering radical, but necessary, changes in competition, regulatory and tax policies. But others rely on xenophobic dog whistles while continuing to promote a capitalism rigged to favour a small elite. Such activities could well end up with the death of liberal democracy itself.
Members of the Business Roundtable and their peers have tough questions to ask themselves. They are right: seeking to maximise shareholder value has proved a doubtful guide to managing corporations. But that realisation is the beginning, not the end. They need to ask themselves what this understanding means for how they set their own pay and how they exploit — indeed actively create — tax and regulatory loopholes.
They must, not least, consider their activities in the public arena. What are they doing to ensure better laws governing the structure of the corporation, a fair and effective tax system, a safety net for those afflicted by economic forces beyond their control, a healthy local and global environment and a democracy responsive to the wishes of a broad majority?
We need a dynamic capitalist economy that gives everybody a justified belief that they can share in the benefits. What we increasingly seem to have instead is an unstable rentier capitalism, weakened competition, feeble productivity growth, high inequality and, not coincidentally, an increasingly degraded democracy. Fixing this is a challenge for us all, but especially for those who run the world’s most important businesses. The way our economic and political systems work must change, or they will perish.
Sunday, 15 September 2019
Never mind ‘tax raids’, Labour – just abolish private education
As drivers of inequality, private schools are at the heart of Britain’s problems. Labour must be bold and radical on this, writes Owen Jones in The Guardian

The British class system is an organised racket. It concentrates wealth and power in the hands of the few, while 14 million Britons languish in poverty.
If you are dim but have rich parents, a life of comfort, affluence and power is almost inevitable – while the bright but poor are systematically robbed of their potential. The well-to-do are all but guaranteed places at the top table of the media, law, politics, medicine, military, civil service and arts. As inequality grows, so too does the stranglehold of the rich over democracy. The wealthiest 1,000 can double their fortunes in the aftermath of financial calamity, while workers suffer the worst squeeze in wages since the Napoleonic wars. State support is lavished on rich vested interests – such as the banks responsible for Britain’s economic turmoil – but stripped from disabled and low-paid people. The powerful have less stressful lives, and the prosperous are healthier, expecting to live a decade longer than those living in the most deprived areas.
Unless this rotten system is abolished, Britain will never be free of social and political turmoil. It is therefore welcome – overdue, in fact – to read the Daily Telegraph’s horrified front-page story: “Corbyn tax raid on private schools”.
The segregation of children by the bank balances of their parents is integral to the class system, and the Labour Against Private Schools group has been leading an energetic campaign to shift the party’s position. The party is looking at scrapping the tax subsidies enjoyed by private education, which are de facto public subsidies for class privilege: moves such as ending VAT exemptions for school fees, as well as making private schools pay the rates other businesses are expected to. If the class system has an unofficial motto, it is “one rule for us, and one rule for everybody else”. Private schools encapsulate that, and forcing these gilded institutions to stand on their own two feet should be a bare minimum.
More radically, Labour is debating whether to commit to abolishing private education. This is exactly what the party should do, even if it is via the “slow and painless euthanasia” advocated by Robert Verkaik, the author of Posh Boys: How English Public Schools Ruin Britain. Compelling private schools to abide by the same VAT and business rate rules as other businesses will starve them of funds, forcing many of them out of business.
Private education is, in part, a con: past OECD research has suggested that there is not “much of a performance difference” between state and private schools when socio-economic background is factored in. In other words, children from richer backgrounds – because the odds are stacked in their favour from their very conception – tend to do well, whichever school they’re sent to. However unpalatable it is for some to hear it, many well-to-do parents send their offspring to private schools because they fear them mixing with the children of the poor. Private schools do confer other advantages, of course: whether it be networks, or a sense of confidence that can shade into a poisonous sense of social superiority.
Mixing together is good for children from different backgrounds: the evidence suggests that the “cultural capital” of pupils with more privileged, university-educated parents rubs off on poorer peers without their own academic progress suffering. Such mixing creates more well-rounded human beings, breaking down social barriers. If sharp-elbowed parents are no longer able to buy themselves out of state education, they are incentivised to improve their local schools.
Look at Finland: it has almost no private or grammar schools, and instead provides a high-quality local state school for every pupil; its education system is among the best performing on Earth. It shows why Labour should be more radical still: not least committing to abolishing grammar schools, which take in far fewer pupils who are eligible for free school meals.
Other radical measures are necessary too. Poverty damages the educational potential of children, whether through stress or poor diet, while overcrowded, poor-quality housing has the same impact. Gaps in vocabulary open up at an early age, underlining the need for early intervention. The educational expert Melissa Benn recommends that, rather than emulating the often narrow curriculums of private schools, state schools should move away from a narrow focus on exam results: a wrap-around qualification could include a personal project, community work and a broader array of subjects.
In the coming election, Labour has to be more radical and ambitious than it was in 2017. At the very core of its new manifesto must be a determination to overcome a class system that is a ceaseless engine of misery, insecurity and injustice.
Britain is a playground for the rich, but this is not a fact of life – and a commitment to ending private education will send a strong message that time has finally been called on a rotten class system.
BBC to New York Times – Why Indian governments have always been wary of foreign press
Be it India or China or Russia – you can be sure that when a country accuses the foreign media of biased coverage, it has something it wants to hide, writes Kaveree Bamzai in The Print
An urban legend goes like this – when Indira Gandhi was assassinated, her son Rajiv Gandhi wanted to know if it had been confirmed by the BBC. Until the BBC broadcast the news, it could be dismissed as a rumour.
That was then. Today, fanboys of Prime Minister Narendra Modi’s strident nationalism accuse the venerable BBC of peddling fake news.
The Western gaze on India is acceptable only if it is about yoga and ayurveda, not Kashmir. Curiously, the Indira Gandhi regime often accused the BBC of being an extension of the Cold War ‘foreign hand’ out to undermine India. Today, the Modi ecosystem accuses it of being anti-Hindu.
The government and the BJP want to actively fix this – with both the carrot and the stick. On the one hand, Hindu groups are protesting outside The Washington Post office in the US, and on the other, NSA Ajit Doval is feting foreign journalists and RSS’ Mohan Bhagwat is scheduling meetings with them.
Be it India or China or Russia – you can be sure that when a country accuses the foreign media of biased coverage, it has something it wants to hide. It’s a good barometer of what’s going on inside. That is why restricting access is common practice.
Fences & restrictions
Foreign journalists can visit Assam only after taking permission from the Ministry of External Affairs, which consults the Ministry of Home Affairs before issuing a permit. In Jammu and Kashmir, things are no better. A circular from the Ministry of External Affairs says permission has to be sought by foreign journalists eight weeks before the date of visit. From May 2018 to January 2019, only two foreign journalists had got this permission.
That’s not all. Media outlets such as the BBC and Al Jazeera have been trolled on social media for their coverage of Kashmir after the abrogation of Article 370, with the Modi government jumping to say their footage was fabricated.
The criticism has been echoed even by pro-government TV anchors and social media warriors (some, like Shekhar Kapur, have justifiably picked on the BBC’s habit of referring to Jammu and Kashmir as Indian-occupied Kashmir).
But India Today did a detailed forensic analysis to show the BBC video was anything but “fake news”. The BBC has also stood by its video (initially reported by Reuters) showing protestors marching on the streets with Article 370 placards and tear gas being used to disperse protests. “A protest the Indian government said did not happen,” @BBCWorld said.
Always on high alert
India’s sensitivity to how the BBC, in particular, sees it is not new. John Elliott, who has reported on India, from India, for 25 years, told The Print: “India always seems to want international approval and praise, indicating it is not yet fully confident on the world stage. That leads to extreme sensitivity over negative comment, maybe even more so under Prime Minister Narendra Modi for whom international recognition is a primary aim.”
It doesn’t take much to raise India’s hackles. In 1970, when French maestro Louis Malle’s documentary series Phantom India was shown on the BBC, it resulted in the closure of the BBC’s office in Delhi for two years and the repatriation of its news correspondent Ronald Robson. All because, even though the series was well received by British critics, Indians were upset about Malle’s inclusion in the first programme of “a few shots of people sleeping on the pavements of Calcutta”. This was the “export of Indian poverty” argument that Nargis Dutt used about Satyajit Ray in 1980, with her now-famous quote: “I don’t believe Mr Satyajit Ray cannot be criticised. He is only a Ray, not the Sun.”
As Sunil Khilnani notes in his book, Incarnations: India in 50 Lives, Nargis felt Ray’s movies were popular in the West because “people there want to see India in an abject condition”. She wanted him to show “modern India”, not merely project “Indian poverty abroad”.
Thin-skinned governments
Of late, though, it is India’s fractious politics, which has made Indian governments extremely thin-skinned. This too has a history. Mark Tully, who became BBC’s Delhi bureau chief after it was allowed to return to India in 1972, fell afoul of prime minister Indira Gandhi in 1975 during the Emergency. As he says in this 2018 interview, at the time it was said he had reported that one of the senior-most cabinet ministers had resigned from her government in protest against the Emergency. Then information and broadcasting minister Inder Gujral stood up for him telling Mohammed Yunus (part of Indira Gandhi’s ‘kitchen cabinet’) that he had checked with the monitoring service and there was no evidence of Tully having said so.
Tully says Yunus told Gujral: “I want you to arrest him, take his trousers down, and give him a beating and then put him in jail. Those were roughly the words I have recorded in the interview and it is also transcribed in a book I wrote with Zareer Masani called Raj to Rajiv. So, I discovered 18 months after the Emergency that I had had a lucky escape.”
In 2002, Time magazine’s Alex Perry had to face questioning over alleged passport irregularities after he wrote the widely quoted cover story on then prime minister Atal Bihari Vajpayee, wherein he said Vajpayee “fell asleep in cabinet meetings, was prone to ‘interminable silences’ and enjoyed a nightly whisky”. Although there was talk of Perry being thrown out of India, much like Tully, it didn’t happen. Perry left as Delhi bureau chief much later, in 2006. Now a well-known writer, he declined to comment to ThePrint for this story, calling it “old history”.
Rot within
Nothing is really history in Indian politics, where personalities, issues, and allegations tend to be recycled. The New York Times is routinely accused of an anti-India bias – whether it was the diplomatic immunity of IFS officer Devyani Khobragade then or the Indian government’s abrogation of Article 370 in Jammu and Kashmir now.
As veteran journalist Mannika Chopra points out to ThePrint, Indian politicians have always been wary of the foreign press. “Under Indira Gandhi, it was difficult for foreign correspondents to report on Kashmir or the northeast. Or for visiting reporters to get visas. But the situation has changed. In India today, it would be fair to say the domestic media has, by and large, been won over by the current government, and those who haven’t are wary of speaking out. Independent voices are few. Political journalism has also changed. There are no hard-hitting investigations,” she said.
She points out that it has been left to the foreign press to present a counter-narrative, a dialogue independent of ideological blinkers and pressures. “As for the media within, it is all about being not merely anti-national but also supra-national.”
Elliott jokes that he wished Britain had some of the same sensitivity over international comments on Brexit so “that we realised how the world sees our descent into constitutional and political chaos”. But perhaps not, given that India’s outrage can span the spectrum – from a BBC interview with a jubilant Jagjit Singh Chauhan in 1984 after Indira Gandhi’s death (as noted by scholar Suzanne Franks) to Jade Goody’s racist slurs in 2007 against then-Celebrity Big Brother contestant Shilpa Shetty.
In India’s Republic of Easy Offence, the bar for public anger and government censure is quite low.
Thursday, 12 September 2019
Central banks were always political – so their ‘independence’ doesn’t mean much
The separation of monetary and fiscal policy serves the neoliberal status quo. It won’t survive the next crash, writes Larry Elliott in The Guardian
‘The Federal Reserve is coming under enormous pressure from Donald Trump to cut interest rates.’ Donald Trump with Jerome Powell, then his nominee for chairman of the Federal Reserve, Washington DC, November 2017. Photograph: Carlos Barría/Reuters
Independent central banks were once all the rage. Taking decisions over interest rates and handing them to technocrats was seen as a sensible way of preventing politicians from trying to buy votes with cheap money. They couldn’t be trusted to keep inflation under control, but central banks could.
And when the global economy came crashing down in the autumn of 2008, it was central banks that prevented another Great Depression. Interest rates were slashed and the electronic money taps were turned on with quantitative easing (QE). That, at least, is the way central banks tell the story.
An alternative narrative goes like this. Collectively, central banks failed to stop the biggest asset-price bubble in history from developing during the early 2000s. Instead of taking action to prevent a ruinous buildup of debt, they congratulated themselves on keeping inflation low.
Even when the storm broke, some institutions – most notably the European Central Bank (ECB) – were slow to act. And while the monetary stimulus provided by record-low interest rates and QE did arrest the slide into depression, the recovery was slow and patchy. The price of houses and shares soared, but wages flatlined.
A decade on from the 2008 crash, another financial crisis is brewing. The US central bank – the Federal Reserve – is coming under huge pressure from Donald Trump to cut interest rates and restart QE. The poor state of the German economy and the threat of deflation means that on Thursday the ECB will cut the already negative interest rate for bank deposits and announce the resumption of its QE programme.
But central banks are almost out of ammo. If cutting interest rates to zero or just above was insufficient to bring about the sort of sustained recovery seen after previous recessions, then it is not obvious why a couple of quarter-point cuts will make much difference now. Likewise, expecting a bit more QE to do anything other than give a fillip to shares on Wall Street and the City is the triumph of hope over experience.
There were alternatives to the response to the 2008 crisis. Governments could have changed the mix, placing more emphasis on fiscal measures – tax cuts and spending increases – than on monetary stimulus, and then seeking to make the two arms of policy work together. They could have taken advantage of low interest rates to borrow more for the public spending programmes that would have created jobs and demand in their economies. Finance ministries could have ensured that QE contributed to the long-term good of the economy – the environment, for example – if they had issued bonds and instructed central banks to buy them.
This sort of approach does, though, involve breaking one of the big taboos of the modern age: the belief that monetary and fiscal policy should be kept separate and that central banks should be allowed to operate free from political interference.
The consensus blossomed during the good times of the late 1990s and early 2000s, and survived the financial crisis of 2008. But challenges from both the left and right, especially in the US, suggest that it won’t survive the next one. Trump says the Fed has damaged the economy by pushing up interest rates too quickly. Bernie Sanders says the US central bank has been captured by Wall Street. Both arguments are correct. It is a good thing that central bank independence is finally coming under scrutiny.
For a start, it has become clear that the notion of depoliticised central bankers is a myth. When he was governor of the Bank of England, Mervyn King lectured the government about the need for austerity while jealously guarding the right to set interest rates free from any political interference. Likewise, rarely does Mario Draghi, the outgoing president of the ECB, hold a press conference without urging eurozone countries to reduce budget deficits and embrace structural reform.
Central bankers have views and – perhaps unsurprisingly – they tend to be quite conservative ones. As the US economist Thomas Palley notes in a recent paper, central bank independence is a product of the neoliberal Chicago school of economics and aims to advance neoliberal interests. More specifically, workers like high employment because in those circumstances it is easier to bid up pay. Employers prefer higher unemployment because it keeps wages down and profits up. Central banks side with capital over labour because they accept the neoliberal idea that there is a point – the natural rate of unemployment – beyond which stimulating the economy merely leads to higher inflation. They are, Palley says, institutions “favoured by capital to guard against the danger that a democracy may choose economic policies capital dislikes”.
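The “natural rate” argument Palley describes can be captured in a single textbook equation – an accelerationist Phillips curve in which inflation keeps rising for as long as unemployment is held below the natural rate. The sketch below is a stylised illustration; the parameter values are assumptions, not estimates.

```python
# Textbook accelerationist Phillips curve:
#   inflation[t+1] = inflation[t] + alpha * (u_natural - u[t])
# Hold unemployment below the assumed natural rate and inflation
# ratchets upward; at the natural rate it stays put.
U_NATURAL = 0.05   # assumed natural rate of unemployment
ALPHA = 0.5        # assumed sensitivity of inflation to slack

def simulate(u: float, years: int, inflation: float = 0.02) -> float:
    """Run the Phillips curve forward with unemployment fixed at u."""
    for _ in range(years):
        inflation += ALPHA * (U_NATURAL - u)
    return inflation

print(f"u held at 4% for 5 years: inflation -> {simulate(0.04, 5):.1%}")
print(f"u held at 5% for 5 years: inflation -> {simulate(0.05, 5):.1%}")
```

On this view, a central bank that accepts the model will always lean against low unemployment – which is precisely the bias Palley says serves capital over labour.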
Until now, monetary policy has been deemed too important to be left to politicians. When the next crisis arrives it will become too political an issue to be left to unelected technocrats. If that crisis is to be tackled effectively, the age of independent central banks will have to come to an end.
Wednesday, 11 September 2019
Boeing's travails show what's wrong with modern capitalism
Matt Stoller in The Guardian
The plight of Boeing shows the perils of modern capitalism. The corporation is a wounded giant. Much of its productive capacity has been mothballed following two crashes in six months of the 737 Max, the firm’s flagship product: the result of safety problems Boeing hid from regulators.
Just a year ago Boeing appeared unstoppable. In 2018, the company delivered more aircraft than its rival Airbus, with revenue hitting $100bn. It was also a cash machine, shedding 20% of its workforce since 2012 while funneling $43bn into stock buybacks in roughly the same period. Boeing’s board rewarded its CEO, Dennis Muilenburg, lavishly, paying him $23m in 2018, up 27% from the year before.
There was only one problem. The company was losing its ability to make safe airplanes. As Scott Hamilton, an aerospace analyst and editor of Leeham News and Analysis, puts it: “Boeing Commercial Airplanes clearly has a systemic problem in designing, producing and delivering airplanes.”
Something is wrong with today’s version of capitalism. It’s not just that it’s unfair. It’s that it’s no longer capable of delivering products that work. The root cause is the generation of high and persistent profits, to the exclusion of production. We have let financiers take over our corporations. They monopolize industries and then loot the corporations they run.
The executive team at Boeing is quite skilled – just at generating cash, rather than as engineers. Boeing’s competitive advantage centered on politics, not planes. The corporation is now a political machine with a side business making aerospace and defense products. Boeing’s general counsel, former judge Michael Luttig, is the former boss of the FBI director, Christopher Wray, whose agents are investigating potential criminal activity at the company. Luttig is so well connected in high-level legal circles he served as a groomsman for the supreme court chief justice, John Roberts.
The company’s board members also include Nikki Haley, until recently the United Nations ambassador, former Nato supreme allied commander Edmund P. Giambastiani Jr, former AIG CEO Edward M Liddy, and a host of former political officials and private equity icons.
Boeing used its political connections to monopolize the American aerospace industry and corrupt its regulators. In the 1990s, Boeing and McDonnell Douglas merged, leaving America with just one major producer of civilian aircraft. Before this merger, when there was a competitive market, Boeing was a wonderful company. As journalist Jerry Useem put it just 20 years ago, “Boeing has always been less a business than an association of engineers devoted to building amazing flying machines.”
But after the merger, the engineers lost power to the financiers. Boeing could increase prices, lay off workers, reduce quality and spend its cash buying back stock.
And no one could do anything about it. Customers and suppliers no longer had any alternative to Boeing, and Boeing corrupted officials in both parties who were supposed to regulate it. High profits masked the collapse in productive skill until the crashes of the 737 Max.
Boeing’s inability to make good safe airplanes is a clear weakness. It is, after all, an aerospace company. But because Boeing is America’s only commercial airplane company, the crisis is rippling across the economy. Michael O’Leary, CEO of Ryanair, which ordered 58 737 Max planes, says his company cannot grow as planned until Boeing “gets its shit together”. Contractors and subcontractors slowed production of parts for the airplane, and airline customers scrambled to address shortages of airplanes.
Far from being an anomaly, Boeing is the norm in the corporate world across the west. In 2016, the Economist noted that profits across the corporate sector were high and persistent, a function of a lack of competition across swaths of the economy. If corporations don’t have to compete, they can raise prices to buyers, lower what they pay to suppliers and workers, and reduce quality.
High profits result in sloth and corruption. Many of our industrial goliaths are now run in ways that are fundamentally destructive. General Electric, for instance, was once a jewel of American productive capacity, a corporation created out of George Westinghouse’s and Thomas Edison’s patents for electric systems. Edison helped invent the lightbulb itself, brightening the world. Today, as a result of decisions made by Jack Welch in the 1990s to juice profit returns, GE slaps its label on lightbulbs made in China. Even worse, if investigator Harry Markopolos is right, General Electric may in fact be riddled with accounting fraud, a once great productive institution strip-mined by financiers.
These are not the natural, inevitable results of capitalism. Boeing and GE were once great companies, working in capitalist open markets.
So what went wrong? In short, the law. In the 1970s, a host of thinkers on the right and left – from Milton Friedman to George Stigler to Alfred Kahn to the current liberal supreme court justice Stephen Breyer – argued that policymakers should take restraints off capital and get rid of anti-monopoly rules. They used many terms to make this case, including deregulation, cost/benefit analysis, and the consumer welfare standard in antitrust law. They embraced the shareholder theory of capitalism, which emphasizes short-term profits. What followed was a radical consolidation of market power, and then systemic looting.
Today, high profit margins are a pervasive and corrupting influence across the government and corporate sectors. Private equity firms moved capital from corporations and workers to themselves, destroying once healthy retailers like RadioShack, Toys R Us, Payless and K-Mart.
The disease of inefficiency and graft has spread to the government. In 1992, Harvard professor Ash Carter, who later became the secretary of defense under Obama, wrote that the Pentagon was too difficult to do business with. “The most straightforward step” to address this, he wrote, “would be to raise the profit margins allowed on defense contracts.” The following year Prof Carter was appointed assistant secretary of defense for international security policy in the first Clinton administration, which followed his advice.
Earlier this year, the defense department found that one defense contractor run by private equity executives had profit margins of up to 4,451% on spare parts it sold to the military. Consulting giant McKinsey was recently caught trying to charge the government $3m a year for the services of a recent college graduate.
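On the usual definition of margin as profit over cost, a 4,451% margin implies a price about 45 times cost. A one-line check (both the definition and the $100 cost are assumptions for illustration; the report’s exact methodology is not given here):

```python
# If "profit margin" here means markup over cost, then
# price = cost * (1 + margin). Both the definition and the $100
# cost are assumptions for illustration.
margin = 44.51   # 4,451% expressed as a fraction of cost
cost = 100.0     # hypothetical cost of a spare part, in dollars
price = cost * (1 + margin)
print(f"${cost:,.0f} part sold for ${price:,.0f}")   # -> $4,551
```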
The ultimate result of concentrating wealth and corrupting government is to concentrate power in the hands of a few. We’ve been here before. In the 1930s, fascists in Italy and Germany were gaining strength, as were communists in Russia. Meanwhile, leaders in liberal democracies were confronted by a frightened populace losing faith in democracy. American political leaders were able to take on domestic money lords with a radical antitrust campaign to break the power of the plutocrats. Today we are in a similar situation, with autocrats making an increasingly persuasive case that liberal democracy is weak.
The solution to this political crisis is fairly simple, and it involves two basic principles. One, policymakers have to increase competition for large powerful companies, to bring profits down. Executives should spend their time competing with each other to build quality products, not finding ways of attracting former generals, or administration officials to their board of directors. Two, policymakers should raise taxes on wealth and high incomes to radically reduce the concentration of wealth, which will make looting irrational.
Our system is no longer aligning rewards with productive skill. Despite the 737 Max crisis, Boeing’s stock price is still twice as high as in July 2015, when Muilenburg took over as CEO. That right there is what is broken about modern capitalism. We had better fix it fast.
The plight of Boeing shows the perils of modern capitalism. The corporation is a wounded giant. Much of its productive capacity has been mothballed following two crashes in six months of the 737 Max, the firm’s flagship product: the result of safety problems Boeing hid from regulators.
Just a year ago Boeing appeared unstoppable. In 2018, the company delivered more aircraft than its rival Airbus, with revenue hitting $100bn. It was also a cash machine, shedding 20% of its workforce since 2012 while funneling $43bn into stock buybacks in roughly the same period. Boeing’s board rewarded its CEO, Dennis Muilenburg, lavishly, paying him $23m in 2018, up 27% from the year before.
There was only one problem. The company was losing its ability to make safe airplanes. As Scott Hamilton, an aerospace analyst and editor of Leeham News and Analysis, puts it: “Boeing Commercial Airplanes clearly has a systemic problem in designing, producing and delivering airplanes.”
Something is wrong with today’s version of capitalism. It’s not just that it’s unfair. It’s that it’s no longer capable of delivering products that work. The root cause is the generation of high and persistent profits, to the exclusion of production. We have let financiers take over our corporations. They monopolize industries and then loot the corporations they run.
The executive team at Boeing is quite skilled – at generating cash, not at engineering. Boeing’s competitive advantage centered on politics, not planes. The corporation is now a political machine with a side business making aerospace and defense products. Boeing’s general counsel, former judge Michael Luttig, is the former boss of the FBI director, Christopher Wray, whose agents are investigating potential criminal activity at the company. Luttig is so well connected in high-level legal circles that he served as a groomsman for the supreme court chief justice, John Roberts.
The company’s board members also include Nikki Haley, until recently the United Nations ambassador, former Nato supreme allied commander Edmund P Giambastiani Jr, former AIG CEO Edward M Liddy, and a host of former political officials and private equity icons.
Boeing used its political connections to monopolize the American aerospace industry and corrupt its regulators. In the 1990s, Boeing and McDonnell Douglas merged, leaving America with just one major producer of civilian aircraft. Before this merger, when there was a competitive market, Boeing was a wonderful company. As journalist Jerry Useem put it just 20 years ago, “Boeing has always been less a business than an association of engineers devoted to building amazing flying machines.”
But after the merger, the engineers lost power to the financiers. Boeing could increase prices, lay off workers, reduce quality and spend its cash buying back stock.
And no one could do anything about it. Customers and suppliers no longer had any alternative to Boeing, and Boeing corrupted officials in both parties who were supposed to regulate it. High profits masked the collapse in productive skill until the crashes of the 737 Max.
Boeing’s inability to make good, safe airplanes is a clear weakness. It is, after all, an aerospace company. But because Boeing is America’s only major commercial airplane maker, the crisis is rippling across the economy. Michael O’Leary, CEO of Ryanair, which ordered 58 737 Max planes, says his company cannot grow as planned until Boeing “gets its shit together”. Contractors and subcontractors slowed production of parts for the airplane, and airline customers scrambled to address shortages of aircraft.
Far from being an anomaly, Boeing is the norm in the corporate world across the west. In 2016, the Economist noted that profits across the corporate sector were high and persistent, a function of a lack of competition across swaths of the economy. If corporations don’t have to compete, they can raise prices to buyers, lower what they pay to suppliers and workers, and reduce quality.
High profits result in sloth and corruption. Many of our industrial goliaths are now run in ways that are fundamentally destructive. General Electric, for instance, was once a jewel of American productive capacity, a corporation built on Thomas Edison’s patents for electric systems. Edison helped invent the lightbulb itself, brightening the world. Today, as a result of decisions made by Jack Welch in the 1990s to juice profit returns, GE slaps its label on lightbulbs made in China. Even worse, if fraud investigator Harry Markopolos is right, General Electric may in fact be riddled with accounting fraud, a once great productive institution strip-mined by financiers.
These are not the natural, inevitable results of capitalism. Boeing and GE were once great companies, working in capitalist open markets.
So what went wrong? In short, the law. In the 1970s, a host of thinkers on the right and left – from Milton Friedman to George Stigler to Alfred Kahn to the current liberal supreme court justice Stephen Breyer – argued that policymakers should take restraints off capital and get rid of anti-monopoly rules. They used many terms to make this case, including deregulation, cost/benefit analysis, and the consumer welfare standard in antitrust law. They embraced the shareholder theory of capitalism, which emphasizes short-term profits. What followed was a radical consolidation of market power, and then systemic looting.
Today, high profit margins are a pervasive and corrupting influence across the government and corporate sectors. Private equity firms moved capital from corporations and workers to themselves, destroying once healthy retailers like RadioShack, Toys R Us, Payless and K-Mart.
The disease of inefficiency and graft has spread to the government. In 1992, Harvard professor Ash Carter, who later became the secretary of defense under Obama, wrote that the Pentagon was too difficult to do business with. “The most straightforward step” to address this, he wrote, “would be to raise the profit margins allowed on defense contracts.” The following year Carter was appointed assistant secretary of defense for international security policy in the first Clinton administration, which followed his advice.
Earlier this year, the defense department found that one defense contractor run by private equity executives had profit margins of up to 4,451% on spare parts it sold to the military. Consulting giant McKinsey was recently caught trying to charge the government $3m a year for the services of a recent college graduate.
The ultimate result of concentrating wealth and corrupting government is to concentrate power in the hands of a few. We’ve been here before. In the 1930s, fascists in Italy and Germany were gaining strength, as were communists in Russia. Meanwhile, leaders in liberal democracies were confronted by a frightened populace losing faith in democracy. American political leaders were able to take on domestic money lords with a radical antitrust campaign to break the power of the plutocrats. Today we are in a similar situation, with autocrats making an increasingly persuasive case that liberal democracy is weak.
The solution to this political crisis is fairly simple, and it involves two basic principles. One, policymakers have to force large, powerful companies to face real competition, to bring profits down. Executives should spend their time competing to build quality products, not finding ways of attracting former generals or administration officials to their boards of directors. Two, policymakers should raise taxes on wealth and high incomes to radically reduce the concentration of wealth, which will make looting irrational.
Our system is no longer aligning rewards with productive skill. Despite the 737 Max crisis, Boeing’s stock price is still twice as high as in July 2015, when Muilenburg took over as CEO. That right there is what is broken about modern capitalism. We had better fix it fast.
Thursday, 5 September 2019
The race to create a perfect lie detector – and the dangers of succeeding
Amit Katwala in The Guardian
We learn to lie as children, between the ages of two and five. By adulthood, we are prolific. We lie to our employers, our partners and, most of all, one study has found, to our mothers. The average person hears up to 200 lies a day, according to research by Jerry Jellison, a psychologist at the University of Southern California. The majority of the lies we tell are “white”, the inconsequential niceties – “I love your dress!” – that grease the wheels of human interaction. But most people tell one or two “big” lies a day, says Richard Wiseman, a psychologist at the University of Hertfordshire. We lie to promote ourselves, protect ourselves and to hurt or avoid hurting others.
The mystery is how we keep getting away with it. Our bodies expose us in every way. Hearts race, sweat drips and micro-expressions leak from small muscles in the face. We stutter, stall and make Freudian slips. “No mortal can keep a secret,” wrote the psychoanalyst Sigmund Freud in 1905. “If his lips are silent, he chatters with his fingertips. Betrayal oozes out of him at every pore.”
Even so, we are hopeless at spotting deception. On average, across 206 scientific studies, people can separate truth from lies just 54% of the time – only marginally better than tossing a coin. “People are bad at it because the differences between truth-tellers and liars are typically small and unreliable,” said Aldert Vrij, a psychologist at the University of Portsmouth who has spent years studying ways to detect deception. Some people stiffen and freeze when put on the spot, others become more animated. Liars can spin yarns packed with colour and detail, and truth-tellers can seem vague and evasive.
Humans have been trying to overcome this problem for millennia. The search for a perfect lie detector has involved torture, trials by ordeal and, in ancient India, an encounter with a donkey in a dark room. Three thousand years ago in China, the accused were forced to chew and spit out rice; the grains were thought to stick in the dry, nervous mouths of the guilty. In 1730, the English writer Daniel Defoe suggested taking the pulse of suspected pickpockets. “Guilt carries fear always about with it,” he wrote. “There is a tremor in the blood of a thief.” More recently, lie detection has largely been equated with the juddering styluses of the polygraph machine – the quintessential lie detector beloved by daytime television hosts and police procedurals. But none of these methods has yielded a reliable way to separate fiction from fact.
That could soon change. In the past couple of decades, the rise of cheap computing power, brain-scanning technologies and artificial intelligence has given birth to what many claim is a powerful new generation of lie-detection tools. Startups, racing to commercialise these developments, want us to believe that a virtually infallible lie detector is just around the corner.
Their inventions are being snapped up by police forces, state agencies and nations desperate to secure themselves against foreign threats. They are also being used by employers, insurance companies and welfare officers. “We’ve seen an increase in interest from both the private sector and within government,” said Todd Mickelsen, the CEO of Converus, which makes a lie detector based on eye movements and subtle changes in pupil size.
Converus’s technology, EyeDetect, has been used by FedEx in Panama and Uber in Mexico to screen out drivers with criminal histories, and by the credit reporting agency Experian, which tests its staff in Colombia to make sure they aren’t manipulating the company’s database to secure loans for family members. In the UK, Northumbria police are carrying out a pilot scheme that uses EyeDetect to measure the rehabilitation of sex offenders. Other EyeDetect customers include the government of Afghanistan, McDonald’s and dozens of local police departments in the US. Soon, large-scale lie-detection programmes could be coming to the borders of the US and the European Union, where they would flag potentially deceptive travellers for further questioning.
But as tools such as EyeDetect infiltrate more and more areas of public and private life, there are urgent questions to be answered about their scientific validity and ethical use. In our age of high surveillance and anxieties about all-powerful AIs, the idea that a machine could read our most personal thoughts feels more plausible than ever to us as individuals, and to the governments and corporations funding the new wave of lie-detection research. But what if states and employers come to believe in the power of a lie-detection technology that proves to be deeply biased – or that doesn’t actually work?
And what do we do with these technologies if they do succeed? A machine that reliably sorts truth from falsehood could have profound implications for human conduct. The creators of these tools argue that by weeding out deception they can create a fairer, safer world. But the ways lie detectors have been used in the past suggest such claims may be far too optimistic.
For most of us, most of the time, lying is more taxing and more stressful than honesty. To calculate another person’s view, suppress emotions and hold back from blurting out the truth requires more thought and more energy than simply being honest. It demands that we bear what psychologists call a cognitive load. Carrying that burden, most lie-detection theories assume, leaves evidence in our bodies and actions.
Lie-detection technologies tend to examine five different types of evidence. The first two are verbal: the things we say and the way we say them. Jeff Hancock, an expert on digital communication at Stanford, has found that people who are lying in their online dating profiles tend to use the words “I”, “me” and “my” more often, for instance. Voice-stress analysis, which aims to detect deception based on changes in tone of voice, was used during the interrogation of George Zimmerman, who shot the teenager Trayvon Martin in 2012, and by UK councils between 2007 and 2010 in a pilot scheme that tried to catch benefit cheats over the phone. Only five of the 23 local authorities where voice analysis was trialled judged it a success, but in 2014, it was still in use in 20 councils, according to freedom of information requests by the campaign group False Economy.
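To make the verbal channel concrete, here is a toy sketch of the pronoun-rate signal Hancock describes: counting how often first-person singular words appear in a text sample. The word list, threshold-free scoring and sample profile are illustrative assumptions, not his actual method.

```python
# Toy illustration of a verbal deception cue: the rate of first-person
# singular pronouns in a piece of text. Purely illustrative assumptions.
import re

FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def first_person_rate(text: str) -> float:
    """Return the share of tokens that are first-person singular pronouns."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    return sum(t in FIRST_PERSON for t in tokens) / len(tokens)

profile = "I love hiking and my friends say I am very honest about my age."
print(f"first-person rate: {first_person_rate(profile):.2%}")
```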
The third source of evidence – body language – can also reveal hidden feelings. Some liars display so-called “duper’s delight”, a fleeting expression of glee that crosses the face when they think they have got away with it. Cognitive load makes people move differently, and liars trying to “act natural” can end up doing the opposite. In an experiment in 2015, researchers at the University of Cambridge were able to detect deception more than 70% of the time by using a skintight suit to measure how much subjects fidgeted and froze under questioning.
The fourth type of evidence is physiological. The polygraph measures blood pressure, breathing rate and sweat. Penile plethysmography tests arousal levels in sex offenders by measuring the engorgement of the penis using a special cuff. Infrared cameras analyse facial temperature. Unlike Pinocchio, our noses may actually shrink slightly when we lie as warm blood flows towards the brain.
In the 1990s, new technologies opened up a fifth, ostensibly more direct avenue of investigation: the brain. In the second season of the Netflix documentary Making a Murderer, Steven Avery, who is serving a life sentence for a brutal killing he says he did not commit, undergoes a “brain fingerprinting” exam, which uses an electrode-studded headset called an electroencephalogram, or EEG, to read his neural activity and translate it into waves rising and falling on a graph. The test’s inventor, Dr Larry Farwell, claims it can detect knowledge of a crime hidden in a suspect’s brain by picking up a neural response to phrases or pictures relating to the crime that only the perpetrator and investigators would recognise. Another EEG-based test was used in 2008 to convict a 24-year-old Indian woman named Aditi Sharma of murdering her fiance by lacing his food with arsenic, but Sharma’s sentence was eventually overturned on appeal when the Indian supreme court held that the test could violate the subject’s rights against self-incrimination.
After 9/11, the US government – long an enthusiastic sponsor of deception science – started funding other kinds of brain-based lie-detection work through Darpa, the Defence Advanced Research Projects Agency. By 2006, two companies – Cephos and No Lie MRI – were offering lie detection based on functional magnetic resonance imaging, or fMRI. Using powerful magnets, these tools track the flow of blood to areas of the brain involved in social calculation, memory recall and impulse control.
But just because a lie-detection tool seems technologically sophisticated doesn’t mean it works. “It’s quite simple to beat these tests in ways that are very difficult to detect by a potential investigator,” said Dr Giorgio Ganis, who studies EEG and fMRI-based lie detection at the University of Plymouth. In 2007, a research group set up by the MacArthur Foundation examined fMRI-based deception tests. “After looking at the literature, we concluded that we have no idea whether fMRI can or cannot detect lies,” said Anthony Wagner, a Stanford psychologist and a member of the MacArthur group, who has testified against the admissibility of fMRI lie detection in court.
A new frontier in lie detection is now emerging. An increasing number of projects are using AI to combine multiple sources of evidence into a single measure for deception. Machine learning is accelerating deception research by spotting previously unseen patterns in reams of data. Scientists at the University of Maryland, for example, have developed software that they claim can detect deception from courtroom footage with 88% accuracy.
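The fusion step these projects describe can be sketched in a few lines: per-channel scores (say voice, face and language) fed into a single classifier. The sketch below uses synthetic data and logistic regression, assuming scikit-learn is available; it shows the general technique, not the Maryland team’s actual features or model.

```python
# Minimal sketch of multi-modal score fusion with synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, n)  # 1 = deceptive, 0 = truthful (synthetic labels)

# Three noisy, weakly informative channels: liars average 0.55 per channel,
# truth-tellers 0.45, with heavy noise.
X = np.column_stack([
    np.clip(0.5 + 0.1 * (y - 0.5) + rng.normal(0, 0.2, n), 0, 1)
    for _ in range(3)  # e.g. voice, face, language
])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.0%}")
```

Even in this toy setup, the fused accuracy outruns any single noisy channel – which is exactly why the approach appeals to researchers, and why headline accuracy figures deserve scrutiny.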
The algorithms behind such tools are designed to improve continuously over time, and may ultimately end up basing their determinations of guilt and innocence on factors that even the humans who have programmed them don’t understand. These tests are being trialled in job interviews, at border crossings and in police interviews, but as they become increasingly widespread, civil rights groups and scientists are growing more and more concerned about the dangers they could unleash on society.
Nothing provides a clearer warning about the threats of the new generation of lie detectors than the history of the polygraph, the world’s best-known and most widely used deception test. Although almost a century old, the machine still dominates both the public perception of lie detection and the testing market, with millions of polygraph tests conducted every year. Ever since its creation, it has been attacked for its questionable accuracy, and for the way it has been used as a tool of coercion. But the polygraph’s flawed science continues to cast a shadow over lie detection technologies today.
Even John Larson, the inventor of the polygraph, came to hate his creation. In 1921, Larson was a 29-year-old rookie police officer working the downtown beat in Berkeley, California. But he had also studied physiology and criminology and, when not on patrol, he was in a lab at the University of California, developing ways to bring science to bear in the fight against crime.
In the spring of 1921, Larson built an ugly device that took continuous measurements of blood pressure and breathing rate, and scratched the results on to a rolling paper cylinder. He then devised an interview-based exam that compared a subject’s physiological response when answering yes or no questions relating to a crime with the subject’s answers to control questions such as “Is your name Jane Doe?” As a proof of concept, he used the test to solve a theft at a women’s dormitory.
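Larson’s exam logic reduces to a simple comparison: how much stronger is the subject’s response to crime-relevant questions than to controls? A toy sketch with made-up readings, not his actual charts:

```python
# Toy version of the control-question comparison behind Larson's exam.
# All readings are invented numbers for illustration.
from statistics import mean

control_bp  = [118, 120, 119, 121]   # blood pressure during control questions
relevant_bp = [131, 128, 134, 130]   # during crime-relevant questions

differential = mean(relevant_bp) - mean(control_bp)
print(f"relevant-minus-control differential: {differential:.1f} mmHg")
# A large positive differential was read as a sign of deception -- exactly
# the inference the rest of this article shows to be unreliable.
```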
Larson refined his invention over several years with the help of an enterprising young man named Leonarde Keeler, who envisioned applications for the polygraph well beyond law enforcement. After the Wall Street crash of 1929, Keeler offered a version of the machine that was concealed inside an elegant walnut box to large organisations so they could screen employees suspected of theft.
Not long after, the US government became the world’s largest user of the exam. During the “red scare” of the 1950s, thousands of federal employees were subjected to polygraphs designed to root out communists. The US Army, which set up its first polygraph school in 1951, still trains examiners for all the intelligence agencies at the National Center for Credibility Assessment at Fort Jackson in South Carolina.
Companies also embraced the technology. Throughout much of the last century, about a quarter of US corporations ran polygraph exams on employees to test for issues including histories of drug use and theft. McDonald’s used to use the machine on its workers. By the 1980s, there were up to 10,000 trained polygraph examiners in the US, conducting 2m tests a year.
The only problem was that the polygraph did not work. In 2003, the US National Academy of Sciences published a damning report that found evidence on the polygraph’s accuracy across 57 studies was “far from satisfactory”. History is littered with examples of known criminals who evaded detection by cheating the test. Aldrich Ames, a KGB double agent, passed two polygraphs while working for the CIA in the late 1980s and early 90s. With a little training, it is relatively easy to beat the machine. Floyd “Buzz” Fay, who was falsely convicted of murder in 1979 after a failed polygraph exam, became an expert in the test during his two and a half years in prison, and started coaching other inmates on how to defeat it. After 15 minutes of instruction, 23 of 27 were able to pass. Common “countermeasures”, which work by exaggerating the body’s response to control questions, include thinking about a frightening experience, stepping on a pin hidden in the shoe, or simply clenching the anus.
The upshot is that the polygraph is not and never was an effective lie detector. There is no way for an examiner to know whether a rise in blood pressure is due to fear of getting caught in a lie, or anxiety about being wrongly accused. Different examiners rating the same charts can get contradictory results and there are huge discrepancies in outcome depending on location, race and gender. In one extreme example, an examiner in Washington state failed one in 20 law enforcement job applicants for having sex with animals; he “uncovered” 10 times more bestiality than his colleagues, and twice as much child pornography.
As long ago as 1965, the year Larson died, the US Committee on Government Operations issued a damning verdict on the polygraph. “People have been deceived by a myth that a metal box in the hands of an investigator can detect truth or falsehood,” it concluded. By then, civil rights groups were arguing that the polygraph violated constitutional protections against self-incrimination. In fact, despite the polygraph’s cultural status, in the US, its results are inadmissible in most courts. And in 1988, citing concerns that the polygraph was open to “misuse and abuse”, the US Congress banned its use by employers. Other lie-detectors from the second half of the 20th century fared no better: abandoned Department of Defense projects included the “wiggle chair”, which covertly tracked movement and body temperature during interrogation, and an elaborate system for measuring breathing rate by aiming an infrared laser at the lip through a hole in the wall.
The polygraph remained popular though – not because it was effective, but because people thought it was. “The people who developed the polygraph machine knew that the real power of it was in convincing people that it works,” said Dr Andy Balmer, a sociologist at the University of Manchester who wrote a book called Lie Detection and the Law.
The threat of being outed by the machine was enough to coerce some people into confessions. One examiner in Cincinnati in 1975 left the interrogation room and reportedly watched, bemused, through a two-way mirror as the accused tore 1.8 metres of paper charts off the machine and ate them. (You didn’t even have to have the right machine: in the 1980s, police officers in Detroit extracted confessions by placing a suspect’s hand on a photocopier that spat out sheets of paper with the phrase “He’s Lying!” pre-printed on them.) This was particularly attractive to law enforcement in the US, where it is vastly cheaper to use a machine to get a confession out of someone than it is to take them to trial.
But other people were pushed to admit to crimes they did not commit after the machine wrongly labelled them as lying. The polygraph became a form of psychological torture that wrung false confessions from the vulnerable. Many of these people were then charged, prosecuted and sent to jail – whether by unscrupulous police and prosecutors, or by those who wrongly believed in the polygraph’s power.
Perhaps no one came to understand the coercive potential of his machine better than Larson. Shortly before his death in 1965, he wrote: “Beyond my expectation, through uncontrollable factors, this scientific investigation became for practical purposes a Frankenstein’s monster.”
The search for a truly effective lie detector gained new urgency after the terrorist attacks of 11 September 2001. Several of the hijackers had managed to enter the US after successfully deceiving border agents. Suddenly, intelligence and border services wanted tools that actually worked. A flood of new government funding made lie detection big business again. “Everything changed after 9/11,” writes psychologist Paul Ekman in Telling Lies.
Ekman was one of the beneficiaries of this surge. In the 1970s, he had been filming interviews with psychiatric patients when he noticed a brief flash of despair cross the features of Mary, a 42-year-old suicidal woman, when she lied about feeling better. He spent the next few decades cataloguing how these tiny movements of the face, which he termed “micro-expressions”, can reveal hidden truths.
Ekman’s work was hugely influential with psychologists, and even served as the basis for Lie to Me, a primetime television show that debuted in 2009 with an Ekman-inspired lead played by Tim Roth. But it got its first real-world test in 2006, as part of a raft of new security measures introduced to combat terrorism. That year, Ekman spent a month teaching US immigration officers how to detect deception at passport control by looking for certain micro-expressions. The results are instructive: at least 16 terrorists were permitted to enter the US in the following six years.
Investment in lie-detection technology “goes in waves”, said Dr John Kircher, a University of Utah psychologist who developed a digital scoring system for the polygraph. There were spikes in the early 1980s, the mid-90s and the early 2000s, neatly tracking with Republican administrations and foreign wars. In 2008, under President George W Bush, the US Army spent $700,000 on 94 handheld lie detectors for use in Iraq and Afghanistan. The Preliminary Credibility Assessment Screening System had three sensors that attached to the hand, connected to an off-the-shelf pager which flashed green for truth, red for lies and yellow if it couldn’t decide. It was about as good as a photocopier at detecting deception – and at eliciting the truth.
Some people believe an accurate lie detector would have allowed border patrol to stop the 9/11 hijackers. “These people were already on watch lists,” Larry Farwell, the inventor of brain fingerprinting, told me. “Brain fingerprinting could have provided the evidence we needed to bring the perpetrators to justice before they actually committed the crime.” A similar logic has been applied in the case of European terrorists who returned from receiving training abroad.
As a result, the frontline for much of the new government-funded lie detection technology has been the borders of the US and Europe. In 2014, travellers flying into Bucharest were interrogated by a virtual border agent called Avatar, an on-screen figure in a white shirt with blue eyes, which introduced itself as “the future of passport control”. As well as an e-passport scanner and fingerprint reader, the Avatar unit has a microphone, an infra-red eye-tracking camera and an Xbox Kinect sensor to measure body movement. It is one of the first “multi-modal” lie detectors – one that incorporates a number of different sources of evidence – since the polygraph.
But the “secret sauce”, according to David Mackstaller, who is taking the technology in Avatar to market via a company called Discern Science, is in the software, which uses an algorithm to combine all of these types of data. The machine aims to send a verdict to a human border guard within 45 seconds, who can either wave the traveller through or pull them aside for additional screening. Mackstaller said he is in talks with governments – he wouldn’t say which ones – about installing Avatar permanently after further tests at Nogales in Arizona on the US-Mexico border, and with federal employees at Reagan Airport near Washington DC. Discern Science claims accuracy rates in their preliminary studies – including the one in Bucharest – have been between 83% and 85%.
The Bucharest trials were supported by Frontex, the EU border agency, which is now funding a competing system called iBorderCtrl, with its own virtual border guard. One aspect of iBorderCtrl is based on Silent Talker, a technology that has been in development at Manchester Metropolitan University since the early 2000s. Silent Talker uses an AI model to analyse more than 40 types of microgestures in the face and head; it only needs a camera and an internet connection to function. On a recent visit to the company’s office in central Manchester, I watched video footage of a young man lying about taking money from a box during a mock crime experiment, while in the corner of the screen a dial swung from green, to yellow, to red. In theory, it could be run on a smartphone or used on live television footage, perhaps even during political debates, although co-founder James O’Shea said the company doesn’t want to go down that route – it is targeting law enforcement and insurance.
O’Shea and his colleague Zuhair Bandar claim Silent Talker has an accuracy rate of 75% in studies so far. “We don’t know how it works,” O’Shea said. They stressed the importance of keeping a “human in the loop” when it comes to making decisions based on Silent Talker’s results.
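The green-to-red dial described above amounts to banding a model’s deception probability, with the doubtful middle referred to a human. A hypothetical sketch – the band boundaries are invented, not Silent Talker’s actual thresholds:

```python
# Hypothetical banding of a deception probability into a traffic-light
# verdict, keeping a "human in the loop" for doubtful and flagged cases.
def dial(p_deceptive: float) -> str:
    if p_deceptive < 0.4:
        return "green"    # treated as truthful
    if p_deceptive < 0.7:
        return "yellow"   # undecided: refer to a human examiner
    return "red"          # flagged as deceptive -- a human reviews this too

for p in (0.12, 0.55, 0.83):
    print(f"p={p:.2f} -> {dial(p)}")
```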
Mackstaller said Avatar’s results will improve as its algorithm learns. He also expects it to perform better in the real world because the penalties for getting caught are much higher, so liars are under more stress. But research shows that the opposite may be true: lab studies tend to overestimate real-world success.
Before these tools are rolled out at scale, clearer evidence is required that they work across different cultures, or with groups of people such as psychopaths, whose non-verbal behaviour may differ from the norm. Much of the research so far has been conducted on white Europeans and Americans. Evidence from other domains, including bail and prison sentencing, suggests that algorithms tend to encode the biases of the societies in which they are created. These effects could be heightened at the border, where some of society’s greatest fears and prejudices play out. What’s more, the black box of an AI model is not conducive to transparent decision making since it cannot explain its reasoning. “We don’t know how it works,” O’Shea said. “The AI system learned how to do it by itself.”
Andy Balmer, the University of Manchester sociologist, fears that technology will be used to reinforce existing biases with a veneer of questionable science – making it harder for individuals from vulnerable groups to challenge decisions. “Most reputable science is clear that lie detection doesn’t work, and yet it persists as a field of study where other things probably would have been abandoned by now,” he said. “That tells us something about what we want from it.”
The truth has only one face, wrote the 16th-century French philosopher Michel de Montaigne, but a lie “has a hundred thousand shapes and no defined limits”. Deception is not a singular phenomenon and, as yet, we know of no telltale sign of deception that holds true for everyone, in every situation. There is no Pinocchio’s nose. “That’s seen as the holy grail of lie detection,” said Dr Sophie van der Zee, a legal psychologist at Erasmus University in Rotterdam. “So far no one has found it.”
The accuracy rates of 80-90% claimed by the likes of EyeDetect and Avatar sound impressive, but applied at the scale of a border crossing they would lead to thousands of innocent people being wrongly flagged for every genuine threat identified. They might also mean that two out of every 10 terrorists easily slip through.
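That arithmetic is easy to check. A back-of-envelope sketch, assuming 85% sensitivity and specificity and an illustrative base rate of 100 genuine threats per 10 million crossings:

```python
# Base-rate check on screening accuracy. All figures are illustrative
# assumptions, not any vendor's published numbers.
travellers, threats = 10_000_000, 100
sensitivity = specificity = 0.85

true_alarms  = sensitivity * threats                       # threats flagged
false_alarms = (1 - specificity) * (travellers - threats)  # innocents flagged
missed       = threats - true_alarms

print(f"innocent people flagged: {false_alarms:,.0f}")
print(f"threats caught: {true_alarms:.0f}, missed: {missed:.0f}")
print(f"false alarms per real threat caught: {false_alarms / true_alarms:,.0f}")
```

Under these assumptions the tool flags about 1.5 million innocent travellers in order to catch 85 threats – nearly 18,000 false alarms for every real one – while still missing 15 threats out of 100.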
History suggests that such shortcomings will not stop these new tools from being used. After all, the polygraph has been widely debunked, but an estimated 2.5m polygraph exams are still conducted in the US every year. It is a $2.5bn industry. In the UK, the polygraph has been used on sex offenders since 2014, and in January 2019, the government announced plans to use it on domestic abusers on parole. The test “cannot be killed by science because it was not born of science”, writes the historian Ken Alder in his book The Lie Detectors.
New technologies may be harder than the polygraph for unscrupulous examiners to deliberately manipulate, but that does not mean they will be fair. AI-powered lie detectors prey on the tendency of both individuals and governments to put faith in science’s supposedly all-seeing eye. And the closer they get to perfect reliability, or at least the closer they appear to get, the more dangerous they will become, because lie detectors often get aimed at society’s most vulnerable: women in the 1920s, suspected dissidents and homosexuals in the 60s, benefit claimants in the 2000s, asylum seekers and migrants today. “Scientists don’t think much about who is going to use these methods,” said Giorgio Ganis. “I always feel that people should be aware of the implications.”
In an era of fake news and falsehoods, it can be tempting to look for certainty in science. But lie detectors tend to surface at “pressure-cooker points” in politics, when governments lower their requirements for scientific rigour, said Balmer. In this environment, dubious new techniques could “slip neatly into the role the polygraph once played”, Alder predicts.
One day, improvements in artificial intelligence could find a reliable pattern for deception by scouring multiple sources of evidence, or more detailed scanning technologies could discover an unambiguous sign lurking in the brain. In the real world, however, practised falsehoods – the stories we tell ourselves about ourselves, the lies that form the core of our identity – complicate matters. “We have this tremendous capacity to believe our own lies,” Dan Ariely, a renowned behavioural psychologist at Duke University, said. “And once we believe our own lies, of course we don’t provide any signal of wrongdoing.”
Larson refined his invention over several years with the help of an enterprising young man named Leonarde Keeler, who envisioned applications for the polygraph well beyond law enforcement. After the Wall Street crash of 1929, Keeler offered a version of the machine that was concealed inside an elegant walnut box to large organisations so they could screen employees suspected of theft.
Not long after, the US government became the world’s largest user of the exam. During the “red scare” of the 1950s, thousands of federal employees were subjected to polygraphs designed to root out communists. The US Army, which set up its first polygraph school in 1951, still trains examiners for all the intelligence agencies at the National Center for Credibility Assessment at Fort Jackson in South Carolina.
Companies also embraced the technology. Throughout much of the last century, about a quarter of US corporations ran polygraph exams on employees to test for issues including histories of drug use and theft. McDonald’s used to use the machine on its workers. By the 1980s, there were up to 10,000 trained polygraph examiners in the US, conducting 2m tests a year.
The only problem was that the polygraph did not work. In 2003, the US National Academy of Sciences published a damning report that found evidence on the polygraph’s accuracy across 57 studies was “far from satisfactory”. History is littered with examples of known criminals who evaded detection by cheating the test. Aldrich Ames, a KGB double agent, passed two polygraphs while working for the CIA in the late 1980s and early 90s. With a little training, it is relatively easy to beat the machine. Floyd “Buzz” Fay, who was falsely convicted of murder in 1979 after a failed polygraph exam, became an expert in the test during his two-and-a-half-years in prison, and started coaching other inmates on how to defeat it. After 15 minutes of instruction, 23 of 27 were able to pass. Common “countermeasures”, which work by exaggerating the body’s response to control questions, include thinking about a frightening experience, stepping on a pin hidden in the shoe, or simply clenching the anus.
The upshot is that the polygraph is not and never was an effective lie detector. There is no way for an examiner to know whether a rise in blood pressure is due to fear of getting caught in a lie, or anxiety about being wrongly accused. Different examiners rating the same charts can get contradictory results and there are huge discrepancies in outcome depending on location, race and gender. In one extreme example, an examiner in Washington state failed one in 20 law enforcement job applicants for having sex with animals; he “uncovered” 10 times more bestiality than his colleagues, and twice as much child pornography.
As long ago as 1965, the year Larson died, the US Committee on Government Operations issued a damning verdict on the polygraph. “People have been deceived by a myth that a metal box in the hands of an investigator can detect truth or falsehood,” it concluded. By then, civil rights groups were arguing that the polygraph violated constitutional protections against self-incrimination. In fact, despite the polygraph’s cultural status, in the US, its results are inadmissible in most courts. And in 1988, citing concerns that the polygraph was open to “misuse and abuse”, the US Congress banned its use by employers. Other lie-detectors from the second half of the 20th century fared no better: abandoned Department of Defense projects included the “wiggle chair”, which covertly tracked movement and body temperature during interrogation, and an elaborate system for measuring breathing rate by aiming an infrared laser at the lip through a hole in the wall.
The polygraph remained popular though – not because it was effective, but because people thought it was. “The people who developed the polygraph machine knew that the real power of it was in convincing people that it works,” said Dr Andy Balmer, a sociologist at the University of Manchester who wrote a book called Lie Detection and the Law.
The threat of being outed by the machine was enough to coerce some people into confessions. One examiner in Cincinnati in 1975 left the interrogation room and reportedly watched, bemused, through a two-way mirror as the accused tore 1.8 metres of paper charts off the machine and ate them. (You didn’t even have to have the right machine: in the 1980s, police officers in Detroit extracted confessions by placing a suspect’s hand on a photocopier that spat out sheets of paper with the phrase “He’s Lying!” pre-printed on them.) This was particularly attractive to law enforcement in the US, where it is vastly cheaper to use a machine to get a confession out of someone than it is to take them to trial.
But other people were pushed to admit to crimes they did not commit after the machine wrongly labelled them as lying. The polygraph became a form of psychological torture that wrung false confessions from the vulnerable. Many of these people were then charged, prosecuted and sent to jail – whether by unscrupulous police and prosecutors, or by those who wrongly believed in the polygraph’s power.
Perhaps no one came to understand the coercive potential of his machine better than Larson. Shortly before his death in 1965, he wrote: “Beyond my expectation, through uncontrollable factors, this scientific investigation became for practical purposes a Frankenstein’s monster.”
The search for a truly effective lie detector gained new urgency after the terrorist attacks of 11 September 2001. Several of the hijackers had managed to enter the US after successfully deceiving border agents. Suddenly, intelligence and border services wanted tools that actually worked. A flood of new government funding made lie detection big business again. “Everything changed after 9/11,” writes psychologist Paul Ekman in Telling Lies.
Ekman was one of the beneficiaries of this surge. In the 1970s, he had been filming interviews with psychiatric patients when he noticed a brief flash of despair cross the features of Mary, a 42-year-old suicidal woman, when she lied about feeling better. He spent the next few decades cataloguing how these tiny movements of the face, which he termed “micro-expressions”, can reveal hidden truths.
Ekman’s work was hugely influential with psychologists, and even served as the basis for Lie to Me, a primetime television show that debuted in 2009 with an Ekman-inspired lead played by Tim Roth. But it got its first real-world test in 2006, as part of a raft of new security measures introduced to combat terrorism. That year, Ekman spent a month teaching US immigration officers how to detect deception at passport control by looking for certain micro-expressions. The results are instructive: at least 16 terrorists were permitted to enter the US in the following six years.
Investment in lie-detection technology “goes in waves”, said Dr John Kircher, a University of Utah psychologist who developed a digital scoring system for the polygraph. There were spikes in the early 1980s, the mid-90s and the early 2000s, neatly tracking with Republican administrations and foreign wars. In 2008, under President George W Bush, the US Army spent $700,000 on 94 handheld lie detectors for use in Iraq and Afghanistan. The Preliminary Credibility Assessment Screening System had three sensors that attached to the hand, connected to an off-the-shelf pager which flashed green for truth, red for lies and yellow if it couldn’t decide. It was about as good as a photocopier at detecting deception – and at eliciting the truth.
Some people believe an accurate lie detector would have allowed border patrol to stop the 9/11 hijackers. “These people were already on watch lists,” Larry Farwell, the inventor of brain fingerprinting, told me. “Brain fingerprinting could have provided the evidence we needed to bring the perpetrators to justice before they actually committed the crime.” A similar logic has been applied in the case of European terrorists who returned from receiving training abroad.
As a result, the frontline for much of the new government-funded lie detection technology has been the borders of the US and Europe. In 2014, travellers flying into Bucharest were interrogated by a virtual border agent called Avatar, an on-screen figure in a white shirt with blue eyes, which introduced itself as “the future of passport control”. As well as an e-passport scanner and fingerprint reader, the Avatar unit has a microphone, an infra-red eye-tracking camera and an Xbox Kinect sensor to measure body movement. It is one of the first “multi-modal” lie detectors – one that incorporates a number of different sources of evidence – since the polygraph.
But the “secret sauce”, according to David Mackstaller, who is taking the technology in Avatar to market via a company called Discern Science, is in the software, which uses an algorithm to combine all of these types of data. The machine aims to send a verdict within 45 seconds to a human border guard, who can either wave the traveller through or pull them aside for additional screening. Mackstaller said he is in talks with governments – he wouldn’t say which ones – about installing Avatar permanently after further tests at Nogales in Arizona, on the US-Mexico border, and with federal employees at Reagan Airport near Washington DC. Discern Science claims accuracy rates in its preliminary studies – including the one in Bucharest – have been between 83% and 85%.
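Mackstaller won’t reveal the algorithm, but the general shape of multi-modal fusion is simple to sketch: score each sensor channel separately, combine the scores, and threshold the result into a verdict. The snippet below is a hypothetical illustration only – the channel names, weights and thresholds are invented, not Discern Science’s actual method.

```python
# Hypothetical sketch of multi-modal score fusion; Avatar's real
# algorithm is proprietary and unpublished. All weights are invented.

CHANNEL_WEIGHTS = {
    "voice_stress": 0.3,    # microphone
    "eye_tracking": 0.4,    # infra-red camera
    "body_movement": 0.3,   # Kinect sensor
}

def fuse(scores):
    """Combine per-channel deception scores (each in 0..1) into a verdict."""
    combined = sum(CHANNEL_WEIGHTS[ch] * scores[ch] for ch in CHANNEL_WEIGHTS)
    if combined < 0.4:
        return "clear"        # wave the traveller through
    if combined < 0.7:
        return "uncertain"    # refer to the human border guard
    return "flag"             # pull aside for additional screening

print(fuse({"voice_stress": 0.2, "eye_tracking": 0.3, "body_movement": 0.5}))
# -> clear
```

A fixed weighted average is the simplest possible fusion rule; a production system would more plausibly learn the combination from labelled data.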
The Bucharest trials were supported by Frontex, the EU border agency, which is now funding a competing system called iBorderCtrl, with its own virtual border guard. One aspect of iBorderCtrl is based on Silent Talker, a technology that has been in development at Manchester Metropolitan University since the early 2000s. Silent Talker uses an AI model to analyse more than 40 types of microgestures in the face and head; it only needs a camera and an internet connection to function. On a recent visit to the company’s office in central Manchester, I watched video footage of a young man lying about taking money from a box during a mock crime experiment, while in the corner of the screen a dial swung from green, to yellow, to red. In theory, it could be run on a smartphone or used on live television footage, perhaps even during political debates, although co-founder James O’Shea said the company doesn’t want to go down that route – it is targeting law enforcement and insurance.
O’Shea and his colleague Zuhair Bandar claim Silent Talker has an accuracy rate of 75% in studies so far. They stressed the importance of keeping a “human in the loop” when it comes to making decisions based on Silent Talker’s results.
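Silent Talker’s model itself has not been published, but the outline of a camera-only microgesture system can be sketched: score each video frame from a set of facial and head movements, then smooth the scores over a short window to drive the on-screen dial. Everything below – features, weights, thresholds – is a hypothetical illustration, not the real model.

```python
import math
from collections import deque

# Invented weights standing in for a learned model over per-frame
# microgesture features (blink onset, gaze aversion, head turn, ...);
# Silent Talker reportedly tracks more than 40 such features.
WEIGHTS = [0.8, -0.3, 0.5]

def frame_score(features):
    """Logistic deception score in (0, 1) for one frame's features."""
    z = sum(w * f for w, f in zip(WEIGHTS, features))
    return 1.0 / (1.0 + math.exp(-z))

def dial_colour(frame_scores, window=25):
    """Average the most recent frame scores and map them to the dial."""
    recent = deque(frame_scores, maxlen=window)
    avg = sum(recent) / len(recent)
    if avg < 0.4:
        return "green"
    if avg < 0.7:
        return "yellow"
    return "red"

frames = [[0.1, 0.9, 0.2], [0.6, 0.4, 0.7], [0.9, 0.1, 0.9]]
print(dial_colour([frame_score(f) for f in frames]))  # -> yellow
```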
Mackstaller said Avatar’s results will improve as its algorithm learns. He also expects it to perform better in the real world because the penalties for getting caught are much higher, so liars are under more stress. But research shows that the opposite may be true: lab studies tend to overestimate real-world success.
Before these tools are rolled out at scale, clearer evidence is required that they work across different cultures, or with groups of people such as psychopaths, whose non-verbal behaviour may differ from the norm. Much of the research so far has been conducted on white Europeans and Americans. Evidence from other domains, including bail and prison sentencing, suggests that algorithms tend to encode the biases of the societies in which they are created. These effects could be heightened at the border, where some of society’s greatest fears and prejudices play out. What’s more, the black box of an AI model is not conducive to transparent decision making since it cannot explain its reasoning. “We don’t know how it works,” O’Shea said. “The AI system learned how to do it by itself.”
Andy Balmer, the University of Manchester sociologist, fears the technology will be used to reinforce existing biases beneath a veneer of scientific respectability – making it harder for individuals from vulnerable groups to challenge decisions. “Most reputable science is clear that lie detection doesn’t work, and yet it persists as a field of study where other things probably would have been abandoned by now,” he said. “That tells us something about what we want from it.”
The truth has only one face, wrote the 16th-century French philosopher Michel de Montaigne, but a lie “has a hundred thousand shapes and no defined limits”. Deception is not a singular phenomenon, and as yet we know of no telltale sign that holds true for everyone, in every situation. There is no Pinocchio’s nose. “That’s seen as the holy grail of lie detection,” said Dr Sophie van der Zee, a legal psychologist at Erasmus University in Rotterdam. “So far no one has found it.”
The accuracy rates of 80-90% claimed by the likes of EyeDetect and Avatar sound impressive, but applied at the scale of a border crossing, they would lead to thousands of innocent people being wrongly flagged for every genuine threat they identified. They might also let two out of every 10 terrorists slip through with ease.
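To see why, it helps to run the numbers. The sketch below is purely illustrative: it assumes the claimed ~85% accuracy applies equally to liars and truth-tellers, and that one traveller in 100,000 is a genuine threat – both figures are assumptions, not data from the trials.

```python
# Back-of-the-envelope arithmetic on the base-rate problem.
# All inputs are illustrative assumptions, not data from the trials.

travellers = 1_000_000
threat_rate = 1 / 100_000     # assumed prevalence of genuine threats
sensitivity = 0.85            # assumed share of threats correctly flagged
specificity = 0.85            # assumed share of innocents correctly cleared

threats = travellers * threat_rate            # 10 genuine threats
innocents = travellers - threats

caught = threats * sensitivity                # 8.5 threats flagged
missed = threats - caught                     # 1.5 threats slip through
false_alarms = innocents * (1 - specificity)  # ~150,000 innocents flagged

print(f"innocents flagged per threat caught: {false_alarms / caught:,.0f}")
# -> innocents flagged per threat caught: 17,647
```

Under those assumptions, roughly 150,000 of every million travellers would be pulled aside wrongly – about 17,600 for each genuine threat caught – while 15% of real threats walked through unnoticed.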
History suggests that such shortcomings will not stop these new tools from being used. After all, the polygraph has been widely debunked, but an estimated 2.5m polygraph exams are still conducted in the US every year. It is a $2.5bn industry. In the UK, the polygraph has been used on sex offenders since 2014, and in January 2019, the government announced plans to use it on domestic abusers on parole. The test “cannot be killed by science because it was not born of science”, writes the historian Ken Alder in his book The Lie Detectors.
New technologies may be harder than the polygraph for unscrupulous examiners to deliberately manipulate, but that does not mean they will be fair. AI-powered lie detectors prey on the tendency of both individuals and governments to put faith in science’s supposedly all-seeing eye. And the closer they get to perfect reliability, or at least the closer they appear to get, the more dangerous they will become, because lie detectors often get aimed at society’s most vulnerable: women in the 1920s, suspected dissidents and homosexuals in the 60s, benefit claimants in the 2000s, asylum seekers and migrants today. “Scientists don’t think much about who is going to use these methods,” said Giorgio Ganis. “I always feel that people should be aware of the implications.”
In an era of fake news and falsehoods, it can be tempting to look for certainty in science. But lie detectors tend to surface at “pressure-cooker points” in politics, when governments lower their requirements for scientific rigour, said Balmer. In this environment, dubious new techniques could “slip neatly into the role the polygraph once played”, Alder predicts.
One day, improvements in artificial intelligence could find a reliable pattern for deception by scouring multiple sources of evidence, or more detailed scanning technologies could discover an unambiguous sign lurking in the brain. In the real world, however, practised falsehoods – the stories we tell ourselves about ourselves, the lies that form the core of our identity – complicate matters. “We have this tremendous capacity to believe our own lies,” Dan Ariely, a renowned behavioural psychologist at Duke University, said. “And once we believe our own lies, of course we don’t provide any signal of wrongdoing.”
In his 1995 science-fiction novel The Truth Machine, James Halperin imagined a world in which someone succeeds in building a perfect lie detector. The invention helps unite the warring nations of the globe into a world government, and accelerates the search for a cancer cure. But evidence from the last hundred years suggests that it probably wouldn’t play out like that in real life. Politicians are hardly queueing up to use new technology on themselves. Terry Mullins, a long-time private polygraph examiner – one of about 30 in the UK – has been trying in vain to get police forces and government departments interested in the EyeDetect technology. “You can’t get the government on board,” he said. “I think they’re all terrified.”
Daniel Langleben, the scientist behind No Lie MRI, told me that one of the government agencies that approached him was not really interested in the accuracy rates of his brain-based lie detector. An fMRI machine cannot be packed into a suitcase or brought into a police interrogation room, and the investigator cannot manipulate its results to apply pressure to an uncooperative suspect. The agency just wanted to know whether it could be used to train agents to beat the polygraph.
“Truth is not really a commodity,” Langleben reflected. “Nobody wants it.”