Thursday 12 September 2019

Central banks were always political – so their ‘independence’ doesn’t mean much

The separation of monetary and fiscal policy serves the neoliberal status quo. It won’t survive the next crash, writes Larry Elliott in The Guardian


 
‘The Federal Reserve is coming under enormous pressure from Donald Trump to cut interest rates.’ Donald Trump with Jerome Powell, then his nominee for chairman of the Federal Reserve, Washington DC, November 2017. Photograph: Carlos Barría/Reuters


Independent central banks were once all the rage. Taking decisions over interest rates away from politicians and handing them to technocrats was seen as a sensible way of preventing governments from trying to buy votes with cheap money. Politicians couldn’t be trusted to keep inflation under control, but central banks could.

And when the global economy came crashing down in the autumn of 2008, it was central banks that prevented another Great Depression. Interest rates were slashed and the electronic money taps were turned on with quantitative easing (QE). That, at least, is the way central banks tell the story.

An alternative narrative goes like this. Collectively, central banks failed to stop the biggest asset-price bubble in history from developing during the early 2000s. Instead of taking action to prevent a ruinous buildup of debt, they congratulated themselves on keeping inflation low.

Even when the storm broke, some institutions – most notably the European Central Bank (ECB) – were slow to act. And while the monetary stimulus provided by record-low interest rates and QE did arrest the slide into depression, the recovery was slow and patchy. The price of houses and shares soared, but wages flatlined.

A decade on from the 2008 crash, another financial crisis is brewing. The US central bank – the Federal Reserve – is coming under huge pressure from Donald Trump to cut interest rates and restart QE. The poor state of the German economy and the threat of deflation mean that on Thursday the ECB will cut the already negative interest rate for bank deposits and announce the resumption of its QE programme.

But central banks are almost out of ammo. If cutting interest rates to zero or just above was insufficient to bring about the sort of sustained recovery seen after previous recessions, then it is not obvious why a couple of quarter-point cuts will make much difference now. Likewise, expecting a bit more QE to do anything other than give a fillip to shares on Wall Street and the City is the triumph of hope over experience.

There were alternatives to the response to the 2008 crisis. Governments could have changed the mix, placing more emphasis on fiscal measures – tax cuts and spending increases – than on monetary stimulus, and then seeking to make the two arms of policy work together. They could have taken advantage of low interest rates to borrow more for the public spending programmes that would have created jobs and demand in their economies. Finance ministries could have ensured that QE contributed to the long-term good of the economy – the environment, for example – if they had issued bonds and instructed central banks to buy them.

This sort of approach does, though, involve breaking one of the big taboos of the modern age: the belief that monetary and fiscal policy should be kept separate and that central banks should be allowed to operate free from political interference.

The consensus blossomed during the good times of the late 1990s and early 2000s, and survived the financial crisis of 2008. But challenges from both the left and right, especially in the US, suggest that it won’t survive the next one. Trump says the Fed has damaged the economy by pushing up interest rates too quickly. Bernie Sanders says the US central bank has been captured by Wall Street. Both arguments are correct. It is a good thing that central bank independence is finally coming under scrutiny.

For a start, it has become clear that the notion of depoliticised central bankers is a myth. When he was governor of the Bank of England, Mervyn King lectured the government about the need for austerity while jealously guarding the right to set interest rates free from any political interference. Likewise, rarely does Mario Draghi, the outgoing president of the ECB, hold a press conference without urging eurozone countries to reduce budget deficits and embrace structural reform.

Central bankers have views and – perhaps unsurprisingly – they tend to be quite conservative ones. As the US economist Thomas Palley notes in a recent paper, central bank independence is a product of the neoliberal Chicago school of economics and aims to advance neoliberal interests. More specifically, workers like high employment because in those circumstances it is easier to bid up pay. Employers prefer higher unemployment because it keeps wages down and profits up. Central banks side with capital over labour because they accept the neoliberal idea that there is a point – the natural rate of unemployment – beyond which stimulating the economy merely leads to higher inflation. They are, Palley says, institutions “favoured by capital to guard against the danger that a democracy may choose economic policies capital dislikes”.

Until now, monetary policy has been deemed too important to be left to politicians. When the next crisis arrives it will become too political an issue to be left to unelected technocrats. If that crisis is to be tackled effectively, the age of independent central banks will have to come to an end.

Wednesday 11 September 2019

Boeing's travails show what's wrong with modern capitalism

Matt Stoller in The Guardian

The plight of Boeing shows the perils of modern capitalism. The corporation is a wounded giant. Much of its productive capacity has been mothballed following two crashes in six months of the 737 Max, the firm’s flagship product: the result of safety problems Boeing hid from regulators.

Just a year ago Boeing appeared unstoppable. In 2018, the company delivered more aircraft than its rival Airbus, with revenue hitting $100bn. It was also a cash machine, shedding 20% of its workforce since 2012 while funneling $43bn into stock buybacks in roughly the same period. Boeing’s board rewarded its CEO, Dennis Muilenburg, lavishly, paying him $23m in 2018, up 27% from the year before.

There was only one problem. The company was losing its ability to make safe airplanes. As Scott Hamilton, an aerospace analyst and editor of Leeham News and Analysis, puts it: “Boeing Commercial Airplanes clearly has a systemic problem in designing, producing and delivering airplanes.”

Something is wrong with today’s version of capitalism. It’s not just that it’s unfair. It’s that it’s no longer capable of delivering products that work. The root cause is the generation of high and persistent profits, to the exclusion of production. We have let financiers take over our corporations. They monopolize industries and then loot the corporations they run.

The executive team at Boeing is quite skilled – just at generating cash rather than at engineering. Boeing’s competitive advantage centered on politics, not planes. The corporation is now a political machine with a side business making aerospace and defense products. Boeing’s general counsel, former judge Michael Luttig, is the former boss of the FBI director, Christopher Wray, whose agents are investigating potential criminal activity at the company. Luttig is so well connected in high-level legal circles he served as a groomsman for the supreme court chief justice, John Roberts.

The company’s board members also include Nikki Haley, until recently the United Nations ambassador, former Nato supreme allied commander Edmund P Giambastiani Jr, former AIG CEO Edward M Liddy, and a host of former political officials and private equity icons.

Boeing used its political connections to monopolize the American aerospace industry and corrupt its regulators. In the 1990s, Boeing and McDonnell Douglas merged, leaving America with just one major producer of civilian aircraft. Before this merger, when there was a competitive market, Boeing was a wonderful company. As journalist Jerry Useem put it just 20 years ago, “Boeing has always been less a business than an association of engineers devoted to building amazing flying machines.”


But after the merger, the engineers lost power to the financiers. Boeing could increase prices, lay off workers, reduce quality and spend its cash buying back stock.

And no one could do anything about it. Customers and suppliers no longer had any alternative to Boeing, and Boeing corrupted officials in both parties who were supposed to regulate it. High profits masked the collapse in productive skill until the crashes of the 737 Max.

Boeing’s inability to make good, safe airplanes is a clear weakness. It is, after all, an aerospace company. But because Boeing is America’s only commercial airplane company, the crisis is rippling across the economy. Michael O’Leary, CEO of Ryanair, which ordered 58 737 Max planes, says his company cannot grow as planned until Boeing “gets its shit together”. Contractors and subcontractors slowed production of parts for the airplane, and airline customers scrambled to address shortages of airplanes.

Far from being an anomaly, Boeing is the norm in the corporate world across the west. In 2016, the Economist noted that profits across the corporate sector were high and persistent, a function of a lack of competition across swaths of the economy. If corporations don’t have to compete, they can raise prices to buyers, lower what they pay to suppliers and workers, and reduce quality.

High profits result in sloth and corruption. Many of our industrial goliaths are now run in ways that are fundamentally destructive. General Electric, for instance, was once a jewel of American productive capacity, a corporation created out of George Westinghouse and Thomas Edison’s patents for electric systems. Edison helped invent the lightbulb itself, brightening the world. Today, as a result of decisions made by Jack Welch in the 1990s to juice profit returns, GE slaps its label on lightbulbs made in China. Even worse, if investigator Harry Markopolos is right, General Electric may in fact be riddled with accounting fraud, a once great productive institution strip-mined by financiers.

These are not the natural, inevitable results of capitalism. Boeing and GE were once great companies, working in capitalist open markets.

So what went wrong? In short, the law. In the 1970s, a host of thinkers on the right and left – from Milton Friedman to George Stigler to Alfred Kahn to the current liberal supreme court justice Stephen Breyer – argued that policymakers should take restraints off capital and get rid of anti-monopoly rules. They used many terms to make this case, including deregulation, cost/benefit analysis, and the consumer welfare standard in antitrust law. They embraced the shareholder theory of capitalism, which emphasizes short-term profits. What followed was a radical consolidation of market power, and then systemic looting. 

Today, high profit margins are a pervasive and corrupting influence across the government and corporate sectors. Private equity firms moved capital from corporations and workers to themselves, destroying once healthy retailers like RadioShack, Toys R Us, Payless and K-Mart.

The disease of inefficiency and graft has spread to the government. In 1992, Harvard professor Ash Carter, who later became secretary of defense under Obama, wrote that the Pentagon was too difficult to do business with. “The most straightforward step” to address this, he wrote, “would be to raise the profit margins allowed on defense contracts.” The following year Prof Carter was appointed assistant secretary of defense for international security policy in the first Clinton administration, which followed his advice.

Earlier this year, the defense department found that one defense contractor run by private equity executives had profit margins of up to 4,451% on spare parts it sold to the military. Consulting giant McKinsey was recently caught trying to charge the government $3m a year for the services of a recent college graduate.

The ultimate result of concentrating wealth and corrupting government is to concentrate power in the hands of a few. We’ve been here before. In the 1930s, fascists in Italy and Germany were gaining strength, as were communists in Russia. Meanwhile, leaders in liberal democracies were confronted by a frightened populace losing faith in democracy. American political leaders were able to take on domestic money lords with a radical antitrust campaign to break the power of the plutocrats. Today we are in a similar situation, with autocrats making an increasingly persuasive case that liberal democracy is weak.

The solution to this political crisis is fairly simple, and it involves two basic principles. One, policymakers have to increase competition for large, powerful companies, to bring profits down. Executives should spend their time competing with each other to build quality products, not finding ways of attracting former generals or administration officials to their boards of directors. Two, policymakers should raise taxes on wealth and high incomes to radically reduce the concentration of wealth, which will make looting irrational.

Our system is no longer aligning rewards with productive skill. Despite the 737 Max crisis, Boeing’s stock price is still twice as high as in July 2015, when Muilenburg took over as CEO. That right there is what is broken about modern capitalism. We had better fix it fast.

Thursday 5 September 2019

The race to create a perfect lie detector – and the dangers of succeeding

Amit Katwala in The Guardian


We learn to lie as children, between the ages of two and five. By adulthood, we are prolific. We lie to our employers, our partners and, most of all, one study has found, to our mothers. The average person hears up to 200 lies a day, according to research by Jerry Jellison, a psychologist at the University of Southern California. The majority of the lies we tell are “white”, the inconsequential niceties – “I love your dress!” – that grease the wheels of human interaction. But most people tell one or two “big” lies a day, says Richard Wiseman, a psychologist at the University of Hertfordshire. We lie to promote ourselves, protect ourselves and to hurt or avoid hurting others. 

The mystery is how we keep getting away with it. Our bodies expose us in every way. Hearts race, sweat drips and micro-expressions leak from small muscles in the face. We stutter, stall and make Freudian slips. “No mortal can keep a secret,” wrote the psychoanalyst Sigmund Freud in 1905. “If his lips are silent, he chatters with his fingertips. Betrayal oozes out of him at every pore.”

Even so, we are hopeless at spotting deception. On average, across 206 scientific studies, people can separate truth from lies just 54% of the time – only marginally better than tossing a coin. “People are bad at it because the differences between truth-tellers and liars are typically small and unreliable,” said Aldert Vrij, a psychologist at the University of Portsmouth who has spent years studying ways to detect deception. Some people stiffen and freeze when put on the spot, others become more animated. Liars can spin yarns packed with colour and detail, and truth-tellers can seem vague and evasive.

Humans have been trying to overcome this problem for millennia. The search for a perfect lie detector has involved torture, trials by ordeal and, in ancient India, an encounter with a donkey in a dark room. Three thousand years ago in China, the accused were forced to chew and spit out rice; the grains were thought to stick in the dry, nervous mouths of the guilty. In 1730, the English writer Daniel Defoe suggested taking the pulse of suspected pickpockets. “Guilt carries fear always about with it,” he wrote. “There is a tremor in the blood of a thief.” More recently, lie detection has largely been equated with the juddering styluses of the polygraph machine – the quintessential lie detector beloved by daytime television hosts and police procedurals. But none of these methods has yielded a reliable way to separate fiction from fact.

That could soon change. In the past couple of decades, the rise of cheap computing power, brain-scanning technologies and artificial intelligence has given birth to what many claim is a powerful new generation of lie-detection tools. Startups, racing to commercialise these developments, want us to believe that a virtually infallible lie detector is just around the corner.

Their inventions are being snapped up by police forces, state agencies and nations desperate to secure themselves against foreign threats. They are also being used by employers, insurance companies and welfare officers. “We’ve seen an increase in interest from both the private sector and within government,” said Todd Mickelsen, the CEO of Converus, which makes a lie detector based on eye movements and subtle changes in pupil size.

Converus’s technology, EyeDetect, has been used by FedEx in Panama and Uber in Mexico to screen out drivers with criminal histories, and by the credit ratings agency Experian, which tests its staff in Colombia to make sure they aren’t manipulating the company’s database to secure loans for family members. In the UK, Northumbria police are carrying out a pilot scheme that uses EyeDetect to measure the rehabilitation of sex offenders. Other EyeDetect customers include the government of Afghanistan, McDonald’s and dozens of local police departments in the US. Soon, large-scale lie-detection programmes could be coming to the borders of the US and the European Union, where they would flag potentially deceptive travellers for further questioning.

But as tools such as EyeDetect infiltrate more and more areas of public and private life, there are urgent questions to be answered about their scientific validity and ethical use. In our age of high surveillance and anxieties about all-powerful AIs, the idea that a machine could read our most personal thoughts feels more plausible than ever to us as individuals, and to the governments and corporations funding the new wave of lie-detection research. But what if states and employers come to believe in the power of a lie-detection technology that proves to be deeply biased – or that doesn’t actually work?

And what do we do with these technologies if they do succeed? A machine that reliably sorts truth from falsehood could have profound implications for human conduct. The creators of these tools argue that by weeding out deception they can create a fairer, safer world. But the ways lie detectors have been used in the past suggest such claims may be far too optimistic.

For most of us, most of the time, lying is more taxing and more stressful than honesty. To calculate another person’s view, suppress emotions and hold back from blurting out the truth requires more thought and more energy than simply being honest. It demands that we bear what psychologists call a cognitive load. Carrying that burden, most lie-detection theories assume, leaves evidence in our bodies and actions.

Lie-detection technologies tend to examine five different types of evidence. The first two are verbal: the things we say and the way we say them. Jeff Hancock, an expert on digital communication at Stanford, has found that people who are lying in their online dating profiles tend to use the words “I”, “me” and “my” more often, for instance. Voice-stress analysis, which aims to detect deception based on changes in tone of voice, was used during the interrogation of George Zimmerman, who shot the teenager Trayvon Martin in 2012, and by UK councils between 2007 and 2010 in a pilot scheme that tried to catch benefit cheats over the phone. Only five of the 23 local authorities where voice analysis was trialled judged it a success, but in 2014, it was still in use in 20 councils, according to freedom of information requests by the campaign group False Economy.

The third source of evidence – body language – can also reveal hidden feelings. Some liars display so-called “duper’s delight”, a fleeting expression of glee that crosses the face when they think they have got away with it. Cognitive load makes people move differently, and liars trying to “act natural” can end up doing the opposite. In an experiment in 2015, researchers at the University of Cambridge were able to detect deception more than 70% of the time by using a skintight suit to measure how much subjects fidgeted and froze under questioning.

The fourth type of evidence is physiological. The polygraph measures blood pressure, breathing rate and sweat. Penile plethysmography tests arousal levels in sex offenders by measuring the engorgement of the penis using a special cuff. Infrared cameras analyse facial temperature. Unlike Pinocchio, our noses may actually shrink slightly when we lie as warm blood flows towards the brain.

In the 1990s, new technologies opened up a fifth, ostensibly more direct avenue of investigation: the brain. In the second season of the Netflix documentary Making a Murderer, Steven Avery, who is serving a life sentence for a brutal killing he says he did not commit, undergoes a “brain fingerprinting” exam, which uses an electrode-studded headset called an electroencephalogram, or EEG, to read his neural activity and translate it into waves rising and falling on a graph. The test’s inventor, Dr Larry Farwell, claims it can detect knowledge of a crime hidden in a suspect’s brain by picking up a neural response to phrases or pictures relating to the crime that only the perpetrator and investigators would recognise. Another EEG-based test was used in 2008 to convict a 24-year-old Indian woman named Aditi Sharma of murdering her fiance by lacing his food with arsenic, but Sharma’s sentence was eventually overturned on appeal when the Indian supreme court held that the test could violate the subject’s rights against self-incrimination.

After 9/11, the US government – long an enthusiastic sponsor of deception science – started funding other kinds of brain-based lie-detection work through Darpa, the Defense Advanced Research Projects Agency. By 2006, two companies – Cephos and No Lie MRI – were offering lie detection based on functional magnetic resonance imaging, or fMRI. Using powerful magnets, these tools track the flow of blood to areas of the brain involved in social calculation, memory recall and impulse control.

But just because a lie-detection tool seems technologically sophisticated doesn’t mean it works. “It’s quite simple to beat these tests in ways that are very difficult to detect by a potential investigator,” said Dr Giorgio Ganis, who studies EEG and fMRI-based lie detection at the University of Plymouth. In 2007, a research group set up by the MacArthur Foundation examined fMRI-based deception tests. “After looking at the literature, we concluded that we have no idea whether fMRI can or cannot detect lies,” said Anthony Wagner, a Stanford psychologist and a member of the MacArthur group, who has testified against the admissibility of fMRI lie detection in court.

A new frontier in lie detection is now emerging. An increasing number of projects are using AI to combine multiple sources of evidence into a single measure for deception. Machine learning is accelerating deception research by spotting previously unseen patterns in reams of data. Scientists at the University of Maryland, for example, have developed software that they claim can detect deception from courtroom footage with 88% accuracy.

The algorithms behind such tools are designed to improve continuously over time, and may ultimately end up basing their determinations of guilt and innocence on factors that even the humans who have programmed them don’t understand. These tests are being trialled in job interviews, at border crossings and in police interviews, but as they become increasingly widespread, civil rights groups and scientists are growing more and more concerned about the dangers they could unleash on society.

Nothing provides a clearer warning about the threats of the new generation of lie-detection technology than the history of the polygraph, the world’s best-known and most widely used deception test. Although almost a century old, the machine still dominates both the public perception of lie detection and the testing market, with millions of polygraph tests conducted every year. Ever since its creation, it has been attacked for its questionable accuracy, and for the way it has been used as a tool of coercion. But the polygraph’s flawed science continues to cast a shadow over lie detection technologies today.

Even John Larson, the inventor of the polygraph, came to hate his creation. In 1921, Larson was a 29-year-old rookie police officer working the downtown beat in Berkeley, California. But he had also studied physiology and criminology and, when not on patrol, he was in a lab at the University of California, developing ways to bring science to bear in the fight against crime.

In the spring of 1921, Larson built an ugly device that took continuous measurements of blood pressure and breathing rate, and scratched the results on to a rolling paper cylinder. He then devised an interview-based exam that compared a subject’s physiological response when answering yes or no questions relating to a crime with the subject’s answers to control questions such as “Is your name Jane Doe?” As a proof of concept, he used the test to solve a theft at a women’s dormitory.

 
John Larson (right), the inventor of the polygraph lie detector. Photograph: Pictorial Parade/Getty Images

Larson refined his invention over several years with the help of an enterprising young man named Leonarde Keeler, who envisioned applications for the polygraph well beyond law enforcement. After the Wall Street crash of 1929, Keeler offered a version of the machine that was concealed inside an elegant walnut box to large organisations so they could screen employees suspected of theft.

Not long after, the US government became the world’s largest user of the exam. During the “red scare” of the 1950s, thousands of federal employees were subjected to polygraphs designed to root out communists. The US Army, which set up its first polygraph school in 1951, still trains examiners for all the intelligence agencies at the National Center for Credibility Assessment at Fort Jackson in South Carolina.

Companies also embraced the technology. Throughout much of the last century, about a quarter of US corporations ran polygraph exams on employees to test for issues including histories of drug use and theft. McDonald’s used to use the machine on its workers. By the 1980s, there were up to 10,000 trained polygraph examiners in the US, conducting 2m tests a year.

The only problem was that the polygraph did not work. In 2003, the US National Academy of Sciences published a damning report that found evidence on the polygraph’s accuracy across 57 studies was “far from satisfactory”. History is littered with examples of known criminals who evaded detection by cheating the test. Aldrich Ames, a KGB double agent, passed two polygraphs while working for the CIA in the late 1980s and early 90s. With a little training, it is relatively easy to beat the machine. Floyd “Buzz” Fay, who was falsely convicted of murder in 1979 after a failed polygraph exam, became an expert in the test during his two and a half years in prison, and started coaching other inmates on how to defeat it. After 15 minutes of instruction, 23 of 27 were able to pass. Common “countermeasures”, which work by exaggerating the body’s response to control questions, include thinking about a frightening experience, stepping on a pin hidden in the shoe, or simply clenching the anus.

The upshot is that the polygraph is not and never was an effective lie detector. There is no way for an examiner to know whether a rise in blood pressure is due to fear of getting caught in a lie, or anxiety about being wrongly accused. Different examiners rating the same charts can get contradictory results and there are huge discrepancies in outcome depending on location, race and gender. In one extreme example, an examiner in Washington state failed one in 20 law enforcement job applicants for having sex with animals; he “uncovered” 10 times more bestiality than his colleagues, and twice as much child pornography.

As long ago as 1965, the year Larson died, the US Committee on Government Operations issued a damning verdict on the polygraph. “People have been deceived by a myth that a metal box in the hands of an investigator can detect truth or falsehood,” it concluded. By then, civil rights groups were arguing that the polygraph violated constitutional protections against self-incrimination. In fact, despite the polygraph’s cultural status, in the US, its results are inadmissible in most courts. And in 1988, citing concerns that the polygraph was open to “misuse and abuse”, the US Congress banned its use by employers. Other lie-detectors from the second half of the 20th century fared no better: abandoned Department of Defense projects included the “wiggle chair”, which covertly tracked movement and body temperature during interrogation, and an elaborate system for measuring breathing rate by aiming an infrared laser at the lip through a hole in the wall.

The polygraph remained popular though – not because it was effective, but because people thought it was. “The people who developed the polygraph machine knew that the real power of it was in convincing people that it works,” said Dr Andy Balmer, a sociologist at the University of Manchester who wrote a book called Lie Detection and the Law.

The threat of being outed by the machine was enough to coerce some people into confessions. One examiner in Cincinnati in 1975 left the interrogation room and reportedly watched, bemused, through a two-way mirror as the accused tore 1.8 metres of paper charts off the machine and ate them. (You didn’t even have to have the right machine: in the 1980s, police officers in Detroit extracted confessions by placing a suspect’s hand on a photocopier that spat out sheets of paper with the phrase “He’s Lying!” pre-printed on them.) This was particularly attractive to law enforcement in the US, where it is vastly cheaper to use a machine to get a confession out of someone than it is to take them to trial.

But other people were pushed to admit to crimes they did not commit after the machine wrongly labelled them as lying. The polygraph became a form of psychological torture that wrung false confessions from the vulnerable. Many of these people were then charged, prosecuted and sent to jail – whether by unscrupulous police and prosecutors, or by those who wrongly believed in the polygraph’s power.

Perhaps no one came to understand the coercive potential of his machine better than Larson. Shortly before his death in 1965, he wrote: “Beyond my expectation, through uncontrollable factors, this scientific investigation became for practical purposes a Frankenstein’s monster.”

The search for a truly effective lie detector gained new urgency after the terrorist attacks of 11 September 2001. Several of the hijackers had managed to enter the US after successfully deceiving border agents. Suddenly, intelligence and border services wanted tools that actually worked. A flood of new government funding made lie detection big business again. “Everything changed after 9/11,” writes psychologist Paul Ekman in Telling Lies.

Ekman was one of the beneficiaries of this surge. In the 1970s, he had been filming interviews with psychiatric patients when he noticed a brief flash of despair cross the features of Mary, a 42-year-old suicidal woman, when she lied about feeling better. He spent the next few decades cataloguing how these tiny movements of the face, which he termed “micro-expressions”, can reveal hidden truths.

Ekman’s work was hugely influential with psychologists, and even served as the basis for Lie to Me, a primetime television show that debuted in 2009 with an Ekman-inspired lead played by Tim Roth. But it got its first real-world test in 2006, as part of a raft of new security measures introduced to combat terrorism. That year, Ekman spent a month teaching US immigration officers how to detect deception at passport control by looking for certain micro-expressions. The results are instructive: at least 16 terrorists were permitted to enter the US in the following six years.

Investment in lie-detection technology “goes in waves”, said Dr John Kircher, a University of Utah psychologist who developed a digital scoring system for the polygraph. There were spikes in the early 1980s, the mid-90s and the early 2000s, neatly tracking with Republican administrations and foreign wars. In 2008, under President George W Bush, the US Army spent $700,000 on 94 handheld lie detectors for use in Iraq and Afghanistan. The Preliminary Credibility Assessment Screening System had three sensors that attached to the hand, connected to an off-the-shelf pager which flashed green for truth, red for lies and yellow if it couldn’t decide. It was about as good as a photocopier at detecting deception – and at eliciting the truth.

Some people believe an accurate lie detector would have allowed border patrol to stop the 9/11 hijackers. “These people were already on watch lists,” Larry Farwell, the inventor of brain fingerprinting, told me. “Brain fingerprinting could have provided the evidence we needed to bring the perpetrators to justice before they actually committed the crime.” A similar logic has been applied in the case of European terrorists who returned from receiving training abroad.

As a result, the frontline for much of the new government-funded lie detection technology has been the borders of the US and Europe. In 2014, travellers flying into Bucharest were interrogated by a virtual border agent called Avatar, an on-screen figure in a white shirt with blue eyes, which introduced itself as “the future of passport control”. As well as an e-passport scanner and fingerprint reader, the Avatar unit has a microphone, an infrared eye-tracking camera and an Xbox Kinect sensor to measure body movement. It is one of the first “multi-modal” lie detectors – one that incorporates a number of different sources of evidence – since the polygraph.

But the “secret sauce”, according to David Mackstaller, who is taking the technology in Avatar to market via a company called Discern Science, is in the software, which uses an algorithm to combine all of these types of data. The machine aims to send a verdict to a human border guard within 45 seconds, who can either wave the traveller through or pull them aside for additional screening. Mackstaller said he is in talks with governments – he wouldn’t say which ones – about installing Avatar permanently after further tests at Nogales in Arizona on the US-Mexico border, and with federal employees at Reagan Airport near Washington DC. Discern Science claims accuracy rates in their preliminary studies – including the one in Bucharest – have been between 83% and 85%.

The Bucharest trials were supported by Frontex, the EU border agency, which is now funding a competing system called iBorderCtrl, with its own virtual border guard. One aspect of iBorderCtrl is based on Silent Talker, a technology that has been in development at Manchester Metropolitan University since the early 2000s. Silent Talker uses an AI model to analyse more than 40 types of microgestures in the face and head; it only needs a camera and an internet connection to function. On a recent visit to the company’s office in central Manchester, I watched video footage of a young man lying about taking money from a box during a mock crime experiment, while in the corner of the screen a dial swung from green, to yellow, to red. In theory, it could be run on a smartphone or used on live television footage, perhaps even during political debates, although co-founder James O’Shea said the company doesn’t want to go down that route – it is targeting law enforcement and insurance.

O’Shea and his colleague Zuhair Bandar claim Silent Talker has an accuracy rate of 75% in studies so far. “We don’t know how it works,” O’Shea said. They stressed the importance of keeping a “human in the loop” when it comes to making decisions based on Silent Talker’s results.

Mackstaller said Avatar’s results will improve as its algorithm learns. He also expects it to perform better in the real world because the penalties for getting caught are much higher, so liars are under more stress. But research shows that the opposite may be true: lab studies tend to overestimate real-world success.

Before these tools are rolled out at scale, clearer evidence is required that they work across different cultures, or with groups of people such as psychopaths, whose non-verbal behaviour may differ from the norm. Much of the research so far has been conducted on white Europeans and Americans. Evidence from other domains, including bail and prison sentencing, suggests that algorithms tend to encode the biases of the societies in which they are created. These effects could be heightened at the border, where some of society’s greatest fears and prejudices play out. What’s more, the black box of an AI model is not conducive to transparent decision making since it cannot explain its reasoning. “We don’t know how it works,” O’Shea said. “The AI system learned how to do it by itself.”

Andy Balmer, the University of Manchester sociologist, fears that technology will be used to reinforce existing biases with a veneer of questionable science – making it harder for individuals from vulnerable groups to challenge decisions. “Most reputable science is clear that lie detection doesn’t work, and yet it persists as a field of study where other things probably would have been abandoned by now,” he said. “That tells us something about what we want from it.”

The truth has only one face, wrote the 16th-century French philosopher Michel de Montaigne, but a lie “has a hundred thousand shapes and no defined limits”. Deception is not a singular phenomenon and, as of yet, we know of no telltale sign of deception that holds true for everyone, in every situation. There is no Pinocchio’s nose. “That’s seen as the holy grail of lie detection,” said Dr Sophie van der Zee, a legal psychologist at Erasmus University in Rotterdam. “So far no one has found it.”

The accuracy rates of 80-90% claimed by the likes of EyeDetect and Avatar sound impressive, but applied at the scale of a border crossing, they would lead to thousands of innocent people being wrongly flagged for every genuine threat they identified. They might also mean that two out of every 10 terrorists easily slip through.
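
To make the arithmetic behind that claim concrete, here is a minimal sketch in Python using assumed figures – one million travellers screened, 100 genuine threats among them, and a detector that is right 85% of the time for both liars and truth-tellers. None of these numbers comes from the article; they are only illustrative of how a high headline accuracy still produces a flood of false alarms when genuine threats are rare.

```python
# Illustrative base-rate calculation for a lie detector used at a border crossing.
# All figures below are assumptions for the sake of the example, not data from the article.

travellers = 1_000_000      # total people screened (assumed)
genuine_threats = 100       # deceptive/dangerous travellers among them (assumed)
accuracy = 0.85             # assumed rate of correct calls for both liars and truth-tellers

innocent = travellers - genuine_threats

true_positives = genuine_threats * accuracy          # threats correctly flagged
missed_threats = genuine_threats * (1 - accuracy)    # threats who slip through
false_positives = innocent * (1 - accuracy)          # innocent travellers wrongly flagged

print(f"Threats caught:            {true_positives:,.0f}")
print(f"Threats missed:            {missed_threats:,.0f}")
print(f"Innocents wrongly flagged: {false_positives:,.0f}")
print(f"Innocents flagged per threat caught: {false_positives / true_positives:,.0f}")
```

With these assumed inputs the sketch flags roughly 150,000 innocent travellers while catching 85 of the 100 threats – around 1,800 wrongly flagged people for every genuine threat identified, with 15 threats still slipping through.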

History suggests that such shortcomings will not stop these new tools from being used. After all, the polygraph has been widely debunked, but an estimated 2.5m polygraph exams are still conducted in the US every year. It is a $2.5bn industry. In the UK, the polygraph has been used on sex offenders since 2014, and in January 2019, the government announced plans to use it on domestic abusers on parole. The test “cannot be killed by science because it was not born of science”, writes the historian Ken Alder in his book The Lie Detectors.

New technologies may be harder than the polygraph for unscrupulous examiners to deliberately manipulate, but that does not mean they will be fair. AI-powered lie detectors prey on the tendency of both individuals and governments to put faith in science’s supposedly all-seeing eye. And the closer they get to perfect reliability, or at least the closer they appear to get, the more dangerous they will become, because lie detectors often get aimed at society’s most vulnerable: women in the 1920s, suspected dissidents and homosexuals in the 60s, benefit claimants in the 2000s, asylum seekers and migrants today. “Scientists don’t think much about who is going to use these methods,” said Giorgio Ganis. “I always feel that people should be aware of the implications.”

In an era of fake news and falsehoods, it can be tempting to look for certainty in science. But lie detectors tend to surface at “pressure-cooker points” in politics, when governments lower their requirements for scientific rigour, said Balmer. In this environment, dubious new techniques could “slip neatly into the role the polygraph once played”, Alder predicts.

One day, improvements in artificial intelligence could find a reliable pattern for deception by scouring multiple sources of evidence, or more detailed scanning technologies could discover an unambiguous sign lurking in the brain. In the real world, however, practised falsehoods – the stories we tell ourselves about ourselves, the lies that form the core of our identity – complicate matters. “We have this tremendous capacity to believe our own lies,” Dan Ariely, a renowned behavioural psychologist at Duke University, said. “And once we believe our own lies, of course we don’t provide any signal of wrongdoing.” 

In his 1995 science-fiction novel The Truth Machine, James Halperin imagined a world in which someone succeeds in building a perfect lie detector. The invention helps unite the warring nations of the globe into a world government, and accelerates the search for a cancer cure. But evidence from the last hundred years suggests that it probably wouldn’t play out like that in real life. Politicians are hardly queueing up to use new technology on themselves. Terry Mullins, a long-time private polygraph examiner – one of about 30 in the UK – has been trying in vain to get police forces and government departments interested in the EyeDetect technology. “You can’t get the government on board,” he said. “I think they’re all terrified.”

Daniel Langleben, the scientist behind No Lie MRI, told me one of the government agencies he was approached by was not really interested in the accuracy rates of his brain-based lie detector. An fMRI machine cannot be packed into a suitcase or brought into a police interrogation room. The investigator cannot manipulate the test results to apply pressure to an uncooperative suspect. The agency just wanted to know whether it could be used to train agents to beat the polygraph.

“Truth is not really a commodity,” Langleben reflected. “Nobody wants it.”

Sunday 1 September 2019

We know life is a game of chance, so why not draw lots to see who gets the job?

Sonia Sodha in The Guardian

Remove human bias from the interview process and the world might start to become a fairer place 


 
Interviews are an unreliable way of selecting the best person for the job. Photograph: Alamy


The sweaty palms, the swotting, the tricky question that prompts your heart to plummet: job interviews are no one’s idea of a good time. The other side of the equation is hardly fun either: days out of a busy schedule spent interviewing candidates, some of whom you know within a couple of minutes you would never offer a job.

Interviews are time-consuming for all involved. But we persist in doing them because recruitment decisions are some of the most important we take in the workplace and it follows we should invest time and energy into a robust recruitment process, right?

Wrong. It is long established that unstructured interviews are a notoriously unreliable way of selecting the best people for the job. This is perhaps unsurprising, when you consider the limited overlap between the skills needed to ace an interview and those needed to perform well day to day in a job or on a university course. And how many of us can honestly say we have been 100% truthful in a job interview?

Experimental studies show how unreliable interviewers are at accurately predicting someone’s capabilities. This is borne out on the rare occasions it gets tested in the real world. In the late 1970s, there was a doctor shortage in Texas and politicians instructed the state medical school to increase its admissions after it had already selected 150 applicants by interview. So it took another 50 candidates who had reached the interview stage and been rejected, even though many of the stronger rejected candidates had already been snapped up by other medical schools. Researchers found these 50 students performed just as well as the original crop. Once the candidates got through the on-paper sift, they might as well have been drawn out of a hat.

Not only are interviews a generally bad way to spot talent, they are also an effective way of smuggling in bias. There are the obvious implicit biases – sexism, racism, ageism, class discrimination – but others also exist. According to psychologist Ron Friedman, we tend to perceive good-looking people to be more competent, tall candidates to have greater leadership potential and deep-voiced candidates to be more trustworthy. Interviews also encourage us to pick people who look like us, think similarly to us and with whom we strike up an easy rapport. The myth of the meritocratic interview allows all sorts of prejudice to flourish.

These days, huge effort goes into trying to unpick these biases in interviews. Vast sums are spent on unconscious bias training, but the evidence as to its effectiveness is mixed at best. It turns out training a person’s subconscious to think differently isn’t as easy as a half-day course.


This is why it is no substitute for breaking down the structures that allow these biases to fester. For example, managers might only be allowed to make an appointment once they have a sufficiently diverse shortlist. I’ve long been a believer in quotas for underrepresented groups where improving diversity is happening at a glacial pace, for example, in Oxbridge admissions.

But a recent conversation with a friend who works at Nesta, a charitable foundation, got me thinking about whether we should ditch the pretence that we can accurately predict people’s potential. Her organisation is experimenting with a lottery to award funding to staff for innovative projects. Employees can put forward their own proposal. All of those that meet a minimum set of criteria go into a draw, with a number selected for funding at random.

My initial thought was that this sounded bonkers. But ponder it more and the logic is sound. Not only does it eliminate human bias, it encourages creativity and avoids groupthink, discouraging staff from self-censoring because they think their idea is one management simply wouldn’t go for. It chimes with those who have argued that at least some science funding should be awarded by lottery, because in the contemporary world of peer review and scoring grids, risky ideas with potentially huge pay-offs do not attract sufficient funding.

Random selection embodies a very different conception of fairness to meritocracy. But if we accept that what we call meritocracy is predominantly a way for advantage to self-replicate, why not at least experiment with lotteries instead? Big graduate recruiters or Oxbridge courses could set “on paper” entry criteria, select candidates who meet them at random and test whether there are any differences with candidates selected by interview.

I am willing to bet that, as observed in Texas, they would do no worse. And that there would be other benefits: diversity of thought as well as diversity of demography. Quotas are often criticised for their potential to undermine those individuals who benefit from positive discrimination; everyone knows they are there not purely on merit, or so the argument goes. An element of random selection might engender a bit more humility on the part of white, middle-class men; it goes alongside being honest that meritocracy is a convenient mask for privilege.

The reason such experiments remain unlikely is that studies show that even when people are aware of the fallibility of interviews, they sustain incredible self-belief in their ability to buck the trend. Not only that, there are a lot of powerful people with a stake in maintaining the illusion of meritocracy. Oxford and Cambridge want to preserve the misconception that their selection procedures embody the creme de la creme of today selecting the creme de la creme of tomorrow.

But if you find yourself balking at random selection, ask yourself this: have you ever formed a first impression that was wrong? It might go against the grain, but making more liberal use of lotteries might produce not just a fairer but a better and more diverse world.

Saturday 31 August 2019

The agony of returning to work in September

Janan Ganesh in The FT 

For eight improbable years, TS Eliot earned his crust as a clerk for Lloyds Bank. He did not have the excuse of ignorance, therefore, when he misidentified April as the “cruellest month”. All working people know the real ogre to be September. Millions of us are winding down our summer holidays around now and answering the call of necessary employment.


I enjoy my job to an almost indecent degree. Yet even I felt a pang as I flew out of Perugia recently and into my nine-to-five (or, if you must, my eleven-to-two). La rentrée is all the harsher on people with proper jobs. 

The sour atmosphere in airport departure lounges does at least clarify something. The search for pleasure and meaning in work is, beyond a certain point, a fool’s errand. No doubt, some jobs are better than others. But as long as work is an obligation — something one must do, to uphold a standard of living — there is a limit to the joy it can ever bring. Leisure will always feel better, and by a margin that is unbridgeable with worker-friendly offices and other blandishments. 

I started my career just before any of this needed saying. But then the promise began to emerge of work that need not feel like work. Companies vied to lay on the most ergonomic environments, the kindest mentors, the loosest schedules. A generation of in-demand graduates came to expect not just these material incentives but a sort of credal alignment with their employer’s “values”. The next recession will retard this trend but it is unlikely to kill it. 

All of this is as it should be. I was raised by people who had to toil without any of these perks. I don’t romanticise it as an era of Spartan virtue. Whatever companies do to nudge their staff up Maslow’s hierarchy of needs is to be saluted. 


 It is just that the kindest service we can do for the young is manage their expectations. Work can be made a lot better than it might otherwise be. It cannot be made to be something other than work. The idea is taking hold, I sense, that it is odd to do something that is not exactly what you would wish to be doing at a particular moment. But this is the lot of even the most “creative” worker, the most self-governing entrepreneur. Very few professional tasks are so absorbing as to be one’s first-choice pursuit in circumstances of total freedom. 

A personal ambition is to reach the end of my career without having managed a single person. Friends who have been less lucky, who have whole teams under their watch, report a quirk among their younger charges. It is not laziness or obstreperousness or those other millennial slanders. It is an air of disappointment with the reality of working life. They will be among the people described in Bullshit Jobs by the anthropologist David Graeber. They will not be among the mere 18 per cent who told YouGov in 2015 that work was “very fulfilling”. As much as the fogey in me blames their entitlement, they were promised more than was plausible by company brochures and a culture that pretends an office can feel like something else. 

Companies are only able to soften the experience of employment so much. What they cannot finesse out of existence is the crux: the surrender of time for money that you would ideally fill with something else. The perk to really haggle for, then, is not in-work comfort but the maximisation of paid leave. 

Twenty years have passed since Office Space, and the cult film remains the acutest satire of alienating employment. In the central scene, workers do to an eternally malfunctioning printer more or less what liberated Iraqis did to statues of Saddam Hussein. 

It has one dud note, though, and it comes at the end, when the main character quits his office cubicle for life as a construction worker. The message is that manual labour does not have its own kind of soul-sucking boredom and pressure. It takes a cocooned sort to believe this kind of thing, but lots of people believe it of careers other than their own. The simplest jobs and the most cerebral are both heroised. But the defining thing about work is not its exact content. It is the fact that you have to do it. Look around at the faces in the departure lounge. In a stratified labour force, a rare unifier is dread of the cruellest month.