Showing posts with label truth.

Saturday 27 March 2021

Aagamee Manushya Party / Human Future Party

 We the members believe: 

  1. Human knowledge and understanding are limited. We believe in a sceptical examination of all philosophies, knowledge systems and their methods.
  2. Life on planet earth appears to be on a downward spiral, and all attempts should be made to prevent the extinction of the human race and its environment.
  3. Achievement of political power is crucial to achieving our objectives and all methods are fair.
  4. Land, labour, money, risk… are fictitious concepts and we will aim to search for better fictions to prevent the extinction of the human race and its environment.

 The above principles will be used to guide our approach to any issue.

 Membership:

Anybody can become a member of the party by affirming the above four values and paying the requisite joining fee and annual membership charges.

 Anybody can leave the party by submitting their resignation to the appropriate authority in the party with six months' notice.

 The party will evolve disciplinary policies after ascertaining that a member has violated its founding values.

 Governance:

 The party will have a Chairperson, a General Secretary and a Treasurer as a leadership troika. The troika will take decisions to achieve the party's values. Each officer will have one vote on all operational issues, and decisions can be made by a majority vote. Pursuing a consensus should always be the initial approach.

 The values of the party may be amended with a 75% majority of the general membership.

 The leadership troika will have a term of three years. Elections will be held for each post every three years.

 The party may be dissolved with an 80% vote of the general membership.

 


Application form to join Aagamee Manushya Party / Human Future Party

 

 

I:                                                                                        

residing at:

 

 

hereby affirm:

 

  1. Human knowledge and understanding are limited. We believe in a sceptical examination of all philosophies, knowledge systems and their methods.
  2. Life on planet earth appears to be on a downward spiral, and all attempts should be made to prevent the extinction of the human race and its environment.
  3. Achievement of political power is crucial to achieving our objectives and all methods are fair.
  4. Land, labour, money, risk… are fictitious concepts and we will aim to search for better fictions to prevent the extinction of the human race and its environment.

 

I wish to join the Aagamee Manushya Party / Human Future Party and promise to work diligently to propagate its values and beliefs.

 

I enclose the amount                                                              towards membership and annual subscription charges.

 

 

 

 

Signature

Thursday 18 February 2021

Why economists kept getting the policies wrong

 Philip Stephens in The FT


The other week I caught sight of a headline declaring that the IMF was warning against cuts in public spending and borrowing. The report stopped me in my tracks. After half a century or so as keeper of the sacred flame of fiscal prudence, the IMF was telling policymakers in rich industrial nations they should not fret overmuch about huge build-ups of public debt during the Covid-19 crisis. John Maynard Keynes had been disinterred, and the world turned upside down. 

To be clear, there is nothing irresponsible about the IMF’s advice that policymakers in advanced economies should prioritise a restoration of growth after the deflationary shock of the pandemic. The fund signalled the shift last year, and most people would say it was common sense to allow economic recovery to take hold. Nations such as Britain might have learned that lesson from the damage inflicted by the ill-judged austerity programme imposed by David Cameron’s government after the 2008 financial crash. 

And yet. This was the IMF speaking — the hallowed (for some, hated) institution that, as many Brits will recall, formally read the last rites over Keynesianism when in 1976 it forced James Callaghan’s Labour government to impose politically calamitous cuts in spending and borrowing. This is the organisation that in the intervening years had a few simple answers to any economic problem you care to think of: fiscal retrenchment, a smaller state and/or market liberalisation. The advice was heralded as the Washington consensus because of the IMF’s location.  

My first job after joining the Financial Times during the early 1980s was to learn the language of the new economic orthodoxy. Kindly officials at the UK Treasury explained to me that the technique of using fiscal policy to manage demand, put to rest in 1976, had been replaced by a new theory. Monetarism decreed that as long as the authorities kept control of the money supply, and thus inflation, everything would be fine. 

The snag was that every time the Treasury alighted on a particular measure of the money supply to target — sterling M3, PSL2 and M0 come to mind — it ceased to be a reliable guide to price changes. Goodhart’s law, this was called, after the economist Charles Goodhart. By the end of the 1980s, monetarism had been ditched, and targeting the exchange rate had become the holy grail. If sterling’s rate was fixed against the Deutschmark, the UK would import stability from Germany.  

It was about this time that a senior aide to the chancellor took me to one side to explain that one of the great skills of the Treasury was to perform perfect U-turns while persuading the world it had deviated not a jot from previous policy. This proved its worth again when the exchange rate policy was blown up by sterling’s ejection from the European exchange rate mechanism in 1992. The currency was quickly replaced by an inflation target as an infallible lodestar of policy. 

The eternal truths amid the missteps and swerves were that public spending and borrowing were bad, tax cuts were good, and market liberalisation was the route to sunlit uplands. The pound’s ERM debacle was followed by a ferocious budgetary squeeze, and, across the channel, the eurozone was designed to fit a fiscal straitjacket. Financial market deregulation, we were told, oiled the wheels of globalisation. If madcap profits and bonuses at big financial institutions prompted unease, the answer was that markets would self-correct. Britain’s Labour government backed “light-touch” regulation in the 2000s. The Bank of England reduced its oversight of systemic financial stability. 

The abiding sin threaded through it all was that of certitude. Perfectly plausible but untested theories, whether about the money supply, fiscal balances and debt levels, or market risk, were elevated to the level of irrefutable facts. Economics, essentially a faith-based discipline, represented itself as a hard science. The real world was reduced by the 1990s to a set of complex mathematical equations that no one, least of all democratically elected politicians, dared challenge. 

Thus detached from reality, economic policy swept away the postwar balance between the interests of society and markets. Arid econometrics replaced a measured understanding of political economy. It scarcely mattered that the gains of globalisation were scooped up by the super-rich, that markets became casinos and that fiscal fundamentalism was widening social divisions. Nothing counted above the equations. And now? After Donald Trump, Brexit and Covid-19, it seems we are back at the beginning. Time to dust off Keynes’s general theory.

Thursday 5 September 2019

The race to create a perfect lie detector – and the dangers of succeeding

Amit Katwala in The Guardian


We learn to lie as children, between the ages of two and five. By adulthood, we are prolific. We lie to our employers, our partners and, most of all, one study has found, to our mothers. The average person hears up to 200 lies a day, according to research by Jerry Jellison, a psychologist at the University of Southern California. The majority of the lies we tell are “white”, the inconsequential niceties – “I love your dress!” – that grease the wheels of human interaction. But most people tell one or two “big” lies a day, says Richard Wiseman, a psychologist at the University of Hertfordshire. We lie to promote ourselves, protect ourselves and to hurt or avoid hurting others. 

The mystery is how we keep getting away with it. Our bodies expose us in every way. Hearts race, sweat drips and micro-expressions leak from small muscles in the face. We stutter, stall and make Freudian slips. “No mortal can keep a secret,” wrote the psychoanalyst Sigmund Freud in 1905. “If his lips are silent, he chatters with his fingertips. Betrayal oozes out of him at every pore.”

Even so, we are hopeless at spotting deception. On average, across 206 scientific studies, people can separate truth from lies just 54% of the time – only marginally better than tossing a coin. “People are bad at it because the differences between truth-tellers and liars are typically small and unreliable,” said Aldert Vrij, a psychologist at the University of Portsmouth who has spent years studying ways to detect deception. Some people stiffen and freeze when put on the spot, others become more animated. Liars can spin yarns packed with colour and detail, and truth-tellers can seem vague and evasive.

Humans have been trying to overcome this problem for millennia. The search for a perfect lie detector has involved torture, trials by ordeal and, in ancient India, an encounter with a donkey in a dark room. Three thousand years ago in China, the accused were forced to chew and spit out rice; the grains were thought to stick in the dry, nervous mouths of the guilty. In 1730, the English writer Daniel Defoe suggested taking the pulse of suspected pickpockets. “Guilt carries fear always about with it,” he wrote. “There is a tremor in the blood of a thief.” More recently, lie detection has largely been equated with the juddering styluses of the polygraph machine – the quintessential lie detector beloved by daytime television hosts and police procedurals. But none of these methods has yielded a reliable way to separate fiction from fact.

That could soon change. In the past couple of decades, the rise of cheap computing power, brain-scanning technologies and artificial intelligence has given birth to what many claim is a powerful new generation of lie-detection tools. Startups, racing to commercialise these developments, want us to believe that a virtually infallible lie detector is just around the corner.

Their inventions are being snapped up by police forces, state agencies and nations desperate to secure themselves against foreign threats. They are also being used by employers, insurance companies and welfare officers. “We’ve seen an increase in interest from both the private sector and within government,” said Todd Mickelsen, the CEO of Converus, which makes a lie detector based on eye movements and subtle changes in pupil size.

Converus’s technology, EyeDetect, has been used by FedEx in Panama and Uber in Mexico to screen out drivers with criminal histories, and by the credit ratings agency Experian, which tests its staff in Colombia to make sure they aren’t manipulating the company’s database to secure loans for family members. In the UK, Northumbria police are carrying out a pilot scheme that uses EyeDetect to measure the rehabilitation of sex offenders. Other EyeDetect customers include the government of Afghanistan, McDonald’s and dozens of local police departments in the US. Soon, large-scale lie-detection programmes could be coming to the borders of the US and the European Union, where they would flag potentially deceptive travellers for further questioning.

But as tools such as EyeDetect infiltrate more and more areas of public and private life, there are urgent questions to be answered about their scientific validity and ethical use. In our age of high surveillance and anxieties about all-powerful AIs, the idea that a machine could read our most personal thoughts feels more plausible than ever to us as individuals, and to the governments and corporations funding the new wave of lie-detection research. But what if states and employers come to believe in the power of a lie-detection technology that proves to be deeply biased – or that doesn’t actually work?

And what do we do with these technologies if they do succeed? A machine that reliably sorts truth from falsehood could have profound implications for human conduct. The creators of these tools argue that by weeding out deception they can create a fairer, safer world. But the ways lie detectors have been used in the past suggests such claims may be far too optimistic.

For most of us, most of the time, lying is more taxing and more stressful than honesty. To calculate another person’s view, suppress emotions and hold back from blurting out the truth requires more thought and more energy than simply being honest. It demands that we bear what psychologists call a cognitive load. Carrying that burden, most lie-detection theories assume, leaves evidence in our bodies and actions.

Lie-detection technologies tend to examine five different types of evidence. The first two are verbal: the things we say and the way we say them. Jeff Hancock, an expert on digital communication at Stanford, has found that people who are lying in their online dating profiles tend to use the words “I”, “me” and “my” more often, for instance. Voice-stress analysis, which aims to detect deception based on changes in tone of voice, was used during the interrogation of George Zimmerman, who shot the teenager Trayvon Martin in 2012, and by UK councils between 2007 and 2010 in a pilot scheme that tried to catch benefit cheats over the phone. Only five of the 23 local authorities where voice analysis was trialled judged it a success, but in 2014, it was still in use in 20 councils, according to freedom of information requests by the campaign group False Economy.

The third source of evidence – body language – can also reveal hidden feelings. Some liars display so-called “duper’s delight”, a fleeting expression of glee that crosses the face when they think they have got away with it. Cognitive load makes people move differently, and liars trying to “act natural” can end up doing the opposite. In an experiment in 2015, researchers at the University of Cambridge were able to detect deception more than 70% of the time by using a skintight suit to measure how much subjects fidgeted and froze under questioning.

The fourth type of evidence is physiological. The polygraph measures blood pressure, breathing rate and sweat. Penile plethysmography tests arousal levels in sex offenders by measuring the engorgement of the penis using a special cuff. Infrared cameras analyse facial temperature. Unlike Pinocchio, our noses may actually shrink slightly when we lie as warm blood flows towards the brain.

In the 1990s, new technologies opened up a fifth, ostensibly more direct avenue of investigation: the brain. In the second season of the Netflix documentary Making a Murderer, Steven Avery, who is serving a life sentence for a brutal killing he says he did not commit, undergoes a “brain fingerprinting” exam, which uses an electrode-studded headset called an electroencephalogram, or EEG, to read his neural activity and translate it into waves rising and falling on a graph. The test’s inventor, Dr Larry Farwell, claims it can detect knowledge of a crime hidden in a suspect’s brain by picking up a neural response to phrases or pictures relating to the crime that only the perpetrator and investigators would recognise. Another EEG-based test was used in 2008 to convict a 24-year-old Indian woman named Aditi Sharma of murdering her fiance by lacing his food with arsenic, but Sharma’s sentence was eventually overturned on appeal when the Indian supreme court held that the test could violate the subject’s rights against self-incrimination.

After 9/11, the US government – long an enthusiastic sponsor of deception science – started funding other kinds of brain-based lie-detection work through Darpa, the Defence Advanced Research Projects Agency. By 2006, two companies – Cephos and No Lie MRI – were offering lie detection based on functional magnetic resonance imaging, or fMRI. Using powerful magnets, these tools track the flow of blood to areas of the brain involved in social calculation, memory recall and impulse control.

But just because a lie-detection tool seems technologically sophisticated doesn’t mean it works. “It’s quite simple to beat these tests in ways that are very difficult to detect by a potential investigator,” said Dr Giorgio Ganis, who studies EEG and fMRI-based lie detection at the University of Plymouth. In 2007, a research group set up by the MacArthur Foundation examined fMRI-based deception tests. “After looking at the literature, we concluded that we have no idea whether fMRI can or cannot detect lies,” said Anthony Wagner, a Stanford psychologist and a member of the MacArthur group, who has testified against the admissibility of fMRI lie detection in court.

A new frontier in lie detection is now emerging. An increasing number of projects are using AI to combine multiple sources of evidence into a single measure for deception. Machine learning is accelerating deception research by spotting previously unseen patterns in reams of data. Scientists at the University of Maryland, for example, have developed software that they claim can detect deception from courtroom footage with 88% accuracy.

The algorithms behind such tools are designed to improve continuously over time, and may ultimately end up basing their determinations of guilt and innocence on factors that even the humans who have programmed them don’t understand. These tests are being trialled in job interviews, at border crossings and in police interviews, but as they become increasingly widespread, civil rights groups and scientists are growing more and more concerned about the dangers they could unleash on society.

Nothing provides a clearer warning about the threats of the new generation of lie-detection than the history of the polygraph, the world’s best-known and most widely used deception test. Although almost a century old, the machine still dominates both the public perception of lie detection and the testing market, with millions of polygraph tests conducted every year. Ever since its creation, it has been attacked for its questionable accuracy, and for the way it has been used as a tool of coercion. But the polygraph’s flawed science continues to cast a shadow over lie detection technologies today.

Even John Larson, the inventor of the polygraph, came to hate his creation. In 1921, Larson was a 29-year-old rookie police officer working the downtown beat in Berkeley, California. But he had also studied physiology and criminology and, when not on patrol, he was in a lab at the University of California, developing ways to bring science to bear in the fight against crime.

In the spring of 1921, Larson built an ugly device that took continuous measurements of blood pressure and breathing rate, and scratched the results on to a rolling paper cylinder. He then devised an interview-based exam that compared a subject’s physiological response when answering yes or no questions relating to a crime with the subject’s answers to control questions such as “Is your name Jane Doe?” As a proof of concept, he used the test to solve a theft at a women’s dormitory.

 
John Larson (right), the inventor of the polygraph lie detector. Photograph: Pictorial Parade/Getty Images

Larson refined his invention over several years with the help of an enterprising young man named Leonarde Keeler, who envisioned applications for the polygraph well beyond law enforcement. After the Wall Street crash of 1929, Keeler offered a version of the machine that was concealed inside an elegant walnut box to large organisations so they could screen employees suspected of theft.

Not long after, the US government became the world’s largest user of the exam. During the “red scare” of the 1950s, thousands of federal employees were subjected to polygraphs designed to root out communists. The US Army, which set up its first polygraph school in 1951, still trains examiners for all the intelligence agencies at the National Center for Credibility Assessment at Fort Jackson in South Carolina.

Companies also embraced the technology. Throughout much of the last century, about a quarter of US corporations ran polygraph exams on employees to test for issues including histories of drug use and theft. McDonald’s used to use the machine on its workers. By the 1980s, there were up to 10,000 trained polygraph examiners in the US, conducting 2m tests a year.

The only problem was that the polygraph did not work. In 2003, the US National Academy of Sciences published a damning report that found evidence on the polygraph’s accuracy across 57 studies was “far from satisfactory”. History is littered with examples of known criminals who evaded detection by cheating the test. Aldrich Ames, a KGB double agent, passed two polygraphs while working for the CIA in the late 1980s and early 90s. With a little training, it is relatively easy to beat the machine. Floyd “Buzz” Fay, who was falsely convicted of murder in 1979 after a failed polygraph exam, became an expert in the test during his two and a half years in prison, and started coaching other inmates on how to defeat it. After 15 minutes of instruction, 23 of 27 were able to pass. Common “countermeasures”, which work by exaggerating the body’s response to control questions, include thinking about a frightening experience, stepping on a pin hidden in the shoe, or simply clenching the anus.

The upshot is that the polygraph is not and never was an effective lie detector. There is no way for an examiner to know whether a rise in blood pressure is due to fear of getting caught in a lie, or anxiety about being wrongly accused. Different examiners rating the same charts can get contradictory results and there are huge discrepancies in outcome depending on location, race and gender. In one extreme example, an examiner in Washington state failed one in 20 law enforcement job applicants for having sex with animals; he “uncovered” 10 times more bestiality than his colleagues, and twice as much child pornography.

As long ago as 1965, the year Larson died, the US Committee on Government Operations issued a damning verdict on the polygraph. “People have been deceived by a myth that a metal box in the hands of an investigator can detect truth or falsehood,” it concluded. By then, civil rights groups were arguing that the polygraph violated constitutional protections against self-incrimination. In fact, despite the polygraph’s cultural status, in the US, its results are inadmissible in most courts. And in 1988, citing concerns that the polygraph was open to “misuse and abuse”, the US Congress banned its use by employers. Other lie-detectors from the second half of the 20th century fared no better: abandoned Department of Defense projects included the “wiggle chair”, which covertly tracked movement and body temperature during interrogation, and an elaborate system for measuring breathing rate by aiming an infrared laser at the lip through a hole in the wall.

The polygraph remained popular though – not because it was effective, but because people thought it was. “The people who developed the polygraph machine knew that the real power of it was in convincing people that it works,” said Dr Andy Balmer, a sociologist at the University of Manchester who wrote a book called Lie Detection and the Law.

The threat of being outed by the machine was enough to coerce some people into confessions. One examiner in Cincinnati in 1975 left the interrogation room and reportedly watched, bemused, through a two-way mirror as the accused tore 1.8 metres of paper charts off the machine and ate them. (You didn’t even have to have the right machine: in the 1980s, police officers in Detroit extracted confessions by placing a suspect’s hand on a photocopier that spat out sheets of paper with the phrase “He’s Lying!” pre-printed on them.) This was particularly attractive to law enforcement in the US, where it is vastly cheaper to use a machine to get a confession out of someone than it is to take them to trial.

But other people were pushed to admit to crimes they did not commit after the machine wrongly labelled them as lying. The polygraph became a form of psychological torture that wrung false confessions from the vulnerable. Many of these people were then charged, prosecuted and sent to jail – whether by unscrupulous police and prosecutors, or by those who wrongly believed in the polygraph’s power.

Perhaps no one came to understand the coercive potential of his machine better than Larson. Shortly before his death in 1965, he wrote: “Beyond my expectation, through uncontrollable factors, this scientific investigation became for practical purposes a Frankenstein’s monster.”

The search for a truly effective lie detector gained new urgency after the terrorist attacks of 11 September 2001. Several of the hijackers had managed to enter the US after successfully deceiving border agents. Suddenly, intelligence and border services wanted tools that actually worked. A flood of new government funding made lie detection big business again. “Everything changed after 9/11,” writes psychologist Paul Ekman in Telling Lies.

Ekman was one of the beneficiaries of this surge. In the 1970s, he had been filming interviews with psychiatric patients when he noticed a brief flash of despair cross the features of Mary, a 42-year-old suicidal woman, when she lied about feeling better. He spent the next few decades cataloguing how these tiny movements of the face, which he termed “micro-expressions”, can reveal hidden truths.

Ekman’s work was hugely influential with psychologists, and even served as the basis for Lie to Me, a primetime television show that debuted in 2009 with an Ekman-inspired lead played by Tim Roth. But it got its first real-world test in 2006, as part of a raft of new security measures introduced to combat terrorism. That year, Ekman spent a month teaching US immigration officers how to detect deception at passport control by looking for certain micro-expressions. The results are instructive: at least 16 terrorists were permitted to enter the US in the following six years.

Investment in lie-detection technology “goes in waves”, said Dr John Kircher, a University of Utah psychologist who developed a digital scoring system for the polygraph. There were spikes in the early 1980s, the mid-90s and the early 2000s, neatly tracking with Republican administrations and foreign wars. In 2008, under President George W Bush, the US Army spent $700,000 on 94 handheld lie detectors for use in Iraq and Afghanistan. The Preliminary Credibility Assessment Screening System had three sensors that attached to the hand, connected to an off-the-shelf pager which flashed green for truth, red for lies and yellow if it couldn’t decide. It was about as good as a photocopier at detecting deception – and at eliciting the truth.

Some people believe an accurate lie detector would have allowed border patrol to stop the 9/11 hijackers. “These people were already on watch lists,” Larry Farwell, the inventor of brain fingerprinting, told me. “Brain fingerprinting could have provided the evidence we needed to bring the perpetrators to justice before they actually committed the crime.” A similar logic has been applied in the case of European terrorists who returned from receiving training abroad.

As a result, the frontline for much of the new government-funded lie detection technology has been the borders of the US and Europe. In 2014, travellers flying into Bucharest were interrogated by a virtual border agent called Avatar, an on-screen figure in a white shirt with blue eyes, which introduced itself as “the future of passport control”. As well as an e-passport scanner and fingerprint reader, the Avatar unit has a microphone, an infra-red eye-tracking camera and an Xbox Kinect sensor to measure body movement. It is one of the first “multi-modal” lie detectors – one that incorporates a number of different sources of evidence – since the polygraph.

But the “secret sauce”, according to David Mackstaller, who is taking the technology in Avatar to market via a company called Discern Science, is in the software, which uses an algorithm to combine all of these types of data. The machine aims to send a verdict to a human border guard within 45 seconds, who can either wave the traveller through or pull them aside for additional screening. Mackstaller said he is in talks with governments – he wouldn’t say which ones – about installing Avatar permanently after further tests at Nogales in Arizona on the US-Mexico border, and with federal employees at Reagan Airport near Washington DC. Discern Science claims accuracy rates in their preliminary studies – including the one in Bucharest – have been between 83% and 85%.

The Bucharest trials were supported by Frontex, the EU border agency, which is now funding a competing system called iBorderCtrl, with its own virtual border guard. One aspect of iBorderCtrl is based on Silent Talker, a technology that has been in development at Manchester Metropolitan University since the early 2000s. Silent Talker uses an AI model to analyse more than 40 types of microgestures in the face and head; it only needs a camera and an internet connection to function. On a recent visit to the company’s office in central Manchester, I watched video footage of a young man lying about taking money from a box during a mock crime experiment, while in the corner of the screen a dial swung from green, to yellow, to red. In theory, it could be run on a smartphone or used on live television footage, perhaps even during political debates, although co-founder James O’Shea said the company doesn’t want to go down that route – it is targeting law enforcement and insurance.

O’Shea and his colleague Zuhair Bandar claim Silent Talker has an accuracy rate of 75% in studies so far. “We don’t know how it works,” O’Shea said. They stressed the importance of keeping a “human in the loop” when it comes to making decisions based on Silent Talker’s results.

Mackstaller said Avatar’s results will improve as its algorithm learns. He also expects it to perform better in the real world because the penalties for getting caught are much higher, so liars are under more stress. But research shows that the opposite may be true: lab studies tend to overestimate real-world success.

Before these tools are rolled out at scale, clearer evidence is required that they work across different cultures, or with groups of people such as psychopaths, whose non-verbal behaviour may differ from the norm. Much of the research so far has been conducted on white Europeans and Americans. Evidence from other domains, including bail and prison sentencing, suggests that algorithms tend to encode the biases of the societies in which they are created. These effects could be heightened at the border, where some of society’s greatest fears and prejudices play out. What’s more, the black box of an AI model is not conducive to transparent decision making since it cannot explain its reasoning. “We don’t know how it works,” O’Shea said. “The AI system learned how to do it by itself.”

Andy Balmer, the University of Manchester sociologist, fears that technology will be used to reinforce existing biases with a veneer of questionable science – making it harder for individuals from vulnerable groups to challenge decisions. “Most reputable science is clear that lie detection doesn’t work, and yet it persists as a field of study where other things probably would have been abandoned by now,” he said. “That tells us something about what we want from it.”

The truth has only one face, wrote the 16th-century French philosopher Michel de Montaigne, but a lie “has a hundred thousand shapes and no defined limits”. Deception is not a singular phenomenon and, as yet, we know of no telltale sign of deception that holds true for everyone, in every situation. There is no Pinocchio’s nose. “That’s seen as the holy grail of lie detection,” said Dr Sophie van der Zee, a legal psychologist at Erasmus University in Rotterdam. “So far no one has found it.”

The accuracy rates of 80-90% claimed by the likes of EyeDetect and Avatar sound impressive, but applied at the scale of a border crossing, they would lead to thousands of innocent people being wrongly flagged for every genuine threat they identified. They might also let two out of every 10 terrorists slip through easily.
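The base-rate problem behind that claim can be made concrete with some hypothetical numbers (the traveller and threat counts below are illustrative assumptions, not figures from the article):

```python
# Hypothetical illustration of the base-rate problem with a "90% accurate"
# screening tool. All input numbers are assumptions for the sake of example.
travellers = 1_000_000   # border crossings screened (assumed)
threats = 100            # genuine threats among them (assumed, deliberately rare)
accuracy = 0.90          # claimed accuracy, applied to both honest and deceptive travellers

false_alarms = (travellers - threats) * (1 - accuracy)  # innocent people flagged
caught = threats * accuracy                              # genuine threats detected
missed = threats * (1 - accuracy)                        # genuine threats waved through

print(f"innocent people flagged: {false_alarms:,.0f}")           # 99,990
print(f"threats missed: {missed:.0f}")                           # 10
print(f"false alarms per threat caught: {false_alarms / caught:,.0f}")  # 1,111
```

Because genuine threats are vanishingly rare relative to ordinary travellers, even a 90% hit rate produces over a thousand false alarms for every real threat caught, while still missing one threat in ten.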

History suggests that such shortcomings will not stop these new tools from being used. After all, the polygraph has been widely debunked, but an estimated 2.5m polygraph exams are still conducted in the US every year. It is a $2.5bn industry. In the UK, the polygraph has been used on sex offenders since 2014, and in January 2019, the government announced plans to use it on domestic abusers on parole. The test “cannot be killed by science because it was not born of science”, writes the historian Ken Alder in his book The Lie Detectors.

New technologies may be harder than the polygraph for unscrupulous examiners to deliberately manipulate, but that does not mean they will be fair. AI-powered lie detectors prey on the tendency of both individuals and governments to put faith in science’s supposedly all-seeing eye. And the closer they get to perfect reliability, or at least the closer they appear to get, the more dangerous they will become, because lie detectors often get aimed at society’s most vulnerable: women in the 1920s, suspected dissidents and homosexuals in the 60s, benefit claimants in the 2000s, asylum seekers and migrants today. “Scientists don’t think much about who is going to use these methods,” said Giorgio Ganis. “I always feel that people should be aware of the implications.”

In an era of fake news and falsehoods, it can be tempting to look for certainty in science. But lie detectors tend to surface at “pressure-cooker points” in politics, when governments lower their requirements for scientific rigour, said Balmer. In this environment, dubious new techniques could “slip neatly into the role the polygraph once played”, Alder predicts.

One day, improvements in artificial intelligence could find a reliable pattern for deception by scouring multiple sources of evidence, or more detailed scanning technologies could discover an unambiguous sign lurking in the brain. In the real world, however, practised falsehoods – the stories we tell ourselves about ourselves, the lies that form the core of our identity – complicate matters. “We have this tremendous capacity to believe our own lies,” Dan Ariely, a renowned behavioural psychologist at Duke University, said. “And once we believe our own lies, of course we don’t provide any signal of wrongdoing.” 

In his 1995 science-fiction novel The Truth Machine, James Halperin imagined a world in which someone succeeds in building a perfect lie detector. The invention helps unite the warring nations of the globe into a world government, and accelerates the search for a cancer cure. But evidence from the last hundred years suggests that it probably wouldn’t play out like that in real life. Politicians are hardly queueing up to use new technology on themselves. Terry Mullins, a long-time private polygraph examiner – one of about 30 in the UK – has been trying in vain to get police forces and government departments interested in the EyeDetect technology. “You can’t get the government on board,” he said. “I think they’re all terrified.”

Daniel Langleben, the scientist behind No Lie MRI, told me one of the government agencies he was approached by was not really interested in the accuracy rates of his brain-based lie detector. An fMRI machine cannot be packed into a suitcase or brought into a police interrogation room. The investigator cannot manipulate the test results to apply pressure to an uncooperative suspect. The agency just wanted to know whether it could be used to train agents to beat the polygraph.

“Truth is not really a commodity,” Langleben reflected. “Nobody wants it.”

Saturday 4 May 2019

Najam Sethi on Pakistan Military's Truths

 Najam Sethi in The Friday Times




The world according to Al Bakistan


In a wide ranging and far reaching “briefing”, Maj-Gen Asif Ghafoor, DG-ISPR, has laid down the grundnorm of state realism. But consider.

He says there is no organized terrorist infrastructure in Pakistan. True, the military has knocked out Al Qaeda/Tehrik-e-Taliban Pakistan and degraded the Lashkar-e-Jhangvi. But a question mark still hangs over the fate of our “freedom fighter” jihadi organisations which are deemed to be “terrorist” by the international community. That is why Pakistan is struggling to remain off the FATF black list. The Maj-Gen says Pakistan has paid a huge price in the martyrdom of 81,000 citizens in the war against terror. True, but the world couldn’t care less: these homegrown terrorists were the outcome of our own misguided policies. He says that “radicalization” took root in Pakistan due to the Afghan jihad. True, but we were more than willing partners in that project. He says that terrorism came to Pakistan after the international community intervened in Afghanistan. True, but we provided safe haven to the Taliban for nearly twenty years and allowed them to germinate in our womb. He says it was decided last January to “mainstream” proscribed organisations. True, but why did it take us twenty years to tackle such a troubling problem when we were not busy in “kinetic operations”?

Maj-Gen Ghafoor says madrassahs will be mainstreamed under the Education Ministry. A noble thought. However, far from being mainstreamed, the madrassahs have so far refused to even get themselves properly registered as per the National Action Plan. Now the Punjab government and religious parties have refused to comply. Indeed, the Khyber-Pakhtunkhwa government is actively funding some big ones which have provided the backbone of the terrorists.

But it is Maj-Gen Ghafoor’s briefing on the Pashtun Tahaffuz Movement (PTM) that has generated the most controversy. He says the military has responded positively to its demand to de-mine FATA and reduce check posts but is constrained by the lack of civil administration in the area and the resurfacing of terrorists from across the border. Fair enough. But most of the “disappeared” are still “disappeared” and extra-judicial killings, like that of Naqeebullah Mehsud, are not being investigated. He wants to know why the PTM asked the Afghan government not to hand over the body of Dawar to the Pakistan government. He has accused PTM of receiving funds from hostile intel agencies. If that is proven it would be a damning indictment of PTM.

The PTM has responded by accusing the military of being unaccountable and repressive, a view that is echoed by many rights groups, media and political parties across the country.

In response, Major General Ghafoor has threatened: “Time is up”. Presumably, the military wants to detain and charge some PTM leaders as “traitors”. That would be most inadvisable. It will only serve to swell the PTM ranks. It may even precipitate an armed resistance, given the propensity of foreign intel agencies to fish in troubled waters. We also know how the various “traitors” in Pakistani history have ended up acquiring heroic proportions while “state realism” dictated otherwise. The list is long and impressive: Fatima Jinnah, Hussein Shaheed Suharwardi, Mujeebur Rahman, G.M. Syed, Khan Abdul Wali Khan, Khair Bux Marri, Ataullah Mengal, Akbar Bugti, etc. etc. We also know the fate of “banned” organisations – they simply reappear under another name.

The PTM has arisen because of the trials and tribulations of the tribal areas in the last decade of terrorism. The Pashtun populace has been caught in the crossfire of insurgency and counter-insurgency. The insurgents were once state assets with whom the populace was expected to cooperate. But when these “assets” became “liabilities”, those who didn’t cooperate with the one were targeted by the other, and many suffered at the hands of both. In consequence, from racial profiling to disappearances, a whole generation of tribal Pashtuns is scarred by state policies. The PTM is voicing that protest. If neighbouring foreign intel agencies are exploiting their sentiments, it is to be expected as a “realistic” quid pro quo for what Pakistani intel agencies have been serving Pakistan’s neighbours in the past.

If the Pakistani Miltablishment has been compelled by the force of new circumstances to undo its own old misguided policies, it should at least recognize the legitimate grievances of those who have paid the price of its miscalculations and apply balm to their wounds. Every other household in FATA is adversely affected one way or the other by the “war against terrorism”. The PTM is their voice. It needs to be heard. The media should be allowed to cross-examine it. In turn, the PTM must be wary of being tainted by the “foreign hand” and stop abusing the army.

The civilian government and opposition in parliament should sit down with the leaders of the PTM and find an honourable and equitable way to address mutually legitimate and “realistic” concerns. The military’s self-righteous, authoritarian tone must give way to a caring and sympathetic approach. Time’s not up. It has just arrived.

Thursday 29 November 2018

Why we stopped trusting elites

The credibility of establishment figures has been demolished by technological change and political upheavals. But it’s too late to turn back the clock. By William Davies in The Guardian

For hundreds of years, modern societies have depended on something that is so ubiquitous, so ordinary, that we scarcely ever stop to notice it: trust. The fact that millions of people are able to believe the same things about reality is a remarkable achievement, but one that is more fragile than is often recognised.

At times when public institutions – including the media, government departments and professions – command widespread trust, we rarely question how they achieve this. And yet at the heart of successful liberal democracies lies a remarkable collective leap of faith: that when public officials, reporters, experts and politicians share a piece of information, they are presumed to be doing so in an honest fashion. 


The notion that public figures and professionals are basically trustworthy has been integral to the health of representative democracies. After all, the very core of liberal democracy is the idea that a small group of people – politicians – can represent millions of others. If this system is to work, there must be a basic modicum of trust that the small group will act on behalf of the much larger one, at least some of the time. As the past decade has made clear, nothing turns voters against liberalism more rapidly than the appearance of corruption: the suspicion, valid or otherwise, that politicians are exploiting their power for their own private interest.

This isn’t just about politics. In fact, much of what we believe to be true about the world is actually taken on trust, via newspapers, experts, officials and broadcasters. While each of us sometimes witnesses events with our own eyes, there are plenty of apparently reasonable truths that we all accept without seeing. In order to believe that the economy has grown by 1%, or to find out about latest medical advances, we take various things on trust; we don’t automatically doubt the moral character of the researchers or reporters involved.

Much of the time, the edifice that we refer to as “truth” is really an investment of trust. Consider how we come to know the facts about climate change: scientists carefully collect and analyse data, before drafting a paper for anonymous review by other scientists, who assume that the data is authentic. If published, the findings are shared with journalists in press releases, drafted by university press offices. We expect that these findings are then reported honestly and without distortion by broadcasters and newspapers. Civil servants draft ministerial speeches that respond to these facts, including details on what the government has achieved to date.

A modern liberal society is a complex web of trust relations, held together by reports, accounts, records and testimonies. Such systems have always faced political risks and threats. The template of modern expertise can be traced back to the second half of the 17th century, when scientists and merchants first established techniques for recording and sharing facts and figures. These were soon adopted by governments, for purposes of tax collection and rudimentary public finance. But from the start, strict codes of conduct had to be established to ensure that officials and experts were not seeking personal gain or glory (for instance through exaggerating their scientific discoveries), and were bound by strict norms of honesty.

But regardless of how honest parties may be in their dealings with one another, the cultural homogeneity and social intimacy of these gentlemanly networks and clubs has always been grounds for suspicion. Right back to the mid-17th century, the bodies tasked with handling public knowledge have always privileged white male graduates, living in global cities and university towns. This does not discredit the knowledge they produce – but where things get trickier is when that homogeneity starts to appear to be a political identity, with a shared set of political goals. This is what is implied by the concept of “elites”: that purportedly separate domains of power – media, business, politics, law, academia – are acting in unison.

A further threat comes from individuals taking advantage of their authority for personal gain. Systems that rely on trust are always open to abuse by those seeking to exploit them. It is a key feature of modern administrations that they use written documents to verify things – but there will always be scope for records to be manipulated, suppressed or fabricated. There is no escaping that possibility altogether. This applies to many fields: at a certain point, the willingness to trust that a newspaper is honestly reporting what a police officer claims to have been told by a credible witness, for example, relies on a leap of faith.

A trend of declining trust has been underway across the western world for many years, even decades, as copious survey evidence attests. Trust, and its absence, became a preoccupation for policymakers and business leaders during the 1990s and early 2000s. They feared that shrinking trust led to higher rates of crime and less cohesive communities, producing costs that would be picked up by the state.

What nobody foresaw was that, when trust sinks beneath a certain point, many people may come to view the entire spectacle of politics and public life as a sham. This happens not because trust in general declines, but because key public figures – notably politicians and journalists – are perceived as untrustworthy. It is those figures specifically tasked with representing society, either as elected representatives or as professional reporters, who have lost credibility.

To understand the crisis liberal democracy faces today – whether we identify this primarily in terms of “populism” or “post-truth” – it’s not enough to simply bemoan the rising cynicism of the public. We need also to consider some of the reasons why trust has been withdrawn. The infrastructure of fact has been undermined in part by a combination of technology and market forces – but we must seriously reckon with the underlying truth of the populists’ charge against the establishment today. Too often, the rise of insurgent political parties and demagogues is viewed as the source of liberalism’s problems, rather than as a symptom. But by focusing on trust, and the failure of liberal institutions to sustain it, we get a clearer sense of why this is happening now.

The problem today is that, across a number of crucial areas of public life, the basic intuitions of populists have been repeatedly verified. One of the main contributors to this has been the spread of digital technology, creating vast data trails with the latent potential to contradict public statements, and even undermine entire public institutions. Whereas it is impossible to conclusively prove that a politician is morally innocent or that a news report is undistorted, it is far easier to demonstrate the opposite. Scandals, leaks, whistleblowing and revelations of fraud all serve to confirm our worst suspicions. While trust relies on a leap of faith, distrust is supported by ever-mounting piles of evidence. And in Britain, this pile has been expanding much faster than many of us have been prepared to admit.

Confronted by the rise of populist parties and leaders, some commentators have described the crisis facing liberalism in largely economic terms – as a revolt among those “left behind” by inequality and globalisation. Another camp sees it primarily as the expression of cultural anxieties surrounding identity and immigration. There is some truth in both, of course – but neither gets to the heart of the trust crisis that populists exploit so ruthlessly. A crucial reason liberalism is in danger right now is that the basic honesty of mainstream politicians, journalists and senior officials is no longer taken for granted.


There are copious explanations for Trump, Brexit and so on, but insufficient attention to what populists are actually saying, which focuses relentlessly on the idea of self-serving “elites” maintaining a status quo that primarily benefits them. On the right, Nigel Farage has accused individual civil servants of seeking to sabotage Brexit for their own private ends. On the left, Jeremy Corbyn repeatedly refers to Britain’s “rigged” economic system. The promise to crack down on corruption and private lobbying is integral to the pitch made by figures such as Donald Trump, Jair Bolsonaro or Viktor Orbán.

One of the great political riddles of recent years is that declining trust in “elites” is often encouraged and exploited by figures of far more dubious moral character – not to mention far greater wealth – than the technocrats and politicians being ousted. On the face of it, it would seem odd that a sense of “elite” corruption would play into the hands of hucksters and blaggards such as Donald Trump or Arron Banks. But the authority of these figures owes nothing to their moral character, and everything to their perceived willingness to blow the whistle on corrupt “insiders” dominating the state and media.

Liberals – including those who occupy “elite” positions – may comfort themselves with the belief that these charges are ill-founded or exaggerated, or else that the populists offer no solutions to the failures they identify. After all, Trump has not “drained the swamp” of Washington lobbying. But this is to miss the point of how such rhetoric works, which is to chip away at the core faith on which liberalism depends, namely that power is being used in ways that represent the public interest, and that the facts published by the mainstream media are valid representations of reality.

Populists target various centres of power, including dominant political parties, mainstream media, big business and the institutions of the state, including the judiciary. The chilling phrase “enemies of the people” has recently been employed by Donald Trump to describe those broadcasters and newspapers he dislikes (such as CNN and the New York Times), and by the Daily Mail to describe high court judges, following their 2016 ruling that Brexit would require parliamentary consent. But on a deeper level, whether it is the judiciary, the media or the independent civil service that is being attacked is secondary to a more important allegation: that public life in general has become fraudulent.

Nigel Farage campaigning with Donald Trump in 2016. Photograph: Jonathan Bachman/Getty Images

How does this allegation work? One aspect of it is to dispute the very possibility that a judge, reporter or expert might act in a disinterested, objective fashion. For those whose authority depends on separating their public duties from their personal feelings, having their private views or identities publicised serves as an attack on their credibility. But another aspect is to gradually blur the distinctions between different varieties of expertise and authority, with the implication that politicians, journalists, judges, regulators and officials are effectively all working together.

It is easy for rival professions to argue that they have little in common with each other, and are often antagonistic to each other. Ostensibly, these disparate centres of expertise and power hold each other in check in various ways, producing a pluralist system of checks and balances. Twentieth-century defenders of liberalism, such as the American political scientist Robert Dahl, often argued that it didn’t matter how much power was concentrated in the hands of individual authorities, as long as no single political entity was able to monopolise power. The famous liberal ideal of a “separation of powers” (distinguishing executive, legislative and judicial branches of government), so influential in the framing of the US constitution, could persist so long as different domains of society hold one another up to critical scrutiny.

But one thing that these diverse professions and authorities do have in common is that they trade primarily in words and symbols. By lumping together journalists, judges, experts and politicians as a single homogeneous “liberal elite”, it is possible to treat them all as indulging in a babble of jargon, political correctness and, ultimately, lies. Their status as public servants is demolished once their claim to speak honestly is thrown into doubt. One way in which this is done is by bringing their private opinions and tastes before the public, something that social media and email render far easier. Tensions and contradictions between the public face of, say, a BBC reporter, and their private opinions and feelings, are much easier to discover in the age of Twitter.

Whether in the media, politics or academia, liberal professions suffer a vulnerability that a figure such as Trump doesn’t, in that their authority hangs on their claim to speak the truth. A recent sociological paper called The Authentic Appeal of the Lying Demagogue, by US academics Oliver Hahl, Minjae Kim and Ezra Zuckerman Sivan, draws a distinction between two types of lies. The first, “special access lies”, may be better termed “insider lies”. This is dishonesty from those trusted to truthfully report facts, who abuse that trust by failing to state what they privately know to be true. (The authors give the example of Bill Clinton’s infamous claim that he “did not have sexual relations with that woman”.)

The second, which they refer to as “common knowledge lies”, are the kinds of lies told by Donald Trump about the size of his election victory or the crowds at his inauguration, or the Vote Leave campaign’s false claims about sending “£350m a week to the EU”. These lies do not pretend to be bound by the norm of honesty in the first place, and the listener can make up their own mind what to make of them.

What the paper shows is that, where politics comes to be viewed as the domain of “insider” liars, there is a seductive authenticity, even a strange kind of honesty, about the “common knowledge” liar. The rise of highly polished, professional politicians such as Tony Blair and Bill Clinton exacerbated the sense that politics is all about strategic concealment of the truth, something that the Iraq war seemed to confirm as much as anything. Trump or Farage may have a reputation for fabricating things, but they don’t (rightly or wrongly) have a reputation for concealing things, which grants them a form of credibility not available to technocrats or professional politicians.

At the same time, and even more corrosively, when elected representatives come to be viewed as “insider liars”, it turns out that other professions whose job it is to report the truth – journalists, experts, officials – also suffer a slump in trust. Indeed, the distinctions between all these fact-peddlers start to look irrelevant in the eyes of those who’ve given up on the establishment altogether. It is this type of all-encompassing disbelief that creates the opportunity for rightwing populism in particular. Trump voters are more than twice as likely to distrust the media as those who voted for Clinton in 2016, according to the annual Edelman Trust Barometer, which adds that the four countries currently suffering the most “extreme trust losses” are Italy, Brazil, South Africa and the US.

It’s one thing to measure public attitudes, but quite another to understand what shapes them. Alienation and disillusionment develop slowly, and without any single provocation. No doubt economic stagnation and soaring inequality have played a role – but we should not discount the growing significance of scandals that appear to discredit the honesty and objectivity of “liberal elites”. The misbehaviour of elites did not “cause” Brexit, but it is striking, in hindsight, how little attention was paid to the accumulation of scandal and its consequences for trust in the establishment.

The 2010 edition of the annual British Social Attitudes survey included an ominous finding. Trust in politicians, already low, had suffered a fresh slump, with a majority of people saying politicians never tell the truth. But at the same time, interest in politics had mysteriously risen.


To whom would this newly engaged section of the electorate turn if they had lost trust in “politicians”? One answer was clearly Ukip, who experienced their greatest electoral gains in the years that followed, to the point of winning the most seats in the 2014 elections for the European parliament. Ukip’s surge, which initially appeared to threaten the Conservative party, was integral to David Cameron’s decision to hold a referendum on EU membership. One of the decisive (and unexpected) factors in the referendum result was the number of voters who went to the polls for the first time, specifically to vote leave.

What might have prompted the combination of angry disillusionment and intensifying interest that was visible in the 2010 survey? It clearly predated the toughest years of austerity. But there was clearly one event that did more than any other to weaken trust in politicians: the MPs’ expenses scandal, which blew up in May 2009 thanks to a drip-feed of revelations published by the Daily Telegraph.

Following as it did so soon after a disaster of world-historic proportions – the financial crisis – the full significance of the expenses scandal may have been forgotten. But its ramifications were vast. For one thing, it engulfed many of the highest reaches of power in Westminster: the Speaker of the House of Commons, the home secretary, the secretary of state for communities and local government and the chief secretary to the treasury all resigned. Not only that, but the rot appeared to have infected all parties equally, validating the feeling that politicians had more in common with each other (regardless of party loyalties) than they did with decent, ordinary people.

Many of the issues that “elites” deal with are complex, concerning law, regulation and economic analysis. We can all see the fallout of the financial crisis, for instance, but the precise causes are disputed and hard to fathom. By contrast, everybody understands expense claims, and everybody knows lying and exaggerating are among the most basic moral failings; even a child understands they are wrong. This may be unfair to the hundreds of honest MPs and to the dozens whose misdemeanours fell into a murky area around the “spirit” of the rules. But the sense of a mass stitch-up was deeply – and understandably – entrenched.

The other significant thing about the expenses scandal was the way it set a template for a decade of elite scandals – most of which also involved lies, leaks and dishonest denials. One year later, there was another leak from a vast archive of government data: in 2010, WikiLeaks released hundreds of thousands of US military field reports from Iraq and Afghanistan. With the assistance of newspapers including the New York Times, Der Spiegel, the Guardian and Le Monde, these “war logs” disclosed horrifying details about the conduct of US forces and revealed the Pentagon had falsely denied knowledge of various abuses. While some politicians expressed moral revulsion with what had been exposed, the US and British governments blamed WikiLeaks for endangering their troops, and the leaker, Chelsea Manning, was jailed for espionage.

 
Rupert Murdoch on his way to give evidence to the Leveson inquiry in 2012. Photograph: Ben Stansall/AFP/Getty Images

In 2011, the phone-hacking scandal put the press itself under the spotlight. It was revealed that senior figures in News International and the Metropolitan police had long been aware of the extent of phone-hacking practices – and they had lied about how much they knew. Among those implicated was the prime minister’s communications director, former News of the World editor Andy Coulson, who was forced to resign his post and later jailed. By the end of 2011, the News of the World had been closed down, the Leveson inquiry was underway, and the entire Murdoch empire was shaking.

The biggest scandal of 2012 was a different beast altogether, involving unknown men manipulating a number that very few people had even heard of. The number in question, the London interbank offered rate, or Libor, is meant to represent the rate at which banks are willing to loan to each other. What was surreal, in an age of complex derivatives and high-frequency trading algorithms, was that this number was calculated on the basis of estimates declared by each bank on a daily basis, and accepted purely on trust. The revelation that a handful of brokers had conspired to alter Libor for private gain (with possible costs to around 250,000 UK mortgage-holders, among others) may have been difficult to fully comprehend, but it gave the not unreasonable impression of an industry enriching itself in a criminal fashion at the public’s expense. Bob Diamond, the CEO of Barclays, the bank at the centre of the conspiracy, resigned in July 2012.

Towards the end of that year, the media was caught in another prolonged crisis, this time at the BBC. Horror greeted the broadcast of the ITV documentary The Other Side of Jimmy Savile in October 2012. How many people had known about his predatory sexual behaviour, and for how long? Why had the police abandoned earlier investigations? And why had BBC Newsnight dropped its own film about Savile, due to be broadcast shortly after his death in 2011? The police swiftly established Operation Yewtree to investigate historic sexual abuse allegations, while the BBC established independent commissions into what had gone wrong. But a sense lingered that neither the BBC nor the police had really wanted to know the truth of these matters for the previous 40 years.

It wasn’t long before it was the turn of the corporate world. In September 2014, a whistleblower revealed that Tesco had exaggerated its half-yearly profits by £250m, increasing the figure by around a third. An accounting fiddle on this scale clearly had roots at a senior managerial level. Sure enough, four senior executives were suspended the same month and three were charged with fraud two years later. A year later, it emerged that Volkswagen had systematically and deliberately tinkered with emissions controls in their vehicles, so as to dupe regulators in tests, but then pollute liberally the rest of the time. The CEO, Martin Winterkorn, resigned.

“We didn’t really learn anything from WikiLeaks we didn’t already presume to be true,” the philosopher Slavoj Žižek observed in 2014. “But it is one thing to know it in general and another to get concrete data.” The nature of all these scandals suggests the emergence of a new form of “facts”, in the shape of a leaked archive – one that, crucially, does not depend on trusting the secondhand report of a journalist or official. These revelations are powerful and consequential precisely because they appear to directly confirm our fears and suspicions. Resentment towards “liberal elites” would no doubt brew even in the absence of supporting evidence. But when that evidence arises, things become far angrier, even when the data – such as Hillary Clinton’s emails – isn’t actually very shocking.

This is by no means an exhaustive list of the scandals of the past decade, nor are they all of equal significance. But viewing them together provides a better sense of how the suspicions of populists cut through. Whether or not we continue to trust in politicians, journalists or officials, we have grown increasingly used to this pattern in which a curtain is dramatically pulled back, to reveal those who have been lying to or defrauding the public.

Another pattern also begins to emerge. It’s not just that isolated individuals are unmasked as corrupt or self-interested (something that is as old as politics), but that the establishment itself starts to appear deceitful and dubious. The distinctive scandals of the 21st century combine some very basic and timeless moral failings (greed and dishonesty) with technologies that expose malpractice on an unprecedented scale, and with far more dramatic results.

Perhaps the most important feature of all these revelations was that they were definitely scandals, and not merely failures: they involved deliberate efforts to defraud or mislead. Several involved sustained cover-ups, delaying the moment of truth for as long as possible.

Several of the scandals ended with high-profile figures behind bars. Jail terms satisfy some of the public demand that the “elites” pay for their dishonesty, but they don’t repair the trust that has been damaged. On the contrary, there’s a risk that they affirm the cry for retribution, after which the quest for punishment is only ramped up further. Chants of “lock her up” continue to reverberate around Trump rallies.

In addition to their conscious and deliberate nature, a second striking feature of these scandals was the ambiguous role played by the media. On the one hand, the reputation of the media has taken a pummelling over the past decade, egged on by populists and conspiracy theorists who accuse the “mainstream media” of being allied to professional political leaders, and who now have the benefit of social media through which to spread this message.

The moral authority of newspapers may never have been high, but the grisly revelations that journalists hacked the phone of murdered schoolgirl Milly Dowler represented a new low in the public standing of the press. The Leveson inquiry, followed soon after by the Savile revelations and Operation Yewtree, generated a sense of a media class who were adept at exposing others, but equally expert at concealing the truth of their own behaviours.

On the other hand, it was newspapers and broadcasters that enabled all of this to come to light at all. The extent of phone hacking was eventually exposed by the Guardian, the MPs’ expenses by the Telegraph, Jimmy Savile by ITV, and the “war logs” reported with the aid of several newspapers around the world simultaneously.

But the media was playing a different kind of role from the one traditionally played by journalists and newspapers, with very different implications for the status of truth in society. A backlog of data and allegations had built up in secret, until eventually a whistle was blown. An archive existed that the authorities refused to acknowledge, until they couldn’t resist the pressure to do so any longer. Journalists and whistleblowers were instrumental in removing the pressure valve, but from that point on, truth poured out unpredictably. While such torrents are underway, there is no way of knowing how far they may spread or how long they may last.

 
Tony Blair and Bill Clinton in Belfast in April. Photograph: Charles McQuillan/Getty Images

The era of “big data” is also the era of “leaks”. Where traditional “sleaze” could topple a minister, several of the defining scandals of the past decade have been on a scale so vast that they exceed any individual’s responsibility. The Edward Snowden revelations of 2013, the Panama Papers leak of 2015 and the HSBC files (revealing organised tax evasion) all involved the release of tens of thousands or even millions of documents. Paper-based bureaucracies never faced threats to their legitimacy on this scale.

The power of commissions and inquiries to make sense of so much data should not be underestimated, nor should the integrity of those newspapers and whistleblowers that helped bring misdemeanours to light. In cases such as MPs’ expenses, some newspapers even invited their readers to help search these vast archives for treasure troves, like human algorithms sorting through data. But it is hard to imagine that the net effect of so many revelations was to build trust in any publicly visible institutions. On the contrary, the discovery that “elites” have been blocking access to a mine of incriminating data is perfect fodder for conspiracy theories. In his 2010 memoir, A Journey, Tony Blair confessed that legislating for freedom of information was one of his biggest regrets, which gave a glimpse of how transparency is viewed from the centre of power.

Following the release of the war logs by WikiLeaks, nobody in any position of power claimed that the data wasn’t accurate (it was, after all, the data, and not a journalistic report). Nor did they offer any moral justification for what was revealed. Defence departments were left making the flimsiest of arguments – that it was better for everyone if they didn’t know how war was conducted. It may well be that the House of Commons was not fairly represented by the MPs’ expenses scandal, that most City brokers are honest, or that the VW emissions scam was a one-off within the car industry. But scandals don’t work through producing fair or representative pictures of the world; they do so by blowing the lid on hidden truths and lies. Where whistleblowing and leaking become the dominant form of truth-telling, the authority of professional truth-tellers – reporters, experts, professionals, broadcasters – is thrown into question.

The term “illiberal democracy” is now frequently invoked to describe states such as Hungary under Viktor Orbán or Turkey under Recep Tayyip Erdoğan. In contrast to liberal democracy, this model of authoritarian populism targets the independence of the judiciary and the media, ostensibly on behalf of “the people”.

Brexit has been caused partly by distrust in “liberal elites”, but the anxiety is that it is also accelerating a drift towards “illiberalism”. There is a feeling at large, albeit mainly amongst outspoken remainers, that the BBC has treated the leave campaign and Brexit itself with kid gloves, for fear of provoking animosity. More worrying was the discovery by openDemocracy in October that the Metropolitan police were delaying their investigation into alleged breaches of electoral law by the leave campaign due to what a Met spokesperson called “political sensitivities”. The risk at the present juncture is that key civic institutions will seek to avoid exercising scrutiny and due process, for fear of upsetting their opponents.

Britain is not an “illiberal democracy”, but the credibility of our elites is still in trouble, and efforts to placate their populist opponents may only make matters worse. At the more extreme end of the spectrum, the far-right activist Stephen Yaxley-Lennon, also known as Tommy Robinson, has used his celebrity and social media reach to cast doubt on the judiciary and the BBC at once.

Yaxley-Lennon has positioned himself as a freedom fighter, revealing “the truth” about Muslim men accused of grooming underage girls by violating legal rules that restrict reporting details of ongoing trials. Yaxley-Lennon was found guilty of contempt of court and jailed (he was later released after the court of appeal ordered a retrial, and the case has been referred to the attorney general), but this only deepened his appeal for those who believed the establishment was complicit in a cover-up, and ordinary people were being deliberately duped.

The political concern right now is that suspicions of this nature – that the truth is being deliberately hidden by an alliance of “elites” – are no longer the preserve of conspiracy theorists, but becoming increasingly common. Our current crisis has too many causes to enumerate here, and it is impossible to apportion blame for a collective collapse of trust – which is as much a symptom of changes in media technologies as it is of any moral failings on the part of elites.

But what is emerging now is what the social theorist Michel Foucault would have called a new “regime of truth” – a different way of organising knowledge and trust in society. The advent of experts and government administrators in the 17th century created the platform for a distinctive liberal solution to this problem, which rested on the assumption that knowledge would reside in public records, newspapers, government files and journals. But once the integrity of these people and these instruments is cast into doubt, an opportunity arises for a new class of political figures and technologies to demand trust instead.

The project that was launched over three centuries ago, of trusting elite individuals to know, report and judge things on our behalf, may not be viable in the long term, at least not in its existing form. It is tempting to indulge the fantasy that we can reverse the forces that have undermined it, or else batter them into retreat with an even bigger arsenal of facts. But this is to ignore the more fundamental ways in which the nature of trust is changing.

The main feature of the emerging regime is that truth is now assumed to reside in hidden archives of data, rather than in publicly available facts. This is what is affirmed by scandals such as MPs’ expenses and the leak of the Iraq war logs – and more recently in the #MeToo movement, which also occurred through a sudden and voluminous series of revelations, generating a crisis of trust. The truth was out there, just not in the public domain. In the age of email, social media and cameraphones, it is now common sense to assume that virtually all social activity is generating raw data, which exists out there somewhere. Truth becomes like the lava below the earth’s crust, which periodically bursts through as a volcano.

What role does this leave for the traditional, analogue purveyors of facts and figures? What does it mean to “report” the news in an age of reflexive disbelief? Newspapers have been grappling with this question for some time now; some have decided to refashion themselves as portals to the raw data, or curators of other people’s content. But it is no longer intuitively obvious to the public why they should be prepared to take a journalist’s word for something, when they can witness the thing itself in digital form. There may be good answers to these questions, but they are not obvious ones.

Instead, a new type of heroic truth-teller has emerged in tandem with these trends. This is the individual who appears brave enough to call bullshit on the rest of the establishment – whether that be government agencies, newspapers, business, political parties or anything else. Some are whistleblowers, others are political leaders, and others are more like conspiracy theorists or trolls. The problem is that everyone has a different heroic truth-teller, because we’re all preoccupied by different bullshit. There is no political alignment between figures such as Chelsea Manning and Nigel Farage; what they share is only a willingness to defy the establishment and break consensus.

If a world where everyone has their own truth-tellers sounds dangerously like relativism, that’s because it is. But the roots of this new and often unsettling “regime of truth” don’t only lie with the rise of populism or the age of big data. Elites have largely failed to understand that this crisis is about trust rather than facts – which may be why they did not detect the rapid erosion of their own credibility.

Unless liberal institutions and their defenders are willing to reckon with their own inability to sustain trust, the events of the past decade will remain opaque to them. And unless those institutions can rediscover aspects of the original liberal impulse – to keep different domains of power separate, and put the disinterested pursuit of knowledge before the pursuit of profit – then the present trends will only intensify, and no quantity of facts will be sufficient to resist. Power and authority will accrue to a combination of decreasingly liberal states and digital platforms – interrupted only by the occasional outcry as whistles are blown and outrages exposed.

Sunday 15 July 2018

The Death of Truth - How Trump and Modi came to power

Michiko Kakutani in The Guardian

Two of the most monstrous regimes in human history came to power in the 20th century, and both were predicated on the violation and despoiling of truth, on the knowledge that cynicism and weariness and fear can make people susceptible to the lies and false promises of leaders bent on unconditional power. As Hannah Arendt wrote in her 1951 book The Origins of Totalitarianism, “The ideal subject of totalitarian rule is not the convinced Nazi or the convinced communist, but people for whom the distinction between fact and fiction (ie the reality of experience) and the distinction between true and false (ie the standards of thought) no longer exist.”

Arendt’s words increasingly sound less like a dispatch from another century than a chilling description of the political and cultural landscape we inhabit today – a world in which fake news and lies are pumped out in industrial volume by Russian troll factories, emitted in an endless stream from the mouth and Twitter feed of the president of the United States, and sent flying across the world through social media accounts at lightning speed. Nationalism, tribalism, dislocation, fear of social change and the hatred of outsiders are on the rise again as people, locked in their partisan silos and filter bubbles, are losing a sense of shared reality and the ability to communicate across social and sectarian lines.

This is not to draw a direct analogy between today’s circumstances and the overwhelming horrors of the second world war era, but to look at some of the conditions and attitudes – what Margaret Atwood has called the “danger flags” in George Orwell’s Nineteen Eighty-Four and Animal Farm – that make a people susceptible to demagoguery and political manipulation, and nations easy prey for would-be autocrats. To examine how a disregard for facts, the displacement of reason by emotion, and the corrosion of language are diminishing the value of truth, and what that means for the world.




The term “truth decay” has joined the post-truth lexicon that includes such now familiar phrases as “fake news” and “alternative facts”. And it’s not just fake news either: it’s also fake science (manufactured by climate change deniers and anti-vaxxers, who oppose vaccination), fake history (promoted by Holocaust revisionists and white supremacists), fake Americans on Facebook (created by Russian trolls), and fake followers and “likes” on social media (generated by bots).

Donald Trump, the 45th president of the US, lies so prolifically and with such velocity that the Washington Post calculated he’d made 2,140 false or misleading claims during his first year in office – an average of 5.9 a day. His lies – about everything from the investigations into Russian interference in the election, to his popularity and achievements, to how much TV he watches – are only the brightest blinking red light among many warnings of his assault on democratic institutions and norms. He routinely assails the press, the justice system, the intelligence agencies, the electoral system and the civil servants who make the US government tick.

Nor is the assault on truth confined to America. Around the world, waves of populism and fundamentalism are elevating appeals to fear and anger over reasoned debate, eroding democratic institutions, and replacing expertise with the wisdom of the crowd. False claims about the UK’s financial relationship with the EU helped swing the vote in favour of Brexit, and Russia ramped up its sowing of dezinformatsiya in the runup to elections in France, Germany, the Netherlands and other countries in concerted propaganda efforts to discredit and destabilise democracies.

How did this happen? How did truth and reason become such endangered species, and what does the threat to them portend for our public discourse and the future of our politics and governance? 

It’s easy enough to see Trump as having ascended to office because of a unique, unrepeatable set of factors: a frustrated electorate still hurting from the backwash of the 2008 financial crash; Russian interference in the election and a deluge of pro-Trump fake news stories on social media; a highly polarising opponent who came to symbolise the Washington elite that populists decried; and an estimated $5bn‑worth of free campaign coverage from media outlets obsessed with the views and clicks that the former reality TV star generated.

If a novelist had concocted a villain like Trump – a larger-than-life, over-the-top avatar of narcissism, mendacity, ignorance, prejudice, boorishness, demagoguery and tyrannical impulses (not to mention someone who consumes as many as a dozen Diet Cokes a day) – she or he would likely be accused of extreme contrivance and implausibility. In fact, the president of the US often seems less like a persuasive character than some manic cartoon artist’s mashup of Ubu Roi, Triumph the Insult Comic Dog, and a character discarded by Molière. But the more clownish aspects of Trump the personality should not blind us to the monumentally serious consequences of his assault on truth and the rule of law, and the vulnerabilities he has exposed in our institutions and digital communications. It is unlikely that a candidate who had already been exposed during the campaign for his history of lying and deceptive business practices would have gained such popular support were portions of the public not blasé about truth-telling and were there not systemic problems with how people get their information and how they’ve come to think in increasingly partisan terms.




With Trump, the personal is political, and in many respects he is less a comic-book anomaly than an extreme, bizarro-world apotheosis of many of the broader, intertwined attitudes undermining truth today, from the merging of news and politics with entertainment, to the toxic polarisation that’s overtaken American politics, to the growing populist contempt for expertise.

For decades now, objectivity – or even the idea that people can aspire toward ascertaining the best available truth – has been falling out of favour. Daniel Patrick Moynihan’s well-known observation that “Everyone is entitled to his own opinion, but not to his own facts” is more timely than ever: polarisation has grown so extreme that voters have a hard time even agreeing on the same facts. This has been exponentially accelerated by social media, which connects users with like-minded members and supplies them with customised news feeds that reinforce their preconceptions, allowing them to live in ever narrower silos.

For that matter, relativism has been ascendant since the culture wars began in the 1960s. Back then, it was embraced by the New Left, who were eager to expose the biases of western, bourgeois, male-dominated thinking; and by academics promoting the gospel of postmodernism, which argued that there are no universal truths, only smaller personal truths – perceptions shaped by the cultural and social forces of one’s day. Since then, relativistic arguments have been hijacked by the populist right.

Relativism, of course, synced perfectly with the narcissism and subjectivity that had been on the rise, from Tom Wolfe’s “Me Decade” 1970s, on through the selfie age of self-esteem. No surprise then that the “Rashomon effect” – the point of view that everything depends on your point of view – has permeated our culture, from popular novels such as Lauren Groff’s Fates and Furies to television series like The Affair, which hinge on the idea of competing realities.


 History is reimagined in Oliver Stone’s 1991 film JFK. Photograph: Allstar/Cinetext/Warner Bros

I’ve been reading and writing about many of these issues for nearly four decades, going back to the rise of deconstruction and battles over the literary canon on college campuses; debates over the fictionalised retelling of history in movies such as Oliver Stone’s JFK and Kathryn Bigelow’s Zero Dark Thirty; efforts made by both the Clinton and Bush administrations to avoid transparency and define reality on their own terms; Trump’s war on language and efforts to normalise the abnormal; and the impact that technology has had on how we process and share information.

In his 2007 book, The Cult of the Amateur, the Silicon Valley entrepreneur Andrew Keen warned that the internet not only had democratised information beyond people’s wildest imaginings but also was replacing genuine knowledge with “the wisdom of the crowd”, dangerously blurring the lines between fact and opinion, informed argument and blustering speculation. A decade later, the scholar Tom Nichols wrote in The Death of Expertise that a wilful hostility towards established knowledge had emerged on both the right and the left, with people aggressively arguing that “every opinion on any matter is as good as every other”. Ignorance was now fashionable.

The postmodernist argument that all truths are partial (and a function of one’s perspective) led to the related argument that there are many legitimate ways to understand or represent an event. This both encouraged a more egalitarian discourse and made it possible for the voices of the previously disenfranchised to be heard. But it has also been exploited by those who want to make the case for offensive or debunked theories, or who want to equate things that cannot be equated. Creationists, for instance, called for teaching “intelligent design” alongside evolution in schools. “Teach both,” some argued. Others said, “Teach the controversy.”




A variation on this “both sides” argument was employed by Trump when he tried to equate people demonstrating against white supremacy with the neo-Nazis who had converged in Charlottesville, Virginia, to protest the removal of Confederate statues. There were “some very fine people on both sides”, Trump declared. He also said, “We condemn in the strongest possible terms this egregious display of hatred, bigotry and violence on many sides, on many sides.”

Climate deniers, anti-vaxxers and other groups who don’t have science on their side bandy about phrases that wouldn’t be out of place in a college class on deconstruction – phrases such as “many sides,” “different perspectives”, “uncertainties”, “multiple ways of knowing.” As Naomi Oreskes and Erik M Conway demonstrated in their 2010 book Merchants of Doubt, rightwing thinktanks, the fossil fuel industry, and other corporate interests that are intent on discrediting science have employed a strategy first used by the tobacco industry to try to confuse the public about the dangers of smoking. “Doubt is our product,” read an infamous memo written by a tobacco industry executive in 1969, “since it is the best means of competing with the ‘body of fact’ that exists in the minds of the general public.”

The strategy, essentially, was this: dig up a handful of so-called professionals to refute established science or argue that more research is needed; turn these false arguments into talking points and repeat them over and over; and assail the reputations of the genuine scientists on the other side. If this sounds familiar, that’s because it’s a tactic that’s been used by Trump and his Republican allies to defend policies (on matters ranging from gun control to building a border wall) that run counter to both expert evaluation and national polls.


In January 2018, protests were held in 50 states urging US senators to support scientific evidence against Trump’s climate change policies. Photograph: Pacific Press/LightRocket via Getty Images

What Oreskes and Conway call the “tobacco strategy” was helped, they argued, by elements in the mainstream media that tended “to give minority views more credence than they deserve”. This false equivalence was the result of journalists confusing balance with truth-telling, wilful neutrality with accuracy; caving in to pressure from rightwing interest groups to present “both sides”; and the format of television news shows that feature debates between opposing viewpoints – even when one side represents an overwhelming consensus and the other is an almost complete outlier in the scientific community. For instance, a 2011 BBC Trust report found that the broadcaster’s science coverage paid “undue attention to marginal opinion” on the subject of manmade climate change. Or, as a headline in the Telegraph put it, “BBC staff told to stop inviting cranks on to science programmes”.

In a speech on press freedom, CNN’s chief international correspondent Christiane Amanpour addressed this issue in the context of media coverage of the 2016 presidential race, saying: “It appeared much of the media got itself into knots trying to differentiate between balance, objectivity, neutrality, and crucially, truth … I learned long ago, covering the ethnic cleansing and genocide in Bosnia, never to equate victim with aggressor, never to create a false moral or factual equivalence, because then you are an accomplice to the most unspeakable crimes and consequences. I believe in being truthful, not neutral. And I believe we must stop banalising the truth.”

As the west lurched through the cultural upheavals of the 1960s and 1970s and their aftermath, artists struggled with how to depict this fragmenting reality. Some writers like John Barth, Donald Barthelme and William Gass created self-conscious, postmodernist fictions that put more emphasis on form and language than on conventional storytelling. Others adopted a minimalistic approach, writing pared-down, narrowly focused stories emulating the fierce concision of Raymond Carver. And as the pursuit of broader truths became more and more unfashionable in academia, and as daily life came to feel increasingly unmoored, some writers chose to focus on the smallest, most personal truths: they wrote about themselves.

American reality had become so confounding, Philip Roth wrote in a 1961 essay, that it felt like “a kind of embarrassment to one’s own meager imagination”. This had resulted, he wrote, in the “voluntary withdrawal of interest by the writer of fiction from some of the grander social and political phenomena of our times”, and the retreat, in his own case, to the more knowable world of the self.


Real estate and realism … Bruce Willis in the 1990 film version of The Bonfire of the Vanities. Photograph: Allstar/WARNER BROS.

In a controversial 1989 essay, Tom Wolfe lamented these developments, mourning what he saw as the demise of old-fashioned realism in American fiction, and he urged novelists to “head out into this wild, bizarre, unpredictable, hog-stomping Baroque country of ours and reclaim it as literary property”. He tried this himself in novels such as The Bonfire of the Vanities and A Man in Full, using his skills as a reporter to help flesh out a spectrum of subcultures with Balzacian detail. But while Wolfe had been an influential advocate in the 1970s of the New Journalism (which put an emphasis on the voice and point of view of the reporter), his new manifesto didn’t win many converts in the literary world. Instead, writers as disparate as Louise Erdrich, David Mitchell, Don DeLillo, Julian Barnes, Chuck Palahniuk, Gillian Flynn and Groff would play with devices (such as multiple points of view, unreliable narrators and intertwining storylines) pioneered decades ago by innovators such as William Faulkner, Virginia Woolf, Ford Madox Ford and Vladimir Nabokov to try to capture the new Rashomon-like reality in which subjectivity rules and, in the infamous words of former president Bill Clinton, truth “depends on what the meaning of the word ‘is’ is”.

But what Roth called “the sheer fact of self, the vision of self as inviolable, powerful, and nervy, self as the only real thing in an unreal environment” would remain more comfortable territory for many writers. In fact, it would lead, at the turn of the millennium, to a remarkable flowering of memoir writing, including such classics as Mary Karr’s The Liars’ Club and Dave Eggers’s A Heartbreaking Work of Staggering Genius – books that established their authors as among the foremost voices of their generation. The memoir boom and the popularity of blogging would eventually culminate in Karl Ove Knausgaard’s six-volume autobiographical novel, My Struggle – filled with minutely detailed descriptions, drawn from the author’s own daily life.

Personal testimony also became fashionable on college campuses, as the concept of objective truth fell out of favour and empirical evidence gathered by traditional research came to be regarded with suspicion. Academic writers began prefacing scholarly papers with disquisitions on their own “positioning” – their race, religion, gender, background, personal experiences that might inform or skew or ratify their analysis.



In a 2016 documentary titled HyperNormalisation, the filmmaker Adam Curtis created an expressionistic, montage-driven meditation on life in the post-truth era; the title was taken from a term coined by the anthropologist Alexei Yurchak to describe life in the final years of the Soviet Union, when people both understood the absurdity of the propaganda the government had been selling them for decades and had difficulty envisioning any alternative. In HyperNormalisation, which was released shortly before the 2016 US election, Curtis says in voiceover narration that people in the west had also stopped believing the stories politicians had been telling them for years, and Trump realised that “in the face of that, you could play with reality” and in the process “further undermine and weaken the old forms of power”.

Some Trump allies on the far right also seek to redefine reality on their own terms. Invoking the iconography of the movie The Matrix – in which the hero is given a choice between two pills, a red one (representing knowledge and the harsh truths of reality) and a blue one (representing soporific illusion and denial) – members of the “alt-right” and some aggrieved men’s rights groups talk about “red-pilling the normies”, which means converting people to their cause. In other words, selling their inside-out alternative reality, in which white people are suffering from persecution, multiculturalism poses a grave threat and men have been oppressed by women.

Alice Marwick and Rebecca Lewis, the authors of a study on online disinformation, argue that “once groups have been red-pilled on one issue, they’re likely to be open to other extremist ideas. Online cultures that used to be relatively nonpolitical are beginning to seethe with racially charged anger. Some sci-fi, fandom, and gaming communities – having accepted run-of-the-mill antifeminism – are beginning to espouse white-nationalist ideas. ‘Ironic’ Nazi iconography and hateful epithets are becoming serious expressions of antisemitism.”



One of the tactics used by the alt-right to spread its ideas online, Marwick and Lewis argue, is to initially dilute more extreme views as gateway ideas to court a wider audience; among some groups of young men, they write, “it’s a surprisingly short leap from rejecting political correctness to blaming women, immigrants, or Muslims for their problems.”

Many misogynist and white supremacist memes, in addition to a lot of fake news, originate or gain initial momentum on sites such as 4chan and Reddit – before accumulating enough buzz to make the leap to Facebook and Twitter, where they can attract more mainstream attention. Renee DiResta, who studies conspiracy theories on the web, argues that Reddit can be a useful testing ground for bad actors – including foreign governments such as Russia’s – to try out memes or fake stories to see how much traction they get. DiResta warned in the spring of 2016 that the algorithms of social networks – which give people news that is popular and trending, rather than accurate or important – are helping to promote conspiracy theories.



This sort of fringe content can both affect how people think and seep into public policy debates on matters such as vaccines, zoning laws and water fluoridation. Part of the problem is an “asymmetry of passion” on social media: while most people won’t devote hours to writing posts that reinforce the obvious, DiResta says, “passionate truthers and extremists produce copious amounts of content in their commitment to ‘wake up the sheeple’”.

Recommendation engines, she adds, help connect conspiracy theorists with one another to the point that “we are long past merely partisan filter bubbles and well into the realm of siloed communities that experience their own reality and operate with their own facts”. At this point, she concludes, “the internet doesn’t just reflect reality any more; it shapes it”.

Language is to humans, the writer James Carroll once observed, what water is to fish: “We swim in language. We think in language. We live in language.” This is why Orwell wrote that “political chaos is connected with the decay of language”, divorcing words from meaning and opening up a chasm between a leader’s real and declared aims. This is why the US and the world feel so disoriented by the stream of lies issued by the Trump White House and the president’s use of language to disseminate distrust and discord. And this is why authoritarian regimes throughout history have co‑opted everyday language in an effort to control how people communicate – exactly the way the Ministry of Truth in Nineteen Eighty-Four aims to deny the existence of external reality and safeguard Big Brother’s infallibility.

Orwell’s “Newspeak” is a fictional language, but it often mirrors and satirises the “wooden language” imposed by communist authorities in the Soviet Union and eastern Europe. Among the characteristics of “wooden language” that the French scholar Françoise Thom identified in a 1987 thesis were abstraction and the avoidance of the concrete; tautologies (“the theories of Marx are true because they are correct”); bad metaphors (“the fascist octopus has sung its swan song”); and Manichaeism that divides the world into things good and things evil (and nothing in between).



Trump has performed the disturbing Orwellian trick (“WAR IS PEACE”, “FREEDOM IS SLAVERY”, “IGNORANCE IS STRENGTH”) of using words to mean the exact opposite of what they really mean. It’s not just his taking the term “fake news”, turning it inside out, and using it to try to discredit journalism that he finds threatening or unflattering. He has also called the investigation into Russian election interference “the single greatest witch-hunt in American political history”, when he is the one who has repeatedly attacked the press, the justice department, the FBI, the intelligence services and any institution he regards as hostile.

In fact, Trump has the perverse habit of accusing opponents of the very sins he is guilty of himself: “Lyin’ Ted”, “Crooked Hillary”, “Crazy Bernie”. He accused Clinton of being “a bigot who sees people of colour only as votes, not as human beings worthy of a better future”, and he has asserted that “there was tremendous collusion on behalf of the Russians and the Democrats”.

In Orwell’s language of Newspeak, a word such as “blackwhite” has “two mutually contradictory meanings”: “Applied to an opponent, it means the habit of impudently claiming that black is white, in contradiction of the plain facts. Applied to a Party member, it means a loyal willingness to say that black is white when Party discipline demands this.”






This, too, has an unnerving echo in the behaviour of Trump White House officials and Republican members of Congress who lie on the president’s behalf and routinely make pronouncements that flout the evidence in front of people’s eyes. The administration, in fact, debuted with the White House press secretary, Sean Spicer, insisting that Trump’s inaugural crowds were the “largest audience” ever – an assertion that defied photographic evidence and was rated by the fact-checking blog PolitiFact a “Pants on Fire” lie. These sorts of lies, the journalist Masha Gessen has pointed out, are told for the same reason that Vladimir Putin lies: “to assert power over truth itself”.

Trump has continued his personal assault on the English language. His incoherence (his twisted syntax, his reversals, his insincerity, his bad faith and his inflammatory bombast) is emblematic of the chaos he creates and thrives on, as well as an essential instrument in his liar’s toolkit. His interviews, off‑teleprompter speeches and tweets are a startling jumble of insults, exclamations, boasts, digressions, non sequiturs, qualifications, exhortations and innuendos – a bully’s efforts to intimidate, gaslight, polarise and scapegoat.

Precise words, like facts, mean little to Trump, as interpreters, who struggle to translate his grammatical anarchy, can attest. Chuck Todd, the anchor of NBC’s Meet the Press, observed that after several of his appearances as a candidate Trump would lean back in his chair and ask the control booth to replay his segment on a monitor – without sound: “He wants to see what it all looked like. He will watch the whole thing on mute.”

Philip Roth said he could never have imagined that “the 21st-century catastrophe to befall the USA, the most debasing of disasters”, would appear in “the ominously ridiculous commedia dell’arte figure of the boastful buffoon”. Trump’s ridiculousness, his narcissistic ability to make everything about himself, the outrageousness of his lies, and the profundity of his ignorance can easily distract attention from the more lasting implications of his story: how easily Republicans in Congress enabled him, undermining the whole concept of checks and balances set in place by the founders; how a third of the country passively accepted his assaults on the constitution; how easily Russian disinformation took root in a culture where the teaching of history and civics had seriously atrophied.

The US’s founding generation spoke frequently of the “common good”. George Washington reminded citizens of their “common concerns” and “common interests” and the “common cause” they had all fought for in the revolution. And Thomas Jefferson spoke in his inaugural address of the young country uniting “in common efforts for the common good”. A common purpose and a shared sense of reality mattered because they bound the disparate states and regions together, and they remain essential for conducting a national conversation. Especially today in a country where Trump and Russian and hard-right trolls are working to incite the very factionalism Washington warned us about, trying to inflame divisions between people along racial, ethnic and religious lines.

There are no easy remedies, but it's essential that citizens defy the cynicism and resignation that autocrats and power-hungry politicians depend on to subvert resistance. Without commonly agreed-on facts – not Republican facts and Democratic facts; not the alternative facts of today's silo-world – there can be no rational debate over policies, no substantive means of evaluating candidates for political office, and no way to hold elected officials accountable to the people. Without truth, democracy is hobbled.