
Monday 14 August 2023

A Level Economics: Can you change a Brexit state of mind?


If departing the EU has failed to deliver, why is the UK still so divided? Seven years on, we ask behavioural psychologists if cognitive dissonance can be overcome. Tim Adams in The Guardian



One of the most significant political events of the past few months, it has seemed to me, wasn’t strictly a bit of politics at all, but an emotional catch and quaver in the voice of a politician. The politician was the Conservative MP Steve Baker and the sudden sob in his throat came about during a TV interview about the efforts to resolve the Northern Ireland protocol.

Baker, you will recall, was one of the most strident voices in the Brexit argument, a leader of the Tory European Research group, the ERG, which frustrated Theresa May’s efforts to find a compromise deal with the EU. The sob in his voice and the tears in his eyes prefaced a short, heartfelt confession about the extreme private stress that those Brexit machinations – and subsequent arguments over Covid lockdown – had caused him. Speaking subsequently to the Times, Baker expanded on that state of mind. “I felt absolutely worthless,” he said. “I felt repugnant, hateful, to blame for all of the troubles that we had, absolutely without any joy, constantly worried about everything to the point of mental torment. A constant state of panic attacks and anxiety. It’s not a state anyone should live in.”

Matters came to a head in November 2021 at a weekly prayer group that Baker, a Christian, attended in Westminster. “I suddenly just started crying,” he recalled. “I couldn’t control it. I couldn’t speak. I was just clutching myself, sobbing my heart out.” One of the reasons that he was opening up about that now, he said, was that “I’m very conscious there’s lots of people out there who blame me for their misery. But it’s an unfortunate thing on this question of leave and remain that leaving has caused a great deal of anxiety and anger and depression for a lot of people. But being in the EU has caused a lot of anxiety and anger and depression for people…” 

Baker’s courageous candour was significant in our national conversation, it seemed to me, for a couple of reasons. For one thing, that outpouring was, to my knowledge, the only public occasion on which a leading Brexiter had owned up to the pain of doubt and anguish that the referendum had occasioned. It was, also, a very telling illustration of a truism about the whole ongoing cataclysm: that, though the vote obviously had fundamental real-world consequences for our economy and our politics, it was arguably best understood as a psychological rather than a political moment.

From the outset, headline writers recognised and amplified the internalised crises behind the politics in references to “Brexhaustion”, “Strexit”, “Branxiety” and “Brexistential crisis”. The referendum result, then and now, was (for remainers) an act of visceral “self-harm”. In the days after the result, the Guardian reported that “therapists everywhere” were experiencing “shockingly elevated levels of anxiety and despair”, with mental health referrals “[starting] to mushroom”. There was a clear spike in the prescription of antidepressants. By January 2019, a YouGov survey found that two-thirds of British adults were either “fairly unhappy” or “very unhappy” because of Brexit; one-third of leave voters were in that latter category.
 
Seven years is sometimes thought of as a moment of settled change. It is understood, not quite exactly, as the period of time in which nearly all the cells in our bodies have been replaced. The Brexit-made divisions of 2016 persist, however. Though there is some anecdotal and polling evidence that there has been a shift in sentiment, and that remain might now prevail, the same polls show very little appetite to reopen the question. When the BBC did an anniversary Question Time in June, only one half of the divided nation was even allowed in the studio – the audience was made up entirely of those who voted leave, presumably to ensure the debate would not simply descend into an all-too familiar slanging match. (It was as if the marriage guidance counsellor had been forced to separate the warring parties outside.) If there is one certainty about the coming political conference season it is that considered arguments for and against Brexit will not be aired. The Tories will crow about Brexit being done. The Labour frontbench will solemnly observe that past tense, and avoid the B-word, as if it is a triggering trauma for the party and the country, best left undisturbed.

If the language of psychology and identity was always the lexicon with which we understood Brexit, the denialism of our current politics insists that it remains so. With this in mind, in the course of the past week, I have been speaking to some of the behavioural psychologists who initially examined some of the choices of the referendum, to see if they now saw any way out of current entrenched divisions. (One of the triggers for this inquiry in my own mind was reading a new book called The Art of the Impossible, the inside story of Nigel Farage’s short-lived Brexit party. Revisiting the ingrained anxieties of that period was a version of the addictive chest-tightening outrage that comes from a daily scroll through the culture war of my Twitter feed.)

One of the key mechanisms that all psychologists agree has defined the binary choice of the referendum is cognitive dissonance. That is the powerful internal mechanism in each of us that demands consistency in our understanding of the world, and which desperately looks for ways to correct or manage information that contradicts understanding. The term was coined by a young behaviouralist in the 1950s, Leon Festinger, who came to it after studying a religious doomsday cult that saw its date for apocalypse come and go. Cognitive dissonance described the jarring pain of cultists seeing prophecy unfulfilled and the immediate strategies to explain it away.
 

“A man with a conviction is a hard man to change,” Festinger wrote. “Tell him you disagree and he turns away. Show him facts or figures and he questions your sources. Appeal to logic and he fails to see your point. But suppose he is presented with evidence, unequivocal and undeniable evidence, that his belief is wrong: what will happen? The individual will frequently emerge, not only unshaken, but even more convinced of the truth of his beliefs than ever before. Indeed, he may even show a new fervour about convincing and converting other people to his view.” Welcome to Brexit.

The American social psychologist Carol Tavris has spent her academic career partly developing Festinger’s work. That wisdom is distilled in her book (co-written with Elliot Aronson) called, perfectly, Mistakes Were Made (But Not By Me). At the time of the Brexit referendum seven years ago, Tavris noted that “once we have made a decision, our mental doors tend to close… it is always easier to continue to justify a belief than to change it.” As a result, she suggested, Britain – like America under Trump – tribally divided, faced with all sorts of evidence that campaign promises were utter falsehoods, was “awash in dissonance”.

Little of that, Tavris suggested to me last week, has gone away. “Most of us tend to hold the belief that we are essentially good and moral and competent,” she says. “And if I fear I did something bad, foolish or wrong, that fear challenges that belief we hold about ourselves. Then I have a choice: either I can accept this new information and say, ‘I guess I really screwed up here’, or I can say, ‘Sod off with your stupid evidence’.” This latter response, she suggests, is the default. 

Her book uses a metaphor she calls the pyramid of choice. At the top of the pyramid, before a choice has been made, an individual is at his or her most open-minded. But as soon as you step off the top in a particular direction, efforts will already be being made to justify that decision. “By the time you’ve made five, or six or seven steps down that pyramid, renewing your commitment to that decision, rehearsing that decision, the harder it will be for anyone to go back up to the top.”

She points to lots of evidence showing that even if people – judges, for example – are given a face-saving way to declare they have got something wrong, they are still unlikely to accept or declare it.

So what kind of conversation helps people move past that dissonance and accept the evidence of reality (that, say, £350m won’t be given to the health service and there won’t be an easy trade deal with the US – or that Brexit is not to blame for every aspect of the cost of living crisis)?

“When we argue with somebody about their beliefs,” Tavris says, “the absolute crucial thing to avoid is making them feel foolish. If you say something like, ‘How could you be so stupid?’, that will almost always make your listener become even more committed to their belief. If you say instead, ‘Well, many of my own expectations turned out not to be the case too’, that might be a place to start.”

Psychologists are, of course, not immune to the biases they identify. How does she maintain doubt in her own beliefs? Well, she says, “it’s that old idea of being open-minded, but not so open that your brains fall out. You want to have ideas you live by and feel passionately about. But the goal is to hold them lightly enough so that [if the evidence changes] you can also let them go.”

The more I read into the science of psychological polarisation, the more often ideas of “neuropolitics” crop up. This is the fairly new science that examines – with the help of fMRI brain scans – questions of political allegiance in the context of brain structure and activity. Perhaps the leading researcher in this area in the UK is Darren Schreiber, at Exeter University, author of Your Brain Is Built for Politics. I call to ask him whether the differences between “leaver” and “remainer” responses to the world are so baked in as to be visible in brain activity.

Schreiber is circumspect about the current reach of his discipline, beyond broad observations. “If you’re a conservative, or if you’re a liberal, there are consistent patterns that emerge,” he says. Different experiments in brain imaging “can classify political affiliation with about 70%-plus accuracy based on brain structure, and with brain activity at an even higher rate, 80%-plus”. In very simplified terms, the amygdala, the part of the brain that processes fear and threat, appears more sensitive to certain stimuli in Republican or conservative brains.

Probably more significant, Schreiber suggests, is the fact that our brains are hardwired to be excited by politics in general.

“Though we all have these underlying predispositions at the genetic level, to be a little bit more conservative or a little bit more liberal, these can be altered by environmental circumstances. And by far the most important environmental circumstance, if you’re a human, is your social milieu. If you’re an ant you can tell who is ‘us’ and who is ‘them’ with a very quick sniff. If you are a human it is more complex, and we put a lot of work into that.”

That mental effort is subject to Hebb’s law, which holds that “neurons that fire together wire together”, that is to say, the brain itself starts to be shaped in tiny pathways by the associations it is most often exposed to, a sort of internal echo chamber.

How easy is it to change that wiring?

“It’s really hard,” Schreiber says. “We see tremendous stability over very long periods of time.” A choice like Brexit provides endless stimuli to feed that brain activity. “It’s coalitions within coalitions within coalitions…” Schreiber says.

Nick Chater, professor of behavioural science at Warwick Business School, approaches those stubborn social networks from a different perspective. In the aftermath of the referendum, he led a discussion on Radio 4’s The Human Zoo into the psychological fallout of Brexit, the hardening of decision into identity. He laughs a little when I ask him if there is any kind of time limit on cognitive dissonance.

“Behavioural psychology very rarely looks at the long term,” he says. “So I think actually, psychologists have fairly little idea how long these things tend to last.”

What he finds striking about the current situation is the almost total absence of debate about the effect of the decision itself. “It’s become so divisive, that even raising the issue seems almost a provocative act. There’s a sense that we just can’t talk about this at all any more.”

Does he see a strategy that would allow that to change?

“One of the things that might change it is if one could get to a point where we can reframe the debate. Say if the EU clearly broke into a two-speed Europe where there was a central core engaging in really deep integration and an outer rim more loosely connected…”

And what about the kind of language that might prompt a rethink?

“The most useful language would recognise that politics is inherently uncertain,” he says. “So: ‘We thought it would be a bad idea; you thought it would be better; but nobody really knew for sure. Now we know a bit more, and perhaps it’s time to rethink…’”

Does he hear much evidence of that?

“Not a great deal…”

Perhaps the most extensive examination of the referendum in these terms came in a book called The Psychology of Brexit, written by Brian Hughes, a specialist in stress psychophysiology and a professor of psychology at the University of Galway, Ireland. “Brexit,” Hughes argued, “emerged from psychological impulses, was determined by psychological choices, is construed in terms of psychological perceptions, and will leave a lasting psychological imprint.”

At the heart of the choice, Hughes suggested, were two persistent fallacies. First, the notion that people ever approach political questions with clear-headed reason. Second, the idea that your opponents have cornered the market in irrationality. “Remain did not have a monopoly on reason. This is because remainers are human beings.”

Hughes’s book came out in 2019 at the height of “no deal is better than a bad deal” insanity. If he were to add to it now, he suggests, it would be as a textbook case of “polarisation theory” and the ways in which the three-word sloganeering of “Brexit means Brexit” and “get Brexit done” has been repurposed to provide the illusion of simplicity to other very complex issues – “stop the boats” etc. The primary division of Brexit has extended into “clusters of interwoven views” on the climate crisis, vaccination and immigration, feeding everything into the same blunt binary.

One result of that, he suggests, is that Brexit has become a classic example of toxicity. “If there is something especially scandalous in our own lives or traumatic, we will try not to mention it. It just brings too much up. People talk about the ‘Ming vase strategy’ for Labour and Brexit [the idea that they must not smash their precious majority]. The political logic is that this event was still so painful for people that you could lose half the electorate as a result of one soundbite.”

If there is a way through this, he suggests, it is to break down the myths of us and them. “Brexit was obviously never the single will of the people, but also the will of leavers and the will of remainers are very far from homogenous. Politicians need to find ways of foregrounding the diversity of views that people had and have, even if some of them might be very ugly. They need to show up the illusion of simple polarisation.”

Preventing this, he says, is the fact that Brexit has brought into the political mainstream “all the reasoning errors that people make”. Polarity acts against nuance, and undermines the middle ground. “Both sides start to look at the other as somehow irretrievably deranged. And when you pathologise the other side, there’s no point in reaching out to them.” As a result, he suggests, “People who have the opportunity to address political challenges are no longer seeking to control the divisions in society, they are trying to maximise them for their own ends.”

Does he see an end to that polarity?

“It would require politicians and commentators to take some of the heat out of the arguments,” he says. “That might take a generation, or it might be one of these cyclical trends.” For the time being, our Brexity brains are, it seems, here to stay.

Friday 23 June 2023

Economics Explained: Assumptions and Economic Models

An assumption, in the context of economic models, refers to a simplifying belief or proposition about the behaviour of individuals, firms, or the overall economic system. These assumptions are necessary because economic models attempt to capture the complexity of real-world phenomena and make them more understandable and analysable.

Assumptions serve as building blocks for economic models, providing a foundation upon which the analysis can be conducted. They help economists create a framework that abstracts away unnecessary details and focuses on key variables and relationships of interest. By making assumptions, economists can isolate specific factors and explore their impact on economic outcomes.

For example, when constructing a model to analyse consumer behaviour, economists may assume that individuals are rational decision-makers who seek to maximise their personal satisfaction or utility. While this assumption may not accurately capture every aspect of real-world consumer behaviour, it simplifies the decision-making process and allows economists to predict how individuals might respond to changes in prices, incomes, or other factors.

Similarly, in the study of market dynamics, economists often assume perfect competition, which assumes a large number of buyers and sellers, identical products, and perfect information. Although perfect competition is rarely found in reality, this assumption enables economists to study market equilibrium, price determination, and the effects of various policy interventions in a more manageable way.
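As a purely illustrative sketch (not part of the original post, and with invented numbers), the perfect-competition assumption lets us reduce a whole market to linear demand and supply curves and solve directly for the equilibrium price and quantity:

```python
# Illustrative toy model: a perfectly competitive market with linear curves.
# Demand: Qd = a - b*P   Supply: Qs = c + d*P   (all parameters invented)

def equilibrium(a, b, c, d):
    """Return (price, quantity) at which quantity demanded equals quantity supplied."""
    price = (a - c) / (b + d)   # set a - b*P = c + d*P and solve for P
    quantity = a - b * price
    return price, quantity

price, quantity = equilibrium(a=100, b=2, c=10, d=3)
print(price, quantity)  # 18.0 64.0
```

Changing any single parameter, say a shift in demand via a, immediately yields a new predicted equilibrium, which is exactly the kind of comparative exercise the simplifying assumption makes tractable.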

Assumptions in economic models also often employ the ceteris paribus principle, which means "all else equal." This principle assumes that while analysing the relationship between two variables, all other factors remain constant. This allows economists to focus on the specific relationship of interest without getting entangled in the complexities of simultaneous changes in multiple factors.
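A minimal sketch of ceteris paribus (again invented for illustration, not from the post): quantity demanded below depends on both price and income, and holding income fixed isolates the price–quantity relationship:

```python
# Illustrative demand function: quantity depends on price AND income.
# The parameters a, b and c are invented for the example.

def quantity_demanded(price, income, a=50, b=4, c=0.01):
    return a - b * price + c * income

# Ceteris paribus: hold income constant at 20,000 and vary only price.
for price in (5, 6, 7):
    print(price, quantity_demanded(price, income=20_000))
# Each one-unit rise in price lowers quantity demanded by b = 4, all else equal.
```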

It is important to note that assumptions are simplifications and abstractions, and they may not always perfectly reflect reality. However, they serve a crucial role in economic modelling by making the analysis feasible, highlighting key relationships, and providing initial insights into economic behaviour and outcomes. While assumptions are necessary, it is also important for economists to continuously test and refine them based on empirical evidence to improve the accuracy and reliability of economic models.

Assumptions and simplifications in mathematical economic models can introduce potential biases and limitations in several ways:

  1. Inaccurate representation of reality: Economic models are abstractions that aim to simplify the complex real world. However, by making assumptions and simplifications, models may fail to capture the full complexity and nuances of economic phenomena. These simplifications can lead to a mismatch between the model's assumptions and the actual behaviour of individuals, firms, or markets, potentially introducing biases in the model's predictions.

  2. Omission of relevant variables: Economic models often involve simplifications that exclude certain variables or factors that may be important in real-world situations. This exclusion can limit the model's ability to provide a comprehensive understanding of the economic system under study. The omission of relevant variables can result in biased or incomplete analysis, as important drivers of economic behaviour or outcomes may be neglected.

  3. Assumptions about individual behaviour: Many economic models rely on assumptions about the behaviour of individuals, such as the assumption of rationality or self-interest. However, these assumptions may not always hold true in reality. Individuals may exhibit bounded rationality, have imperfect information, or behave altruistically, which can deviate from the assumptions made in economic models. Such deviations can lead to biased predictions or inaccurate representations of real-world phenomena.

  4. Simplified market structures: Economic models often assume simplified market structures, such as perfect competition, monopoly, or oligopoly. While these assumptions provide a useful framework for analysis, they may not reflect the complexities of actual markets. Real-world markets can exhibit various degrees of competition, market power, and imperfect information, which can introduce biases when using simplified market structures in economic models.

  5. Linear relationships: Many economic models assume linear relationships between variables for simplicity and tractability. However, in reality, relationships between variables may be nonlinear or exhibit diminishing returns. Assuming linearity can introduce biases in predictions or policy recommendations, as it may not accurately capture the actual dynamics and interactions among variables.

  6. Limited scope of analysis: Economic models often focus on specific aspects or sectors of the economy, neglecting interdependencies and feedback effects. This limited scope can introduce biases by overlooking broader systemic effects or failing to capture the full consequences of policy interventions. It is important to recognise that economic systems are complex and interconnected, and simplifications in models can restrict the understanding of these interconnections.

To mitigate these limitations and biases, economists employ various techniques, such as sensitivity analysis, robustness checks, and empirical validation, to test the assumptions and evaluate the robustness of model predictions. Additionally, economists strive to develop more realistic and nuanced models by incorporating more accurate assumptions, relaxing unrealistic assumptions, or adopting alternative modelling approaches to address the limitations and biases introduced by simplifications.
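One of those techniques, sensitivity analysis, can be sketched in a few lines (illustrative only; the toy linear market and all numbers are invented): perturb an assumed parameter and see how much the model's prediction moves.

```python
# Illustrative sensitivity analysis: how fragile is the predicted equilibrium
# price if the assumed demand slope (b) is wrong by 25% either way?
# Toy linear market: Qd = a - b*P, Qs = c + d*P (numbers invented).

def eq_price(a, b, c, d):
    return (a - c) / (b + d)

baseline = eq_price(a=100, b=2.0, c=10, d=3)
for b in (1.5, 2.0, 2.5):   # -25%, the assumed value, +25%
    print(f"b={b}: price={eq_price(100, b, 10, 3):.2f} vs baseline {baseline:.2f}")
```

If small errors in an assumed parameter swing the prediction substantially, the model's conclusions deserve correspondingly less weight.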



Principles of Economics Translated by Yoram Bauman


Thursday 30 June 2022

Scientific Facts have a Half-Life - Life is Poker not Chess 4

Abridged and adapted from Thinking in Bets by Annie Duke





The Half-Life of Facts, by Samuel Arbesman, is a great read about how practically every fact we’ve ever known has been subject to revision or reversal. The book talks about the extinction of the coelacanth, a fish from the Late Cretaceous period. This was the period that also saw the extinction of dinosaurs and other species. In the late 1930s and independently in the mid 1950s, coelacanths were found alive and well. Arbesman quoted a list of 187 species of mammals declared extinct, more than a third of which have subsequently been discovered as un-extinct.


Given that even scientific facts have an expiration date, we would all be well advised to take a good hard look at our beliefs, which are formed and updated in a much more haphazard way than in science.


We would be better served as communicators and decision makers if we thought less about whether we are confident in our beliefs and more about how confident we are about each of our beliefs. What if, in addition to expressing what we believe, we also rated our level of confidence about the accuracy of our belief on a scale of zero to ten? Zero would mean we are certain a belief is not true. Ten would mean we are certain that our belief is true. Forcing ourselves to express how sure we are of our beliefs brings to plain sight the probabilistic nature of those beliefs: what we believe is almost never 100% or 0% accurate but, rather, somewhere in between.
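Purely as an illustration of the zero-to-ten idea (this sketch is not from the book), ratings can be converted to probabilities and scored against outcomes with a Brier score, where lower is better. Blanket certainty scores worse than honest middling confidence when the beliefs turn out to be right only half the time:

```python
# Illustrative: score 0-10 confidence ratings against actual outcomes.
# 0 = certain the belief is false, 10 = certain it is true.

def brier_score(ratings, outcomes):
    """Mean squared error of the implied probabilities; lower is better."""
    probs = [r / 10 for r in ratings]
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Four beliefs, half of which turned out true (1) and half false (0):
outcomes = [1, 0, 1, 0]
print(brier_score([10, 10, 10, 10], outcomes))  # 0.5  (blanket certainty)
print(brier_score([5, 5, 5, 5], outcomes))      # 0.25 (honest uncertainty)
```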


Incorporating uncertainty in the way we think about what we believe creates open-mindedness, moving us closer to a more objective stance towards information that disagrees with us. We are less likely to succumb to motivated reasoning, since it feels better to make small adjustments in degrees of certainty instead of having to grossly downgrade from ‘right’ to ‘wrong’. This shifts us away from treating information that disagrees with us as a threat, as something we have to defend against, making us better able to truthseek.


There is no sin in finding out there is evidence that contradicts what we believe. The only sin is in not using that evidence as objectively as possible to refine that belief going forward. By saying, ‘I’m 80%’ and thereby communicating we aren’t sure, we open the door for others to tell us what they know. They realise they can contribute without having to confront us by saying or implying, ‘You’re wrong’. Admitting we are not sure is an invitation for help in refining our beliefs and that will make our beliefs more accurate over time as we are more likely to gather relevant information.


Acknowledging that decisions are bets based on our beliefs, getting comfortable with uncertainty and redefining right and wrong are integral to a good overall approach to decision making.


Wednesday 29 June 2022

Being smart makes our bias worse: Life is Poker not Chess - 3

 

Abridged and adapted from Thinking in Bets by Annie Duke


We bet based on what we believe about the world. This is very good news: part of the skill in life comes from learning to be a better belief calibrator, using experience and information to more objectively update our beliefs to more accurately represent the world. The more accurate our beliefs, the better the foundation of the bets we make. However, there is also some bad news: our beliefs can be way, way off.


Hearing is believing


We form beliefs in a haphazard way, believing all sorts of things based just on what we hear out in the world but haven’t researched for ourselves.


This is how we think we form abstract beliefs:


  1. We hear something

  2. We think about it and vet it, determining whether it is true or false; only after that

  3. We form our belief


It turns out though, that we actually form abstract beliefs this way:


  1. We hear something

  2. We believe it to be true

  3. Only sometimes, later, if we have the time or the inclination, we think about it and vet it, determining whether it is, in fact, true or false.


These belief-formation methods evolved for efficiency, not accuracy. In fact, questioning what you see or hear could get you eaten in the jungle. But although most of us no longer live in a jungle, we have failed to develop the high degree of scepticism needed to deal with the material available in the modern social media age. This general belief-formation process can affect our decision making in areas with significant consequences.


If we were good at updating our beliefs based on new information, our haphazard belief formation process might cause relatively few problems. Sadly, we form beliefs without vetting most of them, and maintain them even after receiving clear, corrective information.


Truthseeking, the desire to know the truth regardless of whether the truth aligns with the beliefs we currently hold, is not naturally supported by the way we process information. Instead of altering our beliefs to fit new information, we do the opposite, altering our interpretation of that information to fit our beliefs.


The stubbornness of beliefs


Once a belief is lodged, it becomes difficult to dislodge. It takes on a life of its own, leading us to notice and seek out evidence confirming our belief, rarely challenge the validity of confirming evidence, and ignore or work hard to actively discredit information contradicting the belief. This irrational, circular information processing pattern is called motivated reasoning.


Fake news works because people who already hold beliefs consistent with the story generally won’t question the evidence. Disinformation is even more powerful because the confirmable facts in the story make it feel like the information has been vetted, adding to the power of the narrative being pushed.


Fake news isn’t meant to change minds. The potency of fake news is that it entrenches beliefs its intended audience already has, and then amplifies them. The Internet is a playground for motivated reasoning. It provides the promise of access to a greater diversity of information sources and opinions than we’ve ever had available. Yet, we gravitate towards sources that confirm our beliefs. Every flavour is out there, but we tend to stick with our favourite. 


Even when directly confronted with facts that disconfirm our beliefs, we don’t let facts get in the way.


Being smart makes it worse


Surprisingly, being smart can actually make bias worse. The smarter you are, the better you are at constructing a narrative that supports your beliefs, rationalising and framing the data to fit your argument or point of view.


Blind spot bias is an irrationality whereby people are better at recognising biased reasoning in others but are blind to bias in themselves. Research has found that blind spot bias is greater the smarter you are. Furthermore, people who were aware of their own biases were no better able to overcome them.


Dan Kahan discovered that the more numerate people made more mistakes interpreting data on emotionally charged topics than the less numerate subjects sharing the same beliefs. It turns out the better you are with numbers, the better you are at spinning those numbers to conform to and support your beliefs.


Wanna bet?


Imagine taking part in a conversation with a friend about the movie Chintavishtayaya Shyamala: the best film of all time, one that introduced a bunch of new techniques by which directors could contribute to storytelling. ‘Obviously, it won the national award,’ you gush, as part of a list of superlatives the film unquestionably deserves.


Then your friend says, ‘Wanna bet?’


Suddenly, you are not so sure. That challenge puts you on your heels, causing you to back off your declaration and question the belief that you just declared with such assurance.


Remember the order in which we form abstract beliefs:


  1. We hear something

  2. We believe it to be true

  3. Only sometimes, later, if we have the time or the inclination, we think about it and vet it, determining whether it is, in fact, true or false.


‘Wanna bet?’ triggers us to engage in that third step that we only sometimes get to. Being asked if we're willing to bet money on it makes it much more likely that we will examine our information in a less biased way, be more honest with ourselves about how sure we are of our beliefs, and be more open to updating and calibrating our beliefs. The more objective we are, the more accurate our beliefs become. And the person who wins bets over the long run is the one with the more accurate beliefs.
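A sketch of why more accurate beliefs win bets over the long run (illustrative, not from the book): the expected value of an even-money bet depends only on the true probability of the claim, not on how certain the bettor feels.

```python
# Illustrative: expected value per unit staked of an even-money bet.
# A win pays +stake, a loss costs -stake; p_true is the real probability
# that the claim being bet on is correct.

def expected_value(p_true, stake=1.0):
    return p_true * stake - (1 - p_true) * stake

# Feeling certain doesn't change the maths; only p_true does:
print(expected_value(0.55))  # modestly positive: a good bet
print(expected_value(0.40))  # negative: a losing bet, however confident you feel
```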


Of course, in most instances, the person offering to bet isn’t actually looking to put any money on it. They are just making a point - a valid point that perhaps we overstated our conclusion or made our statement without including relevant caveats.


I’d imagine that if you went around challenging everyone with ‘Wanna bet?’ it would be difficult to make friends and you’d lose the ones you have. But that doesn’t mean we can’t change the framework for ourselves in the way we think about our biases, decisions and opinions.


Thursday 16 June 2022

Why we trust fraudsters

From Enron to Wirecard, elaborate scams can remain undetected long after the warning signs appear. What are investors missing? Dan McCrum in The FT

In March 2020, the star English fund manager Alexander Darwall spoke admiringly to the chief executive of one of the largest investments in his award-winning portfolio. “The last set of numbers are fantastic,” he gushed, adding: “This is a crazy situation. People should be looking at your company and saying ‘wow’. I’m delighted, I’m delighted to be a shareholder.” 

Seated in a swivel chair at his personal conference table, Markus Braun sounded relaxed. The billionaire technologist was dressed all in black, a turtleneck under his suit like some distant Austrian cousin of the late Steve Jobs, and he had little to say about swirling allegations the company had faked its profits for years. “I am very optimistic,” he offered, when Darwall voiced his hope that the controversy would amount to nothing more than growing pains at a fast expanding company. 

“I haven’t sold a single share,” Darwall assured him, doing most of the talking, while also acknowledging how precarious the situation was. The Financial Times had reported in October 2019 that large portions of Wirecard’s sales and profits were fraudulent, and published internal company documents stuffed with the names of fake clients. A six-month “special audit” by the accounting firm KPMG was approaching completion. “If it shows anything that senior people misled, that would be a disaster,” Darwall said. 

His assessment proved correct. Three months later the company collapsed like a house of cards, punctuated by a final lie: that €1.9bn of its cash was “missing”. In fact, the money never existed and Wirecard had for years relied on a fraud that was almost farcical in its simplicity: a few friends of the company claimed to manage huge amounts of business for Wirecard, with all the vast profits from these partners said to be collected in special bank accounts overseen by a Manila-based lawyer with a YouTube following. Braun, who claims to be a victim of a protégé with security services connections who masterminded the scheme and then absconded to Belarus, faces a trial this autumn alongside two subordinates that will examine how the final years of the fraud were accomplished. 

Left behind in the ashes, however, is a much larger question, one which haunts all victims of such scams: how on earth did they get away with it for so long? Wirecard had faced serious questions about the integrity of its accounts since at least 2010. Estimates of losses run to more than €20bn, never mind the damage to Frankfurt’s reputation as a financial centre. Why did so many inside and outside the company — a long list of investors, bankers, regulators, prosecutors, auditors and analysts — look at the evidence that Wirecard was too good to be true and decide to trust Braun? 

---

In 2019 I worked with whistleblowers to expose Wirecard, using internal documents to show the true source of its spellbinding growth in sales and profit. As I faced Twitter vitriol and accusations I was corrupt, the retired American short-seller Marc Cohodes regularly rang me from wine country on the US west coast to deliver pep talks and describe his own attempts to persuade German journalists to see Wirecard’s true colours. “Keep going Dan. I always say, ‘there’s never just one cockroach in the kitchen’.” 

He was right on that point: find one lie and another soon follows. But short-sellers who search for overvalued companies to bet against are unusual, because they go looking for fraud and skulduggery. Most investors are not prosecutors fitting facts into a pattern of guilt: they don’t see a cockroach at all. 

Think of Elizabeth Holmes, another aficionado of the black turtleneck, who persuaded a group of experts and well-known investors to back or advise her company, Theranos, based on the claim it had technology able to deliver medical results from an improbably small pinprick of blood. The involvement of reputable people and institutions — including retired general James Mattis, former secretary of state Henry Kissinger and former Wells Fargo chief executive Richard Kovacevich as board members — seemed to confirm that all was well. 

Another problem is that complex frauds have a dark magic that is different to, say, “Count” Victor Lustig personally persuading two scrap metal dealers he could sell them a decaying Eiffel Tower in 1925. As Dan Davies wrote in his history of financial scams, Lying for Money, “the way in which most white-collar crime works is by manipulating institutional psychology. That means creating something that looks as much as possible like a normal set of transactions. The drama comes much later, when it all unwinds.”  

What such frauds exploit is the highly valuable character of trust in modern economies. We go through life assuming the businesses we encounter are real, confident that there are institutions and processes in place to check that food standards are met or accounts are prepared correctly. Horse meat smugglers, Enron and Wirecard all abused trust in complex systems as a whole. To doubt them was to doubt the entire structure, which is what makes their impact so insidious; frauds degrade faith in the whole system. 

Trust means not wasting time on pointless checks. Most deceptions would generally have been caught early on by basic due diligence, “but nobody does confirm the facts. There are just too bloody many of them”, wrote Davies. It makes as much sense for a banker to visit every outpost of a company requiring a loan as it would for the buyer of a pint of milk to inquire after the health of the cow. For instance, by the time John Paulson, one of the world’s most famous and successful hedge fund managers, became the largest shareholder in Canadian-listed Sino Forest, its shares had traded for 15 years. Until the group’s 2011 collapse, few thought of travelling to China to see if its woodlands were there. 

---

Yet what stands out in the case of Wirecard are the many attempts to check the actual facts. In 2015 a young American investigator, Susannah Kroeber, tried to knock on the doors at several remote Wirecard locations. Between 2010 and 2015 the company claimed to have grown in a series of leaps and bounds by buying businesses all over Asia for tens of millions of euros apiece. In Laos she found nothing at all, in Cambodia only traces. Wirecard’s reception area in Vietnam was like a school lunchroom; the only furniture was a picnic table for six and an open bicycle lock hung from one of the internal doors, a common security measure usually removed at a business expecting visitors. The inside was dim, with only a handful of people visible and many desks empty. She knew something wasn’t right, but she also told me that while she went half mad looking for non-existent addresses on heat-baked Southeast Asian dirt roads, she had an epiphany: “Who in their right mind would go to these lengths just to check out a stock investment?” 

Even when Kroeber’s snapshots of empty offices were gathered into a report for her employer, J Capital Research, and presented to Wirecard investors, the response reflected preconceived expectations: these are reputable people, EY is a good auditor, why would they be lying? The short seller Leo Perry described attending an investor meeting where the report was discussed. A French fund manager responded by reporting his own due diligence. He’d asked his secretary to call Wirecard’s Singapore office, the site of its Asian headquarters, and could happily report that someone there had picked up the phone. 

The shareholders reacted at an emotional level, showing how fraud exploits human behaviour. “When you’re invested in the success of something, you want to see it be the best it can be, you don’t pay attention to the finer details that are inconsistent”, says Martina Dove, author of The Psychology of Fraud, Persuasion and Scam Techniques. She adds that social proof and deference to authority, such as expert accounting firms, were powerful forces when used to spread the lies of crooks: “If a friend recommends a builder, you trust that builder because you trust your friend.” 

Wirecard’s response, in addition to taking analysts on a tour of hastily staffed offices in Asia, was to drape itself in complexity. Like WeWork, the office space provider that presented itself as a technology company (and which wasn’t accused of fraud), Wirecard waved a wand of innovation to make an ordinary business appear extraordinary. 

At heart, Wirecard’s legitimate operations processed credit and debit card payments for retailers all over the world. It was a competitive field with many rivals, but Wirecard claimed to have become a European PayPal and more, outpacing the competition with profit margins few could match. Wirecard was “a technology company with a bank as a daughter”, Braun said, one using artificial intelligence and cutting-edge security. As the share price rose, so did Braun’s standing as a technologist who heralded the arrival of a cashless society. Who were mere investors to suggest that the results of this whirligig, with operations in 40 countries, were too good to be true? 

It seems to me Wirecard used a similar tactic to the founder of software group Autonomy, Mike Lynch, who charged that critics simply didn’t understand the business. (Lynch has lost a civil fraud trial relating to the $11bn sale of the group, denied any wrongdoing, said he will appeal, and is fighting extradition to the US to face fraud charges. Autonomy’s former CFO was convicted of fraud in separate American proceedings.) 

When this publication presented internal documents describing a book-cooking operation in Singapore, Wirecard focused on the amounts at stake, which were initially small, rather than the unpunished practices of forgery and money laundering, which were damning. 

Then there was the thrall of German officials. Three times, in 2008, 2017, and 2019, the financial market regulator BaFin publicly investigated critics of Wirecard, taken by observers as a signal of support. Indeed, BaFin fell for the big lie when faced with an unenviable choice of circumstances: either foreign journalists and speculators were conspiring to attack Germany’s new technology champion using the pages of a prominent newspaper; or senior executives at a DAX 30 company were lying to prosecutors, as well as some of Germany’s most prestigious banks and investment houses. Acting on a claim by Wirecard that traders knew about an FT story before publication, regulators suspended short selling of the stock to protect the integrity of financial markets. 

Proximity to the subject won out, but the German authorities were hardly the first to fail in this way. Their US counterparts ignored the urging of Harry Markopolos to investigate the Ponzi schemer Bernard Madoff, a former chairman of the Nasdaq whose imaginary $65bn fund sent out account statements run off a dot matrix printer. 

---

For some long-term investors, to doubt Wirecard was surely to doubt themselves. Darwall first invested in 2007, when the share price was around €9. As it rose more than tenfold, his investment prowess was recognised accordingly, attracting money to the funds he ran for Jupiter Asset Management, and fame. He knew the Wirecard staff; they had provided advice on taking payments for his wife’s holiday rental. Naturally he trusted Braun. 

Darwall did not respond to requests for comment made to his firm, Devon Equity Management. 

In the buildings beyond the shades of Braun’s office, staff rationalised what didn’t fit. Wirecard was a tech company, yet in early 2016 it suffered a tech disaster. On a quiet Saturday afternoon, running down a list of routine maintenance, a tech guy made a typo. He entered the wrong command when decommissioning a Linux server. Instead of taking out the one machine, he watched with rising panic as it killed all of them, pulling the plug on almost the entire company’s operations without warning. 

Customers were in the dark, as email was offline and Wirecard had no weekend helpline, and it took days for services to recover. Following the incident, a small but notable proportion of clients left and new business was put on hold as teams placated those they already had, staff recalled. Yet the pace of growth in the published numbers remained strong. 

Martin Osterloh, a salesman at Wirecard for 15 years, put the mismatch between claims and capabilities down to spin. Only after the fall was the extent of Wirecard’s hackers, private detectives, intimidation and legal threats exposed to the light. Haphazard lines of communication, disorganisation and poor record keeping created excuses for middle-ranking Wirecard staff and its supervisory board, stories to tell themselves about a failure to integrate acquisitions and a start-up culture of experimentation. 

It was perhaps not as hard to believe as we might think. Facebook, which has probed the legal boundaries of surveillance capitalism, famously encouraged staff to “move fast and break things”. Business questions often shade grey before they turn black. As Andrew Fastow said of his own career as a fraudster, “I wasn’t the chief finance officer at Enron, I was the chief loophole officer.” 

Braun’s protégé was chief operating officer Jan Marsalek, a mercurial Austrian who constantly travelled and struck deals, with no real team to speak of. Boasting that he only slept “in the air”, he would appear at headquarters from one flight with a copy of Sun Tzu’s The Art of War tucked under an arm, then leave a few hours later for the next. Questions were met with a shrug: strange arrangements simply reflected Marsalek’s “chaotic genius”. As scrutiny intensified in the final 18 months, the fraudulent imitation shifted to problem solving, allowing board members and staff to think they were engaged in procedures to improve governance. 

After the collapse I shared pretzels with Osterloh on a snowy day in Munich and he seemed embarrassed by events. He and thousands of others had worked on a real business, until they were summarily fired and learned it lost money hand over fist. Osterloh spoke for many when he said: “I’m like the idiot guy in a movie, I got to meet all these guys. The question arises, why were we so naive? And I can’t really answer that question.”  

Sunday 5 June 2022

THE PERFORMATIVE POLITICIAN



Nadeem F Paracha in The Dawn

Illustration by Abro



Populism is a way of framing political ideas that can be filled with a variety of ideologies (C. Mudde in Current History, 2014). These ideologies can come from the left or the right. Populism in itself is not a distinct ideology. It is a performative political style.

No matter where it’s coming from, it is manifested through a particular set of animated gestures, images, tones and symbols (B. Moffitt, The Global Rise of Populism, 2016). At the core of it is a narrative containing two main ‘villains’: The ‘elites’ and ‘the other’. Elites are described as being corrupt. And ‘the other’ is demonised as being a threat to the beliefs and values of the ‘majority’.

Populists begin by glorifying the ‘besieged’ polity as noble. They then begin to frame the polity’s civilisation as ‘sacred’. Therefore, the mission to eradicate threats, in this context, becomes a sacred cause. The far-right parties in Europe want to protect Europe’s Christian identity from Muslim intruders. They see Muslim immigration to European countries as an invasion.

Yet, these far-right groups are largely secular. They do not propose the creation of a Christian theocracy. Instead, they understand modern European civilisation as the outcome of its illustrious Christian past. They frame the Muslim immigrant as ‘the other’ who has arrived from a lesser civilisation. So, according to far-right populists in Europe, the Muslim other — tolerated and facilitated by a political elite — starts to undermine the Christian values that aided European civilisations to become ‘great’.

Ironically, most far-right outfits in Europe that espouse such notions are largely critical of conventional Christian institutions. They see them as being too conservative towards modern European values. Far-right outfits are not overtly religious at all — even though their fiery populist rhetoric frames their cause as a sacred undertaking to protect the civilisational role of Christianity in shaping European societies.

Thus, European far-right populists adopt Christianity not as a theocratic-political doctrine, but as an identity marker to differentiate themselves from Muslims (Saving The People, ed. O. Roy, 2016). It is therefore naive to understand issues such as Islamophobia as a tussle between Christianity and Islam. Neither is it a clash between modernity and anti-modernity, as such.

The actions of some Islamist extremists, and the manner in which these were framed by popular media, made Muslim migrants in the West a community that could be easily moulded into a feared ‘other’ by populists. If one takes out the Muslim migrants from the equation, the core narrative of far-right populists will lose its sting.

Muslims in this regard have become ‘the other’ in India as well. Hindu nationalism is challenging the old, ‘secular’ political elite by claiming that this elite was serving Muslim interests to maintain its political hegemony, and that it was repressing values, beliefs and memories of a Hindu civilisation that was thriving before being invaded and dismantled by Muslim invaders.

Here too, the populist Hindu nationalists are not necessarily devout and pious. And when they are, then the actions in this respect are largely performative rather than doctrinal. That’s why, today, a harmless Hindu ritual and the act of emotionally or physically assaulting a Muslim, may carry similar performative connotations. For example, a militant Hindu nationalist mob attacking a Muslim can be conceived by the attackers as a sacred ritual.

Same is the case in Pakistan. The researcher Muhammad Amir Rana has conducted several interviews of young Islamist militants who were arrested and put in rehabilitation programmes. Almost all of them were told by their ‘handlers’ that self-sacrifice was a means to create an Islamic state/caliphate that would wipe out poverty, corruption and immorality, and provide justice. This idea was programmed into them to create a ‘self’ in relation to an opposite or ‘the other’. The other in this respect were heretics and infidels who were conspiring to destroy Islam.

When an Islamist suicide bomber blows him/herself up in public, or when extremists desecrate Ahmadiyya graves, or a mob attacks an alleged blasphemer, each of them believes they are undertaking a sacred ritual that is not that different from the harmless ones. But Islamist militants are not populists. They have dogmatic doctrines or are deeply indoctrinated.

Not so, the populists. Populists are great hijackers of ideas. There’s nothing original or deep about them. Everything remains on the surface. Take, for instance, the recently ousted Pakistani PM Imran Khan. He unabashedly steals ideas from the left and the right. His core constituency, which is not so attuned to history, perceives these ideas as being entirely new. Everything he says or claims to have done, becomes ‘for the first time in the history of Pakistan.’

But being a populist, it wasn’t enough for Khan to frame his ‘struggle’ (against ‘corrupt elites’) as a noble cause. It needed to be manifested as a sacred conviction. So, from 2014 onwards, he increasingly began to lace his speeches with allusions to fighting for justice and morality by treading a path laid out by Islam’s sacred texts and personalities. He then began to describe this undertaking as a ‘jihad’.

These were/are pure populist manoeuvres and entirely performative. Once the cause transformed into becoming a ‘jihad’, it not only required rhetoric culled from Islamist evangelists and then put in the context of a ‘political struggle’, but it also needed performed piety — carrying prayer beads, being constantly photographed while saying obligatory Muslim prayers, embracing famous preachers, etc.

And since ‘jihad’ in the popular imagination is often perceived to be something aggressive and manly, Khan poses as an outspoken and fearless saviour of not only the people of Pakistan, but also of the ‘ummah’.

Yet, by all accounts, he is not very religious. He’s not secular either. But this is how populists are. They are basically nothing. They are great performers who can draw devotion from a great many people — especially those who are struggling to formulate a political identity for themselves. There are no shortcuts to this. But populists provide them shortcuts.

Khan is a curious mixture of an Islamist and a brawler. But both of these attributes mainly reside on the surface and in his rhetoric. The only aim one can say is lingering underneath the surface is an inexhaustible ambition to be constantly admired and, of course, to rule as a North Korean premier does: conjuring lots of adulation, but zero opposition.

Monday 30 May 2022

On Fact, Opinion and Belief

 Annie Duke in 'Thinking in Bets'


What exactly is the difference between fact, opinion and belief?

Fact: (noun): A piece of information that can be backed up by evidence.

Belief: (noun): A state or habit of mind in which trust or confidence is placed in some person or thing; something accepted or considered to be true.

Opinion: (noun): A view, judgement or appraisal formed in the mind about a particular matter.

The main difference here is that facts can be verified, while opinions and beliefs cannot. Until relatively recently, most people would count as facts things like numbers, dates and photographic accounts that we can all agree upon.

More recently, it has become commonplace to question even the most mundane objective sources of fact, like eyewitness accounts, and credible peer-reviewed science, but that is a topic for another day.

 How we think we form our beliefs:

  1. We hear something;

  2. We think about it and vet it, determining whether it is true or false; only after that

  3. We form our belief

Actually, we form our beliefs:

  1. We hear something;

  2. We believe it to be true;

  3. Only sometimes, later, if we have the time or the inclination, we think about it and vet it, determining whether it is, in fact, true or false.


As psychology professor Daniel Gilbert puts it, “People are credulous creatures who find it very easy to believe and very difficult to doubt”. 


Our default is to believe that what we hear and read is true. Even when that information is clearly presented as being false, we are still likely to process it as true.

For example, some people believe that we use only 10% of our brains. If you hold that belief, did you ever research it for yourself?

People usually say it is something they heard, but they have no idea where or from whom. Yet they are confident that it is true. That should be proof enough that the way we form beliefs is foolish. (And we actually use all parts of our brain.)

Our beliefs drive the way we process information. We form beliefs without vetting or testing most of them, and we maintain them even after receiving clear, corrective information.

Once a belief is lodged, it becomes difficult to dislodge from our thinking. It takes on a life of its own, leading us to notice and seek out evidence confirming the belief. We rarely challenge the validity of confirming evidence, and we ignore or work hard to actively discredit information contradicting the belief. This irrational, circular information-processing pattern is called motivated reasoning.

Truth Seeking, the desire to know the truth regardless of whether the truth aligns with the beliefs we currently hold, is not naturally supported by the way we process information.

Instead of altering our beliefs to fit new information, we do the opposite, altering our interpretation of that information to fit our beliefs.

Fake news works because people who already hold beliefs consistent with the story generally won’t question the evidence. The potency of fake news is that it entrenches beliefs its intended audience already has, and then amplifies them. Social media is a playground for motivated reasoning. It provides the promise of access to a greater diversity of information sources and opinions than we’ve ever had available. Yet, we gravitate towards sources that confirm our beliefs, that agree with us. Every flavour is out there, but we tend to stick with our favourite. 

Even when directly confronted with facts that disconfirm our beliefs, we don’t let facts get in the way of our opinions.


Being Smart Makes It Worse


The popular wisdom is that the smarter you are, the less susceptible you are to fake news or disinformation. After all, smart people are more likely to analyse and effectively evaluate where the information is coming from, right? Part of being ‘smart’ is being good at processing information, parsing the quality of an argument and the credibility of the source.


Surprisingly, being smart can actually make bias worse. The smarter you are, the better you are at constructing a narrative that supports your beliefs, rationalising and framing the data to fit your argument or point of view.


Unfortunately, this is just the way evolution built us. We are wired to protect our beliefs even when our goal is to truthseek. This is one of the instances where being smart and aware of our capacity for irrationality alone doesn’t help us refrain from biased reasoning. As with visual illusions, we can’t make our minds work differently than they do no matter how smart we are. Just as we can’t unsee an illusion, intellect or will power alone can’t make us resist motivated reasoning.


Wanna Bet?


Imagine taking part in a conversation with a friend about the movie Citizen Kane. Best film of all time; it introduced a bunch of new techniques by which directors could contribute to storytelling. “Obviously, it won the best picture Oscar,” you gush, as part of a list of superlatives the film unquestionably deserves.


Then your friend says, “Wanna bet?”


Suddenly, you’re not so sure. That challenge puts you on your heels, causing you to back off your declaration and question the belief that you just declared with such assurance.


When someone challenges us to bet on a belief, signalling their confidence that our belief is inaccurate in some way, ideally it triggers us to vet the belief, taking an inventory of the evidence that informed us.


  • How do I know this?

  • Where did I get this information?

  • Who did I get it from?

  • What is the quality of my sources?

  • How much do I trust them?

  • How up to date is my information?

  • How much information do I have that is relevant to the belief?

  • What other things like this have I been confident about that turned out not to be true?

  • What are the other plausible alternatives?

  • What do I know about the person challenging my belief?

  • What is their view of how credible my opinion is?

  • What do they know that I don’t know?

  • What is their level of expertise?

  • What am I missing?


Remember the order in which we form our beliefs:


  1. We hear something;

  2. We believe it to be true;

  3. Only sometimes, later, if we have the time or the inclination, we think about it and vet it, determining whether it is, in fact, true or false.


“Wanna bet?” triggers us to engage in that third step that we only sometimes get to. Being asked if we are willing to bet money on it makes it much more likely that we will examine our information in a less biased way, be more honest with ourselves about how sure we are of our beliefs, and be more open to updating and calibrating our beliefs.
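The text stops short of saying how a vetted belief should actually be revised; the standard quantitative form of such an update is Bayes’ rule. A minimal sketch, with numbers invented purely for illustration:

```python
def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: revise the probability a claim is true after one piece of evidence."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Start 90% sure a claim is true, then see evidence that is twice as
# likely to appear if the claim is false (0.6) as if it is true (0.3).
belief = update(0.90, 0.30, 0.60)
print(round(belief, 3))  # 0.818 -- still confident, but measurably less so
```

The output makes the excerpt’s point concrete: the revised belief is neither 0% nor 100%; evidence moves confidence by degrees rather than flipping it outright.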


A lot of good can result from someone saying, “Wanna bet?” Offering a wager brings the risk out in the open, making explicit what is implicit (and frequently overlooked). The more we recognise that we are betting on our beliefs (with our happiness, attention, health, money, time or some other limited resource), the more likely we are to temper our statements, getting closer to the truth as we acknowledge the risk inherent in what we believe.


Once we start doing this (at the risk of losing friends), we are more likely to recognise that there is always a degree of uncertainty, that we are generally less sure than we thought we were, and that practically nothing is black and white, 0% or 100%. And that's a pretty good philosophy for living.