
Saturday 13 April 2024

The myth of the second chance

Janan Ganesh in The FT


In the novels of Ian McEwan, a pattern recurs. The main character makes a mistake — just one — which then hangs over them forever. A girl misidentifies a rapist, and in doing so shatters three lives, including her own (Atonement). A man exchanges a lingering glance with another, who becomes a tenacious stalker (Enduring Love). A just-married couple fail to have sex, or rather have it badly, and aren’t themselves again, either as individuals or as a pair (On Chesil Beach). Often, the mistake reverberates over much of the 20th century.  

This plot trick is said to be unbecoming of a serious artist. McEwan is accused of an obsession with incident that isn’t true to the gradualism and untidiness of real life. Whereas Proust luxuriates in the slow accretion of human experience, McEwan homes in on the singular event. It is too neat. It is written to be filmed. 

Well, I am old enough now to observe peers in their middle years, including some disappointed and hurt ones. I suggest it is McEwan who gets life right. The surprise of middle age, and the terror of it, is how much of a person’s fate can boil down to one misjudgement.  

Such as? What in particular should the young know? If you marry badly — or marry at all, when it isn’t for you — don’t assume the damage is recoverable. If you make the wrong career choice, and realise it as early as age 30, don’t count on a way back. Even the decision to go down a science track at school, when the humanities turn out to be your bag, can mangle a life. None of these errors need consign a person to eternal and acute distress. But life is path-dependent: each mistake narrows the next round of choices. A big one, or just an early one, can foreclose all hope of the life you wanted. 

There should be more candour about this from the people who are looked to (and paid) for guidance. The rise of the advice-industrial complex — the self-help podcasts, the chief executive coaches, the men’s conferences — has been mostly benign. But much of the content is American, and reflects the optimism of that country. The notion of an unsalvageable mistake is almost transgressive in the land of second chances.  

Also, for obvious commercial reasons, the audience has to be told that all is not lost, that life is still theirs to shape deep into adulthood. No one is signing up to the Ganesh Motivational Bootcamp (“You had kids without thinking it through? It’s over, son”) however radiant the speaker. 

A mistake, in the modern telling, is not a mistake but a chance to “grow”, to form “resilience”. It is a mere bridge towards ultimate success. And in most cases, quite so. But a person’s life at 40 isn’t the sum of most decisions. It is skewed by a disproportionately important few: sometimes professional, often romantic. Get these wrong, and the scope for retrieving the situation is, if not zero, then overblown by a culture that struggles to impart bad news.  

Martin Amis, that peer of McEwan’s, once attempted an explanation of the vast international appeal of football. “It’s the only sport which is usually decided by one goal,” he theorised, “so the pressure on the moment is more intense in football than any other sport.” His point is borne out across Europe most weekends. A team hogs the ball, creates superior chances, wins more duels — and loses the game to one error. It is, as the statisticians say, a “stupid” sport.   

But it is also the one that most approximates life outside the stadium. I am now roughly midway through that other low-scoring game. Looking around at the distress and regret of some peers, I feel sympathy, but also amazement at the casualness with which people entered into big life choices. Perhaps this is what happens when ideas of redemption and resurrection — the ultimate second chance — are encoded into the historic faith of a culture. It takes a more profane cast of mind to see through it.

Monday 8 May 2023

Negotiation in the age of the dual-career couple

Stefan Stern in The FT

To mark the recent centenary of the Harvard Business Review, editor-in-chief Adi Ignatius dipped into the archive and found, among other things, an article from 1956 titled “Successful Wives of Successful Executives”.

“It is the task of the wife to co-operate in working towards the goals set by her husband,” the article stated. “This means accepting — or perhaps encouraging — the business trips, the long hours at the office, and the household moves dictated by his business career.”

It got worse. The husband, the piece continued, “may meet someone who conforms more closely to the new social standards he has acquired while moving socially upward; he may discard his wife either by taking a new wife or by concentrating all his attention on his business.” Yuk.

The rise of the dual-career couple has transformed the politics of marriage since the 1950s but some tensions remain. A recently published book declares: “The most important career decision you’ll make is about whom to marry and what kind of relationship you will have.”

The words appear in “Money and Love: an Intelligent Road Map for Life’s Biggest Decisions”, written by Myra Strober, professor emerita at Stanford University, and Abby Davisson, a former executive at retailer Gap, and now a consultant.

The book takes a both/and rather than an either/or approach to the issues surrounding professional and domestic life. The authors reject an artificial notion of “balance”. Instead there are necessary, hard-headed but human trade-offs. “If you want lives that are not just two individuals pursuing career aspirations separately, then it takes a lot of negotiation and a lot of discussion, and compromise,” Davisson explained when I met the authors in London.

Strober led a course called “work and family” at Stanford’s graduate school of business (SGSB) for several decades until her retirement in 2018. She was one of the first female faculty members there on her appointment in the early 1970s.

“If I had proposed my course at the business school would be called ‘money and love’ instead of ‘work and family’ I would have had some pushback,” she told me. But wasn’t this in California in the days following the “summer of love”? “The business school was not buying that then either!” she noted.

Perhaps inevitably, in a book written by a business school professor and graduate, there is a checklist or framework to help the reader make better life decisions. These are the five Cs: to clarify what is important; to communicate effectively with a partner (or potential partner); to consider a broad range of choices, avoiding crude either/or decisions; to check in with a sounding board of friends and family; and to explore the likely short-term and long-term consequences of any big decisions.

Actions will count as much as the thought processes that precede them.

Davisson said: “The mental models that we have, particularly from our parents, are incredibly powerful.” If you don’t see what an equal partnership looks like in your home, she added, it might be hard to imagine one.

“I have two boys,” she said, “and they see my husband as the head chef. They think it’s funny when I cook . . . They will have this model of us sharing the workload. All the home responsibilities do not fall on one person.”

During the Covid pandemic, employees, parents and carers had their roles blended as they worked from home and tried to keep family life going. For some, that was an opportunity to share the domestic workload more equally; for others, it made the mythical work/life balance harder to achieve.

The authors say more is needed. “We need to invest in excellent childcare,” Strober said. “This is something business leaders need to be thinking about.” Davisson added: “We see birth rates falling, people not wanting to fund the cost, and then we wonder why people are not having more children.”

Although Strober’s course was greatly valued by students — with men, incidentally, making up 40 per cent of participants — SGSB chose not to continue it after her retirement.

That risks the business school reverting to a too narrow focus on money and how to make it — without thinking about the human factor.

Strober is all too familiar with that split. She cites the 18th-century philosopher Adam Smith’s two books: The Wealth of Nations, which covers free markets and the workings of the economy; and The Theory of Moral Sentiments, which focuses on social cohesion and relationships.

“Most people only know about The Wealth of Nations,” said Strober. “It’s too bad that he separated out those two books. Had he blended the discussion of wealth with the discussion of altruism we might not be quite so separated on them.”

We need both money and love. “Having money isn’t worth it unless you also have love,” Davisson said. And Strober’s final piece of advice? “The trick is to find someone to be your life partner who has the same philosophy as you do.”

Friday 4 November 2022

Quitting is underrated

We are far too stubborn, committing to an idea, job or romantic partner even when it becomes clear we’ve made a mistake

Tim Harford in The FT

“I am a fighter and not a quitter,” said Liz Truss, the day before quitting. She was echoing the words of Peter Mandelson MP over two decades ago, although Mandelson had the good sense to speak after winning a political fight rather than while losing one. 

It’s a curious thing, though. Being a “fighter” is not entirely a compliment. It’s a prized quality in certain circumstances, but it’s not a word I’d use on my résumé or, for that matter, my Tinder bio. 

There can be little doubt about the term “quitter”, though. It is an unambiguous insult. That’s strange, because not only is there too much fighting in the world, there’s not nearly enough quitting. We are far too stubborn, sticking with an idea, a job, or a romantic partner even when it becomes clear we’ve made a mistake. 

There are few better illustrations of this than the viral popularity of “quiet quitting”, in which jaded young workers refuse to work beyond their contracted hours or to take on responsibilities beyond the job description. It’s a more poetic term than “slacking”, which is what we Gen-Xers would have called exactly the same behaviour 25 years ago. It’s also a perfectly understandable response to being overworked and underpaid. But if you are overworked and underpaid, a better response in most cases would not be quiet quitting, but simply quitting. 

I don’t mean this as a sneer at Gen-Z. I remember being utterly miserable at a job in my twenties, and I also remember how much social pressure there was to stick it out for a couple of years for the sake of making my CV seem less flaky. A flaky CV has its costs, of course. But if you’re a young graduate, so does spending two years of your life in a job you hate, while accumulating skills, experience and contacts in an industry you wish to leave. Most people cautioned me about the costs of quitting; only the wisest warned me of the costs of not quitting. 

Everything you quit clears space to try something new. Everything you say “no” to is an opportunity to say “yes” to something else. 

In her new book, Quit, Annie Duke argues that when we’re weighing up whether or not to quit, our cognitive biases are putting their thumb on the scale in favour of persistence. And persistence is overrated. 

To a good poker player — and Duke used to be a very good poker player indeed — this is obvious. “Optimal quitting might be the most important skill separating great players from amateurs,” she writes, adding that without the option to abandon a hand, poker would not be a game of skill at all. Expert players abandon about 80 per cent of their hands in the popular variant of Texas Hold’em. “Compare that to an amateur, who will stick with their starting cards over half the time.” 

What are these cognitive biases that push us towards persisting when we should quit? 

One is the sunk cost effect, where we treat past costs as a reason to continue with a course of action. If you’re at your favourite high-end shopping mall but you can’t find anything you love, it should be irrelevant how much time and money it cost you to travel to the mall. But it isn’t. We put ourselves under pressure to justify the trouble we’ve already taken, even if that means more waste. The same tendency applies to everything from relationships to multi-billion-dollar mega-projects. Instead of cutting our losses, we throw good money after bad. 

(The sunk cost fallacy is old news to economists, but it took Nobel laureate Richard Thaler to point out that if it was common enough to have a name, it was common enough to be regarded as human nature.) 

The “status quo bias” also tends to push us towards persevering when we should stop. Highlighted in a 1988 study by the economists William Samuelson and Richard Zeckhauser, the status quo bias is a tendency to reaffirm earlier decisions and cling to the existing path we’re on, rather than make an active choice to do something different. 

Duke is frustrated with the way we frame these status quo choices. “I’m not ready to make a decision,” we say. Duke rightly points out that not making a decision is itself a decision. 

A few years ago, Steve Levitt, the co-author of Freakonomics, set up a website in which people facing difficult decisions could record their dilemma, toss a coin to help them choose and later return to say what they did and how they felt about it. These decisions were often weighty, such as leaving a job or ending a relationship. Levitt concluded that people who decided to make a major change — that is, the quitters — were significantly happier six months later than those who decided against the change — that is, the fighters. The conclusion: if you’re at the point when you’re tossing a coin to help you decide whether to quit, you should have quit some time ago. 

“I am a quitter and not a fighter.” It’s not much of a political slogan. But as a rule of thumb for life, I’ve seen worse.

Friday 4 June 2021

Have you seen Groupthink in action?

Tim Harford in The FT 

In his acid parliamentary testimony last week, Dominic Cummings, the prime minister’s former chief adviser, blamed a lot of different people and things for the UK’s failure to fight Covid-19 — including “groupthink”. 

Groupthink is unlikely to fight back. It already has a terrible reputation, not helped by its Orwellian ring, and the term is used so often that I begin to fear that we have groupthink about groupthink. 

So let’s step back. Groupthink was made famous in a 1972 book by psychologist Irving Janis. He was fascinated by the Bay of Pigs fiasco in 1961, in which a group of perfectly intelligent people in John F Kennedy’s administration made a series of perfectly ridiculous decisions to support a botched coup in Cuba. How had that happened? How can groups of smart people do such stupid things? 

An illuminating metaphor from Scott Page, author of The Difference, a book about the power of diversity, is that of the cognitive toolbox. A good toolbox is not the same thing as a toolbox full of good tools: two dozen top-quality hammers will not do the job. Instead, what’s needed is variety: a hammer, pliers, a saw, a choice of screwdrivers and more. 

This is obvious enough and, in principle, it should be obvious for decision-making too: a group needs a range of ideas, skills, experience and perspectives. Yet when you put three hammers on a hiring committee, they are likely to hire another hammer. This “homophily” — hanging out with people like ourselves — is the original sin of group decision-making, and there is no mystery as to how it happens. 

But things get worse. One problem, investigated by Cass Sunstein and Reid Hastie in their book Wiser, is that groups intensify existing biases. One study looked at group discussions about then-controversial topics (climate change, same-sex marriage, affirmative action) by groups in left-leaning Boulder, Colorado, and in right-leaning Colorado Springs. 

Each group contained six individuals with a range of views, but after discussing those views with each other, the Boulder groups bunched sharply to the left and the Colorado Springs groups bunched similarly to the right, becoming both more extreme and more uniform within the group. In some cases, the emergent view of the group was more extreme than the prior view of any single member. 

One reason for this is that when surrounded by fellow travellers, people became more confident in their own views. They felt reassured by the support of others. 

Meanwhile, people with contrary views tended to stay silent. Few people enjoy being publicly outnumbered. As a result, a false consensus emerged, with potential dissenters censoring themselves and the rest of the group gaining a misplaced sense of unanimity. 

The Colorado experiments studied polarisation but this is not just a problem of polarisation. Groups tend to seek common ground on any subject from politics to the weather, a fact revealed by “hidden profile” psychology experiments. In such experiments, groups are given a task (for example, to choose the best candidate for a job) and each member of the group is given different pieces of information. 

One might hope that each individual would share everything they knew, but instead what tends to happen is that people focus, redundantly, on what everybody already knows, rather than unearthing facts known to only one individual. The result is a decision-making disaster. 

These “hidden profile” studies point to the heart of the problem: group discussions aren’t just about sharing information and making wise decisions. They are about cohesion — or, at least, finding common ground to chat about. 

Reading Charlan Nemeth’s No! The Power of Disagreement In A World That Wants To Get Along, one theme is that while dissent leads to better, more robust decisions, it also leads to discomfort and even distress. Disagreement is valuable but agreement feels so much more comfortable. 

There is no shortage of solutions to the problem of groupthink, but to list them is to understand why they are often overlooked. The first and simplest is to embrace decision-making processes that require disagreement: appoint a “devil’s advocate” whose job is to be a contrarian, or practise “red-teaming”, with an internal group whose task is to play the role of hostile actors (hackers, invaders or simply critics) and to find vulnerabilities. The evidence suggests that red-teaming works better than having a devil’s advocate, perhaps because dissent needs strength in numbers. 

A more fundamental reform is to ensure that there is a real diversity of skills, experience and perspectives in the room: the screwdrivers and the saws as well as the hammers. This seems to be murderously hard. 

When it comes to social interaction, the aphorism is wrong: opposites do not attract. We unconsciously surround ourselves with like-minded people. 

Indeed, the process is not always unconscious. Boris Johnson’s cabinet could have contained Greg Clark and Jeremy Hunt, the two senior Conservative backbenchers who chair the committees to which Dominic Cummings gave his evidence about groupthink. But it does not. Why? Because they disagree with him too often. 

The right groups, with the right processes, can make excellent decisions. But most of us don’t join groups to make better decisions. We join them because we want to belong. Groupthink persists because groupthink feels good.

Tuesday 27 April 2021

Is Free Will an Illusion?

A growing chorus of scientists and philosophers argue that free will does not exist. Could they be right?
by Oliver Burkeman in The Guardian 


Towards the end of a conversation dwelling on some of the deepest metaphysical puzzles regarding the nature of human existence, the philosopher Galen Strawson paused, then asked me: “Have you spoken to anyone else yet who’s received weird email?” He navigated to a file on his computer and began reading from the alarming messages he and several other scholars had received over the past few years. Some were plaintive, others abusive, but all were fiercely accusatory. “Last year you all played a part in destroying my life,” one person wrote. “I lost everything because of you – my son, my partner, my job, my home, my mental health. All because of you, you told me I had no control, how I was not responsible for anything I do, how my beautiful six-year-old son was not responsible for what he did … Goodbye, and good luck with the rest of your cancerous, evil, pathetic existence.” “Rot in your own shit Galen,” read another note, sent in early 2015. “Your wife, your kids your friends, you have smeared all there [sic] achievements you utter fucking prick,” wrote the same person, who subsequently warned: “I’m going to fuck you up.” And then, days later, under the subject line “Hello”: “I’m coming for you.” “This was one where we had to involve the police,” Strawson said. Thereafter, the violent threats ceased.

It isn’t unheard of for philosophers to receive death threats. The Australian ethicist Peter Singer, for example, has received many, in response to his argument that, in highly exceptional circumstances, it might be morally justifiable to kill newborn babies with severe disabilities. But Strawson, like others on the receiving end of this particular wave of abuse, had merely expressed a longstanding position in an ancient debate that strikes many as the ultimate in “armchair philosophy”, wholly detached from the emotive entanglements of real life. They all deny that human beings possess free will. They argue that our choices are determined by forces beyond our ultimate control – perhaps even predetermined all the way back to the big bang – and that therefore nobody is ever wholly responsible for their actions. Reading back over the emails, Strawson, who gives the impression of someone far more forgiving of other people’s flaws than of his own, found himself empathising with his harassers’ distress. “I think for these people it’s just an existential catastrophe,” he said. “And I think I can see why.”

The difficulty in explaining the enigma of free will to those unfamiliar with the subject isn’t that it’s complex or obscure. It’s that the experience of possessing free will – the feeling that we are the authors of our choices – is so utterly basic to everyone’s existence that it can be hard to get enough mental distance to see what’s going on. Suppose you find yourself feeling moderately hungry one afternoon, so you walk to the fruit bowl in your kitchen, where you see one apple and one banana. As it happens, you choose the banana. But it seems absolutely obvious that you were free to choose the apple – or neither, or both – instead. That’s free will: were you to rewind the tape of world history, to the instant just before you made your decision, with everything in the universe exactly the same, you’d have been able to make a different one.

Nothing could be more self-evident. And yet according to a growing chorus of philosophers and scientists, who have a variety of different reasons for their view, it also can’t possibly be the case. “This sort of free will is ruled out, simply and decisively, by the laws of physics,” says one of the most strident of the free will sceptics, the evolutionary biologist Jerry Coyne. Leading psychologists such as Steven Pinker and Paul Bloom agree, as apparently did the late Stephen Hawking, along with numerous prominent neuroscientists, including VS Ramachandran, who called free will “an inherently flawed and incoherent concept” in his endorsement of Sam Harris’s bestselling 2012 book Free Will, which also makes that argument. According to the public intellectual Yuval Noah Harari, free will is an anachronistic myth – useful in the past, perhaps, as a way of motivating people to fight against tyrants or oppressive ideologies, but rendered obsolete by the power of modern data science to know us better than we know ourselves, and thus to predict and manipulate our choices.

Arguments against free will go back millennia, but the latest resurgence of scepticism has been driven by advances in neuroscience during the past few decades. Now that it’s possible to observe – thanks to neuroimaging – the physical brain activity associated with our decisions, it’s easier to think of those decisions as just another part of the mechanics of the material universe, in which “free will” plays no role. And from the 1980s onwards, various specific neuroscientific findings have offered troubling clues that our so-called free choices might actually originate in our brains several milliseconds, or even much longer, before we’re first aware of even thinking of them.

Despite the criticism that this is all just armchair philosophy, the truth is that the stakes could hardly be higher. Were free will to be shown to be nonexistent – and were we truly to absorb the fact – it would “precipitate a culture war far more belligerent than the one that has been waged on the subject of evolution”, Harris has written. Arguably, we would be forced to conclude that it was unreasonable ever to praise or blame anyone for their actions, since they weren’t truly responsible for deciding to do them; or to feel guilt for one’s misdeeds, pride in one’s accomplishments, or gratitude for others’ kindness. And we might come to feel that it was morally unjustifiable to mete out retributive punishment to criminals, since they had no ultimate choice about their wrongdoing. Some worry that it might fatally corrode all human relations, since romantic love, friendship and neighbourly civility alike all depend on the assumption of choice: any loving or respectful gesture has to be voluntary for it to count.

Peer over the precipice of the free will debate for a while, and you begin to appreciate how an already psychologically vulnerable person might be nudged into a breakdown, as was apparently the case with Strawson’s email correspondents. Harris has taken to prefacing his podcasts on free will with disclaimers, urging those who find the topic emotionally distressing to give them a miss. And Saul Smilansky, a professor of philosophy at the University of Haifa in Israel, who believes the popular notion of free will is a mistake, told me that if a graduate student who was prone to depression sought to study the subject with him, he would try to dissuade them. “Look, I’m naturally a buoyant person,” he said. “I have the mentality of a village idiot: it’s easy to make me happy. Nevertheless, the free will problem is really depressing if you take it seriously. It hasn’t made me happy, and in retrospect, if I were at graduate school again, maybe a different topic would have been preferable.”

Smilansky is an advocate of what he calls “illusionism”, the idea that although free will as conventionally defined is unreal, it’s crucial people go on believing otherwise – from which it follows that an article like this one might be actively dangerous. (Twenty years ago, he said, he might have refused to speak to me, but these days free will scepticism was so widely discussed that “the horse has left the barn”.) “On the deepest level, if people really understood what’s going on – and I don’t think I’ve fully internalised the implications myself, even after all these years – it’s just too frightening and difficult,” Smilansky said. “For anyone who’s morally and emotionally deep, it’s really depressing and destructive. It would really threaten our sense of self, our sense of personal value. The truth is just too awful here.”

The conviction that nobody ever truly chooses freely to do anything – that we’re the puppets of forces beyond our control – often seems to strike its adherents early in their intellectual careers, in a sudden flash of insight. “I was sitting in a carrel in Wolfson College [in Oxford] in 1975, and I had no idea what I was going to write my DPhil thesis about,” Strawson recalled. “I was reading something about Kant’s views on free will, and I was just electrified. That was it.” The logic, once glimpsed, seems coldly inexorable. Start with what seems like an obvious truth: anything that happens in the world, ever, must have been completely caused by things that happened before it. And those things must have been caused by things that happened before them – and so on, backwards to the dawn of time: cause after cause after cause, all of them following the predictable laws of nature, even if we haven’t figured all of those laws out yet. It’s easy enough to grasp this in the context of the straightforwardly physical world of rocks and rivers and internal combustion engines. But surely “one thing leads to another” in the world of decisions and intentions, too. Our decisions and intentions involve neural activity – and why would a neuron be exempt from the laws of physics any more than a rock?

So in the fruit bowl example, there are physiological reasons for your feeling hungry in the first place, and there are causes – in your genes, your upbringing, or your current environment – for your choosing to address your hunger with fruit, rather than a box of doughnuts. And your preference for the banana over the apple, at the moment of supposed choice, must have been caused by what went before, presumably including the pattern of neurons firing in your brain, which was itself caused – and so on back in an unbroken chain to your birth, the meeting of your parents, their births and, eventually, the birth of the cosmos.

But if all that’s true, there’s simply no room for the kind of free will you might imagine yourself to have when you see the apple and banana and wonder which one you’ll choose. To have what’s known in the scholarly jargon as “contra-causal” free will – so that if you rewound the tape of history back to the moment of choice, you could make a different choice – you’d somehow have to slip outside physical reality. To make a choice that wasn’t merely the next link in the unbroken chain of causes, you’d have to be able to stand apart from the whole thing, a ghostly presence separate from the material world yet mysteriously still able to influence it. But of course you can’t actually get to this supposed place that’s external to the universe, separate from all the atoms that comprise it and the laws that govern them. You just are some of the atoms in the universe, governed by the same predictable laws as all the rest.

It was the French polymath Pierre-Simon Laplace, writing in 1814, who most succinctly expressed the puzzle here: how can there be free will, in a universe where events just crank forwards like clockwork? His thought experiment is known as Laplace’s demon, and his argument went as follows: if some hypothetical ultra-intelligent being – or demon – could somehow know the position of every atom in the universe at a single point in time, along with all the laws that governed their interactions, it could predict the future in its entirety. There would be nothing it couldn’t know about the world 100 or 1,000 years hence, down to the slightest quiver of a sparrow’s wing. You might think you made a free choice to marry your partner, or choose a salad with your meal rather than chips; but in fact Laplace’s demon would have known it all along, by extrapolating out along the endless chain of causes. “For such an intellect,” Laplace said, “nothing could be uncertain, and the future, just like the past, would be present before its eyes.”

It’s true that since Laplace’s day, findings in quantum physics have indicated that some events, at the level of atoms and electrons, are genuinely random, which means they would be impossible to predict in advance, even by some hypothetical megabrain. But few people involved in the free will debate think that makes a critical difference. Those tiny fluctuations probably have little relevant impact on life at the scale we live it, as human beings. And in any case, there’s no more freedom in being subject to the random behaviours of electrons than there is in being the slave of predetermined causal laws. Either way, something other than your own free will seems to be pulling your strings.

By far the most unsettling implication of the case against free will, for most who encounter it, is what it seems to say about morality: that nobody, ever, truly deserves reward or punishment for what they do, because what they do is the result of blind deterministic forces (plus maybe a little quantum randomness). “For the free will sceptic,” writes Gregg Caruso in his new book Just Deserts, a collection of dialogues with his fellow philosopher Daniel Dennett, “it is never fair to treat anyone as morally responsible.” Were we to accept the full implications of that idea, the way we treat each other – and especially the way we treat criminals – might change beyond recognition.

Consider the case of Charles Whitman. Just after midnight on 1 August 1966, Whitman – an outgoing and apparently stable 25-year-old former US Marine – drove to his mother’s apartment in Austin, Texas, where he stabbed her to death. He returned home, where he killed his wife in the same manner. Later that day, he took an assortment of weapons to the top of a high building on the campus of the University of Texas, where he began shooting randomly for about an hour and a half. By the time Whitman was killed by police, 12 more people were dead, and one more died of his injuries years afterwards – a spree that remains the US’s 10th worst mass shooting.

Within hours of the massacre, the authorities discovered a note that Whitman had typed the night before. “I don’t quite understand what compels me to type this letter,” he wrote. “Perhaps it is to leave some vague reason for the actions I have recently performed. I don’t really understand myself these days. I am supposed to be an average reasonable and intelligent young man. However, lately (I can’t recall when it started) I have been a victim of many unusual and irrational thoughts [which] constantly recur, and it requires a tremendous mental effort to concentrate on useful and progressive tasks … After my death I wish that an autopsy would be performed to see if there is any visible physical disorder.” Following the first two murders, he added a coda: “Maybe research can prevent further tragedies of this type.” An autopsy was performed, revealing the presence of a substantial brain tumour, pressing on Whitman’s amygdala, the part of the brain governing “fight or flight” responses to fear.

As the free will sceptics who draw on Whitman’s case concede, it’s impossible to know if the brain tumour caused Whitman’s actions. What seems clear is that it certainly could have done so – and that almost everyone, on hearing about it, undergoes some shift in their attitude towards him. It doesn’t make the killings any less horrific. Nor does it mean the police weren’t justified in killing him. But it does make his rampage start to seem less like the evil actions of an evil man, and more like the terrible symptom of a disorder, with Whitman among its victims. The same is true for another wrongdoer famous in the free-will literature, the anonymous subject of the 2003 paper Right Orbitofrontal Tumor with Paedophilia Symptom and Constructional Apraxia Sign, a 40-year-old schoolteacher who suddenly developed paedophilic urges and began seeking out child pornography, and was subsequently convicted of child molestation. Soon afterwards, complaining of headaches, he was diagnosed with a brain tumour; when it was removed, his paedophilic urges vanished. A year later, they returned – as had his tumour, detected in another brain scan.

If you find the presence of a brain tumour in these cases in any way exculpatory, though, you face a difficult question: what’s so special about a brain tumour, as opposed to all the other ways in which people’s brains cause them to do things? When you learn about the specific chain of causes that were unfolding inside Charles Whitman’s skull, it has the effect of seeming to make him less personally responsible for the terrible acts he committed. But by definition, anyone who commits any immoral act has a brain in which a chain of prior causes had unfolded, leading to the act; if that weren’t the case, they’d never have committed the act. “A neurological disorder appears to be just a special case of physical events giving rise to thoughts and actions,” is how Harris expresses it. “Understanding the neurophysiology of the brain, therefore, would seem to be as exculpatory as finding a tumour in it.” It appears to follow that as we understand ever more about how the brain works, we’ll illuminate the last shadows in which something called “free will” might ever have lurked – and we’ll be forced to concede that a criminal is merely someone unlucky enough to find himself at the end of a causal chain that culminates in a crime. We can still insist the crime in question is morally bad; we just can’t hold the criminal individually responsible. (Or at least that’s where the logic seems to lead our modern minds: there’s a rival tradition, going back to the ancient Greeks, which holds that you can be held responsible for what’s fated to happen to you anyway.)

  

For Caruso, who teaches philosophy at the State University of New York, what all this means is that retributive punishment – punishing a criminal because he deserves it, rather than to protect the public, or serve as a warning to others – can’t ever be justified. Like Strawson, he has received email abuse from people disturbed by the implications. Retribution is central to all modern systems of criminal justice, yet ultimately, Caruso thinks, “it’s a moral injustice to hold someone responsible for actions that are beyond their control. It’s capricious.” Indeed some psychological research, he points out, suggests that people believe in free will partly because they want to justify their appetite for retribution. “What seems to happen is that people come across an action they disapprove of; they have a high desire to blame or punish; so they attribute to the perpetrator the degree of control [over their own actions] that would be required to justify blaming them.” (It’s no accident that the free will controversy is entangled in debates about religion: following similar logic, sinners must freely choose to sin, in order for God’s retribution to be justified.)

Caruso is an advocate of what he calls the “public health-quarantine” model of criminal justice, which would transform the institutions of punishment in a radically humane direction. You could still restrain a murderer, on the same rationale that you can require someone infected by Ebola to observe a quarantine: to protect the public. But you’d have no right to make the experience any more unpleasant than was strictly necessary for public protection. And you would be obliged to release them as soon as they no longer posed a threat. (The main focus, in Caruso’s ideal world, would be on redressing social problems to try to stop crime happening in the first place – just as public health systems ought to focus on preventing epidemics happening to begin with.)

It’s tempting to try to wriggle out of these ramifications by protesting that, while people might not choose their worst impulses – for murder, say – they do have the choice not to succumb to them. You can feel the urge to kill someone but resist it, or even seek psychiatric help. You can take responsibility for the state of your personality. And don’t we all do that, all the time, in more mundane ways, whenever we decide to acquire a new professional skill, become a better listener, or finally get fit?

But this is not the escape clause it might seem. After all, the free will sceptics insist, if you do manage to change your personality in some admirable way, you must already have possessed the kind of personality capable of implementing such a change – and you didn’t choose that. None of this requires us to believe that the worst atrocities are any less appalling than we previously thought. But it does entail that the perpetrators can’t be held personally to blame. If you’d been born with Hitler’s genes, and experienced Hitler’s upbringing, you would be Hitler – and ultimately it’s only good fortune that you weren’t. In the end, as Strawson puts it, “luck swallows everything”.

Given how watertight the case against free will can appear, it may be surprising to learn that most philosophers reject it: according to a 2009 survey, conducted by the website PhilPapers, only about 12% of them are persuaded by it. And the disagreement can be fraught, partly because free will denial belongs to a wider trend that drives some philosophers spare – the tendency for those trained in the hard sciences to make sweeping pronouncements about debates that have raged in philosophy for years, as if all those dull-witted scholars were just waiting for the physicists and neuroscientists to show up. In one chilly exchange, Dennett paid a backhanded compliment to Harris, who has a PhD in neuroscience, calling his book “remarkable” and “valuable” – but only because it was riddled with so many wrongheaded claims: “I am grateful to Harris for saying, so boldly and clearly, what less outgoing scientists are thinking but keeping to themselves.”

What’s still more surprising, and hard to wrap one’s mind around, is that most of those who defend free will don’t reject the sceptics’ most dizzying assertion – that every choice you ever make might have been determined in advance. So in the fruit bowl example, a majority of philosophers agree that if you rewound the tape of history to the moment of choice, with everything in the universe exactly the same, you couldn’t have made a different selection. That kind of free will is “as illusory as poltergeists”, to quote Dennett. What they claim instead is that this doesn’t matter: that even though our choices may be determined, it makes sense to say we’re free to choose. That’s why they’re known as “compatibilists”: they think determinism and free will are compatible. (There are many other positions in the debate, including some philosophers, many Christians among them, who think we really do have “ghostly” free will; and others who think the whole so-called problem is a chimera, resulting from a confusion of categories, or errors of language.)

To those who find the case against free will persuasive, compatibilism seems outrageous at first glance. How can we possibly be free to choose if we aren’t, in fact, you know, free to choose? But to grasp the compatibilists’ point, it helps first to think about free will not as a kind of magic, but as a mundane sort of skill – one which most adults possess, most of the time. As the compatibilist Kadri Vihvelin writes, “we have the free will we think we have, including the freedom of action we think we have … by having some bundle of abilities and being in the right kind of surroundings.” The way most compatibilists see things, “being free” is just a matter of having the capacity to think about what you want, reflect on your desires, then act on them and sometimes get what you want. When you choose the banana in the normal way – by thinking about which fruit you’d like, then taking it – you’re clearly in a different situation from someone who picks the banana because a fruit-obsessed gunman is holding a pistol to their head; or someone afflicted by a banana addiction, compelled to grab every one they see. In all of these scenarios, to be sure, your actions belonged to an unbroken chain of causes, stretching back to the dawn of time. But who cares? The banana-chooser in one of them was clearly more free than in the others.

“Harris, Pinker, Coyne – all these scientists, they all make the same two-step move,” said Eddy Nahmias, a compatibilist philosopher at Georgia State University in the US. “Their first move is always to say, ‘well, here’s what free will means’” – and it’s always something nobody could ever actually have, in the reality in which we live. “And then, sure enough, they deflate it. But once you have that sort of balloon in front of you, it’s very easy to deflate it, because any naturalistic account of the world will show that it’s false.”

Consider hypnosis. A doctrinaire free will sceptic might feel obliged to argue that a person hypnotised into making a particular purchase is no less free than someone who thinks about it, in the usual manner, before reaching for their credit card. After all, their idea of free will requires that the choice wasn’t fully determined by prior causes; yet in both cases, hypnotised and non-hypnotised, it was. “But come on, that’s just really annoying,” said Helen Beebee, a philosopher at the University of Manchester who has written widely on free will, expressing an exasperation commonly felt by compatibilists toward their rivals’ more outlandish claims. “In some sense, I don’t care if you call it ‘free will’ or ‘acting freely’ or anything else – it’s just that it obviously does matter, to everybody, whether they get hypnotised into doing things or not.”

Granted, the compatibilist version of free will may be less exciting. But it doesn’t follow that it’s worthless. Indeed, it may be (in another of Dennett’s phrases) the only kind of “free will worth wanting”. You experience the desire for a certain fruit, you act on it, and you get the fruit, with no external gunmen or internal disorders influencing your choice. How could a person ever be freer than that?

Thinking of free will this way also puts a different spin on some notorious experiments conducted in the 80s by the American neuroscientist Benjamin Libet, which have been interpreted as offering scientific proof that free will doesn’t exist. Wiring his subjects to a brain scanner, and asking them to flex their hands at a moment of their choosing, Libet seemed to show that their choice was detectable from brain activity 300 milliseconds before they made a conscious decision. (Other studies have indicated activity up to 10 seconds before a conscious choice.) How could these subjects be said to have reached their decisions freely, if the lab equipment knew their decisions so far in advance? But to most compatibilists, this is a fuss about nothing. Like everything else, our conscious choices are links in a causal chain of neural processes, so of course some brain activity precedes the moment at which we become aware of them.

From this down-to-earth perspective, there’s also no need to start panicking that cases like Charles Whitman’s might mean we could never hold anybody responsible for their misdeeds, or praise them for their achievements. (In their defence, several free will sceptics I spoke to had their reasons for not going that far, either.) Instead, we need only ask whether someone had the normal ability to choose rationally, reflecting on the implications of their actions. We all agree that newborn babies haven’t developed that yet, so we don’t blame them for waking us in the night; and we believe most non-human animals don’t possess it – so few of us rage indignantly at wasps for stinging us. Someone with a severe neurological or developmental impairment would surely lack it, too, perhaps including Whitman. But as for everyone else: “Bernie Madoff is the example I always like to use,” said Nahmias. “Because it’s so clear that he knew what he was doing, and that he knew that what he was doing was wrong, and he did it anyway.” He did have the ability we call “free will” – and used it to defraud his investors of more than $17bn.

To the free will sceptics, this is all just a desperate attempt at face-saving and changing the subject – an effort to redefine free will not as the thing we all feel, when faced with a choice, but as something else, unworthy of the name. “People hate the idea that they aren’t agents who can make free choices,” Jerry Coyne has argued. Harris has accused Dennett of approaching the topic as if he were telling someone bent on discovering the lost city of Atlantis that they ought to be satisfied with a trip to Sicily. After all, it meets some of the criteria: it’s an island in the sea, home to a civilisation with ancient roots. But the facts remain: Atlantis doesn’t exist. And when it felt like it wasn’t inevitable you’d choose the banana, the truth is that it actually was.

It’s tempting to dismiss the free will controversy as irrelevant to real life, on the grounds that we can’t help but feel as though we have free will, whatever the philosophical truth may be. I’m certainly going to keep responding to others as though they had free will: if you injure me, or someone I love, I can guarantee I’m going to be furious, instead of smiling indulgently on the grounds that you had no option. In this experiential sense, free will just seems to be a given.

But is it? When my mind is at its quietest – for example, drinking coffee early in the morning, before the four-year-old wakes up – things are liable to feel different. In such moments of relaxed concentration, it seems clear to me that my intentions and choices, like all my other thoughts and emotions, arise unbidden in my awareness. There’s no sense in which it feels like I’m their author. Why do I put down my coffee mug and head to the shower at the exact moment I do so? Because the intention to do so pops up, caused, no doubt, by all sorts of activity in my brain – but activity that lies outside my understanding, let alone my command. And it’s exactly the same when it comes to those weightier decisions that seem to express something profound about the kind of person I am: whether to attend the funeral of a certain relative, say, or which of two incompatible career opportunities to pursue. I can spend hours or even days engaged in what I tell myself is “reaching a decision” about those, when what I’m really doing, if I’m honest, is just vacillating between options – until at some unpredictable moment, or when an external deadline forces the issue, the decision to commit to one path or another simply arises.

This is what Harris means when he declares that, on close inspection, it’s not merely that free will is an illusion, but that the illusion of free will is itself an illusion: watch yourself closely, and you don’t even seem to be free. “If one pays sufficient attention,” he told me by email, “one can notice that there’s no subject in the middle of experience – there is only experience. And everything we experience simply arises on its own.” This is an idea with roots in Buddhism, and echoed by others, including the philosopher David Hume: when you look within, there’s no trace of an internal commanding officer, autonomously issuing decisions. There’s only mental activity, flowing on. Or as Arthur Rimbaud wrote, in a letter to a friend in 1871: “I am a spectator at the unfolding of my thought; I watch it, I listen to it.”

There are reasons to agree with Saul Smilansky that it might be personally and societally detrimental for too many people to start thinking in this way, even if it turns out it’s the truth. (Dennett, although he thinks we do have free will, takes a similar position, arguing that it’s morally irresponsible to promote free-will denial.) In one set of studies in 2008, the psychologists Kathleen Vohs and Jonathan Schooler asked one group of participants to read an excerpt from The Astonishing Hypothesis by Francis Crick, co-discoverer of the structure of DNA, in which he suggests free will is an illusion. The subjects thus primed to doubt the existence of free will proved significantly likelier than others, in a subsequent stage of the experiment, to cheat in a test where there was money at stake. Other research has linked a diminished belief in free will to less willingness to volunteer to help others, to lower levels of commitment in relationships, and to lower levels of gratitude.

Unsuccessful attempts to replicate Vohs and Schooler’s findings have called them into question. But even if the effects are real, some free will sceptics argue that the participants in such studies are making a common mistake – and one that might get cleared up rather rapidly, were the case against free will to become better known and understood. Study participants who suddenly become immoral seem to be confusing determinism with fatalism – the idea that if we don’t have free will, then our choices don’t really matter, so we might as well not bother trying to make good ones, and just do as we please instead. But in fact it doesn’t follow from our choices being determined that they don’t matter. It might matter enormously whether you choose to feed your children a diet rich in vegetables or not; or whether you decide to check carefully in both directions before crossing a busy road. It’s just that (according to the sceptics) you don’t get to make those choices freely.

In any case, were free will really to be shown to be nonexistent, the implications might not be entirely negative. It’s true that there’s something repellent about an idea that seems to require us to treat a cold-blooded murderer as not responsible for his actions, while at the same time characterising the love of a parent for a child as nothing more than what Smilansky calls “the unfolding of the given” – mere blind causation, devoid of any human spark. But there’s something liberating about it, too. It’s a reason to be gentler with yourself, and with others. For those of us prone to being hard on ourselves, it’s therapeutic to keep in the back of your mind the thought that you might be doing precisely as well as you were always going to be doing – that in the profoundest sense, you couldn’t have done any more. And for those of us prone to raging at others for their minor misdeeds, it’s calming to consider how easily their faults might have been yours. (Sure enough, some research has linked disbelief in free will to increased kindness.) 

Harris argues that if we fully grasped the case against free will, it would be difficult to hate other people: how can you hate someone you don’t blame for their actions? Yet love would survive largely unscathed, since love is “the condition of our wanting those we love to be happy, and being made happy ourselves by that ethical and emotional connection”, neither of which would be undermined. And countless other positive aspects of life would be similarly untouched. As Strawson puts it, in a world without a belief in free will, “strawberries would still taste just as good”.

Those early-morning moments aside, I personally can’t claim to find the case against free will ultimately persuasive; it’s just at odds with too much else that seems obviously true about life. Yet even if only entertained as a hypothetical possibility, free will scepticism is an antidote to that bleak individualist philosophy which holds that a person’s accomplishments truly belong to them alone – and that you’ve therefore only yourself to blame if you fail. It’s a reminder that accidents of birth might affect the trajectories of our lives far more comprehensively than we realise, dictating not only the socioeconomic position into which we’re born, but also our personalities and experiences as a whole: our talents and our weaknesses, our capacity for joy, and our ability to overcome tendencies toward violence, laziness or despair, and the paths we end up travelling. There is a deep sense of human fellowship in this picture of reality – in the idea that, in our utter exposure to forces beyond our control, we might all be in the same boat, clinging on for our lives, adrift on the storm-tossed ocean of luck.

Saturday 23 June 2018

What is patriotic? Who gets to decide that?

Pervez Hoodbhoy in The Dawn


LAST week an unsigned email from NetrackerOnline@gmail.com landed in my inbox. It accused me of stirring “hate against the state and the institutions in the garb of being sane and intellectual” while claiming “we know what cooks in your mind when u address the masses and who u work for”. And so, to deal with me, it says “we can enlist them”. What “them” means is unstated.

Hidden somewhere in cyberspace, some prankster bearing a personal grudge — possibly a student who couldn’t pass my physics course — might well have authored this email. If so, the only action called for has already been taken — hitting the delete button followed by a trash flush. I lost no sleep over this.

But instead, what if today there is actually some organised and systematic effort afoot to frighten and silence those Pakistani voices judged unpatriotic? Could this be why — now for many months — meaningful political analysis and discussion have disappeared from local print and electronic media? Bloggers have disappeared, only to reappear with horrendous tales to tell, and many journalists have been stilled forever.

The evidence is all over: cable operators have been forced to block certain TV news channels, and street hawkers have been warned against selling certain newspapers that don’t toe the line. The line — that mysterious line — can only be inferred because specifying it might reveal too much of who actually draws the line. With some exceptions, owners, editors, anchors, journalists, and opinion writers have fallen quickly into place.

But even if some voices are successfully gagged, I contend such tactics by anonymous actors cannot ever create a more stable or stronger Pakistan. In fact the efforts of NetrackerOnline@gmail.com and his ilk are arguably counter-patriotic. Here’s why.

First, freedom of expression acts as a safety valve against authoritarian rule, tyranny and secret government. Secret government is bad because it is uninhibited by the checks and balances needed for good governance. Accountability is not just about iqamas and politicians. It’s equally needed for generals, judges, lawyers, professors, policemen and milkmen. If certain voices are amplified while others are suppressed, genuine accountability becomes difficult.

Second, true patriotism comes from caring. In fact, real caring is often the reason why some dare raise voices to criticise what they perceive wrong around them. While Mr NetrackerOnline@gmail.com was probably told in his school that criticising state institutions is unpatriotic, this view is without logic.

Should citizens of Pakistan be stopped from sharing and airing their thoughts on PIA’s performance, the national cricket team, or the country’s professors, politicians, or generals? None of these are holy, faultless, or above reproach. No patriotic Pakistani can have beef with the state or any of its institutions provided these function within their respective mandates.

This raises the key question: who counts as a patriotic Pakistani, and what counts as acting for Pakistan’s benefit? Equivalently, what is Pakistan’s national interest and who may rightfully define it? Surely this is not for some hidden force to specify. The only proper way is to determine its parameters through open and honest public debate.

Here’s my take, hopefully shared by many millions. A true patriot wants to make Pakistan poverty-free; to help it achieve high standards of justice and financial integrity; to convince its different peoples and provinces about mutual sharing and caring; to help make real universities instead of the ones we have; to explore space and become a world leader in science; to develop literature and the arts; and much more.

The other conception of Pakistani patriotism and national interest — the mainstream one — is different. Taught in schools and propagated via the media, it focuses upon our relations with India. This involves freeing Kashmir from India; deterring India with nuclear weapons; creating strategic depth against India through controlling Afghanistan; neutralising Indian power by nurturing the Pakistan-China relationship; punishing Iran for its friendship with India; etc. This India-centric view has been strengthened by Indian obduracy on Kashmir, its unconscionable repression of Kashmiri protesters, and the emergence of a hard-line anti-Muslim Hindu right.

But now matters other than India are casting dark shadows. Short of nuclear war or a miracle, nothing can now prevent Pakistan from reaching 400 million people in 35-40 years. Water is running short, and environmental destruction is everywhere. Then there are fanatical mullahs that the state appeases, fights, and then appeases again.

Add these all up and you can understand why Mr NetrackerOnline@gmail.com’s mind is being unconsciously governed by the fears of Thomas Hobbes (1588-1679). Hobbes famously articulated the dread of a state sliding deep into dystopia. During the English Civil War, he became obsessed with demonstrating the necessity of a strong central authority to avoid the evil of discord and civil war.

In one of the best known passages of English literature, Hobbes writes: “In such condition, there is no place for industry; because the fruit thereof is uncertain: and consequently no culture of the earth; no account of time; no arts; no letters; no society; and which is worst of all, continual fear, and danger of violent death; and the life of man, solitary, poor, nasty, brutish, and short.” His only solution is an absolute authority in the form of an absolute monarch. Else, says Hobbes, there would be a “war of all against all”.

Hobbes was wrong and his negative vision proved false. England grew to be Europe’s most powerful country and a fountain of civilisation. Democracy was central to this; without developing a system resting on freedom of speech and thought England could never have become the cradle of the Scientific Revolution and then the Industrial Revolution. Rejection of military rule, hereditary privilege, and absolute monarchy eventually won universal acceptance.

I wonder if Mr NetrackerOnline@gmail.com and others with a negative vision will get to read this article. Will they realise that trying to shut people up is actually unpatriotic? For all who care for the well-being of Pakistan and its people, it is a patriotic duty to speak against abuses of power. Equating patriotism with passivity and unquestioning obedience is nonsense. Pakistan Zindabad!