Tuesday, 27 April 2021
Is Free Will an Illusion?
A growing chorus of scientists and philosophers argue that free will does not exist. Could they be right?
by Oliver Burkeman in The Guardian
Towards the end of a conversation dwelling on some of the deepest metaphysical puzzles regarding the nature of human existence, the philosopher Galen Strawson paused, then asked me: “Have you spoken to anyone else yet who’s received weird email?” He navigated to a file on his computer and began reading from the alarming messages he and several other scholars had received over the past few years. Some were plaintive, others abusive, but all were fiercely accusatory. “Last year you all played a part in destroying my life,” one person wrote. “I lost everything because of you – my son, my partner, my job, my home, my mental health. All because of you, you told me I had no control, how I was not responsible for anything I do, how my beautiful six-year-old son was not responsible for what he did … Goodbye, and good luck with the rest of your cancerous, evil, pathetic existence.” “Rot in your own shit Galen,” read another note, sent in early 2015. “Your wife, your kids your friends, you have smeared all there [sic] achievements you utter fucking prick,” wrote the same person, who subsequently warned: “I’m going to fuck you up.” And then, days later, under the subject line “Hello”: “I’m coming for you.” “This was one where we had to involve the police,” Strawson said. Thereafter, the violent threats ceased.
It isn’t unheard of for philosophers to receive death threats. The Australian ethicist Peter Singer, for example, has received many, in response to his argument that, in highly exceptional circumstances, it might be morally justifiable to kill newborn babies with severe disabilities. But Strawson, like others on the receiving end of this particular wave of abuse, had merely expressed a longstanding position in an ancient debate that strikes many as the ultimate in “armchair philosophy”, wholly detached from the emotive entanglements of real life. They all deny that human beings possess free will. They argue that our choices are determined by forces beyond our ultimate control – perhaps even predetermined all the way back to the big bang – and that therefore nobody is ever wholly responsible for their actions. Reading back over the emails, Strawson, who gives the impression of someone far more forgiving of other people’s flaws than of his own, found himself empathising with his harassers’ distress. “I think for these people it’s just an existential catastrophe,” he said. “And I think I can see why.”
The difficulty in explaining the enigma of free will to those unfamiliar with the subject isn’t that it’s complex or obscure. It’s that the experience of possessing free will – the feeling that we are the authors of our choices – is so utterly basic to everyone’s existence that it can be hard to get enough mental distance to see what’s going on. Suppose you find yourself feeling moderately hungry one afternoon, so you walk to the fruit bowl in your kitchen, where you see one apple and one banana. As it happens, you choose the banana. But it seems absolutely obvious that you were free to choose the apple – or neither, or both – instead. That’s free will: were you to rewind the tape of world history, to the instant just before you made your decision, with everything in the universe exactly the same, you’d have been able to make a different one.
Nothing could be more self-evident. And yet according to a growing chorus of philosophers and scientists, who have a variety of different reasons for their view, it also can’t possibly be the case. “This sort of free will is ruled out, simply and decisively, by the laws of physics,” says one of the most strident of the free will sceptics, the evolutionary biologist Jerry Coyne. Leading psychologists such as Steven Pinker and Paul Bloom agree, as apparently did the late Stephen Hawking, along with numerous prominent neuroscientists, including VS Ramachandran, who called free will “an inherently flawed and incoherent concept” in his endorsement of Sam Harris’s bestselling 2012 book Free Will, which also makes that argument. According to the public intellectual Yuval Noah Harari, free will is an anachronistic myth – useful in the past, perhaps, as a way of motivating people to fight against tyrants or oppressive ideologies, but rendered obsolete by the power of modern data science to know us better than we know ourselves, and thus to predict and manipulate our choices.
Arguments against free will go back millennia, but the latest resurgence of scepticism has been driven by advances in neuroscience during the past few decades. Now that it’s possible to observe – thanks to neuroimaging – the physical brain activity associated with our decisions, it’s easier to think of those decisions as just another part of the mechanics of the material universe, in which “free will” plays no role. And from the 1980s onwards, various specific neuroscientific findings have offered troubling clues that our so-called free choices might actually originate in our brains several milliseconds, or even much longer, before we’re first aware of even thinking of them.
Despite the criticism that this is all just armchair philosophy, the truth is that the stakes could hardly be higher. Were free will to be shown to be nonexistent – and were we truly to absorb the fact – it would “precipitate a culture war far more belligerent than the one that has been waged on the subject of evolution”, Harris has written. Arguably, we would be forced to conclude that it was unreasonable ever to praise or blame anyone for their actions, since they weren’t truly responsible for deciding to do them; or to feel guilt for one’s misdeeds, pride in one’s accomplishments, or gratitude for others’ kindness. And we might come to feel that it was morally unjustifiable to mete out retributive punishment to criminals, since they had no ultimate choice about their wrongdoing. Some worry that it might fatally corrode all human relations, since romantic love, friendship and neighbourly civility alike all depend on the assumption of choice: any loving or respectful gesture has to be voluntary for it to count.
Peer over the precipice of the free will debate for a while, and you begin to appreciate how an already psychologically vulnerable person might be nudged into a breakdown, as was apparently the case with Strawson’s email correspondents. Harris has taken to prefacing his podcasts on free will with disclaimers, urging those who find the topic emotionally distressing to give them a miss. And Saul Smilansky, a professor of philosophy at the University of Haifa in Israel, who believes the popular notion of free will is a mistake, told me that if a graduate student who was prone to depression sought to study the subject with him, he would try to dissuade them. “Look, I’m naturally a buoyant person,” he said. “I have the mentality of a village idiot: it’s easy to make me happy. Nevertheless, the free will problem is really depressing if you take it seriously. It hasn’t made me happy, and in retrospect, if I were at graduate school again, maybe a different topic would have been preferable.”
Smilansky is an advocate of what he calls “illusionism”, the idea that although free will as conventionally defined is unreal, it’s crucial people go on believing otherwise – from which it follows that an article like this one might be actively dangerous. (Twenty years ago, he said, he might have refused to speak to me, but these days free will scepticism was so widely discussed that “the horse has left the barn”.) “On the deepest level, if people really understood what’s going on – and I don’t think I’ve fully internalised the implications myself, even after all these years – it’s just too frightening and difficult,” Smilansky said. “For anyone who’s morally and emotionally deep, it’s really depressing and destructive. It would really threaten our sense of self, our sense of personal value. The truth is just too awful here.”
The conviction that nobody ever truly chooses freely to do anything – that we’re the puppets of forces beyond our control – often seems to strike its adherents early in their intellectual careers, in a sudden flash of insight. “I was sitting in a carrel in Wolfson College [in Oxford] in 1975, and I had no idea what I was going to write my DPhil thesis about,” Strawson recalled. “I was reading something about Kant’s views on free will, and I was just electrified. That was it.” The logic, once glimpsed, seems coldly inexorable. Start with what seems like an obvious truth: anything that happens in the world, ever, must have been completely caused by things that happened before it. And those things must have been caused by things that happened before them – and so on, backwards to the dawn of time: cause after cause after cause, all of them following the predictable laws of nature, even if we haven’t figured all of those laws out yet. It’s easy enough to grasp this in the context of the straightforwardly physical world of rocks and rivers and internal combustion engines. But surely “one thing leads to another” in the world of decisions and intentions, too. Our decisions and intentions involve neural activity – and why would a neuron be exempt from the laws of physics any more than a rock?
So in the fruit bowl example, there are physiological reasons for your feeling hungry in the first place, and there are causes – in your genes, your upbringing, or your current environment – for your choosing to address your hunger with fruit, rather than a box of doughnuts. And your preference for the banana over the apple, at the moment of supposed choice, must have been caused by what went before, presumably including the pattern of neurons firing in your brain, which was itself caused – and so on back in an unbroken chain to your birth, the meeting of your parents, their births and, eventually, the birth of the cosmos.
An astronomical clock in Prague, Czech Republic. Photograph: John Kellerman/Alamy
But if all that’s true, there’s simply no room for the kind of free will you might imagine yourself to have when you see the apple and banana and wonder which one you’ll choose. To have what’s known in the scholarly jargon as “contra-causal” free will – so that if you rewound the tape of history back to the moment of choice, you could make a different choice – you’d somehow have to slip outside physical reality. To make a choice that wasn’t merely the next link in the unbroken chain of causes, you’d have to be able to stand apart from the whole thing, a ghostly presence separate from the material world yet mysteriously still able to influence it. But of course you can’t actually get to this supposed place that’s external to the universe, separate from all the atoms that comprise it and the laws that govern them. You just are some of the atoms in the universe, governed by the same predictable laws as all the rest.
It was the French polymath Pierre-Simon Laplace, writing in 1814, who most succinctly expressed the puzzle here: how can there be free will, in a universe where events just crank forwards like clockwork? His thought experiment is known as Laplace’s demon, and his argument went as follows: if some hypothetical ultra-intelligent being – or demon – could somehow know the position of every atom in the universe at a single point in time, along with all the laws that governed their interactions, it could predict the future in its entirety. There would be nothing it couldn’t know about the world 100 or 1,000 years hence, down to the slightest quiver of a sparrow’s wing. You might think you made a free choice to marry your partner, or choose a salad with your meal rather than chips; but in fact Laplace’s demon would have known it all along, by extrapolating out along the endless chain of causes. “For such an intellect,” Laplace said, “nothing could be uncertain, and the future, just like the past, would be present before its eyes.”
It’s true that since Laplace’s day, findings in quantum physics have indicated that some events, at the level of atoms and electrons, are genuinely random, which means they would be impossible to predict in advance, even by some hypothetical megabrain. But few people involved in the free will debate think that makes a critical difference. Those tiny fluctuations probably have little relevant impact on life at the scale we live it, as human beings. And in any case, there’s no more freedom in being subject to the random behaviours of electrons than there is in being the slave of predetermined causal laws. Either way, something other than your own free will seems to be pulling your strings.
By far the most unsettling implication of the case against free will, for most who encounter it, is what it seems to say about morality: that nobody, ever, truly deserves reward or punishment for what they do, because what they do is the result of blind deterministic forces (plus maybe a little quantum randomness). “For the free will sceptic,” writes Gregg Caruso in his new book Just Deserts, a collection of dialogues with his fellow philosopher Daniel Dennett, “it is never fair to treat anyone as morally responsible.” Were we to accept the full implications of that idea, the way we treat each other – and especially the way we treat criminals – might change beyond recognition.
Consider the case of Charles Whitman. Just after midnight on 1 August 1966, Whitman – an outgoing and apparently stable 25-year-old former US Marine – drove to his mother’s apartment in Austin, Texas, where he stabbed her to death. He returned home, where he killed his wife in the same manner. Later that day, he took an assortment of weapons to the top of a high building on the campus of the University of Texas, where he began shooting randomly for about an hour and a half. By the time Whitman was killed by police, 12 more people were dead, and one more died of his injuries years afterwards – a spree that remains the US’s 10th worst mass shooting.
Within hours of the massacre, the authorities discovered a note that Whitman had typed the night before. “I don’t quite understand what compels me to type this letter,” he wrote. “Perhaps it is to leave some vague reason for the actions I have recently performed. I don’t really understand myself these days. I am supposed to be an average reasonable and intelligent young man. However, lately (I can’t recall when it started) I have been a victim of many unusual and irrational thoughts [which] constantly recur, and it requires a tremendous mental effort to concentrate on useful and progressive tasks … After my death I wish that an autopsy would be performed to see if there is any visible physical disorder.” Following the first two murders, he added a coda: “Maybe research can prevent further tragedies of this type.” An autopsy was performed, revealing the presence of a substantial brain tumour, pressing on Whitman’s amygdala, the part of the brain governing “fight or flight” responses to fear.
As the free will sceptics who draw on Whitman’s case concede, it’s impossible to know if the brain tumour caused Whitman’s actions. What seems clear is that it certainly could have done so – and that almost everyone, on hearing about it, undergoes some shift in their attitude towards him. It doesn’t make the killings any less horrific. Nor does it mean the police weren’t justified in killing him. But it does make his rampage start to seem less like the evil actions of an evil man, and more like the terrible symptom of a disorder, with Whitman among its victims. The same is true for another wrongdoer famous in the free-will literature, the anonymous subject of the 2003 paper Right Orbitofrontal Tumor with Paedophilia Symptom and Constructional Apraxia Sign, a 40-year-old schoolteacher who suddenly developed paedophilic urges and began seeking out child pornography, and was subsequently convicted of child molestation. Soon afterwards, complaining of headaches, he was diagnosed with a brain tumour; when it was removed, his paedophilic urges vanished. A year later, they returned – as had his tumour, detected in another brain scan.
If you find the presence of a brain tumour in these cases in any way exculpatory, though, you face a difficult question: what’s so special about a brain tumour, as opposed to all the other ways in which people’s brains cause them to do things? When you learn about the specific chain of causes that were unfolding inside Charles Whitman’s skull, it has the effect of seeming to make him less personally responsible for the terrible acts he committed. But by definition, anyone who commits any immoral act has a brain in which a chain of prior causes had unfolded, leading to the act; if that weren’t the case, they’d never have committed the act. “A neurological disorder appears to be just a special case of physical events giving rise to thoughts and actions,” is how Harris expresses it. “Understanding the neurophysiology of the brain, therefore, would seem to be as exculpatory as finding a tumour in it.” It appears to follow that as we understand ever more about how the brain works, we’ll illuminate the last shadows in which something called “free will” might ever have lurked – and we’ll be forced to concede that a criminal is merely someone unlucky enough to find himself at the end of a causal chain that culminates in a crime. We can still insist the crime in question is morally bad; we just can’t hold the criminal individually responsible. (Or at least that’s where the logic seems to lead our modern minds: there’s a rival tradition, going back to the ancient Greeks, which holds that you can be held responsible for what’s fated to happen to you anyway.)
Illustration: Nathalie Lees
For Caruso, who teaches philosophy at the State University of New York, what all this means is that retributive punishment – punishing a criminal because he deserves it, rather than to protect the public, or serve as a warning to others – can’t ever be justified. Like Strawson, he has received email abuse from people disturbed by the implications. Retribution is central to all modern systems of criminal justice, yet ultimately, Caruso thinks, “it’s a moral injustice to hold someone responsible for actions that are beyond their control. It’s capricious.” Indeed some psychological research, he points out, suggests that people believe in free will partly because they want to justify their appetite for retribution. “What seems to happen is that people come across an action they disapprove of; they have a high desire to blame or punish; so they attribute to the perpetrator the degree of control [over their own actions] that would be required to justify blaming them.” (It’s no accident that the free will controversy is entangled in debates about religion: following similar logic, sinners must freely choose to sin, in order for God’s retribution to be justified.)
Caruso is an advocate of what he calls the “public health-quarantine” model of criminal justice, which would transform the institutions of punishment in a radically humane direction. You could still restrain a murderer, on the same rationale that you can require someone infected by Ebola to observe a quarantine: to protect the public. But you’d have no right to make the experience any more unpleasant than was strictly necessary for public protection. And you would be obliged to release them as soon as they no longer posed a threat. (The main focus, in Caruso’s ideal world, would be on redressing social problems to try to stop crime happening in the first place – just as public health systems ought to focus on preventing epidemics happening to begin with.)
It’s tempting to try to wriggle out of these ramifications by protesting that, while people might not choose their worst impulses – for murder, say – they do have the choice not to succumb to them. You can feel the urge to kill someone but resist it, or even seek psychiatric help. You can take responsibility for the state of your personality. And don’t we all do that, all the time, in more mundane ways, whenever we decide to acquire a new professional skill, become a better listener, or finally get fit?
But this is not the escape clause it might seem. After all, the free will sceptics insist, if you do manage to change your personality in some admirable way, you must already have possessed the kind of personality capable of implementing such a change – and you didn’t choose that. None of this requires us to believe that the worst atrocities are any less appalling than we previously thought. But it does entail that the perpetrators can’t be held personally to blame. If you’d been born with Hitler’s genes, and experienced Hitler’s upbringing, you would be Hitler – and ultimately it’s only good fortune that you weren’t. In the end, as Strawson puts it, “luck swallows everything”.
Given how watertight the case against free will can appear, it may be surprising to learn that most philosophers reject it: according to a 2009 survey, conducted by the website PhilPapers, only about 12% of them are persuaded by it. And the disagreement can be fraught, partly because free will denial belongs to a wider trend that drives some philosophers spare – the tendency for those trained in the hard sciences to make sweeping pronouncements about debates that have raged in philosophy for years, as if all those dull-witted scholars were just waiting for the physicists and neuroscientists to show up. In one chilly exchange, Dennett paid a backhanded compliment to Harris, who has a PhD in neuroscience, calling his book “remarkable” and “valuable” – but only because it was riddled with so many wrongheaded claims: “I am grateful to Harris for saying, so boldly and clearly, what less outgoing scientists are thinking but keeping to themselves.”
What’s still more surprising, and hard to wrap one’s mind around, is that most of those who defend free will don’t reject the sceptics’ most dizzying assertion – that every choice you ever make might have been determined in advance. So in the fruit bowl example, a majority of philosophers agree that if you rewound the tape of history to the moment of choice, with everything in the universe exactly the same, you couldn’t have made a different selection. That kind of free will is “as illusory as poltergeists”, to quote Dennett. What they claim instead is that this doesn’t matter: that even though our choices may be determined, it makes sense to say we’re free to choose. That’s why they’re known as “compatibilists”: they think determinism and free will are compatible. (There are many other positions in the debate, including some philosophers, many Christians among them, who think we really do have “ghostly” free will; and others who think the whole so-called problem is a chimera, resulting from a confusion of categories, or errors of language.)
To those who find the case against free will persuasive, compatibilism seems outrageous at first glance. How can we possibly be free to choose if we aren’t, in fact, you know, free to choose? But to grasp the compatibilists’ point, it helps first to think about free will not as a kind of magic, but as a mundane sort of skill – one which most adults possess, most of the time. As the compatibilist Kadri Vihvelin writes, “we have the free will we think we have, including the freedom of action we think we have … by having some bundle of abilities and being in the right kind of surroundings.” The way most compatibilists see things, “being free” is just a matter of having the capacity to think about what you want, reflect on your desires, then act on them and sometimes get what you want. When you choose the banana in the normal way – by thinking about which fruit you’d like, then taking it – you’re clearly in a different situation from someone who picks the banana because a fruit-obsessed gunman is holding a pistol to their head; or someone afflicted by a banana addiction, compelled to grab every one they see. In all of these scenarios, to be sure, your actions belonged to an unbroken chain of causes, stretching back to the dawn of time. But who cares? The banana-chooser in one of them was clearly more free than in the others.
“Harris, Pinker, Coyne – all these scientists, they all make the same two-step move,” said Eddy Nahmias, a compatibilist philosopher at Georgia State University in the US. “Their first move is always to say, ‘well, here’s what free will means’” – and it’s always something nobody could ever actually have, in the reality in which we live. “And then, sure enough, they deflate it. But once you have that sort of balloon in front of you, it’s very easy to deflate it, because any naturalistic account of the world will show that it’s false.”
Daniel Dennett in Stockholm, Sweden. Photograph: Ibl/Rex/Shutterstock
Consider hypnosis. A doctrinaire free will sceptic might feel obliged to argue that a person hypnotised into making a particular purchase is no less free than someone who thinks about it, in the usual manner, before reaching for their credit card. After all, their idea of free will requires that the choice wasn’t fully determined by prior causes; yet in both cases, hypnotised and non-hypnotised, it was. “But come on, that’s just really annoying,” said Helen Beebee, a philosopher at the University of Manchester who has written widely on free will, expressing an exasperation commonly felt by compatibilists toward their rivals’ more outlandish claims. “In some sense, I don’t care if you call it ‘free will’ or ‘acting freely’ or anything else – it’s just that it obviously does matter, to everybody, whether they get hypnotised into doing things or not.”
Granted, the compatibilist version of free will may be less exciting. But it doesn’t follow that it’s worthless. Indeed, it may be (in another of Dennett’s phrases) the only kind of “free will worth wanting”. You experience the desire for a certain fruit, you act on it, and you get the fruit, with no external gunmen or internal disorders influencing your choice. How could a person ever be freer than that?
Thinking of free will this way also puts a different spin on some notorious experiments conducted in the 80s by the American neuroscientist Benjamin Libet, which have been interpreted as offering scientific proof that free will doesn’t exist. Wiring his subjects to an EEG machine, and asking them to flex their hands at a moment of their choosing, Libet seemed to show that their choice was detectable from brain activity 300 milliseconds before they made a conscious decision. (Other studies have indicated activity up to 10 seconds before a conscious choice.) How could these subjects be said to have reached their decisions freely, if the lab equipment knew their decisions so far in advance? But to most compatibilists, this is a fuss about nothing. Like everything else, our conscious choices are links in a causal chain of neural processes, so of course some brain activity precedes the moment at which we become aware of them.
From this down-to-earth perspective, there’s also no need to start panicking that cases like Charles Whitman’s might mean we could never hold anybody responsible for their misdeeds, or praise them for their achievements. (In their defence, several free will sceptics I spoke to had their reasons for not going that far, either.) Instead, we need only ask whether someone had the normal ability to choose rationally, reflecting on the implications of their actions. We all agree that newborn babies haven’t developed that yet, so we don’t blame them for waking us in the night; and we believe most non-human animals don’t possess it – so few of us rage indignantly at wasps for stinging us. Someone with a severe neurological or developmental impairment would surely lack it, too, perhaps including Whitman. But as for everyone else: “Bernie Madoff is the example I always like to use,” said Nahmias. “Because it’s so clear that he knew what he was doing, and that he knew that what he was doing was wrong, and he did it anyway.” He did have the ability we call “free will” – and used it to defraud his investors of more than $17bn.
To the free will sceptics, this is all just a desperate attempt at face-saving and changing the subject – an effort to redefine free will not as the thing we all feel, when faced with a choice, but as something else, unworthy of the name. “People hate the idea that they aren’t agents who can make free choices,” Jerry Coyne has argued. Harris has accused Dennett of approaching the topic as if he were telling someone bent on discovering the lost city of Atlantis that they ought to be satisfied with a trip to Sicily. After all, it meets some of the criteria: it’s an island in the sea, home to a civilisation with ancient roots. But the facts remain: Atlantis doesn’t exist. And when it felt like it wasn’t inevitable you’d choose the banana, the truth is that it actually was.
It’s tempting to dismiss the free will controversy as irrelevant to real life, on the grounds that we can’t help but feel as though we have free will, whatever the philosophical truth may be. I’m certainly going to keep responding to others as though they had free will: if you injure me, or someone I love, I can guarantee I’m going to be furious, instead of smiling indulgently on the grounds that you had no option. In this experiential sense, free will just seems to be a given.
But is it? When my mind is at its quietest – for example, drinking coffee early in the morning, before the four-year-old wakes up – things are liable to feel different. In such moments of relaxed concentration, it seems clear to me that my intentions and choices, like all my other thoughts and emotions, arise unbidden in my awareness. There’s no sense in which it feels like I’m their author. Why do I put down my coffee mug and head to the shower at the exact moment I do so? Because the intention to do so pops up, caused, no doubt, by all sorts of activity in my brain – but activity that lies outside my understanding, let alone my command. And it’s exactly the same when it comes to those weightier decisions that seem to express something profound about the kind of person I am: whether to attend the funeral of a certain relative, say, or which of two incompatible career opportunities to pursue. I can spend hours or even days engaged in what I tell myself is “reaching a decision” about those, when what I’m really doing, if I’m honest, is just vacillating between options – until at some unpredictable moment, or when an external deadline forces the issue, the decision to commit to one path or another simply arises.
This is what Harris means when he declares that, on close inspection, it’s not merely that free will is an illusion, but that the illusion of free will is itself an illusion: watch yourself closely, and you don’t even seem to be free. “If one pays sufficient attention,” he told me by email, “one can notice that there’s no subject in the middle of experience – there is only experience. And everything we experience simply arises on its own.” This is an idea with roots in Buddhism, and echoed by others, including the philosopher David Hume: when you look within, there’s no trace of an internal commanding officer, autonomously issuing decisions. There’s only mental activity, flowing on. Or as Arthur Rimbaud wrote, in a letter to a friend in 1871: “I am a spectator at the unfolding of my thought; I watch it, I listen to it.”
There are reasons to agree with Saul Smilansky that it might be personally and societally detrimental for too many people to start thinking in this way, even if it turns out it’s the truth. (Dennett, although he thinks we do have free will, takes a similar position, arguing that it’s morally irresponsible to promote free-will denial.) In one set of studies in 2008, the psychologists Kathleen Vohs and Jonathan Schooler asked one group of participants to read an excerpt from The Astonishing Hypothesis by Francis Crick, co-discoverer of the structure of DNA, in which he suggests free will is an illusion. The subjects thus primed to doubt the existence of free will proved significantly likelier than others, in a subsequent stage of the experiment, to cheat in a test where there was money at stake. Other research has linked a diminished belief in free will to less willingness to volunteer to help others, to lower levels of commitment in relationships, and to lower levels of gratitude.
Unsuccessful attempts to replicate Vohs and Schooler’s findings have called them into question. But even if the effects are real, some free will sceptics argue that the participants in such studies are making a common mistake – and one that might get cleared up rather rapidly, were the case against free will to become better known and understood. Study participants who suddenly become immoral seem to be confusing determinism with fatalism – the idea that if we don’t have free will, then our choices don’t really matter, so we might as well not bother trying to make good ones, and just do as we please instead. But in fact it doesn’t follow from our choices being determined that they don’t matter. It might matter enormously whether you choose to feed your children a diet rich in vegetables or not; or whether you decide to check carefully in both directions before crossing a busy road. It’s just that (according to the sceptics) you don’t get to make those choices freely.
In any case, were free will really to be shown to be nonexistent, the implications might not be entirely negative. It’s true that there’s something repellent about an idea that seems to require us to treat a cold-blooded murderer as not responsible for his actions, while at the same time characterising the love of a parent for a child as nothing more than what Smilansky calls “the unfolding of the given” – mere blind causation, devoid of any human spark. But there’s something liberating about it, too. It’s a reason to be gentler with yourself, and with others. For those of us prone to being hard on ourselves, it’s therapeutic to keep in the back of your mind the thought that you might be doing precisely as well as you were always going to be doing – that in the profoundest sense, you couldn’t have done any more. And for those of us prone to raging at others for their minor misdeeds, it’s calming to consider how easily their faults might have been yours. (Sure enough, some research has linked disbelief in free will to increased kindness.)
Harris argues that if we fully grasped the case against free will, it would be difficult to hate other people: how can you hate someone you don’t blame for their actions? Yet love would survive largely unscathed, since love is “the condition of our wanting those we love to be happy, and being made happy ourselves by that ethical and emotional connection”, neither of which would be undermined. And countless other positive aspects of life would be similarly untouched. As Strawson puts it, in a world without a belief in free will, “strawberries would still taste just as good”.
Those early-morning moments aside, I personally can’t claim to find the case against free will ultimately persuasive; it’s just at odds with too much else that seems obviously true about life. Yet even if only entertained as a hypothetical possibility, free will scepticism is an antidote to that bleak individualist philosophy which holds that a person’s accomplishments truly belong to them alone – and that you’ve therefore only yourself to blame if you fail. It’s a reminder that accidents of birth might affect the trajectories of our lives far more comprehensively than we realise, dictating not only the socioeconomic position into which we’re born, but also our personalities and experiences as a whole: our talents and our weaknesses, our capacity for joy, and our ability to overcome tendencies toward violence, laziness or despair, and the paths we end up travelling. There is a deep sense of human fellowship in this picture of reality – in the idea that, in our utter exposure to forces beyond our control, we might all be in the same boat, clinging on for our lives, adrift on the storm-tossed ocean of luck.
Consider hypnosis. A doctrinaire free will sceptic might feel obliged to argue that a person hypnotised into making a particular purchase is no less free than someone who thinks about it, in the usual manner, before reaching for their credit card. After all, their idea of free will requires that the choice wasn’t fully determined by prior causes; yet in both cases, hypnotised and non-hypnotised, it was. “But come on, that’s just really annoying,” said Helen Beebee, a philosopher at the University of Manchester who has written widely on free will, expressing an exasperation commonly felt by compatibilists toward their rivals’ more outlandish claims. “In some sense, I don’t care if you call it ‘free will’ or ‘acting freely’ or anything else – it’s just that it obviously does matter, to everybody, whether they get hypnotised into doing things or not.”
Granted, the compatibilist version of free will may be less exciting. But it doesn’t follow that it’s worthless. Indeed, it may be (in another of Dennett’s phrases) the only kind of “free will worth wanting”. You experience the desire for a certain fruit, you act on it, and you get the fruit, with no external gunmen or internal disorders influencing your choice. How could a person ever be freer than that?
Thinking of free will this way also puts a different spin on some notorious experiments conducted in the 80s by the American neuroscientist Benjamin Libet, which have been interpreted as offering scientific proof that free will doesn’t exist. Wiring his subjects to a brain scanner, and asking them to flex their hands at a moment of their choosing, Libet seemed to show that their choice was detectable from brain activity 300 milliseconds before they made a conscious decision. (Other studies have indicated activity up to 10 seconds before a conscious choice.) How could these subjects be said to have reached their decisions freely, if the lab equipment knew their decisions so far in advance? But to most compatibilists, this is a fuss about nothing. Like everything else, our conscious choices are links in a causal chain of neural processes, so of course some brain activity precedes the moment at which we become aware of them.
From this down-to-earth perspective, there’s also no need to start panicking that cases like Charles Whitman’s might mean we could never hold anybody responsible for their misdeeds, or praise them for their achievements. (In their defence, several free will sceptics I spoke to had their reasons for not going that far, either.) Instead, we need only ask whether someone had the normal ability to choose rationally, reflecting on the implications of their actions. We all agree that newborn babies haven’t developed that yet, so we don’t blame them for waking us in the night; and we believe most non-human animals don’t possess it – so few of us rage indignantly at wasps for stinging us. Someone with a severe neurological or developmental impairment would surely lack it, too, perhaps including Whitman. But as for everyone else: “Bernie Madoff is the example I always like to use,” said Nahmias. “Because it’s so clear that he knew what he was doing, and that he knew that what he was doing was wrong, and he did it anyway.” He did have the ability we call “free will” – and used it to defraud his investors of more than $17bn.
To the free will sceptics, this is all just a desperate attempt at face-saving and changing the subject – an effort to redefine free will not as the thing we all feel, when faced with a choice, but as something else, unworthy of the name. “People hate the idea that they aren’t agents who can make free choices,” Jerry Coyne has argued. Harris has accused Dennett of approaching the topic as if he were telling someone bent on discovering the lost city of Atlantis that they ought to be satisfied with a trip to Sicily. After all, it meets some of the criteria: it’s an island in the sea, home to a civilisation with ancient roots. But the facts remain: Atlantis doesn’t exist. And when it felt like it wasn’t inevitable you’d choose the banana, the truth is that it actually was.
It’s tempting to dismiss the free will controversy as irrelevant to real life, on the grounds that we can’t help but feel as though we have free will, whatever the philosophical truth may be. I’m certainly going to keep responding to others as though they had free will: if you injure me, or someone I love, I can guarantee I’m going to be furious, instead of smiling indulgently on the grounds that you had no option. In this experiential sense, free will just seems to be a given.
But is it? When my mind is at its quietest – for example, drinking coffee early in the morning, before the four-year-old wakes up – things are liable to feel different. In such moments of relaxed concentration, it seems clear to me that my intentions and choices, like all my other thoughts and emotions, arise unbidden in my awareness. There’s no sense in which it feels like I’m their author. Why do I put down my coffee mug and head to the shower at the exact moment I do so? Because the intention to do so pops up, caused, no doubt, by all sorts of activity in my brain – but activity that lies outside my understanding, let alone my command. And it’s exactly the same when it comes to those weightier decisions that seem to express something profound about the kind of person I am: whether to attend the funeral of a certain relative, say, or which of two incompatible career opportunities to pursue. I can spend hours or even days engaged in what I tell myself is “reaching a decision” about those, when what I’m really doing, if I’m honest, is just vacillating between options – until at some unpredictable moment, or when an external deadline forces the issue, the decision to commit to one path or another simply arises.
This is what Harris means when he declares that, on close inspection, it’s not merely that free will is an illusion, but that the illusion of free will is itself an illusion: watch yourself closely, and you don’t even seem to be free. “If one pays sufficient attention,” he told me by email, “one can notice that there’s no subject in the middle of experience – there is only experience. And everything we experience simply arises on its own.” This is an idea with roots in Buddhism, and echoed by others, including the philosopher David Hume: when you look within, there’s no trace of an internal commanding officer, autonomously issuing decisions. There’s only mental activity, flowing on. Or as Arthur Rimbaud wrote, in a letter to a friend in 1871: “I am a spectator at the unfolding of my thought; I watch it, I listen to it.”
There are reasons to agree with Saul Smilansky that it might be personally and societally detrimental for too many people to start thinking in this way, even if it turns out it’s the truth. (Dennett, although he thinks we do have free will, takes a similar position, arguing that it’s morally irresponsible to promote free-will denial.) In one set of studies in 2008, the psychologists Kathleen Vohs and Jonathan Schooler asked one group of participants to read an excerpt from The Astonishing Hypothesis by Francis Crick, co-discoverer of the structure of DNA, in which he suggests free will is an illusion. The subjects thus primed to doubt the existence of free will proved significantly likelier than others, in a subsequent stage of the experiment, to cheat in a test where there was money at stake. Other research has linked a diminished belief in free will to less willingness to volunteer to help others, to lower levels of commitment in relationships, and to lower levels of gratitude.
Unsuccessful attempts to replicate Vohs and Schooler’s findings have called them into question. But even if the effects are real, some free will sceptics argue that the participants in such studies are making a common mistake – and one that might get cleared up rather rapidly, were the case against free will to become better known and understood. Study participants who suddenly become immoral seem to be confusing determinism with fatalism – the idea that if we don’t have free will, then our choices don’t really matter, so we might as well not bother trying to make good ones, and just do as we please instead. But in fact it doesn’t follow from our choices being determined that they don’t matter. It might matter enormously whether you choose to feed your children a diet rich in vegetables or not; or whether you decide to check carefully in both directions before crossing a busy road. It’s just that (according to the sceptics) you don’t get to make those choices freely.
In any case, were free will really to be shown to be nonexistent, the implications might not be entirely negative. It’s true that there’s something repellent about an idea that seems to require us to treat a cold-blooded murderer as not responsible for his actions, while at the same time characterising the love of a parent for a child as nothing more than what Smilansky calls “the unfolding of the given” – mere blind causation, devoid of any human spark. But there’s something liberating about it, too. It’s a reason to be gentler with yourself, and with others. For those of us prone to being hard on ourselves, it’s therapeutic to keep in the back of your mind the thought that you might be doing precisely as well as you were always going to be doing – that in the profoundest sense, you couldn’t have done any more. And for those of us prone to raging at others for their minor misdeeds, it’s calming to consider how easily their faults might have been yours. (Sure enough, some research has linked disbelief in free will to increased kindness.)
Harris argues that if we fully grasped the case against free will, it would be difficult to hate other people: how can you hate someone you don’t blame for their actions? Yet love would survive largely unscathed, since love is “the condition of our wanting those we love to be happy, and being made happy ourselves by that ethical and emotional connection”, neither of which would be undermined. And countless other positive aspects of life would be similarly untouched. As Strawson puts it, in a world without a belief in free will, “strawberries would still taste just as good”.
Those early-morning moments aside, I personally can’t claim to find the case against free will ultimately persuasive; it’s just at odds with too much else that seems obviously true about life. Yet even if only entertained as a hypothetical possibility, free will scepticism is an antidote to that bleak individualist philosophy which holds that a person’s accomplishments truly belong to them alone – and that you’ve therefore only yourself to blame if you fail. It’s a reminder that accidents of birth might affect the trajectories of our lives far more comprehensively than we realise, dictating not only the socioeconomic position into which we’re born, but also our personalities and experiences as a whole: our talents and our weaknesses, our capacity for joy, and our ability to overcome tendencies toward violence, laziness or despair, and the paths we end up travelling. There is a deep sense of human fellowship in this picture of reality – in the idea that, in our utter exposure to forces beyond our control, we might all be in the same boat, clinging on for our lives, adrift on the storm-tossed ocean of luck.
Friday, 15 May 2020
The real message behind 'stay alert': it'll be your fault if coronavirus spreads
This meaningless phrase allows the government to shift blame to the public for failing to be sufficiently responsible, writes Owen Jones in The Guardian
Officially, the new strategy is “personal responsibility” and “good, solid British common sense”, as our prime minister colourfully describes it; unofficially, operation blame the public is well under way. As media outlets query why London’s trains and buses are rammed despite government advice, our transport secretary, Grant Shapps, pleads with silly old commuters not to “flood” back on to public transport.
The small flaw is that the government has ordered millions of workers to return to their jobs, and given the continued failure to invent teleporters, they need a means to bridge the distance between their homes and their work. If you’re a Londoner earning more than £70,000 a year, this is no big deal: about 80% have access to a car, and most can work from home. Unfortunately, nearly half of the capital’s citizens – and over 70% of those earning less than £10,000 – do not have access to a car: if you want to understand those images of packed trains and buses, start here.
It is unsurprising that a government that has presided over Europe’s worst death toll is so invested in shifting the blame. Was it “good, solid British common sense” to pursue herd immunity and impose a lockdown later than other European nations, despite having advance notice of the horrors of Lombardy? Perhaps, indeed, it was “good, solid British common sense” to send vulnerable patients back to care homes without testing them for coronavirus first, seeding the illness in a sector in which up to 22,000 people may have died? Or, who knows, perhaps “good, solid British common sense” could explain how frontline staff have been left exposed by a lack of personal protective equipment?
But the strategy in the government’s new approach is clear. “Stay alert” is meaningless, of course, except to devolve responsibility for what happens next to individuals. Grownups don’t need a nanny state to hold their hands, scoff the government’s outriders: rather than relying on detailed instructions and central diktat, we should rely on our judgment. The implication, of course, is that if there is another spike in infections and death, that will be the public’s fault for not exercising adequate levels of personal responsibility.
Here is a revival of the ideals of High Thatcherism, except applied to a pandemic. Back in the 1980s, what were once known as social problems requiring collective solutions – such as unemployment and poverty – became redefined as individual failings. “Nowadays there really is no primary poverty left in this country,” declared Margaret Thatcher herself. “In western countries we are left with the problems which aren’t poverty. All right, there may be poverty because people don’t know how to budget, don’t know how to spend their earnings, but now you are left with the really hard fundamental character – personality defect.”
If you were poor, it became an increasingly popularised attitude that it was because you were feckless, workshy, stupid and lazy. Thanks to the former Tory minister Norman Tebbit, “get on your bike” became a national cliche: it was more convenient, of course, for the government to pretend that mass unemployment was caused by a lack of effort, graft and can-do determination, rather than monetarist economics that ravaged entire industries.
What the dogma of “personal responsibility” does is erase the inequalities that scar, disfigure and ultimately define society. It pretends that we are all equally free, that our autonomy over our lives and circumstances is the same; that a middle-class professional working from home with access to a car can make the same choices as a cleaner expected to work halfway across a city.
The estimated 60,000 people who have so far died in this national calamity were not wrested from their families because the public failed to be responsible, and neither will the deaths to come in the weeks ahead. Any uptick in infections won’t be down to someone standing one metre rather than two away from their parent in a park. It won’t be down to people inviting neighbours round for forbidden cups of tea in their kitchens, instead of paying poverty wages to cleaners to wash away their dirt.
The explanation will instead be straightforward: the government relaxed a lockdown to force disproportionately working-class people into potentially unsafe environments at the behest of employers who have prioritised economic interests over human life. Another aggravating factor will be the abandonment of clear instructions in favour of confusion. It may well be that this is a deliberate strategy: to claim that the government was perfectly clear, but that the public let the team down by not showing enough “good, solid British common sense”. Whatever happens, the attempt to shift blame for the most disastrous government failure since appeasement on to the public must not succeed. This is on them: they did this, and we must not let them forget it.
Tuesday, 6 March 2018
Europe’s strategic choices on Brexit
Gideon Rachman in The FT
Talk to EU policymakers and you will be told that Britain has yet to make the hard choices on Brexit. The standard line is that Theresa May’s government is still trying to “have its cake and eat it” — leaving the EU, but retaining many of the benefits of membership. Britain must drop this “magical thinking” and make some crucial decisions. Once that is done, the structure of the future EU-UK relationship will be dictated by law and precedent.
That argument has some truth to it. But what it misses is that the EU also has important choices to make. By treating Brexit as, above all, a legal process, the EU is largely ignoring the political and strategic implications of Britain leaving the EU. That is an intellectual failure that could have dangerous consequences for all sides.
It is clearly true that the EU is a legal order. But it is also a political organisation. The EU is perfectly capable of creating new laws — or interpreting current ones with extreme flexibility — when it is politically necessary.
There are many examples of this flexibility in action. France and Germany broke the EU’s Stability and Growth Pact — rather than accept legally mandated fines for breaking its budget-deficit rules. There was a “no bailout” clause for the euro, but Greece was bailed out. Now the European Commission is pursuing Poland for breaching the rule of law, but ignoring equally egregious breaches in Hungary.
So the EU can cherry-pick the law, when it is politically convenient. It can therefore make strategic and political choices on Brexit. And, broadly speaking, it has three options.
Staying tough means sticking with the current line. Britain has chosen to be a third country. There can be no special deals — no “cherry-picking” in the EU’s favoured jargon. There are only two viable models for a “third country”: Norway (which involves membership of the single market) or Canada (which is a pure free trade agreement). Britain must pick one and then accept the consequences.
The arguments for this purist stance are that it protects the integrity of the EU’s single market. If Britain keeps some benefits of EU membership, while ditching many of its obligations, then all 27 members of the EU might seek special deals, and the single market could unravel.
By contrast, if Britain suffers economically from Brexit, that could actually benefit the EU. It would underline the negative consequences of leaving the organisation and undermine Eurosceptic parties across the continent. And jobs and tax revenues could migrate from Britain to the EU.
Compromise on Brexit, the second option, would mean embracing the idea that there should be special arrangements between Britain and the EU. Britain is not any old third country. It has been crucial to the European balance of power for centuries. It has been a member of the EU for decades. And it is currently a major trading partner and military ally for most EU countries. So it sounds unrealistic to say that the UK must be treated exactly like Norway or Canada.
As the EU attempts to navigate an emerging world order — with a rising China and an unpredictable and protectionist US — the strategic alignment of Brexit Britain is uncertain. So it makes sense for the EU to try to pull the UK into a new sort of “special relationship”. By contrast, a Britain that feels humiliated or impoverished by the EU could be an uncomfortable neighbour — with Russia as an extreme example of what can happen when a major European power is at odds with the EU.
Some Europeans, particularly the French, agree that Britain should continue to play a major strategic role in European affairs. But they do not accept that this has any implications for Britain’s economic relationship with the EU. This sounds like a European version of the dreaded “cherry-picking”.
There are plenty of areas where the EU could adopt a more flexible approach on its economic partnership with Britain — if it made the political choice to do so. These could involve the free movement of people, the role of the European Court of Justice, and the mutual recognition of product standards and financial regulations.
The EU’s final option is to force a crisis. If it concludes that Brexit can be reversed and that this is in the EU’s interest (and those are both big “ifs”), then Europe might try to force a political crisis in Britain. This would involve hanging tough for now, hoping that the political fissures in Britain widen and that the May government collapses.
A new administration in the UK might reconsider Brexit — particularly if there was a fresh offer from the EU, perhaps on free movement of people. That might create the impetus for a second referendum in the UK, and a vote to reverse Brexit.
But this approach is also fraught with danger. Crises are obviously unpredictable. If the crisis happened too late in the process, Britain might simply crash out of the EU without a deal. And it is also entirely possible that a second referendum would result in a second vote to leave the EU.
There are powerful arguments to be made for and against each of these three courses of action. But pretending that there are no strategic choices facing the EU should not be an option. That is simply an evasion of responsibility.
Friday, 5 January 2018
Cosmetic changes won't mask England's deep structural flaws
George Dobell in Cricinfo
Having carried the drinks for most of the Ashes tour, Gary Ballance now looks set to carry the can for it.
Ballance, despite not having played a first-class game on the tour, is one of the few involved in this campaign who appears to find his place in jeopardy ahead of the two-Test series in New Zealand in March. A couple of others - notably Jake Ball and James Vince - might be waiting nervously for a tap on the shoulder, too.
But most of the main protagonists in the series - the batsmen who have averaged in the 20s, the bowlers who have averaged over 100 - look set to keep their places. And most of those behind the scenes - the administrators who make the policies that have held England back, as well as the development coaches who have failed to develop a player for years - appear to be immune from consequence.
Nobody is advocating a return to the days when England used 29 players in a series (as they did in the 1989 Ashes). And nobody is advocating an adoption of the culture prevalent in football where managers - well, managers anywhere but in north London - are never more than a bad fortnight away from the sack.
But there has to be a balance. And the problem England - and the ECB - have at present is that they are in danger of breeding and encouraging mediocrity. And, while what appears to be a cosy life goes on for many of those involved, nobody is taking any responsibility.
The ECB have, you know, a pace bowling programme. It is designed to identify the most talented young bowlers and provide them with the best coaching and support to ensure they avoid injury as much as possible. It is designed to optimise their ability and ensure England get the best out of them.
Sounds great, doesn't it?
But let's look at the results: their first change in this Test is a medium-fast bowler who was born in South Africa and invited to England as a 17-year-old. And hard though Tom Curran has worked - and his efforts have been faultless - he has not looked likely to take a wicket. Meanwhile the fast bowlers who have developed in county cricket - the likes of Jamie Overton, Olly Stone, Mark Wood, Atif Sheikh, George Garton, Stuart Meaker and Zak Chappell - are either injured or not deemed consistent enough for selection.
The poverty of the programme has, to some extent, been masked by the enduring excellence of James Anderson and Stuart Broad. That's the same Anderson who went through the Loughborough experience, sustained a stress fracture, lost his ability to swing the ball and reverted to bowling how he did originally. Take them out of this attack - and time will eventually defeat even the apparently indefatigable Anderson - and you have real trouble for England. You have an attack that will struggle to keep them in the top six of the Test rankings.
The ECB have a spin bowling programme, too. A programme that has delivered so little that, here in Sydney, they have taken a punt on a talented kid who, in a more sympathetic domestic system, would be learning his trade bowling over after over for his county. But, as it is, with the Championship squeezed into the margins of the season, Mason Crane (who did fine here after a nervous start; Shane Warne took 1 for 150 on Test debut, remember) has struggled to warrant selection for Hampshire (he played half their Championship games in 2017 and claimed 16 wickets). Other promising young spinners - the likes of Ravi Patel and Josh Poysden - could tell a similar tale.
Meanwhile Adam Riley, who not so long ago was viewed as the most talented young spinner in England - some well-known pundits recommended him for Test selection - didn't play a Championship game for his county, Kent, last season, having previously been identified for inclusion in the ECB's spin programme. Does that sound like a success story?
It is not just those at Loughborough to blame. The county system is ever more marginalised by those who set the policies in English cricket - the likes of Tom Harrison and Andrew Strauss - so the development of Test quality cricketers has been arrested. The struggle to develop red-ball players will only be accentuated by the decision to have a window for white-ball cricket in the middle of the season. With so many games played either before the end of May or after the start of September (when the start time is brought forward to 10.30am), the need for quality spin and pace has been diminished. Why bother to invest the time and effort in developing such spinners and fast bowlers when the likes of Darren Stevens can hit the seam at 65 mph, nibble the ball about, and prove highly effective?
But will anyone be held accountable for this Ashes defeat? Will the director of England cricket take responsibility? Will the development coaches? Will the executives who prioritise T20 over the success of the Test side? Judging by recent events - Harrison telling us that, actually, England cricket has had a fine year, that the pace bowling programme is delivering "excellent results" (he namechecked Mark Footitt as an example of its success) and that changes to the governance of the sport somehow represent an "exciting moment" - the answer is a resounding no.
In the longer term, there is talk around the camp of the creation of a new position. A manager might be appointed - particularly on tours - who would be responsible for discipline within the squad and act as a sort of big brother for players who may be struggling. It would be no surprise if that new appointment - no doubt a recently retired player with experience of such tours - was in place by the time England depart for Sri Lanka in October.
There's probably some sense in such an idea. But it does grate a little that England's response to this latest series loss abroad is the appointment of another layer of middle management.
It's not as if they don't have a fair few figures on tour already. There's already a coach, an assistant coach, a batting coach and, in normal circumstances, a bowling coach. That's before we even consider the doctor, physio, masseuse, selector, strength & conditioning coach, topiarist and woman who makes balloon animals. OK, those last two were made up, but you get the point. Does another manager on tour really answer the questions England are facing? Or does such an appointment further obfuscate who takes responsibility when things go wrong?
The fact is this: England have lost eight out of their last 10 away Tests and won none of them. The only away series they have won since the end of 2012 was the one in South Africa in 2015. Despite being awash with money (relatively speaking), England are about to slip to fifth in the Test rankings.
They really shouldn't be satisfied with that.
Ashes defeats used to hurt. They should hurt. If the ECB have in any way become inured to such pain, if they are in any way content with that away record and anything other than entirely focused on improving it, they are not just accepting mediocrity, they are bathing and swilling in it.
Wednesday, 29 November 2017
How ‘journeys’ are the first defence for sex pests and sinners
Robert Shrimsley in The FT
I want to tell you that I’ve been on a journey — a journey away from personal responsibility. I cannot as yet tell you very much about my journey because I’m not yet clear what it is that I need to have been journeying away from. But I wanted to put it out there, just in case anyone discovers anything bad about me. Because if they do, it is important that you know that was me then, not me now, because I have been on a journey.
Being on a journey is quite the thing these days. In recent weeks, a fair few people have discovered that they too have been on one. It has become the go-to excuse for anyone caught in bad behaviour that happened some time in the past. If you don’t know the way, you head straight for the door marked contrition, turn left at redemption and keep going till you reach self-righteousness.
High-profile journeymen and women include people who have posted really unpleasant comments online. Among those on a journey was a would-be Labour councillor who was on a trek away from wondering why people kept thinking that Hitler was the bad guy. And let’s be fair — who among us hasn’t been on a journey from wondering why Hitler is portrayed as the bad guy? Another journey was embarked on by a Labour MP who had been caught engaging in horrible homophobic remarks, as well as referring to women as bitches and slags. But don’t worry; he’s been on a journey and we can rest assured that he’ll never do it again.
The important thing about being on a journey is that it allows us to separate the hideous git who once made those mistakes from the really rather super human being we see today. For this, fundamentally, is a journey away from culpability, because all that bad stuff — that was old you; the you before you embarked on the journey; the you before you were caught.
But listen, you don’t need to be in the Labour party to go on a journey. Anyone with a suddenly revealed embarrassing past can join in. This is especially important for those unwise enough to have made their mistakes in the era of social media. The beauty of the journey defence is that it plays to our inner sense of fairness. Everyone makes mistakes, so we warm to those who admit to them and seem sincere in their contrition. Sadly, the successful rehabilitation of early voyagers has encouraged any miscreant to view it as the fallback du jour.
But the journey defence will get you only so far. For a start, it requires a reasonable time to have elapsed since the last offence. It is also of limited use in more serious misdemeanours. The journey defence is very good for racist comments, casual homophobia or digital misogyny. It is of little use with serious sexual harassment. For that, you are going to want to have an illness.
You may, for example, need to discover that you are a sex addict. Addiction obviously means that you bear no responsibility for your actions, which, however repellent, are entirely beyond your control. Sex addict also sounds kind of cool, certainly much better than, say, hideous predatory creep. Harvey Weinstein and Kevin Spacey have both — rather recently — discovered they suffer from this terrible affliction. I realise that forcing yourself on young women (or men) might technically be different from sex, but “groping addict” doesn’t sound quite as stylish.
After consultation with your doctors (spin-doctors, that is), you realise that you require extensive treatment at, say, the Carmel Valley Ranch golf course and spa, where you are currently undergoing an intensive course of therapy, massage and gourmet cuisine as you battle your inner demons. If you are American, you might, at this point, ask people to pray for you.
In a way, I suppose, this is also a journey but one that comes with back rubs and fine wine. Alas, the excuses and faux admissions are looking a bit too easy. The sex addicts are somewhat devalued; the journeys are too well trodden. Those seeking to evade personal responsibility are going to need to find a new path to redemption.
Friday, 1 July 2016
No grades, no timetable: Berlin school turns teaching upside down
Philip Oltermann in The Guardian
Anton Oberländer is a persuasive speaker. Last year, when he and a group of friends were short of cash for a camping trip to Cornwall, he managed to talk Germany’s national rail operator into handing them some free tickets. So impressed was the management with his chutzpah that they invited him back to give a motivational speech to 200 of their employees.
Anton, it should be pointed out, is 14 years old.
The Berlin teenager’s self-confidence is largely the product of a unique educational institution that has turned the conventions of traditional teaching radically upside down. At Oberländer’s school, there are no grades until students turn 15, no timetables, no lecture-style instructions. The pupils themselves decide which subjects they want to study for each lesson and when they want to take an exam.
The school’s syllabus reads like any helicopter parent’s nightmare. Set subjects are limited to maths, German, English and social studies, supplemented by more abstract courses such as “responsibility” or “challenge”. For “challenge”, students aged 12 to 14 are each given €150 (£115) and sent on an adventure they have to plan entirely by themselves. Some go kayaking, others work on a farm. Anton went trekking along England’s south coast.
The philosophy behind these innovations is simple: as the requirements of the labour market are changing, and smartphones and the internet are transforming the ways in which young people process information, headteacher Margret Rasfeld argues, the most important skill a school can pass down to its students is the ability to motivate themselves.
“Look at three or four year olds – they are all full of self-confidence,” Rasfeld says. “Often children can’t wait to start school. But frustratingly, most schools then somehow manage to untrain that confidence.”
The Evangelical School Berlin Centre (ESBC) is trying to do nothing less than “reinvent what a school is”, she says. “The mission of a progressive school should be to prepare young people to cope with change, or better still, to make them look forward to change. In the 21st century schools should see it as their job to develop strong personalities.”
Making students listen to a teacher for 45 minutes and punishing them for collaborating on an exercise, Rasfeld says, was not only out of sync with the requirements of the modern world of work but also counterproductive. “Nothing motivates students more than when they discover the meaning behind a subject of their own accord.”
Students at her school are encouraged to think up other ways to prove their acquired skills, such as coding a computer game instead of sitting a maths exam. Oberländer, who had never been away from home for three weeks until he embarked on his challenge in Cornwall, said he learned more English on his trip than he had in several years of learning the language at school.
Germany’s federalised education structure, in which each of the 16 states plans its own education system, has traditionally allowed “free learning” models to flourish. Yet unlike Sudbury, Montessori or Steiner schools, Rasfeld’s institution tries to embed student self-determination within a relatively strict system of rules. Students who dawdle during lessons have to come into school on Saturday morning to catch up, a punishment known as “silentium”. “The more freedom you have, the more structure you need,” says Rasfeld.
The main reason why the ESBC is gaining a reputation as Germany’s most exciting school is that its experimental philosophy has also managed to deliver impressive results. Year after year Rasfeld’s institution ends up with the best grades among Berlin’s gesamtschulen, or comprehensive schools, which combine all three school forms of Germany’s secondary system. Last year’s school leavers achieved an average grade of 2.0, the equivalent of a straight B – even though 40% of the year had been advised not to continue to Abitur, the German equivalent of A-levels, before they joined the school. Having opened in 2007 with just 16 students, the school now operates at full capacity, with 500 pupils and long waiting lists for new applicants.
Given its word-of-mouth success, it is little wonder there have been calls for Rasfeld’s approach to go nationwide. Yet some educational experts question whether the school’s methods can easily be exported: in Berlin, they say, the school can draw the most promising applicants from well-off and progressive families. Rasfeld rejects such criticisms, insisting the school aims for a heterogeneous mix of students from different backgrounds. While a cross adorns the assembly hall and each school day starts with worship, only a third of current pupils are baptised. Thirty per cent of students have a migrant background, and 7% are from households where no German is spoken at all.
Even though the ESBC is one of Germany’s 5,000 private schools, fees are means tested and relatively low compared with those common in Britain, ranging between €720 and €6,636 a year. About 5% of students are exempt from fees.
Yet even Rasfeld admits that finding teachers able to adjust to the school’s learning methods can be harder than getting students to do the same.
Aged 65 and due to retire in July, Rasfeld still has ambitious plans. A four-person “education innovation lab” based at the school has been developing teaching materials for schools that want to follow the ESBC’s lead. About 40 schools in Germany are in the process of adopting some or all of Rasfeld’s methods. One in Berlin’s Weissensee district recently let a student trek across the Alps for a challenge project. “Things are only getting started,” says Rasfeld.
“In education, you can only create change from the bottom – if the orders come from the top, schools will resist. Ministries are like giant oil tankers: it takes a long time to turn them around. What we need is lots of little speedboats to show you can do things differently.”
Anton Oberländer is a persuasive speaker. Last year, when he and a group of friends were short of cash for a camping trip to Cornwall, he managed to talk Germany’s national rail operator into handing them some free tickets. So impressed was the management with his chutzpah that they invited him back to give a motivational speech to 200 of their employees.
Anton, it should be pointed out, is 14 years old.
The Berlin teenager’s self-confidence is largely the product of a unique educational institution that has turned the conventions of traditional teaching radically upside down. At Oberländer’s school, there are no grades until students turn 15, no timetables, no lecture-style instructions. The pupils themselves decide which subjects they want to study for each lesson and when they want to take an exam.
The school’s syllabus reads like any helicopter parents’ nightmare. Set subjects are limited to maths, German, English and social studies, supplemented by more abstract courses such as “responsibility” or “challenge”. For “challenge”, students aged 12 to 14 are each given €150 (£115) and sent on an adventure they have to plan entirely by themselves. Some go kayaking, others work on a farm. Anton went trekking along England’s south coast.
The philosophy behind these innovations is simple: as the requirements of the labour market are changing, and smartphones and the internet are transforming the ways in which young people process information, headteacher Margret Rasfeld argues, the most important skill a school can pass down to its students is the ability to motivate themselves.
“Look at three or four year olds – they are all full of self-confidence,” Rasfeld says. “Often children can’t wait to start school. But frustratingly, most schools then somehow manage to untrain that confidence.”
The Evangelical School Berlin Centre (ESBC) is trying to do nothing less than “reinvent what a school is”, she says. “The mission of a progressive school should be to prepare young people to cope with change, or better still, to make them look forward to change. In the 21st century schools should see it as their job to develop strong personalities.”
Making students listen to a teacher for 45 minutes and punishing them for collaborating on an exercise, Rasfeld says, was not only out of sync with the requirements of the modern world of work but also counterproductive. “Nothing motivates students more than when they discover the meaning behind a subject of their own accord.”
Students at her school are encouraged to think up other ways to prove their acquired skills, such as coding a computer game instead of sitting a maths exam. Oberländer, who had never been away from home for three weeks until he embarked on his challenge in Cornwall, said he learned more English on his trip than he had in several years of learning the language at school.
Reinventing education: pupils at the ESBC, which is gaining a reputation as Germany’s most exciting school.
Germany’s federalised education structure, in which each of the 16 states plans its own education system, has traditionally allowed “free learning” models to flourish. Yet unlike Sudbury, Montessori or Steiner schools, Rasfeld’s institution tries to embed student self-determination within a relatively strict system of rules. Students who dawdle during lessons have to come into school on Saturday morning to catch up, a punishment known as “silentium”. “The more freedom you have, the more structure you need,” says Rasfeld.
The main reason why the ESBC is gaining a reputation as Germany’s most exciting school is that its experimental philosophy has also managed to deliver impressive results. Year after year Rasfeld’s institution ends up with the best grades among Berlin’s gesamtschulen, or comprehensive schools, which combine all three school forms of Germany’s secondary system. Last year’s school leavers achieved an average grade of 2.0, the equivalent of a straight B – even though 40% of the year had been advised not to continue to Abitur, the German equivalent of A-levels, before they joined the school. Having opened in 2007 with just 16 students, the school now operates at full capacity, with 500 pupils and long waiting lists for new applicants.
Given its word-of-mouth success, it is little wonder there have been calls for Rasfeld’s approach to go nationwide. Yet some educational experts question whether the school’s methods can easily be exported: in Berlin, they say, the school can draw the most promising applicants from well-off and progressive families. Rasfeld rejects such criticisms, insisting the school aims for a heterogeneous mix of students from different backgrounds. While a cross adorns the assembly hall and each school day starts with worship, only a third of current pupils are baptised. Thirty per cent of students have a migrant background, and 7% are from households where no German is spoken at all.
Even though the ESBC is one of Germany’s 5,000 private schools, fees are means tested and relatively low compared with those common in Britain, ranging between €720 and €6,636 a year. About 5% of students are exempt from fees.
Yet even Rasfeld admits that finding teachers able to adjust to the school’s learning methods can be harder than getting students to do the same.
Aged 65 and due to retire in July, Rasfeld still has ambitious plans. A four-person “education innovation lab” based at the school has been developing teaching materials for schools that want to follow the ESBC’s lead. About 40 schools in Germany are in the process of adopting some or all of Rasfeld’s methods. One in Berlin’s Weissensee district recently let a student trek across the Alps for a challenge project. “Things are only getting started,” says Rasfeld.
“In education, you can only create change from the bottom – if the orders come from the top, schools will resist. Ministries are like giant oil tankers: it takes a long time to turn them around. What we need is lots of little speedboats to show you can do things differently.”
Saturday, 3 October 2015
How to blame less and learn more
Matthew Syed in The Guardian
Accountability. We hear a lot about it. It’s a buzzword. Politicians should be accountable for their actions; social workers for the children they are supervising; nurses for their patients. But there’s a catastrophic problem with our concept of accountability.
Consider the case of Peter Connelly, better known as Baby P, a child who died at the hands of his mother, her boyfriend and her boyfriend’s brother in 2007. The perpetrators were sentenced to prison. But the media focused its outrage on a different group: mainly his social worker, Maria Ward, and Sharon Shoesmith, director of children’s services. The local council offices were surrounded by a crowd holding placards. In interviews, protesters and politicians demanded their sacking. “They must be held accountable,” it was said.
Many were convinced that the social work profession would improve its performance in the aftermath of the furore. This is what people think accountability looks like: a muscular response to failure. It is about forcing people to sit up and take responsibility. As one pundit put it: “It will focus minds.”
But what really happened? Did child services improve? In fact, social workers started leaving the profession en masse. The numbers entering the profession also plummeted. In one area, the council had to spend £1.5m on agency social work teams because it didn’t have enough permanent staff to handle a jump in referrals.
Those who stayed in the profession found themselves with bigger caseloads and less time to look after the interests of each child. They also started to intervene more aggressively, terrified that a child under their supervision would be harmed. The number of children removed from their families soared. £100m was needed to cope with new child protection orders.
Crucially, defensiveness started to infiltrate every aspect of social work. Social workers became cautious about what they documented. The bureaucratic paper trails got longer, but the words were no longer about conveying information, they were about back-covering. Precious information was concealed out of sheer terror of the consequences.
By almost every estimate, the harm done to children following the attempt to “increase accountability” was severe. Performance collapsed. The number of children killed at the hands of their parents increased by more than 25% in the year following the outcry and remained higher for every one of the next three years.
Let us take a step back. One of the most well-established human biases is called the fundamental attribution error. It is about how the sense-making part of the brain blames individuals, rather than systemic factors, when things go wrong. When volunteers are shown a film of a driver cutting across lanes, for example, they infer that he is selfish and out of control. And this inference may indeed turn out to be true. But the situation is not always as cut-and-dried.
After all, the driver may have the sun in his eyes or be swerving to avoid a car. To most observers looking from the outside in, these factors do not register. It is not that they judge such possibilities irrelevant; it is that often they don’t even consider them. The brain just sees the simplest narrative: “He’s a homicidal fool!”
Even in an absurdly simple event like this, then, it pays to pause and look beneath the surface, to challenge the most reductionist narrative. This is what aviation, as an industry, does. When mistakes are made, investigations are conducted. A classic example comes from the 1940s, when there was a series of seemingly inexplicable accidents involving B-17 bombers. Pilots were pressing the wrong switches. Instead of pressing the switch to lift the flaps, they were pressing the switch to lift the landing gear.
Should they have been penalised? Or censured? The industry commissioned an investigator to probe deeper. He found that the two switches were identical and side by side. Under the pressure of a difficult landing, pilots were pressing the wrong switch. It was an error trap, an indication that human error often emerges from deeper systemic factors. The industry responded not by sacking the pilots but by attaching a rubber wheel to the landing-gear switch and a small flap shape to the flaps control. The buttons now had an intuitive meaning, easily identified under pressure. Accidents of this kind disappeared overnight.
This is sometimes called forward accountability: the responsibility to learn lessons so that future people are not harmed by avoidable mistakes.
But isn’t this soft? Won’t people get sloppy if they are not penalised for mistakes? The truth is quite the reverse. If, after proper investigation, it turns out that a person was genuinely negligent, then punishment is not only justifiable, but imperative. Professionals themselves demand this. In aviation, pilots are the most vocal in calling for punishments for colleagues who get drunk or demonstrate gross carelessness. And yet justifiable blame does not undermine openness. Management takes the time to find out what really happened, giving professionals the confidence that they can speak up without being penalised for honest mistakes.
In 2001, the University of Michigan Health System introduced open reporting, guaranteeing that clinicians would not be pre-emptively blamed. As previously suppressed information began to flow, the system adapted. Reports of drug administration problems led to changes in labelling. Surgical errors led to redesigns of equipment. Malpractice claims dropped from 262 to 83. The number of claims against the University of Illinois Medical Centre fell by half in two years following a similar change. This is the power of forward accountability.
High-performance institutions, such as Google, aviation and pioneering hospitals, have grasped a precious truth. Failure is inevitable in a complex world. The key is to harness these lessons as part of a dynamic process of change. Kneejerk blame may look decisive, but it destroys the flow of information. World-class organisations interrogate errors, learn from them, and only blame after they have found out what happened.
And when Lord Laming reported on Baby P in 2009? Was blame of social workers justified? There were allegations that the report’s findings were prejudged. Even the investigators seemed terrified about what might happen to them if they didn’t appease the appetite for a scapegoat. It was final confirmation of how grotesquely distorted our concept of accountability has become.
Saturday, 11 April 2015
Benaud - the wise old king
Gideon Haigh in Cricinfo
If we don't remember him as an elite legspinner, a thinking captain or one of cricket's true professionals, it's because of the phenomenal work he has done as a commentator, writer and observer
If Arlott was the voice of cricket, Benaud was the face
"Did you ever play cricket for Australia, Mr Benaud?" In his On Reflection, Richie Benaud recalls being asked this humbling question by a "fair-haired, angelic little lad of about 12", one of a group of six autograph seekers who accosted him at the SCG "one December evening in 1982".
"Now what do you do?" Benaud writes. "Cry or laugh? I did neither but merely said yes, I had played up to 1963, which was going to be well before he was born. 'Oh,' he said. 'That's great. I thought you were just a television commentator on cricket.'" Autograph in hand, the boy "scampered away with a 'thank you' thrown over his shoulder".
It is a familiar anecdotal scenario: past player confronted by dwindling renown. But the Benaud version is very Benaudesque. There is the amused self-mockery, the precise observation, the authenticating detail: he offers a date, the number of boys and a description of the appearance of his interlocutor, whose age is cautiously approximated.
In his story Benaud indulges the boy's solecism, realising that it arises not merely from youthful innocence but from the fact that "he had never seen me in cricket gear, and knew me only as the man who did the cricket on Channel 9". Then he segues into several pages of discussion of the changed nature of the cricket audience, ending with a self-disclosing identification. "Some would say such a question of that kind showed lack of respect or knowledge. Not a bit of it… what it did was show an inquiring mind and I'm all in favour of inquiring minds among our young sportsmen. Perhaps that is because I had an inquiring mind when I came into first-class cricket but was not necessarily allowed to exercise it in the same way as young players are now."
I like this passage; droll, reasoned and thoughtful, it tells us much about cricket's most admired and pervasive post-war personality. It is the voice, as Greg Manning phrased it in Wisden Australia, of commentary's "wise old king". It betrays, too, the difficulty in assessing him: in some respects Benaud's abiding ubiquity in England and Australia inhibits appreciation of the totality of his achievements.
In fact, Benaud would rank among Test cricket's elite legspinners and captains if he had never uttered or written a word about the game. His apprenticeship was lengthy - thanks partly to the prolongation of Ian Johnson's career by his tenure as Australian captain - and Benaud's first 27 Tests encompassed only 73 wickets at 28.90 and 868 runs at 28.66.
Then, as Johnnie Moyes put it, came seniority and skipperhood: "Often in life and in cricket we see the man who has true substance in him burst forth into stardom when his walk-on part is changed for one demanding personality and a degree of leadership. I believe that this is what happened to Benaud." In his next 23 Tests, Benaud attained the peak of proficiency - 131 wickets at 22.66 and 830 runs at 28.62, until a shoulder injury in May 1961 impaired his effectiveness.
Australia did not lose a series under Benaud's leadership, although he was defined by his deportment as much as his deeds. Usually bareheaded, and with shirt open as wide as propriety permitted, he was a colourful, communicative antidote to an austere, tight-lipped era. Jack Fingleton likened Benaud to Jean Borotra, the "Bounding Basque of Biarritz" over whom tennis audiences had swooned in the 1920s. Wisden settled for describing him as "the most popular captain of any overseas team to come to Great Britain".
One of Benaud's legacies is the demonstrative celebration of wickets and catches, which was a conspicuous aspect of his teams' communal spirit and is today de rigueur. Another is a string of astute, astringent books, including Way of Cricket (1960) and A Tale of Two Tests (1962), which are among the best books written by a cricketer during his career. "In public relations to benefit the game," Ray Robinson decided, "Benaud was so far ahead of his predecessors that race-glasses would have been needed to see who was at the head of the others."
Benaud's reputation as a gambling captain has probably been overstated. On the contrary he was tirelessly fastidious in his planning, endlessly solicitous of his players and inclusive in his decision-making. Benaud receives less credit than he deserves for intuiting that "11 heads are better than one" where captaincy is concerned; what is commonplace now was not so in his time. In some respects his management model paralleled the "human relations school" in organisational psychology, inspired by Douglas McGregor's The Human Side of Enterprise (1960). Certainly Benaud's theory that "cricketers are intelligent people and must be treated as such", and his belief in "an elastic but realistic sense of self-discipline" could be transliterations of McGregor to a sporting context.
Ian Meckiff defined Benaud as "a professional in an amateur era", a succinct formulation that may partly explain the ease with which he has assimilated the professional present. For if a quality distinguishes his commentary, it is that he calls the game he is watching, not one he once watched or played in. When Simon Katich was awarded his baggy green at Headingley in 2001, it was Benaud whom Steve Waugh invited to undertake the duty.
"Did you ever play cricket for Australia, Mr Benaud?" In his On Reflection, Richie Benaud recalls being asked this humbling question by a "fair-haired, angelic little lad of about 12", one of a group of six autograph seekers who accosted him at the SCG "one December evening in 1982".
"Now what do you do?" Benaud writes. "Cry or laugh? I did neither but merely said yes, I had played up to 1963, which was going to be well before he was born. 'Oh,' he said. 'That's great. I thought you were just a television commentator on cricket.'" Autograph in hand, the boy "scampered away with a 'thank you' thrown over his shoulder".
It is a familiar anecdotal scenario: past player confronted by dwindling renown. But the Benaud version is very Benaudesque. There is the amused self-mockery, the precise observation, the authenticating detail: he offers a date, the number of boys and a description of the appearance of his interlocutor, whose age is cautiously approximated.
In his story Benaud indulges the boy's solecism, realising that it arises not merely from youthful innocence but from the fact that "he had never seen me in cricket gear, and knew me only as the man who did the cricket on Channel 9". Then he segues into several pages of discussion of the changed nature of the cricket audience, ending with a self-disclosing identification. "Some would say such a question of that kind showed lack of respect or knowledge. Not a bit of it… what it did was show an inquiring mind and I'm all in favour of inquiring minds among our young sportsmen. Perhaps that is because I had an inquiring mind when I came into first-class cricket but was not necessarily allowed to exercise it in the same way as young players are now."
I like this passage; droll, reasoned and thoughtful, it tells us much about cricket's most admired and pervasive post-war personality. It is the voice, as Greg Manning phrased it inWisden Australia, of commentary's "wise old king". It betrays, too, the difficulty in assessing him: in some respects Benaud's abiding ubiquity in England and Australia inhibits appreciation of the totality of his achievements.
In fact, Benaud would rank among Test cricket's elite legspinners and captains if he had never uttered or written a word about the game. His apprenticeship was lengthy - thanks partly to the prolongation of Ian Johnson's career by his tenure as Australian captain - and Benaud's first 27 Tests encompassed only 73 wickets at 28.90 and 868 runs at 28.66.
Then, as Johnnie Moyes put it, came seniority and skipperhood: "Often in life and in cricket we see the man who has true substance in him burst forth into stardom when his walk-on part is changed for one demanding personality and a degree of leadership. I believe that this is what happened to Benaud." In his next 23 Tests, Benaud attained the peak of proficiency - 131 wickets at 22.66 and 830 runs at 28.62, until a shoulder injury in May 1961 impaired his effectiveness.
Australia did not lose a series under Benaud's leadership, although he was defined by his deportment as much as his deeds. Usually bareheaded, and with shirt open as wide as propriety permitted, he was a colourful, communicative antidote to an austere, tight-lipped era. Jack Fingleton likened Benaud to Jean Borotra, the "Bounding Basque of Biarritz" over whom tennis audiences had swooned in the 1920s. Wisden settled for describing him as "the most popular captain of any overseas team to come to Great Britain".
One of Benaud's legacies is the demonstrative celebration of wickets and catches, which was a conspicuous aspect of his teams' communal spirit and is today de rigeur. Another is a string of astute, astringent books, including Way of Cricket (1960) and A Tale of Two Tests (1962), which are among the best books written by a cricketer during his career. "In public relations to benefit the game," Ray Robinson decided, "Benaud was so far ahead of his predecessors that race-glasses would have been needed to see who was at the head of the others."
Benaud's reputation as a gambling captain has probably been overstated. On the contrary he was tirelessly fastidious in his planning, endlessly solicitous of his players and inclusive in his decision-making. Benaud receives less credit than he deserves for intuiting that "11 heads are better than one" where captaincy is concerned; what is commonplace now was not so in his time. In some respects his management model paralleled the "human relations school" in organisational psychology, inspired by Douglas McGregor's The Human Side of Enterprise(1960). Certainly Benaud's theory that "cricketers are intelligent people and must be treated as such", and his belief in "an elastic but realistic sense of self-discipline" could be transliterations of McGregor to a sporting context.
Ian Meckiff defined Benaud as "a professional in an amateur era", a succinct formulation that may partly explain the ease with which he has assimilated the professional present. For if a quality distinguishes his commentary, it is that he calls the game he is watching, not one he once watched or played in. When Simon Katich was awarded his baggy green at Headingley in 2001, it was Benaud whom Steve Waugh invited to undertake the duty.
The forgotten legspinner
Benaud's progressive attitude to the game's commercialisation - sponsorship, TV, the one-day game - may also spring partly from his upbringing. In On Reflection he tells how his father, Lou, a gifted legspinner, had his cricket ambitions curtailed when he was posted to the country as a schoolteacher for 12 years. Benaud describes two vows his father took: "If… there were any sons in his family he would make sure they had a chance [to make a cricket career] and there would be no more schoolteachers in the Benaud family."
At an early stage of his first-class career, too, Benaud lost his job with an accounting firm that "couldn't afford to pay the six pounds a week which would have been my due". He criticised the poor rewards for the cricketers of his time, claiming they were "not substantial enough" and that "some players… made nothing out of tours". He contended as far back as 1960 that "cricket is now a business".
Those views obtained active expression when he aligned with World Series Cricket - it "ran alongside my ideas about Australian cricketers currently being paid far too little and having virtually no input into the game in Australia". Benaud's contribution to Kerry Packer's venture, both as consultant and commentator, was inestimable: to the organisation he brought cricket knowhow, to the product he applied a patina of respectability. Changes were wrought in cricket over two years that would have taken decades under the game's existing institutions, and Benaud was essentially their frontman.
In lending Packer his reputation Benaud ended up serving his own. John Arlott has been garlanded as the voice of cricket; Benaud is indisputably the face of it, in both hemispheres, over generations. If one were to be critical, it might be that Benaud has been too much the apologist for modern cricket, too much the Dr Pangloss. It is, after all, difficult to act as an impartial critic of the entertainment package one is involved in selling.
Professionalism, meanwhile, has not been an unmixed blessing: what is match-fixing but professional sport in extremis, the cricketer selling his services to the highest bidder in the sporting free market? Yet Benaud is one of very few certifiably unique individuals in cricket history. From time to time one hears mooted "the next Benaud"; one also knows that this cannot be.
Friday, 2 January 2015
We can’t control how we’ll die. The limits of individual responsibility
It’s important to live healthily, but scientists also tell us that the majority of cancers are down to chance – a good reminder of the limits of individual responsibility
Our terror of death (happy new year, by the way) surely has much to do with a fear that it is out of our control. The lifetime risk of dying in a road accident is disturbingly high – one in 240 – and yet the freakishly small chance of dying in a plane accident generally provokes far more fear. We dread a final few moments in which we are powerless to do anything except wait for oblivion. So perhaps the news that most cancers are the product of bad luck – rather than, say, our diet or lifestyles – is scant reassurance. Most cancers are a random lightning bolt, not something we can avoid by keeping away from tobacco or excessive booze, or by going for regular morning runs. That’s something we have to live with.
But perhaps the news should be of comfort. It is, of course, crucial to promote healthy lifestyles. Regular exercise, a good diet and the avoidance of excess does save lives. Yet the cult of individualism fuels the idea that we are invariably personally responsible for the situation we are in: whether that be poverty, unemployment or ill health. Cancer is more individualised than most diseases: all that talk of “losing” or “winning” battles. A far wiser approach was summed up by DJ Danny Baker after his own diagnosis. He said he was “just the battlefield, science is doing the fighting and of course the wonderful docs and nurses of the brilliant NHS”. The cancer patient, in other words, was practically a bystander in a collective effort.
One of the heroes of 2014 was Stephen Sutton because of his infectious optimism and cheerfulness in the face of cancer. But his battle was about not letting cancer consume his final few months on earth, rather than a superhuman quest to miraculously defeat the disease himself. What struck me about Stephen was that a situation that seemed nightmarish to most of us became an opportunity for him to take control of his life. It is what struck me, too, about Gordon Aikman, a 29-year-old Scot with a terminal diagnosis of motor neurone disease. There is no right way to die, but he has learned how to live.
So that’s why I have some sympathy with Richard Smith, a doctor who once edited the British Medical Journal. He has upset many by suggesting we are “wasting billions trying to cure cancer”, when it is the “best” way to go. I certainly would not advocate cutting back on cancer research, quite the opposite – even if other fatal diseases don’t receive the same amount of attention – and cancer can be a horrible way to die. But his point was that it provided an opportunity to make peace, to reflect on life, to do all the things you always wanted to do – to finally have control over your own life. Other ways of dying simply do not provide that option, either because they are so sudden or because of the form they take.
We have less of a say over how and when we die than we thought. That may be a cause for anxiety: it may actually frighten us more. I think it’s liberating. If only we learned to live like many of those – like Stephen or Gordon – facing death, taking control of their lives, we would be so much happier than we are.