
Sunday 9 October 2016

An Open Letter to Moderate Muslims

Ali A Rizvi in The Huffington Post

Let’s start with what I’m not going to do.
I’m not going to accuse you of staying silent in the face of the horrific atrocities being committed around the world by your co-religionists. Most of you have loudly and unequivocally condemned groups like the Islamic State (ISIS), and gone out of your way to dissociate yourselves from them. You have helped successfully isolate ISIS and significantly damage its credibility.
I’m also not going to accuse you of being sympathetic to fundamentalists’ causes like violent jihad or conversion by force. I know you condemn their primitive tactics like the rest of us, maybe even more so, considering the majority of victims of Islamic terrorists are moderate Muslims like yourselves. On this, I am with you.
But I do want to talk to you about your increasingly waning credibility — a concern many of you have articulated as well.
You’re feeling more misunderstood than ever, as Islamic fundamentalists hijack the image of Muslims, ostentatiously presenting themselves as the “voice of Islam.” And worse, everyone seems to be buying it.
The frustration is evident. In response to comedian Bill Maher’s recent segment ripping liberals for their silence on criticizing Islam, religious scholar Reza Aslan slammed him in a CNN interview. Visibly exasperated, he ultimately resorted to using words like “stupid” and “bigot” to make his points. (He apologized for this later.)
We’ll get to Aslan’s other arguments in a bit. But first, let’s talk about something he said to his hosts that I know many of you relate to: that moderate Muslims are too often painted with the same brush as their fundamentalist counterparts. This is often true, and is largely unfair to moderates like yourselves.
But you can’t simply blame this on the “ignorance” or “bigotry” of non-Muslims, or on media bias. Non-Muslims and the media are no more monolithic than the Muslim world you and I come from.
The problem is this: moderate Muslims like you also play a significant role in perpetuating this narrative — even if you don’t intend to.
To understand how, it’s important to see how it looks from the other side.
***
Tell me if this sounds familiar:
(1) A moderate Muslim states that ISIS is wrong, they aren’t “true” Muslims, and Islam is a religion of peace.

(2) A questioner asks: what about verses in the Quran like 4:89, saying to “seize and kill” disbelievers? Or 8:12-13, saying God sent angels to “smite the necks and fingertips” of disbelievers, foreboding a “grievous penalty” for whoever opposes Allah and his Messenger? Or 5:33, which says those who “spread corruption” (a vague phrase widely believed to include blasphemy and apostasy) should be “killed or crucified”? Or 47:4, which also prescribes beheading for disbelievers encountered in jihad?
(3) The Muslim responds by defending these verses as Allah’s word — he insists that they have been quoted “out of context,” have been misinterpreted, are meant as metaphor, or that they may even have been mistranslated.

(4) Despite being shown multiple translations, or told that some of these passages (like similar passages in other holy books) are questionable in any context, the Muslim insists on his/her defense of the Scripture.
Sometimes, this kind of exchange will lead to the questioner being labeled an “Islamophobe,” or being accused of bigotry, as Aslan did with Maher and his CNN hosts. This is a very serious charge that is very effective at ending the conversation. No one wants to be called a bigot.
But put yourself in the shoes of your non-Muslim audience. Is it really them linking Islam to terrorism? We’re surrounded with images and videos of jihadists yelling “Allahu Akbar” and quoting passages from the Quran before beheading someone (usually a non-Muslim), setting off an explosion, or rallying others to battle. Who is really making this connection?
What would you do if this situation was reversed? What are non-Muslims supposed to think when even moderate Muslims like yourselves defend the very same words and book that these fundamentalists effortlessly quote as justification for killing them — as perfect and infallible?
Like other moderates, Reza Aslan frequently bemoans those who read the Quran “literally.” Interestingly enough, we sort of agree on this: the thought of the Quran being read “literally” — or exactly as Allah wrote it — unsettles me as much as it unsettles Reza.
This is telling, and Reza isn’t alone. Many of you insist on alternative interpretations, some kind of metaphorical reading — anything to avoid reading the holy book the way it’s actually written. What message do you think this sends? To those on the outside, it implies there is something lacking in what you claim is God’s perfect word. In a way, you’re telling the listener to value your explanations of these words over the sacred words themselves. Obviously, this doesn’t make a great case for divine authorship. Combined with the claims that the book is widely misunderstood, it makes the writer appear either inarticulate or incompetent. I know that’s not the message you mean to send — I’ve been where you are. But it is important to understand why it comes across that way to many non-Muslims.
If any kind of literature is to be interpreted “metaphorically,” it has to at least represent the original idea. Metaphors are meant to illustrate and clarify ideas, not twist and obscure them. When the literal words speak of blatant violence but are claimed to really mean peace and unity, we’re no longer in the territory of interpretation and metaphor; we’re heading into distortion and misrepresentation. If this disconnect were limited to one or two verses, I would consider your argument. If your interpretation were accepted by all of the world’s Muslims, I would consider your argument. Unfortunately, neither of these is the case.
You may be shaking your head at this point. I know your explanations are very convincing to fellow believers. That’s expected. When people don’t want to abandon their faith or their conscience, they’ll jump on anything they can find to reconcile the two.
But believe me, outside the echo chamber, all of this is very confusing. I’ve argued with Western liberals who admit they don’t find these arguments convincing, but hold back their opinions for fear of being seen as Islamophobic, or in the interest of supporting moderates within the Muslim community who share their goals of fighting jihad and fundamentalism. Many of your liberal allies are sincere, but you’d be surprised how many won’t tell you what they really think because of fear or political correctness. The only difference between them and Bill Maher is that Maher actually says it.
Unfortunately, this is what’s eating away at your credibility. This is what makes otherwise rational moderate Muslims look remarkably inconsistent. Despite your best intentions, you also embolden anti-Muslim bigots — albeit unknowingly — by effectively narrowing the differences between yourselves and the fundamentalists. You condemn all kinds of terrible things being done in the name of your religion, but when the same things appear as verses in your book, you use all your faculties to defend them. This comes across as either denial or disingenuousness, both of which make an honest conversation impossible.
This presents an obvious dilemma. The belief that the Quran is the unquestionable word of God is fundamental to the Islamic faith, and held by the vast majority of Muslims worldwide, fundamentalist or progressive. Many of you believe that letting it go is as good as calling yourself non-Muslim. I get that. But does it have to be that way?
Having grown up as part of a Muslim family in several Muslim-majority countries, I’ve been hearing discussions about an Islamic reformation for as long as I can remember. Ultimately, I came to believe that the first step to any kind of substantive reformation is to seriously reconsider the concept of scriptural inerrancy.
And I’m not the only one. Maajid Nawaz, a committed Muslim, speaks openly about acknowledging problems in the Quran. Recently, in a brave article right here on The Huffington Post, Imra Nazeer also asked Muslims to reconsider treating the Quran as infallible.
Is she right? At first glance, this may be a shocking thought. But it’s possible, and it actually has precedent.
***
I grew up in Riyadh, Saudi Arabia, before the Internet. We had an after-school tutor who taught us to read and recite the Quran in classical Arabic, the language in which it’s written.
My family is among the majority of the world’s 1.6 billion Muslims — concentrated in countries like Indonesia, India, Pakistan, Turkey and Iran — that doesn’t speak Arabic. Millions of us, however, can read the Quran in Arabic, even if we don’t understand it.
In most Muslim households, the Quran is physically placed at the highest place possible. In our house, it was at the top of a tall bookshelf. It cannot be physically touched unless an act of ablution/purification (wudhu) is first performed. It cannot be recited or touched by menstruating women. It is read in its entirety during the Sunni taraweeh prayers in the holy month of Ramadan. In many Muslim communities, it is held over the heads of grooms and brides as a blessing when they get married. A child completing her first reading of the Quran is a momentous occasion — parties are thrown, gifts are given.
But before the Internet, I rarely met anyone — including the devoutly religious — who had really read the Quran in their own language. We just went by what we heard from our elders. We couldn’t Google or verify things instantaneously like we do now.
There were many things in the Quran we didn’t know were in there. Like Aslan, we also mistakenly thought that harsh punishments in Saudi Arabia like decapitation and hand amputation were cultural and not religious. Later, we learned that the Quran does indeed prescribe beheadings, and says clearly in verse 5:38 that thieves, male or female, should have their hands cut off.
Now, there are also other things widely thought to be in the Quran that aren’t actually in there. A prominent example is the hijab or burka — neither is mentioned in the Quran. Also absent is stoning to death as a punishment — it’s mentioned in the hadith (the Sunnah, or traditions of the Prophet), and even in the Old Testament — but not in the Quran.
Neither male nor female circumcision (M/FGM) is found in the Quran. Again, however, both are mentioned in the hadith. When Aslan discussed FGM, he neglected to mention that of the four Sunni schools of jurisprudence, the Shafi’i school makes FGM mandatory based on these hadith, and the other three schools recommend it. This is why Indonesia, the largest Muslim country in the world and predominantly Shafi’i (the very country where Aslan said women were “absolutely 100% equal” to men), has an FGM prevalence of at least 86%, with over 90% of families supporting the practice. And the world’s largest Arab Muslim country, Egypt, has an FGM prevalence of over 90%. So yes, both male and female genital cutting pre-date Islam. But it is inaccurate to say that they have no connection whatever to the religion.
***
That is the kind of information I could never reliably access growing up. But with the Internet came exposure.
Suddenly, every 12-year-old kid could search multiple translations of the Quran by topic, in dozens of languages. Nothing was hidden. It was all right there to see. When Lee Rigby’s murderer cited Surah At-Tawbah to justify his actions, we could go online and see exactly what he was talking about. When ISIS claims divine sanction for its actions by citing verse 33 from Surah Al-Maaidah or verse 4 from Surah Muhammad, we can look it up for ourselves and connect the dots.
Needless to say, this is a pretty serious problem, one that you must address. When people see moderates insisting that Islam is peaceful while also defending these verses and claiming they’re misunderstood, it appears inconsistent. When they read these passages and see fundamentalists carrying out exactly what they say, it appears consistent. That’s scary. You should try to understand it. Loudly shouting “Racist!” over the voices of critics, as Ben Affleck did over Maher and Sam Harris last week, isn’t going to make it go away.
(Also, if you think criticizing Islam is racist, you’re saying that all of Islam is one particular race. There’s a word for that.)
Yes, it’s wrong and unfair for anyone to judge a religion by the actions of its followers, be they progressive Muslims or al Qaeda. But it is appropriate and intellectually honest to judge it by the contents of its canonical texts — texts that are now accessible online to anyone and everyone at the tap of a finger.
Today, you need to do better when you address the legitimate questions people have about your beliefs and your holy book. Brushing off everything that is false or disturbing as “metaphor” or “misinterpretation” just isn’t going to cut it. Neither is dismissing the questioner as a bigot.
How, then, to respond?
***
For starters, it might help to read not only the Quran, but the other Abrahamic texts. When you do, you’ll see that the Old Testament has just as much violence, if not more, than the Quran. Stoning blasphemers, stoning fornicators, killing homosexuals — it’s all in there. When you get about ten verses deep into Deuteronomy 20, you may even swear you’re reading a rulebook for ISIS.
You may find yourself asking, how is this possible? The book of the Jews is not much different from my book. How, then, are the majority of them secular? How is it that most don’t take too seriously the words of the Torah/Old Testament — originally believed to be the actual word of God revealed to Moses much like the Quran to Muhammad — yet still retain strong Jewish identities? Can this happen with Islam and Muslims?
Clearly from the above, the answer is a tried-and-tested yes. And it must start by dissociating Islamic identity from Muslim identity — by coming together on a sense of community, not ideology.
Finding consensus on ideology is impossible. The sectarian violence that continues to plague the Muslim world, and has killed more Muslims than any foreign army, is blatant evidence for this. But coming together on a sense of community is what moves any society forward. Look at other Abrahamic religions that underwent reformations. You know well that Judaism and Christianity had their own violence-ridden dark ages; you mention it every chance you get nowadays, and you’re right. But how did they get past that?
Well, as much as the Pope opposes birth control, abortion and premarital sex, most Catholics today are openly pro-choice, practice birth control, and fornicate to their hearts’ content. Most Jews are secular, and many even identify as atheists or agnostics while retaining the Jewish label. The dissidents and the heretics in these communities may get some flak here and there, but they aren’t getting killed for dissenting.
This is in stark contrast to the Muslim world where, according to a worldwide 2013 Pew Research Study, a majority of people in large Muslim-majority countries like Egypt and Pakistan believe that those who leave the faith must die. They constantly obsess over who is a “real” Muslim and who is not. They are quicker to defend their faith from cartoonists and filmmakers than they are to condemn those committing atrocities in its name. (Note: To their credit, the almost universal, unapologetic opposition to ISIS from Muslims is a welcome development.)
***
The word “moderate” has lost its credibility. Fareed Zakaria has referred to Middle Eastern moderates as a “fantasy.” Even apologists like Nathan Lean are pointing out that the use of this word isn’t helping anyone.
Islam needs reformers, not moderates. And words like “reform” just don’t go very well with words like “infallibility.”
The purpose of reform is to change things, fix the system, and move it in a new direction. And to fix something, you have to acknowledge that it’s broken — not that it looks broken, or is being falsely portrayed as broken by the wrong people — but that it’s broken. That is your first step to reformation.
If this sounds too radical, think back to the Prophet Muhammad himself, who was chased out of Mecca for being a radical dissident fighting the Quraysh. Think of why Jesus Christ was crucified. These men didn’t capitulate or shy away from challenging even the most sacred foundations of the status quo.
These men certainly weren’t “moderates.” They were radicals. Rebels. Reformers. That’s how change happens. All revolutions start out as rebellions. Islam itself started this way. Openly challenging problematic ideas isn’t bigotry, and it isn’t blasphemy. If anything, it’s Sunnah.
Get out there, and take it back.


Saturday 3 October 2015

How to blame less and learn more

Matthew Syed in The Guardian

Accountability. We hear a lot about it. It’s a buzzword. Politicians should be accountable for their actions; social workers for the children they are supervising; nurses for their patients. But there’s a catastrophic problem with our concept of accountability.

 Consider the case of Peter Connelly, better known as Baby P, a child who died at the hands of his mother, her boyfriend and her boyfriend’s brother in 2007. The perpetrators were sentenced to prison. But the media focused its outrage on a different group: mainly his social worker, Maria Ward, and Sharon Shoesmith, director of children’s services. The local council offices were surrounded by a crowd holding placards. In interviews, protesters and politicians demanded their sacking. “They must be held accountable,” it was said.

Many were convinced that the social work profession would improve its performance in the aftermath of the furore. This is what people think accountability looks like: a muscular response to failure. It is about forcing people to sit up and take responsibility. As one pundit put it: “It will focus minds.”

But what really happened? Did child services improve? In fact, social workers started leaving the profession en masse. The numbers entering the profession also plummeted. In one area, the council had to spend £1.5m on agency social work teams because it didn’t have enough permanent staff to handle a jump in referrals.

Those who stayed in the profession found themselves with bigger caseloads and less time to look after the interests of each child. They also started to intervene more aggressively, terrified that a child under their supervision would be harmed. The number of children removed from their families soared. £100m was needed to cope with new child protection orders.

Crucially, defensiveness started to infiltrate every aspect of social work. Social workers became cautious about what they documented. The bureaucratic paper trails got longer, but the words were no longer about conveying information, they were about back-covering. Precious information was concealed out of sheer terror of the consequences.

By almost every estimate, the harm done to children following the attempt to “increase accountability” was severe. Performance collapsed. The number of children killed at the hands of their parents increased by more than 25% in the year following the outcry and remained higher for every one of the next three years.

Let us take a step back. One of the most well-established human biases is called the fundamental attribution error. It is about how the sense-making part of the brain blames individuals, rather than systemic factors, when things go wrong. When volunteers are shown a film of a driver cutting across lanes, for example, they infer that he is selfish and out of control. And this inference may indeed turn out to be true. But the situation is not always as cut-and-dried.

After all, the driver may have the sun in his eyes or be swerving to avoid a car. To most observers looking from the outside in, these factors do not register. It is not that they consider such possibilities and dismiss them as irrelevant; often they do not even consider them at all. The brain just sees the simplest narrative: “He’s a homicidal fool!”

Even in an absurdly simple event like this, then, it pays to pause to look beneath the surface, to challenge the most reductionist narrative. This is what aviation, as an industry, does. When mistakes are made, investigations are conducted. A classic example comes from the 1940s, when there was a series of seemingly inexplicable accidents involving B-17 bombers. Pilots were pressing the wrong switches. Instead of pressing the switch to lift the flaps, they were pressing the switch to lift the landing gear.

Should they have been penalised? Or censured? The industry commissioned an investigator to probe deeper. He found that the two switches were identical and side by side. Under the pressure of a difficult landing, pilots were pressing the wrong switch. It was an error trap, an indication that human error often emerges from deeper systemic factors. The industry responded not by sacking the pilots but by attaching a rubber wheel to the landing-gear switch and a small flap shape to the flaps control. The buttons now had an intuitive meaning, easily identified under pressure. Accidents of this kind disappeared overnight.

This is sometimes called forward accountability: the responsibility to learn lessons so that future people are not harmed by avoidable mistakes.

But isn’t this soft? Won’t people get sloppy if they are not penalised for mistakes? The truth is quite the reverse. If, after proper investigation, it turns out that a person was genuinely negligent, then punishment is not only justifiable, but imperative. Professionals themselves demand this. In aviation, pilots are the most vocal in calling for punishments for colleagues who get drunk or demonstrate gross carelessness. And yet justifiable blame does not undermine openness, because it comes only after a proper investigation: management takes the time to find out what really happened, which gives professionals the confidence that they can speak up without being penalised for honest mistakes.

In 2001, the University of Michigan Health System introduced open reporting, guaranteeing that clinicians would not be pre-emptively blamed. As previously suppressed information began to flow, the system adapted. Reports of drug administration problems led to changes in labelling. Surgical errors led to redesigns of equipment. Malpractice claims dropped from 262 to 83. The number of claims against the University of Illinois Medical Centre fell by half in two years following a similar change. This is the power of forward accountability.

High-performance institutions, such as Google, aviation and pioneering hospitals, have grasped a precious truth. Failure is inevitable in a complex world. The key is to harness these lessons as part of a dynamic process of change. Kneejerk blame may look decisive, but it destroys the flow of information. World-class organisations interrogate errors, learn from them, and only blame after they have found out what happened.

And when Lord Laming reported on Baby P in 2009? Was blame of social workers justified? There were allegations that the report’s findings were prejudged. Even the investigators seemed terrified about what might happen to them if they didn’t appease the appetite for a scapegoat. It was final confirmation of how grotesquely distorted our concept of accountability has become.

Monday 20 April 2015

Cricket Coaching: Follow in the bare footsteps of the Kalenjin

Ed Smith in Cricinfo

What can the story of running shoes among Kenyan athletes teach us about cricket? More than I thought possible.

Nearly all top marathon runners are Kenyan. In fact, they are drawn from a particular Kenyan tribe, the Kalenjin, an ethnic group of around 5 million people living mostly at altitude in the Rift Valley.

Here is the really interesting thing. The majority of top marathon runners grow up running without shoes. The debate about whether other athletes should try to mimic barefoot-running technique remains unresolved. However, it is overwhelmingly likely that the unshod childhoods of the Kalenjin do contribute to their pre-eminence as distance runners.*

And yet it is also true that as soon as Kalenjin athletes can afford running shoes, they do buy them. They know that the protection offered by modern shoes helps them to rack up the epic number of hours of training required to become a serious distance runner.

So there is a paradox about long-term potential and running shoes. If an athlete wears shoes too often and too early, when his natural technique and running style are still evolving, he significantly reduces his chances of becoming a champion distance runner. But if he doesn't wear them at all in the later stages of his athletic education, he jeopardises his ability to train and perform optimally when it matters.

Put simply, the advantages of modernity and technology need to be first withheld and then embraced. Most Kenyan runners begin wearing trainers in their mid-teens. Some sports scientists argue that if they could hold off for another two or three years, they'd be even better as adult athletes. But no one knows for sure exactly when is the "right" time to start running in shoes. We glimpse the ideal athletic childhood, but its contours remain extremely hazy. 

Logically, there is a further complexity. Imagine two equally talented developing athletes, one with shoes, the other barefoot, neither yet at their athletic peak. Wearing shoes, by assisting training and recovery, would yield an advantage at the time. But that short-term advantage would leave behind a long-term disadvantage, by depriving the athlete of the legacy that barefoot runners enjoy when they begin wearing shoes at a later stage. In other words, building the right foundations during adolescence is more important than doing whatever it takes to win at the time.

Where is the cricket here? When I read about the strange influence of first learning barefoot then using the latest technology - in the admirable and thought-provoking book Two Hours by Ed Caesar, published this July - I wrote in the margin: just like cricket coaching.

A modern player seeking an edge over his opponents would be mad not to have access to the latest kit, technology, data, fitness coaching and rehab techniques. But if he comes to rely on the interventions and apparatus of coaches and trainers too early, when his game and character are still in flux, then he misses out on the biggest advantage of the lot: self-reliance and learning from trial and error. In other words, there is no conflict between homespun training methods and sports science. It is a question of the right amount at the right time. Indeed, the art of training always relies on subtly mixing technique and science with folk wisdom and feeling.

Consider the greatest of all cricketing educations. As a child, Don Bradman learnt to bat on his own - repeatedly hitting a golf ball against the curved brick base of his family water tank. The empirical method led him to a technique that no one had dared to try. His bat swing started way out to the side, rather than a straight pendulum line from behind him. He had escaped the greatest risk that can befall any genius: an early overdose of prescriptive formal education.

Kevin Pietersen, in his pomp the most exciting England batsman of his era, was also self-taught to an unusual degree. It was ironic that, in his recent autobiography, Pietersen was so keen to argue in words that he knew better than "the system". In his earlier days, he made the point more eloquently with his bat. Having arrived from South Africa as an offspinning allrounder, he became one of the most thrilling batsmen in the world. Think of all the money and effort - the "pyramids of excellence" and "talent conveyor belts" - expended on manufacturing great English players. And one of the best of them was untouched - some would say undamaged - by the whole apparatus. He figured things out for himself.

Connected to the question of impairing natural development is the problem of over-training and specialising too early. The now debunked "10,000 hours theory" - which holds that genius is created by selecting a discipline as early as possible and then loading on mountains of practice - is being replaced by a far more subtle understanding of nurturing talent.

A study of professional baseball players showed that keeping up football and basketball in teenage years increased the likelihood of making it as a top baseball pro. In his fine book The Sports Gene, David Epstein assembles persuasive evidence that Roger Federer's sporting education (a mixture of badminton, basketball, football as well as tennis) is far more typical of great athletes than the Tiger Woods-style mono-focus that is so often held up as the model.

When the psychologists John Sloboda and Michael Howe studied gifted children at a musical academy, they found that extra lessons for younger musicians proved counterproductive: the kids just burned out. The best players, it turned out, had practised the least as children. Diversity was just as important. The exceptional players practised much less at their first instrument, but much more than the average players on their third instrument.

So if you want an Under-13s champion, yes, buy the latest kit, bully him to practise all hours, pick one sport and make him eliminate all the others. But you are merely reducing the likelihood of producing an adult champion.

Even professionals can aspire to retain the receptivity of children who are learning by playful sampling rather than through directed orthodoxy. I once organised the first phase of pre-season training for a cricket team. I tried to change the culture from one of compliance - if I don't do what I'm told, I'll get in trouble - towards self-regulation, the ability to feel and respond to your game as you push yourself and find out what works and what doesn't. The Kalenjin have mastered that, too. Even at the very top, the athletes continue to lead the training sessions. They take what they need to from science but they trust their intuition.

*A barefoot childhood is by no means the only factor. A recent study showed that the Kalenjin elite runners had 5% longer legs and 12% lighter legs than a sample of top Swedish runners. The Kalenjin also have an unusual mixture of sea-level ancestry (they moved from the low-lying Nile Valley to the elevated Rift Valley only a few centuries ago) and altitude living. Physiologically, they are valley people who live up the mountain. There are also, inevitably, a host of environmental factors.

Wednesday 13 November 2013

What Does it Mean to be a Physician?


Michael E. Whitcomb, MD

 Extraordinary changes have occurred during the past few decades in the design and conduct of the medical school curriculum. To a great extent, this reflects a commitment by medical school deans and faculties to better prepare their students for the challenges they will face throughout their professional careers. The changes that have been adopted are truly impressive, yet there is still more to be accomplished. I have suggested on several occasions that in order for the medical education community to be clear about the kind of changes that are needed, the community needs to define more clearly the purpose of the educational program.1,2 And I have suggested that in order to reach agreement on that purpose, the community must first answer a fundamental question: What does it mean to be a physician?
This approach reflects my belief that one of the primary purposes of the educational program is for students to learn, in depth, what it means to be a physician. After all, the title is bestowed upon them when they graduate from medical school, even though they are not yet prepared for the actual practice of medicine. Even so, shouldn’t they have an understanding of what it means to be a physician when they receive the title? In posing the question I am not seeking a formal definition of the term physician that one might find in a dictionary. My intent, instead, is to seek agreement within the medical education community on the attributes—that is, the personal qualities—that a physician should possess if he or she is to be capable of meeting the public’s expectations of a doctor.
Some have suggested that possessing a body of knowledge and a set of skills that can be applied in the practice of medicine defines what it means to be a physician. Now, there is no question that certain knowledge and certain skills are essential elements of being a physician. But it is also clear that the knowledge and skills required vary depending upon the particular career path a physician has chosen. So, while it is essential for physicians to be knowledgeable and skillful in order to engage in the practice of medicine, it is not possible to define what it means to be a physician by identifying a body of knowledge and a set of skills that all physicians must possess. On the other hand, there is a specific set of personal attributes that I maintain all physicians should possess if they are to meet the public’s expectations, and that it is those attributes that define the essence of what it means to be a physician.
First, a physician must be caring. One of the most famous quotes in the annals of American medicine comes from the address Francis Peabody gave to Harvard medical students in 1925.3 In that address, Peabody stated, “The secret of the care of the patient is in caring for the patient.” There are many texts that describe in eloquent terms the value that patients place on being truly cared for by a physician. But in modern times, members of the medical profession have too often equated caring with treatment, and have tended at times to limit their role to providing treatment leading to a cure. Unfortunately, this approach has too often meant that physicians ignored the importance of a caring manner, no matter what the treatment options were. Worse, once a patient could no longer be cured, too many physicians believed that there was nothing more to be done and attended in only a minimum way to the patient’s needs. In fact, it is now clear that caring for patients becomes more critical in situations in which the patient understands that treatment will no longer be useful and cure is no longer possible.
A few years ago, the Hastings Center initiated a project to define the goals of medicine.4 One of the four major goals that evolved from the project was called The Care and Cure of Those with a Malady, and the Care of Those Who Cannot be Cured. It is essential, therefore, that physicians understand clearly that to serve the goals of medicine, they have a responsibility to continue to care for their patients when they can no longer prescribe a particular form of treatment or offer the likelihood of a cure. If they do not continue to provide care under those circumstances—that is, by being caring—their patients will sense that they have been abandoned by their doctor at a critical time. Clearly, the essence of what it means to be a physician requires that a physician not allow this to occur.
Second, physicians must be inquisitive. Medicine has a long tradition of celebrating all that the members of the profession know about mechanisms of disease and the diagnosis and management of various clinical maladies. Indeed, admission to the study of medicine and advancement throughout the various stages of one’s career are often based solely on what one knows. But the fact is that there is a great deal about medicine that is not known, and there is a great deal that individual physicians do not know about what is known.
Given that, the value of physicians’ being inquisitive about medicine is clear. This attribute contributes in an important way to the quality of care provided by physicians by ensuring that they continue to acquire the knowledge and skills they will need to meet their professional responsibilities as the nature of medicine changes during their careers. But it is also important to recognize that this attribute contributes in a more immediate way to the quality of the care provided to individual patients.
In his new book, How Doctors Think, Jerome Groopman5 emphasizes that most of the diagnostic errors made by physicians result from cognitive mistakes. He points out that because of the uncertainty inherent in the practice of medicine, there is a tendency for physicians when encountering a patient to lock in too soon on a particular diagnosis or a particular approach to treatment. By doing so, the physician runs the risk of overlooking clues suggesting that the working diagnosis may not be correct. Even though a patient may present with the classic manifestations of a particular malady, the true physician will always pause before making a diagnosis and embarking on a course of therapy by asking himself or herself, What is there about this patient’s presentation that I don’t understand? Or, importantly, What is there about this patient that I should know before proceeding?
And finally, physicians must be civic minded. This concept can be difficult to grasp, because in modern times the civic responsibility of the individual physician tends to be obscure. Over the years, this responsibility has come to be viewed as an element of professionalism that is somehow embedded, at least implicitly, within the context of the social contract that defines the medical profession’s responsibility to the society as a whole—a responsibility manifested largely by how professional organizations relate to the public. But Bill Sullivan6 suggests in Work and Integrity: The Crisis and Promise of Professionalism in America that it is critically important that individual physicians become more personally involved in meeting medicine’s responsibility to society. In his view, they must concern themselves with ensuring that the professional organizations to which they belong are focused on serving the interests of the public, rather than simply serving the interests of the organization’s members. But the civic mindedness of physicians should go beyond that to include consciously contributing in a variety of ways to the betterment of the communities in which they live by participating in community organizations and bringing their special talents to bear in volunteer efforts specifically aimed at improving the health of the public.
So, I suggest that although a physician who is not caring, inquisitive, and civic minded may be a highly skilled technician involved in the practice of medicine, such an individual will not truly reflect the essence of what it means to be a physician. Given this, it is essential that as medical schools continue to modify their educational programs, they ensure that those programs reflect a commitment to ensuring that their graduates be caring, inquisitive, and civic-minded physicians. Deans and faculties of medical schools must understand clearly that while their graduates will spend their residencies acquiring much of the knowledge and many of the skills they will need for the practice of their chosen specialties, it is in medical school that they must learn the essential attributes of a true physician.

Saturday 18 May 2013

How the Case for Austerity Has Crumbled

Paul Krugman in The New York Review of Books

The Alchemists: Three Central Bankers and a World on Fire
by Neil Irwin
Penguin, 430 pp., $29.95                                                  
Austerity: The History of a Dangerous Idea
by Mark Blyth
Oxford University Press, 288 pp., $24.95                                                  
The Great Deformation: The Corruption of Capitalism in America
by David A. Stockman
PublicAffairs, 742 pp., $35.00                                                  
[Photo: President Barack Obama and Representative Paul Ryan at a bipartisan meeting on health insurance reform, Washington, D.C., February 2010]
In normal times, an arithmetic mistake in an economics paper would be a complete nonevent as far as the wider world was concerned. But in April 2013, the discovery of such a mistake—actually, a coding error in a spreadsheet, coupled with several other flaws in the analysis—not only became the talk of the economics profession, but made headlines. Looking back, we might even conclude that it changed the course of policy.
Why? Because the paper in question, “Growth in a Time of Debt,” by the Harvard economists Carmen Reinhart and Kenneth Rogoff, had acquired touchstone status in the debate over economic policy. Ever since the paper was first circulated, austerians—advocates of fiscal austerity, of immediate sharp cuts in government spending—had cited its alleged findings to defend their position and attack their critics. Again and again, suggestions that, as John Maynard Keynes once argued, “the boom, not the slump, is the right time for austerity”—that cuts should wait until economies were stronger—were met with declarations that Reinhart and Rogoff had shown that waiting would be disastrous, that economies fall off a cliff once government debt exceeds 90 percent of GDP.
Indeed, Reinhart-Rogoff may have had more immediate influence on public debate than any previous paper in the history of economics. The 90 percent claim was cited as the decisive argument for austerity by figures ranging from Paul Ryan, the former vice-presidential candidate who chairs the House budget committee, to Olli Rehn, the top economic official at the European Commission, to the editorial board of The Washington Post. So the revelation that the supposed 90 percent threshold was an artifact of programming mistakes, data omissions, and peculiar statistical techniques suddenly made a remarkable number of prominent people look foolish.
The real mystery, however, was why Reinhart-Rogoff was ever taken seriously, let alone canonized, in the first place. Right from the beginning, critics raised strong concerns about the paper’s methodology and conclusions, concerns that should have been enough to give everyone pause. Moreover, Reinhart-Rogoff was actually the second example of a paper seized on as decisive evidence in favor of austerity economics, only to fall apart on careful scrutiny. Much the same thing happened, albeit less spectacularly, after austerians became infatuated with a paper by Alberto Alesina and Silvia Ardagna purporting to show that slashing government spending would have little adverse impact on economic growth and might even be expansionary. Surely that experience should have inspired some caution.
So why wasn’t there more caution? The answer, as documented by some of the books reviewed here and unintentionally illustrated by others, lies in both politics and psychology: the case for austerity was and is one that many powerful people want to believe, leading them to seize on anything that looks like a justification. I’ll talk about that will to believe later in this article. First, however, it’s useful to trace the recent history of austerity both as a doctrine and as a policy experiment.

1.

In the beginning was the bubble. There have been many, many books about the excesses of the boom years—in fact, too many books. For as we’ll see, the urge to dwell on the lurid details of the boom, rather than trying to understand the dynamics of the slump, is a recurrent problem for economics and economic policy. For now, suffice it to say that by the beginning of 2008 both America and Europe were poised for a fall. They had become excessively dependent on an overheated housing market, their households were too deep in debt, their financial sectors were undercapitalized and overextended.
All that was needed to collapse these houses of cards was some kind of adverse shock, and in the end the implosion of US subprime-based securities did the deed. By the fall of 2008 the housing bubbles on both sides of the Atlantic had burst, and the whole North Atlantic economy was caught up in “deleveraging,” a process in which many debtors try—or are forced—to pay down their debts at the same time.
Why is this a problem? Because of interdependence: your spending is my income, and my spending is your income. If both of us try to reduce our debt by slashing spending, both of our incomes plunge—and plunging incomes can actually make our indebtedness worse even as they also produce mass unemployment.
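
To make the arithmetic of “your spending is my income” concrete, here is a toy sketch of the deleveraging trap described above. It is my illustration, not the reviewer's: the two-household setup, the starting debts, and the 20% spending cut are invented numbers chosen only to show the mechanism.

```python
# Toy illustration of the deleveraging paradox: two households whose
# spending is each other's income. All figures are made up.

spending = {"A": 100.0, "B": 100.0}   # each household spends 100 per period
debt     = {"A": 120.0, "B": 120.0}   # each starts with 120 of debt

def income(who):
    # In this two-household economy, A's income is B's spending and vice versa.
    other = "B" if who == "A" else "A"
    return spending[other]

print("Before cuts: income of A =", income("A"),
      " debt/income =", round(debt["A"] / income("A"), 2))

# Both households try to deleverage at the same time: each cuts spending by 20%
# and puts the planned saving toward repaying debt.
for who in list(spending):
    cut = 0.20 * spending[who]
    spending[who] -= cut
    debt[who] -= cut

# Because both cut at once, each household's income falls with the other's cut,
# so the debt-to-income ratio ends up higher than it started.
print("After cuts:  income of A =", income("A"),
      " debt/income =", round(debt["A"] / income("A"), 2))
```

In this toy run the debt-to-income ratio rises from 1.2 to 1.25 even though both households repaid some debt, which is the sense in which plunging incomes can make indebtedness worse.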
Students of economic history watched the process unfolding in 2008 and 2009 with a cold shiver of recognition, because it was very obviously the same kind of process that brought on the Great Depression. Indeed, early in 2009 the economic historians Barry Eichengreen and Kevin O’Rourke produced shocking charts showing that the first year of the 2008–2009 slump in trade and industrial production was fully comparable to the first year of the great global slump from 1929 to 1933.
So was a second Great Depression about to unfold? The good news was that we had, or thought we had, several big advantages over our grandfathers, helping to limit the damage. Some of these advantages were, you might say, structural, built into the way modern economies operate, and requiring no special action on the part of policymakers. Others were intellectual: surely we had learned something since the 1930s, and would not repeat our grandfathers’ policy mistakes.
On the structural side, probably the biggest advantage over the 1930s was the way taxes and social insurance programs—both much bigger than they were in 1929—acted as “automatic stabilizers.” Wages might fall, but overall income didn’t fall in proportion, both because tax collections plunged and because government checks continued to flow for Social Security, Medicare, unemployment benefits, and more. In effect, the existence of the modern welfare state put a floor on total spending, and therefore prevented the economy’s downward spiral from going too far.
On the intellectual side, modern policymakers knew the history of the Great Depression as a cautionary tale; some, including Ben Bernanke, had actually been major Depression scholars in their previous lives. They had learned from Milton Friedman the folly of letting bank runs collapse the financial system and the desirability of flooding the economy with money in times of panic. They had learned from John Maynard Keynes that under depression conditions government spending can be an effective way to create jobs. They had learned from FDR’s disastrous turn toward austerity in 1937 that abandoning monetary and fiscal stimulus too soon can be a very big mistake.
As a result, where the onset of the Great Depression was accompanied by policies that intensified the slump—interest rate hikes in an attempt to hold on to gold reserves, spending cuts in an attempt to balance budgets—2008 and 2009 were characterized by expansionary monetary and fiscal policies, especially in the United States, where the Federal Reserve not only slashed interest rates, but stepped into the markets to buy everything from commercial paper to long-term government debt, while the Obama administration pushed through an $800 billion program of tax cuts and spending increases. European actions were less dramatic—but on the other hand, Europe’s stronger welfare states arguably reduced the need for deliberate stimulus.
Now, some economists (myself included) warned from the beginning that these monetary and fiscal actions, although welcome, were too small given the severity of the economic shock. Indeed, by the end of 2009 it was clear that although the situation had stabilized, the economic crisis was deeper than policymakers had acknowledged, and likely to prove more persistent than they had imagined. So one might have expected a second round of stimulus to deal with the economic shortfall.
What actually happened, however, was a sudden reversal.

2.

Neil Irwin’s The Alchemists gives us a time and a place at which the major advanced countries abruptly pivoted from stimulus to austerity. The time was early February 2010; the place, somewhat bizarrely, was the remote Canadian Arctic settlement of Iqaluit, where the Group of Seven finance ministers held one of their regularly scheduled summits. Sometimes (often) such summits are little more than ceremonial occasions, and there was plenty of ceremony at this one too, including raw seal meat served at the last dinner (the foreign visitors all declined). But this time something substantive happened. “In the isolation of the Canadian wilderness,” Irwin writes, “the leaders of the world economy collectively agreed that their great challenge had shifted. The economy seemed to be healing; it was time for them to turn their attention away from boosting growth. No more stimulus.”
[Figure 1: real government spending in the current slump compared with previous recessions, from the IMF’s World Economic Outlook]
How decisive was the turn in policy? Figure 1, which is taken from the IMF’s most recent World Economic Outlook, shows how real government spending behaved in this crisis compared with previous recessions; in the figure, year zero is the year before global recession (2007 in the current slump), and spending is compared with its level in that base year. What you see is that the widespread belief that we are experiencing runaway government spending is false—on the contrary, after a brief surge in 2009, government spending began falling in both Europe and the United States, and is now well below its normal trend. The turn to austerity was very real, and quite large.
On the face of it, this was a very strange turn for policy to take. Standard textbook economics says that slashing government spending reduces overall demand, which leads in turn to reduced output and employment. This may be a desirable thing if the economy is overheating and inflation is rising; alternatively, the adverse effects of reduced government spending can be offset. Central banks (the Fed, the European Central Bank, or their counterparts elsewhere) can cut interest rates, inducing more private spending. However, neither of these conditions applied in early 2010, or for that matter apply now. The major advanced economies were and are deeply depressed, with no hint of inflationary pressure. Meanwhile, short-term interest rates, which are more or less under the central bank’s control, are near zero, leaving little room for monetary policy to offset reduced government spending. So Economics 101 would seem to say that all the austerity we’ve seen is very premature, that it should wait until the economy is stronger.
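
The textbook claim in the previous paragraph can be spelled out with a back-of-the-envelope calculation. Below is a minimal sketch of the standard Keynesian-cross multiplier; the marginal propensity to consume of 0.6 and the 100-unit spending cut are illustrative assumptions of mine, not figures from the review or the underlying papers.

```python
# Textbook Keynesian-cross arithmetic (illustrative numbers only):
# with the central bank unable to cut rates further, a fall in government
# spending reduces overall demand by a multiple of the original cut.

mpc = 0.6                        # assumed marginal propensity to consume
multiplier = 1.0 / (1.0 - mpc)   # = 2.5 under this assumption
delta_g = -100.0                 # government cuts spending by 100 units

delta_y = multiplier * delta_g   # resulting change in output, absent any offset
print(f"multiplier = {multiplier:.1f}, change in output = {delta_y:.0f}")
# Near the zero lower bound there is little room for interest-rate cuts to
# stimulate the private spending that would normally offset this decline.
```

Under those assumptions a 100-unit cut removes 250 units of demand, which is why Economics 101 treats austerity in a depressed, zero-rate economy as premature.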
The question, then, is why economic leaders were so ready to throw the textbook out the window.
One answer is that many of them never believed in that textbook stuff in the first place. The German political and intellectual establishment has never had much use for Keynesian economics; neither has much of the Republican Party in the United States. In the heat of an acute economic crisis—as in the autumn of 2008 and the winter of 2009—these dissenting voices could to some extent be shouted down; but once things had calmed they began pushing back hard.
A larger answer is the one we’ll get to later: the underlying political and psychological reasons why many influential figures hate the notions of deficit spending and easy money. Again, once the crisis became less acute, there was more room to indulge in these sentiments.
In addition to these underlying factors, however, were two more contingent aspects of the situation in early 2010: the new crisis in Greece, and the appearance of seemingly rigorous, high-quality economic research that supported the austerian position.
The Greek crisis came as a shock to almost everyone, not least the new Greek government that took office in October 2009. The incoming leadership knew it faced a budget deficit—but it was only after arriving that it learned that the previous government had been cooking the books, and that both the deficit and the accumulated stock of debt were far higher than anyone imagined. As the news sank in with investors, first Greece, then much of Europe, found itself in a new kind of crisis—one not of failing banks but of failing governments, unable to borrow on world markets.
It’s an ill wind that blows nobody good, and the Greek crisis was a godsend for anti-Keynesians. They had been warning about the dangers of deficit spending; the Greek debacle seemed to show just how dangerous fiscal profligacy can be. To this day, anyone arguing against fiscal austerity, let alone suggesting that we need another round of stimulus, can expect to be attacked as someone who will turn America (or Britain, as the case may be) into another Greece.
If Greece provided the obvious real-world cautionary tale, Reinhart and Rogoff seemed to provide the math. Their paper seemed to show not just that debt hurts growth, but that there is a “threshold,” a sort of trigger point, when debt crosses 90 percent of GDP. Go beyond that point, their numbers suggested, and economic growth stalls. Greece, of course, already had debt greater than the magic number. More to the point, major advanced countries, the United States included, were running large budget deficits and closing in on the threshold. Put Greece and Reinhart-Rogoff together, and there seemed to be a compelling case for a sharp, immediate turn toward austerity.
But wouldn’t such a turn toward austerity in an economy still depressed by private deleveraging have an immediate negative impact? Not to worry, said another remarkably influential academic paper, “Large Changes in Fiscal Policy: Taxes Versus Spending,” by Alberto Alesina and Silvia Ardagna.
One of the especially good things in Mark Blyth’s Austerity: The History of a Dangerous Idea is the way he traces the rise and fall of the idea of “expansionary austerity,” the proposition that cutting spending would actually lead to higher output. As he shows, this is very much a proposition associated with a group of Italian economists (whom he dubs “the Bocconi boys”) who made their case with a series of papers that grew more strident and less qualified over time, culminating in the 2009 analysis by Alesina and Ardagna.
In essence, Alesina and Ardagna made a full frontal assault on the Keynesian proposition that cutting spending in a weak economy produces further weakness. Like Reinhart and Rogoff, they marshaled historical evidence to make their case. According to Alesina and Ardagna, large spending cuts in advanced countries were, on average, followed by expansion rather than contraction. The reason, they suggested, was that decisive fiscal austerity created confidence in the private sector, and this increased confidence more than offset any direct drag from smaller government outlays.
As Mark Blyth documents, this idea spread like wildfire. Alesina and Ardagna made a special presentation in April 2010 to the Economic and Financial Affairs Council of the European Council of Ministers; the analysis quickly made its way into official pronouncements from the European Commission and the European Central Bank. Thus in June 2010 Jean-Claude Trichet, the then president of the ECB, dismissed concerns that austerity might hurt growth:
As regards the economy, the idea that austerity measures could trigger stagnation is incorrect…. In fact, in these circumstances, everything that helps to increase the confidence of households, firms and investors in the sustainability of public finances is good for the consolidation of growth and job creation. I firmly believe that in the current circumstances confidence-inspiring policies will foster and not hamper economic recovery, because confidence is the key factor today.
This was straight Alesina-Ardagna.
By the summer of 2010, then, a full-fledged austerity orthodoxy had taken shape, becoming dominant in European policy circles and influential on this side of the Atlantic. So how have things gone in the almost three years that have passed since?

3.

Clear evidence on the effects of economic policy is usually hard to come by. Governments generally change policies reluctantly, and it’s hard to distinguish the effects of the half-measures they undertake from all the other things going on in the world. The Obama stimulus, for example, was both temporary and fairly small compared with the size of the US economy, never amounting to much more than 2 percent of GDP, and it took effect in an economy whipsawed by the biggest financial crisis in three generations. How much of what took place in 2009–2011, good or bad, can be attributed to the stimulus? Nobody really knows.
The turn to austerity after 2010, however, was so drastic, particularly in European debtor nations, that the usual cautions lose most of their force. Greece imposed spending cuts and tax increases amounting to 15 percent of GDP; Ireland and Portugal rang in with around 6 percent; and unlike the half-hearted efforts at stimulus, these cuts were sustained and indeed intensified year after year. So how did austerity actually work?
[Figure 2: austerity measures (spending cuts and tax increases as a share of GDP, IMF estimates) on the horizontal axis versus the percentage change in real GDP on the vertical axis, for a selection of European countries.]
The answer is that the results were disastrous—just about as one would have predicted from textbook macroeconomics. Figure 2, for example, shows what happened to a selection of European nations (each represented by a diamond-shaped symbol). The horizontal axis shows austerity measures—spending cuts and tax increases—as a share of GDP, as estimated by the International Monetary Fund. The vertical axis shows the actual percentage change in real GDP. As you can see, the countries forced into severe austerity experienced very severe downturns, and the downturns were more or less proportional to the degree of austerity.
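The underlying exercise is simple enough to sketch. Below is a minimal Python illustration of the Figure 2 comparison: regress the change in real GDP on the size of fiscal consolidation and read off the slope. The numbers are placeholder values invented for the example, not the IMF estimates or actual GDP outcomes plotted in the figure.

    # Minimal sketch of the Figure 2 exercise.  The arrays hold purely
    # illustrative placeholder numbers, NOT the IMF estimates or the
    # actual GDP outcomes discussed in the text.
    import numpy as np

    austerity = np.array([2.0, 3.5, 6.0, 6.5, 9.0, 15.0])        # consolidation, % of GDP
    gdp_change = np.array([1.0, -0.5, -3.5, -4.0, -7.5, -18.0])  # change in real GDP, %

    # Ordinary least squares fit: gdp_change ~ intercept + slope * austerity
    slope, intercept = np.polyfit(austerity, gdp_change, 1)
    print(f"output lost per point of austerity (slope): {slope:.2f}")
    # A slope well below -1 is what "downturns more or less proportional
    # to the degree of austerity" looks like in a scatter of this kind.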
There have been some attempts to explain away these results, notably at the European Commission. But the IMF, looking hard at the data, has not only concluded that austerity has had major adverse economic effects, it has issued what amounts to a mea culpa for having underestimated these adverse effects.*
But is there any alternative to austerity? What about the risks of excessive debt?
In early 2010, with the Greek disaster fresh in everyone’s mind, the risks of excessive debt seemed obvious; those risks seemed even greater by 2011, as Ireland, Spain, Portugal, and Italy joined the ranks of nations having to pay large interest rate premiums. But a funny thing happened to other countries with high debt levels, including Japan, the United States, and Britain: despite large deficits and rapidly rising debt, their borrowing costs remained very low. The crucial difference, as the Belgian economist Paul De Grauwe pointed out, seemed to be whether countries had their own currencies, and borrowed in those currencies. Such countries can’t run out of money because they can print it if needed, and absent the risk of a cash squeeze, advanced nations are evidently able to carry quite high levels of debt without crisis.
Three years after the turn to austerity, then, both the hopes and the fears of the austerians appear to have been misplaced. Austerity did not lead to a surge in confidence; deficits did not lead to crisis. But wasn’t the austerity movement grounded in serious economic research? Actually, it turned out that it wasn’t—the research the austerians cited was deeply flawed.
First to go down was the notion of expansionary austerity. Even before the results of Europe’s austerity experiment were in, the Alesina-Ardagna paper was falling apart under scrutiny. Researchers at the Roosevelt Institute pointed out that none of the alleged examples of austerity leading to expansion of the economy actually took place in the midst of an economic slump; researchers at the IMF found that the Alesina-Ardagna measure of fiscal policy bore little relationship to actual policy changes. “By the middle of 2011,” Blyth writes, “empirical and theoretical support for expansionary austerity was slipping away.” Slowly, with little fanfare, the whole notion that austerity might actually boost economies slunk off the public stage.
Reinhart-Rogoff lasted longer, even though serious questions about their work were raised early on. As early as July 2010 Josh Bivens and John Irons of the Economic Policy Institute had identified both a clear mistake—a misinterpretation of US data immediately after World War II—and a severe conceptual problem. Reinhart and Rogoff, as they pointed out, offered no evidence that the correlation ran from high debt to low growth rather than the other way around, and other evidence suggested that the latter was more likely. But such criticisms had little impact; for austerians, one might say, Reinhart-Rogoff was a story too good to check.
So the revelations in April 2013 of the errors of Reinhart and Rogoff came as a shock. Despite their paper’s influence, Reinhart and Rogoff had not made their data widely available—and researchers working with seemingly comparable data hadn’t been able to reproduce their results. Finally, they made their spreadsheet available to Thomas Herndon, a graduate student at the University of Massachusetts, Amherst—and he found it very odd indeed. There was one actual coding error, although that made only a small contribution to their conclusions. More important, their data set failed to include the experience of several Allied nations—Canada, New Zealand, and Australia—that emerged from World War II with high debt but nonetheless posted solid growth. And they had used an odd weighting scheme in which each “episode” of high debt counted the same, whether it occurred during one year of bad growth or seventeen years of good growth.
Without these errors and oddities, there was still a negative correlation between debt and growth—but this could be, and probably was, mostly a matter of low growth leading to high debt, not the other way around. And the “threshold” at 90 percent vanished, undermining the scare stories being used to sell austerity.
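A short sketch can make the weighting point concrete. The episodes below are hypothetical, not the actual Reinhart-Rogoff country data; the example only shows how counting each episode once, regardless of its length, can swing an average that year-by-year weighting would not.

    # Hypothetical high-debt "episodes": (country, years above the threshold,
    # average growth in those years).  Illustrative values only -- not the
    # Reinhart-Rogoff data set.
    episodes = [
        ("A", 1, -7.6),    # a single bad year
        ("B", 17, 2.5),    # seventeen good years
        ("C", 4, 1.8),
    ]

    # Episode weighting: every episode counts once, however long it lasted.
    by_episode = sum(g for _, _, g in episodes) / len(episodes)

    # Year weighting: every country-year counts once.
    years = sum(n for _, n, _ in episodes)
    by_year = sum(n * g for _, n, g in episodes) / years

    print(f"episode-weighted average growth: {by_episode:+.2f}%")   # about -1.1%
    print(f"year-weighted average growth:    {by_year:+.2f}%")      # about +1.9%

Under episode weighting a single bad year can dominate the average; that sensitivity, alongside the omitted countries and the coding error, is the kind of fragility the replication exposed.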
Not surprisingly, Reinhart and Rogoff have tried to defend their work; but their responses have been weak at best, evasive at worst. Notably, they continue to write in a way that suggests, without stating outright, that debt at 90 percent of GDP is some kind of threshold at which bad things happen. In reality, even if one ignores the issue of causality—whether low growth causes high debt or the other way around—the apparent effects on growth of debt rising from, say, 85 to 95 percent of GDP are fairly small, and don’t justify the debt panic that has been such a powerful influence on policy.
At this point, then, austerity economics is in a very bad way. Its predictions have proved utterly wrong; its founding academic documents haven’t just lost their canonized status, they’ve become the objects of much ridicule. But as I’ve pointed out, none of this (except that Excel error) should have come as a surprise: basic macroeconomics should have told everyone to expect what did, in fact, happen, and the papers that have now fallen into disrepute were obviously flawed from the start.
This raises the obvious question: Why did austerity economics get such a powerful grip on elite opinion in the first place?

4.

Everyone loves a morality play. “For the wages of sin is death” is a much more satisfying message than “Shit happens.” We all want events to have meaning.
When applied to macroeconomics, this urge to find moral meaning creates in all of us a predisposition toward believing stories that attribute the pain of a slump to the excesses of the boom that precedes it—and, perhaps, also makes it natural to see the pain as necessary, part of an inevitable cleansing process. When Andrew Mellon told Herbert Hoover to let the Depression run its course, so as to “purge the rottenness” from the system, he was offering advice that, however bad it was as economics, resonated psychologically with many people (and still does).
By contrast, Keynesian economics rests fundamentally on the proposition that macroeconomics isn’t a morality play—that depressions are essentially a technical malfunction. As the Great Depression deepened, Keynes famously declared that “we have magneto trouble”—i.e., the economy’s troubles were like those of a car with a small but critical problem in its electrical system, and the job of the economist is to figure out how to repair that technical problem. Keynes’s masterwork, The General Theory of Employment, Interest and Money, is noteworthy—and revolutionary—for saying almost nothing about what happens in economic booms. Pre-Keynesian business cycle theorists loved to dwell on the lurid excesses that take place in good times, while having relatively little to say about exactly why these give rise to bad times or what you should do when they do. Keynes reversed this priority; almost all his focus was on how economies stay depressed, and what can be done to make them less depressed.
I’d argue that Keynes was overwhelmingly right in his approach, but there’s no question that it’s an approach many people find deeply unsatisfying as an emotional matter. And so we shouldn’t find it surprising that many popular interpretations of our current troubles return, whether the authors know it or not, to the instinctive, pre-Keynesian style of dwelling on the excesses of the boom rather than on the failures of the slump.
David Stockman’s The Great Deformation should be seen in this light. It’s an immensely long rant against excesses of various kinds, all of which, in Stockman’s vision, have culminated in our present crisis. History, to Stockman’s eyes, is a series of “sprees”: a “spree of unsustainable borrowing,” a “spree of interest rate repression,” a “spree of destructive financial engineering,” and, again and again, a “money-printing spree.” For in Stockman’s world, all economic evil stems from the original sin of leaving the gold standard. Any prosperity we may have thought we had since 1971, when Nixon abandoned the last link to gold, or maybe even since 1933, when FDR took us off gold for the first time, was an illusion doomed to end in tears. And of course, any policies aimed at alleviating the current slump will just make things worse.
In itself, Stockman’s book isn’t important. Aside from a few swipes at Republicans, it consists basically of standard goldbug bombast. But the attention the book has garnered, the ways it has struck a chord with many people, including even some liberals, suggest just how strong remains the urge to see economics as a morality play, three generations after Keynes tried to show us that it is nothing of the kind.
And powerful officials are by no means immune to that urge. In The Alchemists, Neil Irwin analyzes the motives of Jean-Claude Trichet, the president of the European Central Bank, in advocating harsh austerity policies:
Trichet embraced a view, especially common in Germany, that was rooted in a sort of moralism. Greece had spent too much and taken on too much debt. It must cut spending and reduce deficits. If it showed adequate courage and political resolve, markets would reward it with lower borrowing costs. He put a great deal of faith in the power of confidence….
Given this sort of predisposition, is it any wonder that Keynesian economics got thrown out the window, while Alesina-Ardagna and Reinhart-Rogoff were instantly canonized?
So is the austerian impulse all a matter of psychology? No, there’s also a fair bit of self-interest involved. As many observers have noted, the turn away from fiscal and monetary stimulus can be interpreted, if you like, as giving creditors priority over workers. Inflation and low interest rates are bad for creditors even if they promote job creation; slashing government deficits in the face of mass unemployment may deepen a depression, but it increases the certainty of bondholders that they’ll be repaid in full. I don’t think someone like Trichet was consciously, cynically serving class interests at the expense of overall welfare; but it certainly didn’t hurt that his sense of economic morality dovetailed so perfectly with the priorities of creditors.
It’s also worth noting that while economic policy since the financial crisis looks like a dismal failure by most measures, it hasn’t been so bad for the wealthy. Profits have recovered strongly even as unprecedented long-term unemployment persists; stock indices on both sides of the Atlantic have rebounded to pre-crisis highs even as median income languishes. It might be too much to say that those in the top 1 percent actually benefit from a continuing depression, but they certainly aren’t feeling much pain, and that probably has something to do with policymakers’ willingness to stay the austerity course.

5.

How could this happen? That’s the question many people were asking four years ago; it’s still the question many are asking today. But the “this” has changed.
Four years ago, the mystery was how such a terrible financial crisis could have taken place, with so little forewarning. The harsh lessons we had to learn involved the fragility of modern finance, the folly of trusting banks to regulate themselves, and the dangers of assuming that fancy financial arrangements have eliminated or even reduced the age-old problems of risk.
I would argue, however—self-serving as it may sound (I warned about the housing bubble, but had no inkling of how widespread a collapse would follow when it burst)—that the failure to anticipate the crisis was a relatively minor sin. Economies are complicated, ever-changing entities; it was understandable that few economists realized the extent to which short-term lending and securitization of assets such as subprime mortgages had recreated the old risks that deposit insurance and bank regulation were created to control.
I’d argue that what happened next—the way policymakers turned their back on practically everything economists had learned about how to deal with depressions, the way elite opinion seized on anything that could be used to justify austerity—was a much greater sin. The financial crisis of 2008 was a surprise, and happened very fast; but we’ve been stuck in a regime of slow growth and desperately high unemployment for years now. And during all that time policymakers have been ignoring the lessons of theory and history.
It’s a terrible story, mainly because of the immense suffering that has resulted from these policy errors. It’s also deeply worrying for those who like to believe that knowledge can make a positive difference in the world. To the extent that policymakers and elite opinion in general have made use of economic analysis at all, they have, as the saying goes, done so the way a drunkard uses a lamppost: for support, not illumination. Papers and economists who told the elite what it wanted to hear were celebrated, despite plenty of evidence that they were wrong; critics were ignored, no matter how often they got it right.
The Reinhart-Rogoff debacle has raised some hopes among the critics that logic and evidence are finally beginning to matter. But the truth is that it’s too soon to tell whether the grip of austerity economics on policy will relax significantly in the face of these revelations. For now, the broader message of the past few years remains just how little good comes from understanding.