
Sunday 26 April 2020

Nudge theory is a poor substitute for hard science in matters of life or death

Behavioural economics is being abused by politicians as a justification for flawed policies over the coronavirus outbreak, writes Sonia Sodha in The Guardian


Illustration: Dom McKenzie/The Observer


I first came across “nudge” – the concept many consider to be the pinnacle of behavioural economics – at a thinktank seminar a little over 10 years ago. We were all handed a mock wine menu and asked what we’d order.

This was supposed to illustrate that most price-aware diners order the second-cheapest bottle to avoid looking tight and that restaurateurs use this to nudge us towards the bottle with the highest markup. I remember thinking it an interesting insight, but that these sorts of nudges were nowhere near as likely to transform the world as their enthusiastic proponent claimed.

Lots of far more eminent people disagreed with me. Behavioural economics looks at how people make decisions in the real world – warts, irrational biases and all – and applies this to public policy. Its signature policy is set out in the 2008 book Nudge, by Cass Sunstein and Richard Thaler. The central insight is that changing the way choices are presented to people can have a huge impact. Make saving for retirement or donating your organs an opt-out rather than opt-in and watch as people suddenly adopt more socially responsible behaviour. Coming just as the financial crisis hit, Nudge was perfectly timed to achieve maximum traction by offering politicians the chance to reap savings through low-cost policy. Sunstein was quickly appointed to a senior job in the Obama administration, while David Cameron set up the behavioural insights team, dubbed the “nudge unit”, led by psychologist turned policy wonk David Halpern.

The nudge unit has since had a mixed track record: there have been some real successes on pensions and tax payments but in other areas it’s been a bit of a damp squib. So I was surprised when Halpern popped up to talk about the government’s pandemic strategy in the press in early March. It was he who first publicly mentioned the idea of “herd immunity” as part of an effective response to Covid-19 (the government has since denied this was ever the strategy). And it’s clear from the briefing he gave journalists that he favoured delaying a lockdown because of the risk of “behavioural fatigue”, the idea that people will stick with restrictions for only so long, making it better to save social distancing for when more people are infected. “If you go too early and tell people to take a week off work when they are very unlikely to have coronavirus, and then a couple of weeks later they have another cough, it’s likely they’ll say ‘come on already’,” he told one reporter.

Halpern is reportedly on Sage, the government’s scientific advisory group for emergencies, and he is also the government’s What Works national adviser, responsible for helping it apply evidence to public policy. So one might expect there to be something substantial behind the idea of behavioural fatigue.

But evidence presented to government by the Sage behavioural subcommittee on 4 March, representing the views of a wider group of experts, was non-committal on the behavioural impact of a lockdown, noting that the empirical evidence on behavioural interventions in a pandemic is limited. Shortly after Halpern’s interviews, more than 600 behavioural economists wrote a letter questioning the evidence base for behavioural fatigue.

Rightly so: a rapid evidence review of behavioural science as it relates to pandemics only fleetingly refers to evidence that extending a lockdown might increase non-compliance, but this turns out to be a study about extending deployment in the armed forces. “Behavioural fatigue is a nebulous concept,” the review’s authors later concluded in the Irish Times.

This is a common critique of behavioural economics: some (not all) members of the discipline have a tendency to overclaim and overgeneralise, based on small studies carried out in a very different context, often on university students in academic settings. It’s extraordinary that Halpern was briefing on what essentially looks like his opinion as if it were science. We won’t know how influential it was in the government’s decision to delay lockdown until there is a post-hoc inquiry, but there’s no reason to suppose Boris Johnson wasn’t listening to his “what works” adviser. “The behavioural psychologists say that if you don’t shake somebody’s hand, that sends an important message… [about] washing your hands,” he said on 9 March.

It’s less extraordinary, though, when you understand that the Behavioural Insights Team is a multimillion-pound profitable company, which pays Halpern, who owns 7.5% of its shares, a bigger salary than the prime minister. Here lies the potential conflict of interest: someone who contributes to Sage also has a significant financial incentive to sell his wares. It perhaps explains BIT’s bombastic claims – “it’s no longer a matter of supposition… we can now say with a high degree of confidence these models give you best policy,” Halpern claimed in 2018. And: “We make much of the simplicity of our interventions… but if properly implemented, they can have a powerful impact on even our biggest societal challenges.” (It is worth noting that Sir Patrick Vallance, the government’s chief scientific adviser, says that one reason the composition of Sage has been kept private is to protect scientists from “lobbying and other forms of unwanted influence which may hinder their ability to give impartial advice”.)

This hubris has led some behavioural scientists to push their approach way beyond those realms such as consumer policy, where it has the potential to be most effective. My jaw dropped on reading a recent 70-page BIT report on applying behavioural insights to domestic abuse that included not one survivor’s voice and in which the word “trauma” appeared only once. It describes domestic abuse as a “phenomenon made up of multiple behaviours undertaken by different actors at different points in time”. Its recommendations are that strange mix of common sense dressed up as behavioural revelation and jarring suggestions that tend to characterise behavioural science when it overreaches itself.

Little wonder that in 2011 a House of Lords committee was highly critical of government tendencies to emphasise nudges at the expense of other effective policy solutions. Nudges undoubtedly have their place, but they’re not going to eradicate domestic violence or end catastrophic climate change.

The problem with all forms of expertise in public policy is that it is often the most formidable salespeople, those who claim greater certainty than the evidence allows, who are invited to jet around the world advising governments. But the irony for behavioural scientists is that this is a product of them trading on, and falling prey to, the very biases they have made their names calling out.

I can only imagine how easy it might have been for Johnson to succumb to confirmation bias in looking for reasons to delay a lockdown: what prime minister wants to shut down the economy? And it is the optimism bias of the behavioural tsars that has led them to place too much stock in their own judgement in a world of limited evidence. But this isn’t some experiment in a university psychology department: it is a pandemic, and lives are at stake.

Sunday 22 December 2019

Robert Skidelsky speaks: How and how not to do economics

What is economics about?


Unlimited wants, limited resources


Economic growth


Is economics a science?


Models and laws


Psychology and economics


Sociology and economics

Economics and power


History of economic thought


Economic history


Ethics and economics



Sunday 28 July 2019

What if NaMo was India’s PM in 1947?

By Girish Menon

Pervez Hoodbhoy, one of Pakistan’s few famed scientists, asked whether India could have launched Chandrayaan-2 if Modi had been India’s prime minister in 1947. I thought this was an extremely important question in an era when all things Nehruvian and Indira-era are being rubbished without any concern for facts.

Meghnad Desai, an economist of renown, provides a stark example in his latest piece in the Indian Express. Desai tries to create the impression that it was the private sector that was responsible for India’s lead in the space programme. He states that, despite Nehru’s inclinations towards the public sector, Nehru listened to Sarabhai, who came from an industrialist family and had a market orientation, and that this resulted in the successful space programme.

What Desai ignores is that ISRO has always been a public sector organisation. It was not started by the Sarabhai family and subsequently nationalised by a socialist Nehru or Indira.

A recent edition of the Guardian (How the state runs business in China) talked about how members of China’s Communist Party are involved in the management of all large companies operating in China. This includes western multinationals as well. And China is poised to be the world’s largest economy with its firms ready to compete with global corporates. Free market ideologues deliberately ignore such facts.

In India’s case, the spokespersons of the ruling corporatocracy fail to admit that the industrialists of the post-independence era, viz. the Tatas and the Birlas, had neither the capital nor the know-how to launch an industrial revolution, and that it was left to Nehru to use taxpayer money to launch the education and scientific revolution whose benefits India is now reaping.

The case I am making is that there are good public sector organisations as well as bad ones. The bad ones could be shut down due to their continuous reliance on government subsidies. But as the so far botched privatisation of Air-India has shown, India’s private sector enthusiasts have not shown any enthusiasm to take over and turn around such firms. Instead, they would like instant money spinners or public sector firms which can be cannibalised for instant profit.

Even in the case of bad public sector firms, if a systematic analysis of their sickness is carried out many will reveal that their problems often are not within the firm but lie outside with their political masters. In Air-India’s case the decisions by Praful Patel to favour Jet Airways and to destroy the public sector firm have led to its current state.

On the other hand, it is worthwhile to study the case of Jet Airways, the private sector darling of free-market India. It is now bankrupt despite all the favours given to it, including flying rights and free aviation fuel.

There are so many instances of India’s preferred industrialists being given favourable loans, government land and other subsidies, and yet contributing nothing to reducing burgeoning unemployment. All of this is swept under the carpet as the current administration prepares to make a distress sale of the public sector.

India’s private sector has not provided global leadership in any area. While we wait to see what Anil Ambani’s defence company will do, Indians must rejoice in Chandrayaan-2 while not forgetting that it is a public sector organisation that is a world leader in rocket and satellite technology.

Monday 17 September 2018

What is your brand of atheism?

Arvind Sharma in The Wire.In

The modern world is nothing if not plural in the number of possible world views it offers in terms of religions, creeds and ideologies. The profusion can be quite perplexing, even bewildering. And atheism too is an important component of the cocktail.

The book under review – John Gray’s Seven Types of Atheism – acts like a ‘guide to the perplexed’ in the modern Western world by bestowing the same kind of critical attention to atheism as theologians do to theism, and historians of religion do to the world religions.

In doing so, it identifies seven types of atheism: (1) new atheism, or an atheism which is simply interested in discrediting religion; (2) ‘secular atheism’, better described as secular humanism, which seeks salvation of the world within the world through progress; (3) ‘scientific atheism’, which turns science into a religion – a category in which the author includes ‘evolutionary humanism, Mesmerism, dialectical materialism, and contemporary transhumanism’; (4) ‘political atheism’, a category in which fall what the author considers to be modern political religions such as Jacobinism, Communism, Nazism and contemporary evangelical liberalism; (5) ‘antitheistic atheism’ or misotheism, the kind of atheism characterized by hatred of God of such people as Marquis de Sade, Dostoevsky’s character Ivan Karamazov (in a famous novel) and William Empson; (6) ‘non-humanistic atheism’, of the kind associated with the positions of George Santayana and Joseph Conrad who rejected the idea of a creator God but did not go on to cultivate benevolence towards humanity, so characteristic of secular atheism; and (7) mystical atheism, associated with the names of Schopenhauer, Spinoza, and the Russian thinker Leo Shestov. The author states his position in relation to these seven types candidly; he is repelled (his word) by the first five but feels drawn to the last two.

The seminal insight of the book, in the Western context, is that, according to the author, “contemporary atheism is a continuation of monotheism by other means”. He returns to this point again and again, and it enables us to examine both religion and atheism in tandem. It is thus an admirable book on atheism in the Western world and is strewn with nuggets such as:

“Scientific inquiry answers a demand for explanation. The practice of religion expresses a need for meaning…”;

“The human mind is programmed for survival, not truth”;

“Science can never close the gap between fact and value”;

“The fundamental conflict in ethics is not between self-interest and general welfare but between general welfare and desires of the moment”;

“It is not only the assertion that ‘moral’ values must take precedence over all others that has been inherited from Christianity. So has the belief that all human beings must live by the same morality”;

“… beliefs that have depended on falsehood need not themselves be false”;

“Some values may be humanly universal – being tortured or persecuted is bad for all human beings. But universal values do not make a universal morality, for these values often conflict with each other”;

“Liberal societies are not templates of a universal political order but instances of a particular form of life. Yet liberals persist in imagining that only ignorance prevents their gospel from being accepted by all of humankind – a vision inherited from Christianity”;

“Causing others to suffer could produce an excitement far beyond any achieved through mere debauchery”;

“Prayer is no less natural than sex, virtue as much as vice”;

“Continuing progress is possible only in technology and the mechanical arts. Progress in this sense may well accelerate as the quality of civilisation declines”;

“Any prospect of a worthwhile life without illusions might itself be an illusion”;

“If Nietzsche shouted the death of God from the rooftops, Arthur Schopenhauer gave the Deity a quiet burial”;

“The liberated individual entered into a realm where the will is silent”;

“Human life… is purposeless striving… But from another point of view this aimless world is pure play”;

“If the human mind mirrors the cosmos, it may be because they are both fundamentally chaotic”; and so on. 

Its provocative ideas and brilliant summaries notwithstanding, the book is bound by a limitation: its scope is limited to the West. The author does touch on Buddhism and even Sankhya, but only insofar as they have implications for the West; he does not cover Asian ideas of atheism alongside the Western. Neither Confucianism nor Daoism is hung up on a creator god, and both would thus seem to demand attention if atheism is defined as “the idea of the absence of a creator-god”. Similarly, in Hindu theism, the relation between the universe and the ultimate reality is posited as ontological rather than cosmological.

The concept of atheism also needs to be refined further in relation to Indian religions. In this context it is best to speak of the nontheism of Buddhism (which denies a creator god but not gods as such), and the transtheism of Advaita Vedanta (which accepts a God-like reality but denies it the status of the ultimate reality). In fact, the discussion in this book is perhaps better understood if we invoke some other categories related to the idea of God, such as transcendence and immanence. God is understood as transcendent in the Abrahamic traditions. God no doubt creates the universe but also transcends it; in the Hindu traditions, god is considered both transcendent and immanent – God ‘creates’ the universe and transcends it but also pervades it, just as the number seven transcends the number five but also contains it.

The many atheisms described in the book are really cases of denying the transcendence of god as the ultimate reality and identifying ultimate reality with something immanent in the universe. This enables one to see the atheisms of the West in an even broader light than when described as crypto-monotheisms.

One may conclude the discussion of such a heavy topic on a lighter note. Could one not think of something best called ‘devout agnosticism’ as a solution to rampant atheism in the West, if atheism is perceived as a problem? Such would be the situation if one prayed to a God whose existence one had bracketed.

Crying for help from such a God in an emergency is like shouting for help in a less dire situation without knowing whether there is anyone within earshot. Even the communists in Kerala might have found this possibility useful if the torrential rains filled them with the ‘fear of God’.

Friday 20 July 2018

Our job as scientists is to find the truth. But we must also be storytellers

Nick Enfield in The Guardian

Scientists often struggle to communicate the findings of research. Our subject matter can be technical and not easily digested by a general audience. And our discoveries – from a new type of tessellating pentagon to the presence of gravitational waves in space – have no meaning until that meaning can be defined and agreed upon. To address this, we are often advised to use the tools of narrative.

This advice is now found everywhere from training sessions to blogs to the most prominent scientific journals. An article in Nature magazine advises scientists to relate data to the world by using “the age-old custom of telling a story.” Proceedings of the National Academy of Sciences cites the “increased comprehension, interest, and engagement” that narrative offers. And another study shows that writing in a narrative style increases uptake of scientific results.

What is a story? Here is screenwriting guru John Truby’s definition: “A speaker tells a listener what someone did to get what he wanted and why.” This is every Hollywood film. At the centre is a person with a well-defined goal. They pursue that goal, against the odds, and after various twists and turns the story comes to a satisfying end. Most importantly, as writer John Collee explains, a good story will have a meaning, a relatable moral with universal resonance for audiences.

How can scientists be expected to use storytelling when we are not trained in the craft? True, we are not professional screenwriters. But like everyone we are nevertheless well-practiced storytellers. When you tell someone about a frightening incident that happened on the bus to work, your narrative may be ordinary but it has the core elements of story: situation, complication, resolution, and most importantly, meaning. You are not just telling me what happened, you are telling me what it means to you, how you feel about it. And you are inviting me to feel the same. In this way, stories are one of our most important social bonding mechanisms.

So, what could be wrong with urging scientists to take advantage of our natural storytelling skills? In an article titled “Against storytelling of scientific results”, Yarden Katz explains that certain defining features of narrative – someone pursuing a goal; a satisfying resolution of that pursuit; a meaning that draws people in – are antithetical to key ideals and practices of scientific work.


Human beings, scientists included, have brains that are not evolved for dispassionate thinking

One objection is that, according to the scientific norm known as disinterestedness, scientists should not aim for any particular result. Our job is to find the truth. So, we should first establish the facts, and then use those facts to decide what our conclusions are. But too often, people have it the wrong way around. We start with our pre-established beliefs, and then look for evidence to support them. Another objection is that because science is a permanently unfinished line of business, there can be no satisfying endings.

Further, the scientist’s job is to inform, not persuade. Advice in Nature from authors Martin Krzywinski and Alberto Cairo seems to challenge this norm: “Maintain focus of your presentation by leaving out detail that does not advance the plot”; “inviting readers to draw their own conclusions is risky.” Most scientists would agree that this is going too far.

Katz’s concerns are well taken. But what should be done? Can we be truly dispassionate about what we are doing in science? There are reasons to think that even when we are operating in the rarefied atmosphere of scientific endeavor, we are never not wrapping our lives in stories.

Human beings, scientists included, have brains that are not evolved for dispassionate thinking. Bugs in our reasoning from the confirmation bias to the gambler’s fallacy make our natural thought processes deeply subjective and partial. And these are precisely the kinds of cognitive propensities that make storytelling stick so well. Even if an exemplary scientist has trained herself to be utterly objective, her audience will always bring their biased, story-gobbling minds.

This is why we have little choice but to apply the philosophy of judo to the problem of communicating scientific work and findings. Rather than struggle against cognitive biases, we need to work with them if we are going to keep them in check. Facts can be collected but they need to be interpreted. To interpret a fact is to give it meaning. And this is nothing other than storytelling. Only with a story can the facts be communicated, and only then can they become part of the received knowledge that drives the very possibility of scientific progress.

Scientists do not have the luxury of forgoing storytelling. We need not fear that storytelling will compromise our objectivity. If we believe that we have the right story, then we should tell it. Only then can it be evaluated. Because science is a collective enterprise, our stories will succeed when they are validated by broad agreement in our community.

It is our responsibility to become at least literate, if not masterly, in storytelling about our work. Our audiences need stories. So we must tell the right stories about our findings, if we are going to treat those findings with the respect they need.

Saturday 5 May 2018

Into the brave new age of irrationality

The assault on rationality is part of a concerted political strategy writes Sanjay Rajoura in The Hindu


Much has been written and said about the assault on liberal arts under way in India since the new political era dawned. But the real assault is on science and rationality. And it has not been difficult to mount this attack.

India has long, as a nation, proudly claimed to be a society of belief. And Indians like to assert that faith is a ‘way of life’ here. Terms such as modernity, rational thinking and scientific analysis are often frowned upon, and misdiagnosed as disrespect to Indian culture.


Freshly minted spokesmodel

In recent years, we have entered a new era. I call it the Era of Irrationality. The new Chief Minister of Tripura, Biplab Kumar Deb, is the freshly minted spokesmodel of this bold, new era.

There appears to be a relay race among people in public positions, each one making an astonishingly ridiculous claim and then passing on the baton. Mr. Deb’s claim that the Internet existed in the times of the Mahabharata is the latest. But there have been several others before it: that Ganesh was the first example of plastic surgery, that Darwin’s theory of evolution is hokum because nobody has seen monkeys turning into humans, and that Stephen Hawking said the Vedas have a theory superior to Einstein’s E = mc2.

Such statements have made us the laughing stock of the global scientific community. But more importantly, they also undermine the significant scientific achievements we have made post-Independence.

We cannot even dismiss these as random remarks by the fringe, the babas and the sadhus. These claims are often made by public officials (it’s another matter that the babas and sadhus are now occupying several public offices). The assault on rationality is a consequence of a concerted strategy of political forces. As rational thinking thins, the same political forces fatten.

We Indians have never really adopted the scientific temper, irrespective of our education. It’s evident from our obsession with crackpot sciences such as astrology and palmistry in our daily lives. However, in the past four years, the belief in pseudo-sciences has gained a political fig leaf as have tall, unverifiable claims on science.

The cultivation of scientific temper involves asking questions and demanding empirical evidence. It has no place for blind faith. The ruling political dispensation is uncomfortable with questioning Indians. But at the same time, it also wants to come across as a dispensation that champions a 21st century modern India. Therein lies a catch-22 situation.

So, they have devised a devious strategy to invest in the culture of blind belief. They already have a willing constituency. Ludicrous statements like those mentioned above — made by leaders in positions of power with alarming frequency — go on to legitimise and boost the Era of Irrationality.

An unscientific society makes the job of an incompetent ruler a lot easier. No questions are asked, not even basic ones. The ruler has only to make a claim and the believers will worship him. Rather than conforming, a truly rational community often questions disparity, exploitation and persecution on the basis of caste, religion or gender. It demands answers and accountability for such violations, which are often based on irrational whims. Hence rationality must be at the top of the casualty list, followed quickly by minorities, Dalits, women and liberals. For the ‘Irrationality project’ to succeed, the ruler needs a willing suspension of disbelief on a mass scale.


Science v. technology

The vigour with which the government is making an assault on the scientific temper only confirms that it is actually frightened of it. This is the reason why authoritarian regimes are often intolerant of those who champion the spirit of science, but encourage scientists who will launch satellites and develop nuclear weapons — even as they break coconuts, chant hymns and press “Enter” with their fingers laden with auspicious stones.

These ‘techno-scientists’ are what I call ‘the DJs of the scientific community’. And they are often the establishment’s yes-men and yes-women.

The founders of the Constitution were aware of this. Hence the words “scientific temper” and “the spirit of inquiry and reform” find a place in the Constitution, along with “secular” (belatedly), “equality” and “rights”. To dismantle secularism, dilute equality and push back rights, it is imperative to destroy the scientific temper.

The indoctrination against the scientific temper begins very early in our lives. It starts in our families and communities where young minds are aggressively discouraged from questioning authority and asking questions. An upper caste child for example may be forced to follow customs, which among others include practising and subscribing to the age-old caste system. The same methodology is used to impose fixed gender, sexual and religious identities. As a result, we are hardwired to be casteist, majoritarian and misogynist.

The final step in the ‘Irrationality project’ is to inject, with regularity, preposterous, over-the-top claims about the nation’s past. It effectively blurs our vision of the present.

The world is busy studying string theory, the God particle in a collider, and quantum mechanics. But we are busy expanding our chest size with claims of a fantastic yore.

Why is ignorance of science acceptable?

Janan Ganesh in The FT

Stephen Hawking’s final research paper clarifies his idea of a “multiverse”. I think. Published posthumously this week, it explores whether the same laws of physics obtain in all the parallel universes that were the Big Bang’s supposed offspring. Apparently. The paper envisages a plural but finite number of universes rather than a limitless number. It says here.


I do not begin to know how to engage with this material. Nor could I say more than a sentence or two about how aeroplanes achieve flight, or distinguish mass from weight, or name a chemical compound outside those two biggies, H2O and CO2. Not only can I not do calculus, I cannot tell you with much confidence what it is. 

For all this ignorance of the sciences, society treats me as a thoughtful person, rewards me with a line of work that is sometimes hard to distinguish from recreation and invites me to politico-media parties, where I catch up with people who, I promise you, make me look like a Copley Medalist. 

In 1959, CP Snow spoke of “two cultures”, the humanities and the sciences, the first blind to the second in a way that is not reciprocated. When his cultured friends laughed at scientists who did not know their way around the Shakespearean canon, he invited them to recite the Second Law of Thermodynamics. This should be no great ask for anyone of moderately rounded learning, he thought, but they were stumped, and peeved to be tested. It was more in despair than in mischief that he rolled out this parlour game of an evening. 

The subsequent trend of events — the space race, the energy crisis, the computer age — should have embarrassed those steeped exclusively in the humanities into meeting science halfway with a hybrid or “third” culture. In the likes of Ian McEwan, who smuggles scientific ideas into his novels, and Steven Pinker, who has tried to establish a scientific basis for literary style, there are some willing brokers of an intellectual concordat out there. 

Yet almost six decades on from Snow’s intervention, near-perfect ignorance of the natural world is still no bar to life as a sophisticate. In Britain, especially, scientific geniuses have always had to coexist with a culture that holds them to be somehow below stairs. This is not the principled anti-science of the Romantics or the hyper-religious. The laws of physics are not being doubted here. It is “just” an aesthetic distaste. 

We can guess at the costs of this distaste in a world already tilting to economies that do not share a bit of it. In this vision of the future, China and India are to the west what Snow said Germany and America were to late-Victorian Britain: profiteers of our own decadent neglect of the hard sciences. But what if the stakes are higher than mere material decline? 

Since the populist shocks of 2016, there has been fighting talk about the preciousness of facts and the urgency of their defence. It just tends to be tactical — a call for the regulation of Facebook, perhaps, or a more vigilant, news-buying citizenry. 

If something as basic as truth is faltering, the cause might be deeper than the habits and technologies of the past decade. The longer-term estrangement of humanities and science seems more like it. A culture that does not punish scientific ignoramuses, and instead hands us the keys to public life, is likely to be vulnerable and credulous — a sucker for any passing nonsense. 

It is not the content of scientific knowledge so much as the scientific method itself that helps to inoculate against ideology and hysteria. Doubt, evidence, falsifiability, the provisional status of all knowledge: these are priceless habits of mind, but you can go far in Britain and other rich democracies without much formal grounding in them. 

The Eloi-and-Morlocks split between the cultured and the scientific, the latter toiling unseen as a necessary evil, is too one-sided for the wider good. It should be a mortifying faux pas to profess ignorance of Hawking’s work in polite company. In his own country, it borders on a boast.

Sunday 18 February 2018

Ramanujan and Salam — what inspired them?

Pervez Hoodbhoy in The Dawn

SRINIVASA Ramanujan (1887-1920) and Muhammad Abdus Salam (1926-1996), two intellectual giants of the 20th century, were born in the same corner of the world. Of humble origin and educated in local schools, they nevertheless rose to dizzying heights in the arcane world of theoretical science. Few others on the subcontinent enjoy their iconic status.

What I shall address below is the fact that both attributed their works to some divine agency. Some of their devotees see this as validating their own respective belief systems. With the rise of Hindutva in India, and the violent persecution of Ahmadis in Pakistan, these claims assume considerable importance. Hence a careful, impartial examination is called for.

No mathematician has a story more romantic than Ramanujan’s. Many books, plays, and movies — such as The Man Who Knew Infinity (2015) — dwell upon this enigmatic figure. Drawing upon deep intuition, Ramanujan created new concepts in the theory of numbers, elliptic functions and infinite series. Even full-blown mathematicians take years to grasp his complex ideas.


Exceptional genes plus fortunate circumstances is why some become maths-science superstars.


Born in Madras to a low-level clerk, this young Brahmin boy was steeped in tradition, sang religious songs, attended temple pujas, and was a strict vegetarian. But by age 12, he was inventing sophisticated theorems and unwittingly duplicating some results of European mathematicians of the previous century. He flunked college twice for lack of interest in anything but mathematics — in which he excelled. His awestruck teachers could not decide whether he was a genius or fraud.

In 1913, encouraged by one of his teachers, Ramanujan sent off a letter to the renowned pure mathematician G.H. Hardy at Cambridge University. It was accompanied by theorems densely packed into nine pages. Hardy was stunned and arranged for him to travel to England. Ramanujan duly obtained permission from the family goddess Namagiri, consulting appropriate astrological data before his voyage overseas.

At age 32, Ramanujan was dead. He had returned to Madras exhausted, half-famished and fed up with English winters. But even on his deathbed, his pen scrawled out profound results. A century later these still intrigue the brainiest of mathematicians and string theorists. He attributed his exceptional qualities to the psychic visitations of Namagiri who would whisper equations to him. Sometimes, he said, “she wrote on my tongue”. He told colleagues, “An equation for me has no meaning unless it represents a thought of God.”

This was how Ramanujan saw it. But how does one explain that Euler, Bernoulli, Gauss, Cantor, Hilbert and Gödel, non-Brahmin mathematicians all, stood still taller? The edifice of modern mathematics is owed largely to them, not to Ramanujan. Some were ardent Christians, others agnostic or atheistic. Nobody knows how to explain their feats.

Curiously, Abdus Salam, then a 19-year-old student at Government College Lahore, wrote his very first paper proposing a simpler solution to an intriguing mathematical problem posed about 20 years earlier by Ramanujan. He ended his paper by triumphantly declaring: “His [Ramanujan’s] solution is much more laborious”.

This was Salam’s debut into the world of high mathematics. Born into a conservative religious environment in Jhang — then a village-town — this child prodigy rapidly outpaced his teachers. Fortunately they bore him no grudge and helped him move on to Lahore. The next stop was Cambridge, where he excelled. By the early 1960s, he was one of the world’s top particle physicists, ultimately winning 20 international prizes and honours including the Nobel Prize in 1979.

In his later years, Salam gave numerous public lectures and interviews, recorded on camera and in print, locating his source of inspiration in his religious belief. In particular he said the concept of unity of God powered his quest for the unification of nature’s fundamental forces as well as his search for ever fewer numbers of elementary particles.

For me, to engage on a sensitive matter with one so senior and superior was not easy. But sometime in 1986 I picked up the courage to ask Salam the obvious question: both he, who thought himself a believer, and Steven Weinberg, an avowed atheist, had worked independently on unifying two of nature’s four fundamental forces and yet had arrived at precisely the same conclusions. How?

Salam gave his answer in the preface he wrote for my book on Islam and science (1990), where he stated: “I can confirm that he [Hoodbhoy] is right…”, and then went on to explicitly clarify that any bias towards the unification paradigm in his thinking was only unconsciously motivated by his religious background.

There is not the slightest doubt that Salam used exactly the same tools as Weinberg did — principally quantum mechanics and relativity theory — and did physics exactly as other physicists do (but better than most). His political and religious views were irrelevant to his work. Let’s note that although they are giants of physics, Salam and Weinberg stood on the shoulders of still greater giants — Einstein, Pauli, Dirac, Wheeler, and Feynman — whose personal philosophies of life vastly differed from each other.

Salam sourced his inspiration to his religious beliefs, while Ramanujan claimed direct transmission from his gods. These claims cannot ever be proved or disproved. It is also irrelevant here that Salam thought of himself as a Muslim whereas, by Pakistani law, he is not.

How can prodigious talent blossom in the absence of rigorous scientific training? Two factors explain Ramanujan’s and Salam’s successes. First, nature sometimes gifts an individual with exceptional innate mathematical ability. This is associated with brain circuits in the parietal lobe and acquired through genetic transmission. Second, by good fortune, Ramanujan and Salam managed to escape into a scholarly environment — Cambridge University — where their genius could flower. Had either stayed back home he would be unheard of today.

It is usual to take pride in the geniuses belonging to one’s own tribe. The ancient civilisations of China, India, Greece, Arabia, and modern European civilisation all claim superiority over others because of the creations of their most brilliant minds. But in fact an individual’s exceptional genes and fortunate circumstances — not some supreme transcendence — are the real reasons. While sources of inspiration do differ, empirically and logically deduced results don’t. Science and its heroes belong to all humankind, not to any one tribe.

Thursday 26 October 2017

On Militant Atheism - Why the Soviet attempt to stamp out religion failed

Giles Fraser in The Guardian



The Russian revolution had started earlier in February. The tsar had already abdicated. And a provisional bourgeois government had begun to establish itself. But it was the occupation of government buildings in Petrograd, on 25 October 1917, by the Red Guards of the Bolsheviks that marks the beginning of the Communist era proper. And it was from this date that an experiment wholly unprecedented in world history began: the systematic, state-sponsored attempt to eliminate religion. “Militant atheism is not merely incidental or marginal to Communist policy. It is not a side effect, but the central pivot,” wrote Aleksandr Solzhenitsyn. Lenin compared religion to venereal disease.

Within just weeks of the October revolution, the People’s Commissariat for Enlightenment was established to remove all references to religion from school curriculums. In the years that followed, churches and monasteries were destroyed or turned into public toilets. Their land and property was appropriated. Thousands of bishops, monks and clergy were systematically murdered by the security services. Specialist propaganda units were formed, like the League of the Godless. Christian intellectuals were rounded up and sent to camps.

The Soviets had originally believed that when the church had been deprived of its power, religion would quickly wither away. When this did not happen, they redoubled their efforts. In Stalin’s purges of 1936 and 1937 tens of thousands of clergy were rounded up and shot. Under Khrushchev it became illegal to teach religion to your own children. From 1917 to the perestroika period of the 1980s, the more religion persisted, the more the Soviets would seek new and inventive ways to eradicate it. Today the Russian Orthodox churches are packed full. Once the grip of oppression had been released, the faithful returned to church in their millions.

The Soviet experiment manifestly failed. If you want to know why it failed, you could do no better than go along to the British Museum in London next week when the Living with Gods exhibition opens. In collaboration with a BBC Radio 4 series, this exhibition describes some of the myriad ways in which faith expresses itself, using religious objects to examine how people believe rather than what they believe. The first sentence of explanation provided by the British Museum is very telling: “The practice and experience of beliefs are natural to all people.” From prayer flags to a Leeds United kippah, from water jugs to processional chariots, this exhibition tells the story of humanity’s innate and passionate desire to make sense of the world beyond the strictly empirical.

Jill Cook, the exhibition’s curator, remembers going into pre-glasnost churches like Kazan Cathedral in St Petersburg, which had been converted into a museum of atheism. One of the items she has included in the exhibition is a 1989 velvet and silk embroidered image of Christ, for the back of a cope. The person who made this image had no other vestments to work from – they had all been destroyed – other than those she had seen lampooning Christianity in the museum of atheism. What had been a piss-take has been repurposed into a devotional object. Services resumed in Kazan Cathedral in 1992.

The penultimate image of the exhibition is a 1975 poster of a cheeky-looking cosmonaut walking around in space and declaring: “There is no god.” Below him, on Earth, a church is falling over. This was from the period of so-called scientific atheism.


 A poster showing a cosmonaut walking in space and saying: ‘There is no god.’ By Vladimir Menshikow, 1975. Photograph: British Museum

But there is one last exhibit to go. Round the corner, a glass case contains small model boats with burnt matchsticks in them representing people huddled together. And two tiny shirts that had been used as shrouds for drowned children. At the side of them is a small cross, made from the wood of a ship that was wrecked off the Italian island of Lampedusa on 11 October 2013. The ship contained Somali and Eritrean Christian refugees, fleeing poverty and persecution. Francesco Tuccio, the local Lampedusa carpenter, desperately wanted to do something for them, in whatever way he could. So he did all he knew and made them a cross. Just like a famous carpenter before him, I suppose. And what this exhibition demonstrates is that nothing – not decades of propaganda nor state-sponsored terror – will be able to quash that instinct from human life.

Sunday 15 October 2017

Why religion is here to stay and science won’t destroy it

Peter Harrison in The Wire.In



In 1966, just over 50 years ago, the distinguished Canadian-born anthropologist Anthony Wallace confidently predicted the global demise of religion at the hands of an advancing science: ‘belief in supernatural powers is doomed to die out, all over the world, as a result of the increasing adequacy and diffusion of scientific knowledge’. Wallace’s vision was not exceptional. On the contrary, the modern social sciences, which took shape in 19th-century western Europe, took their own recent historical experience of secularisation as a universal model. An assumption lay at the core of the social sciences, either presuming or sometimes predicting that all cultures would eventually converge on something roughly approximating secular, Western, liberal democracy. Then something closer to the opposite happened.

Not only has secularism failed to continue its steady global march but countries as varied as Iran, India, Israel, Algeria and Turkey have either had their secular governments replaced by religious ones, or have seen the rise of influential religious nationalist movements. Secularisation, as predicted by the social sciences, has failed.

To be sure, this failure is not unqualified. Many Western countries continue to witness decline in religious belief and practice. The most recent census data released in Australia, for example, shows that 30 per cent of the population identify as having ‘no religion’, and that this percentage is increasing. International surveys confirm comparatively low levels of religious commitment in western Europe and Australasia. Even the United States, a long-time source of embarrassment for the secularisation thesis, has seen a rise in unbelief.

The percentage of atheists in the US now sits at an all-time high (if ‘high’ is the right word) of around 3 per cent. Yet, for all that, globally, the total number of people who consider themselves to be religious remains high, and demographic trends suggest that the overall pattern for the immediate future will be one of religious growth. But this isn’t the only failure of the secularisation thesis.

Scientists, intellectuals and social scientists expected that the spread of modern science would drive secularisation – that science would be a secularising force. But that simply hasn’t been the case. If we look at those societies where religion remains vibrant, their key common features are less to do with science, and more to do with feelings of existential security and protection from some of the basic uncertainties of life in the form of public goods. A social safety net might be correlated with scientific advances but only loosely, and again the case of the US is instructive. The US is arguably the most scientifically and technologically advanced society in the world, and yet at the same time the most religious of Western societies. As the British sociologist David Martin concluded in The Future of Christianity (2011): ‘There is no consistent relation between the degree of scientific advance and a reduced profile of religious influence, belief and practice.’

The story of science and secularisation becomes even more intriguing when we consider those societies that have witnessed significant reactions against secularist agendas. India’s first prime minister Jawaharlal Nehru championed secular and scientific ideals, and enlisted scientific education in the project of modernisation. Nehru was confident that Hindu visions of a Vedic past and Muslim dreams of an Islamic theocracy would both succumb to the inexorable historical march of secularisation. ‘There is only one-way traffic in Time,’ he declared. But as the subsequent rise of Hindu and Islamic fundamentalism adequately attests, Nehru was wrong. Moreover, the association of science with a secularising agenda has backfired, with science becoming a collateral casualty of resistance to secularism.

Turkey provides an even more revealing case. Like most pioneering nationalists, Mustafa Kemal Atatürk, the founder of the Turkish republic, was a committed secularist. Atatürk believed that science was destined to displace religion. In order to make sure that Turkey was on the right side of history, he gave science, in particular evolutionary biology, a central place in the state education system of the fledgling Turkish republic.

As a result, evolution came to be associated with Atatürk’s entire political programme, including secularism. Islamist parties in Turkey, seeking to counter the secularist ideals of the nation’s founders, have also attacked the teaching of evolution. For them, evolution is associated with secular materialism. This sentiment culminated in the decision this June to remove the teaching of evolution from the high-school classroom. Again, science has become a victim of guilt by association.

The US represents a different cultural context, where it might seem that the key issue is a conflict between literal readings of Genesis and key features of evolutionary history. But in fact, much of the creationist discourse centres on moral values. In the US case too, we see anti-evolutionism motivated at least in part by the assumption that evolutionary theory is a stalking horse for secular materialism and its attendant moral commitments. As in India and Turkey, secularism is actually hurting science.

In brief, global secularisation is not inevitable and, when it does happen, it is not caused by science. Further, when the attempt is made to use science to advance secularism, the results can damage science. The thesis that ‘science causes secularisation’ simply fails the empirical test, and enlisting science as an instrument of secularisation turns out to be poor strategy. The science and secularism pairing is so awkward that it raises the question: why did anyone think otherwise?

Historically, two related sources advanced the idea that science would displace religion. First, 19th-century progressivist conceptions of history, particularly associated with the French philosopher Auguste Comte, held to a theory of history in which societies pass through three stages – religious, metaphysical and scientific (or ‘positive’). Comte coined the term ‘sociology’ and he wanted to diminish the social influence of religion and replace it with a new science of society. Comte’s influence extended to the ‘young Turks’ and Atatürk.

The 19th century also witnessed the inception of the ‘conflict model’ of science and religion. This was the view that history can be understood in terms of a ‘conflict between two epochs in the evolution of human thought – the theological and the scientific’. This description comes from Andrew Dickson White’s influential A History of the Warfare of Science with Theology in Christendom (1896), the title of which nicely encapsulates its author’s general theory. White’s work, as well as John William Draper’s earlier History of the Conflict Between Religion and Science (1874), firmly established the conflict thesis as the default way of thinking about the historical relations between science and religion. Both works were translated into multiple languages. Draper’s History went through more than 50 printings in the US alone, was translated into 20 languages and, notably, became a bestseller in the late Ottoman empire, where it informed Atatürk’s understanding that progress meant science superseding religion.

Today, people are less confident that history moves through a series of set stages toward a single destination. Nor, despite its popular persistence, do most historians of science support the idea of an enduring conflict between science and religion. Renowned collisions, such as the Galileo affair, turned on politics and personalities, not just science and religion. Darwin had significant religious supporters and scientific detractors, as well as vice versa. Many other alleged instances of science-religion conflict have now been exposed as pure inventions. In fact, contrary to conflict, the historical norm has more often been one of mutual support between science and religion. In its formative years in the 17th century, modern science relied on religious legitimation. During the 18th and 19th centuries, natural theology helped to popularise science.

The conflict model of science and religion offered a mistaken view of the past and, when combined with expectations of secularisation, led to a flawed vision of the future. Secularisation theory failed at both description and prediction. The real question is why we continue to encounter proponents of science-religion conflict. Many are prominent scientists. It would be superfluous to rehearse Richard Dawkins’s musings on this topic, but he is by no means a solitary voice. Stephen Hawking thinks that ‘science will win because it works’; Sam Harris has declared that ‘science must destroy religion’; Steven Weinberg thinks that science has weakened religious certitude; Colin Blakemore predicts that science will eventually make religion unnecessary. Historical evidence simply does not support such contentions. Indeed, it suggests that they are misguided.

So why do they persist? The answers are political. Leaving aside any lingering fondness for quaint 19th-century understandings of history, we must look to the fear of Islamic fundamentalism, exasperation with creationism, an aversion to alliances between the religious Right and climate-change denial, and worries about the erosion of scientific authority. While we might be sympathetic to these concerns, there is no disguising the fact that they arise out of an unhelpful intrusion of normative commitments into the discussion. Wishful thinking – hoping that science will vanquish religion – is no substitute for a sober assessment of present realities. Continuing with this advocacy is likely to have an effect opposite to that intended.

Religion is not going away any time soon, and science will not destroy it. If anything, it is science that is subject to increasing threats to its authority and social legitimacy. Given this, science needs all the friends it can get. Its advocates would be well advised to stop fabricating an enemy out of religion, or insisting that the only path to a secure future lies in a marriage of science and secularism.

Tuesday 11 July 2017

How economics became a religion

John Rapley in The Guardian



Although Britain has an established church, few of us today pay it much mind. We follow an even more powerful religion, around which we have oriented our lives: economics. Think about it. Economics offers a comprehensive doctrine with a moral code promising adherents salvation in this world; an ideology so compelling that the faithful remake whole societies to conform to its demands. It has its gnostics, mystics and magicians who conjure money out of thin air, using spells such as “derivative” or “structured investment vehicle”. And, like the old religions it has displaced, it has its prophets, reformists, moralists and above all, its high priests who uphold orthodoxy in the face of heresy.

Over time, successive economists slid into the role we had removed from the churchmen: giving us guidance on how to reach a promised land of material abundance and endless contentment. For a long time, they seemed to deliver on that promise, succeeding in a way few other religions had ever done, our incomes rising thousands of times over and delivering a cornucopia bursting with new inventions, cures and delights.

This was our heaven, and richly did we reward the economic priesthood, with status, wealth and power to shape our societies according to their vision. At the end of the 20th century, amid an economic boom that saw the western economies become richer than humanity had ever known, economics seemed to have conquered the globe. With nearly every country on the planet adhering to the same free-market playbook, and with university students flocking to do degrees in the subject, economics seemed to be attaining the goal that had eluded every other religious doctrine in history: converting the entire planet to its creed.

Yet if history teaches anything, it’s that whenever economists feel certain that they have found the holy grail of endless peace and prosperity, the end of the present regime is nigh. On the eve of the 1929 Wall Street crash, the American economist Irving Fisher advised people to go out and buy shares; in the 1960s, Keynesian economists said there would never be another recession because they had perfected the tools of demand management.

The 2008 crash was no different. Five years earlier, on 4 January 2003, the Nobel laureate Robert Lucas had delivered a triumphal presidential address to the American Economic Association. Reminding his colleagues that macroeconomics had been born in the depression precisely to try to prevent another such disaster ever recurring, he declared that he and his colleagues had reached their own end of history: “Macroeconomics in this original sense has succeeded,” he instructed the conclave. “Its central problem of depression prevention has been solved.”

No sooner do we persuade ourselves that the economic priesthood has finally broken the old curse than it comes back to haunt us all: pride always goes before a fall. Since the crash of 2008, most of us have watched our living standards decline. Meanwhile, the priesthood seemed to withdraw to the cloisters, bickering over who got it wrong. Not surprisingly, our faith in the “experts” has dissipated.

Hubris, never a particularly good thing, can be especially dangerous in economics, because its scholars don’t just observe the laws of nature; they help make them. If the government, guided by its priesthood, changes the incentive-structure of society to align with the assumption that people behave selfishly, for instance, then lo and behold, people will start to do just that. They are rewarded for doing so and penalised for doing otherwise. If you are educated to believe greed is good, then you will be more likely to live accordingly.

The hubris in economics came not from a moral failing among economists, but from a false conviction: the belief that theirs was a science. It neither is nor can be one, and has always operated more like a church. You just have to look at its history to realise that.

The American Economic Association, to which Robert Lucas gave his address, was created in 1885, just when economics was starting to define itself as a distinct discipline. At its first meeting, the association’s founders proposed a platform that declared: “The conflict of labour and capital has brought to the front a vast number of social problems whose solution is impossible without the united efforts of church, state and science.” It would be a long path from that beginning to the market evangelism of recent decades.

Yet even at that time, such social activism provoked controversy. One of the AEA’s founders, Henry Carter Adams, subsequently delivered an address at Cornell University in which he defended free speech for radicals and accused industrialists of stoking xenophobia to distract workers from their mistreatment. Unknown to him, the New York lumber king and Cornell benefactor Henry Sage was in the audience. As soon as the lecture was done, Sage stormed into the university president’s office and insisted: “This man must go; he is sapping the foundations of our society.” When Adams’s tenure was subsequently blocked, he agreed to moderate his views. Accordingly, the final draft of the AEA platform expunged the reference to laissez-faire economics as being “unsafe in politics and unsound in morals”.

 
‘Economics has always operated more like a church’ … Trinity Church seen from Wall Street. Photograph: Alamy Stock Photo

So was set a pattern that has persisted to this day. Powerful political interests – which historically have included not only rich industrialists, but electorates as well – helped to shape the canon of economics, which was then enforced by its scholarly community.

Once a principle is established as orthodox, its observance is enforced in much the same way that a religious doctrine maintains its integrity: by repressing or simply eschewing heresies. In Purity and Danger, the anthropologist Mary Douglas observed the way taboos functioned to help humans impose order on a seemingly disordered, chaotic world. The premises of conventional economics haven’t functioned all that differently. Robert Lucas once noted approvingly that by the late 20th century, economics had so effectively purged itself of Keynesianism that “the audience start(ed) to whisper and giggle to one another” when anyone expressed a Keynesian idea at a seminar. Such responses served to remind practitioners of the taboos of economics: a gentle nudge to a young academic that such shibboleths might not sound so good before a tenure committee. This preoccupation with order and coherence may be less a function of the method than of its practitioners. Studies of personality traits common to various disciplines have discovered that economics, like engineering, tends to attract people with an unusually strong preference for order, and a distaste for ambiguity.

The irony is that, in its determination to make itself a science that can reach hard and fast conclusions, economics has had to dispense with scientific method at times. For starters, it rests on a set of premises about the world not as it is, but as economists would like it to be. Just as any religious service includes a profession of faith, membership in the priesthood of economics entails certain core convictions about human nature. Among other things, most economists believe that we humans are self-interested, rational, essentially individualistic, and prefer more money to less. These articles of faith are taken as self-evident. Back in the 1930s, the great economist Lionel Robbins described his profession in a way that has stood ever since as a cardinal rule for millions of economists. The field’s basic premises came from “deduction from simple assumptions reflecting very elementary facts of general experience” and as such were “as universal as the laws of mathematics or mechanics, and as little capable of ‘suspension’”.

Deducing laws from premises deemed eternal and beyond question is a time-honoured method. For thousands of years, monks in medieval monasteries built a vast corpus of scholarship doing just that, using a method perfected by Thomas Aquinas known as scholasticism. However, this is not the method used by scientists, who tend to require assumptions to be tested empirically before a theory can be built out of them.
But, economists will maintain, this is precisely what they themselves do – what sets them apart from the monks is that they must still test their hypotheses against the evidence. Well, yes, but this statement is actually more problematic than many mainstream economists may realise. Physicists resolve their debates by looking at the data, upon which they by and large agree. The data used by economists, however, is much more disputed. When, for example, Robert Lucas insisted that Eugene Fama’s efficient-markets hypothesis – which maintains that since a free market collates all available information to traders, the prices it yields can never be wrong – held true despite “a flood of criticism”, he did so with as much conviction and supporting evidence as his fellow economist Robert Shiller had mustered in rejecting the hypothesis. When the Swedish central bank had to decide who would win the 2013 Nobel prize in economics, it was torn between Shiller’s claim that markets frequently got the price wrong and Fama’s insistence that markets always got the price right. Thus it opted to split the difference and gave both men the medal – a bit of Solomonic wisdom that would have elicited howls of laughter had it been a science prize. In economic theory, very often, you believe what you want to believe – and as with any act of faith, your choice of heads or tails will as likely reflect sentimental predisposition as scientific assessment.

It’s no mystery why the data used by economists and other social scientists so rarely throws up incontestable answers: it is human data. Unlike people, subatomic particles don’t lie on opinion surveys or change their minds about things. Mindful of that difference, at his own presidential address to the American Economic Association nearly a half-century ago, another Nobel laureate, Wassily Leontief, struck a modest tone. He reminded his audience that the data used by economists differed greatly from that used by physicists or biologists. For the latter, he cautioned, “the magnitude of most parameters is practically constant”, whereas the observations in economics were constantly changing. Data sets had to be regularly updated to remain useful. Some data was simply bad. Collecting and analysing the data requires civil servants with a high degree of skill and a good deal of time, which less economically developed countries may not have in abundance. So, for example, in 2010 alone, Ghana’s government – which probably has one of the better data-gathering capacities in Africa – recalculated its economic output by 60%. Testing your hypothesis before and after that kind of revision would lead to entirely different results.
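
To see why such a revision matters for empirical work, here is a short Python sketch using invented figures (these are not Ghana’s actual statistics): any quantity expressed relative to output – a debt ratio, a deficit ratio, a spending share – changes completely once the output series is rebased upwards by 60%.

# Invented figures, purely to illustrate the effect of a 60% upward
# revision to a country's estimated output (not real data).
gdp_old = 30.0             # bn, output as originally estimated
gdp_new = gdp_old * 1.6    # bn, output after the statistical rebasing
public_debt = 15.0         # bn, unchanged by the revision

print(round(public_debt / gdp_old, 2))   # 0.5  -> debt looks like 50% of output
print(round(public_debt / gdp_new, 2))   # 0.31 -> the same debt is now ~31% of output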

 
‘The data used by economists rarely throws up incontestable answers’ … traders at the New York Stock Exchange in October 2008. Photograph: Spencer Platt/Getty Images

Leontief wanted economists to spend more time getting to know their data, and less time in mathematical modelling. However, as he ruefully admitted, the trend was already going in the opposite direction. Today, the economist who wanders into a village to get a deeper sense of what the data reveals is a rare creature. Once an economic model is ready to be tested, number-crunching ends up being done largely at computers plugged into large databases. It’s not a method that fully satisfies a sceptic. For, just as you can find a quotation in the Bible that will justify almost any behaviour, you can find human data to support almost any statement you want to make about the way the world works.

That’s why ideas in economics can go in and out of fashion. The progress of science is generally linear. As new research confirms or replaces existing theories, one generation builds upon the next. Economics, however, moves in cycles. A given doctrine can rise, fall and then later rise again. That’s because economists don’t confirm their theories in quite the same way physicists do, by just looking at the evidence. Instead, much as happens with preachers who gather a congregation, a school rises by building a following – among both politicians and the wider public.

For example, Milton Friedman was one of the most influential economists of the late 20th century. But he had been around for decades before he got much of a hearing. He might well have remained a marginal figure had it not been that politicians such as Margaret Thatcher and Ronald Reagan were sold on his belief in the virtue of a free market. They sold that idea to the public, got elected, then remade society according to those designs. An economist who gets a following gets a pulpit. Although scientists, in contrast, might appeal to public opinion to boost their careers or attract research funds, outside of pseudo-sciences, they don’t win support for their theories in this way.
However, if you think describing economics as a religion debunks it, you’re wrong. We need economics. It can be – it has been – a force for tremendous good. But only if we keep its purpose in mind, and always remember what it can and can’t do.

The Irish have been known to describe their notionally Catholic land as one where a thin Christian veneer was painted over an ancient paganism. The same might be said of our own adherence to today’s neoliberal orthodoxy, which stresses individual liberty, limited government and the free market. Despite outward observance of a well-entrenched doctrine, we haven’t fully transformed into the economic animals we are meant to be. Like the Christian who attends church but doesn’t always keep the commandments, we behave as economic theory predicts only when it suits us. Contrary to the tenets of orthodox economists, contemporary research suggests that, rather than seeking always to maximise our personal gain, humans still remain reasonably altruistic and selfless. Nor is it clear that the endless accumulation of wealth always makes us happier. And when we do make decisions, especially those to do with matters of principle, we seem not to engage in the sort of rational “utility-maximizing” calculus that orthodox economic models take as a given. The truth is, in much of our daily life we don’t fit the model all that well.


Economists work best when they take the stories we have given them, and advise us on how we can help them to come true


For decades, neoliberal evangelists replied to such objections by saying it was incumbent on us all to adapt to the model, which was held to be immutable – one recalls Bill Clinton’s depiction of neoliberal globalisation, for instance, as a “force of nature”. And yet, in the wake of the 2008 financial crisis and the consequent recession, there has been a turn against globalisation across much of the west. More broadly, there has been a wide repudiation of the “experts”, most notably in the 2016 US election and Brexit referendum.

It would be tempting for anyone who belongs to the “expert” class, and to the priesthood of economics, to dismiss such behaviour as a clash between faith and facts, in which the facts are bound to win in the end. In truth, the clash was between two rival faiths – in effect, two distinct moral tales. So enamoured had the so-called experts become with their scientific authority that they blinded themselves to the fact that their own narrative of scientific progress was embedded in a moral tale. It happened to be a narrative that had a happy ending for those who told it, for it perpetuated the story of their own relatively comfortable position as the reward of life in a meritocratic society that blessed people for their skills and flexibility. That narrative made no room for the losers of this order, whose resentments were derided as being a reflection of their boorish and retrograde character – which is to say, their fundamental vice. The best this moral tale could offer everyone else was incremental adaptation to an order whose caste system had become calcified. For an audience yearning for a happy ending, this was bound to be a tale of woe.

The failure of this grand narrative is not, however, a reason for students of economics to dispense with narratives altogether. Narratives will remain an inescapable part of the human sciences for the simple reason that they are inescapable for humans. It’s funny that so few economists get this, because businesses do. As the Nobel laureates George Akerlof and Robert Shiller write in their recent book, Phishing for Phools, marketers use them all the time, weaving stories in the hopes that we will place ourselves in them and be persuaded to buy what they are selling. Akerlof and Shiller contend that the idea that free markets work perfectly, and the idea that big government is the cause of so many of our problems, are part of a story that is actually misleading people into adjusting their behaviour in order to fit the plot. They thus believe storytelling is a “new variable” for economics, since “the mental frames that underlie people’s decisions” are shaped by the stories they tell themselves.

Economists arguably do their best work when they take the stories we have given them, and advise us on how we can help them to come true. Such agnosticism demands a humility that was lacking in economic orthodoxy in recent years. Nevertheless, economists don’t have to abandon their traditions if they are to overcome the failings of a narrative that has been rejected. Rather they can look within their own history to find a method that avoids the evangelical certainty of orthodoxy.

In his 1971 presidential address to the American Economic Association, Wassily Leontief counselled against the dangers of self-satisfaction. He noted that although economics was starting to ride “the crest of intellectual respectability … an uneasy feeling about the present state of our discipline has been growing in some of us who have watched its unprecedented development over the last three decades”.

Noting that pure theory was making economics more remote from day-to-day reality, he said the problem lay in “the palpable inadequacy of the scientific means” of using mathematical approaches to address mundane concerns. So much time went into model-construction that the assumptions on which the models were based became an afterthought. “But,” he warned – a warning that the sub-prime boom’s fascination with mathematical models, and the bust’s subsequent revelation of their flaws, now reveals to have been prophetic – “it is precisely the empirical validity of these assumptions on which the usefulness of the entire exercise depends.”

Leontief thought that economics departments were increasingly hiring and promoting young economists who wanted to build pure models with little empirical relevance. Even when they did empirical analysis, Leontief said economists seldom took any interest in the meaning or value of their data. He thus called for economists to explore their assumptions and data by conducting social, demographic and anthropological work, and said economics needed to work more closely with other disciplines.


Leontief’s call for humility some 40 years ago stands as a reminder that the same religions that can speak up for human freedom and dignity when in opposition can become obsessed with their rightness and the need to purge others of their wickedness once they attain power. When the church retains its distance from power, and a modest expectation about what it can achieve, it can stir our minds to envision new possibilities and even new worlds. Once economists apply this kind of sceptical scientific method to a human realm in which ultimate reality may never be fully discernible, they will probably find themselves retreating from dogmatism in their claims.

Paradoxically, therefore, as economics becomes more truly scientific, it will become less of a science. Acknowledging these limitations will free it to serve us once more.

Tuesday 27 June 2017

Is the staggeringly profitable business of scientific publishing bad for science?

Stephen Buranyi in The Guardian


In 2011, Claudio Aspesi, a senior investment analyst at Bernstein Research in London, made a bet that the dominant firm in one of the most lucrative industries in the world was headed for a crash. Reed-Elsevier, a multinational publishing giant with annual revenues exceeding £6bn, was an investor’s darling. It was one of the few publishers that had successfully managed the transition to the internet, and a recent company report was predicting yet another year of growth. Aspesi, though, had reason to believe that that prediction – along with those of every other major financial analyst – was wrong.

The core of Elsevier’s operation is in scientific journals, the weekly or monthly publications in which scientists share their results. Despite the narrow audience, scientific publishing is a remarkably big business. With total global revenues of more than £19bn, it weighs in somewhere between the recording and the film industries in size, but it is far more profitable. In 2010, Elsevier’s scientific publishing arm reported profits of £724m on just over £2bn in revenue. It was a 36% margin – higher than Apple, Google, or Amazon posted that year.

But Elsevier’s business model seemed a truly puzzling thing. In order to make money, a traditional publisher – say, a magazine – first has to cover a multitude of costs: it pays writers for the articles; it employs editors to commission, shape and check the articles; and it pays to distribute the finished product to subscribers and retailers. All of this is expensive, and successful magazines typically make profits of around 12-15%.

The way to make money from a scientific article looks very similar, except that scientific publishers manage to duck most of the actual costs. Scientists create work under their own direction – funded largely by governments – and give it to publishers for free; the publisher pays scientific editors who judge whether the work is worth publishing and check its grammar, but the bulk of the editorial burden – checking the scientific validity and evaluating the experiments, a process known as peer review – is done by working scientists on a volunteer basis. The publishers then sell the product back to government-funded institutional and university libraries, to be read by scientists – who, in a collective sense, created the product in the first place.

It is as if the New Yorker or the Economist demanded that journalists write and edit each other’s work for free, and asked the government to foot the bill. Outside observers tend to fall into a sort of stunned disbelief when describing this setup. A 2004 parliamentary science and technology committee report on the industry drily observed that “in a traditional market suppliers are paid for the goods they provide”. A 2005 Deutsche Bank report referred to it as a “bizarre” “triple-pay” system, in which “the state funds most research, pays the salaries of most of those checking the quality of research, and then buys most of the published product”.

Scientists are well aware that they seem to be getting a bad deal. The publishing business is “perverse and needless”, the Berkeley biologist Michael Eisen wrote in a 2003 article for the Guardian, declaring that it “should be a public scandal”. Adrian Sutton, a physicist at Imperial College, told me that scientists “are all slaves to publishers. What other industry receives its raw materials from its customers, gets those same customers to carry out the quality control of those materials, and then sells the same materials back to the customers at a vastly inflated price?” (A representative of RELX Group, the official name of Elsevier since 2015, told me that it and other publishers “serve the research community by doing things that they need that they either cannot, or do not do on their own, and charge a fair price for that service”.)

Many scientists also believe that the publishing industry exerts too much influence over what scientists choose to study, which is ultimately bad for science itself. Journals prize new and spectacular results – after all, they are in the business of selling subscriptions – and scientists, knowing exactly what kind of work gets published, align their submissions accordingly. This produces a steady stream of papers, the importance of which is immediately apparent. But it also means that scientists do not have an accurate map of their field of inquiry. Researchers may end up inadvertently exploring dead ends that their fellow scientists have already run up against, solely because the information about previous failures has never been given space in the pages of the relevant scientific publications. A 2013 study, for example, reported that half of all clinical trials in the US are never published in a journal.

According to critics, the journal system actually holds back scientific progress. In a 2008 essay, Dr Neal Young of the National Institutes of Health (NIH), which funds and conducts medical research for the US government, argued that, given the importance of scientific innovation to society, “there is a moral imperative to reconsider how scientific data are judged and disseminated”.

Aspesi, after talking to a network of more than 25 prominent scientists and activists, had come to believe the tide was about to turn against the industry that Elsevier led. More and more research libraries, which purchase journals for universities, were claiming that their budgets were exhausted by decades of price increases, and were threatening to cancel their multi-million-pound subscription packages unless Elsevier dropped its prices. State organisations such as the American NIH and the German Research Foundation (DFG) had recently committed to making their research available through free online journals, and Aspesi believed that governments might step in and ensure that all publicly funded research would be available for free, to anyone. Elsevier and its competitors would be caught in a perfect storm, with their customers revolting from below, and government regulation looming above.

In March 2011, Aspesi published a report recommending that his clients sell Elsevier stock. A few months later, in a conference call between Elsevier management and investment firms, he pressed the CEO of Elsevier, Erik Engstrom, about the deteriorating relationship with the libraries. He asked what was wrong with the business if “your customers are so desperate”. Engstrom dodged the question. Over the next two weeks, Elsevier stock tumbled by more than 20%, losing £1bn in value. The problems Aspesi saw were deep and structural, and he believed they would play out over the next half-decade – but things already seemed to be moving in the direction he had predicted.

Over the next year, however, most libraries backed down and committed to Elsevier’s contracts, and governments largely failed to push an alternative model for disseminating research. In 2012 and 2013, Elsevier posted profit margins of more than 40%. The following year, Aspesi reversed his recommendation to sell. “He listened to us too closely, and he got a bit burned,” David Prosser, the head of Research Libraries UK, and a prominent voice for reforming the publishing industry, told me recently. Elsevier was here to stay.

Illustration: Dom McKenzie

Aspesi was not the first person to incorrectly predict the end of the scientific publishing boom, and he is unlikely to be the last. It is hard to believe that what is essentially a for-profit oligopoly functioning within an otherwise heavily regulated, government-funded enterprise can avoid extinction in the long run. But publishing has been deeply enmeshed in the science profession for decades. Today, every scientist knows that their career depends on being published, and professional success is especially determined by getting work into the most prestigious journals. The long, slow, nearly directionless work pursued by some of the most influential scientists of the 20th century is no longer a viable career option. Under today’s system, the father of genetic sequencing, Fred Sanger, who published very little in the two decades between his 1958 and 1980 Nobel prizes, may well have found himself out of a job.

Even scientists who are fighting for reform are often not aware of the roots of the system: how, in the boom years after the second world war, entrepreneurs built fortunes by taking publishing out of the hands of scientists and expanding the business on a previously unimaginable scale. And no one was more transformative and ingenious than Robert Maxwell, who turned scientific journals into a spectacular money-making machine that bankrolled his rise in British society. Maxwell would go on to become an MP, a press baron who challenged Rupert Murdoch, and one of the most notorious figures in British life. But his true importance was far larger than most of us realise. Improbable as it might sound, few people in the last century have done more to shape the way science is conducted today than Maxwell.

In 1946, the 23-year-old Robert Maxwell was working in Berlin and already had a significant reputation. Although he had grown up in a poor Czech village, he had fought for the British army during the war as part of a contingent of European exiles, winning a Military Cross and British citizenship in the process. After the war, he served as an intelligence officer in Berlin, using his nine languages to interrogate prisoners. Maxwell was tall, brash, and not at all content with his already considerable success – an acquaintance at the time recalled him confessing his greatest desire: “to be a millionaire”.

At the same time, the British government was preparing an unlikely project that would allow him to do just that. Top British scientists – from Alexander Fleming, who discovered penicillin, to the physicist Charles Galton Darwin, grandson of Charles Darwin – were concerned that while British science was world-class, its publishing arm was dismal. Science publishers were mainly known for being inefficient and constantly broke. Journals, which often appeared on cheap, thin paper, were produced almost as an afterthought by scientific societies. The British Chemical Society had a months-long backlog of articles for publication, and relied on cash handouts from the Royal Society to run its printing operations.

The government’s solution was to pair the venerable British publishing house Butterworths (now owned by Elsevier) with the renowned German publisher Springer, to draw on the latter’s expertise. Butterworths would learn to turn a profit on journals, and British science would get its work out at a faster pace. Maxwell had already established his own business helping Springer ship scientific articles to Britain. The Butterworths directors, being ex-British intelligence themselves, hired the young Maxwell to help manage the company, and another ex-spook, Paul Rosbaud, a metallurgist who spent the war passing Nazi nuclear secrets to the British through the French and Dutch resistance, as scientific editor.

They couldn’t have begun at a better time. Science was about to enter a period of unprecedented growth, having gone from being a scattered, amateur pursuit of wealthy gentlemen to a respected profession. In the postwar years, it would become a byword for progress. “Science has been in the wings. It should be brought to the centre of the stage – for in it lies much of our hope for the future,” wrote the American engineer and Manhattan Project administrator Vannevar Bush, in a 1945 report to President Harry S Truman. After the war, government emerged for the first time as the major patron of scientific endeavour, not just in the military, but through newly created agencies such as the US National Science Foundation, and the rapidly expanding university system.

When Butterworths decided to abandon the fledgling project in 1951, Maxwell offered £13,000 (about £420,000 today) for both Butterworth’s and Springer’s shares, giving him control of the company. Rosbaud stayed on as scientific director, and named the new venture Pergamon Press, after a coin from the ancient Greek city of Pergamon, featuring Athena, goddess of wisdom, which they adapted for the company’s logo – a simple line drawing appropriately representing both knowledge and money.

In an environment newly flush with cash and optimism, it was Rosbaud who pioneered the method that would drive Pergamon’s success. As science expanded, he realised that it would need new journals to cover new areas of study. The scientific societies that had traditionally created journals were unwieldy institutions that tended to move slowly, hampered by internal debates between members about the boundaries of their field. Rosbaud had none of these constraints. All he needed to do was to convince a prominent academic that their particular field required a new journal to showcase it properly, and install that person at the helm of it. Pergamon would then begin selling subscriptions to university libraries, which suddenly had a lot of government money to spend.

Maxwell was a quick study. In 1955, he and Rosbaud attended the Geneva Conference on Peaceful Uses of Atomic Energy. Maxwell rented an office near the conference and wandered into seminars and official functions offering to publish any papers the scientists had come to present, and asking them to sign exclusive contracts to edit Pergamon journals. Other publishers were shocked by his brash style. Daan Frank, of North Holland Publishing (now owned by Elsevier) would later complain that Maxwell was “dishonest” for scooping up scientists without regard for specific content.

Rosbaud, too, was reportedly put off by Maxwell’s hunger for profit. Unlike the humble former scientist, Maxwell favoured expensive suits and slicked-back hair. Having rounded his Czech accent into a formidably posh, newsreader basso, he looked and sounded precisely like the tycoon he wished to be. In 1955, Rosbaud told the Nobel prize-winning physicist Nevill Mott that the journals were his beloved little “ewe lambs”, and Maxwell was the biblical King David, who would butcher and sell them for profit. In 1956, the pair had a falling out, and Rosbaud left the company.

By then, Maxwell had taken Rosbaud’s business model and turned it into something all his own. Scientific conferences tended to be drab, low-ceilinged affairs, but when Maxwell returned to the Geneva conference that year, he rented a house in nearby Collonge-Bellerive, a picturesque town on the lakeshore, where he entertained guests at parties with booze, cigars and sailboat trips. Scientists had never seen anything like him. “He always said we don’t compete on sales, we compete on authors,” Albert Henderson, a former deputy director at Pergamon, told me. “We would attend conferences specifically looking to recruit editors for new journals.” There are tales of parties on the roof of the Athens Hilton, of gifts of Concorde flights, of scientists being put on a chartered boat tour of the Greek islands to plan their new journal.

By 1959, Pergamon was publishing 40 journals; six years later it would publish 150. This put Maxwell well ahead of the competition. (In 1959, Pergamon’s rival, Elsevier, had just 10 English-language journals, and it would take the company another decade to reach 50.) By 1960, Maxwell had taken to being driven in a chauffeured Rolls-Royce, and moved his home and the Pergamon operation from London to the palatial Headington Hill Hall estate in Oxford, which was also home to the British book publishing house Blackwell’s.

Scientific societies, such as the British Society of Rheology, seeing the writing on the wall, even began letting Pergamon take over their journals for a small regular fee. Leslie Iversen, former editor at the Journal of Neurochemistry, recalls being wooed with lavish dinners at Maxwell’s estate. “He was very impressive, this big entrepreneur,” said Iversen. “We would get dinner and fine wine, and at the end he would present us a cheque – a few thousand pounds for the society. It was more money than us poor scientists had ever seen.”

Maxwell insisted on grand titles – “International Journal of” was a favourite prefix. Peter Ashby, a former vice president at Pergamon, described this to me as a “PR trick”, but it also reflected a deep understanding of how science, and society’s attitude to science, had changed. Collaborating and getting your work seen on the international stage was becoming a new form of prestige for researchers, and in many cases Maxwell had the market cornered before anyone else realised it existed. When the Soviet Union launched Sputnik, the first man-made satellite, in 1957, western scientists scrambled to catch up on Russian space research, and were surprised to learn that Maxwell had already negotiated an exclusive English-language deal to publish the Russian Academy of Sciences’ journals earlier in the decade.

“He had interests in all of these places. I went to Japan, he had an American man running an office there by himself. I went to India, there was someone there,” said Ashby. And the international markets could be extremely lucrative. Ronald Suleski, who ran Pergamon’s Japanese office in the 1970s, told me that the Japanese scientific societies, desperate to get their work published in English, gave Maxwell the rights to their members’ results for free.

In a letter celebrating Pergamon’s 40th anniversary, Eiichi Kobayashi, director of Maruzen, Pergamon’s longtime Japanese distributor, recalled of Maxwell that “each time I have the pleasure of meeting him, I am reminded of F Scott Fitzgerald’s words that a millionaire is no ordinary man”.

The scientific article has essentially become the only way science is systematically represented in the world. (As Robert Kiley, head of digital services at the library of the Wellcome Trust, the world’s second-biggest private funder of biomedical research, puts it: “We spend a billion pounds a year, and we get back articles.”) It is the primary resource of our most respected realm of expertise. “Publishing is the expression of our work. A good idea, a conversation or correspondence, even from the most brilliant person in the world … doesn’t count for anything unless you have it published,” says Neal Young of the NIH. If you control access to the scientific literature, it is, to all intents and purposes, like controlling science.

Maxwell’s success was built on an insight into the nature of scientific journals that would take others years to understand and replicate. While his competitors groused about him diluting the market, Maxwell knew that there was, in fact, no limit to the market. Creating The Journal of Nuclear Energy didn’t take business away from rival publisher North Holland’s journal Nuclear Physics. Scientific articles are about unique discoveries: one article cannot substitute for another. If a serious new journal appeared, scientists would simply request that their university library subscribe to that one as well. If Maxwell was creating three times as many journals as his competition, he would make three times more money.

The only potential limit was a slow-down in government funding, but there was little sign of that happening. In the 1960s, Kennedy bankrolled the space programme, and at the outset of the 1970s Nixon declared a “war on cancer”, while at the same time the British government developed its own nuclear programme with American aid. No matter the political climate, science was buoyed by great swells of government money.


  Robert Maxwell in 1985. Photograph: Terry O'Neill/Hulton/Getty

In its early days, Pergamon had been at the centre of fierce debates about the ethics of allowing commercial interests into the supposedly disinterested and profit-shunning world of science. In a 1988 letter commemorating the 40th anniversary of Pergamon, John Coales of Cambridge University noted that initially many of his friends “considered [Maxwell] the greatest villain yet unhung”.

But by the end of the 1960s, commercial publishing was considered the status quo, and publishers were seen as a necessary partner in the advancement of science. Pergamon helped turbocharge the field’s great expansion by speeding up the publication process and presenting it in a more stylish package. Scientists’ concerns about signing away their copyright were overwhelmed by the convenience of dealing with Pergamon, the shine it gave their work, and the force of Maxwell’s personality. Scientists, it seemed, were largely happy with the wolf they had let in the door.

“He was a bully, but I quite liked him,” says Denis Noble, a physiologist at Oxford University and the editor of the journal Progress in Biophysics & Molecular Biology. Occasionally, Maxwell would call Noble to his house for a meeting. “Often there would be a party going on, a nice musical ensemble, there was no barrier between his work and personal life,” Noble says. Maxwell would then proceed to alternately browbeat and charm him into splitting the biannual journal into a monthly or bimonthly publication, which would lead to an attendant increase in subscription payments.

In the end, though, Maxwell would nearly always defer to the scientists’ wishes, and scientists came to appreciate his patronly persona. “I have to confess that, quickly realising his predatory and entrepreneurial ambitions, I nevertheless took a great liking to him,” Arthur Barrett, then editor of the journal Vacuum, wrote in a 1988 piece about the publication’s early years. And the feeling was mutual. Maxwell doted on his relationships with famous scientists, who were treated with uncharacteristic deference. “He realised early on that the scientists were vitally important. He would do whatever they wanted. It drove the rest of the staff crazy,” Richard Coleman, who worked in journal production at Pergamon in the late 1960s, told me. When Pergamon was the target of a hostile takeover attempt, a 1973 Guardian article reported that journal editors threatened “to desert” rather than work for another chairman.

Maxwell had transformed the business of publishing, but the day-to-day work of science remained unchanged. Scientists still largely took their work to whichever journal was the best fit for their research area – and Maxwell was happy to publish any and all research that his editors deemed sufficiently rigorous. In the mid-1970s, though, publishers began to meddle with the practice of science itself, starting down a path that would lock scientists’ careers into the publishing system, and impose the business’s own standards on the direction of research. One journal became the symbol of this transformation.

“At the start of my career, nobody took much notice of where you published, and then everything changed in 1974 with Cell,” Randy Schekman, the Berkeley molecular biologist and Nobel prize winner, told me. Cell (now owned by Elsevier) was a journal started by the Massachusetts Institute of Technology (MIT) to showcase the newly ascendant field of molecular biology. It was edited by a young biologist named Ben Lewin, who approached his work with an intense, almost literary bent. Lewin prized long, rigorous papers that answered big questions – often representing years of research that would have yielded multiple papers in other venues – and, breaking with the idea that journals were passive instruments to communicate science, he rejected far more papers than he published.

What he created was a venue for scientific blockbusters, and scientists began shaping their work on his terms. “Lewin was clever. He realised scientists are very vain, and wanted to be part of this selective members club; Cell was ‘it’, and you had to get your paper in there,” Schekman said. “I was subject to this kind of pressure, too.” He ended up publishing some of his Nobel-cited work in Cell.

Suddenly, where you published became immensely important. Other editors took a similarly activist approach in the hopes of replicating Cell’s success. Publishers also adopted a metric called “impact factor,” invented in the 1960s by Eugene Garfield, a librarian and linguist, as a rough calculation of how often papers in a given journal are cited in other papers. For publishers, it became a way to rank and advertise the scientific reach of their products. The new-look journals, with their emphasis on big results, shot to the top of these new rankings, and scientists who published in “high-impact” journals were rewarded with jobs and funding. Almost overnight, a new currency of prestige had been created in the scientific world. (Garfield later referred to his creation as “like nuclear energy … a mixed blessing”.)
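
The standard two-year version of Garfield’s calculation is easy to sketch; the Python below uses an invented journal and invented counts purely for illustration (the real figure is compiled from large citation databases).

# Minimal sketch of the standard two-year impact factor.
# The journal and all counts below are invented for illustration.

def impact_factor(citations_this_year, citable_items_previous_two_years):
    # Citations received this year to articles published in the previous
    # two years, divided by the number of citable articles from those years.
    return citations_this_year / citable_items_previous_two_years

# A hypothetical journal: 180 papers published across 2015 and 2016,
# which together attracted 540 citations during 2017.
print(impact_factor(540, 180))   # 3.0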

It is difficult to overstate how much power a journal editor now had to shape a scientist’s career and the direction of science itself. “Young people tell me all the time, ‘If I don’t publish in CNS [a common acronym for Cell/Nature/Science, the most prestigious journals in biology], I won’t get a job’,” says Schekman. He compared the pursuit of high-impact publications to an incentive system as rotten as banking bonuses. “They have a very big influence on where science goes,” he said.

And so science became a strange co-production between scientists and journal editors, with the former increasingly pursuing discoveries that would impress the latter. These days, given a choice of projects, a scientist will almost always reject both the prosaic work of confirming or disproving past studies, and the decades-long pursuit of a risky “moonshot”, in favour of a middle ground: a topic that is popular with editors and likely to yield regular publications. “Academics are incentivised to produce research that caters to these demands,” said the biologist and Nobel laureate Sydney Brenner in a 2014 interview, calling the system “corrupt.”

Maxwell understood the way journals were now the kingmakers of science. But his main concern was still expansion, and he still had a keen vision of where science was heading, and which new fields of study he could colonise. Richard Charkin, the former CEO of the British publisher Macmillan, who was an editor at Pergamon in 1974, recalls Maxwell waving Watson and Crick’s one-page report on the structure of DNA at an editorial meeting and declaring that the future was in life science and its multitude of tiny questions, each of which could have its own publication. “I think we launched a hundred journals that year,” Charkin said. “I mean, Jesus wept.”

Pergamon also branched into social sciences and psychology. A series of journals prefixed “Computers and” suggests that Maxwell spotted the growing importance of digital technology. “It was endless,” Peter Ashby told me. “Oxford Polytechnic [now Oxford Brookes University] started a department of hospitality with a chef. We had to go find out who the head of the department was, make him start a journal. And boom – International Journal of Hospitality Management.”

By the late 1970s, Maxwell was also dealing with a more crowded market. “I was at Oxford University Press at that time,” Charkin told me. “We sat up and said, ‘Hell, these journals make a lot of money!’” Meanwhile, in the Netherlands, Elsevier had begun expanding its English-language journals, absorbing the domestic competition in a series of acquisitions and growing at a rate of 35 titles a year.

As Maxwell had predicted, competition didn’t drive down prices. Between 1975 and 1985, the average price of a journal doubled. The New York Times reported that in 1984 it cost $2,500 to subscribe to the journal Brain Research; in 1988, it cost more than $5,000. That same year, Harvard Library overran its research journal budget by half a million dollars.

Scientists occasionally questioned the fairness of this hugely profitable business to which they supplied their work for free, but it was university librarians who first realised the trap in the market Maxwell had created. The librarians used university funds to buy journals on behalf of scientists. Maxwell was well aware of this. “Scientists are not as price-conscious as other professionals, mainly because they are not spending their own money,” he told his publication Global Business in a 1988 interview. And since there was no way to swap one journal for another, cheaper one, the result was, Maxwell continued, “a perpetual financing machine”. Librarians were locked into a series of thousands of tiny monopolies. There were now more than a million scientific articles being published a year, and they had to buy all of them at whatever price the publishers wanted.

From a business perspective, it was a total victory for Maxwell. Libraries were a captive market, and journals had improbably installed themselves as the gatekeepers of scientific prestige – meaning that scientists couldn’t simply abandon them if a new method of sharing results came along. “Were we not so naive, we would long ago have recognised our true position: that we are sitting on top of fat piles of money which clever people on all sides are trying to transfer on to their piles,” wrote the University of Michigan librarian Robert Houbeck in a trade journal in 1988. Three years earlier, despite scientific funding suffering its first multi-year dip in decades, Pergamon had reported a 47% profit margin.

Maxwell wouldn’t be around to tend his victorious empire. The acquisitive nature that drove Pergamon’s success also led him to make a surfeit of flashy but questionable investments, including the football teams Oxford United and Derby County FC, television stations around the world, and, in 1984, the UK’s Mirror newspaper group, where he began to spend more and more of his time. In 1991, to finance his impending purchase of the New York Daily News, Maxwell sold Pergamon to its quiet Dutch competitor Elsevier for £440m (£919m today).

Many former Pergamon employees separately told me that they knew it was all over for Maxwell when he made the Elsevier deal, because Pergamon was the company he truly loved. Later that year, he became mired in a series of scandals over his mounting debts, shady accounting practices, and an explosive accusation by the American journalist Seymour Hersh that he was an Israeli spy with links to arms traders. On 5 November 1991, Maxwell was found drowned off his yacht in the Canary Islands. The world was stunned, and by the next day the Mirror’s tabloid rival Sun was posing the question on everyone’s mind: “DID HE FALL … DID HE JUMP?”, its headline blared. (A third explanation, that he was pushed, would also come up.)

The story dominated the British press for months, with suspicion growing that Maxwell had committed suicide, after an investigation revealed that he had stolen more than £400m from the Mirror pension fund to service his debts. (In December 1991, a Spanish coroner’s report ruled the death accidental.) The speculation was endless: in 2003, the journalists Gordon Thomas and Martin Dillon published a book alleging that Maxwell was assassinated by Mossad to hide his spying activities. By that time, Maxwell was long gone, but the business he had started continued to thrive in new hands, reaching new levels of profit and global power over the coming decades.

If Maxwell’s genius was in expansion, Elsevier’s was in consolidation. With the purchase of Pergamon’s 400-strong catalogue, Elsevier now controlled more than 1,000 scientific journals, making it by far the largest scientific publisher in the world.

At the time of the merger, Charkin, the former Macmillan CEO, recalls advising Pierre Vinken, the CEO of Elsevier, that Pergamon was a mature business, and that Elsevier had overpaid for it. But Vinken had no doubts, Charkin recalled: “He said, ‘You have no idea how profitable these journals are once you stop doing anything. When you’re building a journal, you spend time getting good editorial boards, you treat them well, you give them dinners. Then you market the thing and your salespeople go out there to sell subscriptions, which is slow and tough, and you try to make the journal as good as possible. That’s what happened at Pergamon. And then we buy it and we stop doing all that stuff and then the cash just pours out and you wouldn’t believe how wonderful it is.’ He was right and I was wrong.”

By 1994, three years after acquiring Pergamon, Elsevier had raised its prices by 50%. Universities complained that their budgets were stretched to breaking point – the US-based Publishers Weekly reported librarians referring to a “doomsday machine” in their industry – and, for the first time, they began cancelling subscriptions to less popular journals.

Illustration: Dom McKenzie

At the time, Elsevier’s behaviour seemed suicidal. It was angering its customers just as the internet was arriving to offer them a free alternative. A 1995 Forbes article described scientists sharing results over early web servers, and asked if Elsevier was to be “The Internet’s First Victim”. But, as always, the publishers understood the market better than the academics.

In 1998, Elsevier rolled out its plan for the internet age, which would come to be called “The Big Deal”. It offered electronic access to bundles of hundreds of journals at a time: a university would pay a set fee each year – according to a report based on freedom of information requests, Cornell University’s 2009 tab was just short of $2m – and any student or professor could download any journal they wanted through Elsevier’s website. Universities signed up en masse.

Those predicting Elsevier’s downfall had assumed scientists experimenting with sharing their work for free online could slowly outcompete Elsevier’s titles by replacing them one at a time. In response, Elsevier created a switch that fused Maxwell’s thousands of tiny monopolies into one so large that, like a basic resource – say water, or power – it was impossible for universities to do without. Pay, and the scientific lights stayed on, but refuse, and up to a quarter of the scientific literature would go dark at any one institution. It concentrated immense power in the hands of the largest publishers, and Elsevier’s profits began another steep rise that would lead them into the billions by the 2010s. In 2015, a Financial Times article anointed Elsevier “the business the internet could not kill”.

Publishers are now wound so tightly around the various organs of the scientific body that no single effort has been able to dislodge them. In a 2015 report, an information scientist from the University of Montreal, Vincent Larivière, showed that Elsevier owned 24% of the scientific journal market, while Maxwell’s old partners Springer, and his crosstown rivals Wiley-Blackwell, controlled about another 12% each. These three companies accounted for half the market. (An Elsevier representative familiar with the report told me that by their own estimate they publish only 16% of the scientific literature.)

“Despite my giving sermons all over the world on this topic, it seems journals hold sway even more prominently than before,” Randy Schekman told me. It is that influence, more than the profits that drove the system’s expansion, that most frustrates scientists today.

Elsevier says its primary goal is to facilitate the work of scientists and other researchers. An Elsevier rep noted that the company publishes 1.5m papers a year; 14 million scientists entrust Elsevier to publish their results, and 800,000 scientists donate their time to help them with editing and peer-review. “We help researchers be more productive and efficient,” Alicia Wise, senior vice president of global strategic networks, told me. “And that’s a win for research institutions, and for research funders like governments.”

On the question of why so many scientists are so critical of journal publishers, Tom Reller, vice president of corporate relations at Elsevier, said: “It’s not for us to talk about other people’s motivations. We look at the numbers [of scientists who trust their results to Elsevier] and that suggests we are doing a good job.” Asked about criticisms of Elsevier’s business model, Reller said in an email that these criticisms overlooked “all the things that publishers do to add value – above and beyond the contributions that public-sector funding brings”. That, he said, is what they were charging for.

In a sense, it is not any one publisher’s fault that the scientific world seems to bend to the industry’s gravitational pull. When governments including those of China and Mexico offer financial bonuses for publishing in high-impact journals, they are not responding to a demand by any specific publisher, but following the rewards of an enormously complex system that has to reconcile the utopian ideals of science with the commercial goals of the publishers that dominate it. (“We scientists have not given a lot of thought to the water we’re swimming in,” Neal Young told me.)

Since the early 2000s, scientists have championed an alternative to subscription publishing called “open access”. This solves the difficulty of balancing scientific and commercial imperatives by simply removing the commercial element. In practice, this usually takes the form of online journals, to which scientists pay an upfront fee to cover editing costs and which then make the work freely available to anyone, in perpetuity. But despite the backing of some of the biggest funding agencies in the world, including the Gates Foundation and the Wellcome Trust, only about a quarter of scientific papers are made freely available at the time of their publication.

The idea that scientific research should be freely available for anyone to use is a sharp departure from, even a threat to, the current system – which relies on publishers’ ability to restrict access to the scientific literature in order to maintain its immense profitability. In recent years, the most radical opposition to the status quo has coalesced around a controversial website called Sci-Hub – a sort of Napster for science that allows anyone to download scientific papers for free. Its creator, Alexandra Elbakyan, a Kazakhstani, is in hiding, facing charges of hacking and copyright infringement in the US. Elsevier recently obtained a $15m injunction (the maximum allowable amount) against her.

Elbakyan is an unabashed utopian. “Science should belong to scientists and not the publishers,” she told me in an email. In a letter to the court, she cited Article 27 of the UN’s Universal Declaration of Human Rights, asserting the right “to share in scientific advancement and its benefits”.

Whatever the fate of Sci-Hub, it seems that frustration with the current system is growing. But history shows that betting against science publishers is a risky move. After all, back in 1988, Maxwell predicted that in the future there would only be a handful of immensely powerful publishing companies left, and that they would ply their trade in an electronic age with no printing costs, leading to almost “pure profit”. That sounds a lot like the world we live in now.