Thursday 27 February 2020

Why your brain is not a computer

For decades it has been the dominant metaphor in neuroscience. But could this idea have been leading us astray all along? By Matthew Cobb in The Guardian 

We are living through one of the greatest of scientific endeavours – the attempt to understand the most complex object in the universe, the brain. Scientists are accumulating vast amounts of data about structure and function in a huge array of brains, from the tiniest to our own. Tens of thousands of researchers are devoting massive amounts of time and energy to thinking about what brains do, and astonishing new technology is enabling us to both describe and manipulate that activity.

We can now make a mouse remember something about a smell it has never encountered, turn a bad mouse memory into a good one, and even use a surge of electricity to change how people perceive faces. We are drawing up increasingly detailed and complex functional maps of the brain, human and otherwise. In some species, we can change the brain’s very structure at will, altering the animal’s behaviour as a result. Some of the most profound consequences of our growing mastery can be seen in our ability to enable a paralysed person to control a robotic arm with the power of their mind.

Every day, we hear about new discoveries that shed light on how brains work, along with the promise – or threat – of new technology that will enable us to do such far-fetched things as read minds, or detect criminals, or even be uploaded into a computer. Books are published at a steady rate, each claiming to explain the brain in a different way.

And yet there is a growing conviction among some neuroscientists that our future path is not clear. It is hard to see where we should be going, apart from simply collecting more data or counting on the latest exciting experimental approach. As the German neuroscientist Olaf Sporns has put it: “Neuroscience still largely lacks organising principles or a theoretical framework for converting brain data into fundamental knowledge and understanding.” Despite the vast number of facts being accumulated, our understanding of the brain appears to be approaching an impasse.

In 2017, the French neuroscientist Yves Frégnac focused on the current fashion of collecting massive amounts of data in expensive, large-scale projects and argued that the tsunami of data they are producing is leading to major bottlenecks in progress, partly because, as he put it pithily, “big data is not knowledge”.

“Only 20 to 30 years ago, neuroanatomical and neurophysiological information was relatively scarce, while understanding mind-related processes seemed within reach,” Frégnac wrote. “Nowadays, we are drowning in a flood of information. Paradoxically, all sense of global understanding is in acute danger of getting washed away. Each overcoming of technological barriers opens a Pandora’s box by revealing hidden variables, mechanisms and nonlinearities, adding new levels of complexity.”

The neuroscientists Anne Churchland and Larry Abbott have also emphasised our difficulties in interpreting the massive amount of data that is being produced by laboratories all over the world: “Obtaining deep understanding from this onslaught will require, in addition to the skilful and creative application of experimental technologies, substantial advances in data analysis methods and intense application of theoretic concepts and models.”

There are indeed theoretical approaches to brain function, including to the most mysterious thing the human brain can do – produce consciousness. But none of these frameworks are widely accepted, for none has yet passed the decisive test of experimental investigation. Repeated calls for more theory may be a pious hope. It can be argued that there is no possible single theory of brain function, not even in a worm, because a brain is not a single thing. (Scientists even find it difficult to come up with a precise definition of what a brain is.)

As observed by Francis Crick, the co-discoverer of the DNA double helix, the brain is an integrated, evolved structure, with different parts appearing at different moments in evolution and adapted to solve different problems. Our current comprehension of how it all works is extremely partial – for example, most sensory research in neuroscience has focused on sight rather than smell, which is conceptually and technically more challenging. But olfaction and vision work differently, both computationally and structurally. By focusing on vision, we have developed a very limited understanding of what the brain does and how it does it.

The nature of the brain – simultaneously integrated and composite – may mean that our future understanding will inevitably be fragmented and composed of different explanations for different parts. Churchland and Abbott spelled out the implication: “Global understanding, when it comes, will likely take the form of highly diverse panels loosely stitched together into a patchwork quilt.”

For more than half a century, all those highly diverse panels of patchwork we have been working on have been framed by thinking that brain processes involve something like those carried out in a computer. But that does not mean this metaphor will continue to be useful in the future. At the very beginning of the digital age, in 1951, the pioneer neuroscientist Karl Lashley argued against the use of any machine-based metaphor.

“Descartes was impressed by the hydraulic figures in the royal gardens, and developed a hydraulic theory of the action of the brain,” Lashley wrote. “We have since had telephone theories, electrical field theories and now theories based on computing machines and automatic rudders. I suggest we are more likely to find out about how the brain works by studying the brain itself, and the phenomena of behaviour, than by indulging in far-fetched physical analogies.”

This dismissal of metaphor has recently been taken even further by the French neuroscientist Romain Brette, who has challenged the most fundamental metaphor of brain function: coding. Since its inception in the 1920s, the idea of a neural code has come to dominate neuroscientific thinking – more than 11,000 papers on the topic have been published in the past 10 years. Brette’s fundamental criticism was that, in thinking about “code”, researchers inadvertently drift from a technical sense, in which there is a link between a stimulus and the activity of the neuron, to a representational sense, according to which neuronal codes represent that stimulus.

The unstated implication in most descriptions of neural coding is that the activity of neural networks is presented to an ideal observer or reader within the brain, often described as “downstream structures” that have access to the optimal way to decode the signals. But the ways in which such structures actually process those signals is unknown, and is rarely explicitly hypothesised, even in simple models of neural network function.


 
MRI scan of a brain. Photograph: Getty/iStockphoto

The processing of neural codes is generally seen as a series of linear steps – like a line of dominoes falling one after another. The brain, however, consists of highly complex neural networks that are interconnected, and which are linked to the outside world to effect action. Focusing on sets of sensory and processing neurons without linking these networks to the behaviour of the animal misses the point of all that processing.

By viewing the brain as a computer that passively responds to inputs and processes data, we forget that it is an active organ, part of a body that is intervening in the world, and which has an evolutionary past that has shaped its structure and function. This view of the brain has been outlined by the Hungarian neuroscientist György Buzsáki in his recent book The Brain from Inside Out. According to Buzsáki, the brain is not simply passively absorbing stimuli and representing them through a neural code, but rather is actively searching through alternative possibilities to test various options. His conclusion – following scientists going back to the 19th century – is that the brain does not represent information: it constructs it.

The metaphors of neuroscience – computers, coding, wiring diagrams and so on – are inevitably partial. That is the nature of metaphors, which have been intensely studied by philosophers of science and by scientists, as they seem to be so central to the way scientists think. But metaphors are also rich and allow insight and discovery. There will come a point when the understanding they allow will be outweighed by the limits they impose, but in the case of computational and representational metaphors of the brain, there is no agreement that such a moment has arrived. From a historical point of view, the very fact that this debate is taking place suggests that we may indeed be approaching the end of the computational metaphor. What is not clear, however, is what would replace it.

Scientists often get excited when they realise how their views have been shaped by the use of metaphor, and grasp that new analogies could alter how they understand their work, or even enable them to devise new experiments. Coming up with those new metaphors is challenging – most of those used in the past with regard to the brain have been related to new kinds of technology. This could imply that the appearance of new and insightful metaphors for the brain and how it functions hinges on future technological breakthroughs, on a par with hydraulic power, the telephone exchange or the computer. There is no sign of such a development; despite the latest buzzwords that zip about – blockchain, quantum supremacy (or quantum anything), nanotech and so on – it is unlikely that these fields will transform either technology or our view of what brains do.

One sign that our metaphors may be losing their explanatory power is the widespread assumption that much of what nervous systems do, from simple systems right up to the appearance of consciousness in humans, can only be explained as emergent properties – things that you cannot predict from an analysis of the components, but which emerge as the system functions.

In 1981, the British psychologist Richard Gregory argued that the reliance on emergence as a way of explaining brain function indicated a problem with the theoretical framework: “The appearance of ‘emergence’ may well be a sign that a more general (or at least different) conceptual scheme is needed … It is the role of good theories to remove the appearance of emergence. (So explanations in terms of emergence are bogus.)”

This overlooks the fact that there are different kinds of emergence: weak and strong. Weak emergent features, such as the movement of a shoal of tiny fish in response to a shark, can be understood in terms of the rules that govern the behaviour of their component parts. In such cases, apparently mysterious group behaviours are based on the behaviour of individuals, each of which is responding to factors such as the movement of a neighbour, or external stimuli such as the approach of a predator.
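
The shoal example gives a flavour of how weak emergence can be simulated. Below is a deliberately minimal one-dimensional sketch (all names and parameters are invented for illustration): each fish follows two purely local rules, nudging toward the group’s centre and away from a predator, and coherent group flight emerges without any fish “knowing” about the shoal as a whole.

```python
import random

def step(positions, predator, cohesion=0.1, flee=0.5):
    """One update of a toy 1-D shoal: each fish nudges toward the
    group's centre (cohesion) and directly away from the predator
    (flee). No fish 'knows' about the shoal as a whole."""
    centre = sum(positions) / len(positions)
    updated = []
    for p in positions:
        towards_centre = cohesion * (centre - p)
        direction_away = 1.0 if p > predator else -1.0
        updated.append(p + towards_centre + flee * direction_away)
    return updated

random.seed(0)
shoal = [random.uniform(-1.0, 1.0) for _ in range(20)]  # scattered around 0
for _ in range(10):
    shoal = step(shoal, predator=-5.0)  # predator approaching from the left

# The whole group has moved away from the predator, as one
print(min(shoal) > 0)  # → True
```

Run the loop and the entire shoal ends up well clear of the predator, even though no rule anywhere mentions “the shoal moving”. That is weak emergence: the group behaviour is fully recoverable from the individual rules.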

This kind of weak emergence cannot explain the activity of even the simplest nervous systems, never mind the working of your brain, so we fall back on strong emergence, where the phenomenon that emerges cannot be explained by the activity of the individual components. You and the page you are reading this on are both made of atoms, but your ability to read and understand comes from features that emerge through atoms in your body forming higher-level structures, such as neurons and their patterns of firing – not simply from atoms interacting.

Strong emergence has recently been criticised by some neuroscientists as risking “metaphysical implausibility”, because there is no evident causal mechanism, nor any single explanation, of how emergence occurs. Like Gregory, these critics claim that the reliance on emergence to explain complex phenomena suggests that neuroscience is at a key historical juncture, similar to that which saw the slow transformation of alchemy into chemistry. But faced with the mysteries of neuroscience, emergence is often our only resort. And it is not so daft – the amazing properties of deep-learning programmes, which at root cannot be explained by the people who design them, are essentially emergent properties.

Interestingly, while some neuroscientists are discombobulated by the metaphysics of emergence, researchers in artificial intelligence revel in the idea, believing that the sheer complexity of modern computers, or of their interconnectedness through the internet, will lead to what is dramatically known as the singularity. Machines will become conscious.

There are plenty of fictional explorations of this possibility (in which things often end badly for all concerned), and the subject certainly excites the public’s imagination, but there is no reason, beyond our ignorance of how consciousness works, to suppose that it will happen in the near future. In principle, it must be possible, because the working hypothesis is that mind is a product of matter, which we should therefore be able to mimic in a device. But the scale of complexity of even the simplest brains dwarfs any machine we can currently envisage. For decades – centuries – to come, the singularity will be the stuff of science fiction, not science.

A related view of the nature of consciousness turns the brain-as-computer metaphor into a strict analogy. Some researchers view the mind as a kind of operating system that is implemented on neural hardware, with the implication that our minds, seen as a particular computational state, could be uploaded on to some device or into another brain. In the way this is generally presented, this is wrong, or at best hopelessly naive.

The materialist working hypothesis is that brains and minds, in humans and maggots and everything else, are identical. Neurons and the processes they support – including consciousness – are the same thing. In a computer, software and hardware are separate; however, our brains and our minds consist of what can best be described as wetware, in which what is happening and where it is happening are completely intertwined.

Imagining that we can repurpose our nervous system to run different programmes, or upload our mind to a server, might sound scientific, but lurking behind this idea is a non-materialist view going back to Descartes and beyond. It implies that our minds are somehow floating about in our brains, and could be transferred into a different head or replaced by another mind. It would be possible to give this idea a veneer of scientific respectability by posing it in terms of reading the state of a set of neurons and writing that to a new substrate, organic or artificial.

But to even begin to imagine how that might work in practice, we would need both an understanding of neuronal function that is far beyond anything we can currently envisage, and would require unimaginably vast computational power and a simulation that precisely mimicked the structure of the brain in question. For this to be possible even in principle, we would first need to be able to fully model the activity of a nervous system capable of holding a single state, never mind a thought. We are so far away from taking this first step that the possibility of uploading your mind can be dismissed as a fantasy, at least until the far future.

For the moment, the brain-as-computer metaphor retains its dominance, although there is disagreement about how strong a metaphor it is. In 2015, the roboticist Rodney Brooks chose the computational metaphor of the brain as his pet hate in his contribution to a collection of essays entitled This Idea Must Die. Less dramatically, but drawing similar conclusions, two decades earlier the historian S Ryan Johansson argued that “endlessly debating the truth or falsity of a metaphor like ‘the brain is a computer’ is a waste of time. The relationship proposed is metaphorical, and it is ordering us to do something, not trying to tell us the truth.”

On the other hand, the US expert in artificial intelligence, Gary Marcus, has made a robust defence of the computer metaphor: “Computers are, in a nutshell, systematic architectures that take inputs, encode and manipulate information, and transform their inputs into outputs. Brains are, so far as we can tell, exactly that. The real question isn’t whether the brain is an information processor, per se, but rather how do brains store and encode information, and what operations do they perform over that information, once it is encoded.”

Marcus went on to argue that the task of neuroscience is to “reverse engineer” the brain, much as one might study a computer, examining its components and their interconnections to decipher how it works. This suggestion has been around for some time. In 1989, Crick recognised its attractiveness, but felt it would fail, because of the brain’s complex and messy evolutionary history – he dramatically claimed it would be like trying to reverse engineer a piece of “alien technology”. Attempts to find an overall explanation of how the brain works that flow logically from its structure would be doomed to failure, he argued, because the starting point is almost certainly wrong – there is no overall logic.

Reverse engineering a computer is often used as a thought experiment to show how, in principle, we might understand the brain. Inevitably, these thought experiments are successful, encouraging us to pursue this way of understanding the squishy organs in our heads. But in 2017, a pair of neuroscientists decided to actually do the experiment on a real computer chip, which had a real logic and real components with clearly designed functions. Things did not go as expected.

The duo – Eric Jonas and Konrad Paul Kording – employed the very techniques they normally used to analyse the brain and applied them to the MOS 6507 processor found in computers from the late 70s and early 80s that enabled those machines to run video games such as Donkey Kong and Space Invaders.

First, they obtained the connectome of the chip by scanning the 3510 enhancement-mode transistors it contained and simulating the device on a modern computer (including running the games programmes for 10 seconds). They then used the full range of neuroscientific techniques, such as “lesions” (removing transistors from the simulation), analysing the “spiking” activity of the virtual transistors, studying their connectivity, and observing the effect of various manipulations on the behaviour of the system, as measured by its ability to launch each of the games.
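
As a hypothetical illustration of the lesion logic (not Jonas and Kording’s actual code), the sketch below builds a toy “chip” with known ground truth – a two-bit adder made of named gates – knocks out one gate at a time, and measures a behavioural score. The deficits tell you how much each gate matters to behaviour, but nothing about the role it plays in the processing hierarchy, which is exactly the kind of shortfall their study exposed.

```python
from itertools import product

def chip(a0, a1, b0, b1, lesion=None):
    """A two-bit adder with named gates; 'lesioning' a gate forces
    its output to 0, mimicking transistor removal."""
    def gate(name, value):
        return 0 if name == lesion else value
    c0 = gate("and0", a0 & b0)               # carry out of the low bit
    s0 = gate("xor0", a0 ^ b0)               # low bit of the sum
    s1 = gate("xor1", a1 ^ b1 ^ c0)          # high bit of the sum
    c1 = gate("carry", (a1 & b1) | (c0 & (a1 ^ b1)))  # final carry
    return s0 + 2 * s1 + 4 * c1

def accuracy(lesion):
    """Fraction of all 16 inputs the lesioned chip still adds correctly –
    a crude behavioural assay, like asking whether the game still runs."""
    inputs = list(product([0, 1], repeat=4))
    ok = sum(
        chip(a0, a1, b0, b1, lesion=lesion) == (a0 + 2 * a1) + (b0 + 2 * b1)
        for a0, a1, b0, b1 in inputs
    )
    return ok / len(inputs)

for g in [None, "xor0", "and0", "xor1", "carry"]:
    print(g, accuracy(g))
```

Every lesion degrades the behaviour by some amount, so each gate looks “involved in addition” – but the scores alone do not reveal that one gate computes a carry while another computes a sum bit.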

Despite deploying this powerful analytical armoury, and despite the fact that there is a clear explanation for how the chip works (it has “ground truth”, in technospeak), the study failed to detect the hierarchy of information processing that occurs inside the chip. As Jonas and Kording put it, the techniques fell short of producing “a meaningful understanding”. Their conclusion was bleak: “Ultimately, the problem is not that neuroscientists could not understand a microprocessor, the problem is that they would not understand it given the approaches they are currently taking.”

This sobering outcome suggests that, despite the attractiveness of the computer metaphor and the fact that brains do indeed process information and somehow represent the external world, we still need to make significant theoretical breakthroughs in order to make progress. Even if our brains were designed along logical lines, which they are not, our present conceptual and analytical tools would be completely inadequate for the task of explaining them. This does not mean that simulation projects are pointless – by modelling (or simulating) we can test hypotheses and, by linking the model with well-established systems that can be precisely manipulated, we can gain insight into how real brains function. This is an extremely powerful tool, but a degree of modesty is required when it comes to the claims that are made for such studies, and realism is needed with regard to the difficulties of drawing parallels between brains and artificial systems.


 
Current ‘reverse engineering’ techniques cannot deliver a proper understanding of an Atari console chip, let alone of a human brain. Photograph: Radharc Images/Alamy

Even something as apparently straightforward as working out the storage capacity of a brain falls apart when it is attempted. Such calculations are fraught with conceptual and practical difficulties. Brains are natural, evolved phenomena, not digital devices. Although it is often argued that particular functions are tightly localised in the brain, as they are in a machine, this certainty has been repeatedly challenged by new neuroanatomical discoveries of unsuspected connections between brain regions, or amazing examples of plasticity, in which people can function normally without bits of the brain that are supposedly devoted to particular behaviours.

In reality, the very structures of a brain and a computer are completely different. In 2006, Larry Abbott wrote an essay titled “Where are the switches on this thing?”, in which he explored the potential biophysical bases of that most elementary component of an electronic device – a switch. Although inhibitory synapses can change the flow of activity by rendering a downstream neuron unresponsive, such interactions are relatively rare in the brain.

A neuron is not like a binary switch that can be turned on or off, forming a wiring diagram. Instead, neurons respond in an analogue way, changing their activity in response to changes in stimulation. The nervous system alters its working by changes in the patterns of activation in networks of cells composed of large numbers of units; it is these networks that channel, shift and shunt activity. Unlike any device we have yet envisaged, the nodes of these networks are not stable points like transistors or valves, but sets of neurons – hundreds, thousands, tens of thousands strong – that can respond consistently as a network over time, even if the component cells show inconsistent behaviour.
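
The contrast between a switch and an analogue neuron can be made concrete with the standard textbook rate-model abstraction (a sketch, not a claim about real neurons): the switch is all-or-nothing, while the model neuron’s output varies smoothly with its input drive.

```python
import math

def switch(x, threshold=0.5):
    """A transistor-like element: output is all-or-nothing."""
    return 1.0 if x > threshold else 0.0

def rate_neuron(x, gain=4.0, threshold=0.5):
    """A toy rate-model neuron: output (think firing rate) varies
    smoothly and continuously with the input drive."""
    return 1.0 / (1.0 + math.exp(-gain * (x - threshold)))

for drive in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(f"drive={drive:.2f}  switch={switch(drive):.0f}  rate={rate_neuron(drive):.2f}")
```

The switch output jumps from 0 to 1 at the threshold; the rate neuron’s output shades gradually through every intermediate value, which is why “wiring diagram” intuitions built on binary components fit the brain so poorly.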

Understanding even the simplest of such networks is currently beyond our grasp. Eve Marder, a neuroscientist at Brandeis University, has spent much of her career trying to understand how a few dozen neurons in the lobster’s stomach produce a rhythmic grinding. Despite vast amounts of effort and ingenuity, we still cannot predict the effect of changing one component in this tiny network that is not even a simple brain.

This is the great problem we have to solve. On the one hand, brains are made of neurons and other cells, which interact together in networks, the activity of which is influenced not only by synaptic activity, but also by various factors such as neuromodulators. On the other hand, it is clear that brain function involves complex dynamic patterns of neuronal activity at a population level. Finding the link between these two levels of analysis will be a challenge for much of the rest of the century, I suspect. And the prospect of properly understanding what is happening in cases of mental illness is even further away.

Not all neuroscientists are pessimistic – some confidently claim that the application of new mathematical methods will enable us to understand the myriad interconnections in the human brain. Others – like myself – favour studying animals at the other end of the scale, focusing our attention on the tiny brains of worms or maggots and employing the well-established approach of seeking to understand how a simple system works and then applying those lessons to more complex cases. Many neuroscientists, if they think about the problem at all, simply consider that progress will inevitably be piecemeal and slow, because there is no grand unified theory of the brain lurking around the corner.

There are many alternative scenarios about how the future of our understanding of the brain could play out: perhaps the various computational projects will come good and theoreticians will crack the functioning of all brains, or the connectomes will reveal principles of brain function that are currently hidden from us. Or a theory will somehow pop out of the vast amounts of imaging data we are generating. Or we will slowly piece together a theory (or theories) out of a series of separate but satisfactory explanations. Or by focusing on simple neural network principles we will understand higher-level organisation. Or some radical new approach integrating physiology and biochemistry and anatomy will shed decisive light on what is going on. Or new comparative evolutionary studies will show how other animals are conscious and provide insight into the functioning of our own brains. Or unimagined new technology will change all our views by providing a radical new metaphor for the brain. Or our computer systems will provide us with alarming new insight by becoming conscious. Or a new framework will emerge from cybernetics, control theory, complexity and dynamical systems theory, semantics and semiotics. Or we will accept that there is no theory to be found because brains have no overall logic, just adequate explanations of each tiny part, and we will have to be satisfied with that.

Monday 24 February 2020

Why well-to-do Indians are fleeing the country and economists aren’t returning

The economic refugees of old have been replaced by well-placed people leaving (or staying away from) India’s unattractive political economy writes TN NINAN in The Print




Montek Singh Ahluwalia, in his non-memoir, Backstage: The Story behind India’s High Growth Years, recounts how he and his wife, Isher, decided to return to India from Washington 40 years ago, giving up attractive careers at the World Bank and International Monetary Fund (IMF). Montek joined the government as an economic adviser in the finance ministry, and Isher joined a think tank. They would have had modest salaries and below-par government housing, but they felt they were contributing to India’s development process. Along the way, they became the capital’s power couple, so life had its compensations.

Other economists too came back around the same time, some earlier, and some later: Manmohan Singh, Bimal Jalan, Vijay Kelkar, Shankar Acharya, Rakesh Mohan, and so on. They returned after studying at the best universities and working in plum jobs at international organisations. They and others like them became the leading makers (or influencers) of economic policy for the next three or four decades, rising like Montek to high offices and enjoying good reputations, plus of course the bungalows of Lutyens’ Delhi and social cachets that would not be available to them elsewhere.

The question that was posed earlier this week at the release of Montek’s book was: Why aren’t people like them coming back today, bag and baggage, to set down roots here in India? The ones who came more recently were clutching the green cards that gave them an escape hatch through which to return to green pastures: Arvind Panagariya, Raghuram Rajan, Arvind Subramanian, and other perfectly honourable gentlemen like them.

One answer is that India has always had economic refugees, and they went where they could find jobs (in West Asia and Singapore), or a better education that would underwrite good careers. Many have done brilliantly, heading global tech giants and winning Nobel prizes. But there is a darker side to the story. India is no longer the desperately poor country of the 1980s and 1990s, having risen a few years ago to lower-middle income status. It has ceased to be an economic prison like Cuba, and offers more career options with higher salaries, vastly superior cars and consumer goods, modern hospitals, new liberal arts colleges, and the simple freedom to travel without signing “P” forms and taking just eight dollars with you. Yet it seems to have become a less attractive country in which to live and work.

Businessmen, including some with recognisable names and faces, are becoming “overseas citizens”. They are investing more in other markets where life is simpler. Wealthy professionals with internationally marketable skills and degrees are also taking their money with them (prompting the finance minister in her Budget to introduce a tax on such money transfers). They may be fleeing tax terrorism, prodded by more limited economic opportunities than they had imagined, or simply keeping one foot in India and another overseas because public discourse here has acquired a nasty edge and who knows what’s coming next. Or perhaps it is just the air quality in our cities which is a deterrent. Whatever the reason, the economic refugees of old have been replaced by well-placed people leaving (or staying away from) India’s unattractive political economy. Diplomats from under-populated countries like Australia and Canada report a sudden increase in the number of Indians seeking to emigrate.

The other question is, should our economists look back with satisfaction, or in anger? To be sure, there were high points like the reforms of 1991, the years of rapid growth a decade ago, and transformation in sectors like telecom. But we should not have waited till 1991 to launch the reforms. As Montek writes, Rajiv Gandhi was warned by the IMF chief in early 1988 that a crisis was building up, but he did nothing. The telecom revolution here was not special to India; other countries too engineered dramatic improvements in tele-density. Nor were India’s years of rapid growth unique; emerging markets as a whole grew at 7.9 per cent in 2004-08. Forget China, today India is being bettered in trade by Bangladesh and Vietnam. And the Thai baht is worth Rs 2.25; it was half that in 1991.

Friday 21 February 2020

Economists should learn lessons from meteorologists

Weather forecasters make hypotheses and test them daily writes Tim Harford in The FT


The UK’s national weather service, the Met Office, is to get a £1.2bn computer to help with its forecasting activities. That is a lot of silicon. My instinctive response was: when do we economists get one? 


People may grumble about the weather forecast, but in many places we take its accuracy for granted. When we ask our phones about tomorrow’s weather, we act as though we are gazing through a window into the future. Nobody treats the latest forecasts from the Bank of England or the IMF as a window into anything. 

That is partly because politics gets in the way. On the issue of Brexit, for example, extreme forecasts from partisans attracted attention, while independent mainstream forecasters have proved to be pretty much on the money. Few people stopped to praise the economic bean-counters. 

Economists might also protest that nobody asks them to forecast economic activity tomorrow or even next week; they are asked to describe the prospects for the next year or so. True, some almanacs offer long-range weather forecasts based on methods that are secret, arcane, or both — but the professionals regard such attempts as laughable. 

Enough excuses; economists deserve few prizes for prediction. Prakash Loungani of the IMF has conducted several reviews of mainstream forecasts, finding them dismally likely to miss recessions. Economists are not very good at seeing into the future — to the extent that most argue forecasting is simply none of their business. The weather forecasters are good, and getting better all the time. Could we economists do as well with a couple of billion dollars’ worth of kit, or is something else lacking? 

The question seemed worth exploring to me, so I picked up Andrew Blum’s recent book, The Weather Machine, to understand what meteorologists actually do and how they do it. I realised quickly that a weather forecast is intimately connected to a map in a way that an economic forecast is not. 

Without wishing to oversimplify the remarkable science of meteorology, one part of the game is straightforward: if it’s raining to the west of you and the wind is blowing from the west, you can expect rain soon. Weather forecasts begin with weather observations: the more observations, the better. 

In the 1850s, the Smithsonian Institution in Washington DC used reports from telegraph operators to patch together local downpours into a national weather map. More than a century and a half later, economists still lack high-definition, high-frequency maps of the economic weather, although we are starting to see how they might be possible, tapping into data from satellites and digital payments. 

An example is an attempt — published in 2012 — by a large team of economists to build a simulation of the Washington DC housing market as a complex system. It seems a long way from a full understanding of the economy, but then the Smithsonian’s paper map was a long way from a proper weather forecast, too. 

Weather forecasters could argue that they have a better theory of atmospheric conditions than economists have of the economy. It was all sketched out in 1904 by the Norwegian physicist Vilhelm Bjerknes, who published “The problem of weather prediction”, an academic paper describing the circulation of masses of air. If you knew the density, pressure, temperature, humidity and the velocity of the air in three dimensions, and plugged the results into Bjerknes’s formulas, you would be on the way to a respectable weather forecast — if only you could solve those computationally demanding equations. The processing power to do so was to arrive many decades later. 
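For concreteness, the system Bjerknes identified is what meteorologists now call the primitive equations. A compact modern-textbook form (the notation here is today's standard one, not Bjerknes's own) is:

```latex
% Primitive equations, modern textbook form (not Bjerknes's original notation)
\begin{aligned}
\frac{D\mathbf{v}}{Dt} &= -\frac{1}{\rho}\nabla p
  - 2\,\boldsymbol{\Omega}\times\mathbf{v} + \mathbf{g} + \mathbf{F}
  && \text{momentum (3 scalar equations)}\\[4pt]
\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\mathbf{v}) &= 0
  && \text{conservation of mass}\\[4pt]
p &= \rho R T
  && \text{ideal gas law}\\[4pt]
c_p\,\frac{DT}{Dt} - \frac{1}{\rho}\,\frac{Dp}{Dt} &= Q
  && \text{thermodynamic energy}\\[4pt]
\frac{Dq}{Dt} &= S_q
  && \text{moisture (sources and sinks)}
\end{aligned}
```

Seven scalar equations in seven unknowns (the three velocity components plus density, pressure, temperature and humidity), which matches the list of observables in the paragraph above.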

The missing pieces, then: much better, more detailed and more frequent data. Better theory too, perhaps — although it is striking that many critiques of the economic mainstream seem to have little interest in high-resolution, high-frequency data. Instead, they propose replacing one broad theory with another broad theory: the latest one I have seen emphasises “the energy cost of energy”. I am not sure that is the path to progress. 

The weather forecasters have another advantage: a habit of relentless improvement in the face of frequent feedback. Every morning’s forecast is a hypothesis to be tested. Every evening that hypothesis has been confirmed or refuted. If the economy offered similar daily lessons, economists might be quicker to learn. All these elements are linked. If we had more detailed data we might formulate more detailed theories, building an economic map from the bottom up rather than from the top down. And if we had more frequent feedback, we could test theories more often, making economics more empirical and less ideological. 
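That daily feedback loop can be made concrete. A standard way forecasters grade themselves is the Brier score, the mean squared error between probability forecasts and 0/1 outcomes. The sketch below uses made-up numbers, not real forecasts, to show how a week of "rain tomorrow" calls gets scored:

```python
# Toy version of the forecasters' feedback loop: each morning's forecast
# is a probability, each evening's outcome grades it.

def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and
    observed 0/1 outcomes; lower is better (0 = perfect)."""
    if len(forecasts) != len(outcomes):
        raise ValueError("need exactly one outcome per forecast")
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A week of hypothetical 'rain tomorrow' probabilities, and what happened.
confident = [0.9, 0.8, 0.1, 0.95, 0.2, 0.85, 0.1]  # sharp, well-calibrated
hedging = [0.5] * 7                                 # always says 50-50
rained = [1, 1, 0, 1, 0, 1, 0]                      # 1 = it rained

print("confident:", brier_score(confident, rained))
print("hedging:  ", brier_score(hedging, rained))
```

Run daily over years, a falling score is the "relentless improvement" Harford describes; economists rarely get outcomes frequent enough to compute anything comparable.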

And yet — does anyone really want to spend a billion pounds on an economic simulation that will accurately predict the economic weather next week? Perhaps the limitations of economic forecasting reflect the limitations of the economics profession. Or perhaps the problem really is intractable.

Wednesday 19 February 2020

Capital and Ideology with Professor Thomas Piketty





The white swan harbingers of global economic crisis are already here

Seismic risks for the global system are growing, not least worsening US geopolitical rivalries, climate change and now the coronavirus outbreak writes Nouriel Roubini in The Guardian
 

 
A swan fighting with crows on a beach. Photograph: Kamila Koziol/Alamy Stock Photo


In my 2010 book, Crisis Economics, I defined financial crises not as the “black swan” events that Nassim Nicholas Taleb described in his eponymous bestseller but as “white swans”. According to Taleb, black swans are events that emerge unpredictably, like a tornado, from a fat-tailed statistical distribution. But I argued that financial crises, at least, are more like hurricanes: they are the predictable result of built-up economic and financial vulnerabilities and policy mistakes.

There are times when we should expect the system to reach a tipping point – the “Minsky Moment” – when a boom and a bubble turn into a crash and a bust. Such events are not about the “unknown unknowns” but rather the “known unknowns”.

Beyond the usual economic and policy risks that most financial analysts worry about, a number of potentially seismic white swans are visible on the horizon this year. Any of them could trigger severe economic, financial, political and geopolitical disturbances unlike anything since the 2008 crisis.

For starters, the US is locked in an escalating strategic rivalry with at least four implicitly aligned revisionist powers: China, Russia, Iran and North Korea. These countries all have an interest in challenging the US-led global order and 2020 could be a critical year for them, owing to the US presidential election and the potential change in US global policies that could follow.

Under Donald Trump, the US is trying to contain or even trigger regime change in these four countries through economic sanctions and other means. Similarly, the four revisionists want to undercut American hard and soft power abroad by destabilising the US from within through asymmetric warfare. If the US election descends into partisan rancour, chaos, disputed vote tallies and accusations of “rigged” elections, so much the better for rivals of the US. A breakdown of the US political system would weaken American power abroad.

Moreover, some countries have a particular interest in removing Trump. The acute threat that he poses to the Iranian regime gives it every reason to escalate the conflict with the US in the coming months – even if it means risking a full-scale war – on the chance that the ensuing spike in oil prices would crash the US stock market, trigger a recession, and sink Trump’s re-election prospects. Yes, the consensus view is that the targeted killing of Qassem Suleimani has deterred Iran but that argument misunderstands the regime’s perverse incentives. War between the US and Iran is likely this year; the current calm is the one before the proverbial storm.

As for US-China relations, the recent phase one deal is a temporary Band-Aid. The bilateral cold war over technology, data, investment, currency and finance is already escalating sharply. The Covid-19 outbreak has reinforced the position of those in the US arguing for containment and lent further momentum to the broader trend of Sino-American “decoupling”. More immediately, the epidemic is likely to be more severe than currently expected and the disruption to the Chinese economy will have spillover effects on global supply chains – including pharma inputs, of which China is a critical supplier – and business confidence, all of which will likely be more severe than financial markets’ current complacency suggests.

Although the Sino-American cold war is by definition a low-intensity conflict, a sharp escalation is likely this year. To some Chinese leaders, it cannot be a coincidence that their country is simultaneously experiencing a massive swine flu outbreak, severe bird flu, a coronavirus outbreak, political unrest in Hong Kong, the re-election of Taiwan’s pro-independence president, and stepped-up US naval operations in the East and South China Seas. Regardless of whether China has only itself to blame for some of these crises, the view in Beijing is veering toward the conspiratorial.

But open aggression is not really an option at this point, given the asymmetry of conventional power. China’s immediate response to US containment efforts will likely take the form of cyberwarfare. There are several obvious targets. Chinese hackers (and their Russian, North Korean, and Iranian counterparts) could interfere in the US election by flooding Americans with misinformation and deep fakes. With the US electorate already so polarised, it is not difficult to imagine armed partisans taking to the streets to challenge the results, leading to serious violence and chaos.

Revisionist powers could also attack the US and western financial systems – including the Society for Worldwide Interbank Financial Telecommunication (Swift) platform. Already, the European Central Bank president, Christine Lagarde, has warned that a cyber-attack on European financial markets could cost $645bn (£496.2bn). And security officials have expressed similar concerns about the US, where an even wider range of telecommunication infrastructure is potentially vulnerable.

By next year, the US-China conflict could have escalated from a cold war to a near hot one. A Chinese regime and economy severely damaged by the Covid-19 crisis and facing restless masses will need an external scapegoat, and will likely set its sights on Taiwan, Hong Kong, Vietnam and US naval positions in the East and South China Seas; confrontation could creep into escalating military accidents. It could also pursue the financial “nuclear option” of dumping its holdings of US Treasury bonds if escalation does take place. Because US assets comprise such a large share of China’s (and, to a lesser extent, Russia’s) foreign reserves, the Chinese are increasingly worried that such assets could be frozen through US sanctions (like those already used against Iran and North Korea).

Of course, dumping US Treasuries would impede China’s economic growth if dollar assets were sold and converted back into renminbi (which would appreciate). But China could diversify its reserves by converting them into another liquid asset that is less vulnerable to US primary or secondary sanctions, namely gold. Indeed, China and Russia have been stockpiling gold reserves (overtly and covertly), which explains the 30% spike in gold prices since early 2019.

In a sell-off scenario, the capital gains on gold would compensate for any loss incurred from dumping US Treasuries, whose yields would spike as their market price and value fell. So far, China and Russia’s shift into gold has occurred slowly, leaving Treasury yields unaffected. But if this diversification strategy accelerates, as is likely, it could trigger a shock in the US Treasuries market, possibly leading to a sharp economic slowdown in the US.

The US, of course, will not sit idly by while coming under asymmetric attack. It has already been increasing the pressure on these countries with sanctions and other forms of trade and financial warfare, not to mention its own world-beating cyberwarfare capabilities. US cyber-attacks against the four rivals will continue to intensify this year, raising the risk of the first-ever cyber world war and massive economic, financial and political disorder.

Looking beyond the risk of severe geopolitical escalations in 2020, there are additional medium-term risks associated with climate change, which could trigger costly environmental disasters. Climate change is not just a lumbering giant that will cause economic and financial havoc decades from now. It is a threat in the here and now, as demonstrated by the growing frequency and severity of extreme weather events. 

In addition to climate change, there is evidence that separate, deeper seismic events are under way, leading to rapid global movements in magnetic polarity and accelerating ocean currents. Any one of these developments could augur an environmental white swan event, as could climatic “tipping points” such as the collapse of major ice sheets in Antarctica or Greenland in the next few years. We already know that underwater volcanic activity is increasing; what if that trend translates into rapid marine acidification and the depletion of global fish stocks upon which billions of people rely?

As of early 2020, this is where we stand: the US and Iran have already had a military confrontation that will likely soon escalate; China is in the grip of a viral outbreak that could become a global pandemic; cyberwarfare is ongoing; major holders of US Treasuries are pursuing diversification strategies; the Democratic presidential primary is exposing rifts in the opposition to Trump and already casting doubt on vote-counting processes; rivalries between the US and four revisionist powers are escalating; and the real-world costs of climate change and other environmental trends are mounting.

This list is hardly exhaustive but it points to what one can reasonably expect for 2020. Financial markets, meanwhile, remain blissfully in denial of the risks, convinced that a calm if not happy year awaits major economies and global markets.

How could a UK points-based immigration system work? - BBC Newsnight


Thursday 13 February 2020

Why ‘winners’ pick Prashant Kishor

If Arvind Kejriwal was going to sweep the election anyway, why did he need Prashant Kishor, asks Shivam Vij in The Print.


Delhi CM Arvind Kejriwal and political strategist Prashant Kishor at the AAP office in New Delhi  




Political strategist Prashant Kishor picks winners, his critics say: he appoints himself as consultant to the party most likely to win an imminent election, and then takes credit for a victory he had nothing to do with.

But it gets curious if you flip the question. Why do winners need Prashant Kishor?

When Kishor starts working on an election, the critics say ‘What can he do?’ When the election is won, they say ‘What did he do?’

Why PK?

If Arvind Kejriwal was going to win Delhi 2020 anyway, why did he bring on Prashant Kishor? Why did he tweet announcing he was welcoming on board the Kishor-mentored Indian Political Action Committee?

Thanks to the anti-incumbency against Chandrababu Naidu, Jagan Mohan Reddy was going to become the chief minister of Andhra Pradesh in 2019 anyway, we are told. If it was so certain, why did Reddy go and get Prashant Kishor to design his entire campaign for a full two years?

Captain Amarinder Singh is a well-respected politician in Punjab. The Aam Aadmi Party did not have a face in the Punjab 2017 elections. It was Captain’s time. He won the election on his image. Which begs the question: why did Captain need Kishor?

For the 2015 Bihar assembly election, Nitish Kumar tied up with his bête noire Lalu Prasad Yadav. The caste combination was such that the coalition would have won anyway, some say. They had the Congress with them too. Sounds easy. But then why did Nitish Kumar need Prashant Kishor? And why did Nitish Kumar value Prashant Kishor so much that he later made him vice-president of his party?

Narendra Modi is the champion king of Indian politics. Prashant Kishor’s critics say he did not make much of a difference in Modi’s 282 seats in 2014. If this was the case, why was Modi wooing Kishor back in 2017?

Kishor’s critics admit that he has a tough battle ahead in West Bengal, where the BJP is putting everything at stake. But if Mamata Banerjee wins the Bengal election early next year, the same critics will say: Didi is a popular leader, the BJP had no face, she was going to win anyway. What did Kishor do?

The political commentators in Delhi feel the DMK is going to win the 2021 Tamil Nadu assembly election. It’s the DMK’s turn, and now that the father is no more, Stalin will be the CM. Fair enough. Why then has the DMK signed up Prashant Kishor? Are they foolish to give him attention and credit?

But over the next few months, we’ll be told by these very political commentators that Tamil Nadu is uncertain because of the Rajinikanth factor. If Stalin still wins, the same critics will say Rajinikanth was never a factor, and Kishor just landed up to take credit for Stalin’s pre-destined victory. 


After the fact

An election victory often looks like a foregone conclusion only after the fact. Just go back and check your own tweets and WhatsApp messages over the last two weeks. Many of you were wondering if the BJP’s Hindutva push could defeat the AAP.

The fact is, the AAP was down and out after three terrible election defeats: Punjab and MCD in 2017, and the Lok Sabha in 2019. They needed Kishor because they were, in fact, not certain of winning the 2020 Delhi election.

In Punjab, the Aam Aadmi Party was at one point in time said to be winning 100 of 117 seats. Captain Amarinder Singh desperately wanted Kishor because it was his last chance to be chief minister and he didn’t want to lose it. Many senior pundits and analysts felt that the AAP was winning Punjab, right till the results came out. Once the results were out, they said Captain had to win anyway.

Captain had lost two consecutive elections — one as an incumbent and one as a challenger. This performance saw him booted out as Punjab Congress chief. The main problem with the Punjab Congress was factionalism. Kishor did many things in terms of strategy, branding and communication, but the most important was that he went around managing each faction to make sure they let Amarinder win this time. By contrast, we have just seen how the Congress couldn’t decide between Ashok Tanwar and Bhupinder Singh Hooda till the very end in Haryana, and thus lost a winnable election.

Jagan Mohan Reddy was so down and out in Andhra in 2014 that he seemed to be over. The Telugu Desam Party poached a third of his party’s MPs and MLAs. Reddy’s image was of a corrupt, feudal, arrogant dynast. His victory was far from certain.

In Bihar, Nitish Kumar’s stock in 2015 was quite low. He had suffered a crushing defeat in the Lok Sabha at the hands of the BJP, and Amit Shah was expanding the party into new territories like a conqueror. Yes, Nitish Kumar did tie up with Lalu Yadav, but the critics said the alliance wouldn’t last. The BJP was banking on their fighting over seat-sharing and other matters, and the alliance breaking up even before the election. Kishor made sure that didn’t happen. He made himself the common channel of communication between the two leaders to ensure there was no disharmony. (Fun fact: Kishor was against the Nitish-Lalu alliance. He insisted he could make Nitish win on his own but Nitish didn’t have the risk appetite for that.)

When the patient doesn’t take the medicine


While the critics say the political consultant chosen by winners gets no credit for the victories, the consultant gets all the blame for the losses. Hence, they say that Kishor could not make the Congress party improve its prospects in Uttar Pradesh in 2017.

In UP in 2016, Kishor achieved the Herculean task of making Rahul Gandhi travel the state for a consistent campaign on farmers’ issues without a single day’s holiday. But that was step one — or just the first “module”, as these consultant types say. There were many other things lined up non-stop to build momentum. The key was to declare Priyanka Gandhi as the chief ministerial candidate. The Congress party had agreed to all these proposals, and then, as only the Congress can, went back on them. All the plans were laid to waste.

It was a case of the patient not taking the medicine for the full course and then blaming the doctor.

Kishor’s mistake was that he didn’t part ways with the Congress there and then. The critics do have a point about what came thereafter: he became overconfident he could make an SP-Congress alliance win the state.

Malice and misunderstanding

Some of the dismissal of Kishor comes from malice: the durbaris around top politicians don’t want to lose their jobs to an American-style consultant. This was certainly the case with Congress.

Kishor is India’s first western-style political consultant. And the first man through the door often gets shot. There are others, but the political system doesn’t want them to be in the limelight, taking credit. The system wants to treat consultants as “vendors”. That is bound to change, sooner or later.

Some of the criticism of Kishor comes from a lack of understanding of this beast called modern political campaigning. What exactly is it that Prashant Kishor does? We’ll have to ask Mamata Banerjee, Jagan Mohan Reddy, Nitish Kumar, Captain Amarinder Singh, MK Stalin or Narendra Modi.

Wednesday 12 February 2020

Modi designed Kejriwal’s template for Delhi win years ago in Gujarat

The amazing thing about AAP is not that it fell back on conventional wisdom, but how quickly and eagerly it embraced the rules that it set out to change, writes Yogendra Yadav in The Print



Aam Aadmi Party supporters celebrate AAP's win in the capital 

History repeats itself, first as tragedy, then as farce. This is one of the oft-quoted statements from Karl Marx, which alerts us that the re-occurrence of an event carries very different meanings in history. The Aam Aadmi Party’s repetition of its grand victory in the 2015 Delhi election is neither a tragedy nor a farce. In many ways, it does more to alter the equations of national politics. But it is no longer the victory that could change the established models of governance or the ways of Indian politics.

Judged by the craft of the electoral battlefield, this is undoubtedly a memorable victory, bigger than the previous one. Coming at the end of a full term marred by a hostile central government under Prime Minister Narendra Modi, such an electoral victory is rare and calls for compliments. Repeating the unmatched scale of victory — nearly 54 per cent votes and about 90 per cent seats — in the wake of a washout in the 2019 Lok Sabha election, a central government determined to deny the AAP another term, one of the most aggressive and vicious campaigns by the Bharatiya Janata Party (BJP), and a diffident Election Commission makes it even more historic.

Add to it the special sociology of voting. India Today’s exit poll, which provides a social break-up of votes, confirms that the AAP actually consolidated its vote share among women and poor voters. It seems that the AAP lost 4-5 per cent of its votes to the BJP but made up for it with gains from the Congress. In terms of education and class, the correlation is straightforward: the poorer and less educated the voter, the greater the AAP’s lead over the BJP. That suggests an enduring alignment of voters that is here to stay. Arvind Kejriwal must be complimented for holding his nerve during this campaign and guiding his team to this success.

While the AAP’s victory in 2015 was a one-off exception that did not alter the national equations, the 2020 election result brings good news for the entire country. Delhi is now the ninth successive assembly election since 2018 (after Karnataka, Rajasthan, Madhya Pradesh, Chhattisgarh, Odisha, Haryana, Maharashtra and Jharkhand) where the BJP failed to win despite being a serious contender (excluding Telangana, Andhra Pradesh and Mizoram, where it was not). This may not be an indicator of a decline and eventual fall of Narendra Modi from the national centre stage. Nation-wide opinion polls attest to the continuing popularity of Modi. Opinion polls in the run-up to the Delhi election had shown that most AAP voters prefer Modi as the national leader and the BJP as the party of their choice for the Lok Sabha. Yet another defeat in state assembly elections would puncture the narrative of the BJP’s rising tide. It would also mean stronger federal resistance to the Centre’s attempts to ride roughshod over states. 

Cause for relief

This defeat of the BJP carries a bigger message. The BJP’s election campaign in Delhi was a new low in India’s electoral history. From national leaders to local minions, this was full-throttle communal polarisation. Short of officially calling for Hindu-Muslim riots, the BJP leadership did everything that it could — branding its opponents as terrorists, anti-national, Pakistanis and whatnot — as the Election Commission made polite noises. Had this model succeeded, it would have become a national template — incite-hatred-win-elections — with ethnic, caste and regional variants. Its defeat may not put an end to the polarisation strategy. The BJP may well read the increase in its vote share as an indicator of the success of polarisation. And the party is bound to try this in West Bengal and Uttar Pradesh. But this result will surely sow seeds of doubt in the minds of those who argue for this. That is a cause for relief.

Yet, it would be misleading to compare this victory of the AAP with its path-breaking electoral debut in 2013 and 2015. At the time of inception, the AAP promised nothing short of a new model of governance, even if the contours of that model were yet to be worked out. Its ideology of swaraj promised a new vision for India, breaking free of ideological rigidities of the past. Above all, it promised a new kind of politics that would challenge the established rules of the game.

This second victory is not a realisation of that promise. Instead, it confirms that this new player has learned the rules of the game better than the older players, and proven that you don’t need a new model of governance or vision to succeed in India’s politics.

Far from inaugurating a new model of governance, the AAP has replicated, more successfully than others, what is by now a bog-standard template of re-election. The template was designed by Narendra Modi himself in his second and third assembly elections in Gujarat, then replicated and refined by chief ministers like Shivraj Singh Chouhan, Raman Singh, Nitish Kumar and Naveen Patnaik. This template of re-election for an incumbent government comprises three elements: assured delivery of select welfare measures that directly reach the people, high-decibel publicity of these measures and the leader’s personality to amplify these policies, and a strong election machine to convert these into votes.

AAP’s template

Arvind Kejriwal used this template better than those who designed it. Free or cheap electricity did provide real relief to the poor and lower middle classes. Education may not have improved, but school infrastructure did. Mohalla clinics were mostly a start-up, but these did hold out a promise of accessible health services. These tangible gains were amplified through very simple and powerful communication, both official advertisements and party political publicity.

As a result, it became an article of faith that the Delhi government was about education plus health. Everyone forgot about corruption, employment, pollution, transport and liquor. Arvind Kejriwal managed his image very deftly where it mattered most — among ordinary voters — without bothering much about the opinion-making classes. He too discovered that the public has a very short memory. All this was converted into votes through a powerful and well-oiled election machine, with some assistance from Prashant Kishor. This is not to take away from the brilliance and perseverance of the AAP leadership in executing and improvising on the template. It is just useful to remember that this is not a new model.

The same is true of the AAP’s political strategy. Far from rewriting the rules, the party has reaffirmed the existing rules. One, you cannot do politics without mobilising political entrepreneurs who are agnostic to political principles. Two, vision and principles are for the chattering classes, you don’t need to bother about these much. Three, a political party is all about winning elections, which is a necessary and sufficient test of political success. Four, a political party cannot work without a ‘high command’ that follows a single leader. The amazing thing about the AAP is not that it fell back on this conventional wisdom, but how quickly and eagerly it embraced the rules that it set out to change.

Many of these learnings paid off in the 2020 Delhi election. The party could award a ticket to every winnable candidate without any moral or ideological hindrance. Its ideological flexibility allowed the AAP to quickly adjust to the Right-ward shift of the political spectrum. From welcoming the dilution of Article 370 and the abolition of the state of Jammu and Kashmir to welcoming the Supreme Court verdict on Ayodhya, the party quickly shifted to the middle-Right. It managed, brilliantly, to remain ambiguous on the CAA and Shaheen Bagh throughout its campaign. Finally, it could limit the contest to the local issues of Delhi and paint itself as the only alternative at that level.

And this is the real irony: the party that was formed to break the tyranny of TINA (there is no alternative) won because there was no alternative to it.

So, the question is not whether these strategies work in elections. The AAP has shown that they do. The question we need to ask now is whether these can help us fight the larger battle to reclaim the republic.

Major Gaurav Arya First Candid Q&A on Defensive Offence.


Sunday 9 February 2020

Love as a drug: can romance be medically prescribed?

Andrew Anthony in The Guardian

For some time, it has been widespread medical practice to treat a range of psychological conditions, including depression and anxiety, with what might be called mind-altering drugs, namely selective serotonin reuptake inhibitors (SSRIs), which, as the name suggests, affect levels of serotonin in the brain. But there’s one mental category that isn’t considered appropriate for any kind of biomedical intervention. It’s arguably the most talked about of all human states, the cause of much of our finest art, literature and music, and it is celebrated or, depending on your view, commercially exploited once again on Friday: love.

It may be a many splendoured thing, but love is a condition for which there is famously no cure. All you need is love, as the song said, but money can’t buy you it. It’s viewed as an emotional ideal and yet the source of untold pain and suffering. Ask any 10 people what love is and you’re sure to get 10 different answers. Unsurprisingly, given that it is the stuff of romance, we tend to romanticise it. Millions of words have been spilled in trying to describe the feeling, but not many have been devoted to the biochemical processes that lie behind it.

In their new book, Love Is the Drug, Oxford ethicists Brian Earp and Julian Savulescu point out that this neglected aspect of love is just as important as its social or psychological structures. Intuitively, perhaps, we’ve always known this. After all, how do we explain the lack of interest felt on a new date? “There was no chemistry.”

Yet while we have largely come to accept that drugs that affect the brain have a part to play in treating psychological illnesses, the idea that the same approach could apply to love goes against the grain. We think of love as natural and healthy and therefore not something that is in need of what Earp and Savulescu delicately call “biomedical enhancement”.

The authors, however, argue that it’s time to change our attitudes and explore the possibilities offered by breakthroughs in biomedicine and neuroscience. “If it becomes possible to safely target the underlying neurochemistry that supports romantic attachment, using drugs or other brain-level technologies,” they write, “then there is reason to think this could help some people who really need it.”

They go further and suggest that such drugs have already been partially tested, have been used by huge numbers of people around the world, and should urgently become the subject of controlled research. The problem is the drugs they’re talking about are illegal psychoactive substances such as psilocybin and, in particular, methylenedioxymethamphetamine (MDMA), the active ingredient in the rave drug ecstasy.

They cite studies that show positive results for the use of MDMA in counselling those suffering from post-traumatic stress disorder (PTSD) and speculate that similar outcomes might be expected for couples whose relationships have hit the rocks.

But isn’t that a bit of an inductive stretch? What does the effect of, say, fighting in Iraq have to do with failing romances? Earp points out that there is already a small study showing how couples in which one partner has PTSD have benefited from the regulated use of MDMA. The way the drug is thought to work on PTSD sufferers, he says, is by breaking down the defence mechanisms that prevent their being able to open up.

“Our point is that trauma falls on a spectrum and relationships themselves can be traumatic,” he explains. “What causes a lot of relationships to break down over time is traumatic or semi-traumatic events that take place either inside or outside the relationship. People start to close down and stop sharing with their partners. Insofar as love requires a certain kind of intimacy, the defence mechanisms and the kneejerk fear responses that we build up around talking about certain issues with our partners are the very things that this drug directly enables us to overcome.”

As may be gathered from that response, Earp is not interested in bringing biomedical enhancement to first dates, for reasons of what he terms “authenticity”. He wants to focus on those who have already passed that initial chemistry test and whose love has subsequently become worn and torn by the everyday rigours of life.

“If you take a drug that all of a sudden makes you feel much closer to someone than you did five minutes ago, there’s a risk that it’s the drug doing the work rather than some sort of established compatibility between you and the other person,” he says. “I think it was Timothy Leary who coined the term ‘instant marriage syndrome’, where people would meet someone at a dance and think, ‘Ooh, I’ve met my soulmate’ and they’d go and get married and as the drug wore off, and they got to know each other better, they found they didn’t actually have good compatibility.”

Of course MDMA is best known in this country for its starring role in the so-called second summer of love in 1988, when a generation of rave-goers discovered ecstasy, got “loved up” and shared the mass euphoria of dancing all night in an urban warehouse or field. The social idealism glimpsed at the beginning of that social movement soon spiralled into hedonistic excess, and it wasn’t long before stories of teenage deaths related to taking the drug ruined the utopian dream.

Though largely unheard of in the UK before that summer, MDMA had already been technically illegal there for more than 10 years under umbrella legislation covering phenethylamines. In the US, it was not made illegal until 1985. Earp and Savulescu are not now calling for its wholesale legalisation. They acknowledge its potential dangers, particularly if taken in the wrong situation with inadequate support, and argue that it should only be available in a therapeutic setting, under the guidance of a professional.


Kristin Kreuk and Adam Sinclair in Ecstasy, an adaptation of Irvine Welsh’s 1996 story The Undefeated, set amid ecstasy users in the rave scene. Photograph: Intandem Films/Allstar

Until 1985, as Love Is the Drug reminds us, MDMA had been used by many relationship counsellors in the US. In 1998, psychiatrists George Greer and Requa Tolbert wrote in the Journal of Psychoactive Drugs of their experience of conducting MDMA-enhanced therapeutic sessions with about 80 clients in the first half of the 1980s.

These clients had to give their informed consent and were selected after a pre-screening process. Then Greer and Tolbert would meet the clients in their homes, where they would administer a pure dose of between 75mg and 150mg of MDMA, with a 50mg booster if requested later on (the street drug in the UK is said to contain upwards of 150mg, and occasionally as much as 300mg). According to Greer and Tolbert, 90% of their clients benefited from MDMA-assisted psychotherapy, with “some”, as Earp and Savulescu write, “reporting that they felt more love toward their partners and were better able to move beyond past pains and pointless grudges”.

A cynic might say, what’s left of love after that? But a more serious point is how to distinguish the relationships that are worth saving or enhancing from those that are fundamentally dysfunctional, when there might be a danger that the temporary high could help disguise the dysfunction.

Earp and Savulescu are careful not to be too prescriptive in their definitions of love, allowing that it’s pretty much whatever those who declare possession of it say it is. Equally, Earp is on guard against external paternalistic judgments of other people’s relationships. His belief is that there is a monogamy/promiscuity spectrum along which we all fall and that no position on it is more “natural” than any other. So one-size-fits-all classifications are destined to miss the mark.

“I think it would be a mistake to say everyone should be lifelong monogamists, no matter what, and we’re going to enforce that through the criminal code,” he says. “But it would also be a mistake to say that we’re all just bonobos and monogamy is a thing of the past and we should have as many sexual partners as we can find. In the world of meaning, subjective experience and how we relate to each other, there’s a lot of room for diverse interpretations of what’s valuable.”

History has a bad track record of deciding what the “right” relationship is, says Earp, noting that it was only very recently that homosexual love was brought within the fold of acceptability. But there is one objective criterion to which the pair do hold firm. “When it comes to violent abuse, we’ve drawn a pretty strong line in the sand collectively as a society,” he says. “That is a very strong signal that it’s objectively a bad relationship.”

The book makes several bold claims that seem the product of marketing needs rather than hardcore scientific fact. For example, it states that the “biological underpinnings of romantic love are being revealed” and that the prospect of real love drugs is upon us. But there remains a great deal of debate, not to say confusion, about the workings of even such a fundamental biological constituent as the hormone testosterone and its role in the libido. And as you might expect from professional ethicists, the book is at its most impressive when considering the moral, social and pragmatic issues raised by scientific development, rather than the details of the development itself.

If and when the aforementioned biological underpinnings are revealed, and we are able to regulate emotions and behaviour through biomedical supplements, does that suggest we will become somehow less autonomous and, consequently, more like a programmable machine?

“There are lots of ways we take steps to try to shape ourselves and our self-narratives,” says Earp. “There are ones that we’re comfortable with because they don’t seem to involve the brain and we’re a little bit scared of interacting with the brain directly.”

But the fact is, he says, even words can affect our brains. He cites the example of the Oedipus myth. One moment he’s happily having sex with Jocasta, feeling love towards her, the next he discovers that she’s his mother. “He hasn’t taken any drugs but you can bet that all of a sudden his testosterone levels will plummet and his libido will drop.”

Neurochemistry is changing all the time, says Earp, and one way that can happen is by the direct administration of drugs, which have their own benefits and risks.

“We just need to identify those cases where intervening with drugs or psychology or changing our social circumstances will be likely to improve authenticity or autonomy rather than detract from it.”

He speaks with such reasoned composure on the subject that it comes as a surprise to learn that he has never taken MDMA himself.

“I’ve been interested in that experience but I haven’t had the opportunity to go forward with that because it remains unjustly and inappropriately prohibited,” he says.

The solution, he insists, is open research. In the meantime, we’ll just have to continue fumbling away in the dark, breaking up and making up, trying to understand not just ourselves but the other person – at least until the love drug arrives.
Microdosing: the perfect prescription?

In praise of ecstasy
Small studies have found that doses of MDMA can have beneficial effects for ex-military and first-responder PTSD sufferers; however, treatment takes place in controlled environments assisted by psychotherapy. There is no good evidence that recreational microdosing is effective or advisable.

Pot potential
Quality research on the effects of microdosing cannabinoids – THC and CBD – is nascent. A 2017 study found that very low doses of THC reduce stress, yet higher doses increase anxiety. In other studies, CBD has shown potential in the treatment of insomnia and a range of anxiety disorders.

Spore lore
In a recent episode of Netflix’s The Goop Lab, employees of Gwyneth Paltrow’s “wellness” company decamped to Jamaica to microdose with magic mushrooms in order to solve various emotional or trauma issues. Although many Silicon Valley types are advocates, there is little high-quality evidence that this is effective.
