
Showing posts with label consciousness.

Thursday, 27 February 2020

Why your brain is not a computer

For decades it has been the dominant metaphor in neuroscience. But could this idea have been leading us astray all along? By Matthew Cobb in The Guardian 

We are living through one of the greatest of scientific endeavours – the attempt to understand the most complex object in the universe, the brain. Scientists are accumulating vast amounts of data about structure and function in a huge array of brains, from the tiniest to our own. Tens of thousands of researchers are devoting massive amounts of time and energy to thinking about what brains do, and astonishing new technology is enabling us to both describe and manipulate that activity.

We can now make a mouse remember something about a smell it has never encountered, turn a bad mouse memory into a good one, and even use a surge of electricity to change how people perceive faces. We are drawing up increasingly detailed and complex functional maps of the brain, human and otherwise. In some species, we can change the brain’s very structure at will, altering the animal’s behaviour as a result. Some of the most profound consequences of our growing mastery can be seen in our ability to enable a paralysed person to control a robotic arm with the power of their mind.

Every day, we hear about new discoveries that shed light on how brains work, along with the promise – or threat – of new technology that will enable us to do such far-fetched things as read minds, or detect criminals, or even be uploaded into a computer. Books are repeatedly produced that each claim to explain the brain in different ways.

And yet there is a growing conviction among some neuroscientists that our future path is not clear. It is hard to see where we should be going, apart from simply collecting more data or counting on the latest exciting experimental approach. As the German neuroscientist Olaf Sporns has put it: “Neuroscience still largely lacks organising principles or a theoretical framework for converting brain data into fundamental knowledge and understanding.” Despite the vast number of facts being accumulated, our understanding of the brain appears to be approaching an impasse.

In 2017, the French neuroscientist Yves Frégnac focused on the current fashion of collecting massive amounts of data in expensive, large-scale projects and argued that the tsunami of data they are producing is leading to major bottlenecks in progress, partly because, as he put it pithily, “big data is not knowledge”.

“Only 20 to 30 years ago, neuroanatomical and neurophysiological information was relatively scarce, while understanding mind-related processes seemed within reach,” Frégnac wrote. “Nowadays, we are drowning in a flood of information. Paradoxically, all sense of global understanding is in acute danger of getting washed away. Each overcoming of technological barriers opens a Pandora’s box by revealing hidden variables, mechanisms and nonlinearities, adding new levels of complexity.”

The neuroscientists Anne Churchland and Larry Abbott have also emphasised our difficulties in interpreting the massive amount of data that is being produced by laboratories all over the world: “Obtaining deep understanding from this onslaught will require, in addition to the skilful and creative application of experimental technologies, substantial advances in data analysis methods and intense application of theoretic concepts and models.”

There are indeed theoretical approaches to brain function, including to the most mysterious thing the human brain can do – produce consciousness. But none of these frameworks are widely accepted, for none has yet passed the decisive test of experimental investigation. It is possible that repeated calls for more theory may be a pious hope. It can be argued that there is no possible single theory of brain function, not even in a worm, because a brain is not a single thing. (Scientists even find it difficult to come up with a precise definition of what a brain is.)

As observed by Francis Crick, the co-discoverer of the DNA double helix, the brain is an integrated, evolved structure with different bits of it appearing at different moments in evolution and adapted to solve different problems. Our current comprehension of how it all works is extremely partial – for example, most neuroscience sensory research has been focused on sight, not smell; smell is conceptually and technically more challenging. But olfaction and vision work differently, both computationally and structurally. By focusing on vision, we have developed a very limited understanding of what the brain does and how it does it.

The nature of the brain – simultaneously integrated and composite – may mean that our future understanding will inevitably be fragmented and composed of different explanations for different parts. Churchland and Abbott spelled out the implication: “Global understanding, when it comes, will likely take the form of highly diverse panels loosely stitched together into a patchwork quilt.”

For more than half a century, all those highly diverse panels of patchwork we have been working on have been framed by thinking that brain processes involve something like those carried out in a computer. But that does not mean this metaphor will continue to be useful in the future. At the very beginning of the digital age, in 1951, the pioneer neuroscientist Karl Lashley argued against the use of any machine-based metaphor.

“Descartes was impressed by the hydraulic figures in the royal gardens, and developed a hydraulic theory of the action of the brain,” Lashley wrote. “We have since had telephone theories, electrical field theories and now theories based on computing machines and automatic rudders. I suggest we are more likely to find out about how the brain works by studying the brain itself, and the phenomena of behaviour, than by indulging in far-fetched physical analogies.”

This dismissal of metaphor has recently been taken even further by the French neuroscientist Romain Brette, who has challenged the most fundamental metaphor of brain function: coding. Since its inception in the 1920s, the idea of a neural code has come to dominate neuroscientific thinking – more than 11,000 papers on the topic have been published in the past 10 years. Brette’s fundamental criticism was that, in thinking about “code”, researchers inadvertently drift from a technical sense, in which there is a link between a stimulus and the activity of the neuron, to a representational sense, according to which neuronal codes represent that stimulus.

The unstated implication in most descriptions of neural coding is that the activity of neural networks is presented to an ideal observer or reader within the brain, often described as “downstream structures” that have access to the optimal way to decode the signals. But the ways in which such structures actually process those signals is unknown, and is rarely explicitly hypothesised, even in simple models of neural network function.


 
MRI scan of a brain. Photograph: Getty/iStockphoto

The processing of neural codes is generally seen as a series of linear steps – like a line of dominoes falling one after another. The brain, however, consists of highly complex neural networks that are interconnected, and which are linked to the outside world to effect action. Focusing on sets of sensory and processing neurons without linking these networks to the behaviour of the animal misses the point of all that processing.

By viewing the brain as a computer that passively responds to inputs and processes data, we forget that it is an active organ, part of a body that is intervening in the world, and which has an evolutionary past that has shaped its structure and function. This view of the brain has been outlined by the Hungarian neuroscientist György Buzsáki in his recent book The Brain from Inside Out. According to Buzsáki, the brain is not simply passively absorbing stimuli and representing them through a neural code, but rather is actively searching through alternative possibilities to test various options. His conclusion – following scientists going back to the 19th century – is that the brain does not represent information: it constructs it.

The metaphors of neuroscience – computers, coding, wiring diagrams and so on – are inevitably partial. That is the nature of metaphors, which have been intensely studied by philosophers of science and by scientists, as they seem to be so central to the way scientists think. But metaphors are also rich and allow insight and discovery. There will come a point when the understanding they allow will be outweighed by the limits they impose, but in the case of computational and representational metaphors of the brain, there is no agreement that such a moment has arrived. From a historical point of view, the very fact that this debate is taking place suggests that we may indeed be approaching the end of the computational metaphor. What is not clear, however, is what would replace it.

Scientists often get excited when they realise how their views have been shaped by the use of metaphor, and grasp that new analogies could alter how they understand their work, or even enable them to devise new experiments. Coming up with those new metaphors is challenging – most of those used in the past with regard to the brain have been related to new kinds of technology. This could imply that the appearance of new and insightful metaphors for the brain and how it functions hinges on future technological breakthroughs, on a par with hydraulic power, the telephone exchange or the computer. There is no sign of such a development; despite the latest buzzwords that zip about – blockchain, quantum supremacy (or quantum anything), nanotech and so on – it is unlikely that these fields will transform either technology or our view of what brains do.

One sign that our metaphors may be losing their explanatory power is the widespread assumption that much of what nervous systems do, from simple systems right up to the appearance of consciousness in humans, can only be explained as emergent properties – things that you cannot predict from an analysis of the components, but which emerge as the system functions.

In 1981, the British psychologist Richard Gregory argued that the reliance on emergence as a way of explaining brain function indicated a problem with the theoretical framework: “The appearance of ‘emergence’ may well be a sign that a more general (or at least different) conceptual scheme is needed … It is the role of good theories to remove the appearance of emergence. (So explanations in terms of emergence are bogus.)”

This overlooks the fact that there are different kinds of emergence: weak and strong. Weak emergent features, such as the movement of a shoal of tiny fish in response to a shark, can be understood in terms of the rules that govern the behaviour of their component parts. In such cases, apparently mysterious group behaviours are based on the behaviour of individuals, each of which is responding to factors such as the movement of a neighbour, or external stimuli such as the approach of a predator.
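To make the “weak” case concrete, here is a minimal, purely illustrative simulation (not from the article) in which each fish follows only two local rules – match the heading of nearby neighbours, and swim away from a predator that gets too close. The names and parameters are invented for the sketch.

```python
# Purely illustrative sketch of weak emergence: a shoal "dodging a shark" emerges
# from each fish obeying two local rules. Nothing here comes from the article; the
# parameters and names are invented for the example.
import numpy as np

rng = np.random.default_rng(0)
n_fish = 200
pos = rng.uniform(0, 100, size=(n_fish, 2))   # fish positions in a 100 x 100 arena
vel = rng.normal(0, 1, size=(n_fish, 2))      # initial headings
shark = np.array([50.0, 50.0])                # the predator

def step(pos, vel, shark, neighbour_radius=10.0, flee_radius=20.0):
    new_vel = vel.copy()
    for i in range(len(pos)):
        # Rule 1: align with the average heading of nearby neighbours.
        dist = np.linalg.norm(pos - pos[i], axis=1)
        neighbours = (dist > 0) & (dist < neighbour_radius)
        if neighbours.any():
            new_vel[i] += 0.1 * (vel[neighbours].mean(axis=0) - vel[i])
        # Rule 2: steer away from the shark if it is close.
        away = pos[i] - shark
        d = np.linalg.norm(away)
        if d < flee_radius:
            new_vel[i] += 0.5 * away / (d + 1e-9)
    # Cap the speed and move every fish one step.
    speed = np.linalg.norm(new_vel, axis=1, keepdims=True)
    new_vel = new_vel / np.maximum(speed, 1e-9) * np.minimum(speed, 2.0)
    return pos + new_vel, new_vel

for _ in range(100):
    pos, vel = step(pos, vel, shark)

# The coordinated retreat of the whole shoal is a group-level pattern that can be
# traced back, line by line, to the two individual-level rules above.
print("mean distance from shark:", np.linalg.norm(pos - shark, axis=1).mean())
```

Because the group’s behaviour can be derived entirely from the rules governing the individuals, it is weakly emergent.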

This kind of weak emergence cannot explain the activity of even the simplest nervous systems, never mind the working of your brain, so we fall back on strong emergence, where the phenomenon that emerges cannot be explained by the activity of the individual components. You and the page you are reading this on are both made of atoms, but your ability to read and understand comes from features that emerge through atoms in your body forming higher-level structures, such as neurons and their patterns of firing – not simply from atoms interacting.

Strong emergence has recently been criticised by some neuroscientists as risking “metaphysical implausibility”, because there is no evident causal mechanism, nor any single explanation, of how emergence occurs. Like Gregory, these critics claim that the reliance on emergence to explain complex phenomena suggests that neuroscience is at a key historical juncture, similar to that which saw the slow transformation of alchemy into chemistry. But faced with the mysteries of neuroscience, emergence is often our only resort. And it is not so daft – the amazing properties of deep-learning programmes, which at root cannot be explained by the people who design them, are essentially emergent properties.

Interestingly, while some neuroscientists are discombobulated by the metaphysics of emergence, researchers in artificial intelligence revel in the idea, believing that the sheer complexity of modern computers, or of their interconnectedness through the internet, will lead to what is dramatically known as the singularity. Machines will become conscious.

There are plenty of fictional explorations of this possibility (in which things often end badly for all concerned), and the subject certainly excites the public’s imagination, but there is no reason, beyond our ignorance of how consciousness works, to suppose that it will happen in the near future. In principle, it must be possible, because the working hypothesis is that mind is a product of matter, which we should therefore be able to mimic in a device. But the scale of complexity of even the simplest brains dwarfs any machine we can currently envisage. For decades – centuries – to come, the singularity will be the stuff of science fiction, not science.

A related view of the nature of consciousness turns the brain-as-computer metaphor into a strict analogy. Some researchers view the mind as a kind of operating system that is implemented on neural hardware, with the implication that our minds, seen as a particular computational state, could be uploaded on to some device or into another brain. In the way this is generally presented, this is wrong, or at best hopelessly naive.

The materialist working hypothesis is that brains and minds, in humans and maggots and everything else, are identical. Neurons and the processes they support – including consciousness – are the same thing. In a computer, software and hardware are separate; however, our brains and our minds consist of what can best be described as wetware, in which what is happening and where it is happening are completely intertwined.

Imagining that we can repurpose our nervous system to run different programmes, or upload our mind to a server, might sound scientific, but lurking behind this idea is a non-materialist view going back to Descartes and beyond. It implies that our minds are somehow floating about in our brains, and could be transferred into a different head or replaced by another mind. It would be possible to give this idea a veneer of scientific respectability by posing it in terms of reading the state of a set of neurons and writing that to a new substrate, organic or artificial.

But to even begin to imagine how that might work in practice, we would need both an understanding of neuronal function that is far beyond anything we can currently envisage, and would require unimaginably vast computational power and a simulation that precisely mimicked the structure of the brain in question. For this to be possible even in principle, we would first need to be able to fully model the activity of a nervous system capable of holding a single state, never mind a thought. We are so far away from taking this first step that the possibility of uploading your mind can be dismissed as a fantasy, at least until the far future.

For the moment, the brain-as-computer metaphor retains its dominance, although there is disagreement about how strong a metaphor it is. In 2015, the roboticist Rodney Brooks chose the computational metaphor of the brain as his pet hate in his contribution to a collection of essays entitled This Idea Must Die. Less dramatically, but drawing similar conclusions, two decades earlier the historian S Ryan Johansson argued that “endlessly debating the truth or falsity of a metaphor like ‘the brain is a computer’ is a waste of time. The relationship proposed is metaphorical, and it is ordering us to do something, not trying to tell us the truth.”

On the other hand, the US expert in artificial intelligence, Gary Marcus, has made a robust defence of the computer metaphor: “Computers are, in a nutshell, systematic architectures that take inputs, encode and manipulate information, and transform their inputs into outputs. Brains are, so far as we can tell, exactly that. The real question isn’t whether the brain is an information processor, per se, but rather how do brains store and encode information, and what operations do they perform over that information, once it is encoded.”

Marcus went on to argue that the task of neuroscience is to “reverse engineer” the brain, much as one might study a computer, examining its components and their interconnections to decipher how it works. This suggestion has been around for some time. In 1989, Crick recognised its attractiveness, but felt it would fail, because of the brain’s complex and messy evolutionary history – he dramatically claimed it would be like trying to reverse engineer a piece of “alien technology”. Attempts to find an overall explanation of how the brain works that flow logically from its structure would be doomed to failure, he argued, because the starting point is almost certainly wrong – there is no overall logic.

Reverse engineering a computer is often used as a thought experiment to show how, in principle, we might understand the brain. Inevitably, these thought experiments are successful, encouraging us to pursue this way of understanding the squishy organs in our heads. But in 2017, a pair of neuroscientists decided to actually do the experiment on a real computer chip, which had a real logic and real components with clearly designed functions. Things did not go as expected.

The duo – Eric Jonas and Konrad Paul Kording – employed the very techniques they normally used to analyse the brain and applied them to the MOS 6507 processor found in computers from the late 70s and early 80s that enabled those machines to run video games such as Donkey Kong and Space Invaders.

First, they obtained the connectome of the chip by scanning the 3510 enhancement-mode transistors it contained and simulating the device on a modern computer (including running the games programmes for 10 seconds). They then used the full range of neuroscientific techniques, such as “lesions” (removing transistors from the simulation), analysing the “spiking” activity of the virtual transistors and studying their connectivity, observing the effect of various manipulations on the behaviour of the system, as measured by its ability to launch each of the games.
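The logic of that “lesion” sweep can be sketched in a few lines. What follows is only a schematic of the approach as described here, not Jonas and Kording’s actual code; simulate_chip and its behavioural test are hypothetical stand-ins for a transistor-level simulator.

```python
# Schematic of a transistor "lesion" sweep in the spirit of Jonas & Kording (2017).
# `simulate_chip` is a hypothetical stand-in for a transistor-level simulator of the
# MOS 6507; it is NOT a real API and is left unimplemented here.

GAMES = ["Donkey Kong", "Space Invaders"]
N_TRANSISTORS = 3510          # enhancement-mode transistors, per the article

def simulate_chip(lesioned_transistor, game, seconds=10):
    """Pretend to run `game` on the simulated chip with one transistor removed.
    Should return True if the game still boots and runs for `seconds` seconds."""
    raise NotImplementedError("stand-in for a real transistor-level simulation")

def lesion_sweep():
    """Remove each transistor in turn and record which games still launch."""
    results = {}
    for t in range(N_TRANSISTORS):
        results[t] = {game: simulate_chip(t, game) for game in GAMES}
    return results

# The tempting (and misleading) next step is to label any transistor whose removal
# breaks only one game as, say, a "Donkey Kong transistor" -- a conclusion that says
# more about the limits of the method than about how the chip actually computes.
```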

Despite deploying this powerful analytical armoury, and despite the fact that there is a clear explanation for how the chip works (it has “ground truth”, in technospeak), the study failed to detect the hierarchy of information processing that occurs inside the chip. As Jonas and Kording put it, the techniques fell short of producing “a meaningful understanding”. Their conclusion was bleak: “Ultimately, the problem is not that neuroscientists could not understand a microprocessor, the problem is that they would not understand it given the approaches they are currently taking.”

This sobering outcome suggests that, despite the attractiveness of the computer metaphor and the fact that brains do indeed process information and somehow represent the external world, we still need to make significant theoretical breakthroughs in order to make progress. Even if our brains were designed along logical lines, which they are not, our present conceptual and analytical tools would be completely inadequate for the task of explaining them. This does not mean that simulation projects are pointless – by modelling (or simulating) we can test hypotheses and, by linking the model with well-established systems that can be precisely manipulated, we can gain insight into how real brains function. This is an extremely powerful tool, but a degree of modesty is required when it comes to the claims that are made for such studies, and realism is needed with regard to the difficulties of drawing parallels between brains and artificial systems.


 
Current ‘reverse engineering’ techniques cannot deliver a proper understanding of an Atari console chip, let alone of a human brain. Photograph: Radharc Images/Alamy

Even something as apparently straightforward as working out the storage capacity of a brain falls apart when it is attempted. Such calculations are fraught with conceptual and practical difficulties. Brains are natural, evolved phenomena, not digital devices. Although it is often argued that particular functions are tightly localised in the brain, as they are in a machine, this certainty has been repeatedly challenged by new neuroanatomical discoveries of unsuspected connections between brain regions, or amazing examples of plasticity, in which people can function normally without bits of the brain that are supposedly devoted to particular behaviours.

In reality, the very structures of a brain and a computer are completely different. In 2006, Larry Abbott wrote an essay titled “Where are the switches on this thing?”, in which he explored the potential biophysical bases of that most elementary component of an electronic device – a switch. Although inhibitory synapses can change the flow of activity by rendering a downstream neuron unresponsive, such interactions are relatively rare in the brain.

A neuron is not like a binary switch that can be turned on or off, forming a wiring diagram. Instead, neurons respond in an analogue way, changing their activity in response to changes in stimulation. The nervous system alters its working by changes in the patterns of activation in networks of cells composed of large numbers of units; it is these networks that channel, shift and shunt activity. Unlike any device we have yet envisaged, the nodes of these networks are not stable points like transistors or valves, but sets of neurons – hundreds, thousands, tens of thousands strong – that can respond consistently as a network over time, even if the component cells show inconsistent behaviour.
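A toy numerical illustration of that last claim (my own, not the article’s): each simulated unit below responds to a stimulus in a graded and very noisy way, yet the population as a whole gives an almost identical answer on every trial.

```python
# Toy illustration: individually unreliable, analogue units; collectively stable network.
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 10_000
stimulus = 0.7                      # arbitrary stimulus strength

def population_response(stimulus, n_neurons, noise=0.5):
    """Graded (analogue) response of each unit, heavily corrupted by noise."""
    return stimulus + rng.normal(0.0, noise, size=n_neurons)

trial_a = population_response(stimulus, n_neurons)
trial_b = population_response(stimulus, n_neurons)

# A single unit behaves inconsistently from trial to trial...
print("one neuron:      ", round(trial_a[0], 3), "vs", round(trial_b[0], 3))
# ...but the population average is reproducible to within a small fraction of a percent.
print("population mean: ", round(trial_a.mean(), 3), "vs", round(trial_b.mean(), 3))
```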

Understanding even the simplest of such networks is currently beyond our grasp. Eve Marder, a neuroscientist at Brandeis University, has spent much of her career trying to understand how a few dozen neurons in the lobster’s stomach produce a rhythmic grinding. Despite vast amounts of effort and ingenuity, we still cannot predict the effect of changing one component in this tiny network that is not even a simple brain.

This is the great problem we have to solve. On the one hand, brains are made of neurons and other cells, which interact together in networks, the activity of which is influenced not only by synaptic activity, but also by various factors such as neuromodulators. On the other hand, it is clear that brain function involves complex dynamic patterns of neuronal activity at a population level. Finding the link between these two levels of analysis will be a challenge for much of the rest of the century, I suspect. And the prospect of properly understanding what is happening in cases of mental illness is even further away.

Not all neuroscientists are pessimistic – some confidently claim that the application of new mathematical methods will enable us to understand the myriad interconnections in the human brain. Others – like myself – favour studying animals at the other end of the scale, focusing our attention on the tiny brains of worms or maggots and employing the well-established approach of seeking to understand how a simple system works and then applying those lessons to more complex cases. Many neuroscientists, if they think about the problem at all, simply consider that progress will inevitably be piecemeal and slow, because there is no grand unified theory of the brain lurking around the corner.

There are many alternative scenarios about how the future of our understanding of the brain could play out: perhaps the various computational projects will come good and theoreticians will crack the functioning of all brains, or the connectomes will reveal principles of brain function that are currently hidden from us. Or a theory will somehow pop out of the vast amounts of imaging data we are generating. Or we will slowly piece together a theory (or theories) out of a series of separate but satisfactory explanations. Or by focusing on simple neural network principles we will understand higher-level organisation. Or some radical new approach integrating physiology and biochemistry and anatomy will shed decisive light on what is going on. Or new comparative evolutionary studies will show how other animals are conscious and provide insight into the functioning of our own brains. Or unimagined new technology will change all our views by providing a radical new metaphor for the brain. Or our computer systems will provide us with alarming new insight by becoming conscious. Or a new framework will emerge from cybernetics, control theory, complexity and dynamical systems theory, semantics and semiotics. Or we will accept that there is no theory to be found because brains have no overall logic, just adequate explanations of each tiny part, and we will have to be satisfied with that.

Sunday, 26 May 2019

How an Economy shapes Political Consciousness - A Pakistan story

Nadeem Paracha in The Dawn


In March 1991, a few days after US forces invaded Iraq for the first time, 90 per cent of Americans who were polled approved of President George H.W. Bush’s ‘job performance’. Bush’s approval ratings skyrocketed and political commentators predicted that the Republican Party would be able to retain the presidency in the 1992 election.

Republican presidents Ronald Reagan and then Bush had held the White House since 1981. And in 1991, it seemed Bush, too, would be able to win a second term just as his predecessor Reagan had.

However, by the end of 1991, Bush’s approval ratings began to plummet, surprising many political pundits. This is when the strategy team of Bush’s opponent Bill Clinton (Democratic Party) came up with the slogan, “It’s the economy, stupid.”

Clinton was able to break the winning streak of the Republican Party by attacking the Bush administration’s economic performance, knowing full well that the struggling economy had begun to hurt many Republican voters as well.

According to the famous German philosopher and political theorist Karl Marx, a person’s “political consciousness” is almost always shaped by his economic circumstances.

Let me demonstrate this through the example of an acquaintance of mine, Tahir, or rather, through the story of his dad, Baqir. I’ve known Tahir since school. His family became extremely conservative in the 1980s, but it wasn’t always so.

Tahir’s father had migrated to Pakistan from India in 1947. He was 16 at the time. Tahir’s paternal grandfather was a small trader who set up a shop in Karachi in 1949. Tahir’s father often visited the shop after school.

Tahir once told me that their “class status suddenly jumped from lower-middle to upper-middle” in the early 1950s, when his grandfather managed to export merchandise to the US forces stationed in Korea.

Between 1950 and 1953, the Pakistani economy witnessed a boom of sorts due to such exports to the US during armed conflict between the US military and China-backed North Korean armies.

Tahir’s father, Baqir, took over the family business in the mid-1950s and began to expand it. Tahir told me that his father led a “highly Westernised life” and befriended many industrialists, bureaucrats and politicians. Baqir fully supported Ayub Khan’s 1958 coup because he believed that political instability had begun to negatively impact his family’s economic fortunes.

And Baqir did greatly benefit from the Ayub regime’s ‘pro-business’ policies. In 1960, he married a bureaucrat’s daughter. It was a love marriage. Apart from expanding his export business, Baqir spread his economic interests by buying two cinemas in Karachi and one in Lahore. He also bought a restaurant and opened two bars in Karachi’s Saddar and Tariq Road areas.

He also built a new palatial family home in Karachi.

According to political economist Akbar Zaidi, the country’s annual GDP growth rate during the Ayub regime (1958-69) was an impressive 6.7 per cent. But Zaidi also mentions that Ayub’s policies in this context created economic disparities which were exploited by opposition parties, such as Z.A. Bhutto’s PPP.

Baqir was a card-carrying member of Ayub’s centrist and modernist Convention Muslim League. In December 1971, the PPP came to power on a ‘socialist’ platform. Pakistan’s import bill rose because of the 1973 world oil price shock, and the economy was further battered by a serious global recession during 1974-77, the failure of cotton crops in 1974-75, pest attacks on crops and massive floods in 1973, 1974 and 1976-77. Pakistan experienced its worst inflation during 1972-77, when prices increased by 15 per cent.

As his business nosedived, Baqir sold his cinemas and bars in 1973, and in 1975 he wrapped up his export business and moved the family to London where he opened two Pakistani restaurants. However, he returned to Karachi after the fall of the Bhutto regime in 1977. By 1980, he was able to resurrect his business in Karachi when the Gen Zia dictatorship initiated denationalisation, deregulation and privatisation policies.

Pakistan achieved a national savings/GDP ratio of 16 per cent in 1986-87 amidst massive inflows of worker remittances from the Middle East. Unprecedented financial aid from the US and Saudi Arabia (for the anti-Soviet insurgency in Afghanistan) also helped.

Baqir was successful in regenerating his export business and also became an importer after Zia lifted curbs on imports. This was the period of Zia’s ‘Islamisation’ and Baqir followed suit by shunning his ‘Westernised ways’. He became a ‘born-again Muslim’. His palatial house in Karachi also went through a transformation. Expensive paintings gave way to equally expensive calligraphy of sacred verses and watercolour paintings of Islam’s sacred sites.

He built a mosque in the area where the house stood and also one in his vast office.


He remained a Zia supporter even after the latter’s demise in 1988. He voted for Nawaz Sharif’s (then ‘Ziaist’ and pro-business) PML-N until his business once again began to go south due to international sanctions imposed on Pakistan after the country tested two nuclear devices in 1998.

In the early 2000s, Baqir handed over the reins of the family business to Tahir. Tahir supported the Musharraf dictatorship for a while but, despite the 8.5 per cent growth rate achieved by the regime till 2005, he could not revive the family business.

Out of frustration, he sold it off and joined a multinational organisation as an employee. He also vented that frustration by supporting the anti-Musharraf movement in 2007. The economy had begun to spiral down, and this too meant that Tahir’s wish to revive the family business was thwarted.

He got married and moved to Qatar and then Saudi Arabia. This is when I reconnected with him through Facebook. He supported Imran Khan in 2013 and, just before the 2018 elections, he was posting statuses about the upcoming ‘Islamic welfare state’ and Riyasat-i-Madina on Facebook.

However, only recently, as the country’s economy is once again threatening to spiral down, his Facebook posts have become critical of Khan’s regime. So I inboxed him: “Tahir, it seems there is no place for you to restart the family business in Riyasat-i-Madina.”

He didn’t reply.

Saturday, 5 May 2018

Is Marx still relevant 200 years later?

Amartya Sen in The Indian Express

How should we think about Karl Marx on his 200th birthday? His big influence on the politics of the world is universally acknowledged, though people would differ on how good or bad that influence has been. But going beyond that, there can be little doubt that the intellectual world has been transformed by the reflective departures Marx generated, from class analysis as an essential part of social understanding, to the explication of the profound contrast between needs and hard work as conflicting foundations of people’s moral entitlements. Some of the influences have been so pervasive, with such strong impact on the concepts and connections we look for in our day-to-day analysis, that we may not be fully aware of where the influences came from. In reading some classic works of Marx, we are often placed in the uncomfortable position of the theatre-goer who loved Hamlet as a play, but wondered why it was so full of quotations.

Marxian analysis remains important today not just because of Marx’s own original work, but also because of the extraordinary contributions made in that tradition by many leading historians, social scientists and creative artists — from Antonio Gramsci, Rosa Luxemburg, Jean-Paul Sartre and Bertolt Brecht to Piero Sraffa, Maurice Dobb and Eric Hobsbawm (to mention just a few names). We do not have to be a Marxist to make use of the richness of Marx’s insights — just as one does not have to be an Aristotelian to learn from Aristotle.

There are ideas in Marx’s corpus of work that remain under-explored. I would place among the relatively neglected ideas Marx’s highly original concept of “objective illusion,” and related to that, his discussion of “false consciousness”. An objective illusion may arise from what we can see from our particular position — how things look from there (no matter how misleading). Consider the relative sizes of the sun and the moon, and the fact that from the earth they look to be about the same size (Satyajit Ray offered some interesting conversations on this phenomenon in his film, Agantuk). But to conclude from this observation that the sun and the moon are in fact of the same size in terms of mass or volume would be mistaken, and yet to deny that they do look to be about the same size from the earth would be a mistake too. Marx’s investigation of objective illusion — of “the outer form of things” — is a pioneering contribution to understanding the implications of positional dependence of observations.

The phenomenon of objective illusion helps to explain the widespread tendency of workers in an exploitative society to fail to see that there is any exploitation going on – an example that Marx did much to investigate, in the form of “false consciousness”. The idea can have many applications going beyond Marx’s own use of it. Powerful use can be made of the notion of objective illusion to understand, for example, how women, and indeed men, in strongly sexist societies may not see clearly enough – in the absence of informed political agitation – that there are huge elements of gender inequality in what look like family-oriented, just societies, seen as bastions of role-based fairness.

There is, however, a danger in seeing Marx in narrowly formulaic terms — for example, in seeing him as a “materialist” who allegedly understood the world in terms of the importance of material conditions, denying the significance of ideas and beliefs. This is not only a serious misreading of Marx, who emphasised two-way relations between ideas and material conditions, but also a seriously missed opportunity to see the far-reaching role of ideas on which Marx threw such important light.

Let me illustrate the point with a debate on the discipline of historical explanation that was quite widespread in our own time. In one of Eric Hobsbawm’s lesser known essays, called “Where Are British Historians Going?”, published in the Marxist Quarterly in 1955, he discussed how the Marxist pointer to the two-way relationship between ideas and material conditions offers very different lessons in the contemporary world than it had in the intellectual world that Marx himself saw around him, where the prevailing focus — for example by Hegel and Hegelians — was very much on highlighting the influence of ideas on material conditions.

In contrast, the dominant schools of history in the mid-twentieth century – Hobsbawm cited here the hugely influential historical works of Lewis Bernstein Namier – had come to embrace a type of materialism that saw human action as being almost entirely motivated by a simple kind of material interest, in particular narrowly defined self-interest. Given this completely different kind of bias (very far removed from the idealist traditions of Hegel and other influential thinkers in Marx’s own time), Hobsbawm argued that a balanced two-way view demands that analysis along Marxian lines today must particularly emphasise the importance of ideas and their influence on material conditions.

For example, it is crucial to recognise that Edmund Burke’s hugely influential criticism of Warren Hastings’s misbehaviour in India — in the famous Impeachment hearings — was directly related to Burke’s strongly held ideas of justice and fairness, whereas the self-interest-obsessed materialist historians, such as Namier, saw no more in Burke’s discontent than the influence of his [Burke’s] profit-seeking concerns which had suffered because of the policies pursued by Hastings. The overreliance on materialism — in fact of a particularly narrow kind — needed serious correction, argued Hobsbawm: “In the pre-Namier days, Marxists regarded it as one of their chief historical duties to draw attention to the material basis of politics. But since bourgeois historians have adopted what is a particular form of vulgar materialism, Marxists had to remind them that history is the struggle of men for ideas, as well as a reflection of their material environment. Mr Trevor-Roper [a famous right-wing historian] is not merely mistaken in believing that the English Revolution was the reflection of the declining fortunes of country gentlemen, but also in his belief that Puritanism was simply a reflection of their impending bankruptcies.”

To Hobsbawm’s critique, it could be added that the so-called “rational choice theory” (so dominant in recent years in large parts of mainstream economics and political analysis) thrives on a single-minded focus on self-interest as the sole human motivation, thereby missing comprehensively the balance that Marx had argued for. A rational choice theorist can, in fact, learn a great deal from reading Marx’s Economic and Philosophic Manuscripts and The German Ideology. While this would be a very different lesson from what Marx wanted Hegelians to consider, a commitment to doing justice to the two-way relations characterises both parts of Marx’s capacious pedagogy. What has to be avoided is the narrowing of Marx’s thoughts through simple formulas respectfully distributed in his name.

In remembering Marx on his 200th birthday, we not only celebrate a great intellectual, but also one whose critical analyses and investigations have many insights to offer to us today. Paying attention to Marx may be more important than paying him respect.


-------


Slavoj Zizek in The Independent


There is a delicious old Soviet joke about Radio Yerevan: a listener asks: “Is it true that Rabinovitch won a new car in the lottery?”, and the radio presenter answers: “In principle yes, it’s true, only it wasn’t a new car but an old bicycle, and he didn’t win it but it was stolen from him.”

Does exactly the same not hold for Marx’s legacy today? Let’s ask Radio Yerevan: “Is Marx’s theory still relevant today?” We can guess the answer: in principle yes, he describes wonderfully the mad dance of capitalist dynamics which only reached its peak today, more than a century and a half later, but… Gerald A Cohen enumerated the four features of the classic Marxist notion of the working class: (1) it constitutes the majority of society; (2) it produces the wealth of society; (3) it consists of the exploited members of society; and (4) its members are the needy people in society. When these four features are combined, they generate two further features: (5) the working class has nothing to lose from revolution; and (6) it can and will engage in a revolutionary transformation of society.

None of the first four features applies to today’s working class, which is why features (5) and (6) cannot be generated. Even if some of the features continue to apply to parts of today’s society, they are no longer united in a single agent: the needy people in society are no longer the workers, and so on.

But let’s dig into this question of relevance and appropriateness further. Not only is Marx’s critique of political economy and his outline of the capitalist dynamics still fully relevant, but one could even take a step further and claim that it is only today, with global capitalism, that it is fully relevant.

However, the moment of triumph is also a moment of defeat: after overcoming external obstacles, the new threat comes from within. In other words, Marx was not simply wrong, he was often right – but more literally than he himself expected to be.

For example, Marx couldn’t have imagined that the capitalist dynamics of dissolving all particular identities would translate into ethnic identities as well. Today’s celebration of “minorities” and “marginals” is the predominant majority position – alt-rightists who complain about the terror of “political correctness” take advantage of this by presenting themselves as protectors of an endangered minority, attempting to mirror campaigns on the other side.

And then there’s the case of “commodity fetishism”. Recall the classic joke about a man who believes himself to be a grain of seed and is taken to a mental institution where the doctors do their best to finally convince him that he is not a grain but a man. When he is cured (convinced that he is not a grain of seed but a man) and allowed to leave the hospital, he immediately comes back trembling. There is a chicken outside the door and he is afraid that it will eat him.

“Dear fellow,” says his doctor, “you know very well that you are not a grain of seed but a man.”

“Of course I know that,” replies the patient, “but does the chicken know it?”

So how does this apply to the notion of commodity fetishism? Note the very beginning of the subchapter on commodity fetishism in Marx’s Das Kapital: “A commodity appears at first sight an extremely obvious, trivial thing. But its analysis brings out that it is a very strange thing, abounding in metaphysical subtleties and theological niceties.”

Commodity fetishism (our belief that commodities are magic objects, endowed with an inherent metaphysical power) is not located in our mind, in the way we (mis)perceive reality, but in our social reality itself. We may know the truth, but we act as if we don’t know it – in our real life, we act like the chicken from the joke.

Niels Bohr, who already gave the right answer to Einstein’s “God doesn’t play dice” (“Don’t tell God what to do!”), also provided the perfect example of how a fetishist disavowal of belief works. Seeing a horseshoe on his door, a surprised visitor commented that he didn’t think Bohr believed in superstitious ideas about horseshoes bringing good luck to people. Bohr snapped back: “I also do not believe in it; I have it there because I was told that it works whether one believes in it or not!”

This is how ideology works in our cynical era: we don’t have to believe in it. Nobody takes democracy or justice seriously, we are all aware of their corruption, but we practice them – in other words, we display our belief in them – because we assume they work even if we do not believe in them.

With regard to religion, we no longer “really believe”, we just follow (some of the) religious rituals and mores as part of the respect for the “lifestyle” of the community to which we belong (non-believing Jews obeying kosher rules “out of respect for tradition”, for example).

“I do not really believe in it, it is just part of my culture” seems to be the predominant mode of the displaced belief, characteristic of our times. “Culture” is the name for all those things we practice without really believing in them, without taking them quite seriously.

This is why we dismiss fundamentalist believers as “barbarians” or “primitive”, as anti-cultural, as a threat to culture – they dare to take seriously their beliefs. The cynical era in which we live would have no surprises for Marx.

Marx’s theories are thus not simply alive: Marx is a ghost who continues to haunt us – and the only way to keep him alive is to focus on those of his insights which are today more true than in his own time.

Thursday, 28 December 2017

I used to think people made rational decisions. But now I know I was wrong

Deborah Orr in The Guardian

It’s been coming on for a while, so I can’t claim any eureka moment. But something did crystallise this year. What I changed my mind about was people. More specifically, I realised that people cannot be relied upon to make rational choices. We would have fixed global warming by now if we were rational. Instead, there’s a stubborn refusal to let go of the idea that environmental degradation is a “debate” in which there are “two sides”.

Debating is a great way of exploring an issue when there is real room for doubt and nuance. But when a conclusion can be reached simply by assembling a mountain of known facts, debating is just a means of pitting the rational against the irrational. 

Humans like to think we are rational. Some of us are more rational than others. But, essentially, we are all slaves to our feelings and emotions. The trick is to realise this, and be sure to guard against it. It’s something that, in the modern world, we are not good at. Authentic emotions are valued over dry, dull authentic evidence at every turn.

I think that as individuality has become fetishised, our belief in our right to make half-formed snap judgments, based on little more than how we feel, has become problematically unchallengeable. When Uma Thurman declared that she would wait for her anger to abate before she spoke about Harvey Weinstein, it was, I believe, in recognition of this tendency to speak first and think later.

Good for her. The value of calm reasoning is not something that one sees acknowledged very often at the moment. Often, the feelings and emotions that form the basis of important views aren’t so very fine. Sometimes humans understand and control their emotions so little that they sooner or later coagulate into a roiling soup of anxiety, fear, sadness, self-loathing, resentment and anger which expresses itself however it can, finding objects to project its hurt and confusion on to. Like immigrants. Or transsexuals. Or liberals. Or Tories. Or women. Or men.

Even if the desire to find living, breathing scapegoats is resisted, untrammelled emotion can result in unwise and self-defeating decisions, devoid of any rationality. Rationality is a tool we have created to govern our emotions. That’s why education, knowledge, information is the cornerstone of democracy. And that’s why despots love ignorance.

Sometimes we can identify and harness the emotions we need to get us through the thing we know, rationally, that we have to do. It’s great when you’re in the zone. Even negative emotions can be used rationally. I, for example, use anger a lot in my work. I’m writing on it at this moment, just as much as I’m writing on a computer. I’ll stop in a moment. I’ll reach for facts to calm myself. I’ll reach for facts to make my emotions seem rational. Or maybe that’s just me. Whatever that means.


“‘Consciousness’ involves no executive or causal relationship with any of the psychological processes attributed to it” – David Oakley and Peter Halligan


It’s a fact that I can find some facts to back up my feelings about people. Just writing that down helps me to feel secure and in control. The irrationality of humans has been considered a fact since the 1970s, when two psychologists, Amos Tversky and Daniel Kahneman, showed that human decisions were often completely irrational, not at all in their own interests and based on “cognitive biases”. Their ideas were a big deal, and also formed the basis of Michael Lewis’s book, The Undoing Project.

More recent research – or more recent theory, to be precise – has rendered even Tversky and Kahneman’s ideas about the unreliability of the human mind overly rational.

Chasing the Rainbow: The Non-Conscious Nature of Being is a research paper from University College London and Cardiff University. Its authors, David Oakley and Peter Halligan, argue “that ‘consciousness’ contains no top-down control processes and that ‘consciousness’ involves no executive, causal, or controlling relationship with any of the familiar psychological processes conventionally attributed to it”.

Which can only mean that even when we think we’re being rational, we’re not even really thinking. That thing we call thinking – we don’t even know what it really is.

When I started out in journalism, opinion columns weren’t a big thing. Using the word “I” in journalism was frowned upon. The dispassionate dissemination of facts was the goal to be reached for.

Now so much opinion is published, in print and online, and so many people offer their opinions about the opinions, that people in our government feel comfortable in declaring that experts are overrated, and the president of the United States regularly says that anything he doesn’t like is “fake news”.

So, people. They’re a problem. That’s what I’ve decided. I’m part of a big problem. All I can do now is get my message out there.

Friday, 22 May 2015

Seven common myths about meditation


Julia Roberts learns how to meditate in the film Eat, Pray, Love. Photograph: Allstar/COLUMBIA PICTURES/Sportsphoto Ltd./Allstar

Catherine Wikholm in The Guardian

Meditation is becoming increasingly popular, and in recent years there have been calls for mindfulness (a meditative practice with Buddhist roots) to be more widely available on the NHS. Often promoted as a sure-fire way to reduce stress, it’s also being increasingly offered in schools, universities and businesses.

For the secularised mind, meditation fills a spiritual vacuum; it brings the hope of becoming a better, happier individual in a more peaceful world. However, the fact that meditation was primarily designed not to make us happier, but to destroy our sense of individual self – who we feel and think we are most of the time – is often overlooked in the science and media stories about it, which focus almost exclusively on the benefits practitioners can expect.

If you’re considering it, here are seven common beliefs about meditation that are not supported by scientific evidence.

Myth 1: Meditation never has adverse or negative effects. It will change you for the better (and only the better)

Fact 1: It’s easy to see why this myth might spring up. After all, sitting in silence and focusing on your breathing would seem like a fairly innocuous activity with little potential for harm. But when you consider how many of us, when worried or facing difficult circumstances, cope by keeping ourselves very busy and with little time to think, it isn’t that much of a surprise to find that sitting without distractions, with only ourselves, might lead to disturbing emotions rising to the surface.

However, many scientists have turned a blind eye to the potential unexpected or harmful consequences of meditation. With Transcendental Meditation, this is probably because many of those who have researched it have also been personally involved in the movement; with mindfulness, the reasons are less clear, because it is presented as a secular technique. Nevertheless, there is emerging scientific evidence from case studies, surveys of meditators’ experience and historical studies to show that meditation can be associated with stress, negative effects and mental health problems. For example, one study found that mindfulness meditation led to increased cortisol, a biological marker of stress, despite the fact that participants subjectively reported feeling less stressed.


Myth 2: Meditation can benefit everyone


Fact 2: The idea that meditation is a cure-all lacks scientific basis. “One man’s meat is another man’s poison,” the psychologist Arnold Lazarus reminded us in his writings about meditation. Although there has been relatively little research into how individual circumstances – such as age, gender, or personality type – might play a role in the value of meditation, there is a growing awareness that meditation works differently for each individual.

For example, it may provide an effective stress-relief technique for individuals facing serious problems (such as being unemployed), but have little value for low-stressed individuals. Or it may benefit depressed individuals who suffered trauma and abuse in their childhood, but not other depressed people. There is also some evidence that – along with yoga – it can be of particular use to prisoners, for whom it improves psychological wellbeing and, perhaps more importantly, encourages better control over impulsivity. We shouldn’t be surprised about meditation having variable benefits from person to person. After all, the practice wasn’t intended to make us happier or less stressed, but to assist us in diving deep within and challenging who we believe we are.


Myth 3: If everyone meditated the world would be a much better place

Fact 3: All global religions share the belief that following their particular practices and ideals will make us better individuals. So far, there is no clear scientific evidence that meditation is more effective at making us, for example, more compassionate than other spiritual or psychological practices. Research on this topic has serious methodological and theoretical limitations and biases. Most of the studies have no adequate control groups and generally fail to assess the expectations of participants (ie, if we expect to benefit from something, we may be more likely to report benefits).


Myth 4: If you’re seeking personal change and growth, meditating is as efficient as – or more efficient than – having therapy

Fact 4: There is very little evidence that an eight-week mindfulness-based group programme has the same benefits as conventional psychological therapy – most studies compare mindfulness to “treatment as usual” (such as seeing your GP), rather than one-to-one therapy. Although mindfulness interventions are group-based and most psychological therapy is conducted on a one-to-one basis, both approaches involve developing an increased awareness of our thoughts, emotions and way of relating to others. But the levels of awareness probably differ. A therapist can encourage us to examine conscious or unconscious patterns within ourselves, whereas these might be difficult to access in a one-size-fits-all group course, or if we were meditating on our own.


Myth 5: Meditation produces a unique state of consciousness that we can measure scientifically

Teachers and pupils practise meditation techniques at Bethnal Green Academy. Photograph: Sean Smith for the Guardian

Fact 5: Meditation produces states of consciousness that we can indeed measure using various scientific instruments. However, the overall evidence is that these states are not physiologically unique. Furthermore, although different kinds of meditation may have diverse effects on consciousness (and on the brain), there is no scientific consensus about what these effects are.

Myth 6: We can practise meditation as a purely scientific technique with no religious or spiritual leanings

Fact 6: In principle, it’s perfectly possible to meditate and be uninterested in the spiritual background to the practice. However, research shows that meditation leads us to become more spiritual, and that this increase in spirituality is partly responsible for the practice’s positive effects. So, even if we set out to ignore meditation’s spiritual roots, those roots may nonetheless envelop us, to a greater or lesser degree. Overall, it is unclear whether secular models of mindfulness meditation are fully secular.

Myth 7: Science has unequivocally shown how meditation can change us and why

Fact 7: Meta-analyses show there is moderate evidence that meditation affects us in various ways, such as increasing positive emotions and reducing anxiety. However, it is less clear how powerful and long-lasting these changes are.

Some studies show that meditating can have a greater impact than physical relaxation, although other research using a placebo meditation contradicts this finding. We need better studies but, perhaps as important, we also need models that explain how meditation works. For example, with mindfulness-based cognitive therapy (MBCT), we still can’t be sure of the “active” ingredient. Is it the meditation itself that causes positive effects, or is it the fact that the participant learns to step back and become aware of his or her thoughts and feelings in a supportive group environment?

There simply is no cohesive, overarching attempt to describe the various psychobiological processes that meditation sets in motion. Unless we can clearly map the effects of meditation – both the positive and the negative – and identify the processes underpinning the practice, our scientific understanding of meditation is precarious and can easily lead to exaggeration and misinterpretation.

Wednesday, 10 October 2012

Afterlife exists says top brain surgeon


During his illness Dr Alexander says that the part of his brain which controls human thought and emotion "shut down" and that he then experienced "something so profound that it gave me a scientific reason to believe in consciousness after death." In an essay for American magazine Newsweek, which he wrote to promote his book Proof of Heaven, Dr Alexander says he was met by a beautiful blue-eyed woman in a "place of clouds, big fluffy pink-white ones" and "shimmering beings".
He continues: "Birds? Angels? These words registered later, when I was writing down my recollections. But neither of these words do justice to the beings themselves, which were quite simply different from anything I have known on this planet. They were more advanced. Higher forms." The doctor adds that a "huge and booming like a glorious chant, came down from above, and I wondered if the winged beings were producing it. the sound was palpable and almost material, like a rain that you can feel on your skin but doesn't get you wet."
Dr Alexander says he had heard stories from patients who spoke of outer body experiences but had disregarded them as "wishful thinking" but has reconsidered his opinion following his own experience.
He added: "I know full well how extraordinary, how frankly unbelievable, all this sounds. Had someone even a doctor told me a story like this in the old days, I would have been quite certain that they were under the spell of some delusion. 
"But what happened to me was, far from being delusional, as real or more real than any event in my life. That includes my wedding day and the birth of my two sons." He added: "I've spent decades as a neurosurgeon at some of the most prestigous medical institutions in our country. I know that many of my peers hold as I myself did to the theory that the brain, and in particular the cortex, generates consciousness and that we live in a universe devoid of any kind of emotion, much less the unconditional love that I now know God and the universe have toward us.
"But that belief, that theory, now lies broken at our feet. What happened to me destroyed it."