'People will forgive you for being wrong, but they will never forgive you for being right - especially if events prove you right while proving them wrong.' Thomas Sowell
Friday, 10 April 2020
Information can make you sick
Trader-turned-neuroscientist John Coates in the FT on why economic crises are also medical ones.
As coronavirus infection rates peak in many countries, the markets rally. There is a nagging worry that a second wave of infections might occur once lockdowns are lifted or summer passes. But for anyone immersed in the financial markets there should be a further concern. Volatility created by the pandemic could itself cause a second wave of health problems. Volatility can make you sick, just as a virus can.
To get an inkling of what this other second wave might look like, it helps to recall what happened after the credit crisis. That event was both a financial and a medical disaster. Various epidemiological studies suggest it may be responsible for 260,000 cancer deaths in OECD countries; a 17.8 per cent increase in the Greek mortality rate between 2010 and 2016; and a spike in cardiovascular disease in London in 2008-09, with an additional 2,000 deaths due to heart attacks. The current economic crisis may be far worse than that of 2008-09, so the medical fallout could be as well.
Why do financial and medical crises go hand in hand? Many of the above studies focused on unemployment and reduced access to healthcare as causes of the adverse health outcomes. But research my colleagues and I have conducted on trading floors for the past 12 years suggests to me that uncertainty itself, regardless of outcome, can have independent and profound effects on physiology and health.
Our studies were designed initially to test a hunch I had while running a trading desk for Deutsche Bank, that the rollercoaster of physical sensations a person experiences while immersed in the markets alters their risk-taking. After retraining in neuroscience and physiology at Cambridge University, I set up shop on various hedge fund and asset manager trading floors, along with colleagues, mostly medical researchers. Using wearable tech and sampling biochemistry, we tracked the traders’ cardiovascular, endocrine and immune systems.
My goal was to demonstrate how these physiological changes altered trader performance. Increasingly, though, I came to see that a trading floor provides an elegant model for studying occupational health.
One remarkable thing we found was that traders’ bodies calibrated sensitively to market volatility. For humans, apparently, information is physical. You do not process information dispassionately, as a computer does; rather your brain quietly figures out what movement might ensue from the information, and prepares your body, altering heart rate, adrenaline levels, immune activation and so on.
Your brain did not evolve to support Platonic thought; it evolved to process movement. Our larger brain controls a more sophisticated set of muscles, giving us an ability to learn new movements unmatched by any other animal — or robot — on the planet. If you want to understand yourself, fellow humans, even the markets, put movement at the very core of what we are.
Essential to our exquisite motor control is an equally advanced system of fuel injection, one that has been misleadingly termed “the stress response”. Stress connotes something nasty but the stress response is nothing more sinister than a metabolic preparation for movement. Cortisol, the main stress molecule, inhibits bodily systems not needed during movement, such as digestion and reproduction, and marshals glucose and free fatty acids as fuel for our cells.
The stress response evolved to be short lived, acutely activated for only a few hours or days. Yet during a crisis such as the current one, you can activate the stress response for weeks and months at a time. Then an acute stress response morphs into a chronic one. Your digestive system is inhibited, so you become susceptible to gastrointestinal disorders; blood pressure increases, so you are prone to hypertension; fatty acids and glucose circulate in your blood but are not used, because you are stuck at home, so your risk of cardiovascular disease increases. Finally, by inhibiting parts of the immune system, stress impairs your ability to recover from diseases such as cancer and Covid-19.
So why the connection with uncertainty? The stress response is largely predictive rather than reactive. Just as we try to predict the future location of a tennis ball, so too we predict our metabolic needs. When we encounter situations of novelty and uncertainty, we do not know what to expect, so we marshal a preparatory stress response. The stress response is comparable to revving your engine at a yellow light. Situations of novelty can be described, following Claude Shannon, inventor of information theory, as “information rich”. Conveniently, informational load in the financial markets can be measured by the level of volatility: the more Shannon information flowing into the markets, the higher the volatility.
In two of our studies we found that traders’ cortisol levels did in fact track bond volatility almost tick for tick. It did not even matter if the traders were making or losing money; just put a human in the presence of information and their metabolism calibrates to it. Take a moment to contemplate that curious result — there are molecules in your blood that track the amount of information you process.
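To make the two quantitative ideas above a little more concrete (volatility as a proxy for Shannon information, and a hormone series tracking a volatility series), here is a minimal Python sketch. It is not Coates's methodology: the data are synthetic, and every function, bin choice and parameter below is an assumption made purely for illustration.

```python
# Illustrative sketch only (not from Coates's studies): volatility as a rough
# proxy for informational load, and a correlation between a daily volatility
# series and a daily hormone series. All data below are synthetic.
import numpy as np

def realized_volatility(log_returns):
    """Standard deviation of intraday log returns for one day."""
    return float(np.std(log_returns, ddof=1))

def shannon_entropy_bits(log_returns, bin_edges):
    """Entropy (in bits) of returns under a fixed discretisation: wider swings
    spread probability mass over more bins, so higher volatility reads as a
    higher informational load."""
    counts, _ = np.histogram(log_returns, bins=bin_edges)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def pearson_r(x, y):
    """Pearson correlation between two equal-length series."""
    return float(np.corrcoef(x, y)[0, 1])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    days, ticks = 60, 300
    edges = np.linspace(-0.05, 0.05, 41)             # common bins for every day
    daily_sigma = 0.002 + 0.018 * rng.random(days)   # a different regime each day
    vol, info = [], []
    for s in daily_sigma:
        r = rng.normal(0.0, s, ticks)                # synthetic intraday returns
        vol.append(realized_volatility(r))
        info.append(shannon_entropy_bits(r, edges))
    vol, info = np.array(vol), np.array(info)
    # Hypothetical cortisol series, assumed (for illustration) to follow volatility.
    cortisol = 10 + 400 * vol + rng.normal(0, 0.5, days)
    print("entropy  vs volatility: r =", round(pearson_r(info, vol), 2))
    print("cortisol vs volatility: r =", round(pearson_r(cortisol, vol), 2))
```

In the actual studies the cortisol values came from samples taken on the trading floor and the volatility was measured in the bond markets the traders worked in; the sketch only shows the shape of the comparison.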
Today, with historically elevated volatility, there is a good chance cortisol levels are trending higher. Immune systems could also be affected. When your body is attacked by a pathogen, your immune system coordinates a suite of changes known as “sickness behaviour”. You develop a fever, lose your appetite and withdraw socially. You also experience increased risk aversion.
Central to the immune response is inflammation, the process of eliminating pathogens and initiating tissue repair. However, inflammation can also occur in stressful situations, because cytokines, the molecules triggering inflammation, assist in the recruitment of metabolic reserves. If inflammation becomes systemic and chronic, it contributes to a wide range of health problems. We found that interleukin-1-beta, the first responder of inflammation, tracked volatility as closely as cortisol.
Recently we have focused on the cardiovascular system. Working with a large and sophisticated fund manager, we have used cutting-edge wearable tech that permits portfolio managers to track their cardiovascular data, physical activity and sleep. The cardiovascular system similarly tracks volatility and risk appetite.
In short, here we may have a mechanism connecting financial and health crises. On the one hand, fluctuating levels of stress and inflammation affect risk-taking. In a lab-based study, we found that chronically elevated cortisol caused a large decrease in risk appetite. Shifting risk presents tricky problems for risk management — and for central banks. Physiology-induced risk aversion can feed a bear market, morphing it into a crash so dangerous that the state has to step in with asset purchases. On the other hand, chronically elevated stress and inflammation are known to contribute to a wide range of health problems.
We are not accustomed to combining financial and medical data in this way. But corporate and state health programmes should start.
The markets today are living through a period of volatility the likes of which I have never encountered. March was, to put it mildly, information rich. As a result, there is now the very real possibility of a second wave of disease. Viruses can make you sick, but so too can information.
Thursday, 27 February 2020
Why your brain is not a computer
For decades it has been the dominant metaphor in neuroscience. But could this idea have been leading us astray all along? By Matthew Cobb in The Guardian
We are living through one of the greatest of scientific endeavours – the attempt to understand the most complex object in the universe, the brain. Scientists are accumulating vast amounts of data about structure and function in a huge array of brains, from the tiniest to our own. Tens of thousands of researchers are devoting massive amounts of time and energy to thinking about what brains do, and astonishing new technology is enabling us to both describe and manipulate that activity.
There are many alternative scenarios about how the future of our understanding of the brain could play out: perhaps the various computational projects will come good and theoreticians will crack the functioning of all brains, or the connectomes will reveal principles of brain function that are currently hidden from us. Or a theory will somehow pop out of the vast amounts of imaging data we are generating. Or we will slowly piece together a theory (or theories) out of a series of separate but satisfactory explanations. Or by focusing on simple neural network principles we will understand higher-level organisation. Or some radical new approach integrating physiology and biochemistry and anatomy will shed decisive light on what is going on. Or new comparative evolutionary studies will show how other animals are conscious and provide insight into the functioning of our own brains. Or unimagined new technology will change all our views by providing a radical new metaphor for the brain. Or our computer systems will provide us with alarming new insight by becoming conscious. Or a new framework will emerge from cybernetics, control theory, complexity and dynamical systems theory, semantics and semiotics. Or we will accept that there is no theory to be found because brains have no overall logic, just adequate explanations of each tiny part, and we will have to be satisfied with that.
We can now make a mouse remember something about a smell it has never encountered, turn a bad mouse memory into a good one, and even use a surge of electricity to change how people perceive faces. We are drawing up increasingly detailed and complex functional maps of the brain, human and otherwise. In some species, we can change the brain’s very structure at will, altering the animal’s behaviour as a result. Some of the most profound consequences of our growing mastery can be seen in our ability to enable a paralysed person to control a robotic arm with the power of their mind.
Every day, we hear about new discoveries that shed light on how brains work, along with the promise – or threat – of new technology that will enable us to do such far-fetched things as read minds, or detect criminals, or even be uploaded into a computer. Books are repeatedly produced that each claim to explain the brain in different ways.
And yet there is a growing conviction among some neuroscientists that our future path is not clear. It is hard to see where we should be going, apart from simply collecting more data or counting on the latest exciting experimental approach. As the German neuroscientist Olaf Sporns has put it: “Neuroscience still largely lacks organising principles or a theoretical framework for converting brain data into fundamental knowledge and understanding.” Despite the vast number of facts being accumulated, our understanding of the brain appears to be approaching an impasse.
In 2017, the French neuroscientist Yves Frégnac focused on the current fashion of collecting massive amounts of data in expensive, large-scale projects and argued that the tsunami of data they are producing is leading to major bottlenecks in progress, partly because, as he put it pithily, “big data is not knowledge”.
“Only 20 to 30 years ago, neuroanatomical and neurophysiological information was relatively scarce, while understanding mind-related processes seemed within reach,” Frégnac wrote. “Nowadays, we are drowning in a flood of information. Paradoxically, all sense of global understanding is in acute danger of getting washed away. Each overcoming of technological barriers opens a Pandora’s box by revealing hidden variables, mechanisms and nonlinearities, adding new levels of complexity.”
The neuroscientists Anne Churchland and Larry Abbott have also emphasised our difficulties in interpreting the massive amount of data that is being produced by laboratories all over the world: “Obtaining deep understanding from this onslaught will require, in addition to the skilful and creative application of experimental technologies, substantial advances in data analysis methods and intense application of theoretic concepts and models.”
There are indeed theoretical approaches to brain function, including to the most mysterious thing the human brain can do – produce consciousness. But none of these frameworks are widely accepted, for none has yet passed the decisive test of experimental investigation. It is possible that repeated calls for more theory may be a pious hope. It can be argued that there is no possible single theory of brain function, not even in a worm, because a brain is not a single thing. (Scientists even find it difficult to come up with a precise definition of what a brain is.)
As observed by Francis Crick, the co-discoverer of the DNA double helix, the brain is an integrated, evolved structure with different bits of it appearing at different moments in evolution and adapted to solve different problems. Our current comprehension of how it all works is extremely partial – for example, most neuroscience sensory research has focused on sight, not smell; smell is conceptually and technically more challenging. But the ways olfaction and vision work are different, both computationally and structurally. By focusing on vision, we have developed a very limited understanding of what the brain does and how it does it.
The nature of the brain – simultaneously integrated and composite – may mean that our future understanding will inevitably be fragmented and composed of different explanations for different parts. Churchland and Abbott spelled out the implication: “Global understanding, when it comes, will likely take the form of highly diverse panels loosely stitched together into a patchwork quilt.”
For more than half a century, all those highly diverse panels of patchwork we have been working on have been framed by thinking that brain processes involve something like those carried out in a computer. But that does not mean this metaphor will continue to be useful in the future. At the very beginning of the digital age, in 1951, the pioneer neuroscientist Karl Lashley argued against the use of any machine-based metaphor.
“Descartes was impressed by the hydraulic figures in the royal gardens, and developed a hydraulic theory of the action of the brain,” Lashley wrote. “We have since had telephone theories, electrical field theories and now theories based on computing machines and automatic rudders. I suggest we are more likely to find out about how the brain works by studying the brain itself, and the phenomena of behaviour, than by indulging in far-fetched physical analogies.”
This dismissal of metaphor has recently been taken even further by the French neuroscientist Romain Brette, who has challenged the most fundamental metaphor of brain function: coding. Since its inception in the 1920s, the idea of a neural code has come to dominate neuroscientific thinking – more than 11,000 papers on the topic have been published in the past 10 years. Brette’s fundamental criticism was that, in thinking about “code”, researchers inadvertently drift from a technical sense, in which there is a link between a stimulus and the activity of the neuron, to a representational sense, according to which neuronal codes represent that stimulus.
The unstated implication in most descriptions of neural coding is that the activity of neural networks is presented to an ideal observer or reader within the brain, often described as “downstream structures” that have access to the optimal way to decode the signals. But the ways in which such structures actually process those signals is unknown, and is rarely explicitly hypothesised, even in simple models of neural network function.
The processing of neural codes is generally seen as a series of linear steps – like a line of dominoes falling one after another. The brain, however, consists of highly complex neural networks that are interconnected, and which are linked to the outside world to effect action. Focusing on sets of sensory and processing neurons without linking these networks to the behaviour of the animal misses the point of all that processing.
By viewing the brain as a computer that passively responds to inputs and processes data, we forget that it is an active organ, part of a body that is intervening in the world, and which has an evolutionary past that has shaped its structure and function. This view of the brain has been outlined by the Hungarian neuroscientist György Buzsáki in his recent book The Brain from Inside Out. According to Buzsáki, the brain is not simply passively absorbing stimuli and representing them through a neural code, but rather is actively searching through alternative possibilities to test various options. His conclusion – following scientists going back to the 19th century – is that the brain does not represent information: it constructs it.
The metaphors of neuroscience – computers, coding, wiring diagrams and so on – are inevitably partial. That is the nature of metaphors, which have been intensely studied by philosophers of science and by scientists, as they seem to be so central to the way scientists think. But metaphors are also rich and allow insight and discovery. There will come a point when the understanding they allow will be outweighed by the limits they impose, but in the case of computational and representational metaphors of the brain, there is no agreement that such a moment has arrived. From a historical point of view, the very fact that this debate is taking place suggests that we may indeed be approaching the end of the computational metaphor. What is not clear, however, is what would replace it.
Scientists often get excited when they realise how their views have been shaped by the use of metaphor, and grasp that new analogies could alter how they understand their work, or even enable them to devise new experiments. Coming up with those new metaphors is challenging – most of those used in the past with regard to the brain have been related to new kinds of technology. This could imply that the appearance of new and insightful metaphors for the brain and how it functions hinges on future technological breakthroughs, on a par with hydraulic power, the telephone exchange or the computer. There is no sign of such a development; despite the latest buzzwords that zip about – blockchain, quantum supremacy (or quantum anything), nanotech and so on – it is unlikely that these fields will transform either technology or our view of what brains do.
One sign that our metaphors may be losing their explanatory power is the widespread assumption that much of what nervous systems do, from simple systems right up to the appearance of consciousness in humans, can only be explained as emergent properties – things that you cannot predict from an analysis of the components, but which emerge as the system functions.
In 1981, the British psychologist Richard Gregory argued that the reliance on emergence as a way of explaining brain function indicated a problem with the theoretical framework: “The appearance of ‘emergence’ may well be a sign that a more general (or at least different) conceptual scheme is needed … It is the role of good theories to remove the appearance of emergence. (So explanations in terms of emergence are bogus.)”
This overlooks the fact that there are different kinds of emergence: weak and strong. Weak emergent features, such as the movement of a shoal of tiny fish in response to a shark, can be understood in terms of the rules that govern the behaviour of their component parts. In such cases, apparently mysterious group behaviours are based on the behaviour of individuals, each of which is responding to factors such as the movement of a neighbour, or external stimuli such as the approach of a predator.
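Weak emergence of this kind is easy to make concrete. In the toy sketch below (my own illustration, not a model from the article), each simulated fish follows two purely local rules: drift towards nearby neighbours and swim away from a predator that gets too close. No rule mentions the shoal, yet the shoal as a whole appears to dodge the predator.

```python
# Toy sketch of weak emergence: each "fish" follows only local rules, yet the
# shoal as a whole appears to dodge the predator coherently.
import numpy as np

def step(fish, predator, neighbour_radius=2.0, flee_radius=3.0,
         cohesion=0.05, flee=0.3):
    """Advance every fish one time step using purely local rules."""
    new = fish.copy()
    for i, f in enumerate(fish):
        # Rule 1: drift towards the average position of nearby neighbours.
        d = np.linalg.norm(fish - f, axis=1)
        near = fish[(d > 0) & (d < neighbour_radius)]
        if len(near):
            new[i] += cohesion * (near.mean(axis=0) - f)
        # Rule 2: move directly away from the predator if it is close.
        away = f - predator
        dist = np.linalg.norm(away)
        if dist < flee_radius:
            new[i] += flee * away / (dist + 1e-9)
    return new

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fish = rng.normal(0, 1.0, size=(50, 2))   # 50 fish in a loose shoal
    predator = np.array([0.0, -4.0])
    for t in range(40):
        predator += np.array([0.0, 0.1])      # predator swims steadily upwards
        fish = step(fish, predator)
    # The shoal's centre has shifted away from the predator's path, although
    # no individual rule refers to "the shoal" at all.
    print("shoal centre:", fish.mean(axis=0).round(2))
```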
This kind of weak emergence cannot explain the activity of even the simplest nervous systems, never mind the working of your brain, so we fall back on strong emergence, where the phenomenon that emerges cannot be explained by the activity of the individual components. You and the page you are reading this on are both made of atoms, but your ability to read and understand comes from features that emerge through atoms in your body forming higher-level structures, such as neurons and their patterns of firing – not simply from atoms interacting.
Strong emergence has recently been criticised by some neuroscientists as risking “metaphysical implausibility”, because there is no evident causal mechanism, nor any single explanation, of how emergence occurs. Like Gregory, these critics claim that the reliance on emergence to explain complex phenomena suggests that neuroscience is at a key historical juncture, similar to that which saw the slow transformation of alchemy into chemistry. But faced with the mysteries of neuroscience, emergence is often our only resort. And it is not so daft – the amazing properties of deep-learning programmes, which at root cannot be explained by the people who design them, are essentially emergent properties.
Interestingly, while some neuroscientists are discombobulated by the metaphysics of emergence, researchers in artificial intelligence revel in the idea, believing that the sheer complexity of modern computers, or of their interconnectedness through the internet, will lead to what is dramatically known as the singularity. Machines will become conscious.
There are plenty of fictional explorations of this possibility (in which things often end badly for all concerned), and the subject certainly excites the public’s imagination, but there is no reason, beyond our ignorance of how consciousness works, to suppose that it will happen in the near future. In principle, it must be possible, because the working hypothesis is that mind is a product of matter, which we should therefore be able to mimic in a device. But the scale of complexity of even the simplest brains dwarfs any machine we can currently envisage. For decades – centuries – to come, the singularity will be the stuff of science fiction, not science.
A related view of the nature of consciousness turns the brain-as-computer metaphor into a strict analogy. Some researchers view the mind as a kind of operating system that is implemented on neural hardware, with the implication that our minds, seen as a particular computational state, could be uploaded on to some device or into another brain. In the way this is generally presented, this is wrong, or at best hopelessly naive.
The materialist working hypothesis is that brains and minds, in humans and maggots and everything else, are identical. Neurons and the processes they support – including consciousness – are the same thing. In a computer, software and hardware are separate; however, our brains and our minds consist of what can best be described as wetware, in which what is happening and where it is happening are completely intertwined.
Imagining that we can repurpose our nervous system to run different programmes, or upload our mind to a server, might sound scientific, but lurking behind this idea is a non-materialist view going back to Descartes and beyond. It implies that our minds are somehow floating about in our brains, and could be transferred into a different head or replaced by another mind. It would be possible to give this idea a veneer of scientific respectability by posing it in terms of reading the state of a set of neurons and writing that to a new substrate, organic or artificial.
But to even begin to imagine how that might work in practice, we would need both an understanding of neuronal function that is far beyond anything we can currently envisage, and would require unimaginably vast computational power and a simulation that precisely mimicked the structure of the brain in question. For this to be possible even in principle, we would first need to be able to fully model the activity of a nervous system capable of holding a single state, never mind a thought. We are so far away from taking this first step that the possibility of uploading your mind can be dismissed as a fantasy, at least until the far future.
For the moment, the brain-as-computer metaphor retains its dominance, although there is disagreement about how strong a metaphor it is. In 2015, the roboticist Rodney Brooks chose the computational metaphor of the brain as his pet hate in his contribution to a collection of essays entitled This Idea Must Die. Less dramatically, but drawing similar conclusions, two decades earlier the historian S Ryan Johansson argued that “endlessly debating the truth or falsity of a metaphor like ‘the brain is a computer’ is a waste of time. The relationship proposed is metaphorical, and it is ordering us to do something, not trying to tell us the truth.”
On the other hand, the US expert in artificial intelligence, Gary Marcus, has made a robust defence of the computer metaphor: “Computers are, in a nutshell, systematic architectures that take inputs, encode and manipulate information, and transform their inputs into outputs. Brains are, so far as we can tell, exactly that. The real question isn’t whether the brain is an information processor, per se, but rather how do brains store and encode information, and what operations do they perform over that information, once it is encoded.”
Marcus went on to argue that the task of neuroscience is to “reverse engineer” the brain, much as one might study a computer, examining its components and their interconnections to decipher how it works. This suggestion has been around for some time. In 1989, Crick recognised its attractiveness, but felt it would fail, because of the brain’s complex and messy evolutionary history – he dramatically claimed it would be like trying to reverse engineer a piece of “alien technology”. Attempts to find an overall explanation of how the brain works that flow logically from its structure would be doomed to failure, he argued, because the starting point is almost certainly wrong – there is no overall logic.
Reverse engineering a computer is often used as a thought experiment to show how, in principle, we might understand the brain. Inevitably, these thought experiments are successful, encouraging us to pursue this way of understanding the squishy organs in our heads. But in 2017, a pair of neuroscientists decided to actually do the experiment on a real computer chip, which had a real logic and real components with clearly designed functions. Things did not go as expected.
The duo – Eric Jonas and Konrad Paul Kording – employed the very techniques they normally used to analyse the brain and applied them to the MOS 6507 processor found in computers from the late 70s and early 80s that enabled those machines to run video games such as Donkey Kong and Space Invaders.
First, they obtained the connectome of the chip by scanning the 3510 enhancement-mode transistors it contained and simulating the device on a modern computer (including running the games programmes for 10 seconds). They then used the full range of neuroscientific techniques, such as “lesions” (removing transistors from the simulation), analysing the “spiking” activity of the virtual transistors and studying their connectivity, observing the effect of various manipulations on the behaviour of the system, as measured by its ability to launch each of the games.
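To give a flavour of the “lesion” logic in code, here is a toy sketch: disable one component at a time, re-run the system on each task, and record which tasks break. This is emphatically not Jonas and Kording’s pipeline; the run_system stand-in and the dependency table below are invented for illustration, whereas the real study ran a full transistor-level simulation of the chip.

```python
# Toy sketch of the "lesion" logic: disable one component at a time, re-run
# the system on each task, and record which tasks fail. Not Jonas and
# Kording's actual pipeline; the simulator below is an invented stand-in.
from typing import Callable, Dict, List, Set

def lesion_study(run_system: Callable[[str, Set[int]], bool],
                 components: List[int],
                 tasks: List[str]) -> Dict[int, List[str]]:
    """Return, for each lesioned component, the tasks that no longer run.

    run_system(task, disabled) must return True if the task still works with
    the given set of components disabled.
    """
    broken: Dict[int, List[str]] = {}
    for c in components:
        failures = [t for t in tasks if not run_system(t, {c})]
        if failures:
            broken[c] = failures
    return broken

if __name__ == "__main__":
    # Invented stand-in "chip": each task depends on an arbitrary subset of
    # components. A real study would simulate the chip transistor by transistor.
    dependencies = {"donkey_kong": {1, 2, 5}, "space_invaders": {2, 3}, "pitfall": {4}}

    def run_system(task: str, disabled: Set[int]) -> bool:
        return not (dependencies[task] & disabled)

    result = lesion_study(run_system, components=list(range(1, 6)),
                          tasks=list(dependencies))
    print(result)   # e.g. {1: ['donkey_kong'], 2: ['donkey_kong', 'space_invaders'], ...}
```

The interpretive trap appears already at this level: a transistor whose removal breaks Donkey Kong is not thereby “the Donkey Kong transistor”, any more than a lesioned brain region that abolishes a behaviour is the centre for that behaviour.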
Despite deploying this powerful analytical armoury, and despite the fact that there is a clear explanation for how the chip works (it has “ground truth”, in technospeak), the study failed to detect the hierarchy of information processing that occurs inside the chip. As Jonas and Kording put it, the techniques fell short of producing “a meaningful understanding”. Their conclusion was bleak: “Ultimately, the problem is not that neuroscientists could not understand a microprocessor, the problem is that they would not understand it given the approaches they are currently taking.”
This sobering outcome suggests that, despite the attractiveness of the computer metaphor and the fact that brains do indeed process information and somehow represent the external world, we still need to make significant theoretical breakthroughs in order to make progress. Even if our brains were designed along logical lines, which they are not, our present conceptual and analytical tools would be completely inadequate for the task of explaining them. This does not mean that simulation projects are pointless – by modelling (or simulating) we can test hypotheses and, by linking the model with well-established systems that can be precisely manipulated, we can gain insight into how real brains function. This is an extremely powerful tool, but a degree of modesty is required when it comes to the claims that are made for such studies, and realism is needed with regard to the difficulties of drawing parallels between brains and artificial systems.
Current ‘reverse engineering’ techniques cannot deliver a proper understanding of an Atari console chip, let alone of a human brain.
Even something as apparently straightforward as working out the storage capacity of a brain falls apart when it is attempted. Such calculations are fraught with conceptual and practical difficulties. Brains are natural, evolved phenomena, not digital devices. Although it is often argued that particular functions are tightly localised in the brain, as they are in a machine, this certainty has been repeatedly challenged by new neuroanatomical discoveries of unsuspected connections between brain regions, or amazing examples of plasticity, in which people can function normally without bits of the brain that are supposedly devoted to particular behaviours.
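To see how quickly such a calculation unravels, here is a deliberately crude back-of-envelope sketch in Python. The inputs are commonly cited but contestable round numbers (about 86 billion neurons; somewhere between a thousand and ten thousand synapses per neuron; one bit per synapse if a synapse were a binary switch, or roughly 4.7 bits following one published 2015 estimate), and none of them settles the prior question of whether "bits per synapse" is a meaningful quantity at all.

# Crude brain "storage capacity" estimates under different assumptions.
# None of these numbers is authoritative; the point is how far they diverge.
NEURONS = 8.6e10  # commonly cited neuron count for a human brain

for synapses_per_neuron in (1e3, 1e4):
    for bits_per_synapse in (1.0, 4.7):
        total_bits = NEURONS * synapses_per_neuron * bits_per_synapse
        terabytes = total_bits / 8 / 1e12
        print(f"{synapses_per_neuron:.0e} synapses/neuron, "
              f"{bits_per_synapse} bits/synapse -> ~{terabytes:,.0f} TB")

Under these assumptions the answer swings from roughly 10 terabytes to roughly 500 terabytes, a spread of about fifty-fold, before any of the conceptual objections above are even raised.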
In reality, the very structures of a brain and a computer are completely different. In 2006, Larry Abbott wrote an essay titled “Where are the switches on this thing?”, in which he explored the potential biophysical bases of that most elementary component of an electronic device – a switch. Although inhibitory synapses can change the flow of activity by rendering a downstream neuron unresponsive, such interactions are relatively rare in the brain.
A neuron is not like a binary switch that can be turned on or off, forming a wiring diagram. Instead, neurons respond in an analogue way, changing their activity in response to changes in stimulation. The nervous system alters its working by changes in the patterns of activation in networks of cells composed of large numbers of units; it is these networks that channel, shift and shunt activity. Unlike any device we have yet envisaged, the nodes of these networks are not stable points like transistors or valves, but sets of neurons – hundreds, thousands, tens of thousands strong – that can respond consistently as a network over time, even if the component cells show inconsistent behaviour.
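The difference can be caricatured in a few lines of Python; both functions below are toy illustrations of my own, not models taken from Abbott's essay or from any biophysical literature. The "switch" snaps between off and on at a threshold, while the "neuron" is a simple firing-rate unit whose output varies smoothly and continuously with its input drive.

import math

def toy_switch(gate_voltage, threshold=0.7):
    """Caricature of a digital switch: fully off or fully on."""
    return 1 if gate_voltage > threshold else 0

def toy_rate_neuron(drive, gain=4.0, bias=1.0):
    """Caricature of an analogue neuron: firing rate (arbitrary units)
    varies smoothly with input drive via a logistic curve."""
    return 1.0 / (1.0 + math.exp(-gain * (drive - bias)))

for drive in (0.0, 0.5, 0.9, 1.0, 1.1, 1.5, 2.0):
    print(f"drive {drive:3.1f}: switch -> {toy_switch(drive)}, "
          f"rate neuron -> {toy_rate_neuron(drive):.2f}")

Real neurons are of course far richer than a logistic curve; the point of the sketch is only the qualitative gap between a wiring diagram of discrete switches and a network of graded, analogue units.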
Understanding even the simplest of such networks is currently beyond our grasp. Eve Marder, a neuroscientist at Brandeis University, has spent much of her career trying to understand how a few dozen neurons in the lobster’s stomach produce a rhythmic grinding. Despite vast amounts of effort and ingenuity, we still cannot predict the effect of changing one component in this tiny network that is not even a simple brain.
This is the great problem we have to solve. On the one hand, brains are made of neurons and other cells, which interact together in networks, the activity of which is influenced not only by synaptic activity, but also by various factors such as neuromodulators. On the other hand, it is clear that brain function involves complex dynamic patterns of neuronal activity at a population level. Finding the link between these two levels of analysis will be a challenge for much of the rest of the century, I suspect. And the prospect of properly understanding what is happening in cases of mental illness is even further away.
Not all neuroscientists are pessimistic – some confidently claim that the application of new mathematical methods will enable us to understand the myriad interconnections in the human brain. Others – like myself – favour studying animals at the other end of the scale, focusing our attention on the tiny brains of worms or maggots and employing the well-established approach of seeking to understand how a simple system works and then applying those lessons to more complex cases. Many neuroscientists, if they think about the problem at all, simply consider that progress will inevitably be piecemeal and slow, because there is no grand unified theory of the brain lurking around the corner.
There are many alternative scenarios about how the future of our understanding of the brain could play out: perhaps the various computational projects will come good and theoreticians will crack the functioning of all brains, or the connectomes will reveal principles of brain function that are currently hidden from us. Or a theory will somehow pop out of the vast amounts of imaging data we are generating. Or we will slowly piece together a theory (or theories) out of a series of separate but satisfactory explanations. Or by focusing on simple neural network principles we will understand higher-level organisation. Or some radical new approach integrating physiology and biochemistry and anatomy will shed decisive light on what is going on. Or new comparative evolutionary studies will show how other animals are conscious and provide insight into the functioning of our own brains. Or unimagined new technology will change all our views by providing a radical new metaphor for the brain. Or our computer systems will provide us with alarming new insight by becoming conscious. Or a new framework will emerge from cybernetics, control theory, complexity and dynamical systems theory, semantics and semiotics. Or we will accept that there is no theory to be found because brains have no overall logic, just adequate explanations of each tiny part, and we will have to be satisfied with that.
Wednesday, 24 August 2016
How tricksters make you see what they want you to see
By David Robson in the BBC
Could you be fooled into “seeing” something that doesn’t exist?
Matthew Tompkins, a magician-turned-psychologist at the University of Oxford, has been investigating the ways that tricksters implant thoughts in people’s minds. With a masterful sleight of hand, he can make a poker chip disappear right in front of your eyes, or conjure a crayon out of thin air.
And finally there is the “phantom vanish trick”, which was the focus of his latest experiment.
Although interesting in themselves, the first three videos are really a warm-up for this more ambitious illusion, in which Tompkins tries to plant an image in the participants’ minds using the power of suggestion alone.
Around a third of his participants believed they had seen Tompkins take an object from the pot and tuck it into his hand – only to make it disappear later on. In fact, his fingers were always empty, but his clever pantomiming created an illusion of a real, visible object.
How is that possible? Psychologists have long known that the brain acts like an expert art restorer, touching up the rough images hitting our retina according to context and expectation. This “top-down processing” allows us to build a clear picture from the barest of details (such as this famous picture of the “Dalmatian in the snow”). It’s the reason we can make out a face in the dark, for instance. But occasionally, the brain may fill in too many of the gaps, allowing expectation to warp a picture so that it no longer reflects reality. In some ways, we really do see what we want to see.
This “top-down processing” is reflected in measures of brain activity, and it could easily explain the phantom vanish trick. The warm-up videos, the direction of his gaze, and his deft hand gestures all primed the participants’ brains to see the object between his fingers, and for some participants, this expectation overrode the reality in front of their eyes.
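One way to make "expectation overriding reality" concrete is a toy Bayesian sketch, offered as an illustration of top-down processing in general rather than as a model of Tompkins' actual experiment. The percept is taken to be whichever hypothesis, object in the hand or hand empty, has the higher posterior once a prior (raised by the warm-up videos and the pantomime) is combined with deliberately ambiguous sensory evidence; all the numbers are invented for illustration.

def posterior_object_present(prior, p_evidence_if_present, p_evidence_if_absent):
    """Bayes' rule for a two-hypothesis perceptual decision."""
    joint_present = prior * p_evidence_if_present
    joint_absent = (1 - prior) * p_evidence_if_absent
    return joint_present / (joint_present + joint_absent)

# Ambiguous retinal evidence: almost equally likely under either hypothesis.
p_if_present, p_if_absent = 0.5, 0.6

for label, prior in (("unprimed observer", 0.1), ("primed observer", 0.7)):
    p = posterior_object_present(prior, p_if_present, p_if_absent)
    verdict = "reports seeing an object" if p > 0.5 else "reports an empty hand"
    print(f"{label}: posterior {p:.2f} -> {verdict}")

With identical weak evidence, the primed observer's posterior tips past one half and the unprimed observer's does not, which captures the shape, though certainly not the mechanism, of what roughly a third of participants reported.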
Sunday, 1 March 2015
14 Things To Know Before You Start Meditating
Sasha Bronner in The Huffington Post
New to meditating? It can be confusing. Not new to meditating? It can still be confusing.
The practice of meditation is said to have been around for thousands of years -- and yet, in the last few, especially in America, it seems that everyone knows at least one person who has taken on the ancient art of de-stressing.
Because it has been around for so long and because there are many different types of meditation, there are some essential truths you should know before you too take the dive into meditation or mindfulness (or both). Take a look at the suggestions below.
1. You don't need a mantra (but you can have one if you want).
It has become common for people to confuse mantra with the idea of an intention or specific words to live by. A motto. But the actual word "mantra" means something quite different. Man means mind and tra means vehicle. A mantra is a mind-vehicle. Mantras can be used in meditation as a tool to help your mind enter (or stay in) your meditation practice.
Other types of meditation use things like sound, counting breaths or even just the breath itself as a similar tool. Another way to think about a mantra is like an anchor. It anchors your mind as you meditate and can be what you come back to when your thoughts (inevitably) wander.
2. Don’t expect your brain to go blank.
One of the biggest misconceptions about meditation is that your mind is supposed to go blank and that you reach a super-Zen state of consciousness. This is typically not true. It's important to keep in mind that you don’t have to try to clear thoughts from your brain during meditation.
The "nature of the mind to move from one thought to another is in fact the very basis of meditation," says Deepak Chopra, a meditation expert and founder of the Chopra Center for Wellbeing. "We don’t eliminate the tendency of the mind to jump from one thought to another. That’s not possible anyway." Depending on the type of meditation you learn, there are tools for gently bringing your focus back to your meditation practice. Alternatively, some types of meditation actually emphasize being present and mindful to thoughts as they arise as part of the practice.
3. You do not have to sit cross-legged or hold your hands in any particular position.
You can sit in any position that is comfortable to you. Most people sit upright in a chair or on a cushion. Your hands can fall gently in your lap or at your sides. It is best not to lie down unless you’re doing a body scan meditation or meditation for sleep.
4. Having said that, it’s also okay if you do fall asleep.
It’s very common to doze off during meditation, and some believe that the brief sleep you get is actually very restorative. It’s not the goal, but if it’s a byproduct of your meditation, that is OK. Other practices offer tips on how to stay more alert if you tend to fall asleep (check out No. 19 on these tips from Headspace), like sitting upright in a chair. In our experience, the relaxation that can come from meditation is a wonderful thing -- and if that means a mini-snooze, so be it.
5. There are many ways to learn.
With meditation becoming so available to the masses, you can learn how to meditate alone, in a group, on a retreat, with your phone or even by listening to guided meditations online. Everyone has a different learning style and there are plenty of options out there to fit individual needs. Read our suggestions for how to get started.
6. You can meditate for a distinct purpose or for general wellness.
Some meditation exercises are aimed at one goal, like helping to ease anxiety or helping people who have trouble sleeping. One popular mindfulness meditation technique, loving-kindness meditation, promotes the positive act of wishing ourselves or others happiness. However, if you don't have a specific goal in mind, you can still reap the benefits of the practice.
7. Meditation has so many health perks.
Meditation can help boost the immune system, reduce stress and anxiety, improve concentration, decrease blood pressure, improve your sleep, increase your happiness, and has even helped people deal with alcohol or smoking addictions.
8. It can also physically change your brain.
Researchers have not only looked at the brains of meditators and non-meditators to study the differences, but they have also started looking at a group of brains before and after eight weeks of mindfulness meditation. The results are remarkable. Scientists noted everything from "changes in grey matter volume to reduced activity in the 'me' centers of the brain to enhanced connectivity between brain regions," Forbes reported earlier this year.
Those who participated in an eight-week mindfulness program also showed signs of a shrinking of the amygdala (the brain’s "fight or flight" center) as well as a thickening of the pre-frontal cortex, which handles brain functions like concentration and awareness.
Researchers also looked at brain imaging on long-term, experienced meditators. Many, when not in a state of meditation, had brain image results that looked more like the images of a regular person's brain while meditating. In other words, the experienced meditator's brain is remarkably different than the non-meditator's brain.
9. Oprah meditates.
So does Paul McCartney, Jerry Seinfeld, Howard Stern, Lena Dunham, Barbara Walters, Arianna Huffington and Kobe Bryant. Oprah teams up with Deepak Chopra for 21-day online meditation experiences that anyone can join, anywhere. The program is free and the next one begins in March 2015.
10. It’s more mainstream than you might think.
Think meditation is still a new-age concept? Think again. GQ magazine wrote their own guide to Transcendental Meditation. Time’s February 2014 cover story was devoted to "the mindful revolution" and many big companies, such as Google, Apple, Nike and HBO, have started promoting meditation at work with free classes and new meditation rooms.
11. Mindfulness and meditation are not the same thing.
The two are talked about in conjunction often because one form of meditation is called mindfulness meditation. Mindfulness is defined most loosely as cultivating a present awareness in your everyday life. One way to do this is through meditation -- but not all meditation practices necessarily focus on mindfulness.
Mindfulness meditation is referred to most often when experts talk about the health benefits of meditation. Anderson Cooper recently did a special on his experience practicing mindfulness with expert Jon Kabat-Zinn for "60 Minutes."
12. Don’t believe yourself when you say you don’t have time to meditate.
While some formal meditation practices call for 20 minutes, twice a day, many other meditation exercises can be as short as five or 10 minutes. We easily spend that amount of time flipping through Netflix or liking things on Instagram. For some, it’s setting the morning alarm 10 minutes earlier or getting off email a few minutes before dinner to practice.
Another way to think about incorporating meditation into your daily routine is likening it to brushing your teeth. You might not do it at the exact same time each morning, but you always make sure you brush your teeth before you leave the house for the day. For those who start to see the benefits of daily meditation, it becomes a non-negotiable part of your routine.
13. You may not think you’re “doing it right” the first time you meditate.
Or the second or the third. That’s OK. It’s an exercise that you practice just like sit-ups or push-ups at the gym. You don’t expect a six-pack after one day of exercise, so think of meditation the same way.
14. Take a step back.
Many meditation teachers encourage you to assess your progress by noticing how you feel in between meditations -- not while you’re sitting down practicing one. It’s not uncommon to feel bored, distracted, frustrated or even discouraged some days while meditating. Hopefully you also have days of feeling energized, calm, happy and at peace. Instead of judging each meditation, try to think about how you feel throughout the week. Less stressed, less road rage, sleeping a little bit better? Sounds like it's working.
Friday, 27 December 2013
Brainwashed by the cult of the super-rich
Followers, in thrall to Harrods and Downton Abbey, repeat the mantra that the greed of a few means prosperity for all
Last week, Tory MP Esther McVey, Iain Duncan Smith's deputy, insisted it was "right" that half a million Britons be dependent on food banks in "tough times". Around the same time, the motor racing heiress Tamara Ecclestone totted up a champagne bill of £30,000 in one evening. A rich teenager in Texas has just got away with probation for drunkenly running over and killing four people because his lawyers argued successfully that he suffered from "affluenza", which rendered him unable to handle a car responsibly. What we've been realising for some time now is that, for all the team sport rhetoric, only two sides are really at play in Britain and beyond: Team Super-Rich and Team Everyone Else.
The rich are not merely different: they've become a cult which drafts us as members. We are invited to deceive ourselves into believing we are playing for the same stakes while worshipping the same ideals, a process labelled "aspiration". Reaching its zenith at this time of year, our participation in cult rituals – buy, consume, accumulate beyond need – helps mute our criticism and diffuse anger at systemic exploitation. That's why we buy into the notion that a £20 Zara necklace worn by the Duchess of Cambridge on a designer gown costing thousands of pounds is evidence that she is like us. We hear that the monarch begrudges police officers who guard her family and her palaces a handful of cashew nuts and interpret it as eccentricity rather than an apt metaphor for the Dickensian meanness of spirit that underlies the selective concentration of wealth. The adulation of royalty is not a harmless anachronism; it is calculated totem worship that only entrenches the bizarre notion that some people are rich simply because they are more deserving but somehow they are still just like us.
Cults rely on spectacles of opulence intended to stoke an obsessive veneration for riches. The Rich Kids of Instagram who showed us what the "unapologetically uber-rich" can do because they have "more money than you" will find further fame in a novel and a reality show. Beyond the sumptuous lifestyle spreads in glossies or the gift-strewn shop windows at Harrods and Selfridges, and Gwyneth Paltrow's Goop website, shows like Downton Abbey keep us in thrall to the idea of moolah, mansions and autocratic power. They help us forget that wealthy British landowners, including the Queen, get millions of pounds in farming subsidies while the rest of us take lower salaries and slashed pensions back to modest homes we probably don't own. Transfixed by courtroom dramas involving people who can spend a small family's living income on flower arrangements, we don't ask why inherited wealth is rewarded by more revenue but tough manual labour or care work by low wages.
Cue the predictable charge of "class envy" or what Boris Johnson dismisses as "bashing or moaning or preaching or bitching". Issued by its high priests, this brand of condemnation is integral to the cult of the rich. We must repeat the mantra that the greed of a few means prosperity for all. Those who stick to writ and offer humble thanks to the acquisitive are contradictorily assured by mansion-dwellers that money does not buy happiness and that electric blankets can replace central heating. Enter "austerity chic" wherein celebrity footballers are hailed for the odd Poundland foray, millionaire property pundits teach us how to "make do" with handmade home projects and celebrity chefs demonstrate how to "save" on ingredients – after we've purchased their money-spinning books, of course.
Cultish thinking means that the stupendously rich who throw small slivers of their fortunes at charity, or merely grace lavish fundraisers – like Prince William's Winter Whites gala for the homeless at his taxpayer-funded Kensington Palace home – with their presence, become instant saints. The poor and the less well-off, subject to austerity and exploitation, their "excesses" constantly policed and criminalised, are turned into objects of patronage, grateful canvasses against which the generosity of wealth can be stirringly displayed. The cult of the rich propounds the idea that vast economic inequalities are both natural and just: the winner who takes most is, like any cult hero, just more intelligent and deserving, even when inherited affluence gives them a head start.
We are mildly baffled rather than galvanised into righteous indignation when told that the rich are being persecuted – bullied for taxes and lynched for bonuses. The demonising of the poor is the flip side of the cult of the rich or, as a friend puts it, together they comprise the yin and yang of maintaining a dismal status quo. It is time to change it through reality checks, not reality shows.
Wednesday, 10 October 2012
Afterlife exists says top brain surgeon
A prominent scientist who had previously dismissed the possibility of an afterlife says he has reconsidered his belief after an out-of-body experience which has convinced him that heaven exists.
Dr Eben Alexander, a Harvard-educated neurosurgeon, fell into a coma for seven days in 2008 after contracting meningitis.
During his illness Dr Alexander says that the part of his brain which controls human thought and emotion "shut down" and that he then experienced "something so profound that it gave me a scientific reason to believe in consciousness after death." In an essay for American magazine Newsweek, which he wrote to promote his book Proof of Heaven, Dr Alexander says he was met by a beautiful blue-eyed woman in a "place of clouds, big fluffy pink-white ones" and "shimmering beings".
He continues: "Birds? Angels? These words registered later, when I was writing down my recollections. But neither of these words do justice to the beings themselves, which were quite simply different from anything I have known on this planet. They were more advanced. Higher forms." The doctor adds that a sound "huge and booming like a glorious chant" came down from above: "I wondered if the winged beings were producing it. The sound was palpable and almost material, like a rain that you can feel on your skin but doesn't get you wet."
Dr Alexander says he had heard stories from patients who spoke of out-of-body experiences but had disregarded them as "wishful thinking"; he has reconsidered his opinion following his own experience.
He added: "I know full well how extraordinary, how frankly unbelievable, all this sounds. Had someone, even a doctor, told me a story like this in the old days, I would have been quite certain that they were under the spell of some delusion.
"But what happened to me was, far from being delusional, as real or more real than any event in my life. That includes my wedding day and the birth of my two sons." He added: "I've spent decades as a neurosurgeon at some of the most prestigous medical institutions in our country. I know that many of my peers hold as I myself did to the theory that the brain, and in particular the cortex, generates consciousness and that we live in a universe devoid of any kind of emotion, much less the unconditional love that I now know God and the universe have toward us.
"But that belief, that theory, now lies broken at our feet. What happened to me destroyed it."
Tuesday, 5 July 2011
INTELLIGENT MAN
Whenever an INTELLIGENT MAN makes an important decision
He closes his eyes Thinks a lot
Listens to his heart. Uses his brain.
Contemplates pros and cons
&
Finally does what his WIFE says
Thursday, 26 May 2011
'A good spinner needs a ten-year apprenticeship' Terry Jenner
Nagraj Gollapudi
September 27, 2007
Terry Jenner played nine Tests for Australia in the 1970s but it is as a coach, and specifically as Shane Warne's mentor and the man Warne turned to in a crisis, that he is better known. Jenner said that his CV wouldn't be complete without a trip to India, the spiritual home of spin bowling, and this September he finally made it when he was invited by the MAC Spin Foundation to train youngsters in Chennai. Jenner spoke at length to Cricinfo on the art and craft of spin bowling in general and legspin in particular. What follows is the first in a two-part interview.
"Most of the time the art of the spin bowler is to get the batsman to look to drive you. That's where your wickets come"
How has the role of spin changed over the decades you've watched cricket?
The limited-overs game has made the major change to spin bowling. When I started playing, for example, you used to break partnerships in the first couple of the days of the match and then on the last couple of days you were expected to play more of a major role. But in recent years, with the entry of Shane Warne, who came on on the first day of the Test and completely dominated on good pitches, it has sort of changed the specs that way.
But the difficulty I'm reading at the moment is that captains and coaches seem to be of the opinion that spin bowlers are there either to rest the pace bowlers or to just keep it tight; they are not allowed to risk runs to gain rewards. That's the biggest change.
In the 1960s, when I first started, you were allowed to get hit around the park a bit, as long as you managed to get wickets - it was based more on your strike-rate than how many runs you went for. So limited-overs cricket has influenced bowlers to bowl a negative line and not the attacking line, and I don't know with the advent of Twenty20 how we'll advance. We will never go back, unfortunately, to the likes of Warne and the wrist-spinners before him, who went for runs but whose quality was greater.
What are the challenges of being a spinner in modern cricket?
The huge challenge is just getting to bowl at club level through to first-class level. When you get to the first-class level they tend to allow you to bowl, but once you get to bowl, instead of allowing you to be a free spirit, you are restricted to men around the bat - push it through, don't let the batsman play the stroke, don't free their arms up ... all those modern thoughts on how the spinner should bowl.
Do spinners spin the ball less these days?
The capacity to spin is still there, but to spin it you actually have to flight it up, and if you flight it up there's always that risk of over-pitching and the batsman getting you on the full, and therefore the risk of runs being scored. So if you consider the general mentality of a spinner trying to bowl dot balls and bowl defensive lines, then you can't spin it.
I'll give you an example of an offspin bowler bowling at middle and leg. How far does he want to spin it? If he needs to spin it, he needs to bowl a foot outside the off stump and spin it back, but if he has to bowl a defensive line then he sacrifices the spin, otherwise he'll be just bowling down the leg side.
It's impossible for you to try and take a wicket every ball, but when you're really young that's what you do - you just try and spin it as hard as you can and take the consequences, and that usually means you don't get to bowl many overs. The art of improving is when you learn how to get into your overs, get out of your overs, and use the middle deliveries to attack
Legspinners bowling at leg stump or just outside - there's been so few over the years capable of spinning the ball from just outside leg past off, yet that's the line they tend to bowl. So I don't think they spin it any less; the capacity to spin is still wonderful. I still see little kids spinning the ball a long way. I take the little kids over to watch the big kids bowl and I say, "Have a look: the big kids are all running in off big, long runs, jumping high in the air and firing it down there, and more importantly going straight." And I say to the little kids, "They once were like you. And one of you who hangs on to the spin all the way through is the one that's gonna go forward."
Great spinners have always bowled at the batsman and not to the batsman. But the trend these days is that spinners are becoming increasingly defensive.
First of all they play him [the young spinner] out of his age group. Earlier the idea of finding a good, young talent, when people identified one, was that they didn't move him up and play him in the higher grade or in the higher age group. There was no different age-group cricket around back then, and if you were a youngster you went into the seniors and you played in the bottom grade and then you played there for a few years while you learned the craft and then they moved you to the next grade. So you kept going till you came out the other end and that could've been anywhere around age 19, 20, 21 or whatever. Now the expectation is that by the time you are 16 or 17 you are supposed to be mastering this craft.
It's a long apprenticeship. If you find a good 10- or 11-year-old, he needs to have a ten-year apprenticeship at least. There's a rule of thumb here that says that if the best there's ever been, which is Shane Warne - and there is every reason to believe he is - sort of started to strike his best at 23-24, what makes you think we can find 18- or 19-year-olds to do it today? I mean, he [Warne] has only been out of the game for half an hour and yet we're already expecting kids to step up to the plate much, much before they are ready.
It's a game of patience with spin bowlers and developing them. It's so important that we are patient in helping them, understanding their need for patience, at the same time understanding from outside the fence - as coach, captain etc. We need to understand them and allow them to be scored off, allow them to learn how to defend themselves, allow them to understand that there are times when you do need to defend. But most of the time the art of the spin bowler is to get the batsman to look to drive you. That's where your wickets come, that's where you spin it most.
Warne said you never imposed yourself as a coach.
With Warne, when I first met him he bowled me a legbreak which spun nearly two feet-plus, and I was just in awe. All I wanted to do was try and help that young man become the best he could be, just to help him understand his gift, understand what he had, and to that end I never tried to change him. That's what he meant by me never imposing myself. We established a good relationship based on the basics of bowling and his basics were always pretty good. Over the years whenever he wandered away from them, we worked it back to them. There were lot of times over his career where, having a bowled a lot of overs, some bad habits had come in. It was not a case of standing over him. I was just making him aware of where he was at the moment and how he could be back to where he was when he was spinning them and curving them. His trust was the most important gift that he gave me, and it's an important thing for a coach to understand not to breach that trust. That trust isn't about secrets, it's about the trust of the information you give him, that it won't harm him, and that was our relationship.
I don't think of myself as an authority on spin bowling. I see myself as a coach who's developed a solid learning by watching and working with the best that's been, and a lot of other developing spinners. So I'm in a terrific business-class seat because I get to see a lot of this stuff and learn from it, and of course I've spoken to Richie Benaud quite a lot over the years.
Shane would speak to Abdul Qadir and he would feed back to me what Abdul Qadir said. Most people relate your knowledge to how many wickets you took and I don't think that's relevant. I think it's your capacity to learn and deliver, to communicate that what you've learned back to people.
From the outside it seems like there is a problem of over-coaching these days.
There are so many coaches now. We have specialist coaches, general coaches, we've got sports science and psychology. Coaching has changed.
Shane, in his retirement speech, referred to me as his technical coach (by which he meant technique), as Dr Phil [the psychologist on the Oprah Winfrey Show]. That means when he wanted someone to talk to, I was the bouncing board. He said the most uplifting thing ever said about me: that whenever he rang me, when he hung the phone up he always felt better for having made the call.
"Think high, spin up" was the first mantra you shared with Warne. What does it mean?
When I first met Shane his arm was quite low, and back then, given I had no genuine experience of coaching spin, I asked Richie Benaud and made him aware of this young Shane Warne fellow and asked him about the shoulder being low. Richie said, "As long as he spins it up from the hand, it'll be fine." But later, when we tried to introduce variations, we talked about the topspinner and I said to Shane, "You're gonna have to get your shoulder up to get that topspinner to spin over the top, otherwise it spins down low and it won't produce any shape." So when he got back to his mark the trigger in his mind was "think high, spin up", and when he did that he spun up over the ball and developed the topspinner. Quite often even in the case of the legbreak it was "think high, spin up" because his arm tended to get low, especially after his shoulder operation.
Can you explain the risk-for-reward theory that you teach youngsters about?
This is part of learning the art and craft. It's impossible for you to try and take a wicket every ball, but when you're really young that's what you do - you just try and spin it as hard as you can and take the consequences, and that usually means you don't get to bowl many overs. The art of improving is when you learn how to get into your overs, get out of your overs, and use the middle deliveries in an over to attack. I called them the risk and reward balls in an over. In other words, you do risk runs off those deliveries but you can also gain rewards.
There's been no one in the time that I've been around who could theoretically bowl six wicket-taking balls an over other than SK Warne. The likes of [Anil] Kumble ... he's trying to keep the lines tight and keep you at home, keep you at home while he works on you, but he's not trying to get you out every ball, he's working a plan.
The thing about excellent or great bowlers is that they rarely go for a four or a six off the last delivery. That is the point I make to kids, explaining how a mug like me used to continually go for a four or six off the last ball of the over while trying to get a wicket so I could stay on. And when you do that, that's the last thing your captain remembers, that's the last thing your team-mates remember, it's the last thing the selectors remember. So to that end you are better off bowling a quicker ball in line with the stumps which limits the batsman's opportunities to attack. So what I'm saying is, there's always a time when you need to defend, but you've got to know how to attack and that's why you need such a long apprenticeship.
Warne said the most uplifting thing ever said about me: that whenever he rang me, when he hung the phone up he always felt better for having made the call
Richie Benaud writes in his book that his dad told him to keep it simple and concentrate on perfecting the stock ball. Benaud says that you shouldn't even think about learning the flipper before you have mastered the legbreak, top spinner and wrong'un. Do you agree?
I totally agree with what Richie said. If you don't have a stock ball, what is the variation? You know what I'm saying? There are five different deliveries a legbreak bowler can bowl, but Warne said on more than one occasion that because of natural variation you can bowl six different legbreaks in an over; what's important is the line and length that you are bowling that encourages the batsman to get out of his comfort zone or intimidates him, and that's the key to it all. Richie spun his legbreak a small amount by comparison with Warne but because of that his use of the slider and the flipper were mostly effective because he bowled middle- and middle-and-off lines, whereas Warne was leg stump, outside leg stump.
Richie's a wise man and in the days he played there were eight-ball overs here in Australia. If you went for four an over, you were considered to be a pretty handy bowler. If you go for four an over now, it's expensive - that's because it's six-ball overs. But Richie was a great example of somebody who knew his strengths and worked on whatever weaknesses he might've had. He knew he wasn't a massive spinner of the ball, therefore his line and length had to be impeccable, and he worked around that.
In fact, in his autobiography Warne writes, "What matters is not always how many deliveries you possess, but how many the batsmen thinks you have."
That's the mystery of spin, isn't it? I remember, every Test series Warnie would come out with a mystery ball or something like that, but the truth is there are only so many balls that you can really bowl - you can't look like you're bowling a legbreak and bowl an offbreak.
Sonny Ramadhin was very difficult to read as he bowled with his sleeves down back in the 1950s; he had a unique grip and a unique way of releasing the ball, as does Murali [Muttiah Muralitharan]. What they do with their wrists, it's very difficult to pick between the offbreak and the legbreak. Generally a legbreak bowler has to locate his wrist in a position to enhance the spin in the direction he wants the ball to go, which means the batsman should be able to see the relocation of the wrist.
In part two of his interview on the art of spin bowling, Terry Jenner looks at the damage caused to young spinners by the curbs placed on their attacking instincts. He also surveys the current slow-bowling landscape and appraises the leading practitioners around.
"Most spin bowlers have enormous attacking instinct, which gets suppressed by various captains and coaches" Nagraj Gollapudi
Bishan Bedi once said that a lot of bowling is done in the mind. Would you say that spin bowling requires the most mental energy of all the cricketing arts?
The thing about that is Bishan Bedi - who has, what, 260-odd Test wickets? - bowled against some of the very best players ever to go around the game. He had at his fingertips the control of spin and pace. Now, when you've got that, when you've developed that ability, then it's just about when to use them, how to use them, so therefore it becomes a matter of the brain. You can't have the brain dominating your game when you haven't got the capacity to bowl a legbreak or an offbreak where you want it to land. So that's why you have to practise those stock deliveries until it becomes just natural for you - almost like you can land them where you want them to land blindfolded, and then it just becomes mind over matter. Then the brain does take over.
There's nothing better than watching a quality spin bowler of any ilk - left-hand, right-hand - working on a quality batsman who knows he needs to break the bowler's rhythm or he might lose his wicket. That contest is a battle of minds then, because the quality batsman's got the technique and the quality bowler's got the capacity to bowl the balls where he wants to, within reason. So Bishan is exactly right.
What came naturally to someone like Bedi was flight. How important is flight in spin bowling?
When I was very young someone said to me, "You never beat a batsman off the pitch unless you first beat him in the air." Some people think that's an old-fashioned way of bowling. Once, at a conference in England, at Telford, Bishan said "Spin is in the air and break is off the pitch", which supported exactly what that guy told me 40 years ago. On top of that Bedi said stumping was his favourite dismissal because you had beaten the batsman in the air and then off the pitch. You wouldn't get too many coaches out there today who would endorse that remark because they don't necessarily understand what spin really is.
When you appraised the trainees in Chennai [at the MAC Spin Foundation], you said if they can separate the one-day cricket shown on TV and the one-day cricket played at school level, then there is a chance a good spinner will come along.
What I was telling them was: when you bowl a ball that's fairly flat and short of a length and the batsman goes back and pushes it to the off side, the whole team claps because no run was scored off it. Then you come in and toss the next one up and the batsman drives it to cover and it's still no run, but no one applauds it; they breathe a sigh of relief. That's the lack of understanding we have within teams about the role of the spin bowler. You should be applauding when he has invited the batsman to drive because that's what courage is, that's where the skill is, that's where the spin is, and that's where the wickets come. Bowling short of a length, that's the role of a medium pacer, part-timer. Most spin bowlers have enormous attacking instinct which gets suppressed by various captains, coaches and ideological thoughts in clubs and teams.
You talked at the beginning of the interview about the importance of being patient with a spinner. But isn't it true that the spinner gets another chance even if he gets hit, but the batsman never does?
I don't think you can compare them that way. If the spinner gets hit, he gets taken off. If he goes for 10 or 12 off an over, they take him off. Batsmen have got lots of things in their favour.
What I mean by patience is that to develop the craft takes a lot of overs, lots of balls in the nets, lots of target bowling. And you don't always get a bowl. Even if you are doing all this week-in, week-out, you don't always get to bowl, so you need to be patient. And then one day you walk into the ground and finally they toss you the ball. It is very easy to behave in a hungry, desperate manner because you think, "At last, I've got the ball." And you forget all the good things you do and suddenly try to get a wicket every ball because it's your only hope of getting into the game and staying on. The result is, you don't actually stay on and you don't get more games. So the patience, which is what you learn as you go along, can only come about if the spinner is allowed to develop at his pace instead of us pushing him up the rung because we think we've found one at last.
How much of a role does attitude play?
Attitude is an interesting thing. Depends on how you refer to it - whether it's attitude to bowling, attitude to being hit, attitude to the game itself.
When you bowl a ball that's fairly flat and short of a length and the batsman goes back and pushes it to the off side, the whole team claps because no run was scored off it. Then you come in and toss the next one up and the batsman drives it to cover and it's still no run, but no one applauds it; they breathe a sigh of relief
When Warne was asked what a legspin bowler needs more than anything else, he said, "Love". What he meant was love and understanding. They need someone to put their arm around them and say, "Mate, it's okay, tomorrow is another day." Because you get thumped, mate. When you are trying to spin the ball from the back of your hand and land it in an area that's a very small target, that takes a lot of skill, and it also requires the patience to develop that skill. That's what I mean by patience, and the patience also needs to be with the coach, the captain, and whoever else is working with this young person, and the parents, who need to understand that he is not going to develop overnight.
And pushing him up the grade before he is ready isn't necessarily a great reward for him because that puts pressure on him all the time. Any person who plays under pressure all the time, ultimately the majority of them break. That's not what you want, you want them to come through feeling sure, scoring lots of wins, feeling good about themselves, recognising their role in the team, and having their team-mates recognise their role.
I don't think people - coaches, selectors - let the spin bowler know what his role actually is. He gets in the team and suddenly he gets to bowl and is told, "Here's the field, bowl to this", and in his mind he can't bowl.
Could you talk about contemporary spinners - Anil Kumble, Harbhajan Singh, Daniel Vettori, Monty Panesar, and Muttiah Muralitharan of course?
Of all the spinners today, the one I admire most of all is Vettori. He has come to Australia on two or three occasions and on each occasion he has troubled the Australian batsmen. He is a man who doesn't spin it a lot but he has an amazing ability to change the pace, to force the batsman into thinking he can drive it, but suddenly they have to check their stroke. And that's skill. If you haven't got lots of spin, then you've got to have the subtlety of change of pace.
And, of course, there is Kumble. I always marvel at the fact that he has worked his career around mainly containment and at the same time bowled enough wicket-taking balls to get to 566 wickets. That's a skill in itself. He is such a humble person as well and I admire him.
I marvel a little bit at Murali's wrist because it is very clever what he does with that, but to the naked eye I can't tell what is 15 degrees and what's not. I've just got to accept the word above us. All I know is that it would be very difficult to coach someone else to bowl like Murali. So we've got to put him in a significant list of one-offs - I hate to use the word "freak" - that probably won't be repeated.
I don't see enough of Harbhajan Singh - he is in and out of the Indian side. What I will say is that when I do see him bowl, I love the position of the seam. He has a beautiful seam position.
I love the way Stuart MacGill spins the ball. He is quite fearless in his capacity to spin the ball.
I love the energy that young [Piyush] Chawla displays in his bowling. The enthusiasm and the rawness, if you like. This is what I mean when I talk about pushing the boundaries. He is 18, playing limited-overs cricket, and at the moment he is bowling leggies and wrong'uns and I think that's terrific. But I hope the time doesn't come when he no longer has to spin the ball. When he tries to hold his place against Harbhajan Singh, for example. To do that he has to fire them in much quicker. He is already around the 80kph mark, which is quite healthy for an 18-year-old boy, but he still spins it at that pace, so it's fine. But ultimately if he is encouraged to bowl at a speed at which he doesn't spin the ball, that would be the sad part.
That's why I say this: there are lots of spinners around, but it's the young, developing spinners who are probably suffering from all the stuff from television that encourages defence as a means to being successful as a spinner.
Monty is an outstanding prospect. You've got to look at how a guy can improve. He has done very, very well but how can he improve? He has got to have a change-up, a change of pace. At the moment, if you look at the speed gun in any given over from Monty, it's 56.2mph on average every ball. So he bowls the same ball; his line, his length, everything is impeccable, but then when it's time to knock over a tail, a couple of times he has been caught short because he has not been able to vary his pace. I think Monty is such an intelligent bowler and person that he will be in the nets working on that to try and make sure he can invite the lower order to have a go at him and not just try and bowl them out. That probably is his area of concern; the rest of it is outstanding.
What would you say are the attributes of a good spinner?
Courage, skill, patience, unpredictability, and spin. You get bits and pieces of all those, but if you have got spin then there is always a chance you can develop the other areas. For all the brilliant things that people saw Warne do, his greatest strength was the size of the heart, and that you couldn't see.