Showing posts with label myth.

Friday 5 January 2018

The Myth of Bhima Koregaon Reinforces the Identities It Seeks to Transcend

Anand Teltumbde in The Wire

The resolve to fight Hindutva forces is certainly laudable, but the myth used for the purpose may be grossly counterproductive.




Bhima Koregaon victory pillar. Credit: Wikipedia



Two hundred years ago, the last battle of the Anglo-Maratha war was fought at Koregaon village on the banks of the Bhima river near Pune. The battle marked the firm hold of the British Empire in India. The British erected an obelisk at the battleground in memory of the dead. It bears 49 names, 22 of them identified by their ‘nak’ suffix as Mahars. The obelisk was construed as a testimony to the gallantry of Mahar soldiers, and was rightly invoked by the first generation of Mahar leaders such as Gopal Baba Walangkar, Shivram Janba Kamble and even Ramji Ambedkar, B.R. Ambedkar’s father, when they pleaded with the British to restore Mahar recruitment after it was stopped in 1893. The stoppage was a consequence of the Indian uprising of 1857, after which the British reassessed their recruiting strategies to admit only the so-called ‘martial races’ into the army.

But when Babasaheb Ambedkar painted the Battle of Bhima Koregaon as a battle of Mahar soldiers against their caste oppression under Peshwa rule, he was creating a pure myth. As myths are required to build movements, he perhaps saw its necessity then. But after a century, when the myth solidifies into quasi-history and pushes Dalits deeper into an identitarian marshland, it becomes a worrisome matter. Many Dalit organisations recently formed a joint front to observe the 200th anniversary of this battle as a campaign against the new Peshwai, the rising Brahmanic rule of the Hindutva forces. Their long marches culminated in an Elgar Parishad (conference) at the Shaniwarwada in Pune on December 31. While the resolve to fight the Hindutva forces is certainly laudable, the myth used for the purpose may be grossly counterproductive insofar as it reinforces the very identitarian tendencies that need to be transcended.

As regards history, it is a fact that when the East India Company developed its military aspirations, it recruited Dalits in disproportionately large numbers, perhaps for their unflinching loyalty and faithfulness, and also because they were cheaply available. One finds disproportionate numbers of Namshudras in Bengal, Parayas in Madras and Mahars in Maharashtra in its army. If Dalits wanted to claim a significant contribution to the establishment of the British Raj in India, that claim would not as such be incorrect. But to attribute to their soldiery the motive of fighting caste oppression would be far-fetched and unhistorical.

The East India Company fought and won several battles, from the first at Plassey in 1757 to the last battle of the Anglo-Maratha war. Obviously, not all of them were against the Peshwas; most were not even against Hindus. They were simply wars between two ruling powers, which their soldiers fought as their duty. To cast them as anti-caste or anti-religious wars is not only factually incorrect but also reflects an erroneous understanding of the history of caste. Until the late 19th century, when education began to spread substantially among Dalits, caste was the life-world of the people. They took caste as a natural order and their oppression as a fate to be meekly endured. There was therefore no question of any resistance to caste, let alone physical war against it. Contrary to such myths of bravery, there is no evidence of any militant resistance the Dalits ever posed to Brahmanic oppression.

As regards the composition of the warring armies, they were not formed purely on communal lines. While Dalit soldiers may have been present in relatively large numbers in the British army, it is not as if they did not exist in the Muslim or Maratha armies. As with communities, all castes existed in all the armies. In the Battle of Koregaon, one of the three wings of the Peshwa infantry was made up of Arabs, who reportedly fought most fiercely and suffered the most casualties. What could their motivation have been? Did they want the Peshwa’s Brahmanic rule to triumph? The fact is that they simply fought as soldiers for their masters, as the Dalits did for theirs. It would be grossly erroneous to attribute loftier motives to them than this.


Anglo-Maratha war. Credit: Wikipedia


Before the battle of Koregaon on January 1, 1818, the Peshwas had been reduced to weaklings by the earlier two Anglo-Maratha wars. As a matter of fact, the Peshwa Bajirao II had fled Pune and was attempting to attack the city from outside. The Peshwa’s army comprised 20,000 cavalry and 8,000 infantry, of which around 2,000 men, divided into three infantry parties of about 600 each, made up of Arabs, Gosains and other soldiers, mounted the attack. The majority of the attackers were Arabs, reputed to be the finest among the Peshwa’s soldiers. The Company troops comprised 834 men, including around 500 soldiers of the 2nd Battalion of the 1st Regiment of Bombay Native Infantry, which was manned predominantly by Mahar soldiers. Although there is no record of their exact number, it is obvious that not all of them were Mahars. Even going by the casualties, the majority of those who died in the battle (27 out of 49) were not Mahars. The Peshwa army ultimately withdrew, fearing the arrival of a larger British force led by General Joseph Smith. In view of these factual details, it may be misleading to portray the battle as the Mahars’ vengeance against the Peshwas’ Brahmanic rule.

There is no evidence that the defeat of the Peshwai brought the Mahars any relief; as a matter of fact, their caste oppression continued unabated. Rather, as hinted earlier, the ungrateful British stopped their recruitment to the army, refusing to acknowledge their past bravery. They ignored Mahar pleas to restore recruitment until the demands of the First World War forced them to resume it. There is no dispute that British colonial rule brought Dalits numerous benefits, to the extent that the very birth of the Dalit movement may be attributable to it. But it must simultaneously be understood that this was unintended and primarily dictated by colonial logic. It is unfortunate that Dalits blind themselves to this reality with their identity blinkers.

It is equally incorrect to say that, since the Peshwa forces belonged to the Maratha confederacy, they were the nationalist forces and the victorious British forces the imperialists. To see historical facts through the spectacles of a non-existent nation is equally condemnable. There was no concept of an Indian nation then; as a matter of fact, the concept eludes us even to this day. Paradoxically, India itself is by and large a gift of British rule, which forged a political unity out of the vast landmass of the subcontinent. Those who have been driving it as a nation for their selfish gains are indeed as debauched as the Peshwas, and are the biggest anti-nationals.

The Dalits do need to fight this new Peshwai recreated by the Hindutva marauders. For that, they had better open their eyes to reality rather than gaze, ostrich-like, into a mythical past and imagine their greatness.

Sunday 1 October 2017

The pendulum swings against privatisation

Evidence suggests that ending state ownership works in some markets but not others


Tim Harford in The Financial Times


Political fashions can change quickly, as a glance at almost any western democracy will tell you. The pendulum of the politically possible swings back and forth. Nowhere is this more obvious than in the debates over privatisation and nationalisation. 


In the late 1940s, experts advocated nationalisation on a scale hard to imagine today. Arthur Lewis thought the government should run the phone system, insurance and the car industry. James Meade wanted to socialise iron, steel and chemicals; both men later won Nobel memorial prizes in economics. 

They were in tune with the times: the British government ended up owning not only utilities and heavy industry but airlines, travel agents and even the removal company, Pickfords. The pendulum swung back in the 1980s and early 1990s, as Margaret Thatcher and John Major began an ever more ambitious series of privatisations, concluding with water, electricity and the railways. The world watched, and often followed suit. 

Was it all worth it? The question arises because the pendulum is swinging back again: Jeremy Corbyn, the bookies’ favourite to be the next UK prime minister, wants to renationalise the railways, electricity, water and gas. (He has not yet mentioned Pickfords.) Furthermore, he cites these ambitions as a reason to withdraw from the European single market. 


That is odd, since there is nothing in single market rules to prevent state ownership of railways and utilities — the excuse seems to be yet another Eurosceptic myth, the leftwing reflection of rightwing tabloids moaning about banana regulation. Since the entire British political class has lost its mind over Brexit, it would be unfair to single out Mr Corbyn on those grounds. 

Still, he has reopened a debate that long seemed settled, and piqued my interest. Did privatisation work? Proponents sometimes mention the galvanising effect of the profit motive, or the entrepreneurial spirit of private enterprise. Opponents talk of fat cats and selling off the family silver. Realists might prefer to look at the evidence, and the ambitious UK programme has delivered plenty of that over the years. 

There is no reason for a government to own Pickfords, but the calculus of privatisation is more subtle when it comes to natural monopolies — markets that are broadly immune to competition. If I am not satisfied with what Pickfords has to offer me when I move home, I am not short of options. But the same is not true of the Royal Mail: if I want to write to my MP then the big red pillar box at the end of the street is really the only game in town.

Competition does sometimes emerge in unlikely seeming circumstances. British Telecom seemed to have an iron grip on telephone services in the UK — as did AT&T in the US. The grip melted away in the face of regulation and, more importantly, technological change. 

Railways seem like a natural monopoly, yet there are two separate railway lines from my home town of Oxford into London, and two separate railway companies will sell me tickets for the journey. They compete with two bus companies; competition can sometimes seem irrepressible. 

But the truth is that competition has often failed to bloom, even when one might have expected it. If I run a bus service at 20 and 50 minutes past the hour, then a competitor can grab my business without competing on price by running a service at 19 and 49 minutes past the hour. Customers will not be well served by that. 

Meanwhile electricity and phone companies offer bewildering tariffs, and it is hard to see how water companies will ever truly compete with each other; the logic of geography suggests otherwise. 

All this matters because the broad lesson of the great privatisation experiment is that it has worked well when competition has been unleashed, but less well when a government-run business has been replaced by a government-regulated monopoly. 

A few years ago, the economist David Parker assembled a survey of post-privatisation performance studies. The most striking thing is the diversity of results. Sometimes productivity soared. Sometimes investors and managers skimmed off all the cream. Revealingly, performance often leapt in the year or two before privatisation, suggesting that state-owned enterprises could be well-run when the political will existed — but that political will was often absent. 

My overall reading of the evidence is that privatisation tended to improve profitability, productivity and pricing — but the gains were neither vast nor guaranteed. Electricity privatisation was a success; water privatisation was a disappointment. Privatised railways now serve vastly more passengers than British Rail did. That is a success story but it looks like a failure every time your nose is crushed up against someone’s armpit on the 18:09 from London Victoria. 

The evidence suggests this conclusion: the picture is mixed, the details matter, and you can get results if you get the execution right. Our politicians offer a different conclusion: the picture is stark, the details are irrelevant, and we metaphorically execute not our policies but our opponents. The pendulum swings — but shows no sign of pausing in the centre.

Friday 2 June 2017

The myths about money that British voters should reject

Ha Joon Chang in The Guardian


Illustration: Nate Kitch


Befitting a surprise election, the manifestos from the main parties contained surprises. Labour is shaking off decades of shyness about nationalisation and tax increases for the rich and for the first time in decades has a policy agenda that is not Tory-lite. The Conservatives, meanwhile, say they are rejecting “the cult of selfish individualism” and “belief in untrammelled free markets”, while adopting the quasi-Marxist idea of an energy price cap.

Despite these significant shifts, myths about the economy refuse to go away and hamper a more productive debate. They concern how the government manages public finances – “tax and spend”, if you will.

The first is that there is an inherent virtue in balancing the books. Conservatives still cling to the idea of eliminating the budget deficit, even if it is with a 10-year delay (2025, as opposed to George Osborne’s original goal of 2015). The budget-balancing myth is so powerful that Labour feels it has to cost its new spending pledges down to the last penny, lest it be accused of fiscal irresponsibility.

However, as Keynes and his followers told us, whether a balanced budget is a good or a bad thing depends on the circumstances. In an overheating economy, deficit spending would be a serious folly. However, in today’s UK economy, whose underlying stagnation has been masked only by the release of excess liquidity on an oceanic scale, some deficit spending may be good – necessary, even.

The second myth is that the UK welfare state is especially large. Conservatives believe that it is bloated out of all proportion and needs to be drastically cut. Even the Labour party partly buys into this idea. Its extra spending pledge on this front is presented as an attempt to reverse the worst of the Tory cuts, rather than as an attempt to expand provision to rebuild the foundation for a decent society.

The reality is that the UK welfare state is not large at all. As of 2016, the British welfare state (measured by public social spending) was, at 21.5% of GDP, barely three-quarters the size of welfare spending in comparably rich European countries – France’s is 31.5% and Denmark’s is 28.7%, for example. The UK welfare state is barely larger than the OECD average (21%), which includes a dozen or so countries, such as Mexico, Chile, Turkey and Estonia, that are much poorer and/or have less need for public welfare provision, given their younger populations and stronger extended family networks.

The third myth is that welfare spending is consumption – that it is a drain on the nation’s productive resources and thus has to be minimised. This myth is what Conservative supporters subscribe to when they say that, despite their negative impact, we have to accept cuts in such things as disability benefit, unemployment benefit, child care and free school meals, because we “can’t afford them”. This myth even tints, although doesn’t define, Labour’s view on the welfare state. For example, Labour argues for an expansion of welfare spending, but promises to finance it with current revenue, thereby implicitly admitting that the money that goes into it is consumption that does not add to future output.


 ‘It is a myth that, despite their negative impact, we have to accept cuts in such things as disability benefit, unemployment benefit, child care and free school meals.’ Photograph: monkeybusinessimages/Getty Images/iStockphoto


However, a lot of welfare spending is investment that pays back more than it costs, through increased productivity in the future. Expenditure on education (especially early learning programmes such as Sure Start), childcare and school meals programmes is an investment in the nation’s future productivity. Unemployment benefit, especially if combined with good publicly funded retraining and job-search programmes, as in Scandinavia, preserves the human productive capabilities that would otherwise be lost, as we have seen in so many former industrial towns in the UK. Increased spending on disability benefits and care for older people helps carers to have more time and less stress, making them more productive workers.

The last myth is that tax is a burden, which therefore by definition needs to be minimised. The Conservatives are clear about this, proposing to cut corporation tax further to 17%, one of the lowest levels in the rich world. However, even Labour is using the language of “burden” about taxes. In proposing tax increases for the highest income earners and large corporations, Jeremy Corbyn spoke of his belief that “those with the broadest shoulders should bear the greatest burden”.

But would you call the money that you pay for your takeaway curry or Netflix subscription a burden? You wouldn’t, because you recognise that you are getting your curry and TV shows in return. Likewise, you shouldn’t call your taxes a burden because in return you get an array of public services, from education, health and old-age care, through to flood defence and roads to the police and military.

If tax really were a pure burden, all rich individuals and companies would move to Paraguay or Bulgaria, where the top rate of income tax is 10%. Of course, this does not happen because, in those countries, in return for low tax you get poor public services. Conversely, most rich Swedes don’t go into tax exile because of their 60% top income tax rate, because they get a good welfare state and excellent education in return. Japanese and German companies don’t move out of their countries in droves despite some of the highest corporate income tax rates in the world (31% and 30% respectively) because they get good infrastructure, well-educated workers, strong public support for research and development, and well-functioning administrative and legal systems.

Low tax is not in itself a virtue. The question should be whether the government is providing services of satisfactory quality, given the tax receipts, not what the level of tax is.

The British debate on economic policy is finally moving on from the bankrupt neoliberal consensus of the past few decades. But the departure won’t be complete until we do away with the persistent myths about tax and spend.

Thursday 30 March 2017

The myth of the ‘lone wolf’ terrorist

Jason Burke in The Guardian


At around 8pm on Sunday 29 January, a young man walked into a mosque in the Sainte-Foy neighbourhood of Quebec City and opened fire on worshippers with a 9mm handgun. The imam had just finished leading the congregation in prayer when the intruder started shooting at them. He killed six and injured 19 more. The dead included an IT specialist employed by the city council, a grocer, and a science professor.

The suspect, Alexandre Bissonnette, a 27-year-old student, has been charged with six counts of murder, though not terrorism. Within hours of the attack, Ralph Goodale, the Canadian minister for public safety, described the killer as “a lone wolf”. His statement was rapidly picked up by the world’s media.

Goodale’s statement came as no surprise. In early 2017, well into the second decade of the most intense wave of international terrorism since the 1970s, the lone wolf has, for many observers, come to represent the most urgent security threat faced by the west. The term, which describes an individual actor who strikes alone and is not affiliated with any larger group, is now widely used by politicians, journalists, security officials and the general public. It is used for Islamic militant attackers and, as the shooting in Quebec shows, for killers with other ideological motivations. Within hours of the news breaking of an attack on pedestrians and a policeman in central London last week, it was used to describe the 52-year-old British convert responsible. Yet few beyond the esoteric world of terrorism analysis appear to give this almost ubiquitous term much thought.

Terrorism has changed dramatically in recent years. Attacks by groups with defined chains of command have become rarer, as the prevalence of terrorist networks, autonomous cells, and, in rare cases, individuals, has grown. This evolution has prompted a search for a new vocabulary, as it should. The label that seems to have been decided on is “lone wolves”. They are, we have been repeatedly told, “Terror enemy No 1”.

Yet using the term as liberally as we do is a mistake. Labels frame the way we see the world, and thus influence attitudes and eventually policies. Using the wrong words to describe problems that we need to understand distorts public perceptions, as well as the decisions taken by our leaders. Lazy talk of “lone wolves” obscures the real nature of the threat against us, and makes us all less safe.

The image of the lone wolf who splits from the pack has been a staple of popular culture since the 19th century, cropping up in stories about empire and exploration from British India to the wild west. From 1914 onwards, the term was popularised by a bestselling series of crime novels and films centred upon a criminal-turned-good-guy nicknamed Lone Wolf. Around that time, it also began to appear in US law enforcement circles and newspapers. In April 1925, the New York Times reported on a man who “assumed the title of ‘Lone Wolf’”, who terrorised women in a Boston apartment building. But it would be many decades before the term came to be associated with terrorism.

In the 1960s and 1970s, waves of rightwing and leftwing terrorism struck the US and western Europe. It was often hard to tell who was responsible: hierarchical groups, diffuse networks or individuals effectively operating alone. Still, the majority of actors belonged to organisations modelled on existing military or revolutionary groups. Lone actors were seen as eccentric oddities, not as the primary threat.

The modern concept of lone-wolf terrorism was developed by rightwing extremists in the US. In 1983, at a time when far-right organisations were coming under immense pressure from the FBI, a white nationalist named Louis Beam published a manifesto that called for “leaderless resistance” to the US government. Beam, who was a member of both the Ku Klux Klan and the Aryan Nations group, was not the first extremist to elaborate the strategy, but he is one of the best known. He told his followers that only a movement based on “very small or even one-man cells of resistance … could combat the most powerful government on earth”.

 
Oklahoma City bomber Timothy McVeigh leaves court, 1995. Photograph: David Longstreath/AP

Experts still argue over how much impact the thinking of Beam and other like-minded white supremacists had on rightwing extremists in the US. Timothy McVeigh, who killed 168 people with a bomb directed at a government office in Oklahoma City in 1995, is sometimes cited as an example of someone inspired by their ideas. But McVeigh had told others of his plans, had an accomplice, and had been involved for many years with rightwing militia groups. McVeigh may have thought of himself as a lone wolf, but he was not one.

One far-right figure who made explicit use of the term lone wolf was Tom Metzger, the leader of White Aryan Resistance, a group based in Indiana. Metzger is thought to have authored, or at least published on his website, a call to arms entitled “Laws for the Lone Wolf”. “I am preparing for the coming War. I am ready when the line is crossed … I am the underground Insurgent fighter and independent. I am in your neighborhoods, schools, police departments, bars, coffee shops, malls, etc. I am, The Lone Wolf!,” it reads.

From the mid-1990s onwards, as Metzger’s ideas began to spread, the number of hate crimes committed by self-styled “leaderless” rightwing extremists rose. In 1998, the FBI launched Operation Lone Wolf against a small group of white supremacists on the US west coast. A year later, Alex Curtis, a young, influential rightwing extremist and protege of Metzger, told his hundreds of followers in an email that “lone wolves who are smart and commit to action in a cold-mannered way can accomplish virtually any task before them ... We are already too far along to try to educate the white masses and we cannot worry about [their] reaction to lone wolf/small cell strikes.”

The same year, the New York Times published a long article on the new threat headlined “New Face of Terror Crimes: ‘Lone Wolf’ Weaned on Hate”. This seems to have been the moment when the idea of terrorist “lone wolves” began to migrate from rightwing extremist circles, and the law enforcement officials monitoring them, to the mainstream. In court on charges of hate crimes in 2000, Curtis was described by prosecutors as an advocate of lone-wolf terrorism.

When, more than a decade later, the term finally became a part of the everyday vocabulary of millions of people, it was in a dramatically different context.

After 9/11, lone-wolf terrorism suddenly seemed like a distraction from more serious threats. The 19 men who carried out the attacks were jihadis who had been hand picked, trained, equipped and funded by Osama bin Laden, the leader of al-Qaida, and a small group of close associates.

Although 9/11 was far from a typical terrorist attack, it quickly came to dominate thinking about the threat from Islamic militants. Security services built up organograms of terrorist groups. Analysts focused on individual terrorists only insofar as they were connected to bigger entities. Personal relations – particularly friendships based on shared ambitions and battlefield experiences, as well as tribal or familial links – were mistaken for institutional ones, formally connecting individuals to organisations and placing them under a chain of command.



This approach suited the institutions and individuals tasked with carrying out the “war on terror”. For prosecutors, who were working with outdated legislation, proving membership of a terrorist group was often the only way to secure convictions of individuals planning violence. For a number of governments around the world – Uzbekistan, Pakistan, Egypt – linking attacks on their soil to “al-Qaida” became a way to shift attention away from their own brutality, corruption and incompetence, and to gain diplomatic or material benefits from Washington. For some officials in Washington, linking terrorist attacks to “state-sponsored” groups became a convenient way to justify policies, such as the continuing isolation of Iran, or military interventions such as the invasion of Iraq. For many analysts and policymakers, who were heavily influenced by the conventional wisdom on terrorism inherited from the cold war, thinking in terms of hierarchical groups and state sponsors was comfortably familiar.

A final factor was more subtle. Attributing the new wave of violence to a single group not only obscured the deep, complex and troubling roots of Islamic militancy but also suggested the threat it posed would end when al-Qaida was finally eliminated. This was reassuring, both for decision-makers and the public.

By the middle of the decade, it was clear that this analysis was inadequate. Bombs in Bali, Istanbul and Mombasa were the work of centrally organised attackers, but the 2004 attack on trains in Madrid had been executed by a small network only tenuously connected to the al-Qaida senior leadership 4,000 miles away. For every operation like the 2005 bombings in London – which was close to the model established by the 9/11 attacks – there were more attacks that didn’t seem to have any direct link to Bin Laden, even if they might have been inspired by his ideology. There was growing evidence that the threat from Islamic militancy was evolving into something different, something closer to the “leaderless resistance” promoted by white supremacists two decades earlier.

As the 2000s drew to a close, attacks perpetrated by people who seemed to be acting alone began to outnumber all others. These events were less deadly than the spectacular strikes of a few years earlier, but the trend was alarming. In the UK in 2008, a convert to Islam with mental health problems attempted to blow up a restaurant in Exeter, though he injured no one but himself. In 2009, a US army major shot 13 dead in Fort Hood, Texas. In 2010, a female student stabbed an MP in London. None appeared, initially, to have any broader connections to the global jihadi movement.

In an attempt to understand how this new threat had developed, analysts raked through the growing body of texts posted online by jihadi thinkers. It seemed that one strategist had been particularly influential: a Syrian called Mustafa Setmariam Nasar, better known as Abu Musab al-Suri. In 2004, in a sprawling set of writings posted on an extremist website, Nasar had laid out a new strategy that was remarkably similar to “leaderless resistance”, although there is no evidence that he knew of the thinking of men such as Beam or Metzger. Nasar’s maxim was “Principles, not organisations”. He envisaged individual attackers and cells, guided by texts published online, striking targets across the world.

Having identified this new threat, security officials, journalists and policymakers needed a new vocabulary to describe it. The rise of the term lone wolf wasn’t wholly unprecedented. In the aftermath of 9/11, the US had passed anti-terror legislation that included a so-called “lone wolf provision”. This made it possible to pursue terrorists who were members of groups based abroad but who were acting alone in the US. Yet this provision conformed to the prevailing idea that all terrorists belonged to bigger groups and acted on orders from their superiors. The stereotype of the lone wolf terrorist that dominates today’s media landscape was not yet fully formed.

It is hard to be exact about when things changed. By around 2006, a small number of analysts had begun to refer to lone-wolf attacks in the context of Islamic militancy, and Israeli officials were using the term to describe attacks by apparently solitary Palestinian attackers. Yet these were outliers. In researching this article, I called eight counter-terrorism officials active over the last decade to ask them when they had first heard references to lone-wolf terrorism. One said around 2008, three said 2009, three 2010 and one around 2011. “The expression is what gave the concept traction,” Richard Barrett, who held senior counter-terrorist positions in MI6, the British overseas intelligence service, and the UN through the period, told me. Before the rise of the lone wolf, security officials used phrases – all equally flawed – such as “homegrowns”, “cleanskins”, “freelancers” or simply “unaffiliated”.

As successive jihadi plots were uncovered that did not appear to be linked to al-Qaida or other such groups, the term became more common. Between 2009 and 2012 it appeared in around 300 articles in major English-language news publications each year, according to the professional cuttings search engine Lexis Nexis. Since then, the term has become ubiquitous. In the 12 months before the London attack last week, the number of references to “lone wolves” exceeded the total over the previous three years, topping 1,000.

Lone wolves are now apparently everywhere, stalking our streets, schools and airports. Yet, as with the tendency to attribute all terrorist attacks to al-Qaida a decade earlier, this is a dangerous simplification.

In March 2012, a 23-year-old petty criminal named Mohamed Merah went on a shooting spree – a series of three attacks over a period of nine days – in south-west France, killing seven people. Bernard Squarcini, head of the French domestic intelligence service, described Merah as a lone wolf. So did the interior ministry spokesman, and, inevitably, many journalists. A year later, Lee Rigby, an off-duty soldier, was run over and hacked to death in London. Once again, the two attackers were dubbed lone wolves by officials and the media. So, too, were Dzhokhar and Tamerlan Tsarnaev, the brothers who bombed the Boston Marathon in 2013. The same label has been applied to more recent attackers, including the men who drove vehicles into crowds in Nice and Berlin last year, and in London last week.


The Boston Marathon bombing carried out by Dzhokhar and Tamerlan Tsarnaev in 2013. Photograph: Dan Lampariello/Reuters

One problem facing security services, politicians and the media is that instant analysis is difficult. It takes months to unravel the truth behind a major, or even minor, terrorist operation. The demand for information from a frightened public, relayed by a febrile news media, is intense. People seek quick, familiar explanations.

Yet many of the attacks that have been confidently identified as lone-wolf operations have turned out to be nothing of the sort. Very often, terrorists who are initially labelled lone wolves have active links to established groups such as Islamic State and al-Qaida. Merah, for instance, had recently travelled to Pakistan and been trained, albeit cursorily, by a jihadi group allied with al-Qaida. Merah was also linked to a network of local extremists, some of whom went on to carry out attacks in Libya, Iraq and Syria. Bernard Cazeneuve, who was then the French interior minister, later agreed that calling Merah a lone wolf had been a mistake.

If, in cases such as Merah’s, the label of lone wolf is plainly incorrect, there are other, more subtle cases where it is still highly misleading. Another category of attackers, for instance, are those who strike alone, without guidance from formal terrorist organisations, but who have had face-to-face contact with loose networks of people who share extremist beliefs. The Exeter restaurant bomber, dismissed as an unstable loner, was actually in contact with a circle of local militant sympathisers before his attack. (They have never been identified.) The killers of Lee Rigby had been on the periphery of extremist movements in the UK for years, appearing at rallies of groups such as the now proscribed al-Muhajiroun, run by Anjem Choudary, a preacher convicted of terrorist offences in 2016 who is reported to have “inspired” up to 100 British militants.

A third category is made up of attackers who strike alone, after having had close contact online, rather than face-to-face, with extremist groups or individuals. A wave of attackers in France last year were, at first, wrongly seen as lone wolves “inspired” rather than commissioned by Isis. It soon emerged that the individuals involved, such as the two teenagers who killed a priest in front of his congregation in Normandy, had been recruited online by a senior Isis militant. In three recent incidents in Germany, all initially dubbed “lone-wolf attacks”, Isis militants actually used messaging apps to direct recruits in the minutes before they attacked. “Pray that I become a martyr,” one attacker who assaulted passengers on a train with an axe and knife told his interlocutor. “I am now waiting for the train.” Then: “I am starting now.”

Very often, what appear to be the clearest lone-wolf cases are revealed to be more complex. Even the strange case of the man who killed 86 people with a truck in Nice in July 2016 – with his background of alcohol abuse, casual sex and lack of apparent interest in religion or radical ideologies – may not be a true lone wolf. Eight of his friends and associates have been arrested and police are investigating his potential links to a broader network.

What research does show is that we may be more likely to find lone wolves among far-right extremists than among their jihadi counterparts, though even in those cases the term still conceals more than it reveals.

The murder of the Labour MP Jo Cox, days before the EU referendum, by a 52-year-old called Thomas Mair, was the culmination of a steady intensification of rightwing extremist violence in the UK that had been largely ignored by the media and policymakers. According to police, on several occasions attackers came close to causing more casualties in a single operation than jihadis had ever inflicted. The closest call came in 2013 when Pavlo Lapshyn, a Ukrainian PhD student in the UK, planted a bomb outside a mosque in Tipton, West Midlands. Fortunately, Lapshyn had got his timings wrong and the congregation had yet to gather when the device exploded. Embedded in the trunks of trees surrounding the building, police found some of the 100 nails Lapshyn had added to the bomb to make it more lethal.

Lapshyn was a recent arrival, but the UK has produced numerous homegrown far-right extremists in recent years. One was Martyn Gilleard, who was sentenced to 16 years for terrorism and child pornography offences in 2008. When officers searched his home in Goole, East Yorkshire, they found knives, guns, machetes, swords, axes, bullets and four nail bombs. A year later, Ian Davison became the first Briton convicted under new legislation dealing with the production of chemical weapons. Davison was sentenced to 10 years in prison for manufacturing ricin, a lethal biological poison made from castor beans. His aim, the court heard, was “the creation of an international Aryan group who would establish white supremacy in white countries”.

Lapshyn, Gilleard and Davison were each described as lone wolves by police officers, judges and journalists. Yet even a cursory survey of their individual stories undermines this description. Gilleard was the local branch organiser of a neo-Nazi group, while Davison founded the Aryan Strike Force, the members of which went on training days in Cumbria where they flew swastika flags.

Thomas Mair, who was also widely described as a lone wolf, does appear to have been an authentic loner, yet his involvement in rightwing extremism goes back decades. In May 1999, the National Alliance, a white-supremacist organisation in West Virginia, sent Mair manuals that explained how to construct bombs and assemble homemade pistols. Seventeen years later, when police raided his home after the murder, they found stacks of far-right literature, Nazi memorabilia and cuttings on Anders Breivik, the Norwegian terrorist who murdered 77 people in 2011.

 
A government building in Oslo bombed by Anders Breivik, July 2011. Photograph: Scanpix/Reuters

Even Breivik himself, who has been called “the deadliest lone-wolf attacker in [Europe’s] history”, was not a true lone wolf. Prior to his arrest, Breivik had long been in contact with far-right organisations. A member of the English Defence League told the Telegraph that Breivik had been in regular contact with its members via Facebook, and had a “hypnotic” effect on them.

If such facts fit awkwardly with the commonly accepted idea of the lone wolf, they fit better with academic research that has shown that very few violent extremists who launch attacks act without letting others know what they may be planning. In the late 1990s, after realising that in most instances school shooters would reveal their intentions to close associates before acting, the FBI began to talk about “leakage” of critical information. By 2009, it had extended the concept to terrorist attacks, and found that “leakage” was identifiable in more than four-fifths of 80 ongoing cases they were investigating. Of these leaks, 95% were to friends, close relatives or authority figures.

More recent research has underlined the garrulous nature of violent extremists. In 2013, researchers at Pennsylvania State University examined the interactions of 119 lone-wolf terrorists from a wide variety of ideological and faith backgrounds. The academics found that, even though the terrorists launched their attacks alone, in 79% of cases others were aware of the individual’s extremist ideology, and in 64% of cases family and friends were aware of the individual’s intent to engage in terrorism-related activity. Another, more recent, survey found that 45% of Islamic militant cases talked about their inspiration and possible actions with family and friends; only 18% of their rightwing counterparts did so, though they were much more likely to “post telling indicators” on the internet.

Few extremists remain without human contact, even if that contact is only found online. Last year, a team at the University of Miami studied 196 pro-Isis groups operating on social media during the first eight months of 2015. These groups had a combined total of more than 100,000 members. The researchers also found that pro-Isis individuals who were not in a group – whom they dubbed “online ‘lone wolf’ actors” – had either recently been in a group or soon went on to join one.


There is a much broader point here. Any terrorist, however socially or physically isolated, is still part of a broader movement. The lengthy manifesto that Breivik published hours before he started killing drew heavily on a dense ecosystem of far-right blogs, websites and writers. His ideas on strategy drew directly from the “leaderless resistance” school of Beam and others. Even his musical tastes were shaped by his ideology. He was, for example, a fan of Saga, a Swedish white nationalist singer, whose lyrics include lines about “The greatest race to ever walk the earth … betrayed”.

It is little different for Islamic militants, who emerge as often from the fertile and desperately depressing world of online jihadism – with its execution videos, mythologised history, selectively read religious texts and Photoshopped pictures of alleged atrocities against Muslims – as from organised groups that meet in person.

Terrorist violence of all kinds is directed against specific targets. These are not selected at random, nor are such attacks the products of a fevered and irrational imagination operating in complete isolation.

Just like the old idea that a single organisation, al-Qaida, was responsible for all Islamic terrorism, the rise of the lone-wolf paradigm is convenient for many different actors. First, there are the terrorists themselves. The notion that we are surrounded by anonymous lone wolves poised to strike at any time inspires fear and polarises the public. What could be more alarming and divisive than the idea that someone nearby – perhaps a colleague, a neighbour, a fellow commuter – might secretly be a lone wolf?

Terrorist groups also need to work constantly to motivate their activists. The idea of “lone wolves” invests murderous attackers with a special status, even glamour. Breivik, for instance, congratulated himself in his manifesto for becoming a “self-financed and self-indoctrinated single individual attack cell”. Al-Qaida propaganda lauded the 2009 Fort Hood shooter as “a pioneer, a trailblazer, and a role model who has opened a door, lit a path, and shown the way forward for every Muslim who finds himself among the unbelievers”.

The lone-wolf paradigm can be helpful for security services and policymakers, too, since the public assumes that lone wolves are difficult to catch. This would be justified if the popular image of the lone wolf as a solitary actor was accurate. But, as we have seen, this is rarely the case.


Westminster terrorist Khalid Masood. Photograph: Reuters

The reason that many attacks are not prevented is not because it was impossible to anticipate the perpetrator’s actions, but because someone screwed up. German law enforcement agencies were aware that the man who killed 12 in Berlin before Christmas was an Isis sympathiser and had talked about committing an attack. Repeated attempts to deport him had failed, stymied by bureaucracy, lack of resources and poor case preparation. In Britain, a parliamentary report into the killing of Lee Rigby identified a number of serious delays and potential missed opportunities to prevent it. Khalid Masood, the man who attacked Westminster last week, was identified in 2010 as a potential extremist by MI5.

But perhaps the most disquieting explanation for the ubiquity of the term is that it tells us something we want to believe. Yes, the terrorist threat now appears much more amorphous and unpredictable than ever before. At the same time, the idea that terrorists operate alone allows us to break the link between an act of violence and its ideological hinterland. It implies that the responsibility for an individual’s violent extremism lies solely with the individual themselves.

The truth is much more disturbing. Terrorism is not something you do by yourself, it is highly social. People become interested in ideas, ideologies and activities, even appalling ones, because other people are interested in them.

In his eulogy at the funeral of those killed in the mosque shooting in Quebec, the imam Hassan Guillet spoke of the alleged shooter. Over previous days details had emerged of the young man’s life. “Alexandre [Bissonette], before being a killer, was a victim himself,” said Hassan. “Before he planted his bullets in the heads of his victims, somebody planted ideas more dangerous than the bullets in his head. Unfortunately, day after day, week after week, month after month, certain politicians, and certain reporters and certain media, poisoned our atmosphere.

“We did not want to see it … because we love this country, we love this society. We wanted our society to be perfect. We were like some parents who, when a neighbour tells them their kid is smoking or taking drugs, answers: ‘I don’t believe it, my child is perfect.’ We don’t want to see it. And we didn’t see it, and it happened.”

“But,” he went on to say, “there was a certain malaise. Let us face it. Alexandre Bissonnette didn’t emerge from a vacuum.”

Thursday 23 March 2017

Momentum, a convenient sporting myth

Suresh Menon in The Hindu

So Australia has the “momentum” going into the final Test match in Dharamsala.

At least, their skipper Steve Smith thinks so. Had Virat Kohli said that India have the momentum, he would have been right too. The reason is quite simple. “Momentum” does not exist, so you can pour into the word any meaning you want. Sportsmen do it all the time. It is as uplifting as the thought: “I am due a big score” or “the rivals are due a defeat”. Sport does not work that way, but there is consolation in thinking that it does.

“Momentum” is one of our most comforting sporting myths, the favourite of television pundits and newspaper columnists as well as team coaches everywhere. It reaffirms what we love to believe about sport: that winning is a habit, set to continue if unchecked; that confidence is everything, and players carry it from one victory to the next; and above all, that randomness, which is a more fundamental explanation, is anathema. It is at once the loser’s solace and the winner’s excuse. Few streaks transcend random processes. Of course streaks occur — that is the nature of sport. But that is no guide to future  performance.

Momentum, momentum, who’s got the momentum? is a popular sport-within-a-sport. It is a concept that hovers on the verge of meaning, and sounds better than “I have a feeling about this.”

A study in the 1980s by Thomas Gilovich and Amos Tversky raised the question of “hot hands” or streaks in the NBA. They studied the Philadelphia 76ers and found no evidence of momentum. Immediate past success had no bearing on future attempts, just as a coin might fall heads or tails regardless of what the previous toss might have been.
That and later studies — including the probability of the winner of the fourth set winning the fifth too in tennis — confirmed what a coin-tossing logician might have suspected: that momentum, like the unicorn, does not exist.
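
To make the coin-tossing comparison concrete, here is a minimal simulation sketch (an illustration added here, not part of the Gilovich–Tversky study; the 82-game “season”, the 50% win chance and the seven-game threshold are arbitrary assumptions): flip a memoryless fair coin season after season and count how often an impressive winning streak turns up anyway.

```python
import random

def longest_streak(flips):
    """Length of the longest run of consecutive wins (True values)."""
    best = current = 0
    for win in flips:
        current = current + 1 if win else 0
        best = max(best, current)
    return best

random.seed(42)   # fixed seed purely so the run is reproducible
TRIALS = 10_000   # number of simulated seasons
GAMES = 82        # games per season (illustrative)

hot_seasons = 0
for _ in range(TRIALS):
    # Each game is decided by a fair coin: previous results have no influence on the next.
    season = [random.random() < 0.5 for _ in range(GAMES)]
    if longest_streak(season) >= 7:   # call seven straight wins a "hot streak"
        hot_seasons += 1

print(f"Random seasons containing a 7+ game winning streak: {hot_seasons / TRIALS:.1%}")
```

Under these made-up numbers, a sizeable share of purely random seasons still contains a streak of seven or more wins. That is the coin-tossing logician’s point: streaks of that kind need no explanation beyond chance, which is what the studies found when real records were compared against such baselines.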

Statistics and mythology are strange bedfellows, wrote the late Stephen Jay Gould, evolutionary biologist and baseball fan. One can lead to the other over the course of an entire series or even through a single over in cricket.

Gould has also explained the attraction of patterns, and how we are hard-wired to see patterns in randomness. In many cases, patterns can be discerned in retrospect anyway, but only in retrospect. “Momentum” is usually recognised after the event, and seems to be borne of convenience rather than logic.

The momentum in the current series was with India before the matches began. Then they lost the first Test in Pune, and the momentum swung to Australia for the Bengaluru Test, which India then won, grabbing the momentum back.

The third Test was drawn, so the momentum is either with Australia for plucking a draw from the jaws of defeat or with India for pushing Australia to the edge. Such simplistic analyses have kept pundits in business and given “momentum” a respectability and false importance in competitive sport. There is something romantic too in the idea, and many find that irresistible.

Momentum is such a vital component of sport that it has assumed the contours of a tangible object. Fans can reach out and touch it. Teams have it, they carry it, they ride it, they take great comfort from it and work hard to ensure that the opposition does not steal it from them. They carry it from venue to venue like they might their bats and boots and helmets.

To be fair to Steve Smith, what he actually said was “If there’s anything called momentum, it’s with us at the moment,” giving us a glimpse of a measured scepticism: if it exists, then we have it.

Does Peter Handscomb have momentum on his side, after a match-saving half-century in Ranchi? By the same token, does Ravindra Jadeja, after a half-century and nine wickets in the same match? Is team momentum the sum total of all the individual momentums? Will Ravi Ashwin, in that case, begin the final Test with a negative momentum having been less than at his best on the final day in Ranchi? How long before someone decides that momentum is temporary, but skill is permanent?

It is convenient to believe that either one team or the other has the momentum going into the final Test. Yet it is equally possible that those who swing the match with their performance might be players who haven’t been a great success in the series so far.

Someone like fast bowler Pat Cummins, or Virat Kohli himself. A whole grocery list of attributes then becomes more important than momentum: motivation, attitude, desperation, and imponderables that cannot be easily packaged and labeled.

Whichever team wins, momentum will have nothing to do with it. But that will not stop the next captain from telling us that the momentum is with his side. It might seem like blasphemy to disagree with him, so deeply is the concept grouted into our sporting consciousness.

Tuesday 28 February 2017

What does focus mean in cricket?

Simon Barnes in Cricinfo

Cricketers are always talking about focus. So is everybody else in big-time sport. You hear more talk about focus from professional athletes than you do from professional photographers. The difference is that when photographers mention it, there's a general agreement on what they are talking about.

"Hard work, sacrifice and focus will never show up in tests," said Lance Armstrong, making focus unlike most of the other stuff he used. Focus has become a magic word, one used to explain every half-decent performance in sport.

It has also become an interview staple - the right answer to almost any question.

"How do you feel about the shattering on-pitch row that took place today?"

"I just try and stay focused on my batting."

It's a rebuff to the interviewer, a statement of intent and a personal call to order: what matters here is not your story but my batting.



Focus takes in every part of modern sport. It is used in the minutiae of action. A batsman's first job is to focus on the ball, and that involves literal and metaphorical use of the word. Batting is first about looking at the ball - some batsmen mutter "watch the ball" every single time. But there is also a figurative focus. You confine your attention to the action, refusing to get distracted by sledging fielders, the fact that the team is 108 for 7, and that you haven't made double figures for the last five innings.

From here the idea of focus expands beyond the immediate action and takes you to the mindset of the professional athlete. This fearsome thing combines a horror of the past with a straw-clutching concentration on the future. For some, this is a natural state, for others, one that requires painful effort.

Either way, the idea is that concentrating - focusing - on the past is counterproductive. Memories of both success and failure are equally damaging. All that matters is the next match. "I prefer to focus on what is coming next," said the racing driver, Sebastian Vettel, spelling out the way professional athletes school themselves to think.

Focus reflects the idea that you can train your mind, that your mind is as much an instrument of the will as your body. Both can be improved by coaching and training and sheer bloody effort. You can school yourself to "focus on the positives".




While everyone watches you, you watch the ball © Getty Images


So after a horrendous defeat, you talk about the good things it involved. Tim Henman, the British tennis player, was a master at this. "But there's a lot of positive I can take from this," he would say, before leaving Wimbledon at the semi-final stage once again. A focused individual chooses what kind of defeat he endures. The best make defeat a stepping stone to victory. Tim never quite did, of course, but we British never stopped loving him.

Focus can operate over a still wider field. You keep your focus not just on the ball or on the future or on the positives. You also keep focus on your entire life. Don't let outside distractions affect you. Stay focused on football or cricket or running.

So if you shift your focus from golf to cocktail waitresses, you end up like Tiger Woods. The conventional view of Tiger's troubles is that he lost his focus. The fact of the matter is that he had his life in perfect balance. What threw him off was getting found out.
That's because there is a contradiction in the idea of focus. It is normally understood as unrelenting concentration on a single thing, but batsmen maintain their focus by constantly going out of focus. The key to a long innings, as all batsmen will explain, is "switching off" between balls and at the non-striker's end.

In the same way, many male athletes improve dramatically when they become fathers. The loss of focus actually helps. Sport is no longer the only thing or even the most important thing in life. The consequent lessening of intensity - of focus - becomes a positive asset.

Focus has become part of the survival kit of the modern athlete. The focused athlete has become part of 21st-century mythology - a perfect example of what we all need to do if we are to become more successful people. The image (preferably in sharp focus) of a sprinter at the start of a race, or a footballer making contact with the ball, or a batsman in the instant before the ball arrives - these seem to reveal important truths about the way life should be lived. Only focus, and the world is yours!

The myth is that once you have achieved focus you can do just about anything. The word has acquired an almost religious significance, a mystic state of perfect attainment. That's mostly because it can mean more or less anything you choose.

Tuesday 10 February 2015

It's time to tackle the myths in education

Tom Bennett in The Telegraph

Are you a visual learner or a kinaesthetic learner? Perhaps you are an auditory learner? Maybe you learn best when implementing a combination of these 'learning styles'.
Over the past 40 years, the 'learning style' theory has garnered support from professionals across the education community and has become a much-used teaching tool across the UK.
But does the longevity of 'learning styles' and its persistent presence in the classroom actually mean it has any educational value at all? The simple answer is that no one can be sure, because no one has categorically proved the theory one way or the other.
Tom Bennett, teacher, author and Director of researchED, says classrooms across Britain are filled with many such theories that have little grounding in scientific research. According to Bennett, it's time teachers learnt to raise a "sceptical eyebrow".
“We have had all kinds of rubbish thrown at us over the last 10 to 20 years,” he says. “We’ve been told that kids only learn properly in groups. We’ve had people claiming that children learn using brain gym, people saying that kids only learn if you appeal to their learning style. There’s not a scrap of research that substantiates this, and, unfortunately, it is indicative of the really, really dysfunctional state of social science research that exists today.” 
One of the main problems in resolving this issue is the fact that educational theory, unlike the actual sciences, is very difficult to test. How do you find out if the assertion that ‘children learn best in groups’ is actually correct? How do you test the effectiveness of 'homework', when homework can consist of anything from essays to artwork?
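In principle, the test looks simple. Here is a minimal sketch (entirely hypothetical numbers and names, not a real study) of the kind of randomised comparison that would be needed: assign pupils at random to group work or individual work, teach the same material, and compare test scores. The hard part is everything the sketch assumes away - defining 'group work', holding everything else constant, and deciding whether one test measures 'learning' at all.

```python
import random
import statistics

# Entirely hypothetical sketch of a randomised comparison: assign pupils at
# random to 'group work' or 'individual work', then compare mean test scores.
random.seed(1)

pupils = [f"pupil_{i}" for i in range(200)]
random.shuffle(pupils)
group_work, individual_work = pupils[:100], pupils[100:]

# Made-up scores standing in for an end-of-unit test.
scores = {p: random.gauss(60, 12) for p in pupils}

print(f"group work:      {statistics.mean(scores[p] for p in group_work):.1f}")
print(f"individual work: {statistics.mean(scores[p] for p in individual_work):.1f}")
```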
A new fund, launched last year by the Wellcome Trust and the Education Endowment Foundation (EEF), is seeking to answer some of these questions. Six university-led projects have been funded to research how neuroscience can help pupils learn more effectively in the classroom.
While Bennett welcomes the work of the EEF, he says teachers need to be wary of who is leading research projects.
“You hear people say that children must have iPads in order to be 21st century learners, but when you look at the research that tries to substantiate this claim, it’s normally written by iPad manufacturers and technology zealots, and that’s fine, but don’t pretend it’s research," he says. "Children don’t have the time to waste on that rubbish, especially poor children.”
Bennett isn’t the only one to voice these concerns. According to new research by the Organisation for Economic Co-operation and Development (OECD), trillions of dollars are spent on education policies around the world, but just one in 10 are actually evaluated.
Commenting on the research, Andreas Schleicher, OECD director of education and skills, said: "If we want to improve educational outcomes we need to have a much more systematic and evidence-based approach.”
Speaking at the Education World Forum in London, Schleicher added: "We need to make education a lot more of a science."
It seems an obvious statement but, clearly, not one that has been put into practice over the years, with many initiatives left unsubstantiated.
Bennett has been a vocal critic of such educational practices and founded researchED as a way to counter the myths in education and improve research literacy within the education community.
“There are two main things I am calling for here,” he says. “One is that I want to highlight to teachers the rubbish that is out there, so that when someone comes along and says, ‘you should do this to help children learn’ teachers can raise a sceptical eyebrow and say ‘what’s the evidence behind that?’, ‘why should I spend six extra hours a week doing this?’, ‘why should my school spend half a million pounds doing it?’
“These are really important questions; both for ministers looking at education policy, and for team leaders within a school environment.
“The second thing is I would like teachers to engage more with driving good research. At the moment, a lot of research is very distant from the classroom, it’s done by people who don’t understand children, it’s done by people who have never taught. I want teachers to engage more with good research and drive future research.”
One of Bennett’s goals with researchED is to give teachers the opportunity and courage to question research, to be sceptical about practices and to look at the provenance of research before wholly accepting assertions as fact.
The organisation has proved hugely successful since its launch in 2013, growing from an initial conference at Dulwich College to launches in New York and Sydney this year.
It has also led to Bennett being nominated for the inaugural $1 million Varkey Foundation Global Teacher Prize, the largest prize of its kind given to one exceptional teacher in recognition of their contribution to education.
Along with Richard Spencer, a teacher at Middlesbrough College in Billingham, County Durham, Bennett is the only nomination from the UK.
“It’s very strange,” he says. “I certainly don’t feel like one of the top two teachers in the country. There are probably better teachers in my school.
“I like this award, not only because I’ve been nominated, but because it’s a celebration of teachers and raises their status nationally and internationally. All the people on the list – and I’m very honoured to be on the shortlist – have done lots of things outside of the classroom to try and make things better for teaching in general.”
“From my point of view, and to return to my main argument, I want teachers to be a lot more sceptical of what they read, because often the evidence is far less conclusive than people would like to have you believe.
“Really good science tells you when you’re wrong. I’m not saying that people don’t have learning styles, because there is no evidence that we don’t. But, as Richard Dawkins highlighted, ‘you can’t prove a negative’.”

Monday 2 February 2015

Depression and spiritual awakening -- two sides of one door

Depression and spiritual awakening -- two sides of one door (Lisa Miller)

Lessons from the Mental Hospital (Glennon Doyle Melton)

Psychosis or Spiritual Awakening

Friday 9 January 2015

An economy is not like a household budget

Repeat after me: the Australian economy is not like a household budget

Our political and economic thinking has been warped by bad analogies to the point where we can’t see the real economy. The Abbott government is happy to play along
‘National governments with their own currency bear absolutely no resemblance to a household or a business.’ Photograph: Scott Lewis/flickr

To prosecute its economic agenda, the Abbott government has relied on the constant repetition of economic myths. I’ve previously dealt with the myths of the budget emergency, the debt crisis and the endlessly repeated lie that the carbon tax was wrecking the economy – but these are only the most obvious myths and not necessarily the most important.
This week, Mathias Cormann repeated one of the other great myths of modern government financing, saying that it was “unfair to rob our children and grandchildren of their opportunities [in order] to pay for today’s lifestyle”.
The suggestion that future generations will have a reduced standard of living because of our government debt needs some unpacking.
What is it that limits the standard of living of people in 2030? It’s the goods and services that those people can produce. Goods and services cannot be sent back in time in order to pay for past spending. The standard of living of people in 2030 will be a function of the number of workers and their productivity, not of how much debt their government carries from the past. So where does government debt fit in?
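Before answering that, it helps to make the arithmetic concrete. Here is a minimal sketch (a toy model with made-up figures and names of my own, nothing from the article): real output in 2030 is simply workers multiplied by productivity, and the debt inherited from today never enters the calculation.

```python
# Toy model with hypothetical figures, purely for illustration.

def real_output(workers, output_per_worker):
    """Goods and services produced in a year, in constant dollars."""
    return workers * output_per_worker

workers_2030 = 14_000_000          # hypothetical workforce in 2030
output_per_worker_2030 = 95_000    # hypothetical productivity, constant dollars
inherited_debt = 900_000_000_000   # hypothetical nominal debt carried from today

print(f"Real output in 2030: ${real_output(workers_2030, output_per_worker_2030):,}")
# inherited_debt appears nowhere above: what 2030 can consume is what 2030 can produce.
```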
As I’ve explained elsewhere, the finances of a sovereign government with its own fiat currency bear absolutely no resemblance to the finances of a household or a business. The federal government can create money. They don’t create all of the money that they need for all their expenses because that would cause out-of-control inflation.
The obvious conclusion to be drawn from these two uncontroversial facts is that taxation and borrowing are not the limiting factors on government expenditure, inflation is. Acknowledging this completely turns the mainstream commentary on government financing on its head.
The federal government does not need anybody else’s money in the form of taxation or borrowing in order to spend. They can create money. The reason they tax and borrow is to take money out of the economy so that their spending does not cause inflation or affect official interest rates. In other words, taxation and government debt are tools for economic management, not for revenue raising.
You may have to sit with all this for a moment and calm the voice in your head that is telling you it can’t possibly be true. Our political and economic thinking has been so thoroughly colonised by the finance industry that we often find it difficult to see the real economy. The real economy is the labour of workers combined with capital and land to produce goods and services.
How did the massive postwar government debts affect the lives of people living in the 1950s and 60s? They didn’t. These are often referred to as the “golden years”, when inequality fell and the standard of living rose at a dramatic pace. Could the workers in the postwar years send their goods and services back in time to support or pay for the war effort? Of course not; it’s a ludicrous proposition. Abbott and Hockey’s suggestion that future generations will suffer because of today’s government spending is just as ludicrous.
The only way in the real economy that future generations can suffer because of today’s government debt is if the government raises taxes or cuts spending in order to repay the debt and this causes higher unemployment. This is never necessary and governments who advocate this (like the Abbott government) have fallen prey to household finance analogies.
While there is spare capacity in the economy, inflation risk is low and there is room for greater government expenditure. One simplistic measure of spare capacity is unemployment. While there is excess unemployment, there is room for more (targeted) government expenditure. In other words, sovereign governments always have the capacity to maintain low levels of unemployment if they use inflation as their expenditure cap rather than taxes and borrowing.
If unemployment is the only price future generations pay for today’s government debt and the government can always lower unemployment by more spending, what’s the impact on future generations of government debt? None. Why then don’t we just go on a massive spending spree and have huge debts? Because spending beyond the productive capacity of the real economy would cause inflation.
The costs of too much government expenditure are felt immediately afterwards in the form of inflation and are not borne by future generations.
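That trade-off can be put in rough numbers with a minimal sketch (my own toy model with hypothetical figures and function names; nothing here comes from the article or any official data): while spare capacity remains, extra spending buys real output, and only the excess beyond capacity shows up as inflationary pressure - today, not in 2030.

```python
# Toy sketch with hypothetical figures: extra government spending raises real
# output only while spare capacity exists; anything beyond capacity shows up
# as inflationary pressure now, not as a bill handed to future generations.

def effect_of_extra_spending(extra_spending, spare_capacity):
    """Split extra spending into a real-output gain and an inflationary excess."""
    real_gain = min(extra_spending, spare_capacity)
    inflationary_excess = max(0, extra_spending - spare_capacity)
    return real_gain, inflationary_excess

spare_capacity = 50_000_000_000  # hypothetical: output the unemployed could produce

for spending in (20_000_000_000, 50_000_000_000, 80_000_000_000):
    gain, excess = effect_of_extra_spending(spending, spare_capacity)
    print(f"extra spending {spending:>14,}: real gain {gain:>14,}, inflationary excess {excess:>14,}")
```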
Hopefully now you can see the full picture. Government expenditure today is not limited by taxation or borrowing but by inflation risk. Government expenditure in 2030 will not be limited by taxation, borrowing or previous debt but by inflation risk. When you’re first presented with these facts it can seem like a magic pudding or a perpetual motion machine but that’s just because we’re used to thinking about finances from a household or business perspective.
National governments with their own currency bear absolutely no resemblance to a household or a business. All of the frequently used analogies give a distorted picture of the reality of government finances. To get a clear picture you need to peel back all the layers of finance speak and look at the real economy.
There are many important conversations and debates we should be having about government finances, the role of government, productivity, consumption and leisure. We cannot have them while the government and media commentators perpetuate myths about how our economy actually functions. Ultimately the material standard of living of future generations is going to depend on the productivity of workers and on a safe environment and climate. Now there’s a policy conversation worth having.