
Thursday 30 March 2017

The myth of the ‘lone wolf’ terrorist

Jason Burke in The Guardian


At around 8pm on Sunday 29 January, a young man walked into a mosque in the Sainte-Foy neighbourhood of Quebec City and opened fire on worshippers with a 9mm handgun. The imam had just finished leading the congregation in prayer when the intruder started shooting at them. He killed six people and injured 19 others. The dead included an IT specialist employed by the city council, a grocer, and a science professor.

The suspect, Alexandre Bissonnette, a 27-year-old student, has been charged with six counts of murder, though not terrorism. Within hours of the attack, Ralph Goodale, the Canadian minister for public safety, described the killer as “a lone wolf”. His statement was rapidly picked up by the world’s media.

Goodale’s statement came as no surprise. In early 2017, well into the second decade of the most intense wave of international terrorism since the 1970s, the lone wolf has, for many observers, come to represent the most urgent security threat faced by the west. The term, which describes an individual actor who strikes alone and is not affiliated with any larger group, is now widely used by politicians, journalists, security officials and the general public. It is used for Islamic militant attackers and, as the shooting in Quebec shows, for killers with other ideological motivations. Within hours of the news breaking of an attack on pedestrians and a policeman in central London last week, it was used to describe the 52-year-old British convert responsible. Yet few beyond the esoteric world of terrorism analysis appear to give this almost ubiquitous term much thought.

Terrorism has changed dramatically in recent years. Attacks by groups with defined chains of command have become rarer, as the prevalence of terrorist networks, autonomous cells, and, in rare cases, individuals, has grown. This evolution has prompted a search for a new vocabulary, as it should. The label that seems to have been decided on is “lone wolves”. They are, we have been repeatedly told, “Terror enemy No 1”.

Yet using the term as liberally as we do is a mistake. Labels frame the way we see the world, and thus influence attitudes and eventually policies. Using the wrong words to describe problems that we need to understand distorts public perceptions, as well as the decisions taken by our leaders. Lazy talk of “lone wolves” obscures the real nature of the threat against us, and makes us all less safe.

The image of the lone wolf who splits from the pack has been a staple of popular culture since the 19th century, cropping up in stories about empire and exploration from British India to the wild west. From 1914 onwards, the term was popularised by a bestselling series of crime novels and films centred upon a criminal-turned-good-guy nicknamed Lone Wolf. Around that time, it also began to appear in US law enforcement circles and newspapers. In April 1925, the New York Times reported on a man who “assumed the title of ‘Lone Wolf’”, who terrorised women in a Boston apartment building. But it would be many decades before the term came to be associated with terrorism.

In the 1960s and 1970s, waves of rightwing and leftwing terrorism struck the US and western Europe. It was often hard to tell who was responsible: hierarchical groups, diffuse networks or individuals effectively operating alone. Still, the majority of actors belonged to organisations modelled on existing military or revolutionary groups. Lone actors were seen as eccentric oddities, not as the primary threat.

The modern concept of lone-wolf terrorism was developed by rightwing extremists in the US. In 1983, at a time when far-right organisations were coming under immense pressure from the FBI, a white nationalist named Louis Beam published a manifesto that called for “leaderless resistance” to the US government. Beam, who was a member of both the Ku Klux Klan and the Aryan Nations group, was not the first extremist to elaborate the strategy, but he is one of the best known. He told his followers that only a movement based on “very small or even one-man cells of resistance … could combat the most powerful government on earth”.

 
Oklahoma City bomber Timothy McVeigh leaves court, 1995. Photograph: David Longstreath/AP

Experts still argue over how much impact the thinking of Beam and other like-minded white supremacists had on rightwing extremists in the US. Timothy McVeigh, who killed 168 people with a bomb directed at a government office in Oklahoma City in 1995, is sometimes cited as an example of someone inspired by their ideas. But McVeigh had told others of his plans, had an accomplice, and had been involved for many years with rightwing militia groups. McVeigh may have thought of himself as a lone wolf, but he was not one.

One far-right figure who made explicit use of the term lone wolf was Tom Metzger, the leader of White Aryan Resistance, a group based in Indiana. Metzger is thought to have authored, or at least published on his website, a call to arms entitled “Laws for the Lone Wolf”. “I am preparing for the coming War. I am ready when the line is crossed … I am the underground Insurgent fighter and independent. I am in your neighborhoods, schools, police departments, bars, coffee shops, malls, etc. I am, The Lone Wolf!,” it reads.

From the mid-1990s onwards, as Metzger’s ideas began to spread, the number of hate crimes committed by self-styled “leaderless” rightwing extremists rose. In 1998, the FBI launched Operation Lone Wolf against a small group of white supremacists on the US west coast. A year later, Alex Curtis, a young, influential rightwing extremist and protege of Metzger, told his hundreds of followers in an email that “lone wolves who are smart and commit to action in a cold-mannered way can accomplish virtually any task before them ... We are already too far along to try to educate the white masses and we cannot worry about [their] reaction to lone wolf/small cell strikes.”

The same year, the New York Times published a long article on the new threat headlined “New Face of Terror Crimes: ‘Lone Wolf’ Weaned on Hate”. This seems to have been the moment when the idea of terrorist “lone wolves” began to migrate from rightwing extremist circles, and the law enforcement officials monitoring them, to the mainstream. In court on charges of hate crimes in 2000, Curtis was described by prosecutors as an advocate of lone-wolf terrorism.

When, more than a decade later, the term finally became a part of the everyday vocabulary of millions of people, it was in a dramatically different context.

After 9/11, lone-wolf terrorism suddenly seemed like a distraction from more serious threats. The 19 men who carried out the attacks were jihadis who had been hand picked, trained, equipped and funded by Osama bin Laden, the leader of al-Qaida, and a small group of close associates.

Although 9/11 was far from a typical terrorist attack, it quickly came to dominate thinking about the threat from Islamic militants. Security services built up organograms of terrorist groups. Analysts focused on individual terrorists only insofar as they were connected to bigger entities. Personal relations – particularly friendships based on shared ambitions and battlefield experiences, as well as tribal or familial links – were mistaken for institutional ones, formally connecting individuals to organisations and placing them under a chain of command.



This approach suited the institutions and individuals tasked with carrying out the “war on terror”. For prosecutors, who were working with outdated legislation, proving membership of a terrorist group was often the only way to secure convictions of individuals planning violence. For a number of governments around the world – Uzbekistan, Pakistan, Egypt – linking attacks on their soil to “al-Qaida” became a way to shift attention away from their own brutality, corruption and incompetence, and to gain diplomatic or material benefits from Washington. For some officials in Washington, linking terrorist attacks to “state-sponsored” groups became a convenient way to justify policies, such as the continuing isolation of Iran, or military interventions such as the invasion of Iraq. For many analysts and policymakers, who were heavily influenced by the conventional wisdom on terrorism inherited from the cold war, thinking in terms of hierarchical groups and state sponsors was comfortably familiar.

A final factor was more subtle. Attributing the new wave of violence to a single group not only obscured the deep, complex and troubling roots of Islamic militancy but also suggested the threat it posed would end when al-Qaida was finally eliminated. This was reassuring, both for decision-makers and the public.

By the middle of the decade, it was clear that this analysis was inadequate. Bombs in Bali, Istanbul and Mombasa were the work of centrally organised attackers, but the 2004 attack on trains in Madrid had been executed by a small network only tenuously connected to the al-Qaida senior leadership 4,000 miles away. For every operation like the 2005 bombings in London – which was close to the model established by the 9/11 attacks – there were more attacks that didn’t seem to have any direct link to Bin Laden, even if they might have been inspired by his ideology. There was growing evidence that the threat from Islamic militancy was evolving into something different, something closer to the “leaderless resistance” promoted by white supremacists two decades earlier.

As the 2000s drew to a close, attacks perpetrated by people who seemed to be acting alone began to outnumber all others. These events were less deadly than the spectacular strikes of a few years earlier, but the trend was alarming. In the UK in 2008, a convert to Islam with mental health problems attempted to blow up a restaurant in Exeter, though he injured no one but himself. In 2009, a US army major shot 13 dead in Fort Hood, Texas. In 2010, a female student stabbed an MP in London. None appeared, initially, to have any broader connections to the global jihadi movement.

In an attempt to understand how this new threat had developed, analysts raked through the growing body of texts posted online by jihadi thinkers. It seemed that one strategist had been particularly influential: a Syrian called Mustafa Setmariam Nasar, better known as Abu Musab al-Suri. In 2004, in a sprawling set of writings posted on an extremist website, Nasar had laid out a new strategy that was remarkably similar to “leaderless resistance”, although there is no evidence that he knew of the thinking of men such as Beam or Metzger. Nasar’s maxim was “Principles, not organisations”. He envisaged individual attackers and cells, guided by texts published online, striking targets across the world.

Having identified this new threat, security officials, journalists and policymakers needed a new vocabulary to describe it. The rise of the term lone wolf wasn’t wholly unprecedented. In the aftermath of 9/11, the US had passed anti-terror legislation that included a so-called “lone wolf provision”. This made it possible to pursue terrorists who were members of groups based abroad but who were acting alone in the US. Yet this provision conformed to the prevailing idea that all terrorists belonged to bigger groups and acted on orders from their superiors. The stereotype of the lone wolf terrorist that dominates today’s media landscape was not yet fully formed.

It is hard to be exact about when things changed. By around 2006, a small number of analysts had begun to refer to lone-wolf attacks in the context of Islamic militancy, and Israeli officials were using the term to describe attacks by apparently solitary Palestinian attackers. Yet these were outliers. In researching this article, I called eight counter-terrorism officials active over the last decade to ask them when they had first heard references to lone-wolf terrorism. One said around 2008, three said 2009, three 2010 and one around 2011. “The expression is what gave the concept traction,” Richard Barrett, who held senior counter-terrorist positions in MI6, the British overseas intelligence service, and the UN through the period, told me. Before the rise of the lone wolf, security officials used phrases – all equally flawed – such as “homegrowns”, “cleanskins”, “freelancers” or simply “unaffiliated”.

As successive jihadi plots were uncovered that did not appear to be linked to al-Qaida or other such groups, the term became more common. Between 2009 and 2012 it appeared in around 300 articles in major English-language news publications each year, according to the professional cuttings search engine Lexis Nexis. Since then, the term has become ubiquitous. In the 12 months before the London attack last week, the number of references to “lone wolves” exceeded the total of those over the previous three years, topping 1,000.

Lone wolves are now apparently everywhere, stalking our streets, schools and airports. Yet, as with the tendency to attribute all terrorist attacks to al-Qaida a decade earlier, this is a dangerous simplification.

In March 2012, a 23-year-old petty criminal named Mohamed Merah went on a shooting spree – a series of three attacks over a period of nine days – in south-west France, killing seven people. Bernard Squarcini, head of the French domestic intelligence service, described Merah as a lone wolf. So did the interior ministry spokesman, and, inevitably, many journalists. A year later, Lee Rigby, an off-duty soldier, was run over and hacked to death in London. Once again, the two attackers were dubbed lone wolves by officials and the media. So, too, were Dzhokhar and Tamerlan Tsarnaev, the brothers who bombed the Boston Marathon in 2013. The same label has been applied to more recent attackers, including the men who drove vehicles into crowds in Nice and Berlin last year, and in London last week.


The Boston Marathon bombing carried out by Dzhokhar and Tamerlan Tsarnaev in 2013. Photograph: Dan Lampariello/Reuters

One problem facing security services, politicians and the media is that instant analysis is difficult. It takes months to unravel the truth behind a major, or even minor, terrorist operation. The demand for information from a frightened public, relayed by a febrile news media, is intense. People seek quick, familiar explanations.

Yet many of the attacks that have been confidently identified as lone-wolf operations have turned out to be nothing of the sort. Very often, terrorists who are initially labelled lone wolves have active links to established groups such as Islamic State and al-Qaida. Merah, for instance, had recently travelled to Pakistan and been trained, albeit cursorily, by a jihadi group allied with al-Qaida. He was also linked to a network of local extremists, some of whom went on to carry out attacks in Libya, Iraq and Syria. Bernard Cazeneuve, who was then the French interior minister, later agreed that calling Merah a lone wolf had been a mistake.

If, in cases such as Merah’s, the label of lone wolf is plainly incorrect, there are other, more subtle cases where it is still highly misleading. Another category of attackers, for instance, consists of those who strike alone, without guidance from formal terrorist organisations, but who have had face-to-face contact with loose networks of people who share extremist beliefs. The Exeter restaurant bomber, dismissed as an unstable loner, was actually in contact with a circle of local militant sympathisers before his attack. (They have never been identified.) The killers of Lee Rigby had been on the periphery of extremist movements in the UK for years, appearing at rallies of groups such as the now proscribed al-Muhajiroun, run by Anjem Choudary, a preacher convicted of terrorist offences in 2016 who is reported to have “inspired” up to 100 British militants.

A third category is made up of attackers who strike alone, after having had close contact online, rather than face-to-face, with extremist groups or individuals. A wave of attackers in France last year were, at first, wrongly seen as lone wolves “inspired” rather than commissioned by Isis. It soon emerged that the individuals involved, such as the two teenagers who killed a priest in front of his congregation in Normandy, had been recruited online by a senior Isis militant. In three recent incidents in Germany, all initially dubbed “lone-wolf attacks”, Isis militants actually used messaging apps to direct recruits in the minutes before they attacked. “Pray that I become a martyr,” one attacker who assaulted passengers on a train with an axe and knife told his interlocutor. “I am now waiting for the train.” Then: “I am starting now.”

Very often, what appear to be the clearest lone-wolf cases are revealed to be more complex. Even the strange case of the man who killed 86 people with a truck in Nice in July 2016 – with his background of alcohol abuse, casual sex and lack of apparent interest in religion or radical ideologies – may not be a true lone wolf. Eight of his friends and associates have been arrested and police are investigating his potential links to a broader network.

What research does show is that we may be more likely to find lone wolves among far-right extremists than among their jihadi counterparts – though even in those cases, the term still conceals more than it reveals.

The murder of the Labour MP Jo Cox, days before the EU referendum, by a 52-year-old called Thomas Mair, was the culmination of a steady intensification of rightwing extremist violence in the UK that had been largely ignored by the media and policymakers. According to police, on several occasions attackers came close to causing more casualties in a single operation than jihadis had ever inflicted. The closest call came in 2013 when Pavlo Lapshyn, a Ukrainian PhD student in the UK, planted a bomb outside a mosque in Tipton, West Midlands. Fortunately, Lapshyn had got his timings wrong and the congregation had yet to gather when the device exploded. Embedded in the trunks of trees surrounding the building, police found some of the 100 nails Lapshyn had added to the bomb to make it more lethal.

Lapshyn was a recent arrival, but the UK has produced numerous homegrown far-right extremists in recent years. One was Martyn Gilleard, who was sentenced to 16 years for terrorism and child pornography offences in 2008. When officers searched his home in Goole, East Yorkshire, they found knives, guns, machetes, swords, axes, bullets and four nail bombs. A year later, Ian Davison became the first Briton convicted under new legislation dealing with the production of chemical weapons. Davison was sentenced to 10 years in prison for manufacturing ricin, a lethal biological poison made from castor beans. His aim, the court heard, was “the creation of an international Aryan group who would establish white supremacy in white countries”.

Lapshyn, Gilleard and Davison were each described as lone wolves by police officers, judges and journalists. Yet even a cursory survey of their individual stories undermines this description. Gilleard was the local branch organiser of a neo-Nazi group, while Davison founded the Aryan Strike Force, the members of which went on training days in Cumbria where they flew swastika flags.

Thomas Mair, who was also widely described as a lone wolf, does appear to have been an authentic loner, yet his involvement in rightwing extremism goes back decades. In May 1999, the National Alliance, a white-supremacist organisation in West Virginia, sent Mair manuals that explained how to construct bombs and assemble homemade pistols. Seventeen years later, when police raided his home after the murder, they found stacks of far-right literature, Nazi memorabilia and cuttings on Anders Breivik, the Norwegian terrorist who murdered 77 people in 2011.

 
A government building in Oslo bombed by Anders Breivik, July 2011. Photograph: Scanpix/Reuters

Even Breivik himself, who has been called “the deadliest lone-wolf attacker in [Europe’s] history”, was not a true lone wolf. Prior to his arrest, Breivik had long been in contact with far-right organisations. A member of the English Defence League told the Telegraph that Breivik had been in regular contact with its members via Facebook, and had a “hypnotic” effect on them.

If such facts fit awkwardly with the commonly accepted idea of the lone wolf, they fit better with academic research that has shown that very few violent extremists who launch attacks act without letting others know what they may be planning. In the late 1990s, after realising that in most instances school shooters would reveal their intentions to close associates before acting, the FBI began to talk about “leakage” of critical information. By 2009, it had extended the concept to terrorist attacks, and found that “leakage” was identifiable in more than four-fifths of 80 ongoing cases they were investigating. Of these leaks, 95% were to friends, close relatives or authority figures.

More recent research has underlined the garrulous nature of violent extremists. In 2013, researchers at Pennsylvania State University examined the interactions of 119 lone-wolf terrorists from a wide variety of ideological and faith backgrounds. The academics found that, even though the terrorists launched their attacks alone, in 79% of cases others were aware of the individual’s extremist ideology, and in 64% of cases family and friends were aware of the individual’s intent to engage in terrorism-related activity. Another, more recent survey found that 45% of the Islamic militants studied had talked about their inspiration and possible actions with family and friends. Only 18% of their rightwing counterparts did so, but they were much more likely to “post telling indicators” on the internet.

Few extremists remain without human contact, even if that contact is only found online. Last year, a team at the University of Miami studied 196 pro-Isis groups operating on social media during the first eight months of 2015. These groups had a combined total of more than 100,000 members. Researchers also found that pro-Isis individuals who were not in a group – whom they dubbed “online ‘lone wolf’ actors” – had either recently been in a group or soon went on to join one.


There is a much broader point here. Any terrorist, however socially or physically isolated, is still part of a broader movement. The lengthy manifesto that Breivik published hours before he started killing drew heavily on a dense ecosystem of far-right blogs, websites and writers. His ideas on strategy drew directly from the “leaderless resistance” school of Beam and others. Even his musical tastes were shaped by his ideology. He was, for example, a fan of Saga, a Swedish white nationalist singer, whose lyrics include lines about “The greatest race to ever walk the earth … betrayed”.

It is little different for Islamic militants, who emerge as often from the fertile and desperately depressing world of online jihadism – with its execution videos, mythologised history, selectively read religious texts and Photoshopped pictures of alleged atrocities against Muslims – as from organised groups that meet in person.

Terrorist violence of all kinds is directed against specific targets. These are not selected at random, nor are such attacks the products of a fevered and irrational imagination operating in complete isolation.

Just like the old idea that a single organisation, al-Qaida, was responsible for all Islamic terrorism, the rise of the lone-wolf paradigm is convenient for many different actors. First, there are the terrorists themselves. The notion that we are surrounded by anonymous lone wolves poised to strike at any time inspires fear and polarises the public. What could be more alarming and divisive than the idea that someone nearby – perhaps a colleague, a neighbour, a fellow commuter – might secretly be a lone wolf?

Terrorist groups also need to work constantly to motivate their activists. The idea of “lone wolves” invests murderous attackers with a special status, even glamour. Breivik, for instance, congratulated himself in his manifesto for becoming a “self-financed and self-indoctrinated single individual attack cell”. Al-Qaida propaganda lauded the 2009 Fort Hood shooter as “a pioneer, a trailblazer, and a role model who has opened a door, lit a path, and shown the way forward for every Muslim who finds himself among the unbelievers”.

The lone-wolf paradigm can be helpful for security services and policymakers, too, since the public assumes that lone wolves are difficult to catch. This would be justified if the popular image of the lone wolf as a solitary actor was accurate. But, as we have seen, this is rarely the case.


Westminster terrorist Khalid Masood. Photograph: Reuters

The reason that many attacks are not prevented is not because it was impossible to anticipate the perpetrator’s actions, but because someone screwed up. German law enforcement agencies were aware that the man who killed 12 in Berlin before Christmas was an Isis sympathiser and had talked about committing an attack. Repeated attempts to deport him had failed, stymied by bureaucracy, lack of resources and poor case preparation. In Britain, a parliamentary report into the killing of Lee Rigby identified a number of serious delays and potential missed opportunities to prevent it. Khalid Masood, the man who attacked Westminster last week, was identified in 2010 as a potential extremist by MI5.

But perhaps the most disquieting explanation for the ubiquity of the term is that it tells us something we want to believe. Yes, the terrorist threat now appears much more amorphous and unpredictable than ever before. At the same time, the idea that terrorists operate alone allows us to break the link between an act of violence and its ideological hinterland. It implies that the responsibility for an individual’s violent extremism lies solely with the individual themselves.

The truth is much more disturbing. Terrorism is not something you do by yourself; it is highly social. People become interested in ideas, ideologies and activities, even appalling ones, because other people are interested in them.

In his eulogy at the funeral of those killed in the mosque shooting in Quebec, the imam Hassan Guillet spoke of the alleged shooter. Over previous days details had emerged of the young man’s life. “Alexandre [Bissonnette], before being a killer, was a victim himself,” said Guillet. “Before he planted his bullets in the heads of his victims, somebody planted ideas more dangerous than the bullets in his head. Unfortunately, day after day, week after week, month after month, certain politicians, and certain reporters and certain media, poisoned our atmosphere.

“We did not want to see it … because we love this country, we love this society. We wanted our society to be perfect. We were like some parents who, when a neighbour tells them their kid is smoking or taking drugs, answer: ‘I don’t believe it, my child is perfect.’ We don’t want to see it. And we didn’t see it, and it happened.”

“But,” he went on to say, “there was a certain malaise. Let us face it. Alexandre Bissonnette didn’t emerge from a vacuum.”

Wednesday 29 March 2017

A world without retirement

Amelia Hill in The Guardian


We are entering the age of no retirement. The journey into that chilling reality is not a long one: the first generation who will experience it are now in their 40s and 50s. They grew up assuming they could expect the kind of retirement their parents enjoyed – stopping work in their mid-60s on a generous income, with time and good health enough to fulfil long-held dreams. For them, it may already be too late to make the changes necessary to retire at all.

In 2010, British women got their state pension at 60 and men got theirs at 65. By October 2020, both sexes will have to wait until they are 66. By 2028, the age will rise again, to 67. And the creep will continue. By the early 2060s, people will still be working in their 70s, but according to research, we will all need to keep working into our 80s if we want to enjoy the same standard of retirement as our parents.

This is what a world without retirement looks like. Workers will be unable to down tools, even when they can barely hold them with hands gnarled by age-related arthritis. The raising of the state retirement age will create a new social inequality. Those living in areas in which the average life expectancy is lower than the state retirement age (south-east England has the highest average life expectancy, Scotland the lowest) will subsidise those better off by dying before they can claim the pension they have contributed to throughout their lives. In other words, wealthier people become beneficiaries of what remains of the welfare state.

Retirement is likely to be sustained in recognisable form in the short and medium term. Looming on the horizon, however, is a complete dismantling of this safety net.

For those of pensionable age who cannot afford to retire, but cannot continue working – because of poor health, or ageing parents who need care, or because potential employers would rather hire younger workers – the great progress Britain has made in tackling poverty among the elderly over the last two decades will be reversed. This group is liable to suffer the sort of widespread poverty not seen in Britain for 30 to 40 years.

Many now in their 20s will be unable to save throughout their youth and middle age because of increasingly casualised employment, student debt and rising property prices. By the time they are old, members of this new generation of poor pensioners are liable to be, on average, far worse off than the average poor pensioner today.

A series of factors has contributed to this situation: increased life expectancy, woeful pension planning by successive governments, the end of the final-salary pension scheme (in which people got two-thirds of their final salary as a pension) and our own failure to save.

For two months, as part of an experiment by the Guardian in collaborative reporting, I have been investigating what retirement looks like today – and what it might look like for the next wave of retirees, their children and grandchildren. The evidence reveals a sinkhole beneath the state’s provision of pensions. Under the weight of our vastly increased longevity, retirement – one of our most cherished institutions – is in danger of collapsing into it.




Many of those contemplating retirement are alarmed by the new landscape. A 62-year-old woman, who is for the first time in her life struggling to pay her mortgage (and wishes to remain anonymous), told me: “I am more stressed now than I was in my 30s. I lived on a very tight budget then, but I was young and could cope emotionally. I don’t mean to sound bitter, but I never thought I would feel this scared of the future at my age. I’m not remotely materialistic and have never wanted a fancy lifestyle. But not knowing if I will be without a home in the next few months is a very scary place to be.”

And it is not just the older generation who fear old age. Adam Palfrey is 30, with three children and a disabled wife who cannot work. “I must confess, I am absolutely terrified of retirement,” he told me. “I have nothing stashed away. Savings are out of the question. I only just earn enough that, with housing benefit, disability living allowance and tax credits, I manage to keep our heads above water. I work every hour I can just to keep things afloat. There’s no way I could keep this up aged 70-plus, just so that my partner and I can live a basic life. As for my three children … God knows. I can scarcely bring myself to think about it.”

It is not news that the population is ageing. What is remarkable is that we have failed to prepare the ground for this inevitable change. Life expectancy in Britain is growing by a dramatic five hours a day. Thanks to a period of relative peace in the UK, low infant mortality and continual medical advances, over the past two decades the life expectancy of babies born here has increased by some five years. (A baby born at the end of my eight-week series, The new retirement, has a life expectancy almost 12 days longer than a baby born at the start of it.)
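The arithmetic behind those figures is easy to check. Here is a minimal back-of-the-envelope sketch, assuming only the quoted rate of five extra hours of life expectancy per day and an eight-week (56-day) series; the numbers are illustrative, not official statistics.

```python
# Back-of-the-envelope check of the life-expectancy figures quoted above.
# Assumes the quoted rate of five extra hours of life expectancy per day
# and an eight-week (56-day) series; purely illustrative.

HOURS_GAINED_PER_DAY = 5
SERIES_DAYS = 8 * 7  # eight weeks

extra_days_over_series = HOURS_GAINED_PER_DAY * SERIES_DAYS / 24        # ~11.7, i.e. "almost 12 days"
extra_years_over_two_decades = HOURS_GAINED_PER_DAY * 20 / 24           # ~4.2, roughly the "five years" quoted

print(f"Gain over the eight-week series: {extra_days_over_series:.1f} days")
print(f"Gain over two decades: {extra_years_over_two_decades:.1f} years")
```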


Dr Peter Jarvis and Sue Perkins at Bletchley Park. Photograph: Linda Nylind for the Guardian

In 2014, the average age of the UK population exceeded 40 for the first time – up from 33.9 in 1974. In little more than a decade, half of the country’s population will be aged over 50. This will transform Britain – and it is no mere blip; the trend will continue as life expectancy increases. This year marked a demographic turning point in the UK. As the baby-boom generation (now aged between 53 and 71) entered retirement, for the first time since the early 1980s there were more people either too old or too young to work than there were of working age.

The number of people in the UK aged 85 or more is expected to more than double in the next 25 years. By 2040, nearly one in seven Britons will be over 75. Half of all children born in the UK are predicted to live to 103. Some 10 million of us currently alive in the UK (and 130 million throughout Europe) are likely to live past the age of 100.



The challenges are considerable. The tax imbalance that comes with an ageing population, whose tax contribution falls far short of their use of services, will rise to £15bn a year by 2060. Covering this gap will cost the equivalent of a 4p income tax rise for the working-age population.

It is easy to see why governments might regard raising the state retirement age as a way to cover the cost of an ageing population. A successful pursuit of full employment of people into their late 60s could maintain the ratio of workers to non-workers for many decades to come. And were the employment rate for older workers to match that of the 30-40 age group, the additional tax payments could be as much as £88.4bn. According to PwC’s Golden Age Index, had our employment rates for those aged 55 years and older been as high as those in Sweden between 2003 and 2013, UK national GDP would have been £105bn – or 5.8% – higher.

There are, of course, problems with this approach. Those who can happily work into their 70s and beyond are likely to be the privileged few: the highly educated elite who haven’t spent their working lives in jobs that negatively affect their health. If the state pension age is pushed further away, life will become very difficult for those with failing health, family responsibilities or no jobs.

The new state pension, introduced on 6 April 2016, will be paid to men born on or after 6 April 1951, and women born on or after 6 April 1953. Assuming you have paid 35 years of National Insurance, it will pay out £155.65 a week. The old scheme (worth a basic sum of £119.30 per week, with more for those who paid into additional state pension schemes such as Serps or S2P) applies to those born before those dates.
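For reference, those weekly rates annualise as follows – a minimal sketch, assuming 52 weekly payments a year; this is where the “just over £8,000 a year” figure quoted below comes from.

```python
# Annualising the weekly state pension rates quoted above.
# Assumes 52 weekly payments a year; rates are the published 2016-17 figures.

NEW_STATE_PENSION_WEEKLY = 155.65   # new state pension (per week)
OLD_BASIC_PENSION_WEEKLY = 119.30   # old basic state pension (per week)

new_annual = NEW_STATE_PENSION_WEEKLY * 52   # ~£8,094 a year
old_annual = OLD_BASIC_PENSION_WEEKLY * 52   # ~£6,204 a year

print(f"New state pension:       £{new_annual:,.2f} a year")
print(f"Old basic state pension: £{old_annual:,.2f} a year")
```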

Frank Field, Labour MP and chair of the work and pensions select committee, told me that the new figure of just over £8,000 a year is enough to guarantee all pensioners a decent standard of living: an “adequate minimum”, as he put it. Anything above that, he said, should be privately funded, without tax breaks or other government help.

“Once the minimum has been reached, it’s not the job of government to bribe people to save more,” he says. “To provide luxurious pension payments was never the aim of the state pension.”

Whether the new state pension can really be described as a “comfortable minimum” turns out to be a matter of opinion. Dr Ros Altmann, who was brought into government in April 2015 to work on pensions policy, is the UK government’s former older workers’ champion and a governor of the Pensions Policy Institute. When I relayed Field’s comments to her, she was left briefly speechless. Then she managed a “wow”. “Did he really say that? Would he be happy to live on just over £8,000 a year?” she asked, finally.

Tom McPhail, head of retirement policy at financial advisers Hargreaves Lansdown, is clear that the new state pension has not been set at a high-enough level to guarantee a dignified older age to those who have no other income. “How sufficient is the new state pension? That’s an easy one to answer: It’s not,” he said.

Field makes the assumption that people have enough additional private financial ballast to bolster their state pensions. But the reality is that many people have neither savings – nearly a third of all households would struggle to pay an unexpected £500 bill – nor sufficient private pension provision to bring their state pension entitlement up to a level to ensure a comfortable retirement by most people’s understanding of the term. In fact, savings are the great dividing line in retirement, and the scale of the so-called “pension gap” – the gap between what your pension pot will pay out and the amount you need to live comfortably in older age – is shocking.

Three in 10 Britons aged 55-64 do not have any pension savings at all. Almost half of those in their 30s and 40s are not saving adequately or at all. In part, that is because we underestimate the amount of money we need to save. According to research by Saga earlier this month, four in 10 of those aged over 40 have no idea of the cost of even a basic lifestyle in retirement. When it came to understanding the size of the total pension pot they would need to fund retirement, over 80% admitted they had no idea how big this would need to be.
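To make the idea of the pension gap concrete, here is a minimal sketch of the underlying arithmetic. The target income and annuity rate are hypothetical assumptions chosen for illustration, not figures from the article; only the state pension rate is taken from the text above.

```python
# Illustrative "pension gap" calculation: the shortfall between a target
# retirement income and what the state pension plus a private pot would pay.
# TARGET_INCOME and ANNUITY_RATE are hypothetical assumptions, not figures
# from the article; only the state pension rate comes from the text.

STATE_PENSION_ANNUAL = 155.65 * 52     # new state pension, ~£8,094 a year
TARGET_INCOME = 19_000                 # assumed "comfortable" annual income (illustrative)
ANNUITY_RATE = 0.05                    # assumed annual income per £1 of pot (illustrative)

income_needed_from_savings = TARGET_INCOME - STATE_PENSION_ANNUAL
pot_required = income_needed_from_savings / ANNUITY_RATE

print(f"Income needed on top of the state pension: £{income_needed_from_savings:,.0f} a year")
print(f"Pot required at a {ANNUITY_RATE:.0%} annuity rate: £{pot_required:,.0f}")
```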

Retirement is an ancient concept. It caused one of the worst military disasters ever faced by the Roman empire when, in AD14, the imperial power increased the retirement age and decreased the pensions of its legionaries, causing mutiny in Pannonia and Germany. The ringleaders were rounded up and disposed of, but the institution remains so highly prized that any threat to its continued existence is liable to cause mutiny. “Retirement has been stolen. You can pay in as much as you like. They will never pay back. Time for a grey revolution,” one reader emailed.

It was in 1881 that the German chancellor, Otto von Bismarck, made a radical speech to the Reichstag, calling for government-run financial support for those aged over 70 who were “disabled from work by age and invalidity”.


Roger Hall in Porlock Bay, Somerset. Photograph: Sam Frost for the Guardian

The scheme wasn’t the socialist ideal it is sometimes assumed to be: Bismarck was actually advocating a disability pension, not a retirement pension as we understand it today. Besides, the retirement age he recommended just about aligned with average life expectancy in Germany at that time. Bismarck did, however, have a further vision that was genuinely too radical for his era: he proposed a pension that could be drawn at any age, if the contributor was judged unfit for work. Those drawing it earlier would receive a lower amount.

This notion is surfacing again in various forms. The New Economics Foundation is arguing for a shorter working week, via a “slow retirement”, in which employees give up an hour of work per week every year from the age of 35. The idea is that older workers will release more of their work time to younger ones, which will allow a steady handover of retained wisdom. A universal basic income, whereby everyone receives a set sum from the state each year, regardless of how much they do or don’t work, might have a similar effect, enabling people to move to part-time work as they age.

Widespread poverty among the over-65s led to the 1946 National Insurance Act, which introduced the first contributory, flat-rate pension in the UK for women of 60 and men of 65. At first, pension rates were low and most pensioners did not have enough to get by. But by the late 1970s, the value of the state pension rose and an increasing number of people – mainly men – were able to benefit from occupational pension schemes. By 1967, more than 8 million employees working for private companies were entitled to a final-salary pension, along with 4 million state workers. In 1978, the Labour government introduced a fully fledged “earnings-linked” state top-up system for those without access to a company scheme.

With pension payments now at a rate that enabled older people to stop work without risking penury, older men (and to a lesser extent older women) began to enjoy a “third age”, which fell between the end of work and the start of old age. In 1970, the employment rate for men aged 60-64 was 81%; by 1985 it had fallen to 49.7%.

Access to a comfortable old age is a powerful political idea. John Macnicol, a visiting professor at the London School of Economics and author of Neoliberalising Old Age, believes that when jobs were needed for younger men after the second world war, a “socially elegant mythology” was created in which retirement was a time for older workers to kick back and relax.

He believes that in the 1990s, however, the narrative was cynically changed and the image of pensioners was deliberately altered: from being poor, frail, dependent and deserving, to well off, hedonistic, politically powerful and selfish. The notion of “the prosperous pensioner was constructed in the face of evidence that showed exactly the opposite to be the case”, he said, “so that the right to retirement [could be] undermined: more coercive working practices, forcing older people to stay in employment, could be presented as providing new ‘opportunities’, removing barriers to working, bestowing greater inclusion and even achieving upward social mobility”.

This change in attitude towards pensioners helped the government bring in a hike in the retirement age. In 1995, the Conservative government under John Major announced a steady increase from 60 to 65 in the state pension age for women, to come in between April 2010 and April 2020. Most agreed that equalising the state pension age was fair enough. What they objected to was that the government waited until 2009 – a year before the increases were set to begin – to start contacting those affected, leaving thousands of women without time to rearrange their finances or adjust their employment plans to fill the gaping hole in their income.

Then, in 2011 – when the state pension age for women had risen to 63 – the coalition government accelerated the timetable: the state pension age for women will now reach 65 in November 2018, at which point it will rise alongside men’s: to 66 by 2020 and to 67 by 2028.

When she left the Department for Work and Pensions in 2016, Ros Altmann stated that she was “not convinced the government had adequately addressed the hardship facing women who have had their state pension age increased at short notice”.

After surviving cancer at 52, Jackie Harrison, now 62, looked over her savings and decided she could just about afford to take early retirement. “I had achieved 36 years of national insurance contributions,” she said. “I used to phone the Department for Work and Pensions every year to ensure that I had worked enough to get my full pension at 60.”

Then she was told her personal pension age was increasing from 60 to 63 years and six months. “I wasn’t eligible for any benefits because of my partner’s pension, but I could nevertheless still just about manage until the new state retirement age,” she said. But when she was 58, the goalposts moved again – this time to 66. “I’d been out of the workplace for so long that I didn’t have a hope of being able to get back into it,” she said. “But nor did it give me enough time to make other financial arrangements.”

Harrison made the agonising decision to raise money by selling her family home and moving to a different city, where she could live more cheaply. Her decisions had heavy implications for the rest of her family – and the state. When she moved, she left behind a vulnerable adult daughter and baby grandchild and octogenarian parents.

“This is not the retirement I had planned at all,” Harrison told me. “I had loads of savings once, but now I live in a constant state of worry due to financial pressures. It seems so unfair when I have worked all my life and planned for my retirement. I just don’t know how I am going to manage for another four years”. Women born in the 1950s are already living in their age of no retirement.

In 2006, it became legal for employers to force their workers to retire at the age of 65. A campaign led by Age Concern and Help the Aged was swift and effective in its argument that the new default retirement age law broke EU rules and gave employers too much leeway to justify direct discrimination on the grounds of age. On 1 October 2011, the law was overturned.

Since then, Britain’s workforce has greyed almost before our eyes: in the last 15 years, the number of working people aged 50-64 has increased by 60% to 8 million (far greater than the increase in the population of people over 50). The proportion of people aged 70-74 in employment, meanwhile, has almost doubled in the past 10 years. This trend will continue. By 2020, one-third of the workforce will be over 50.


A worker at Steelite International ceramics in Stoke-on-Trent. Photograph: Christopher Thomond for the Guardian

The proportional increase may be substantial, but it charts growth from a low level. In empirical terms, the impact is less positive: almost one-third of people in the UK aged 50-64 are not working. In fact, a greater number are becoming jobless than finding employment: almost 40% of employment and support allowance claimants are over 50, an indication that many older people are unable to easily find new and sustainable work.

This is unsustainable: by 2020, an estimated 12.5m jobs will become vacant as a result of older people leaving the workforce. Yet there will only be 7 million younger people to fill them. If we can no longer rely on immigration to fill the gaps, employers will have to shed their prejudices, workplaces will have to be adapted, and social services will have to step in to provide the care that ageing people can no longer give their grandchildren, ageing spouses or parents if they remain in the workforce.



But forcing older people to work longer if they cannot easily do so can cause more harm than good. Prof Debora Price, director of the Manchester Institute for Collaborative Research on Ageing, told me: “There is evidence to suggest that opportunities for people to work beyond state pension age might well be making inequalities worse, since those able to work into later life tend to be men who are highly educated and have been in higher-paid jobs.”

One answer is to return to Bismarck’s original plan, whereby the state pension can be accessed early by anyone who chooses to collect a smaller pension sum at an age lower than the state retirement age, perhaps because of poor health or other commitments.

This option, however, was rejected last week by John Cridland, the former head of the Confederation of British Industry, the business lobby group, who was appointed by the government in March 2016 to help cut the UK’s £100bn a year pension costs by reviewing the state pension age.

Instead, Cridland has recommended that the state pension age should rise from 67 to 68 by 2039, seven years earlier than currently timetabled. This will push the state retirement age back for a year for anyone in their early 40s. Cridland has rejected calls for early access to the state pension for those in poor health, but has left the door open for additional means-tested support to be made available one year before state pension age for those unable to work owing to ill health or caring responsibilities.

In spite of their anxieties about money, one of the things I have been most struck by, in my many conversations with older readers, is the pleasure they take in life.

One grandmother told me: “Last week, I swept across a crowded pub to pick up a raffle prize … with my dress tucked into my knickers! A few years ago I would have been mortified. Not any more. Told ’em they were lucky it was cold and I had knickers on!”

Monica Hartwell, 69, is part of the team at the volunteer-run Regal theatre in Minehead, as well as the film society and the museum. “The joy of getting older is much greater self-confidence,” she told me. “It’s the loss of angst about what people think of you: the size of your bum or whether others are judging you correctly. It’s not an arrogance, but you know who you are when you’re older and all those roles you played to fit in when you were younger are irrelevant.”


  Women in Ilkley, West Yorkshire, discuss retirement. Photograph: Christopher Thomond for the Guardian

The data bears out these experiences: 65 to 79 is the happiest age group for adults, according to the Office for National Statistics. Recently, a report claimed that women in their 80s have more enjoyable sex than those up to 30 years younger. Other research has found that 75% of those aged 50 and over are less bothered about what people think of them and 61% enjoy life more than when they were younger.

So what is the secret to a successful retirement? Private companies run courses to help those on the verge of retirement plan for changes in income, time and relationships. I have spoken to those running such courses, as well as those who have retired. The consensus is that there are five pillars, all of which rest on the “money bit” – the basic level of financial security without which later life is hard. Once that foundation is in place, retirees can build up the second pillar: a social network to replace their former work community. The third pillar is having purpose and challenging one’s mind. Fourth is ongoing personal development – exploring, questioning and learning are an important part of what makes us human; this should never stop, I was told. The fifth and final pillar is having fun.

I tried explaining final-salary pensions to a 20-year-old recently. They looked at me quizzically, as though I was telling them that I had seen a unicorn. When that same 20-year-old, however, tries to explain the traditional concept of retirement to their own children, they might well be met with the same level of incomprehension.

For their children, life might well be more like the joke that Ali Seamer emailed to me during a recent Q&A I ran with readers as part of my investigation into what retirement means today: “I’m going to have to work up to 6pm on the day of my funeral just to be able to afford the coffin,” he said.

In examining the reality of this new age of no retirement, I have become aware of two pitfalls undermining constructive debate. The first is the prejudice that an ageing population will place a huge burden on society.

This is refuted by numerous studies: the volunteer charity WRVS has done the most work to quantify the economic role played by older generations. Taking together the tax payments, spending power, caring and volunteer efforts of people aged 65-plus, it calculates that they contribute almost £40bn more to the UK economy than they receive in state pensions, welfare and health services.

The research suggests that this benefit to the economy will increase in coming years as increasing numbers of baby-boomers enter retirement. By 2030, it projects that the net contribution of older people will be worth some £75bn.

Older people’s contribution to society is not just economic. An ICM poll for the WRVS study found that 65% of older people say they regularly help out elderly neighbours; they are the most likely of all adult age groups to do so.

The second pitfall is the conflict between generations that can be caused by the issue of retirement. The financial problems of the young have been blamed on baby boomers. But the truth is that the UK state pension languishes far below those provided in most developed countries. And this contributory, taxed income – pensioners pay tax just like anyone else – is all that many old people have to live on.

Nearly 2 million of those aged 55-64 do not have any private pension savings and despite the commonly held belief that older people are all mortgage-free, fewer than 48% of those aged 55-64 own their own homes outright and nearly a quarter are still renting. It is true that some have benefitted greatly from rises in house prices, but the cost of lending was high – often 10% or more – during the 1970s and 1980s. One in 10 of those aged 65 and over still have a mortgage.

For all the recent talk of the average pensioner household being £20 a week better off than working households, the truth is that many are actually working to supplement their income. Still, to people just entering the workforce, the lives of today’s pensioners look impossibly privileged.

Rachael Ingram sums it up. At 19, working full-time and studying for an Open University degree, she is already putting 10% of her income aside for her pension. “I shouldn’t be worrying about saving for my pension at my age,” she told me. “I’m saving money that could go towards a deposit for my first house – I’m currently renting a flat in Liverpool – or out socialising. But I have no faith in government or the state pension. There will be no one to look after me when I’m old.”

I was vulnerable and wanted a home. What I got was a workhouse

Daniel Lavelle in The Guardian


There are many reasons why I became homeless, but no one was surprised it happened. I’m just another care leaver who lost control of their life. Almost every person I lived with in children’s homes and foster placements has since experienced mental health problems, stints in prison, and battles with drug and alcohol addiction. What would make me so special that I could avoid the inevitable breakdown?







I spent periods in a tent on a campsite near Saddleworth Moor, where I was woken up every night by my neighbour, a cantankerous Yorkshireman who would liberate the grievances he had been bottling up all day in a series of piercing screams.

The local housing advice service was no help. I was told that to be considered a priority need, I had to demonstrate that I was more vulnerable than my homeless counterparts. As one adviser put it: “I have to establish that you would be worse off than me, if I were homeless.” It may interest people that local councils are now running a misery contest for housing, a sort of X Factor for the destitute. Maybe my audition would have gone better if I’d had a few more missing teeth, and wet myself while singing Oom-Pah-Pah.

And then I befriended a resident of a residential charity for the homeless. He was far more helpful than the housing advisers, and managed to organise a place for me at the charity.

When I entered its walls – a converted factory – the place immediately struck me as having similarities with a Victorian workhouse. I was told by the “community leader” that I would receive basic subsistence: a room, food, clothing and a modest weekly allowance, in exchange for 40 hours’ labour.

The word “workhouse” conjures up images of Oliver Twist, and of bleak Victorian institutions populated by bedraggled paupers forced into backbreaking labour in exchange for meagre slops of porridge. At the charity home we were not expected to pick oakum or break boulders, but the work was hard and the returns were meagre.

Part of my job involved delivering furniture. I spent day after day lifting heavy items such as wardrobes and three-piece suites, sometimes up and down several flights of stairs. The work is described as voluntary by the charity, but in reality neither I nor any of my fellow inmates had anywhere else to go, and so had little choice but to do it.

The charity describes itself as a “working community”. But as far as I was concerned this was a workhouse in all but name: a civil prison, and a punishment for poverty. How do such charities manage to require their residents to work up to 40 hours a week without a wage, paying them only a small allowance for food and accommodation?

In 1999 the New Labour government exempted charities and other institutions from paying workers the national minimum wage if prior to entering a work scheme they were homeless or residing in a homeless hostel. There is perhaps no better demonstration that this country is yet to shake off punitive Victorian attitudes towards the “undeserving” poor.

These regulations not only strip homeless people of the right to a decent wage, but of all their other employment rights too. Because residents of such charities are not classed as employees, they cannot claim unfair dismissal or sick pay. Many people have lived and worked at the charity for up to 15 years, yet they can be sacked and evicted with no legal right to appeal.

I accept that residents, some of whom have suffered with long-term alcoholism and drug dependency, are far better off within the charity home’s walls than they would be on the streets or living alone. The environment is predominantly a positive one, where residents are well fed and safe, and are overseen by conscientious staff. The charity does give individuals the chance to participate in meaningful work and contribute to a community, sometimes for the first time in their lives. But none of this alters the fact that residents are forced by poverty to work for no pay.

The homelessness reduction bill, which cleared its final parliamentary hurdle last week, provides an opportunity to change our approach. It will force local authorities to provide assistance to people threatened with homelessness 56 days before they lose their home, ending the misery contest that I and others have been subjected to over the years.

This bill represents a very small step in the right direction, but much more needs to be done to address the reasons people find themselves on the streets in the first place. And ending the exploitation of homeless people for their labour should be one of the first goals.

It is ironic that a Labour government created a backdoor for the revival of workhouses when it was Attlee’s government that abolished the workhouse system. The idea that the poor should be forced to work for board and basic subsistence was once universally condemned, but it has been revived without a murmur of public disapproval.

No one else in our society can be mandated to work full time for no pay, with no rights, on pain of being condemned to a life on the streets. So why is it OK to treat homeless people this way?

Tuesday 28 March 2017

Access to justice is no longer a worker’s right, but a luxury

Aditya Chakrabortty in The Guardian


Laws that cost too much to enforce are phoney laws. A civil right that people can’t afford to use is no right at all. And a society that turns justice into a luxury good is one no longer ruled by law, but by money and power. This week the highest court in the land will decide whether Britain will become such a society. There are plenty of signs that we have already gone too far.

Listen to the country’s top judge, Lord Thomas of Cwmgiedd, who admits that “our justice system has become unaffordable to most”. Look at our legal-aid system, slashed so heavily by David Cameron and Theresa May that the poor must act as their own trial lawyers, ready to be skittled by barristers in the pay of their moneyed opponents.

The latest case will be heard by seven supreme court judges and will pit the government against the trade union Unison. It will be the climax of a four-year legal battle over one of the most fundamental rights of all: the right of workers to stand up against their bosses. 

In 2013, Cameron stripped workers of the right to access the employment tribunal system. Whether a pregnant woman forced out of her job, a Bangladeshi-origin guy battling racism at work, or a young graduate with disabilities getting aggro from a boss, all would now have to pay £1,200 for a chance of redress.

The number of cases taken to tribunal promptly fell off a cliff – down by 70% within a year. Citizens Advice, employment lawyers and academics practically queued up to warn that workers – especially poor workers – were getting priced out of justice. But for Conservative ministers, all was fine. Loyal flacks such as Matthew Hancock (then employment minister) claimed those deterred by the fees were merely “unscrupulous” try-ons, intent on “bullying bosses”. Follow Hancock’s logic, and with all those time-wasters weeded out, you’d expect the number of successful tribunal claims to jump. They’ve actually dropped.

At each hearing of Unison’s case, the judges have wound up asking to see actual people for whom the fees have represented a barrier to justice. One was sure that “if the statistics … were drilled down to some individual cases, situations would be revealed that showed an inability on the part of some people to proceed before an employment tribunal through lack of funds”.

Should the supreme court judges want the same thing, they could meet Liliana Almanza. They’d find her a compelling witness, although she finds it hard to sit down for too long due to three herniated discs in her lower back, which make her feel like she’s lugging around “a lot of heavy weight” and which send pain shooting into her hands, legs, shoulders and neck. She also has sometimes severe depression and anxiety. The physical pain and the mental illness can feed off each other.

Almanza has worked as a cleaner at the University of London since 2011 and never kept her conditions from her employer, an outsourcing company called Cofely. Then came a new supervisor, who Almanza felt had it in for her and who piled on extra work. Almanza was sent to the “punishment floor” – actually three floors, normally handled by two people, but she had to do the work on her own and in little time. The extra workload, especially the pushing about of a hoover and a mop, caused her so much pain that she sometimes felt dizzy. Yet when Almanza complained, she says the supervisor either laughed or told her to sign off sick. Although the law requires employers to make reasonable adjustments for disabilities, none was made for hers.

Almanza, who is Colombian, remembers the supervisor telling her how Latin Americans were a bunch of beggars. Other times, she’d call Almanza a “bitch” and a “whore”.

On the worst days, Almanza would walk over to Euston station and stand at the platform’s very edge. She’d wait for the tube to come. Then “a light would come on” and she’d pull herself back.

Almanza did exactly what ministers would want and submitted a grievance using Cofely’s in-house procedure. It was rejected. She appealed and did not hear anything for months. However desperate her situation, she would never have found the money for a tribunal. Some claimants are exempt from the fees, but Almanza and her husband – both cleaners – apparently earned too much for her to qualify. Nor does the means test account for living costs, even though, after paying for a single room in a shared ex-council house in London and covering the bills, they have almost nothing left each month.

Her union, the tiny Independent Workers of Great Britain (IWGB), pitched in some money for her to go to tribunal and helped crowdfund the rest. As soon as she filed her claim, Almanza remembers, her employer made a number of adjustments and lightened her workload.

I contacted Engie, as Cofely has been rebranded, for its response to Almanza’s charges. Its statement reads in part: “We do not tolerate discrimination in the workplace and all claims … are investigated thoroughly. Following extensive investigation of the allegations brought against Cofely Workplace, all claims were denied and Cofely was formally discharged from the proceedings by the court on 24th May 2016.” The court documents actually show that Cofely was discharged because the contract was taken over by another company, which also reached a settlement with Almanza.

Without charity and the shoestring resources of the IWGB, Almanza wouldn’t have been able to file a claim. If she could testify to the supreme court, what would she say? “I would tell the judges if I hadn’t been able to go to tribunal I don’t think I’d be here today. If I’d continued like that, I wouldn’t have been able to tell this story. Maybe it sounds like an exaggeration, a movie. But it’s one thing to talk about it, another thing to live it.”

Saffron storm, hard cash

Jawed Naqvi in The Dawn


A young man, describing himself as a dejected Muslim, punctured the sharp analysis of the Uttar Pradesh defeat that was under way. The venue was a well-appointed seminar room at the India International Centre. Why don’t we show our outrage like they do in America, the young Muslim wanted to know. People in America are out on the streets fighting for the refugees, Latinos, Muslims, blacks, everyone. One US citizen was shot trying to protect an Indian victim of racial assault. Why are Indian opponents of Hindutva so full of wisdom and analysis, and yet why do so few of them, barring angry students in the universities, take to the streets?

It’s not that people are not fighting injustices. From Bastar to Indian Kashmir, from Manipur to Manesar, peasants, workers, college students, tribespeople and Dalits are fighting back. But they are vulnerable without the groundswell of mass support we see in other countries.

Off and on, political parties are capable of expressing outrage. A heartbreaking scene in parliament is to see Congress MPs screaming their lungs out with rage, but that’s usually when Sonia Gandhi is attacked or Rahul Gandhi belittled. Yet there is no hope of stopping the Hindutva march without accepting the Congress as a pivot to defeat the Modi-Yogi party in 2019.

It’s a given. The slaughterhouses may or may not open any time soon, but an opposition win in 2019 is easier to foresee. It could be a Pyrrhic victory, the way the dice are loaded, but it is the only way. Will the Congress join the battle without pushing itself as the natural claimant to power? Without humility, we may not be able to address the young man’s dejection.

Like it or not, there is no other opposition party with the reach of the Congress, even today. Should we be saddled with a party that rises to its feet to protect its leaders — which it should — but has lost the habit of marching against the insults and torture that large sections of Indians endure daily?

A common and valid fear is that the party is vulnerable to the IOUs its satraps may have signed with the big-league traders who drive politics in India today.

The Congress needs to ask itself bluntly: who chose Mr Modi as prime minister? It was the same people who chose Manmohan Singh before him. The fact is that India has come to be ruled by traders, though they have neither the vision nor the capacity to industrialise or modernise this country of more than 1.3 billion people. Their appetite for inflicting bad loans on the state exchequer is legendary, though they have seldom measured up to Nehru’s maligned public sector in building any core industry. (Bringing spectrum machines from Europe and mobile phones from China for more and more people to watch mediocre reality shows is neither modernisation nor industrialisation.)

The traders have thrived by funding ruling parties and keeping their options open with the opposition when necessary. It’s like placing casino chips on the roulette table, which is what they have turned a once robust democracy into. If there’s religious fascism staring down India’s throat, there’s someone financing it.

The newspapers won’t tell you all that. The traders own the papers. The umbilical cord between religious regression and the traders has been well established in a fabulous book on the Gita Press by a fellow journalist; the same goes for TV.

Nehru wasn’t terribly impressed with them. He fired his finance minister for flirting with their ilk. Indira Gandhi went one better: she installed socialism in the preamble of the constitution as a talisman against private profiteers. They hated her for that. Older Indian literature (Premchand) and cinema dwelt on their shady reality – Mother India, Foot Path, Do Bigha Zamin, Shree 420, to name a few.

At the Congress centenary in Mumbai, Rajiv Gandhi called out the ‘moneybags’ riding the backs of party workers. They retaliated through his closest coterie to smear him with the Bofors refuse. The first move against Hindutva’s financiers will be an uphill journey. The IOUs will come into play.

For that, the Congress must evict the agents of the moneybags known to surround its leadership. But they’re not the only reality the Congress must discard. It has to rid itself of ‘soft Hindutva’ completely, and it absolutely must stop indulging regressive Muslim clerics as a vote bank.

For a start, the West Bengal, Karnataka, and Delhi assemblies will need every opposition member’s support in the coming days. The most laughable of the cases will be those summoned against the unimpeachable Arvind Kejriwal, a bête noire of the traders, whose hanky-panky he excels in exposing.

For better or worse, it is the Congress that still holds the key to 2019. Even in the post-emergency rout, the party kept a vote share of 41 per cent. And after the 2014 shock, its vote has grown, not decreased.

While everyone needs to think about 2019, the left faces a more daunting challenge. It knows that the Modi-Yogi party does not enjoy a majority of Indian votes. However, the majority includes Mamata Banerjee, who says she wants to join hands with the left against the BJP. Others are Lalu Yadav, Nitish Kumar, Arvind Kejriwal, Mayawati, Akhilesh Yadav, most of the Dravida parties and, above all, the Congress. The left has inflicted self-harm by putting up candidates against all these opponents of the BJP — in Bihar, in Uttar Pradesh, in Delhi. In West Bengal and Kerala, can it see eye to eye with its anti-BJP rivals?

As the keystone of the needed coalition, the left must drastically tweak its politics. It alone has the ability to lift the profile of the Indian ideology, which is still Nehruvian at its core – as the worried young man at the India International Centre will be pleased to note.

Monday 27 March 2017

Brexit deal must meet six tests, says Labour

  • Fair migration system for UK business and communities
  • Retaining strong, collaborative relationship with EU
  • Protecting national security and tackling cross-border crime
  • Delivering for all nations and regions of the UK
  • Protecting workers' rights and employment protections
  • Ensuring same benefits currently enjoyed within single market

Sunday 26 March 2017

Populism is the result of global economic failure

Larry Elliott in The Guardian


The rise of populism has rattled the global political establishment. Brexit came as a shock, as did the victory of Donald Trump. Much head-scratching has resulted as leaders seek to work out why large chunks of their electorates are so cross.

The answer seems pretty simple. Populism is the result of economic failure. The 10 years since the financial crisis have shown that the system of economic governance that has held sway for the past four decades is broken. Some call this approach neoliberalism. Perhaps a better description would be unpopulism.

Unpopulism meant tilting the balance of power in the workplace in favour of management and treating people like wage slaves. Unpopulism was rigged to ensure that the fruits of growth went to the few not to the many. Unpopulism decreed that those responsible for the global financial crisis got away with it while those who were innocent bore the brunt of austerity.

Anybody seeking to understand why Trump won the US presidential election should take a look at what has been happening to the division of the economic spoils. The share of national income that went to the bottom 90% of the population held steady at around 66% from 1950 to 1980. It then began a steep decline, falling to just over 50% when the financial crisis broke in 2007.

Similarly, it is no longer the case that everybody benefits when the US economy is doing well. During the business cycle upswing between 1961 and 1969, the bottom 90% of Americans took 67% of the income gains. During the Reagan expansion two decades later they took 20%. During the Greenspan housing bubble of 2001 to 2007, they got just two cents in every extra dollar of national income generated while the richest 10% took the rest.

The US economist Thomas Palley says that up until the late 1970s countries operated a virtuous circle growth model in which wages were the engine of demand growth.

“Productivity growth drove wage growth which fueled demand growth. That promoted full employment which provided the incentive to invest, which drove further productivity growth,” he says.

Unpopulism was touted as the antidote to the supposedly failed policies of the post-war era. It promised higher growth rates, higher investment rates, higher productivity rates and a trickle-down of income from rich to poor. It has delivered none of these things.

James Montier and Philip Pilkington of the global investment firm GMO say that the system that arose in the 1970s was characterised by four significant economic policies: the abandonment of full employment and its replacement with inflation targeting; an increase in the globalisation of the flows of people, capital and trade; a focus on shareholder value maximisation rather than reinvestment and growth; and the pursuit of flexible labour markets and the disruption of trade unions and workers’ organisations.

To take just the last of these four pillars, the idea was that trade unions and minimum wages were impediments to an efficient labour market. Collective bargaining and statutory pay floors would result in workers being paid more than the market rate, with the result that unemployment would inevitably rise.

Unpopulism decreed that the real value of the US minimum wage should be eroded. But unemployment is higher than it was when the minimum wage was worth more. Nor is there any correlation between trade union membership and unemployment. If anything, international comparisons suggest that those countries with higher trade union density have lower jobless rates. The countries that have higher minimum wages do not have higher unemployment rates.

“Labour market flexibility may sound appealing, but it is based on a theory that runs completely counter to all the evidence we have,” Montier and Pilkington note. “The alternative theory suggests that labour market flexibility is by no means desirable as it results in an economy with a bias to stagnate that can only maintain high rates of employment and economic growth through debt-fuelled bubbles that inevitably blow up, leading to the economy tipping back into stagnation.”

This quest for ever-greater labour-market flexibility has had some unexpected consequences. The UK’s bill for tax credits spiralled once firms realised that they could pay poverty wages and let the state pick up the tab. Access to a global pool of low-cost labour meant there was less of an incentive to invest in productivity-enhancing equipment.

The abysmally low levels of productivity growth since the crisis have encouraged the belief that this is a recent phenomenon, but as Andy Haldane, the Bank of England’s chief economist, noted last week, the trend started in most advanced countries in the 1970s.

“Certainly, the productivity puzzle is not something which has emerged since the global financial crisis, though it seems to have amplified pre-existing trends,” Haldane said.


Bolshie trade unions certainly can’t be blamed for Britain’s lost productivity decade. The orthodox view in the 1970s was that attempts to make the UK more efficient were being thwarted by shop stewards who modelled themselves on Fred Kite, the character played by Peter Sellers in I’m All Right Jack. Haldane puts the blame elsewhere: on poor management, which has left the UK with a big gap between frontier firms and a long tail of laggards. “Firms which export have systematically higher levels of productivity than domestically-oriented firms, on average by around a third. The same is true, even more dramatically, for foreign-owned firms. Their average productivity is twice that of domestically-oriented firms.”

Populism is seen as irrational and reprehensible. It is neither. It seems entirely rational for the bottom 90% of the US population to question why they are getting only 2% of income gains. It hardly seems strange that workers in Britain should complain at the weakest decade for real wage growth since the Napoleonic wars.

It has also become clear that ultra-low interest rates and quantitative easing are merely sticking-plaster solutions. Populism stems from a sense that the economic system is not working, which it clearly isn’t. In any other walk of life, a failed experiment results in change. Drugs that are supposed to provide miracle cures but are proved not to work are quickly abandoned. Businesses that insist on continuing to produce goods that consumers don’t like go bust. That’s how progress happens.

The good news is that the casting around for new ideas has begun. Trump has advocated protectionism. Theresa May is consulting on an industrial strategy. Montier and Pilkington suggest a commitment to full employment, job guarantees, re-industrialisation and a stronger role for trade unions. The bad news is that time is running short. More and more people are noticing that the emperor has no clothes.

Even if the polls are right this time and Marine Le Pen fails to win the French presidency, a full-scale political revolt is only another deep recession away. And that’s easy enough to envisage.