Thursday, 30 March 2017

The myth of the ‘lone wolf’ terrorist

Jason Burke in The Guardian

At around 8pm on Sunday 29 January, a young man walked into a mosque in the Sainte-Foy neighbourhood of Quebec City and opened fire on worshippers with a 9mm handgun. The imam had just finished leading the congregation in prayer when the intruder started shooting at them. He killed six and injured 19 more. The dead included an IT specialist employed by the city council, a grocer, and a science professor.

The suspect, Alexandre Bissonnette, a 27-year-old student, has been charged with six counts of murder, though not terrorism. Within hours of the attack, Ralph Goodale, the Canadian minister for public safety, described the killer as “a lone wolf”. His statement was rapidly picked up by the world’s media.

Goodale’s statement came as no surprise. In early 2017, well into the second decade of the most intense wave of international terrorism since the 1970s, the lone wolf has, for many observers, come to represent the most urgent security threat faced by the west. The term, which describes an individual actor who strikes alone and is not affiliated with any larger group, is now widely used by politicians, journalists, security officials and the general public. It is used for Islamic militant attackers and, as the shooting in Quebec shows, for killers with other ideological motivations. Within hours of the news breaking of an attack on pedestrians and a policeman in central London last week, it was used to describe the 52-year-old British convert responsible. Yet few beyond the esoteric world of terrorism analysis appear to give this almost ubiquitous term much thought.

Terrorism has changed dramatically in recent years. Attacks by groups with defined chains of command have become rarer, as the prevalence of terrorist networks, autonomous cells, and, in rare cases, individuals, has grown. This evolution has prompted a search for a new vocabulary, as it should. The label that seems to have been decided on is “lone wolves”. They are, we have been repeatedly told, “Terror enemy No 1”.

Yet using the term as liberally as we do is a mistake. Labels frame the way we see the world, and thus influence attitudes and eventually policies. Using the wrong words to describe problems that we need to understand distorts public perceptions, as well as the decisions taken by our leaders. Lazy talk of “lone wolves” obscures the real nature of the threat against us, and makes us all less safe.

The image of the lone wolf who splits from the pack has been a staple of popular culture since the 19th century, cropping up in stories about empire and exploration from British India to the wild west. From 1914 onwards, the term was popularised by a bestselling series of crime novels and films centred upon a criminal-turned-good-guy nicknamed Lone Wolf. Around that time, it also began to appear in US law enforcement circles and newspapers. In April 1925, the New York Times reported on a man who “assumed the title of ‘Lone Wolf’”, who terrorised women in a Boston apartment building. But it would be many decades before the term came to be associated with terrorism.

In the 1960s and 1970s, waves of rightwing and leftwing terrorism struck the US and western Europe. It was often hard to tell who was responsible: hierarchical groups, diffuse networks or individuals effectively operating alone. Still, the majority of actors belonged to organisations modelled on existing military or revolutionary groups. Lone actors were seen as eccentric oddities, not as the primary threat.

The modern concept of lone-wolf terrorism was developed by rightwing extremists in the US. In 1983, at a time when far-right organisations were coming under immense pressure from the FBI, a white nationalist named Louis Beam published a manifesto that called for “leaderless resistance” to the US government. Beam, who was a member of both the Ku Klux Klan and the Aryan Nations group, was not the first extremist to elaborate the strategy, but he is one of the best known. He told his followers that only a movement based on “very small or even one-man cells of resistance … could combat the most powerful government on earth”.

Oklahoma City bomber Timothy McVeigh leaves court, 1995. Photograph: David Longstreath/AP

Experts still argue over how much impact the thinking of Beam and other like-minded white supremacists had on rightwing extremists in the US. Timothy McVeigh, who killed 168 people with a bomb directed at a government office in Oklahoma City in 1995, is sometimes cited as an example of someone inspired by their ideas. But McVeigh had told others of his plans, had an accomplice, and had been involved for many years with rightwing militia groups. McVeigh may have thought of himself as a lone wolf, but he was not one.

One far-right figure who made explicit use of the term lone wolf was Tom Metzger, the leader of White Aryan Resistance, a group based in Indiana. Metzger is thought to have authored, or at least published on his website, a call to arms entitled “Laws for the Lone Wolf”. “I am preparing for the coming War. I am ready when the line is crossed … I am the underground Insurgent fighter and independent. I am in your neighborhoods, schools, police departments, bars, coffee shops, malls, etc. I am, The Lone Wolf!,” it reads.

From the mid-1990s onwards, as Metzger’s ideas began to spread, the number of hate crimes committed by self-styled “leaderless” rightwing extremists rose. In 1998, the FBI launched Operation Lone Wolf against a small group of white supremacists on the US west coast. A year later, Alex Curtis, a young, influential rightwing extremist and protege of Metzger, told his hundreds of followers in an email that “lone wolves who are smart and commit to action in a cold-mannered way can accomplish virtually any task before them ... We are already too far along to try to educate the white masses and we cannot worry about [their] reaction to lone wolf/small cell strikes.”

The same year, the New York Times published a long article on the new threat headlined “New Face of Terror Crimes: ‘Lone Wolf’ Weaned on Hate”. This seems to have been the moment when the idea of terrorist “lone wolves” began to migrate from rightwing extremist circles, and the law enforcement officials monitoring them, to the mainstream. In court on charges of hate crimes in 2000, Curtis was described by prosecutors as an advocate of lone-wolf terrorism.

When, more than a decade later, the term finally became a part of the everyday vocabulary of millions of people, it was in a dramatically different context.

After 9/11, lone-wolf terrorism suddenly seemed like a distraction from more serious threats. The 19 men who carried out the attacks were jihadis who had been hand-picked, trained, equipped and funded by Osama bin Laden, the leader of al-Qaida, and a small group of close associates.

Although 9/11 was far from a typical terrorist attack, it quickly came to dominate thinking about the threat from Islamic militants. Security services built up organograms of terrorist groups. Analysts focused on individual terrorists only insofar as they were connected to bigger entities. Personal relations – particularly friendships based on shared ambitions and battlefield experiences, as well as tribal or familial links – were mistaken for institutional ones, formally connecting individuals to organisations and placing them under a chain of command.

This approach suited the institutions and individuals tasked with carrying out the “war on terror”. For prosecutors, who were working with outdated legislation, proving membership of a terrorist group was often the only way to secure convictions of individuals planning violence. For a number of governments around the world – Uzbekistan, Pakistan, Egypt – linking attacks on their soil to “al-Qaida” became a way to shift attention away from their own brutality, corruption and incompetence, and to gain diplomatic or material benefits from Washington. For some officials in Washington, linking terrorist attacks to “state-sponsored” groups became a convenient way to justify policies, such as the continuing isolation of Iran, or military interventions such as the invasion of Iraq. For many analysts and policymakers, who were heavily influenced by the conventional wisdom on terrorism inherited from the cold war, thinking in terms of hierarchical groups and state sponsors was comfortably familiar.

A final factor was more subtle. Attributing the new wave of violence to a single group not only obscured the deep, complex and troubling roots of Islamic militancy but also suggested the threat it posed would end when al-Qaida was finally eliminated. This was reassuring, both for decision-makers and the public.

By the middle of the decade, it was clear that this analysis was inadequate. Bombs in Bali, Istanbul and Mombasa were the work of centrally organised attackers, but the 2004 attack on trains in Madrid had been executed by a small network only tenuously connected to the al-Qaida senior leadership 4,000 miles away. For every operation like the 2005 bombings in London – which was close to the model established by the 9/11 attacks – there were more attacks that didn’t seem to have any direct link to Bin Laden, even if they might have been inspired by his ideology. There was growing evidence that the threat from Islamic militancy was evolving into something different, something closer to the “leaderless resistance” promoted by white supremacists two decades earlier.

As the 2000s drew to a close, attacks perpetrated by people who seemed to be acting alone began to outnumber all others. These events were less deadly than the spectacular strikes of a few years earlier, but the trend was alarming. In the UK in 2008, a convert to Islam with mental health problems attempted to blow up a restaurant in Exeter, though he injured no one but himself. In 2009, a US army major shot 13 dead in Fort Hood, Texas. In 2010, a female student stabbed an MP in London. None appeared, initially, to have any broader connections to the global jihadi movement.

In an attempt to understand how this new threat had developed, analysts raked through the growing body of texts posted online by jihadi thinkers. It seemed that one strategist had been particularly influential: a Syrian called Mustafa Setmariam Nasar, better known as Abu Musab al-Suri. In 2004, in a sprawling set of writings posted on an extremist website, Nasar had laid out a new strategy that was remarkably similar to “leaderless resistance”, although there is no evidence that he knew of the thinking of men such as Beam or Metzger. Nasar’s maxim was “Principles, not organisations”. He envisaged individual attackers and cells, guided by texts published online, striking targets across the world.

Having identified this new threat, security officials, journalists and policymakers needed a new vocabulary to describe it. The rise of the term lone wolf wasn’t wholly unprecedented. In the aftermath of 9/11, the US had passed anti-terror legislation that included a so-called “lone wolf provision”. This made it possible to pursue terrorists who were members of groups based abroad but who were acting alone in the US. Yet this provision conformed to the prevailing idea that all terrorists belonged to bigger groups and acted on orders from their superiors. The stereotype of the lone wolf terrorist that dominates today’s media landscape was not yet fully formed.

It is hard to be exact about when things changed. By around 2006, a small number of analysts had begun to refer to lone-wolf attacks in the context of Islamic militancy, and Israeli officials were using the term to describe attacks by apparently solitary Palestinian attackers. Yet these were outliers. In researching this article, I called eight counter-terrorism officials active over the last decade to ask them when they had first heard references to lone-wolf terrorism. One said around 2008, three said 2009, three 2010 and one around 2011. “The expression is what gave the concept traction,” Richard Barrett, who held senior counter-terrorist positions in MI6, the British overseas intelligence service, and the UN through the period, told me. Before the rise of the lone wolf, security officials used phrases – all equally flawed – such as “homegrowns”, “cleanskins”, “freelancers” or simply “unaffiliated”.

As successive jihadi plots were uncovered that did not appear to be linked to al-Qaida or other such groups, the term became more common. Between 2009 and 2012 it appears in around 300 articles in major English-language news publications each year, according to the professional cuttings search engine Lexis Nexis. Since then, the term has become ubiquitous. In the 12 months before the London attack last week, the number of references to “lone wolves” exceeded the total of those over the previous three years, topping 1,000.

Lone wolves are now apparently everywhere, stalking our streets, schools and airports. Yet, as with the tendency to attribute all terrorist attacks to al-Qaida a decade earlier, this is a dangerous simplification.

In March 2012, a 23-year-old petty criminal named Mohamed Merah went on a shooting spree – a series of three attacks over a period of nine days – in south-west France, killing seven people. Bernard Squarcini, head of the French domestic intelligence service, described Merah as a lone wolf. So did the interior ministry spokesman, and, inevitably, many journalists. A year later, Lee Rigby, an off-duty soldier, was run over and hacked to death in London. Once again, the two attackers were dubbed lone wolves by officials and the media. So, too, were Dzhokhar and Tamerlan Tsarnaev, the brothers who bombed the Boston Marathon in 2013. The same label has been applied to more recent attackers, including the men who drove vehicles into crowds in Nice and Berlin last year, and in London last week.

The Boston Marathon bombing carried out by Dzhokhar and Tamerlan Tsarnaev in 2013. Photograph: Dan Lampariello/Reuters

One problem facing security services, politicians and the media is that instant analysis is difficult. It takes months to unravel the truth behind a major, or even minor, terrorist operation. The demand for information from a frightened public, relayed by a febrile news media, is intense. People seek quick, familiar explanations.

Yet many of the attacks that have been confidently identified as lone-wolf operations have turned out to be nothing of the sort. Very often, terrorists who are initially labelled lone wolves have active links to established groups such as Islamic State and al-Qaida. Merah, for instance, had recently travelled to Pakistan and been trained, albeit cursorily, by a jihadi group allied with al-Qaida. Merah was also linked to a network of local extremists, some of whom went on to carry out attacks in Libya, Iraq and Syria. Bernard Cazeneuve, who was then the French interior minister, later agreed that calling Merah a lone wolf had been a mistake.

If, in cases such as Merah’s, the label of lone wolf is plainly incorrect, there are other, more subtle cases where it is still highly misleading. Another category of attackers, for instance, are those who strike alone, without guidance from formal terrorist organisations, but who have had face-to-face contact with loose networks of people who share extremist beliefs. The Exeter restaurant bomber, dismissed as an unstable loner, was actually in contact with a circle of local militant sympathisers before his attack. (They have never been identified.) The killers of Lee Rigby had been on the periphery of extremist movements in the UK for years, appearing at rallies of groups such as the now proscribed al-Muhajiroun, run by Anjem Choudary, a preacher convicted of terrorist offences in 2016 who is reported to have “inspired” up to 100 British militants.

A third category is made up of attackers who strike alone, after having had close contact online, rather than face-to-face, with extremist groups or individuals. A wave of attackers in France last year were, at first, wrongly seen as lone wolves “inspired” rather than commissioned by Isis. It soon emerged that the individuals involved, such as the two teenagers who killed a priest in front of his congregation in Normandy, had been recruited online by a senior Isis militant. In three recent incidents in Germany, all initially dubbed “lone-wolf attacks”, Isis militants actually used messaging apps to direct recruits in the minutes before they attacked. “Pray that I become a martyr,” one attacker who assaulted passengers on a train with an axe and knife told his interlocutor. “I am now waiting for the train.” Then: “I am starting now.”

Very often, what appear to be the clearest lone-wolf cases are revealed to be more complex. Even the strange case of the man who killed 86 people with a truck in Nice in July 2016 – with his background of alcohol abuse, casual sex and lack of apparent interest in religion or radical ideologies – may not be a true lone wolf. Eight of his friends and associates have been arrested and police are investigating his potential links to a broader network.

What research does show is that we may be more likely to find lone wolves among far-right extremists than among their jihadi counterparts. Even in those cases, though, the term still conceals more than it reveals.

The murder of the Labour MP Jo Cox, days before the EU referendum, by a 52-year-old called Thomas Mair, was the culmination of a steady intensification of rightwing extremist violence in the UK that had been largely ignored by the media and policymakers. According to police, on several occasions attackers came close to causing more casualties in a single operation than jihadis had ever inflicted. The closest call came in 2013 when Pavlo Lapshyn, a Ukrainian PhD student in the UK, planted a bomb outside a mosque in Tipton, West Midlands. Fortunately, Lapshyn had got his timings wrong and the congregation had yet to gather when the device exploded. Embedded in the trunks of trees surrounding the building, police found some of the 100 nails Lapshyn had added to the bomb to make it more lethal.

Lapshyn was a recent arrival, but the UK has produced numerous homegrown far-right extremists in recent years. One was Martyn Gilleard, who was sentenced to 16 years for terrorism and child pornography offences in 2008. When officers searched his home in Goole, East Yorkshire, they found knives, guns, machetes, swords, axes, bullets and four nail bombs. A year later, Ian Davison became the first Briton convicted under new legislation dealing with the production of chemical weapons. Davison was sentenced to 10 years in prison for manufacturing ricin, a lethal biological poison made from castor beans. His aim, the court heard, was “the creation of an international Aryan group who would establish white supremacy in white countries”.

Lapshyn, Gilleard and Davison were each described as lone wolves by police officers, judges and journalists. Yet even a cursory survey of their individual stories undermines this description. Gilleard was the local branch organiser of a neo-Nazi group, while Davison founded the Aryan Strike Force, the members of which went on training days in Cumbria where they flew swastika flags.

Thomas Mair, who was also widely described as a lone wolf, does appear to have been an authentic loner, yet his involvement in rightwing extremism goes back decades. In May 1999, the National Alliance, a white-supremacist organisation in West Virginia, sent Mair manuals that explained how to construct bombs and assemble homemade pistols. Seventeen years later, when police raided his home after the murder, they found stacks of far-right literature, Nazi memorabilia and cuttings on Anders Breivik, the Norwegian terrorist who murdered 77 people in 2011.

A government building in Oslo bombed by Anders Breivik, July 2011. Photograph: Scanpix/Reuters

Even Breivik himself, who has been called “the deadliest lone-wolf attacker in [Europe’s] history”, was not a true lone wolf. Prior to his arrest, Breivik had long been in contact with far-right organisations. A member of the English Defence League told the Telegraph that Breivik had been in regular contact with its members via Facebook, and had a “hypnotic” effect on them.

If such facts fit awkwardly with the commonly accepted idea of the lone wolf, they fit better with academic research that has shown that very few violent extremists who launch attacks act without letting others know what they may be planning. In the late 1990s, after realising that in most instances school shooters would reveal their intentions to close associates before acting, the FBI began to talk about “leakage” of critical information. By 2009, it had extended the concept to terrorist attacks, and found that “leakage” was identifiable in more than four-fifths of 80 ongoing cases they were investigating. Of these leaks, 95% were to friends, close relatives or authority figures.

More recent research has underlined the garrulous nature of violent extremists. In 2013, researchers at Pennsylvania State University examined the interactions of 119 lone-wolf terrorists from a wide variety of ideological and faith backgrounds. The academics found that, even though the terrorists launched their attacks alone, in 79% of cases others were aware of the individual’s extremist ideology, and in 64% of cases family and friends were aware of the individual’s intent to engage in terrorism-related activity. Another, more recent survey found that 45% of the Islamic militants studied had talked about their inspiration and possible actions with family and friends. Only 18% of their rightwing counterparts did so, but they were much more likely to “post telling indicators” on the internet.

Few extremists remain without human contact, even if that contact is only found online. Last year, a team at the University of Miami studied 196 pro-Isis groups operating on social media during the first eight months of 2015. These groups had a combined total of more than 100,000 members. Researchers also found that pro-Isis individuals who were not in a group – whom they dubbed “online ‘lone wolf’ actors” – had either recently been in a group or soon went on to join one.

There is a much broader point here. Any terrorist, however socially or physically isolated, is still part of a broader movement. The lengthy manifesto that Breivik published hours before he started killing drew heavily on a dense ecosystem of far-right blogs, websites and writers. His ideas on strategy drew directly from the “leaderless resistance” school of Beam and others. Even his musical tastes were shaped by his ideology. He was, for example, a fan of Saga, a Swedish white nationalist singer, whose lyrics include lines about “The greatest race to ever walk the earth … betrayed”.

It is little different for Islamic militants, who emerge as often from the fertile and desperately depressing world of online jihadism – with its execution videos, mythologised history, selectively read religious texts and Photoshopped pictures of alleged atrocities against Muslims – as from organised groups that meet in person.

Terrorist violence of all kinds is directed against specific targets. These are not selected at random, nor are such attacks the products of a fevered and irrational imagination operating in complete isolation.

Just like the old idea that a single organisation, al-Qaida, was responsible for all Islamic terrorism, the rise of the lone-wolf paradigm is convenient for many different actors. First, there are the terrorists themselves. The notion that we are surrounded by anonymous lone wolves poised to strike at any time inspires fear and polarises the public. What could be more alarming and divisive than the idea that someone nearby – perhaps a colleague, a neighbour, a fellow commuter – might secretly be a lone wolf?

Terrorist groups also need to work constantly to motivate their activists. The idea of “lone wolves” invests murderous attackers with a special status, even glamour. Breivik, for instance, congratulated himself in his manifesto for becoming a “self-financed and self-indoctrinated single individual attack cell”. Al-Qaida propaganda lauded the 2009 Fort Hood shooter as “a pioneer, a trailblazer, and a role model who has opened a door, lit a path, and shown the way forward for every Muslim who finds himself among the unbelievers”.

The lone-wolf paradigm can be helpful for security services and policymakers, too, since the public assumes that lone wolves are difficult to catch. This would be justified if the popular image of the lone wolf as a solitary actor was accurate. But, as we have seen, this is rarely the case.

Westminster terrorist Khalid Masood. Photograph: Reuters

The reason that many attacks are not prevented is not because it was impossible to anticipate the perpetrator’s actions, but because someone screwed up. German law enforcement agencies were aware that the man who killed 12 in Berlin before Christmas was an Isis sympathiser and had talked about committing an attack. Repeated attempts to deport him had failed, stymied by bureaucracy, lack of resources and poor case preparation. In Britain, a parliamentary report into the killing of Lee Rigby identified a number of serious delays and potential missed opportunities to prevent it. Khalid Masood, the man who attacked Westminster last week, was identified in 2010 as a potential extremist by MI5.

But perhaps the most disquieting explanation for the ubiquity of the term is that it tells us something we want to believe. Yes, the terrorist threat now appears much more amorphous and unpredictable than ever before. At the same time, the idea that terrorists operate alone allows us to break the link between an act of violence and its ideological hinterland. It implies that the responsibility for an individual’s violent extremism lies solely with the individual themselves.

The truth is much more disturbing. Terrorism is not something you do by yourself, it is highly social. People become interested in ideas, ideologies and activities, even appalling ones, because other people are interested in them.

In his eulogy at the funeral of those killed in the mosque shooting in Quebec, the imam Hassan Guillet spoke of the alleged shooter. Over previous days details had emerged of the young man’s life. “Alexandre [Bissonnette], before being a killer, was a victim himself,” said Hassan. “Before he planted his bullets in the heads of his victims, somebody planted ideas more dangerous than the bullets in his head. Unfortunately, day after day, week after week, month after month, certain politicians, and certain reporters and certain media, poisoned our atmosphere.

“We did not want to see it … because we love this country, we love this society. We wanted our society to be perfect. We were like some parents who, when a neighbour tells them their kid is smoking or taking drugs, answers: ‘I don’t believe it, my child is perfect.’ We don’t want to see it. And we didn’t see it, and it happened.”

“But,” he went on to say, “there was a certain malaise. Let us face it. Alexandre Bissonnette didn’t emerge from a vacuum.”

Wednesday, 29 March 2017

A world without retirement

Amelia Hill in The Guardian

We are entering the age of no retirement. The journey into that chilling reality is not a long one: the first generation who will experience it are now in their 40s and 50s. They grew up assuming they could expect the kind of retirement their parents enjoyed – stopping work in their mid-60s on a generous income, with time and good health enough to fulfil long-held dreams. For them, it may already be too late to make the changes necessary to retire at all.

In 2010, British women got their state pension at 60 and men got theirs at 65. By October 2020, both sexes will have to wait until they are 66. By 2028, the age will rise again, to 67. And the creep will continue. By the early 2060s, people will still be working in their 70s, but according to research, we will all need to keep working into our 80s if we want to enjoy the same standard of retirement as our parents.

This is what a world without retirement looks like. Workers will be unable to down tools, even when they can barely hold them with hands gnarled by age-related arthritis. The raising of the state retirement age will create a new social inequality. Those living in areas in which the average life expectancy is lower than the state retirement age (south-east England has the highest average life expectancy, Scotland the lowest) will subsidise those better off by dying before they can claim the pension they have contributed to throughout their lives. In other words, wealthier people become beneficiaries of what remains of the welfare state.

Retirement is likely to be sustained in recognisable form in the short and medium term. Looming on the horizon, however, is a complete dismantling of this safety net.

For those of pensionable age who cannot afford to retire, but cannot continue working – because of poor health, or ageing parents who need care, or because potential employers would rather hire younger workers – the great progress Britain has made in tackling poverty among the elderly over the last two decades will be reversed. This group is liable to suffer the sort of widespread poverty not seen in Britain for 30 to 40 years.

Many now in their 20s will be unable to save throughout their youth and middle age because of increasingly casualised employment, student debt and rising property prices. By the time they are old, members of this new generation of poor pensioners are liable to be, on average, far worse off than the average poor pensioner today.

A series of factors has contributed to this situation: increased life expectancy, woeful pension planning by successive governments, the end of the final-salary pension scheme (in which people got two-thirds of their final salary as a pension) and our own failure to save.

For two months, as part of an experiment by the Guardian in collaborative reporting, I have been investigating what retirement looks like today – and what it might look like for the next wave of retirees, their children and grandchildren. The evidence reveals a sinkhole beneath the state’s provision of pensions. Under the weight of our vastly increased longevity, retirement – one of our most cherished institutions – is in danger of collapsing into it.

Many of those contemplating retirement are alarmed by the new landscape. A 62-year-old woman, who is for the first time in her life struggling to pay her mortgage (and wishes to remain anonymous), told me: “I am more stressed now than I was in my 30s. I lived on a very tight budget then, but I was young and could cope emotionally. I don’t mean to sound bitter, but I never thought I would feel this scared of the future at my age. I’m not remotely materialistic and have never wanted a fancy lifestyle. But not knowing if I will be without a home in the next few months is a very scary place to be.”

And it is not just the older generation who fear old age. Adam Palfrey is 30, with three children and a disabled wife who cannot work. “I must confess, I am absolutely terrified of retirement,” he told me. “I have nothing stashed away. Savings are out of the question. I only just earn enough that, with housing benefit, disability living allowance and tax credits, I manage to keep our heads above water. I work every hour I can just to keep things afloat. There’s no way I could keep this up aged 70-plus, just so that my partner and I can live a basic life. As for my three children … God knows. I can scarcely bring myself to think about it.”

It is not news that the population is ageing. What is remarkable is that we have failed to prepare the ground for this inevitable change. Life expectancy in Britain is growing by a dramatic five hours a day. Thanks to a period of relative peace in the UK, low infant mortality and continual medical advances, over the past two decades the life expectancy of babies born here has increased by some five years. (A baby born at the end of my eight-week series, The New Retirement, has a life expectancy almost 12 days longer than a baby born at the start of it.)
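The arithmetic behind these figures is easy to check. A back-of-envelope sketch in Python (the five-hours-a-day growth rate is the article's own figure; the rest is simple unit conversion):

```python
# Back-of-envelope check of the life-expectancy arithmetic.
GAIN_HOURS_PER_DAY = 5  # the article's stated growth rate

# Over the eight-week (56-day) run of the series:
series_gain_days = GAIN_HOURS_PER_DAY * 56 / 24
print(round(series_gain_days, 1))  # 11.7 days, i.e. "almost 12 days"

# Over two decades, expressed in years:
gain_years = GAIN_HOURS_PER_DAY * 365 * 20 / (24 * 365)
print(round(gain_years, 1))  # 4.2 years, broadly consistent with "some five years"
```

The two-decade figure comes out slightly under five years at a constant five hours a day, which suggests the growth rate has not been uniform over the period.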

Dr Peter Jarvis and Sue Perkins at Bletchley Park. Photograph: Linda Nylind for the Guardian

In 2014, the average age of the UK population exceeded 40 for the first time – up from 33.9 in 1974. In little more than a decade, half of the country’s population will be aged over 50. This will transform Britain – and it is no mere blip; the trend will continue as life expectancy increases. This year marked a demographic turning point in the UK. As the baby-boom generation (now aged between 53 and 71) entered retirement, for the first time since the early 1980s there were more people either too old or too young to work than there were of working age.

The number of people in the UK aged 85 or more is expected to more than double in the next 25 years. By 2040, nearly one in seven Britons will be over 75. Half of all children born in the UK are predicted to live to 103. Some 10 million of us currently alive in the UK (and 130 million throughout Europe) are likely to live past the age of 100.

Governments see raising the state retirement age as a way to cover the cost of an ageing population

The challenges are considerable. The tax imbalance that comes with an ageing population, whose tax contribution falls far short of their use of services, will rise to £15bn a year by 2060. Covering this gap will cost the equivalent of a 4p income tax rise for the working-age population.

It is easy to see why governments might regard raising the state retirement age as a way to cover the cost of an ageing population. A successful pursuit of full employment of people into their late 60s could maintain the ratio of workers to non-workers for many decades to come. And were the employment rate for older workers to match that of the 30-40 age group, the additional tax payments could be as much as £88.4bn. According to PwC’s Golden Age Index, had our employment rates for those aged 55 years and older been as high as those in Sweden between 2003 and 2013, UK national GDP would have been £105bn – or 5.8% – higher.

There are, of course, problems with this approach. Those who can happily work into their 70s and beyond are likely to be the privileged few: the highly educated elite who haven’t spent their working lives in jobs that negatively affect their health. If the state pension age is pushed further away, for those with failing health, family responsibilities or no jobs, life will become very difficult.

The new state pension, introduced on 6 April 2016, will be paid to men born on or after 6 April 1951, and women born on or after 6 April 1953. Assuming you have paid 35 years of National Insurance, it will pay out £155.65 a week. The old scheme (worth a basic sum of £119.30 per week, with more for those who paid into additional state pension schemes such as Serps or S2P) applies to those born before those dates.
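Converting those weekly rates to annual sums (a quick arithmetic check, not an official calculation) shows where the widely quoted annual figure comes from:

```python
# Weekly state pension rates, per the article, converted to annual sums.
new_weekly = 155.65   # new state pension (from 6 April 2016)
old_weekly = 119.30   # old basic state pension

print(f"New: £{new_weekly * 52:,.2f} a year")        # £8,093.80
print(f"Old basic: £{old_weekly * 52:,.2f} a year")  # £6,203.60
```

The new rate works out at just over £8,000 a year, the figure debated below.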

Frank Field, Labour MP and chair of the work and pensions select committee, told me that the new figure of just over £8,000 a year is enough to guarantee all pensioners a decent standard of living: an “adequate minimum”, as he put it. Anything above that, he said, should be privately funded, without tax breaks or other government help.

“Once the minimum has been reached, it’s not the job of government to bribe people to save more,” he says. “To provide luxurious pension payments was never the aim of the state pension.”

Whether the new state pension can really be described as an “adequate minimum” turns out to be a matter of opinion. Dr Ros Altmann, who was brought into government in April 2015 to work on pensions policy, is the UK government’s former older workers’ champion and a governor of the Pensions Policy Institute. When I relayed Field’s comments to her, she was left briefly speechless. Then she managed a “wow”. “Did he really say that? Would he be happy to live on just over £8,000 a year?” she asked, finally.

Tom McPhail, head of retirement policy at financial advisers Hargreaves Lansdown, is clear that the new state pension has not been set at a high enough level to guarantee a dignified older age to those who have no other income. “How sufficient is the new state pension? That’s an easy one to answer: it’s not,” he said.

Field makes the assumption that people have enough additional private financial ballast to bolster their state pensions. But the reality is that many people have neither savings – nearly a third of all households would struggle to pay an unexpected £500 bill – nor sufficient private pension provision to bring their state pension entitlement up to a level to ensure a comfortable retirement by most people’s understanding of the term. In fact, savings are the great dividing line in retirement, and the scale of the so-called “pension gap” – the gap between what your pension pot will pay out and the amount you need to live comfortably in older age – is shocking.

Three in 10 Britons aged 55-64 do not have any pension savings at all. Almost half of those in their 30s and 40s are not saving adequately or at all. In part, that is because we underestimate the amount of money we need to save. According to research by Saga earlier this month, four in 10 of those aged over 40 have no idea of the cost of even a basic lifestyle in retirement. And when it came to the total pension pot they would need to fund their retirement, over 80% admitted they had no idea how big it would have to be.

Retirement is an ancient concept. It caused one of the worst military disasters ever faced by the Roman empire when, in AD14, the imperial power increased the retirement age and decreased the pensions of its legionaries, causing mutiny in Pannonia and Germany. The ringleaders were rounded up and disposed of, but the institution remains so highly prized that any threat to its continued existence is liable to cause mutiny. “Retirement has been stolen. You can pay in as much as you like. They will never pay back. Time for a grey revolution,” one reader emailed.

It was in 1881 that the German chancellor, Otto von Bismarck, made a radical speech to the Reichstag, calling for government-run financial support for those aged over 70 who were “disabled from work by age and invalidity”.

Roger Hall in Porlock Bay, Somerset. Photograph: Sam Frost for the Guardian

The scheme wasn’t the socialist ideal it is sometimes assumed to be: Bismarck was actually advocating a disability pension, not a retirement pension as we understand it today. Besides, the retirement age he recommended just about aligned with average life expectancy in Germany at that time. Bismarck did, however, have a further vision that was genuinely too radical for his era: he proposed a pension that could be drawn at any age, if the contributor was judged unfit for work. Those drawing it earlier would receive a lower amount.

This notion is surfacing again in various forms. The New Economics Foundation is arguing for a shorter working week, via a “slow retirement” in which employees give up an hour of work per week every year from the age of 35. The idea is that older workers will gradually release more of their work time to younger ones, allowing a steady handover of retained wisdom. A universal basic income, whereby everyone receives a set sum from the state each year, regardless of how much they do or don’t work, might have a similar effect, enabling people to move to part-time work as they age.

Widespread poverty among the over-65s led to the 1946 National Insurance Act, which introduced the first contributory, flat-rate pension in the UK for women of 60 and men of 65. At first, pension rates were low and most pensioners did not have enough to get by. By 1967, more than 8 million employees working for private companies were entitled to a final-salary pension, along with 4 million state workers. By the late 1970s, the value of the state pension had risen and an increasing number of people – mainly men – were able to benefit from occupational pension schemes. In 1978, the Labour government introduced a fully fledged “earnings-linked” state top-up system for those without access to a company scheme.

With pension payments now at a rate that enabled older people to stop work without risking penury, older men (and to a lesser extent older women) began to enjoy a “third age”, which fell between the end of work and the start of old age. In 1970, the employment rate for men aged 60-64 was 81%; by 1985 it had fallen to 49.7%.

Access to a comfortable old age is a powerful political idea. John Macnicol, a visiting professor at the London School of Economics and author of Neoliberalising Old Age, believes that when jobs were needed for younger men after the second world war, a “socially elegant mythology” was created in which retirement was a time for older workers to kick back and relax.

He believes that in the 1990s, however, the narrative was cynically changed and the image of pensioners was deliberately altered: from being poor, frail, dependent and deserving, to well off, hedonistic, politically powerful and selfish. The notion of “the prosperous pensioner was constructed in the face of evidence that showed exactly the opposite to be the case”, he said, “so that the right to retirement [could be] undermined: more coercive working practices, forcing older people to stay in employment, could be presented as providing new ‘opportunities’, removing barriers to working, bestowing greater inclusion and even achieving upward social mobility”.

This change in attitude towards pensioners helped the government bring in a hike in retirement age. In 1995, the Conservative government under John Major announced a steady increase from 60 to 65 in the state pension age for women, to come in between April 2010 and April 2020. Most agreed that equalising the state pension age was fair enough. What many objected to was that the government waited until 2009 – a year before the increases were set to begin – to start contacting those affected, leaving thousands of women without time to rearrange their finances or adjust their employment plans to fill the gaping hole in their income.

Then, in 2011 – when the state pension age for women had risen to 63 – the coalition government accelerated the timetable: the state pension age for women will now reach 65 in November 2018, at which point it will rise alongside men’s: to 66 by 2020 and to 67 by 2028.

When she stood down as pensions minister in 2016, Ros Altmann stated that she was “not convinced the government had adequately addressed the hardship facing women who have had their state pension age increased at short notice”.

After surviving cancer at 52, Jackie Harrison, now 62, looked over her savings and decided she could just about afford to take early retirement. “I had achieved 36 years of national insurance contributions,” she said. “I used to phone the Department for Work and Pensions every year to ensure that I had worked enough to get my full pension at 60.”

Then she was told her personal pension age was increasing from 60 to 63 years and six months. “I wasn’t eligible for any benefits because of my partner’s pension, but I could nevertheless still just about manage until the new state retirement age,” she said. But when she was 58, the goalposts moved again – this time to 66. “I’d been out of the workplace for so long that I didn’t have a hope of being able to get back into it,” she said. “But nor did it give me enough time to make other financial arrangements.”

Harrison made the agonising decision to raise money by selling her family home and moving to a different city, where she could live more cheaply. Her decisions had heavy implications for the rest of her family – and the state. When she moved, she left behind a vulnerable adult daughter and baby grandchild and octogenarian parents.

“This is not the retirement I had planned at all,” Harrison told me. “I had loads of savings once, but now I live in a constant state of worry due to financial pressures. It seems so unfair when I have worked all my life and planned for my retirement. I just don’t know how I am going to manage for another four years.” Women born in the 1950s are already living in their age of no retirement.

In 2006, it became legal for employers to force their workers to retire at the age of 65. A campaign led by Age Concern and Help the Aged was swift and effective in its argument that the new default retirement age law broke EU rules and gave employers too much leeway to justify direct discrimination on the grounds of age. On 1 October 2011, the law was overturned.

Since then, Britain’s workforce has greyed almost before our eyes: in the last 15 years, the number of working people aged 50-64 has increased by 60% to 8 million (far greater than the increase in the population of people over 50). The proportion of people aged 70-74 in employment, meanwhile, has almost doubled in the past 10 years. This trend will continue. By 2020, one-third of the workforce will be over 50.

A worker at Steelite International ceramics in Stoke-on-Trent. Photograph: Christopher Thomond for the Guardian

The proportional increase may be substantial, but it charts growth from a low base. In absolute terms, the picture is less positive: almost one-third of people in the UK aged 50-64 are not working. In fact, more older people are becoming jobless than are finding employment: almost 40% of employment and support allowance claimants are over 50, an indication that many older people cannot easily find new and sustainable work.

This is unsustainable: by 2020, an estimated 12.5m jobs will become vacant as a result of older people leaving the workforce. Yet there will only be 7 million younger people to fill them. If we can no longer rely on immigration to fill the gaps, employers will have to shed their prejudices, workplaces will have to be adapted, and social services will have to step in to provide the care that ageing people can no longer give their grandchildren, ageing spouses or parents if they remain in the workforce.

But forcing older people to work longer if they cannot easily do so can cause more harm than good. Prof Debora Price, director of the Manchester Institute for Collaborative Research on Ageing, told me: “There is evidence to suggest that opportunities for people to work beyond state pension age might well be making inequalities worse, since those able to work into later life tend to be men who are highly educated and have been in higher-paid jobs.”

One answer is to return to Bismarck’s original plan, whereby the state pension can be accessed early by anyone who chooses to collect a smaller pension sum at an age lower than the state retirement age, perhaps because of poor health or other commitments.

This option, however, was rejected last week by John Cridland, the former head of the CBI, the business lobby group, who was appointed by the government in March 2016 to review the state pension age and help cut the UK’s £100bn-a-year pension costs.

Instead, Cridland has recommended that the state pension age should rise from 67 to 68 by 2039, seven years earlier than currently timetabled. This will push the state retirement age back by a year for anyone now in their early 40s. Cridland has rejected calls for early access to the state pension for those in poor health, but has left the door open for additional means-tested support to be made available one year before state pension age for those unable to work owing to ill health or caring responsibilities.

In spite of their anxieties about money, one of the things I have been most struck by, in my many conversations with older readers, is the pleasure they take in life.

One grandmother told me: “Last week, I swept across a crowded pub to pick up a raffle prize … with my dress tucked into my knickers! A few years ago I would have been mortified. Not any more. Told ’em they were lucky it was cold and I had knickers on!”

Monica Hartwell, 69, is part of the team at the volunteer-run Regal theatre in Minehead, as well as the film society and the museum. “The joy of getting older is much greater self-confidence,” she told me. “It’s the loss of angst about what people think of you: the size of your bum or whether others are judging you correctly. It’s not an arrogance, but you know who you are when you’re older and all those roles you played to fit in when you were younger are irrelevant.”

Women in Ilkley, West Yorkshire, discuss retirement. Photograph: Christopher Thomond for the Guardian

The data bears out these experiences: 65 to 79 is the happiest age group for adults, according to the Office for National Statistics. Recently, a report claimed that women in their 80s have more enjoyable sex than those up to 30 years younger. Other research has found that 75% of those aged 50 and over are less bothered about what people think of them and 61% enjoy life more than when they were younger.

So what is the secret to a successful retirement? Private companies run courses to help those on the verge of retirement plan for changes in income, time and relationships. I have spoken to those running such courses, as well as those who have retired. The consensus is that there are five pillars, all of which rest on the “money bit” – the basic level of financial security without which later life is hard. Once that foundation is in place, retirees can build up the second pillar: a social network to replace their former work community. The third pillar is having purpose and challenging one’s mind. Fourth is ongoing personal development – exploring, questioning and learning are an important part of what makes us human; this should never stop, I was told. The fifth and final pillar is having fun.

I tried explaining final-salary pensions to a 20-year-old recently. They looked at me quizzically, as though I were telling them I had seen a unicorn. When that same 20-year-old, however, tries to explain the traditional concept of retirement to their own children, they might well be met with the same level of incomprehension.

For their children, life might well be more like the joke that Ali Seamer emailed to me during a recent Q&A I ran with readers as part of my investigation into what retirement means today: “I’m going to have to work up to 6pm on the day of my funeral just to be able to afford the coffin,” he said.

In examining the reality of this new age of no retirement, I have become aware of two pitfalls undermining constructive debate. The first is the prejudice that an ageing population will place a huge burden on society.

This is refuted by numerous studies: the volunteer charity WRVS has done the most work to quantify the economic role played by older generations. Taking together the tax payments, spending power, caring and volunteer efforts of people aged 65-plus, it calculates that they contribute almost £40bn more to the UK economy than they receive in state pensions, welfare and health services.

The research suggests that this benefit to the economy will increase in coming years as increasing numbers of baby-boomers enter retirement. By 2030, it projects that the net contribution of older people will be worth some £75bn.

Older people’s contribution to society is not just economic. An ICM poll for the WRVS study found that 65% of older people say they regularly help out elderly neighbours; they are the most likely of all adult age groups to do so.

The second pitfall is the conflict between generations that the issue of retirement can cause. The financial problems of the young have been blamed on baby boomers. But the truth is that the UK state pension languishes far below the level provided in most other developed countries. And this contributory, taxed income – pensioners pay tax just like anyone else – is all that many old people have to live on.

Nearly 2 million of those aged 55-64 do not have any private pension savings and, despite the commonly held belief that older people are all mortgage-free, fewer than 48% of those aged 55-64 own their homes outright and nearly a quarter are still renting. It is true that some have benefited greatly from rises in house prices, but the cost of borrowing was high – often 10% or more – during the 1970s and 1980s. One in 10 of those aged 65 and over still have a mortgage.

For all the recent talk of the average pensioner household being £20 a week better off than working households, the truth is that many are actually working to supplement their income. Still, to people just entering the workforce, the lives of today’s pensioners look impossibly privileged.

Rachael Ingram sums it up. At 19, working full-time and studying for an Open University degree, she is already putting 10% of her income aside for her pension. “I shouldn’t be worrying about saving for my pension at my age,” she told me. “I’m saving money that could go towards a deposit for my first house – I’m currently renting a flat in Liverpool – or out socialising. But I have no faith in government or the state pension. There will be no one to look after me when I’m old.”

I was vulnerable and wanted a home. What I got was a workhouse

Daniel Lavelle in The Guardian

There are many reasons why I became homeless, but no one was surprised it happened. I’m just another care leaver who lost control of their life. Almost every person I lived with in children’s homes and foster placements has since experienced mental health problems, stints in prison, and battles with drug and alcohol addiction. What would make me so special that I could avoid the inevitable breakdown?

I spent periods in a tent on a campsite near Saddleworth Moor, where I was woken up every night by my neighbour, a cantankerous Yorkshireman who would liberate the grievances he had been bottling up all day in a series of piercing screams.

The local housing advice service was no help. I was told that to be considered a priority need, I had to demonstrate that I was more vulnerable than my homeless counterparts. As one adviser put it: “I have to establish that you would be worse off than me, if I were homeless.” It may surprise people to learn that local councils are now running a misery contest for housing, a sort of X Factor for the destitute. Maybe my audition would have gone better if I’d had a few more missing teeth, and wet myself while singing Oom-Pah-Pah.

And then I befriended a resident of a residential charity for the homeless. He was far more helpful than the housing advisers, and managed to organise a place for me at the charity.

When I entered the converted factory that housed it, the place immediately struck me as resembling a Victorian workhouse. I was told by the “community leader” that I would receive basic subsistence: a room, food, clothing and a modest weekly allowance, in exchange for 40 hours’ labour.

The word “workhouse” conjures up images of Oliver Twist, and of bleak Victorian institutions populated by bedraggled paupers forced into backbreaking labour in exchange for meagre slops of porridge. At the charity home we were not expected to pick oakum or break boulders, but the work was hard and the returns were meagre.

Part of my job involved delivering furniture. I spent day after day lifting heavy items such as wardrobes and three-piece suites, sometimes up and down several flights of stairs. The work was described as voluntary by the charity, but in reality neither I nor any of my fellow inmates had anywhere else to go, and so we had little choice but to do it.

The charity describes itself as a “working community”. But as far as I was concerned this was a workhouse in all but name: a civil prison, and a punishment for poverty. How do such charities manage to require their residents to work up to 40 hours a week without a wage, paying them only a small allowance for food and accommodation?

In 1999 the New Labour government exempted charities and other institutions from paying workers the national minimum wage if prior to entering a work scheme they were homeless or residing in a homeless hostel. There is perhaps no better demonstration that this country is yet to shake off punitive Victorian attitudes towards the “undeserving” poor.

These regulations not only strip homeless people of the right to a decent wage, but of all their other employment rights too. Because residents of such charities are not classed as employees, they cannot claim unfair dismissal or sick pay. Many people have lived and worked at the charity for up to 15 years, yet they can be sacked and evicted with no legal right to appeal.

I accept that residents, some of whom have suffered with long-term alcoholism and drug dependency, are far better off within the charity home’s walls than they would be on the streets or living alone. The environment is predominantly a positive one, where residents are well fed and safe, and are overseen by conscientious staff. The charity does give individuals the chance to participate in meaningful work and contribute to a community, sometimes for the first time in their lives. But none of this alters the fact that residents are forced by poverty to work for no pay.

The homelessness reduction bill, which last week cleared its final hurdle in parliament, provides an opportunity to change our approach. It will force local authorities to provide assistance to people threatened with homelessness 56 days before they lose their home, ending the misery contest I and others have been subjected to over the years.

This bill represents a very small step in the right direction, but much more needs to be done to address the reasons people find themselves on the streets in the first place. And ending the exploitation of homeless people for their labour should be one of the first goals.

It is ironic that a Labour government created a backdoor for the revival of workhouses when it was Attlee’s government that abolished the workhouse system. The idea that the poor should be forced to work for board and basic subsistence was once universally condemned, but it has been revived without a murmur of public disapproval.

No one else in our society can be mandated to work full time for no pay, with no rights, on pain of being condemned to a life on the streets. So why is it OK to treat homeless people this way?

Tuesday, 28 March 2017

Access to justice is no longer a worker’s right, but a luxury

Aditya Chakrabortty in The Guardian

Laws that cost too much to enforce are phoney laws. A civil right that people can’t afford to use is no right at all. And a society that turns justice into a luxury good is one no longer ruled by law, but by money and power. This week the highest court in the land will decide whether Britain will become such a society. There are plenty of signs that we have already gone too far.

Listen to the country’s top judge, Lord Thomas of Cwmgiedd, who admits that “our justice system has become unaffordable to most”. Look at our legal-aid system, slashed so heavily by David Cameron and Theresa May that the poor must act as their own trial lawyers, ready to be skittled by barristers in the pay of their moneyed opponents.

The latest case will be heard by seven supreme court judges and will pit the government against the trade union Unison. It will be the climax of a four-year legal battle over one of the most fundamental rights of all: the right of workers to stand up against their bosses. 

In 2013, Cameron stripped workers of the right of free access to the employment tribunal system. Whether a pregnant woman forced out of her job, a man of Bangladeshi origin battling racism at work, or a young graduate with disabilities getting aggro from a boss, all would now have to pay £1,200 for a chance of redress.

The number of cases taken to tribunal promptly fell off a cliff – down by 70% within a year. Citizens Advice, employment lawyers and academics practically queued up to warn that workers – especially poor workers – were getting priced out of justice. But for Conservative ministers, all was fine. Loyal flacks such as Matthew Hancock (then employment minister) claimed those deterred by the fees were merely “unscrupulous” try-ons, intent on “bullying bosses”. Follow Hancock’s logic, and with all those time-wasters weeded out, you’d expect the number of successful tribunal claims to jump. They’ve actually dropped.

At each hearing of Unison’s case, the judges have wound up asking to see actual people for whom the fees have represented a barrier to justice. One was sure that “if the statistics … were drilled down to some individual cases, situations would be revealed that showed an inability on the part of some people to proceed before an employment tribunal through lack of funds”.

Should the supreme court judges want the same thing, they could meet Liliana Almanza. They’d find her a compelling witness, although she finds it hard to sit down for too long due to three herniated discs in her lower back, which make her feel like she’s lugging around “a lot of heavy weight” and which send pain shooting into her hands, legs, shoulders and neck. She also has sometimes severe depression and anxiety. The physical pain and the mental illness can feed off each other.

Almanza has worked as a cleaner at the University of London since 2011 and never kept her conditions from her employer, an outsourcing company called Cofely. Then came a new supervisor, who Almanza felt had it in for her and who piled on extra work. Almanza was sent to the “punishment floor” – actually three floors, normally handled by two people, but she had to do the work on her own and in little time. The extra workload, especially the pushing about of a hoover and a mop, caused her so much pain that she sometimes felt dizzy. Yet when Almanza complained, she says the supervisor either laughed or told her to sign off sick. Although the law requires employers to make reasonable adjustments for disabilities, none was made for hers.

Almanza, who is Colombian, remembers the supervisor telling her how Latin Americans were a bunch of beggars. Other times, she’d call Almanza a “bitch” and a “whore”.

On the worst days, Almanza would walk over to Euston station and stand at the platform’s very edge. She’d wait for the tube to come. Then “a light would come on” and she’d pull herself back.

Almanza did exactly what ministers would want and submitted a grievance using Cofely’s in-house procedure. It was rejected. She appealed and did not hear anything for months. However desperate her situation, she would never have found the money for a tribunal. Some are exempt from the fees, but Almanza and her husband – both cleaners – apparently earned too much money for her to qualify. Nor does the means-testing account for living costs, even though, after renting a single room in a shared ex-council house in London and paying bills, they have almost no money left each month.

Her union, the tiny Independent Workers of Great Britain (IWGB), pitched in some money to go to tribunal and helped crowdfund the rest. As soon as she filed her claim, Almanza remembers, her employer made a number of adjustments and lightened her workload.

I contacted Engie, as Cofely has been rebranded, for its response to Almanza’s charges. Its statement reads in part: “We do not tolerate discrimination in the workplace and all claims … are investigated thoroughly. Following extensive investigation of the allegations brought against Cofely Workplace, all claims were denied and Cofely was formally discharged from the proceedings by the court on 24th May 2016.” The court documents actually show that Cofely was discharged because the contract was taken over by another company, which also reached a settlement with Almanza.

Without charity and the shoestring resources of the IWGB, Almanza wouldn’t have been able to file a claim. If she could testify to the supreme court, what would she say? “I would tell the judges if I hadn’t been able to go to tribunal I don’t think I’d be here today. If I’d continued like that, I wouldn’t have been able to tell this story. Maybe it sounds like an exaggeration, a movie. But it’s one thing to talk about it, another thing to live it.”

Saffron storm, hard cash

Jawed Naqvi in The Dawn

A young man described himself as a dejected Muslim, and punctured the sharp analysis that was under way about the Uttar Pradesh defeat. The venue was a well-appointed seminar room at the India International Centre. Why don’t we show our outrage like they do in America, the young Muslim wanted to know. People in America are out on the streets fighting for the refugees, Latinos, Muslims, blacks, everyone. One US citizen was shot trying to protect an Indian victim of racial assault. Why are Indian opponents of Hindutva so full of wisdom and analysis but few, barring angry students in the universities, take to the streets?

It’s not that people are not fighting injustices. From Bastar to Indian Kashmir, from Manipur to Manesar, peasants, workers, college students, tribespeople and Dalits are fighting back. But they are vulnerable without the groundswell of mass support we see in other countries.

Off and on, political parties are capable of expressing outrage. A heartbreaking scene in parliament is to see Congress MPs screaming their lungs out with rage, but that’s usually when Sonia Gandhi is attacked or Rahul Gandhi belittled. Yet there is no hope of stopping the Hindutva march without accepting the Congress as a pivot to defeat the Modi-Yogi party in 2019.
It’s a given. The slaughterhouses may or may not open any time soon, but an opposition win in 2019 is easier to foresee. It could be a pyrrhic victory, the way the dice is loaded, but it is the only way. Will the Congress join the battle without pushing itself as the natural claimant to power? Without humility, we may not be able to address the young man’s dejection.

Like it or not, there is no other opposition party with the reach of the Congress, even today. Should we be saddled with a party that rises to its feet to protect its leaders — which it should — but has lost the habit of marching against the insults and torture that large sections of Indians endure daily?
A common and valid fear is that the party is vulnerable before the IOUs its satraps may have signed with big league traders, who drive politics in India today.

The Congress needs to ask itself bluntly: who chose Mr Modi as prime minister? It was the same people who chose Manmohan Singh before him. The fact is that India has come to be ruled by traders, though they have neither the vision nor the capacity to industrialise or modernise this country of 1.3 billion. Their appetite for inflicting bad loans on the state exchequer is legendary, though they have seldom measured up to Nehru’s maligned public sector to build any core industry. (Bringing spectrum machines from Europe and mobile phones from China for more and more people to watch mediocre reality shows is neither modernisation nor industrialisation.)

The traders have thrived by funding ruling parties and keeping their options open with the opposition when necessary. It’s like placing casino chips on the roulette table, which is what they have turned a once robust democracy into. If there’s religious fascism staring down India’s throat, there’s someone financing it.

The newspapers won’t tell you all that. The traders own the papers. The umbilical cord between religious regression and traders has been well established in a fabulous book on the Gita Press by a fellow journalist; same with TV.

Nehru wasn’t terribly impressed with them. He fired his finance minister for flirting with their ilk. Indira Gandhi went one better. She installed socialism as a talisman against private profiteers in the preamble of the constitution. They hated her for that. Older Indian literature (Premchand) and cinema dwelt on their shady reality — Mother India, Foot Path, Do Bigha Zamin, Shree 420, to name a few.

At the Congress centenary in Mumbai, Rajiv Gandhi called out the ‘moneybags’ riding the backs of party workers. They retaliated through his closest coterie to smear him with the Bofors refuse. The first move against Hindutva’s financiers will be an uphill journey. The IOUs will come into play.

For that, the Congress must evict the agents of the moneybags known to surround its leadership. But they’re not the only reality the Congress must discard. It has to rid itself of ‘soft Hindutva’ completely, and it absolutely must stop indulging regressive Muslim clerics as a vote bank.

For a start, the West Bengal, Karnataka, and Delhi assemblies will need every opposition member’s support in the coming days. The most laughable of the cases will be those brought against the unimpeachable Arvind Kejriwal, a bête noire for the traders, whose hanky-panky he excels in exposing.

For better or worse, it is the Congress that still holds the key to 2019. Even in the post-emergency rout, the party kept a vote share of 41 per cent. And after the 2014 shock, its vote has grown, not decreased.

While everyone needs to think about 2019, the left faces a more daunting challenge. It knows that the Modi-Yogi party does not enjoy a majority of Indian votes. That majority, however, includes Mamata Banerjee, who says she wants to join hands with the left against the BJP. Others are Lalu Yadav, Nitish Kumar, Arvind Kejriwal, Mayawati, Akhilesh Yadav, most of the Dravida parties and, above all, the Congress. The left has inflicted self-harm by putting up candidates against all these opponents of the BJP — in Bihar, in Uttar Pradesh, in Delhi. In West Bengal and Kerala, can it see eye to eye with its anti-BJP rivals?
As the keystone of the needed coalition, the left must drastically tweak its politics. It alone has the ability to lift the profile of the Indian ideology, which is still Nehruvian at its core, as the worried man at the India International Centre will be pleased to note.

Monday, 27 March 2017

Brexit deal must meet six tests, says Labour

  • Fair migration system for UK business and communities
  • Retaining strong, collaborative relationship with EU
  • Protecting national security and tackling cross-border crime
  • Delivering for all nations and regions of the UK
  • Protecting workers' rights and employment protections
  • Ensuring same benefits currently enjoyed within single market

Sunday, 26 March 2017

Populism is the result of global economic failure

Larry Elliott in The Guardian

The rise of populism has rattled the global political establishment. Brexit came as a shock, as did the victory of Donald Trump. Much head-scratching has resulted as leaders seek to work out why large chunks of their electorates are so cross.

The answer seems pretty simple. Populism is the result of economic failure. The 10 years since the financial crisis have shown that the system of economic governance that has held sway for the past four decades is broken. Some call this approach neoliberalism. Perhaps a better description would be unpopulism.

Unpopulism meant tilting the balance of power in the workplace in favour of management and treating people like wage slaves. Unpopulism was rigged to ensure that the fruits of growth went to the few not to the many. Unpopulism decreed that those responsible for the global financial crisis got away with it while those who were innocent bore the brunt of austerity.
Anybody seeking to understand why Trump won the US presidential election should take a look at what has been happening to the division of the economic spoils. The share of national income that went to the bottom 90% of the population held steady at around 66% from 1950 to 1980. It then began a steep decline, falling to just over 50% when the financial crisis broke in 2007.

Similarly, it is no longer the case that everybody benefits when the US economy is doing well. During the business cycle upswing between 1961 and 1969, the bottom 90% of Americans took 67% of the income gains. During the Reagan expansion two decades later they took 20%. During the Greenspan housing bubble of 2001 to 2007, they got just two cents in every extra dollar of national income generated while the richest 10% took the rest.

The US economist Thomas Palley says that up until the late 1970s countries operated a virtuous circle growth model in which wages were the engine of demand growth.

“Productivity growth drove wage growth which fueled demand growth. That promoted full employment which provided the incentive to invest, which drove further productivity growth,” he says.

Unpopulism was touted as the antidote to the supposedly failed policies of the post-war era. It promised higher growth rates, higher investment rates, higher productivity rates and a trickle-down of income from rich to poor. It has delivered none of these things.

James Montier and Philip Pilkington of the global investment firm GMO say that the system that arose in the 1970s was characterised by four significant economic policies: the abandonment of full employment and its replacement with inflation targeting; an increase in the globalisation of the flows of people, capital and trade; a focus on shareholder value maximisation rather than reinvestment and growth; and the pursuit of flexible labour markets and the disruption of trade unions and workers’ organisations.

To take just the last of these four pillars, the idea was that trade unions and minimum wages were impediments to an efficient labour market. Collective bargaining and statutory pay floors would result in workers being paid more than the market rate, with the result that unemployment would inevitably rise.

Unpopulism decreed that the real value of the US minimum wage should be eroded. But unemployment is higher than it was when the minimum wage was worth more. Nor is there any correlation between trade union membership and unemployment. If anything, international comparisons suggest that those countries with higher trade union density have lower jobless rates. The countries that have higher minimum wages do not have higher unemployment rates.

“Labour market flexibility may sound appealing, but it is based on a theory that runs completely counter to all the evidence we have,” Montier and Pilkington note. “The alternative theory suggests that labour market flexibility is by no means desirable as it results in an economy with a bias to stagnate that can only maintain high rates of employment and economic growth through debt-fuelled bubbles that inevitably blow up, leading to the economy tipping back into stagnation.”

This quest for ever-greater labour-market flexibility has had some unexpected consequences. The bill in the UK for tax credits spiralled quickly once firms realised that they could pay poverty wages and let the state pick up the bill. Access to a global pool of low-cost labour meant there was less of an incentive to invest in productivity-enhancing equipment.

The abysmally low levels of productivity growth since the crisis have encouraged the belief that this is a recent phenomenon, but as Andy Haldane, the Bank of England’s chief economist, noted last week, the trend started in most advanced countries in the 1970s.

“Certainly, the productivity puzzle is not something which has emerged since the global financial crisis, though it seems to have amplified pre-existing trends,” Haldane said.

Bolshie trade unions certainly can’t be blamed for Britain’s lost productivity decade. The orthodox view in the 1970s was that attempts to make the UK more efficient were being thwarted by shop stewards who modelled themselves on Fred Kite, the character played by Peter Sellers in I’m All Right Jack. Haldane puts the blame elsewhere: on poor management, which has left the UK with a big gap between frontier firms and a long tail of laggards. “Firms which export have systematically higher levels of productivity than domestically-oriented firms, on average by around a third. The same is true, even more dramatically, for foreign-owned firms. Their average productivity is twice that of domestically-oriented firms.”

Populism is seen as irrational and reprehensible. It is neither. It seems entirely rational for the bottom 90% of the US population to question why they are getting only 2% of income gains. It hardly seems strange that workers in Britain should complain at the weakest decade for real wage growth since the Napoleonic wars.

It has also become clear that ultra-low interest rates and quantitative easing are merely sticking-plaster solutions. Populism stems from a sense that the economic system is not working, which it clearly isn’t. In any other walk of life, a failed experiment results in change. Drugs that are supposed to provide miracle cures but are proved not to work are quickly abandoned. Businesses that insist on continuing to produce goods that consumers don’t like go bust. That’s how progress happens.

The good news is that the casting around for new ideas has begun. Trump has advocated protectionism. Theresa May is consulting on an industrial strategy. Montier and Pilkington suggest a commitment to full employment, job guarantees, re-industrialisation and a stronger role for trade unions. The bad news is that time is running short. More and more people are noticing that the emperor has no clothes.

Even if the polls are right this time and Marine Le Pen fails to win the French presidency, a full-scale political revolt is only another deep recession away. And that’s easy enough to envisage.

Caste among Indian Muslims: Fateh ka Fatwa

Thursday, 23 March 2017

The inside story of the Tory election scandal

Ed Howker and Guy Basnett in The Guardian

A few hours after dawn on 8 May 2015, the morning after his unexpected victory in the general election, David Cameron delivered a celebratory speech to the jubilant staff of Conservative campaign headquarters, at 4 Matthew Parker Street, Westminster. “I’m not an old man but I remember casting a vote in 1987 and that was a great victory,” he said. “I remember 2010, achieving that dream of getting Labour out and getting the Tories back in, and that was amazing. But I think this is the sweetest victory of them all.”

The assembled Tory campaign staffers cheered and whistled as Cameron declared: “We are on the brink of something so exciting.” The election result would indeed change British politics, although not in the way that Cameron intended: the obliteration of the Conservatives’ Liberal Democrat coalition partners cleared the way for the referendum that set Britain on a path to leave the EU and ended Cameron’s political career. As a result, Theresa May is now the prime minister, while Cameron is on a speaking tour of US universities and George Osborne is moonlighting as a newspaper editor.

Until recently, Britain thought it knew how the Conservative party had defied expectations to win the election. After the initial shock that predictions of a hung parliament had proved incorrect, a new narrative was soon established. Commentators explained that the Tories had prevailed by successfully emphasising the threat of a Labour coalition with the SNP and deploying the “pumped-up” prime minister for a spurt of decisive last-minute campaigning. Several newspapers reported that the Tories had spent less to win their 12-seat majority in 2015 than they did to win 24 fewer seats in 2010.

In truth, the victorious Conservative campaign was the most complex ever mounted in Britain, run by two of the world’s most successful campaign consultants. Warehouses of telephone pollsters were put to work for a year before the election, their task to track the views of undecided voters in key marginal seats. The party also distributed thousands of detailed surveys to voters in marginals, and merged all this polling data with information from electoral rolls and commercial market research to produce the most comprehensive picture yet of who might be persuaded to vote Conservative.

Armed with an unprecedented level of detail, the Conservatives began distributing leaflets and letters that directly addressed the hopes and fears of their target voters. And in the final weeks of the campaign, shock troops of volunteers were dispatched to the doorsteps of undecided voters with a mission to persuade and cajole on the party’s behalf. In the most high-profile fight, an elite squad of strategists moved from the London HQ to Kent, where the Ukip leader Nigel Farage was making his bid for parliament.

If the sophistication of the 2015 campaign was not widely known, that was by design: the Conservative Home website, a meeting place for party loyalists, called the victory a “stealth win”. But over the last few months, another story has emerged – an account that is told in a paper trail of hotel bills, emails and witness statements that has led to a year-long investigation by the Electoral Commission and the police.

The startling evidence, first unearthed by Channel 4 News and confirmed in a condemnatory report released last week by the Electoral Commission – the independent body that oversees election law and regulates political finance in the UK – suggests that the Conservative party gained an advantage by breaching election spending laws during the 2015 election. This allowed the party to send its most dedicated volunteers into key seats, in which data had identified specific voters whose turnout could swing the contest. Some of this spending was not properly declared, and some of it was entirely off the books. The sums involved are deceptively small, but the impact may have been decisive.

At present, up to 20 sitting Conservative MPs are the subject of criminal investigation by 16 police forces. If any of the candidates are charged and found guilty of an election offence, they could be barred from political office for three years or spend up to a year in prison. The whole case is unprecedented: this is the largest number of MPs ever to be investigated for violations of electoral law. In the past, cases of alleged election fraud have usually focused on a single MP. This time, there are so many cases that police forces across England have taken the unusual step of coordinating their investigations.

The release of last week’s 38-page Electoral Commission report produced a minor political earthquake: as a result of the biggest investigation the commission has ever undertaken, it levied its largest-ever fine against the Conservative party and referred the case of the party’s treasurer, Simon Day, to the Metropolitan police for further criminal investigation. “There was a realistic prospect,” the report said, that the undeclared spending by the party had “enabled its candidates to gain a financial advantage over opponents.”

The party’s response to the report has been dismissive from the very start. During its investigation, the Electoral Commission was forced to file papers with the high court, demanding that the Conservative party disclose information about its election campaign, after the party had failed to fully comply with its requests for information for three months. Since the report was published, Conservative ministers and spokesmen have pointed out that the commission found only “a series of administrative errors” and that other parties have been fined for their activity in the 2015 election too. Conservatives also say that the missing money identified by the commission represents just 0.6% of the total spent by the party during the 2015 election.

It is true that the sums involved in this case are small: the Electoral Commission’s highest-ever fine turns out to be just £70,000, and it has been applied to punish undeclared and misdeclared Conservative spending totalling just £250,000. Most reports on the commission’s findings have echoed this defence, allowing that some criminal charges may indeed be filed, while overlooking the impact of the overspending on the result.

But British elections are designed to be cheap. Laws that date back to the 1880s limit campaign spending precisely so that people of all backgrounds, and not only the wealthy, have a fair chance to compete for votes. And if that egalitarian principle enhances our political culture, it has another less obvious consequence: even small sums of additional, illegal money, if shrewdly spent, can make a huge difference to results.

Thanks to the Electoral Commission report, we now know that some of the Conservative party’s central spending did benefit MPs in the tightest races, but it was not declared. It is even possible that this money helped to secure the victories from which the Conservative majority was derived. Slowly, a chilling prospect emerges: that British politics, our relationship with Europe and the future of our economy were all transformed following a contest that wasn’t a fair fight.

The Conservatives’ election worries were never financial. By the end of 2014, newspapers reported that the party had raised substantially more money than its rivals, assembling a £78m “war chest” that would allow it to “funnel huge amounts of cash into key seats”, according to the Observer. The campaign would be constrained only by two factors: the legal spending limits for each candidate and the number of volunteers the party could recruit to take its message to voters.

In fact, the scandal in which so many MPs now find themselves embroiled concerns precisely those limits. The spending that has been found to be in violation by the Electoral Commission was used to bring Conservative campaigners into the tightest marginal election battles. Separately, multiple police investigations are examining whether individual candidates and their election agents broke the law.

It is difficult to understand the election expenses scandal without understanding the election strategy that had been unveiled three years before the vote. At a closed session on the first day of the 2012 Conservative conference, the party’s campaign director, Stephen Gilbert, laid out a plan that would come to be known as the 40/40 strategy. For the 2015 election, the party would focus single-mindedly on holding 40 marginal seats and winning another 40. Candidates for these seats would be selected early, and full-time campaign managers – heavily subsidised by Conservative campaign headquarters (CCHQ) – would be appointed in every 40/40 seat.

The 40/40 campaign would be centrally controlled and would require two ingredients. The first was detailed information about every potential Conservative voter in each of the marginal seats. The second was a field team capable of making contact with them and persuading them to vote Tory.

To put the plan into action, the party turned to two men who have helped reshape the way elections are fought. The first, the Australian political strategist Lynton Crosby, had overseen the Tories’ 2005 general election campaign and Boris Johnson’s two victories in London mayoral elections.

Crosby’s notoriety made him the subject of considerable press attention – but the second man behind the Conservative campaign may have been even more important. This was the American strategist Jim Messina, who was hired as a strategy adviser in August 2013. Senior Conservative staff had been awestruck by Barack Obama’s comfortable victories in the 2008 and 2012 presidential elections, crediting their relentless focus on data to Messina.

Using vast databases, commercial market research, complex questionnaires and phone banks, Messina had been able to map the fears and desires of swing voters, and design highly personalised messaging that would appeal to them. The Conservatives hired him to perform the same magic in Britain. To do so, Messina used commercial call centres to track the views of between 1,000 and 2,000 voters in all 80 of the seats targeted by the 40/40 strategy.

This data was crucial to the Conservative campaign: it determined which voters the party needed to contact and which messages they would hear. This began with direct mail – personally addressed to voters in each target seat, who were divided into 40 different categories, with a slightly different message for each one.

But the big-data strategy requires more than leaflets: once you have identified the voters who might be persuaded to switch, and fine-tuned what message to give them, you have to send campaigners to actually knock on their doors and urge them to go to the polls on election day. This requires an army of volunteers, spread across dozens of constituencies. It fell to the party’s co-chairman, Grant Shapps, to establish the necessary volunteer outreach programme, which was dubbed Team2015.

Shapps had begun sending out recruitment emails to the party’s mailing list in the summer of 2013, hoping to build a centrally controlled base of activists who could be deployed to marginal constituencies. CCHQ demanded that Team2015 coordinators be established in every swing seat. It was an uphill struggle. Rallying enthusiastic volunteers to David Cameron’s cause turned out to be a harder task than attracting Obama supporters had been.

Conservative membership had been in long-term decline from a peak of 2.8 million in 1952. Under David Cameron’s leadership, the number of party members had further depleted, halving to fewer than 150,000. Those remaining members tended to be older and less active – not the dynamic door-knocking volunteers that Team2015 wanted to recruit. While some local Conservative associations reported new members, most described numbers as “hit and miss”. One seat’s early Team2015 report records: “[Team2015] invited to party with MP – no one turned up!”

In some marginal seats, Team2015 was almost nonexistent. One campaign manager recalls: “Trying to get members to volunteer was practically impossible, so Team2015 volunteers were even worse. People would put their names down, generally via CCHQ, who would then pass the person’s details to the local campaign manager but, in my case, when I tried to contact them I never got any volunteers.”

As the election drew nearer, Shapps made upbeat reports on the growing volunteer force. But, according to Conservative Home, the party’s records indicate that only about 15,000 people ever turned up to campaign, and fewer than that did so regularly.

There was, however, another team at work. Unsupervised by CCHQ to start with, it would later be adopted as a critical element in the party’s “ground war” since – unlike Team2015 – it had managed to deliver platoons of committed Conservative activists to the places that needed them most in a series of crucial byelections the year before. It was called RoadTrip.

RoadTrip2015 was the brainchild of Mark Clarke, who would become infamous after the election as “the Tatler Tory”, pilloried in the press over accusations that he bullied a young Conservative who later killed himself, and made unwanted sexual advances towards female members of the party – allegations he has always denied. But in 2014, as a failed parliamentary candidate desperate to get back into the party’s good graces, he launched a grassroots volunteer scheme that sent party members into marginal seats to distribute leaflets, knock on doors, and work the voters.

RoadTrip2015’s work began with a March 2014 trip to Cannock Chase, a West Midlands Labour marginal where 50 volunteers battled through a hailstorm to the doorsteps of swing voters. In the months that followed there were trips to Harlow, Chester and Cheadle. In Enfield, Team2015 marshalled 130 volunteers and party co-chair Grant Shapps attended too. But what put the scheme on the map, and drew the admiration of Conservative commentators and MPs, was the Newark byelection in early June 2014.

On 31 May, the Saturday before the byelection, Clarke successfully marshalled 500 volunteers to Nottinghamshire to campaign for the Conservative candidate, Robert Jenrick. Clarke posted his invitation across social media and on the Conservative Home website: “Join us, Grant Shapps and the hundreds of people signed up this Saturday to come to Newark. Afterwards, join Eric Pickles for the inaugural annual RoadTrip2015 dinner (a free curry) in nearby Nottingham. We will take care of your travel from cities like London, Manchester, Birmingham, Bristol and York.”

The Newark campaign was the first major stress test for the Conservatives’ parliamentary election team. By polling day, 5 June, they were feeling intense pressure from Ukip, which had triumphed in the European elections two weeks earlier – showing they were more than capable of stealing support away from the Conservatives.

Before Clarke’s RoadTrip arrived in Newark, a small team of senior Conservative staff – including Stephen Gilbert and a “campaign specialist” named Marion Little – had quietly taken position on the outskirts of the town at the Kelham House country manor hotel. In Newark itself, many more junior party employees – some of them campaign managers from other 40/40 seats – worked from temporary offices during the day and, at night, stayed in a Premier Inn.

The well-resourced Tory campaign turned out to be decisive and Robert Jenrick was returned with a 7,403 majority – rather smaller than his predecessor’s, but still substantial. But, on the evening of the count, an exasperated Nigel Farage, interviewed by Channel 4 News political correspondent Michael Crick, raised the first concerns about Conservative election expenses – which, he suggested, might have breached the £100,000 limit for campaign spending in a byelection.

“Given the number of paid professional people from the Conservative party here, it is difficult to believe that their returns are going to come in below the figure,” Farage said, referring to the documents every candidate must file to detail their campaign costs. “I’d love to see what their returns are. Because it seems to me the scale of the campaign they fought here is so vast … There will certainly be some questions.”

The rules in a byelection contest are simple. All costs incurred in promoting the candidate in parliamentary elections – advertising, staff costs, unsolicited leaflets and letters, transport for campaigners, hotels that volunteers do not pay for themselves, and administrative costs such as phone bills and stationery – must be declared. Deliberate overspending can be a criminal offence, and it may also lead to an election being declared void.

Robert Jenrick’s campaign in Newark had declared expenses of £96,191. But the Electoral Commission later found that his return did not include the hotel bills for 54 nights of accommodation for senior Conservative staff, or 125 nights of hotel rooms for junior Conservative staff at the Premier Inn. Those costs totalled more than £10,000; had they been declared, the campaign would have breached the spending limits. Farage had been correct. (When questioned by Channel 4 News in 2016, Jenrick denied all wrongdoing. In response to questions about byelection hotel expenses, the party responded that “all byelection spending has been correctly recorded in accordance with the law”.)

At the time, however, these details remained unknown – and Channel 4 News reporters did not discover the undeclared hotel bills until long after the one-year time limit for the investigation and prosecution of election crimes had passed. As a result, little attention was paid to rising Conservative spending in two more crucial byelections.

In October 2014, another huge team of Conservatives descended on Clacton-on-Sea, where Douglas Carswell had defected from the Conservatives to stand as a Ukip candidate. Again, hotels were booked for visiting campaign staff, and a return of £84,049 was filed – which made no mention of the party’s hotel costs: 290 nights at the Lifehouse Spa & Hotel and 71 nights at the Premier Inn, worth at least £22,000. Had they been declared, the overspending would have been more than £6,000.

In Rochester and Strood, where the defection to Ukip of yet another Tory MP, Mark Reckless, prompted another byelection in November 2014, the Conservatives could have breached the spending limit by a far larger amount – more than £51,096. As detailed in the Electoral Commission report, their candidate did not declare hotel costs of at least £54,304 against declared expenses of £96,793. The Conservatives still lost both contests. (Neither of the Conservative candidates responded to requests for comment. The party replied on their behalf that all spending was filed in accordance with the law.)

In these byelections, RoadTrip2015 – which was now supported by CCHQ and endorsed by Shapps – became an increasingly important influence. When the campaign launched a Facebook page advertising for a “Clacton Volunteer Force”, 1,300 people signed up to take part. In Rochester and Strood, it offered volunteers who turned out on Saturday 8 November “FREE transport there and back, FREE drinks and access to the FREE RoadTrip2015 Disraeli Dinner with a very special guest speaker!” The guest speaker was Theresa May, who was filmed celebrating with volunteers. She said: “What you do matters so much because, although what the politicians do has got a role to play, in terms of election campaigning, it’s the people who go out on the doorsteps, who knock on those doors, who make those telephone calls, who put those leaflets through the door, that make a real difference to the results we have.”

By the time of the 2015 general election, the tactics that the party had used to saturate all three byelection constituencies with activists and workers would all come together: there would be more buses of volunteers, more undeclared hotel bookings, and more senior advisers moved out of London into crucial seats. But this time, it would be discovered.

Today, two pieces of rather antiquated legislation exist to tame the influence of money on our elections. The first law governs spending by constituency candidates in the run-up to a general election during two time periods: the “long campaign” runs from about six months before polling day until parliament is dissolved; what follows is the “short campaign”, a final frenzied push for votes that lasted for 38 days in 2015.

The spending limits in each period are tight, with the exact value depending on the type of constituency (borough or county) and the number of voters. For the “long campaign” in 2015, the totals were typically around £35,000 to £45,000. In the short campaign – the most crucial period – the limits were tighter still: £8,700 plus 6p or 9p per elector, giving a total of around £10,000 to £16,000.
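The short-campaign cap is a simple linear formula. As a rough sketch (the 70,000-elector figure is purely illustrative, and the borough/county split follows the 6p/9p rates cited above; pence are used to keep the arithmetic exact):

```python
def short_campaign_limit_pence(electors: int, county_seat: bool) -> int:
    """Short-campaign spending cap for a 2015 candidate, in pence.

    £8,700 base, plus 9p per registered elector in a county seat
    or 6p per elector in a borough seat (rates as cited above).
    """
    per_elector = 9 if county_seat else 6
    return 870_000 + per_elector * electors

# An illustrative county seat with 70,000 electors:
# 870,000p + 9p * 70,000 = 1,500,000p, i.e. a cap of £15,000 -
# squarely in the £10,000-£16,000 range described above.
```

Caps of this shape are consistent with the constituency limits that appear later in the piece, such as the £15,016 figure for South Thanet.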

The limits are low, theoretically allowing as many people as possible to mount a viable campaign for election. Any costs incurred promoting the candidate in the constituency – from advertising, administration and public meetings, to party-paid transport for campaigners, staff costs and accommodation – must be declared in an official spending return submitted soon after the end of the campaign. Each return includes a declaration certifying that it is “complete and accurate … as required by law”, which must be signed by both the candidate and their election agent – a member of the local party whom they appoint to manage their spending. Failing to declare spending, and spending over the limit, are criminal offences.

The second election spending law applies to political parties, and sets much higher limits for their spending on national campaigning during a specified period – roughly a year – before the election. The precise limit is derived by multiplying the number of constituencies contested by £30,000. For the Conservatives in 2015, this gave the party a national limit of £18.9m to spend promoting David Cameron and his plan for the country through advertisements, billboards and direct mail. As it turned out, the party declared a figure well below the limit – around £15.6m. It is the responsibility of the national party treasurers to ensure that these national returns are correct, and again they commit an offence if the returns are found not to be.
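The national cap is an even simpler multiplication. A minimal sketch – where the 632-seat figure is an assumption of mine (roughly the number of Great Britain constituencies a major party contests), not a number stated in the text:

```python
def national_limit_pounds(constituencies_contested: int) -> int:
    # £30,000 of national campaign spending allowed
    # per constituency contested.
    return 30_000 * constituencies_contested

# Contesting 632 seats would give 632 * £30,000 = £18,960,000 -
# consistent with the roughly £18.9m limit cited above.
```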

Of course, the existence of two different laws setting out two different spending limits – one for local spending and one for national spending – is a source of potential confusion. In the real world of campaigning, there are bound to be expenses that do not fit neatly into one category or the other. For example, leaflets may contain a national message on one page – promoting the party’s leader or policies – and a local message, from the constituency candidate, on another page. When this happens, both the party and the candidates are required to make an “honest assessment”, in the words of the law, of how much of the cost of the leaflet belongs on each return, “splitting” the value accordingly. To aid transparency, election material must, by law, carry an “imprint” that shows whether it was produced for the local candidate or for the national campaign.
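The “honest assessment” apportionment might be sketched like this – the half-and-half split in the example is purely illustrative, since the law prescribes no fixed fraction:

```python
def split_cost(total_pence: int, local_fraction: float) -> tuple[int, int]:
    """Apportion one mixed expense between the two returns.

    `local_fraction` is the campaigner's honest estimate of how much
    of the material promoted the constituency candidate rather than
    the national party. Returns (local_pence, national_pence).
    """
    local = round(total_pence * local_fraction)
    return local, total_pence - local

# A £600 mixed leaflet judged half local, half national:
local, national = split_cost(60_000, 0.5)
# Declare £300 on the candidate's return, £300 on the party's.
```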

But the presence of two separate spending laws also presents an opportunity for abuse. Much of the scandal surrounding the Conservative party’s 2015 election spending relates to evidence that suggests spending declared as “national” – where limits are much higher – was, in reality, used to promote local candidates, who face much tighter spending limits.

In fact, it is the enormous difference between the national limits, in the millions, and the local limits, in the tens of thousands, that makes these allegations so significant. Even small amounts of candidate overspending – easily buried in the multimillion-pound national accounts – could have a significant impact on a local campaign, and even shift the result.

Following Ukip’s triumph in the Clacton and Rochester byelections in late 2014, the Conservative campaign faced a miserable winter. Labour led the polls for a few months, and by April 2015, pollsters and pundits were predicting a hung parliament.

The Conservatives made two moves that helped to turn the tables. The first was a new message – to stoke fear that without a clear Conservative majority, Britain would be run by a coalition between Labour and the Scottish National Party.

The second was a new tactic, based on RoadTrip2015. Mark Clarke’s day-long campaign events in the run-up to the general election had given the Conservatives a taste of what the party desperately needed – enthusiastic volunteers knocking on doors in areas that mattered. Historically, Labour had a better record of bringing activists into marginal battlegrounds, largely thanks to its more active membership drawn from the unions. The Conservative party, with its dwindling and increasingly inactive membership, often found it had no response.

But a new plan grew from the seeds of RoadTrip, one that involved busloads of activists and block-booked hotel rooms. BattleBus2015 would send a fleet of coaches to three regions of the UK – the south-west, the Midlands and the north – for the final 10 days of the election campaign. These mobile units, each of around 40 party activists, would be based in hotels in each region, from which they would be driven by coach into a different marginal to campaign each day. This would allow the party to flood 29 key seats with much-needed support: nine in the south-west, 10 in the Midlands and 10 in the north.

Receipts for the hotels and coaches, obtained later by Channel 4 News, would prove the operation was expensive. The Electoral Commission later calculated that the BattleBus operation cost £102,483, which works out to around £3,500 for each seat it visited. But while the national party could easily absorb the cost before hitting its spending cap, many of the local candidates were already cutting it fine. If they had to declare the extra costs associated with bringing in more campaigners, the majority would breach the limit.

In the event, £38,996 of the BattleBus costs were declared on the Conservative party’s national return, while the other £63,487, which included the hotels used by volunteers, was not declared at all. The Conservative party put this down to “human error”.

None of the 29 candidates visited by BattleBus declared any of its costs. Whether this should be categorised as national or local spending depends on what the activists did: if they promoted local candidates, even part of the time, then at least some of the costs associated with bringing them to the constituency should have been declared locally.

The Conservative party insists that BattleBus was only intended to conduct national campaigning. The Electoral Commission report states that it “has found no evidence to suggest that the party had funded the BattleBus2015 campaign with the intention that it would promote or procure the electoral success of candidates”. But, the report continues, “coaches of activists were transported to marginal constituencies to campaign alongside or in close proximity to local campaigners,” and “it is apparent that candidate campaigning did take place during the BattleBus2015 campaign”. It adds that, in the commission’s view, a proportion of the costs should have been declared in candidate campaign filings, “casting doubt” on whether these candidate spending returns were accurate.

The Conservative party has responded to these allegations by insisting that BattleBus volunteers did not promote local candidates. But on Twitter, in the weeks before the election, the BattleBus activists hailed their own efforts to win over voters for specific candidates. On 2 May, one volunteer wrote: “1,300 voters talked to on the doorstep in Amber Valley today for @VoteNigelMills!”. Another posted: “Nice homes in the beautiful Amber Valley – great reaction on the doorsteps in support of Nigel Mills.”

Photographs posted on social media add to the layers of evidence. One young female activist is pictured on a doorstep holding a leaflet bearing the name of Nigel Mills. In the north, a group of activists in Sherwood were photographed holding calling cards for the candidate Mark Spencer, carrying his name and image, and the words: “I called by today with my local team to hear your views.” Channel 4 News has spoken to a handful of volunteers who say their time on the BattleBus involved local campaigning.

Gregg and Louise Kinsell, a married couple from Market Drayton, Shropshire, joined the Conservative party in the run-up to the election, motivated by a mix of patriotic pride, shared values and a liking for David Cameron. They signed up to join BattleBus2015 for its final stretch in the south-west, visiting four constituencies over four days: Stroud; Plymouth, Sutton and Devonport; St Ives; and North Cornwall. The aim of the south-west tour was to turn the nine yellow seats of the Liberal Democrats into a sea of blue for the Conservatives – and the Tories won all but one.

The BattleBus operation is still being investigated, but the Kinsells firmly believe that, contrary to the claims of Conservative party HQ, they and their fellow volunteers did promote local candidates. “The coach would pull in”, Louise says, “and they’d all be cheering. Honestly, we were like the big hitters coming down to make sure that we win. That’s exactly how it was.”

The couple recall that senior activists gave them scripts about the local candidates to memorise on the bus, so they would be ready to sing the candidates’ virtues on the doorsteps of undecided voters. Specially prepared briefing notes helped them absorb local issues. And they claim they were handed bundles of locally focused leaflets and calling cards to slip through the letterboxes of prospective voters. The voting intentions of the people they called upon were carefully logged. The couple are clear that they were used as a tactic to “sway marginal seats”, and are angry at the ongoing claim of the Conservative party and some MPs that the BattleBus operation only promoted the national message. “If people are saying – and the MPs concerned in these areas are saying that it was part of a greater expense nationally for the Conservatives, that’s an obvious falsehood,” Gregg says.

Nigel Farage, Al Murray and the winning Conservative candidate Craig Mackinlay at the count for the South Thanet seat, May 2015. Photograph: Matt Dunham/AP

But if there was one seat, among the 40/40 constituencies, that the Conservatives were most set upon winning, it was South Thanet in Kent. There, the Conservative party’s principal rival, Nigel Farage, would take on Craig Mackinlay in the most closely watched contest of the 2015 election.

Today the investigation into the Conservative victory in South Thanet is staffed by nine officers from the Kent police serious economic crime unit. The questions they are considering are similar to those raised by the 2014 byelections. Were the hotel costs for visiting Conservative staffers in South Thanet – nearly £20,000 in total – properly declared?

After his election victory, Craig Mackinlay filed expenses of £14,838 for the short campaign – just £178 under the spending limit – but made no mention of the Royal Harbour hotel in Ramsgate where senior party workers had taken rooms. Was that an honest account of his expenses? And if not, who was responsible?

The search for answers has so far taken in boxes of internal Conservative documents, the testimony of campaigners, and a six-hour police interview earlier this month with Mackinlay. But a more basic question about the election remains disputed: who actually ran his South Thanet campaign? The list is longer than it should be.

At the top is the name Nathan Gray, Mackinlay’s election agent. In common with many of the “campaign managers” employed as part of the 40/40 strategy, Gray’s enthusiasm for politics was not matched by his experience. Then 26, he had never done the job before. (Gray denies any wrongdoing.)

In the aftermath of the great victory against Nigel Farage in South Thanet, Gray was largely written out of the story and replaced by Nick Timothy, a long-time special adviser to Theresa May who is now the prime minister’s joint chief of staff. In his book Why the Tories Won, Tim Ross describes how Timothy “was sent to take charge of the party’s flagging campaign to stop Farage in Thanet”. Grant Shapps even said recently that Timothy was “front and centre” in South Thanet. But he was not responsible for filing the expenses return and, when contacted about his involvement, a spokesperson stated that he provided “assistance for the Conservative party’s national team and would have given advice to any candidate who asked for it and indeed did so”. There is no suggestion that Timothy is at fault.

An analysis of the campaign written afterwards for the South Thanet Conservative Association credits someone else entirely: “In February [2015] CCHQ sent a professional team to help us. Their leader, Marion Little, is a very experienced election ‘trouble shooter’, and from the moment she arrived she effectively took control of the whole campaign.”

A Conservative staffer since 1984, Little previously held the title of “battleground director” at the Conservative party. And just as she had been a formidable presence in the byelections of Newark, Clacton, and Rochester and Strood, so she transformed the South Thanet Conservatives’ constituency office into a military command post. Little, too, was not responsible for filing the election spending for South Thanet, but she worked long into the night, battle planning and deploying troops: “Dear Team ‘South Thanet’,” she wrote in an email on 23 March. “Just to confirm that this weeks’ [sic] meeting schedule is as follows …” When Nick Timothy did make suggestions, they were run past Little: “Are we not putting ‘two horse race’ on everything?” he asked her in one email sent on 29 March 2015, before adding: “don’t we need to?”

Little didn’t respond when asked whether her role in South Thanet involved local campaigning.

Buses of activists also descended from London. Volunteers were dubbed the “South Thanet Soldiers”. One Labour campaigner, Peter Wallace, recalled seeing hordes of well-dressed young Conservatives working the constituency week after week. “They were like Terminators,” he said, “straight out of GQ, out of London and on our patch. They blew us away.”

Photographs and videos taken by Conservatives in the final weeks of campaigning show the scale of the resources used to bolster the party. There were visits from Boris Johnson and George Osborne, and groups of campaigners arriving on liveried Conservative coaches ready to work for Craig Mackinlay. On the morning of the election, party co-chairs Grant Shapps and Lord Feldman arrived with Mark Clarke and a coach of last-minute campaigners.

In the end Mackinlay defeated Farage in some style. The problem is that when Timothy and Little stayed down in South Thanet, they lived in some style too. The local spending limit in the election was just £15,016, but the bill for rooms housing the troubleshooters from CCHQ at the Royal Harbour hotel ran to £15,641 alone. Mackinlay denies any wrongdoing.

“They had a few rooms block-booked, yeah,” James Thomas, the owner of the Royal Harbour, told Channel 4 News. “All hotels become headquarters, unofficially sometimes,” he added. “Mr Farage was going to be defeated by them, so they made sure they had the right brains to do that.”

More hotel receipts, uncovered by Channel 4 News, showed more party workers staying at the Margate Premier Inn, some for 12 nights, at a total cost of £3,809. Little’s name was on the bill, but these costs were not declared in the local return or the party’s national expenses. It resembled the spending in the 2014 byelections – the money was off the books. The difference was that, this time, the Conservatives won.

The first report into the Conservative party’s election expenses was broadcast by Michael Crick on Channel 4 News in late January 2016. It was a short item on a slow news day, which simply asked why the cost of rooms at the Royal Harbour hotel in South Thanet had been declared as part of the Conservatives’ national – rather than local – campaign expenses. Why, Crick asked, would a team of top Conservatives be based at a small provincial hotel miles from anywhere if not to work on behalf of the Conservative candidate fighting Nigel Farage for the seat?

When investigative reporters at Channel 4 News began to look at the threads connecting tactics in South Thanet to other high-profile Conservative campaigns, a tangle of receipts and emails revealed the party’s hidden spending elsewhere: undeclared hotels, busloads of activists on specialist missions, and senior CCHQ staff buried deep in provincial England.

For months, the Conservative party repeated that all their campaign spending was “in accordance with the law”. A member of the party’s governing body stepped in front of the cameras on 1 March to announce: “Channel 4 has got it wrong.” But eventually the Electoral Commission, which had been widely criticised as toothless, developed canines and sank them into the case. After pressing the party for three months, it was finally provided with seven boxes of papers in May 2016. The secrets they held would make police investigations inevitable but, even then, the Conservatives dug in.

One of the nation’s leading QCs was dispatched by Craig Mackinlay to Folkestone magistrates’ court to halt a Kent police investigation into election spending offences in South Thanet. He failed, and the detectives’ work continued. By the middle of June, 17 forces were conducting investigations into 27 sitting Conservative MPs. Since then, 12 police forces have passed files to the CPS to review, and up to 20 sitting MPs wait to discover if criminal charges will be brought, while other forces still sift through evidence.

In the meantime, the prime minister re-elected in 2015 has melted away, while the election expenses scandal continues to lap at the door of No 10 Downing Street. Theresa May’s chief of staff Nick Timothy and her political secretary Stephen Parkinson were both part of the team dispatched to South Thanet by CCHQ; both took rooms in the Royal Harbour hotel. Whether the reality of their work is reflected in the spending documents signed by Mackinlay is the essential question that Kent police must answer. The photograph in which May appears, walking with members of the senior campaign team on South Thanet’s seafront three weeks before election day, should also give the prime minister pause to consider her own party’s tactics.

Should the Conservative MPs still under investigation face trial and be convicted, May’s government will be imperilled. Her majority is just 17.

In deciding whether or not to prosecute, the CPS must consider two tests. The first concerns the public interest in pursuing prosecutions, and is met easily: the integrity of our election process is at stake. The second concerns the chance of success at trial. This is harder to meet, because the law requires prosecutors to prove that the candidate or agent knowingly submitted a false return.

A likely defence is clear. In South Thanet, Mackinlay has told police that the senior Conservatives who came into his constituency to work on his campaign were not under his “direction or control”, so he is not accountable for their activity. Other MPs who enjoyed a visit from the BattleBus have said that they were told by party headquarters that it was a national scheme. While few of the MPs under investigation have publicly revealed what they knew of the real effect of BattleBus, some have stated publicly that they received an email from the RoadTrip founder and BattleBus organiser Mark Clarke, instructing them not to declare the costs. (Clarke declined our request for comment.)

After a year of investigation, the Electoral Commission has found categorically that at least some of the spending the party claimed as national was in fact spent on “candidate campaigning”, and should therefore have been declared by candidates on their local returns. It was not – which, the commission said, had potentially given those candidates a “financial advantage over opponents”. According to the law, the responsibility for that failure lies only with the candidate and the agent.

It is too soon to say whether charges will be brought. Lancashire police recently told the BBC that it has dropped its investigation into one MP visited by the BattleBus, David Morris. Press reports have cited police sources who suggest that prosecutors “might decide to make an example” of others.

But if prosecutors decide not to “make an example”, they may set a legal precedent instead. Future candidates will reasonably conclude that they can, with the assistance of their parties, circumvent the electoral laws intended to keep our democracy free and fair – and that parties and candidates alike may do so without facing any penalty.