
Showing posts with label globalisation. Show all posts

Tuesday 27 August 2019

Will Modi's Muslims pay the price for Kashmir?

By Girish Menon

Modi’s Muslims, i.e. most middle-class Indians (this writer included), supported Modi’s decision to de-operationalise Art. 370 in Kashmir. It is now three weeks since the decision, and India’s security forces appear to have kept casualty levels low so far. Many scenarios are possible when the communications shutdown is lifted. In this piece, I will examine the best possible scenario for Modi’s supporters and how they may still be called upon to pay a very high price.

In response to India’s action, Pakistan’s selected PM Imran Khan has promised to be an ambassador for Pakistan Coveted Kashmir (PCK). He has pledged to raise the issue at the UN Security Council in a month’s time, and until then he has asked Pakistanis to protest for half an hour after their midday prayers. He has succeeded in getting the attention of the foreign media, though the lack of body bags has resulted in waning interest.

The Indian government, worried about the global interest, has responded with its own diplomacy, persuading a majority of UN Security Council members not to give Pakistan any crumbs of comfort. So what price will India pay for their support, and how will Modi’s Muslims react when the pain increases?

Firstly, it is possible that India may send troops to Afghanistan to facilitate the smooth withdrawal of US troops in time for Trump’s re-election.

Secondly, President Trump wants India to give US companies better access to its markets. This could mean Huawei is forced out of the 5G selection process. It could mean that India will not insist that Indian consumer data is stored in India. It could mean compromises on many other positions that India has steadfastly adhered to as part of its economic interests.

Thirdly, India may be forced to purchase more expensive defence equipment from the US. India's policies of indigenisation of defence production may be completely dropped. A forerunner to this thinking was palpable when the Rafale offset was given to private contractors without sufficient safeguards.

Economic growth in the Indian economy is already at the much-derided Hindu rate of growth. Investment by firms is down, firms are shutting down and unemployment is rising. If India removes further trade barriers for the already suffering French and US economies, the benefits will flow to workers and businesses there. But what about Modi’s Muslims, who are drooling over the benefits of a $5 trillion economy?

I suppose, when the economic situation gets really bad, the Supreme Court can clear the path to build the Ram Janambhoomi temple. This will win the 2024 elections and pave the way for the $5 trillion Ram Rajya.

===



Monday 26 November 2018

Brexit won't affect only the UK – it has lessons for the global economy

Exiting the EU highlights the risks of economic and political fragmentation, writes Mohamed El Erian in The Guardian 

 
Brexit will have an impact on the global economy, not just the UK. Photograph: Fabian Bimmer/Reuters


The singular issue of Brexit has consumed the United Kingdom for two-and-a-half years. The “if”, “how” and “when” of the country’s withdrawal from the European Union, after decades of membership, has understandably dominated news coverage, and sidelined almost every other policy debate. Lost in the mix, for example, has been any serious discussion of how the UK should boost productivity and competitiveness at a time of global economic and financial fluidity.

At the same time, the rest of the world’s interest in Brexit has understandably waned. The UK’s negotiations with the EU have dragged on through multiple déjà vu moments, and the consensus is that the economic fallout will be felt far more acutely in Britain than in the EU, let alone in countries elsewhere.

Still, the rest of the world is facing profound challenges of its own. Political and economic systems are undergoing far-reaching structural changes, many of them driven by technology, trade, climate change, high inequality and mounting political anger. In addressing these issues, policymakers around the world would do well to heed the lessons of the UK’s Brexit experience. 

When Britons voted by a margin of 51.9% to 48.1% to leave the EU, the decision came as a shock to experts, pundits and Conservative and Labour party leaders alike. They had underappreciated the role of “identity” as a driving force behind the June 2016 referendum. But now, voters’ deeply held ideas about identity, whether real or perceived, can no longer be dismissed. Though today’s disruptive politics are fuelled by economic disappointment and frustration, identity is the tip of the spear. It has exposed and deepened political and social divisions that are as uncomfortable as they are intractable.

Experts also predicted that the UK economy would suffer an immediate and significant fall in output following the 2016 referendum. In the event, they misunderstood the dynamics of what economists call a “sudden stop” – that is, abrupt, catastrophic dysfunction in a key sector of the economy. A perfect example is the 2008 global financial crisis, when financial markets seized up as a result of operational dislocations and a loss of mutual confidence in the payments and settlement system.

Brexit was different. Because you cannot replace something with nothing, there was no immediate break in British-EU trade. In the absence of clarity on what type of Brexit would ultimately materialise, the economic relationship simply continued “as is,” and an immediate disruption was averted.

It turns out that when making macroeconomic and market projections for Brexit so far, “short versus long” has been more important than “soft versus hard” (with “hard” referring to the UK’s full, and most likely disorderly, withdrawal from the European single market and customs union). The question is not whether the UK will face a considerable economic reckoning, but when.

Nonetheless, the UK economy is already experiencing slow-moving structural change. There is evidence of falling foreign investment and this is contributing to the economy’s disappointing level of investment overall. Moreover, this trend is accentuating the challenges associated with weak productivity growth. 

There are also signs that companies with UK-based operations have begun to trigger their Brexit contingency plans after a prolonged period of waiting, planning, and more waiting. In addition to shifting investments out of the UK, firms will also start to relocate jobs. And this process will likely accelerate even if Theresa May manages to get her proposed exit deal through parliament.

The Brexit process thus showcases the risks associated with economic and political fragmentation, and provides a preview of what awaits an increasingly fractured global economy if this continues: namely, less efficient economic interactions, less resilience, more complicated cross-border financial flows, and less agility. In this context, costly self-insurance will come to replace some of the current system’s pooled-insurance mechanisms. And it will be much harder to maintain global norms and standards, let alone pursue international policy harmonisation and coordination.

Tax and regulatory arbitrage are likely to become increasingly common as well. And economic policymaking will become a tool for addressing national security concerns (real or imagined). How this approach will affect existing geopolitical and military arrangements remains to be seen.

Lastly, there will also be a change in how countries seek to structure their economies. In the past, Britain and other countries prided themselves as “small open economies” that could leverage their domestic advantages through shrewd and efficient links with Europe and the rest of the world. But now, being a large and relatively closed economy might start to seem more attractive. And for countries that do not have that option – such as smaller economies in east Asia – tightly knit regional blocs might provide a serviceable alternative.

The messiness of British party politics has made the Brexit process look like a domestic dispute that is sometimes inscrutable to the rest of the world. But Brexit holds important lessons for and about the global economy. Gone are the days when accelerating economic and financial globalisation and correlated growth patterns went almost unquestioned. We are also in an era of considerable technological and political fluidity. The outlooks for growth and liquidity will likely become even more uncertain and divergent than they already are.

Thursday 27 September 2018

Trump has a point about globalisation

Larry Elliott in The Guardian


The president’s belief that the nation state can cure economic ills is not without merit


  
‘The stupendous growth posted by China over the past four decades has been the result of doing the opposite of what the globalisation textbooks recommend.’ Photograph: AFP/Getty Images


Once every three years the International Monetary Fund and the World Bank hold their annual meetings out of town. Instead of schlepping over to Washington, the gathering of finance ministers and central bank governors is hosted by a member state. Ever since the 2000 meeting in Prague was besieged by anti-globalisation rioters, the away fixtures have tended to be held in places that are hard to get to or where the regime tends to take a dim view of protest: Singapore, Turkey, Peru.

This year’s meeting will take place in a couple of weeks on the Indonesian island of Bali, where the IMF and the World Bank can be reasonably confident that the meetings will not be disrupted. At least not from the outside. The real threat no longer comes from balaclava-wearing anarchists throwing Molotov cocktails but from within. Donald Trump is now the one throwing the petrol bombs and for multilateral organisations like the IMF and World Bank, that poses a much bigger threat.

The US president put it this way in his speech to the United Nations on Tuesday: “We reject the ideology of globalism and we embrace the doctrine of patriotism.” For decades, the message from the IMF has been that breaking down the barriers to trade, allowing capital to move unhindered across borders and constraining the ability of governments to regulate multinational corporations was the way to prosperity. Now the most powerful man on the planet is saying something different: that the only way to remedy the economic and social ills caused by globalisation is through the nation state. Trump’s speech was mocked by fellow world leaders, but the truth is that he’s not a lone voice.

The world’s other big economic superpower – China – has never given up on the nation state. Xi Jinping likes to use the language of globalisation to make a contrast with Trump’s protectionism, but the stupendous growth posted by China over the past four decades has been the result of doing the opposite of what the globalisation textbooks recommend. The measures traditionally frowned upon by the IMF – state-run industries, subsidies, capital controls – have been central to Beijing’s managed capitalism. China has certainly not closed itself off from the global economy but has engaged on its own terms. When the communist regime wanted to move people out of the fields and into factories it did so through the mechanism of an undervalued currency, which made Chinese exports highly competitive. When the party decided that it wanted to move into more sophisticated, higher-tech manufacturing, it insisted that foreign companies wishing to invest in China share their intellectual property.

This sort of approach isn’t new. It was the way most western countries operated in the decades after the second world war, when capital controls, managed immigration and a cautious approach to removing trade barriers were seen as necessary if governments were to meet public demands for full employment and rising living standards. The US and the EU now say that China is not playing fair because it has been prospering with an economic strategy that is supposed not to work. There is some irony in this.

The idea that the nation state would wither away was based on three separate arguments. The first was that the barriers to the global free movement of goods, services, people and money were economically inefficient and that removing them would lead to higher levels of growth. This has not been the case. Growth has been weaker and less evenly shared.

The second was that governments couldn’t resist globalisation even if they wanted to. This was broadly the view once adopted by Bill Clinton and Tony Blair, and now kept alive by Emmanuel Macron. The message to displaced workers was that the power of the market was – rather like a hurricane or a blizzard – an irresistible force of nature. This has always been a dubious argument because there is no such thing as a pure free market. Globalisation has been shaped by political decisions, which for the past four decades have favoured the interests of capital over labour.
Finally, it was argued that the trans-national nature of modern capitalism made the nation state obsolete. Put simply, if economics was increasingly global then politics had to go global, too. There is clearly something in this because financial markets impose constraints on individual governments and it would be preferable for there to be a form of global governance pushing for stability and prosperity for all. The problem is that to the extent such an institutional mechanism exists, it has been captured by the globalists. That is as true of the EU as it is of the IMF.

So while the nation state is far from perfect, it is where an alternative to the current failed model will inevitably begin. Increasingly, voters are looking to the one form of government where they do have a say to provide economic security. And if the mainstream parties are not prepared to offer what these voters want – a decently paid job, properly funded public services and controls on immigration – then they will look elsewhere for parties or movements that will. This has proved to be a particular problem for the parties of the centre left – the Democrats in the US, New Labour in Britain, the SPD in Germany – that signed up to the idea that globalisation was an unstoppable force.

Jeremy Corbyn certainly does not accept the idea that the state is obsolete as an economic actor. The plan is to build a different sort of economy from the bottom up – locally and nationally. That’s not going to be easy but beats the current, failed, top-down approach.

Sunday 8 July 2018

The Billionaire Raj: A chronicle of economic India

Meghnad Desai in The FT 

India is now one of the world’s economic hotspots. Stock images of starving children, miserable peasants and cheating shop owners have been augmented with those of high-tech development and booming cities. India is now the world’s fastest-growing economy. It is about to become the third-largest economy — at least in terms of purchasing power dollars if not yet real ones. Foreign investors are rushing in. In The Billionaire Raj, James Crabtree has written a compelling guide to what awaits them. 


To make India more accessible to the western investor, Crabtree draws an analogy between America’s Gilded Age at the end of the 19th century — that plutocratic moment of the Vanderbilts, Goulds, Rockefellers — and the newest of India’s billionaires. Did you know that India now has more billionaires than Russia? 

This sudden enrichment was the result of the long boom of globalisation from 1991-2008. India had initiated reforms to escape from four decades of conservative socialism, initiated by Jawaharlal Nehru, which did not trust private business and put the state in command. The Indian state is inefficient as it is, but disastrous in running business. Its airline Air India has racked up billions in losses; its banks are mired in non-performing loans. 

In 1991, Manmohan Singh, then finance minister, bit the bullet and began to liberalise the economy. Tariffs were cut, import licensing was removed, and the rupee was devalued twice within a week. He had little choice because India had run out of foreign exchange reserves and had to pawn its gold to secure a loan from the International Monetary Fund. 

The reforms took time to work but, from 1998 onwards, the economy secured high single-digit growth rates, triple the so-called “Hindu growth rate” of 3 per cent per year that prevailed during the first 30 years of independence. With a decade-long growth spurt from 1998 to 2008 came the vast fortunes generated in a crony-capitalist relationship between the ruling Congress party and its private sector clients and financiers. Crabtree, a former FT Mumbai correspondent, gives us a detailed treatment of the links between the politicians needing money to finance elections that were both costly and cheap. (The 2014 elections cost $5bn — or $6 per voter.) 

Crabtree gives entertaining portraits of some billionaires. The opening chapters cover Mukesh Ambani and his towering residential extravaganza Antilla, the most expensive house ever built in India, which now dominates the Mumbai skyline; the fugitive Vijay Mallya, a drinks tycoon who was once known as the King of Good Times; the reticent Gautam Adani, an infrastructure entrepreneur who owns ports, mines and refineries. Dhirubhai Ambani, the patriarch of the Reliance group, figured out how to negotiate government regulations and expand his business while keeping the ruling party on his side. Mallya went so far as to be voted into a seat in the Rajya Sabha, the upper house of parliament, after a reported donation of 550m rupees ($10m in those days). Adani prospered in Gujarat with the reported blessings of Narendra Modi while he was chief minister of the state in India’s north-west, where the politician enjoyed both a clean reputation and business-friendly credentials. 


Beyond the personalities lies another part of the puzzle. Corruption has gone deep in the system. Elections cannot be financed with just legally declared donations. The donors want to escape the attention of the tax man, as do the party leaders. It is a symbiotic relationship. Not even Narendra Modi, prime minister since 2014, is about to change it, though he has moved against crony capitalism. Armed with an electoral majority, he has set about breaking the political mould, ending the near 70-year hegemony of Congress. He has sundered the crony ties that the top echelon of government enjoyed with the “promoters” of infrastructure projects during the Congress years. Back then the nationalised banks had to lend money to a favoured few and it was understood that the money would not be repaid. No more. Insolvency procedures have been toughened. Debtors can no longer shield their assets from creditors. It was this that sent Mallya abroad. 

This book was written before these drastic changes. But whether he wins or loses the next elections, Modi has made revival of crony capitalism difficult. 

There are also other concerns. Crabtree worries over Modi’s dual persona as a development enthusiast as well as a Hindu nationalist. The fear is that this may increase intolerance towards minorities — Muslims and Christians — and disrupt peaceful economic progress. 

Crabtree’s vivid portrayal of the corruption of politics is very informative, and thought-provoking. He travels the country to show both how deep corruption reaches into electoral politics — but also how functional it is. When an economy is regulated, riddled with permits needed to do business, a few palms may need to be greased. A payment — or rent, as economists call it — may be required. But the rewards are considerable. The corruption market works. It may be immoral but it is not inefficient. 

It would be better if India became less corrupt. Crabtree thinks so. That would require a lot of courage and an ability to pursue radical reform. The leader who embarks upon it risks unpopularity — as Modi is now finding out. 

These are matters that cannot be settled in a single book. Crabtree has given us the most comprehensive and eminently readable tour of economic India, which, as he shows, cannot be understood without a knowledge of how political India works.

Wednesday 18 April 2018

Visas and global poverty

Rafia Zakaria in The Dawn

IN a recent report, the Centre for Global Development made a surprising and somewhat startling observation. Looking at data from several recent studies, it noted that even the very best international development programmes to reduce global poverty produce income gains roughly 40 times smaller than those poor countries experience when their citizens are granted greater labour mobility. In simple non-economist terms, this means that visas reduce global poverty far faster and more effectively than even the very best international development programmes.

The visa, then, with the promise of mobility that it holds, may have a greater capacity to eliminate global poverty than anything else in the world.

What is true, however, is not always popular, and this is certainly the case with the visa solution. Even so, the discrepancy between the effectiveness of international aid programmes and that of work visas is quite alarming. A study published in Science magazine reveals how intensive and highly targeted programmes directed at poor countries like Pakistan and Ethiopia succeeded in reducing poverty, even though they were far more expensive to implement.

Even so, the mood of the announcement was triumphant; pricey as it may be, their study had found that international aid could work. The fact that work visas and access to labour markets work better was never mentioned.


The international aid system is a moral hierarchy, with the aid grantors at the top.

The omission is not surprising. As another study has noted, the infrastructure of aid depends on hierarchies in which Western experts imported into impoverished environments diagnose how and what poor countries must do to escape persistent poverty. Even while development lingo has evolved to include terms like ‘local involvement’ and ‘community input’, no project is complete without the messenger experts of the West arriving to impart their pearls of wisdom.

Behind all of this, there is a hierarchy at work and it always involves donor countries and their experts being at the top. This is even more visible in public presentations of development work at this or that conference; in one example, noted in the report (but recurrent everywhere), an organiser had to fight to ensure that at least one Arabic speaker be included in a panel on international development in the Middle East and the North African region.

It’s not just panels and experts that are the problem; it is also the impact of these interventions on local populations. Take, for instance, the issue of ‘capacity building’, a term of art deployed when aid is handed out in poor communities but little improvement is seen in their metrics.

At this point, ‘capacity building’ enters to save the day – that is, to introduce skills such as financial management and entrepreneurship that would hypothetically enable better results and prove the development programmes effective after all. Few of these ‘capacity-building’ programmes actually deliver the promised, improved results.

The reason is simple. Contrary to the assumption that aid grants exist solely to eliminate global poverty in the world’s most wanting populations, the international aid system is also a moral hierarchy. The aid grantors are at the top; they have the most and know the best, but in addition to all that they are also morally superior, willing to grant assistance with little expectation in return. They are the world’s altruists, whose purity of purpose lends them the authority that no others possess. They can pretend that they are doing good while expecting nothing at all in return.

When this moral aspect of international aid and aid-giving in general is noted, the international aid system can be recast not as a means of actually helping the poor (because visas and labour mobility would accomplish this with far greater efficacy) but rather as a means by which a moral hierarchy is created and maintained — the world’s wealthy, also the world’s noblest, inhabiting its summit, and the wanting at the bottom.

Seen against this, the purpose of development programmes may not actually be to reduce or eliminate poverty but rather to enable the continued existence of this moral hierarchy. Within this hierarchy, the world’s poor are not simply to be pitied but are also morally wanting, often deemed too lazy or devoid of initiative to figure out how to lift themselves out of their hapless circumstances. They are the ignoble, always awaiting alms from the good and noble.

Permitting some programme of labour mobility would dismantle this structure, whose moral currency permits the West to justify wars, trade restrictions and so much else that enables the maintenance of Western dominance. Research shows that an individual’s own desire to change his or her circumstances, one that aligns with the provision of work visas, is the best predictor of success in escaping poverty. Even while development professionals create metrics for this and that, and measure effectiveness through complex statistical models, these basics, which point to a better route than the system of international aid, are ignored.

Even while virtual platforms of communication enable organisation and discussion across national and continental boundaries and time zones, and even as jet travel puts the world at our disposal and makes movement across borders routine, Western countries continue to rely on the archaic premises that borders are real and that racial and religious differences are threats, and to distribute opportunities on that basis. It is not a lack of capacity or initiative among farmers in sub-Saharan Africa or shepherds in Ethiopia, then, that explains the persistence of global poverty; it is the inability of these people to travel freely to work where the jobs are.

Friday 13 April 2018

How much is an hour worth? The war over the minimum wage

Peter C Baker in The Guardian


No idea in economics provokes more furious argument than the minimum wage. Every time a government debates whether to raise the lowest amount it is legal to pay for an hour of labour, a bitter and emotional battle is sure to follow – rife with charges of ignorance, cruelty and ideological bias. In order to understand this fight, it is necessary to understand that every minimum-wage law is about more than just money. To dictate how much a company must pay its workers is to tinker with the beating heart of the employer-employee relationship, a central component of life under capitalism. This is why the dispute over these laws and their effects – which has raged for decades – is so acrimonious: it is ultimately a clash between competing visions of politics and economics. 

In the media, this debate almost always has two clearly defined sides. Those who support minimum-wage increases argue that when businesses are forced to pay a higher rate to workers on the lowest wages, those workers will earn more and have better lives as a result. Opponents of the minimum wage argue that increasing it will actually hurt low-wage workers: when labour becomes more expensive, they insist, businesses will purchase less of it. If minimum wages go up, some workers will lose their jobs, and others will lose hours in jobs they already have. Thanks to government intervention in the market, according to this argument, the workers struggling most will end up struggling even more.

This debate has flared up with new ferocity over the past year, as both sides have trained their firepower on the city of Seattle – where labour activists have won some of the most dramatic minimum-wage increases in decades, hiking the hourly pay for thousands of workers from $9.47 to $15, with future increases automatically pegged to inflation. Seattle’s $15 is the highest minimum wage in the US, and almost double the federal minimum of $7.25. This fact alone guaranteed that partisans from both sides of the great minimum-wage debate would be watching closely to see what happened.

But what turned the Seattle minimum wage into national news – and the subject of hundreds of articles – wasn’t just the hourly rate. It was a controversial, inconclusive verdict on the impact of the new law – or, really, two verdicts, delivered in two competing academic papers that reached opposite conclusions. One study, by economists at the University of Washington (UW), suggested that the sharp increase in Seattle’s minimum wage had reduced employment opportunities and lowered the average pay of the poorest workers, just as its critics had predicted. The other study, by economists at the University of California, Berkeley, claimed that a policy designed to boost worker income had done exactly that.

The duelling academic papers launched a flotilla of opinion columns, as pundits across the US picked over the economic studies to declare that the data was on their side – or that the data on their side was the better data, untainted by ideology or prejudice. In National Review, the country’s most prominent rightwing magazine, Kevin D Williamson wrote that the UW study had proven yet again “that the laws of supply and demand apply to the labor market”. Of course, he added, “everyone already knew that”.

Over on the left, a headline in the Nation declared: “No, Seattle’s $15 Minimum Wage Is Not Hurting Workers.” Citing the Berkeley study, Michelle Chen wrote: “What happens when wages go up? Workers make more money.” The business magazine Forbes ran two opposing articles: one criticising the UW study (“Why It’s Utter BS”), and another criticising liberals for ignoring the UW study in favour of the Berkeley study (“These People are Shameless”). This kind of thing – furious announcements of vindication from both sides – was everywhere, and soon followed by yet another round of stories summarising the first round of arguments.

When historians of the future consider our 21st-century debates about the minimum wage, one of the first things they will notice is that, despite the bitterness of the disagreement, the background logic is almost identical. Some commentators think the minimum wage should obviously go up. Some think all minimum-wage laws are harmful. Others concede we may need a minimum wage, but disagree about how high it should be or whether it should be the same everywhere – or whether its goals could be better accomplished by other measures, such as tax rebates for low-income workers.

But beneath all this conflict, there is a single, widely shared assumption: that the only important measure of the success of a minimum wage is whether economic studies show that it has increased the total earnings of low-wage workers – without this increase being outweighed by a cost in jobs or hours.

It is no coincidence that this framing tracks closely with the way the minimum wage is typically discussed by academic economists. In the US’s national organs of respectable public discourse – New York Times op-eds, Vox podcasts and Atlantic explainers – the minimum-wage debate is conducted almost entirely by economists or by journalists steeped in the economics literature. At first glance, this seems perfectly natural, just as it may seem completely natural that the debate is framed exclusively in terms of employment and pay. After all, the minimum wage is obviously an economic policy: shouldn’t economists be the people best equipped to discuss its effects?

But to historians of the future, this may well appear as a telling artifact of our age. Just imagine, for a moment, combing through a pile of articles debating slavery, or child labour, in which almost every participant spoke primarily in the specialised language of market exchange and incentives, and buttressed their points by wielding competing spreadsheets, graphs and statistical formulas. This would be, I think we can all agree, a discussion that was limited to the point of irrelevance. Our contemporary minimum-wage debates are similarly blinkered. In their reflexive focus on just a few variables, they risk skipping over the fundamental question: how do we value work? And is the answer determined by us – by politics and politicians – or by the allegedly immutable laws of economics?

In the last four years, some of the most effective activists in America have been the “Fight for $15” campaigners pushing to raise the minimum wage – whose biggest victory so far had come in Seattle. Thanks to their efforts – widely viewed as a hopelessly lost cause when they began – significant minimum-wage increases have been implemented in cities and states across the US. These same activists are laying plans to secure more increases in this November’s midterm elections. The Democratic party, following the lead of Bernie Sanders, has made a $15 minimum part of its official national platform. US businesses and their lobbyists, historically hostile to all minimum-wage increases but well aware of their robust popularity, are gearing up to fight back with PR campaigns and political talking points that paint the minimum wage as harmful to low-wage workers, especially young workers in need of job experience.

In the UK, Jeremy Corbyn has pledged that a Labour government would raise the national minimum wage to £10 “within months” of taking office. (It is currently on schedule to rise slowly to £9 by 2020, which has been criticised by some on the right, citing Seattle as evidence that it will eliminate jobs.) In recent years, EU policymakers have raised the possibility of an EU-wide minimum-wage scheme. All this activity – combined with concern about rising economic inequality and stagnating wages – means the minimum wage is being studied and debated with an intensity not seen for years. But this is a debate unlikely to be resolved by economic studies, because it ultimately hinges on questions that transcend economics.

So what are we really talking about when we talk about the minimum wage?

The first minimum-wage laws of the modern industrial era were passed in New Zealand and Australia in the first decades of the 20th century, with the goal of improving the lives and working conditions of sweatshop workers. As news of these laws spread, reformers in the US sought to copy them. Like today’s minimum-wage proponents, these early reformers insisted that a minimum wage would increase the incomes of the poorest, most precarious workers. But they were also explicit about their desire to protect against capitalism’s worst tendencies. Without government regulation, they argued, there was nothing to stop companies from exploiting poor workers who needed jobs in order to eat – and had no unions to fight on their behalf.

In the field of economics, the concern that a state-administered minimum wage – also known as a wage floor – could backfire by reducing jobs or hours had been around since John Stuart Mill at least. But for many years, it was not necessarily the dominant view. Many mainstream economists supported the introduction of a minimum wage in the US, especially a group known as “institutionalists”, who felt economists should be less interested in abstract models and more focused on how businesses operated in the real world. At the time, many economists, institutionalist and otherwise, thought minimum-wage laws would likely boost worker health and efficiency, reduce turnover costs, and – by putting more cash in workers’ pockets – stimulate spending that would keep the wheels of the economy spinning.

During the Great Depression, these arguments found a prominent champion in President Franklin Roosevelt, who openly declared his desire to reshape the American economy by driving out “parasitic” firms that built worker penury into their business models. “No business which depends for existence on paying less than living wages to its workers has any right to continue in this country,” he said in 1933.

Inevitably, this vision had its dissenters, especially among business owners, for whom minimum-wage increases represented an immediate and unwelcome increase in costs, and more generally, a limit on their agency as profit-seekers. At a 1937 Congressional hearing on the proposed Fair Labor Standards Act (FLSA) – which enacted the first federal minimum wage, the 40-hour work week and the ban on child labour – a representative of one of the US’s most powerful business lobby groups, the National Association of Manufacturers, testified that a minimum wage was the first step toward totalitarianism: “Call it Bolshevism or communism, if you will. Call it socialism, Nazism, fascism or what you will. Each says to the people that they must bow to the will of the state.”

Despite these objections, the FLSA passed in 1938, setting a nationwide minimum wage of $0.25 per hour (the equivalent of $4.45 today). Many industries were exempt at first, including those central to the southern economy, and those that employed high proportions of racial minorities and women. In subsequent decades, more and more of these loopholes were closed.
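The conversion of that 1938 wage into today’s money is a simple price-level ratio. A minimal sketch, assuming approximate annual-average CPI values (the figures below are illustrative assumptions, not data from the article):

```python
# Convert a historical nominal wage into today's dollars using a CPI ratio.
# The CPI values below are rough annual averages (1982-84 = 100) and are
# illustrative assumptions, not figures taken from the article.
CPI_1938 = 14.1
CPI_2018 = 251.1

def adjust_for_inflation(nominal_wage, cpi_then, cpi_now):
    """Scale a nominal wage by the ratio of the two price levels."""
    return nominal_wage * (cpi_now / cpi_then)

real_value = adjust_for_inflation(0.25, CPI_1938, CPI_2018)
print(round(real_value, 2))  # roughly 4.45 -- the article's "$4.45 today"
```

The same ratio explains the Reagan-era erosion described below: a frozen nominal minimum divided by a rising price level is a falling real wage.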

But as the age of Roosevelt and his New Deal gave way to that of Reagan, the field of economics turned decisively against the minimum wage – one part of a much larger political and cultural tilt toward all things “free market”. A central factor in this shift was the increasing prominence of neoclassical price theory, a set of powerful models that illuminated how well-functioning markets respond to the forces of supply and demand, to generate prices that strike, under ideal conditions, the most efficient balance possible between the preferences of consumers and producers, buyers and sellers.

Viewed through the lens of the basic neoclassical model, to set a minimum wage is to interfere with the “natural” marriage of market forces, and therefore to legislatively eliminate jobs that free agents would otherwise have been perfectly willing to take. Low-wage workers could lose income, teenagers could lose opportunities for work experience, consumer prices could rise and the overall output of the economy could be reduced. The temptation to shackle the invisible hand might be powerful, but was to be resisted, for the good of all.

Throughout the 70s, studies of the minimum wage’s effects were few and far between – certainly just a small fraction of today’s vast literature on the subject. Hardly anyone thought it was a topic that required much study. Economists understood that there were indeed rare conditions in which employers could get away with paying workers less than the “natural” market price of their labour, due to insufficiently high competition among employers. Under these conditions (known as monopsonies), raising the minimum wage could actually increase employment, by drawing more people into the workforce. But monopsonies were widely thought to be exceptionally unusual – only found in markets for very specialised labour, such as professional athletes or college professors. Economists knew the minimum wage as one thing only: a job killer.

In 1976, the prominent economist George Stigler, a longtime critic of the minimum wage on neoclassical grounds, boasted that “one evidence of the professional integrity of the economist is the fact that it is not possible to enlist good economists to defend protectionist programs or minimum wage laws”. He was right. According to a 1979 study in the American Economic Review, the main journal of the American Economic Association, 90% of economists identified minimum-wage laws as a source of unemployment.

“The minimum wage has caused more misery and unemployment than anything since the Great Depression,” claimed Reagan during his 1980 presidential campaign. In many ways, Reagan’s governing philosophy (like Margaret Thatcher’s) was a grossly simplified, selectively applied version of neoclassical price theory, slapped with a broad brush on to any aspect of American life that Republicans wanted to set free from regulatory interference or union pressure. Since becoming law in 1938, the US federal minimum wage had been raised by Congress 15 times, generally keeping pace with inflation. Once Reagan was president, he blocked any new increases, letting the nationwide minimum be eroded by inflation. By the time he left office, the federal minimum was $3.35, and stood at its lowest value to date, relative to the median national income.

Today, invectives against Reaganomics (and support for minimum-wage increases) are commonplace in liberal outlets such as the New York Times. But in 1987, the Times ran an editorial titled “The Right Minimum Wage: $0.00”, informing its readers – not inaccurately, at the time – that “there’s a virtual consensus among economists that the minimum wage is an idea whose time has passed”. Minimum-wage increases, the paper’s editorial board argued, “would price working poor people out of the job market”. In service of this conclusion, they cited not a single study.

But the neoclassical consensus was eventually shattered. The first crack in the facade was a series of studies published in the mid-90s by two young economists, David Card and Alan Krueger. Through the 1980s and into the 90s, many US states had responded to the stagnant federal minimum wage by passing laws that boosted their local minimum wages above what national law required. Card and Krueger conducted what they called “natural experiments” to investigate the impact of these state-level increases. In their most well-known study, they investigated hiring and firing decisions at fast-food restaurants located along both sides of the border separating New Jersey, which had just raised its wage floor, and Pennsylvania, which had not. Their controversial conclusion was that New Jersey’s higher wage had not caused any decrease in employment.

In Myth and Measurement, the duo’s book summarising their findings, they assailed the existing body of minimum-wage research, arguing that serious flaws had been overlooked by a field eager to confirm the broad reach of neoclassical price theory, and willing to ignore the many ways in which the labour market might differ from markets in consumer goods. (For one thing, they suggested, it was likely that monopsony conditions were much more common in the low-wage labour market than had been previously assumed – allowing employers, rather than “the market”, to dictate wages). The book was dedicated to Richard Lester, an economist from the institutionalist school who argued in the 1940s that neoclassical models often failed to accurately describe how businesses behave in the real world.

Card and Krueger’s work went off like a bomb in the field of economics. The Clinton administration was happy to cite their findings in support of a push, which was eventually successful, to raise the federal minimum to $5.15. But defenders of the old consensus fought back.

In the Wall Street Journal, the Nobel prize-winning economist James M Buchanan asserted that people willing to give credence to the Myth and Measurement studies were “camp-following whores”. For economists to advance such heretical claims about the minimum wage, Buchanan argued, was the equivalent of a physicist arguing that “water runs uphill” (which, I must note, is not uncommon in man-made plumbing and irrigation systems). High-pitched public denunciations like Buchanan’s were just the tip of the disciplinary iceberg. More than a decade later, Card recalled that he subsequently avoided the subject, in part because many of his fellow economists “became very angry or disappointed. They thought that in publishing our work we were being traitors to the cause of economics as a whole.”

There were some shortcomings in Card and Krueger’s initial work, but their findings inspired droves of economists to start conducting empirical studies of minimum-wage increases. Over time, they developed new statistical techniques to make those studies more precise and robust. After several generations of such studies, there is now considerable agreement among economists that, in available historical examples, increases in the minimum wage have not substantially reduced employment. But this newer consensus is far short of the near-unanimity of the 1980s. There are prominent dissenters who insist that the field’s new tolerance for minimum wages is politically expedient wishful thinking – that the data, when properly analysed, still confirms the old predictions of neoclassical theory. And every new study from one side of the debate still generates a rapid response from the other team, in both the specialist academic literature and the wider media.

What has returned the minimum wage to the foreground of US politics is not the slowly shifting discourse of academic economists, but the efforts of the Fight for $15 and its new brand of labour activism. The traditional template for US labour organising was centred on unions – on workers pooling their power to collectively negotiate better contracts with their employers. But in the past four decades, the weakening of US labour law and the loss of jobs in industries that were once bastions of union strength have made traditional unions harder to form, less powerful and easier to break, especially in low-wage service industries.

These conditions have given birth to what is often called “alt-labour”: a wide variety of groups and campaigns (many of them funded or supported by traditional unions) that look more like activist movements. Campaigns such as the Fight for $15 often voice support for unionisation as an ideal (and their union backers would like the additional members), but in the meantime, alt-labour groups seek to address worker grievances through more public means, including the courts, elections and protest actions, including “wildcat” strikes.

In November 2012, some 200 non-unionised workers at fast-food chain restaurants in New York City walked off the job and marched through the streets to broadcast two central demands: the ability to form a union and a $15 minimum wage. (At the time, New York’s minimum wage was $7.25, the same as the national minimum.) The marches also sought to emphasise the fact that, contrary to persistent stereotype, minimum-wage jobs are not held exclusively, or even primarily, by teenagers working for pocket money or job experience; many of the participants were adults attempting to provide for families. The march, the largest of its kind in fast-food history, was coordinated with help from one of the US’s largest and most politically active unions, the Service Employees International Union. Soon the SEIU was helping fast-food workers stage similar walkouts across the country. The Fight for $15 had begun.

As the campaign gathered steam – earning widespread media coverage, helping secure minimum-wage increases in many cities and states, and putting the issue back into the national political conversation – the media turned to economists for their opinion. Their responses illustrated the extent to which the old neoclassical consensus had been upended, but also the ways in which it remained the same.

The old economic consensus insisted that the only good minimum wage was no minimum wage; the new consensus recognises that this is not the case. Following Card and Krueger, economists increasingly accept that monopsonistic conditions, in which there is little competition among purchasers of labour, are more common than once thought. If competition among low-wage employers is not as high as it “should” be, wages – like those of fast-food workers – can be “unnaturally” suppressed, and a minimum wage is accepted as a tweak necessary to correct this flaw. For economists, then, the “correct” minimum wage is the one whose benefits (in higher hourly pay) can be predicted, on the weight of past studies, to outweigh its costs (in lost jobs and hours).

But this meant that almost no economists, even staunch defenders of minimum-wage increases, would endorse the central demand of the Fight for $15. A hike of that size, they pointed out, was considerably more drastic than any increase in the minimum wage they had previously analysed – and therefore, by the standards of the field, too risky to be endorsed. Arindrajit Dube, a professor at the University of Massachusetts, and perhaps contemporary economics’ most prominent defender of minimum-wage increases, cautioned that $15 might be fine for a prosperous coastal city, but it could end up incurring dangerously high costs in poorer parts of the country. Alan Krueger himself came out against setting a federal target of $15, arguing in a New York Times op-ed that such a high wage floor was “beyond the range studied in past research”, and therefore “could well be counterproductive”.

Of course, these economists may be right. But if all minimum-wage policy had been held to this standard, the US federal minimum wage would not exist to begin with – since the initial jump, from $0 to $0.25, was certainly well “beyond the range studied in past research”.

Almost exactly a year after fast-food workers first walked off the job in New York City, launching the Fight for $15, the country’s first $15 minimum wage became law in SeaTac, Washington, a city of fewer than 30,000 people, known mostly (if at all) as the home of Seattle’s major airport, Seattle-Tacoma International. It was an emblematic victory for “alt-labour”: for years, poorly paid airport ground-crew workers had been trying and failing to form a union, stymied by legal technicalities. With SEIU help, these workers launched a campaign to hold a public referendum on a $15 wage – not expecting to win, but in the hope that the negative publicity would put pressure on the airlines that flew through SeaTac. But in November 2013, the city’s residents – by a slim margin of 77 votes – passed the country’s highest minimum wage.

That same day, a socialist economist named Kshama Sawant won a seat on Seattle’s City Council. Sawant had made a $15 minimum wage a central plank of her campaign. Afraid of being outflanked from the left in one of the most proudly liberal cities in the US, most of her fellow council candidates and both major mayoral candidates endorsed the idea, too. (At the time, the city’s minimum wage was $9.47.) On 2 June 2014, the city council – hoping to avoid a public referendum on the matter – unanimously approved the increase to $15, to be phased in over three years, with future increases pegged to inflation.

The furious Seattle minimum-wage debate of last summer was ostensibly about the $15 rate. But the subject of those competing studies was actually the city’s intermediate increase, at the start of 2016, from the 2015 minimum of $11, to either $13 – for large businesses with more than 500 employees – or $12, for smaller ones. (Businesses that provided their employees with healthcare were allowed to pay less.)

When a group of researchers at the University of Washington (UW) released a paper analysing this incremental hike in June 2017, their conclusion appeared to uphold the predictions of neoclassical theory and throw cold water on the Fight for $15. Yes, low-wage Seattle workers now earned more per hour in 2016 than in 2015. But, the paper argued, having become more expensive to hire, they were being hired less often, and for fewer hours, with the overall reduction in hours outweighing the jump in hourly rates. According to their calculations, the average low-wage worker in Seattle made $1,500 less in 2016 than the year before, even though the city was experiencing an economic boom.

Some of the funding for the University of Washington researchers had come from the Seattle city council. (The group has released several other papers tracking the minimum wage’s effects, and plans to release at least 20 more in the years to come.) But after city officials read a draft of the study, they sought a second opinion from the Center on Wage and Employment Dynamics at the University of California, Berkeley – a research group long associated with support for minimum-wage increases. The Berkeley economists had been preparing their own study of Seattle’s minimum wage, which reached very different conclusions. At the city’s request, they accelerated its release, so it would come out before the more negative UW paper. And after the UW paper was released, Michael Reich, one of the Berkeley study’s lead authors, published a letter directly criticising its methods and dismissing its conclusions.



It was around this point that the op-ed salvos started flying in both directions. The conditions for widespread, contentious coverage could hardly have been more perfect: supporters of the Fight for $15 and its detractors each had one study to trumpet and one to dismiss.

Conservatives leaped to portray liberals as delusional utopians who would keep commissioning scientific findings until they got one they liked. Some proponents of the Fight for $15, meanwhile, scoured the internet for any sign that Jacob Vigdor, who led the UW study, had a previous bias against the minimum wage.

Critics of the UW study pointed out that it had only used payroll data from businesses with a single location – thus excluding larger businesses and chains such as Domino’s and Starbucks, which were most likely to cope with the short-term local shock in labour costs (and, plausibly, to absorb some of the work that may have been lost at smaller businesses). The Berkeley study, on the other hand, relied solely on data from the restaurant industry, and critics contended this did not fully represent the city’s whole low-wage economy.

But on one point, almost everyone agreed. Both studies were measuring the one thing that really mattered: whether the higher minimum wage led to fewer working hours for low-wage workers, and if so, whether the loss in hours had counteracted the increase in pay.

This approach revealed a fundamental continuity between the post-Card and Krueger consensus and the neoclassical orthodoxy it had replaced. When Roosevelt pushed for America’s first minimum wage, he was confident that capitalists would deal with the temporary price shock by doing what capitalists do best: relentlessly seeking out new ways to save costs elsewhere. He rejected the idea that a functioning economy simply must contain certain types of jobs, or that particular industries were intrinsically required to be poorly compensated or exploitative.

Economies and jobs are, to some extent, what we decide to make them. In developed economies like the US and the UK, it is common to lament the disappearance of “good jobs” in manufacturing and their replacement by “bad” low-wage work in service industries. But much of what was “good” about those manufacturing jobs was made that way over time by concessions won and regulations demanded by labour activists. Today, there is no natural reason that the exploding class of service jobs must be as “bad” as they often are.

The Fight for $15 has not notched its victories by convincing libertarian economists that they are wrong; it has won because more and more Americans work bad jobs – poorly paid jobs, unrewarding jobs, insecure jobs – and they are willing to try voting some of that badness out of existence.

This willingness is not the product of hours spent reading the post-Card and Krueger economic literature. It has much more to do with an intuitive understanding that – in an economy defined by historically high levels of worker productivity on the one hand, and skyrocketing but unevenly distributed profit on the other – some significantly better arrangement must be possible, and that new rules might help nudge us in the right direction, steering employers’ profit-seeking energies towards other methods of cutting costs besides miserably low pay. But we should not expect that there will be a study that proves ahead of time how this will work – just as Roosevelt could not prove his conjecture that the US economy did not have an existential dependence on impoverished sweatshop labour.

Last November, I spent several days in Seattle, mostly talking with labour activists and low-wage workers, including fast-food employees, restaurant waiters and seasonal employees at CenturyLink Field, the city’s American football (and soccer) stadium. In all of these conversations, people talked about the higher minimum wage with palpable pride and enthusiasm. Crystal Thompson, a 36-year-old Domino’s supervisor (she was recently promoted from phone operator), told me she still loved looking at pictures from Seattle’s Fight for $15 marches: proof that even the poorest workers could shut down traffic across a major city and make their demands heard. “I wasn’t even a voter before,” she told me. In fact, more than one person said that since the higher wage had passed, they were on the lookout for the next fight to join.

The more people I talked to, the more difficult it was to keep seeing the minimum-wage debate through the narrow lens of the economics literature – where it is analysed as a discrete policy option, a dial to be turned up or down, with the correct level to be determined by experts. Again and again, my conversations with workers naturally drifted from the minimum wage to other battles about work and pay in Seattle. Since passing the $15 minimum wage, the city had instituted new laws mandating paid sick and family leave, set legal limits on unpredictable shift scheduling, and funded the creation of an office of labour investigators to track down violators of these new rules. (One dark footnote to any conversation about the minimum wage is the fact that, without effective enforcement, many employers regularly opt not to pay it. Another dark footnote is that minimum wage law does not apply to the rapidly growing number of workers classified as “independent contractors”, many of whom toil in the gig economy.)

It was obvious in Seattle that all these victories were intertwined – that victory in one battle had provided energy and momentum for the next – and that all of these advances for labour took the form of limits, imposed by politics, on the latitude allowed to employers in the name of profit-seeking.

Toward the end of my visit, I went to see Jacob Vigdor, the economist who was the lead author of the UW study arguing that Seattle’s minimum wage was actually costing low-wage workers money. He told me he hadn’t ever expected to find himself at the centre of a national storm about wage policy. “I managed to spend 18 years of my career successfully staying away from the minimum wage,” he said. “And then for a while there it kind of took over my life.”

He wanted to defend the study from its critics on the economic left – but he also wanted to stress that his group’s findings were tentative, and insufficiently detailed to make a final ruling about the impact of the minimum wage in Seattle or anywhere else. “This is not enough information to really make a normative call about this minimum-wage policy,” he said.

The UW paper itself is equally explicit on this front, something its many public proponents have been all too willing to forget. But it wasn’t just pundits who took liberties with interpreting the results: in August 2017, the Republican governor of Illinois explicitly cited the paper when vetoing a $15 minimum-wage bill. That same month, the Republican governor of Missouri also cited the UW study, while signing a law to block cities within the state from raising their own minimum wages. Thanks in large part to efforts of business lobbyists, 27 states have passed “pre-emption” laws that stop cities and counties from raising their wage floors. (Vigdor has since acknowledged, on Twitter, that it was disingenuous for the governors to cite his study to justify their “politically motivated” decisions.)

Much like my conversations with low-wage workers across the city, talking to Vigdor ultimately left me feeling that, when examined closely, the minimum-wage discourse playing out in the field of economics – and, by extension, across the media landscape – had startlingly little direct relevance to anything at all other than itself. I mentioned to Vigdor that, walking around Seattle, I’d seen a surprising number of restaurants advertising an immediate, urgent need for basic help: dishwashers, busboys, kitchen staff. This had motivated me to go digging in state employment statistics, where I learned that in 2016 and 2017, restaurants across Seattle recorded a consistent need for several thousand more employees than they could find. How did this square with the idea that the higher minimum wage had led to low-wage workers losing work?

“That’s a story about labour supply,” Vigdor said. “Our labour supply is drying up.” Amazon and other tech companies, he said, were drawing in lots of high-skilled, high-wage workers. These transplants were rapidly driving up rents, making the city unlivable for workers at the bottom of the economic food chain, a dynamic exacerbated by the city’s relatively small stock of publicly subsidised low-income housing. 

These downward pressures on the labour supply, Vigdor pointed out, were essentially independent of the minimum wage. “The minimum wage [increase] is maybe just accelerating something that was bound to happen anyway,” he said.

This was not the sort of thing I had expected to hear from the author of the study that launched a hundred vitriolic assaults on the $15 minimum wage. “A million online op-ed writers’ heads just exploded,” I said.

Vigdor laughed ruefully. “Well, we’re going to be studying this for a long time.”

A few days earlier, I met with Kshama Sawant, the socialist economist who had been so instrumental in passing the $15 wage. She was eager to make sure I had read the Berkeley study, and that I had seen all the criticisms of the UW study. But her most impassioned argument wasn’t about the studies – and it was one that Roosevelt would have found very familiar.

“Look, if it were true that the economic system we have today can’t even bring our most poverty-stricken workers to a semi-decent standard of living – and $15 is not even a living wage, by the way – then why would we defend it?” She paused. “That would be straightforward evidence that we need a better system.”

Thursday 5 April 2018

Spasms of Resurgent Nationalism are a Sign of its Irreversible Decline?

Rana Dasgupta in The Guardian


What is happening to national politics? Every day in the US, events further exceed the imaginations of absurdist novelists and comedians; politics in the UK still shows few signs of recovery after the “national nervous breakdown” of Brexit. France “narrowly escaped a heart attack” in last year’s elections, but the country’s leading daily feels this has done little to alter the “accelerated decomposition” of the political system. In neighbouring Spain, El País goes so far as to say that “the rule of law, the democratic system and even the market economy are in doubt”; in Italy, “the collapse of the establishment” in the March elections has even brought talk of a “barbarian arrival”, as if Rome were falling once again. In Germany, meanwhile, neo-fascists are preparing to take up their role as official opposition, introducing anxious volatility into the bastion of European stability.

But the convulsions in national politics are not confined to the west. Exhaustion, hopelessness, the dwindling effectiveness of old ways: these are the themes of politics all across the world. This is why energetic authoritarian “solutions” are currently so popular: distraction by war (Russia, Turkey); ethno-religious “purification” (India, Hungary, Myanmar); the magnification of presidential powers and the corresponding abandonment of civil rights and the rule of law (China, Rwanda, Venezuela, Thailand, the Philippines and many more).

What is the relationship between these various upheavals? We tend to regard them as entirely separate – for, in political life, national solipsism is the rule. In each country, the tendency is to blame “our” history, “our” populists, “our” media, “our” institutions, “our” lousy politicians. And this is understandable, since the organs of modern political consciousness – public education and mass media – emerged in the 19th century from a globe-conquering ideology of unique national destinies. When we discuss “politics”, we refer to what goes on inside sovereign states; everything else is “foreign affairs” or “international relations” – even in this era of global financial and technological integration. We may buy the same products in every country of the world, we may all use Google and Facebook, but political life, curiously, is made of separate stuff and keeps the antique faith of borders.

Yes, there is awareness that similar varieties of populism are erupting in many countries. Several have noted the parallels in style and substance between leaders such as Donald Trump, Vladimir Putin, Narendra Modi, Viktor Orbán and Recep Tayyip Erdoğan. There is a sense that something is in the air – some coincidence of feeling between places. But this does not get close enough. For there is no coincidence. All countries are today embedded in the same system, which subjects them all to the same pressures: and it is these that are squeezing and warping national political life everywhere. And their effect is quite the opposite – despite the desperate flag-waving – of the oft-remarked “resurgence of the nation state”.

The most momentous development of our era, precisely, is the waning of the nation state: its inability to withstand countervailing 21st-century forces, and its calamitous loss of influence over human circumstance. National political authority is in decline, and, since we do not know any other sort, it feels like the end of the world. This is why a strange brand of apocalyptic nationalism is so widely in vogue. But the current appeal of machismo as political style, the wall-building and xenophobia, the mythology and race theory, the fantastical promises of national restoration – these are not cures, but symptoms of what is slowly revealing itself to all: nation states everywhere are in an advanced state of political and moral decay from which they cannot individually extricate themselves.

Why is this happening? In brief, 20th-century political structures are drowning in a 21st-century ocean of deregulated finance, autonomous technology, religious militancy and great-power rivalry. Meanwhile, the suppressed consequences of 20th-century recklessness in the once-colonised world are erupting, cracking nations into fragments and forcing populations into post-national solidarities: roving tribal militias, ethnic and religious sub-states and super-states. Finally, the old superpowers’ demolition of old ideas of international society – ideas of the “society of nations” that were essential to the way the new world order was envisioned after 1918 – has turned the nation-state system into a lawless gangland; and this is now producing a nihilistic backlash from the ones who have been most terrorised and despoiled.

The result? For increasing numbers of people, our nations and the system of which they are a part now appear unable to offer a plausible, viable future. This is particularly the case as they watch financial elites – and their wealth – increasingly escaping national allegiances altogether. Today’s failure of national political authority, after all, derives in large part from the loss of control over money flows. At the most obvious level, money is being transferred out of national space altogether, into a booming “offshore” zone. These fleeing trillions undermine national communities in real and symbolic ways. They are a cause of national decay, but they are also a result: for nation states have lost their moral aura, which is one of the reasons tax evasion has become an accepted fundament of 21st-century commerce.

More dramatically, great numbers of people are losing all semblance of a national home, and finding themselves pitched into a particular kind of contemporary hell. Seven years after the fall of Gaddafi’s dictatorship, Libya is controlled by two rival governments, each with its own parliament, and by several militia groups fighting to control oil wealth. But Libya is only one of many countries that appear whole only on maps. Since 1989, barely 5% of the world’s wars have taken place between states: national breakdown, not foreign invasion, has caused the vast majority of the 9 million war deaths in that time. And, as we know from the Democratic Republic of the Congo and Syria, the ensuing vacuum can suck in firepower from all over the world, destroying conditions for life and spewing shell-shocked refugees in every direction. Nothing advertises the crisis of our nation-state system so well, in fact, as its 65 million refugees – a “new normal” far greater than the “old emergency” (in 1945) of 40 million. The unwillingness even to acknowledge this crisis, meanwhile, is appropriately captured by the contempt for refugees that now drives so much of politics in the rich world.

The crisis was not wholly inevitable. Since 1945, we have actively reduced our world political system to a dangerous mockery of what was designed by US president Woodrow Wilson and many others after the cataclysm of the first world war, and now we are facing the consequences. But we should not leap too quickly into renovation. This system has done far less to deliver human security and dignity than we imagine – in some ways, it has been a colossal failure – and there are good reasons why it is ageing so much more quickly than the empires it replaced.

Even if we wanted to restore what we once had, that moment is gone. The reason the nation state was able to deliver what achievements it did – and in some places they were spectacular – was that there was, for much of the 20th century, an authentic “fit” between politics, economy and information, all of which were organised at a national scale. National governments possessed actual powers to manage modern economic and ideological energies, and to turn them towards human – sometimes almost utopian – ends. But that era is over. After so many decades of globalisation, economics and information have successfully grown beyond the authority of national governments. Today, the distribution of planetary wealth and resources is largely uncontested by any political mechanism.

But to acknowledge this is to acknowledge the end of politics itself. And if we continue to think the administrative system we inherited from our ancestors allows for no innovation, we condemn ourselves to a long period of dwindling political and moral hope. Half a century has been spent building the global system on which we all now depend, and it is here to stay. Without political innovation, global capital and technology will rule us without any kind of democratic consultation, as naturally and indubitably as the rising oceans.


If we wish to rediscover a sense of political purpose in our era of global finance, big data, mass migration and ecological upheaval, we have to imagine political forms capable of operating at that same scale. The current political system must be supplemented with global financial regulations, certainly, and probably transnational political mechanisms, too. That is how we will complete this globalisation of ours, which today stands dangerously unfinished. Its economic and technological systems are dazzling indeed, but in order for it to serve the human community, it must be subordinated to an equally spectacular political infrastructure, which we have not even begun to conceive.

It will be objected, inevitably, that any alternative to the nation-state system is a utopian impossibility. But even the technological accomplishments of the last few decades seemed implausible before they arrived, and there are good reasons to be suspicious of those incumbent authorities who tell us that human beings are incapable of similar grandeur in the political realm. In fact, there have been many moments in history when politics was suddenly expanded to a new, previously inconceivable scale – including the creation of the nation state itself. And – as is becoming clearer every day – the real delusion is the belief that things can carry on as they are.

The first step will be ceasing to pretend that there is no alternative. So let us begin by considering the scale of the current crisis.

Let us start with the west. Europe, of course, invented the nation state: the principle of territorial sovereignty was agreed at the Treaty of Westphalia in 1648. The treaty made large-scale conquest difficult within the continent; instead, European nations expanded into the rest of the world. The dividends of colonial plunder were converted, back home, into strong states with powerful bureaucracies and democratic polities – the template for modern European life.

By the end of 19th century, European nations had acquired uniform attributes still familiar today – in particular, a set of fiercely enforced state monopolies (defence, taxation and law, among others), which gave governments substantial mastery of the national destiny. In return, a moral promise was made to all: the development, spiritual and material, of citizen and nation alike. Spectacular state-run projects in the fields of education, healthcare, welfare and culture arose to substantiate this promise.

The withdrawal of this moral promise over the past four decades has been a shattering metaphysical event in the west, and one that has left populations rummaging around for new things to believe in. For the promise was a major event in the evolution of the western psyche. It was part of a profound theological reorganisation: the French Revolution dethroned not only the monarch, but also God, whose superlative attributes – omniscience and omnipotence – were now absorbed into the institutions of the state itself. The state’s power to develop, liberate and redeem mankind became the foundational secular faith.

During the period of decolonisation that followed the second world war, the European nation-state structure was exported everywhere. But westerners still felt its moral promise with an intensity peculiar to themselves – more so than ever, in fact, after the creation of the welfare state and decades of unprecedented postwar growth. Nostalgia for that golden age of the nation state continues to distort western political debate to this day, but it was built on an improbable coincidence of conditions that will never recur. Very significant was the structure of the postwar state itself, which possessed a historically unique level of control over the domestic economy. Capital could not flow unchecked across borders and foreign currency speculation was negligible compared to today. Governments, in other words, had substantial control over money flows, and if they spoke of changing things, it was because they actually could. The fact that capital was captive meant governments could impose historic rates of taxation, which, in an era of record economic growth, allowed them to channel unprecedented energies into national development. For a few decades, state power was monumental – almost divine, indeed – and it created the most secure and equal capitalist societies ever known.

The destruction of state authority over capital has of course been the explicit objective of the financial revolution that defines our present era. As a result, states have been forced to shed social commitments in order to reinvent themselves as custodians of the market. This has drastically diminished national political authority in both real and symbolic ways. Barack Obama in 2013 called inequality “the defining challenge of our time”, but US inequality has risen continually since 1980, without regard for his qualms or those of any other president.

The picture is the same all over the west: the wealth of the richest continues to skyrocket, while post-crisis austerity cripples the social-democratic welfare state. We can all see the growing fury at governments that refuse to fulfil their old moral promise – but it is most probable that they no longer can. Western governments possess nothing like their previous command over national economic life, and if they continue to promise fundamental change, it is now at the level of PR and wish fulfilment.

There is every reason to believe that the next stage of the techno-financial revolution will be even more disastrous for national political authority. This will arise as the natural continuation of existing technological processes, which promise new, algorithmic kinds of governance to further undermine the political variety. Big data companies (Google, Facebook etc) have already assumed many functions previously associated with the state, from cartography to surveillance. Now they are the primary gatekeepers of social reality: membership of these systems is a new, corporate, de-territorialised form of citizenship, antagonistic at every level to the national kind. And, as the growth of digital currencies shows, new technologies will emerge to replace the other fundamental functions of the nation state. The libertarian dream – whereby antique bureaucracies succumb to pristine hi-tech corporate systems, which then take over the management of all life and resources – is a more likely vision for the future than any fantasy of a return to social democracy.


Governments controlled by outside forces and possessing only partial influence over national affairs: this has always been so in the world’s poorest countries. But in the west, it feels like a terrifying return to primitive vulnerability. The assault on political authority is not a merely “economic” or “technological” event. It is an epochal upheaval, which leaves western populations shattered and bereft. There are outbreaks of irrational rage, especially against immigrants, the appointed scapegoats for much deeper forms of national contamination. The idea of the western nation as a universal home collapses, and transnational tribal identities grow up as a refuge: white supremacists and radical Islamists alike take up arms against contamination and corruption.

The stakes could not be higher. So it is easy to see why western governments are so desperate to prove what everyone doubts: that they are still in control. It is not merely Donald Trump’s personality that causes him to act like a sociopathic CEO. The era of globalisation has seen consistent attempts by US presidents to enhance the authority of the executive, but they are never enough. Trump’s office can never have the level of mastery over American life that Kennedy’s did, so he is obliged to fake it. He cannot make America great again, but he does have Twitter, through which he can establish a lone-gun personality cult – blaming women, leftists and brown people for the state’s impotence. He cannot heal America’s social divisions, but he still controls the security apparatus, which can be deployed to help him look “tough” – declaring war on crime, deporting foreigners, hardening borders. He cannot put more money into the hands of the poor who voted for him, but he can hand out mythological currency instead; even his poorest voters, after all, possess one significant asset – US citizenship – whose value he can “talk up”, as he previously talked up casinos and hotels. Like Putin or Orbán, Trump imbues citizenship with new martial power, and makes a big show of withholding it from people who want it: what is scarcer, obviously, is more precious. Citizens who have nothing are persuaded that they have a lot.

These strategies are ugly, but they cannot simply be blamed on a few bad actors. The predicament is this: political authority is running on empty, and leaders are unable to deliver meaningful material change. Instead, they must arouse and deploy powerful feelings: hatred of foreigners and internal enemies, for instance, or the euphoria of meaningless military exploits (Putin’s annexation of Crimea raised the hugely popular prospect of general Tsarist revival).

But let us not imagine that these strategies will quickly break down under their own deceptions as moderation magically comes back into fashion. As Putin’s Russia has shown, chauvinism is more effective than we like to believe. Partly because citizens are desperate for the cover-up to succeed: deep down, they know to be scared of what will happen if the power of the state is revealed to be a hoax.

In the world’s poorest countries, the picture is very different. Almost all those nations emerged in the 20th century from the Eurasian empires. It has become de rigueur to despise empires, but they have been the “normal” mode of governance for much of history. The Ottoman empire, which lasted from 1300 until 1922, delivered levels of tranquillity and cultural achievement that seem incredible from the perspective of today’s fractured Middle East. The modern nation of Syria looks unlikely to last more than a century without breaking apart, and it hardly provides security or stability for its citizens.

Empires were not democratic, but were built to be inclusive of all those who came under their rule. It is not the same with nations, which are founded on the fundamental distinction between who is in and who is out – and therefore harbour a tendency toward ethnic purification. This makes them much more unstable than empires, for that tendency can always be stoked by nativist demagogues.

Nevertheless, in the previous century it was decided with amazing alacrity that empires belonged to the past, and the future to nation states. And yet this revolutionary transformation has done almost nothing to close the economic gap between the colonised and the colonising. In the meantime, it has subjected many postcolonial populations to a bitter cocktail of authoritarianism, ethnic cleansing, war, corruption and ecological devastation.

If there are so few formerly colonised countries that are now peaceful, affluent and democratic, it is not, as the west often pretends, because “bad leaders” somehow ruined otherwise perfectly functional nations. In the breakneck pace of decolonisation, nations were thrown together in months; often their alarmed populations fell immediately into violent conflict to control the new state apparatus, and the power and wealth that came with it. Many infant states were held together only by strongmen who entrusted the system to their own tribes or clans, maintained power by stoking sectarian rivalries and turned ethnic or religious differences into super-charged axes of political terror.

The list is not a short one. Consider men such as Ne Win (Burma), Hissène Habré (Chad), Hosni Mubarak (Egypt), Mengistu Haile Mariam (Ethiopia), Ahmed Sékou Touré (Guinea), Muhammad Suharto (Indonesia), the Shah of Iran, Saddam Hussein (Iraq), Muammar Gaddafi (Libya), Moussa Traoré (Mali), General Zia-ul-Haq (Pakistan), Ferdinand Marcos (Philippines), the Kings of Saudi Arabia, Siaka Stevens (Sierra Leone), Mohamed Siad Barre (Somalia), Jaafar Nimeiri (Sudan), Hafez al-Assad (Syria), Idi Amin (Uganda), Mobutu Sese Seko (Zaire) or Robert Mugabe (Zimbabwe).


Such countries were generally condemned to remain what one influential commentator has called “quasi-states”. Formally equivalent to the older nations with which they now shared the stage, they were in reality very different entities, and they could not be expected to deliver comparable benefits to their citizens.

Those dictators could never have held such incoherent states together without tremendous reinforcement from outside, which was what sealed the lid on the pressure cooker. The post-imperial ethos was hospitable to dictators, of course: with the UN’s moral rejection of foreign rule came a universal imperative to respect national sovereignty, no matter what horrors went on behind its closed doors. But the cold war vastly expanded the resources available to brutal regimes for defending themselves against revolution and secession. The two superpowers funded the escalation of post-colonial conflicts to stupefying levels of fatality: at least 15 million died in the proxy wars of that period, in theatres as dispersed as Afghanistan, Korea, El Salvador, Angola and Sudan. And what the superpowers wanted out of all this destruction was a network of firmly installed clients able to defeat all internal rivals.

There was nothing stable about this cold war “stability”, but its devastation was contained within the borders of its proxy states. The breakup of the superpower system, however, has led to the implosion of state authority across large groups of economically and politically impoverished countries – and the resulting eruptions are not contained at all. Destroyed political cultures have given rise to startling “post-national” forces such as Islamic State, which are cutting through national borders and transmitting chaos, potentially, into every corner of the world.

Over the past 20 years, the slow, post-cold-war rot in Africa and the Middle East has been exuberantly exploited by these kinds of forces – whose position, since there are more countries set to go the way of Yemen, South Sudan, Syria and Somalia, is flush with opportunity. Their adherents have lost the enchantment for the old slogans of nation-building. Their political technology is charismatic religion, and the future they seek is inspired by the ancient golden empires that existed before the invention of nations. Militant religious groups in Africa and the Middle East are less engaged in the old project of seizing the state apparatus; instead, they cut holes and tunnels in state authority, and so assemble transnational networks of tax collection, trade routes and military supply lines.

Such a network currently extends from Mauritania in the west to Yemen in the east, and from Kenya and Somalia in the south to Algeria and Syria in the north. This eats away the old political architecture from the inside, making several nation states (such as Mali and the Central African Republic) essentially non-functional, which in turn creates further opportunities for consolidation and expansion. Several ethnic groups, meanwhile – such as the Kurds and the Tuareg – which were left without a homeland after decolonisation, and stranded as persecuted minorities ever since, have also exploited the rifts in state authority to assemble the beginnings of transnational territories. It is in the world’s most dangerous regions that today’s new political possibilities are being imagined.

The west’s commitment to nation states has been self-servingly partial. For many decades, it was content to see large areas of the world suffer under terrifying parodies of well-established western states; it cannot complain that those areas now display little loyalty to the nation-state idea. Especially since they have also borne the most traumatic consequences of climate change, a phenomenon for which they were least responsible and least equipped to withstand. The strategic calculation of new militant groups in that region is in many ways quite accurate: the transition from empire to independent nation states has been a massive and unremitting failure, and, after three generations, there needs to be a way out.

But there is no possibility that al-Shabaab, the Janjaweed, Seleka, Boko Haram, Ansar Dine, Isis or al-Qaida will provide that way out. The situation requires new ideas of political organisation and global economic redistribution. There is no superpower great enough, any more, to contain the effects of exploding “quasi-states”. Barbed wire and harder borders will certainly not suffice to keep such human disasters at bay.

Let us turn to the nature of the nation-state system itself. The international order as we know it is not so old. The nation state became the universal template for human political organisation only after the first world war, when a new principle – “national self-determination”, as US president Woodrow Wilson named it – buried the many other blueprints under debate. Today, after a century of lugubrious “international relations”, the only aspect of this principle we still remember is the one most familiar to us: national independence. But Wilson’s original programme, informed by a loose international coalition including such diverse visionaries as Andrew Carnegie and Leonard Woolf (husband of Virginia), aimed for something far more ambitious: a comprehensive inter-state democracy designed to ensure global cooperation, peace and justice.

How were human beings to live securely in their new nations, after all, if nations themselves were not subject to any law? The new order of nations only made sense if these were integrated into a “society of nations”: a formal global society with its own universal institutions, empowered to police the violence that individual states would not regulate on their own: the violence they perpetrated themselves, whether against other states or their own citizens.

The cold war definitively buried this “society”, and we have lived ever since with a drastically degraded version of what was intended. During that period, both superpowers actively destroyed any constraints on international action, maintaining a level of international lawlessness worthy of the “scramble for Africa”. Without such constraints, their disproportionate power produced exactly what one would expect: gangsterism. The end of the cold war did nothing to change American behaviour: the US is today dependent on lawlessness in international society, and on the perpetual warfare-against-the-weak that is its consequence.

Just as illegitimate government within a nation cannot persist for long without opposition, the illegitimate international order we have lived with for so many decades is quickly exhausting the assent it once enjoyed. In many areas of the world today, there is no remaining illusion that this system can offer a viable future. All that remains is exit. Some are staking everything on a western passport, which, since the supreme value of western life is still enshrined in the system, is the one guarantee of meaningful constitutional protection. But such passports are difficult to get.

That leaves the other kind of exit, which is to take up arms against the state system itself. The appeal of Isis for its converts was its claim to erase from the Middle East the catastrophe of the post-imperial century. It will be remembered that the group’s most triumphant publicity was associated with its penetration of the Iraq-Syria border. This was presented as a victory over the 1916 treaties by which the British and French divided the Ottoman Empire amongst themselves – Isis’s PR arm issued the Twitter hashtag #SykesPicotOver – and inaugurated a century of Mesopotamian bombing. It arose from an entirely justifiable rejection of a system that obstinately designated – during the course of a century and more – Arabs as “savages” to whom no dignity or protection would be extended.

The era of national self-determination has turned out to be an era of international lawlessness, which has crippled the legitimacy of the nation state system. And, while revolutionary groups attempt to destroy the system “from below”, assertive regional powers are destroying it “from above” – by infringing national borders in their own backyards. Russia’s escapade in Ukraine demonstrates that there are now few consequences to neo-imperial bagatelles, and China’s route to usurping the 22nd-richest country in the world – Taiwan – lies open. The true extent of our insecurity will be revealed as the relative power of the US further declines, and it can no longer do anything to control the chaos it helped create.

The three elements of the crisis described here will only worsen. First, the existential breakdown of rich countries during the assault on national political power by global forces. Second, the volatility of the poorest countries and regions, now that the departure of cold war-era strongmen has revealed their true fragility. And third, the illegitimacy of an “international order” that has never aspired to any kind of “society of nations” governed by the rule of law.

Since they are all rooted in transnational forces whose scale eludes the reach of any one nation’s politics, they are largely immune to well-meaning political reform within nations (though the coming years will also see many examples of such reform). So we are obliged to re-examine the ageing political foundations of our global system if we do not wish to see it pushed to ever more extreme forms of collapse.


This is not a small endeavour: it will take the better part of this century. We do not know yet where it will lead. All we can lay out now is a set of directions. From the standpoint of our present, they will seem impossible, because we have not known any other way. But that is how radical novelty always begins.

The first is clear: global financial regulation. Today’s great engines of wealth creation are distributed in such a way as to elude national taxation systems (94% of Apple’s cash reserves are held offshore; this $250bn is greater than the combined foreign reserves of the British government and the Bank of England), which is diminishing all nation states, materially and symbolically. There is no reason to heed those interested parties who tell us global financial regulation is impossible: it is technologically trivial compared to the astonishing systems those same parties have already built.

The history of the nation state is one of perennial tax innovation, and the next such innovation is transnational: we must build systems to track transnational money flows, and to transfer a portion of them into public channels. Without this, our political infrastructure will continue to become more and more superfluous to actual material life. In the process we must also think more seriously about global redistribution: not aid, which is exceptional, but the systematic transfer of wealth from rich to poor for the improved security of all, as happens in national societies.

Second: global flexible democracy. As new local and transnational political currents become more powerful, the nation state’s rigid monopoly on political life is becoming increasingly unviable. Nations must be nested in a stack of other stable, democratic structures – some smaller, some larger than they – so that turmoil at the national level does not lead to total breakdown. The EU is the major experiment in this direction, and it is significant that the continent that invented the nation state was also the first to move beyond it. The EU has failed in many of its functions, principally because it has not established a truly democratic ethos. But free movement has hugely democratised economic opportunity within the EU. And insofar as it may become a “Europe of regions” – comprising Catalonia and Scotland, not only Spain and the UK – it can help stabilise national political upheaval.

We need more such experiments in continental and global politics. National governments themselves need to be subjected to a superior tier of authority: they have proved to be the most dangerous forces in the nation-state era, waging endless wars against other nations while oppressing, killing and otherwise failing their own populations. Oppressed national minorities must be given a legal mechanism to appeal over the heads of their own governments – this was always part of Wilson’s vision and its loss has been terrible for humanity.

Third, and finally: we need to find new conceptions of citizenship. Citizenship is itself the primordial kind of injustice in the world. It functions as an extreme form of inherited property and, like other systems in which inherited privilege is overwhelmingly determinant, it arouses little allegiance in those who inherit nothing. Many countries have made efforts, through welfare and education policy, to neutralise the consequences of accidental advantages such as birth. But “accidental advantages” rule at the global level: 97% of citizenship is inherited, which means that the essential horizons of life on this planet are already determined at birth.

If you are born Finnish, your legal protections and economic expectations are of such a different order to those of a Somali or Syrian that even mutual understanding is difficult. Your mobility – as a Finn – is also very different. But in a world system – rather than a system of nations – there can be no justification for such radical divergences in mobility. Deregulating human movement is an essential corollary of the deregulation of capital: it is unjust to preserve the freedom to move capital out of a place while forbidding people from following.

Contemporary technological systems offer models for rethinking citizenship so it can be de-linked from territory, and its advantages can be more fairly distributed. The rights and opportunities accruing to western citizenship could be claimed far away, for instance, without anyone having to travel to the west to do so. We could participate in political processes far away that nonetheless affect us: if democracy is supposed to give voters some control over their own conditions, for instance, should a US election not involve most people on earth? What would American political discourse look like, if it had to satisfy voters in Iraq or Afghanistan?

On the eve of its centenary, our nation-state system is already in a crisis from which it does not currently possess the capacity to extricate itself. It is time to think how that capacity might be built. We do not yet know what it will look like. But we have learned a lot from the economic and technological phases of globalisation, and we now possess the basic concepts for the next phase: building the politics of our integrated world system. We are confronted, of course, by an enterprise of political imagination as significant as that which produced the great visions of the 18th century – and, with them, the French and American Republics. But we are now in a position to begin.