Showing posts with label productivity

Thursday 22 April 2021

Burnt out: is the exhausting cult of productivity finally over?

Zoe Williams in The Guardian

In the US, they call it “hustle culture”: the idea that the ideal person for the modern age is one who is always on, always at work, always grafting. Your work is your life, and when you are not doing your hustle, you have a side-hustle. Like all the world’s worst ideas, it started in Silicon Valley, although it is a business-sector thing, rather than a California thing.

Since the earliest days of tech, the notion of “playbour”, work so enjoyable that it is interchangeable with leisure, has been the dream. From there, it spiralled in all directions: hobbies became something to monetise, wellness became a duty to your workplace and, most importantly, if you love your work, it follows that your colleagues are your intimates, your family.



Which is why an organisation such as Ustwo Games likes to call itself a “fampany”. “What the hell is that?” says Sarah Jaffe, author of Work Won’t Love You Back. “A lot of these companies’ websites use the word ‘family’, even though they have workers in Canada, workers in India, workers in the UK; a lot of us don’t even speak the same language and yet we’re a ‘family’.” Meanwhile, companies such as Facebook and Apple have offered egg-freezing to their employees, suggesting that you may have to defer having a real family if you work for a fake one.

A grownup soft-play area: inside the Google office in Zurich, Switzerland. Photograph: Google

The tech companies’ attitudes have migrated into other “status” sectors, together with the workplaces that look like a kind of grownup soft-play, all colourful sofas and ping-pong and hot meals. In finance, food has become such a sign of pastoral care that Goldman Sachs recently sent junior employees hampers to make up for their 100-hour working weeks. If you actually cared about your staff, surely you would say it with proper working conditions, not fruit? But there is an even dicier subtext: when what you eat becomes your boss’s business, they are buying more than your time – they are buying your whole self.

Then Elon Musk weighed in to solve that niggling problem: what’s the point of it all? Making money for someone else, with your whole life? The billionaire reorientated the nature of work: it’s not a waypoint or distraction in the quest for meaning – work is meaning. “Nobody ever changed the world on 40 hours a week,” he memorably tweeted, concluding that people of vision worked 80 or more, eliding industry with passion, vision, society. Say what you like about him but he knows how to build a narrative.

Hustle culture has proved to be a durable and agile creed, changing its image and language while retaining its fundamentals. Sam Baker, author of The Shift: How I (Lost and) Found Myself After 40 – And You Can Too, worked an 80-hour week most of her life, editing magazines. “The 1980s were, ‘Put on a suit and work till you drop,’” she says. “Mark Zuckerberg is, ‘Put on a grey T-shirt and work till you drop.’” The difference, she says, is that “it’s all now cloaked in a higher mission”.

What has exposed the problems with this whole structure is the pandemic. It has wrought some uncomfortable but helpful realisations – not least that the jobs with the least financial value are the ones we most rely on. Those sectors that Tim Jackson, professor of sustainable development at Surrey University and author of Post Growth: Life After Capitalism, describes as “chronically underinvested for so long, neglected for so long” and with “piss-poor” wages, are the ones that civilisation depends on: care work, retail, delivery.
Elon Musk … ‘Nobody ever changed the world on 40 hours a week.’ Photograph: Brendan Smialowski/AFP/Getty Images

Many of the rest of us, meanwhile, have had to confront the nonessentiality of our jobs. Laura, 43, was working in private equity before the pandemic, but home working brought a realisation. Being apart from colleagues and only interacting remotely “distilled the job into the work rather than the emotions being part of something”. Not many jobs can take such harsh lighting. “It was all about making profit, and focusing on people who only care about the bottom line. I knew that. I’ve done it for 20-odd years. I just didn’t want to do it any more.”

Throw in some volunteering – which more than 12 million people have done during the pandemic – and the scales dropped from her eyes. She ended up giving up her job to be a vaccination volunteer. She can afford to live on her savings for now, and as for what happens when the money runs out, she will cross that bridge when she comes to it. The four pillars of survival, on leaving work, are savings, spouses, downsizing and extreme thrift; generally speaking, people are happiest talking about the thrift and least happy talking about the savings.

Charlotte White, 47, had a similar revelation. She gave up a 20-plus-year career in advertising to volunteer at a food bank. “I felt so needed. This sounds very selfish but I have to admit that I’ve got a lot out of it. It’s the opposite of the advertising bullshit. I’d end each day thinking: ‘My God, I’ve really helped someone.’ I’ve lived in this neighbourhood for years, and there are all these people I’ve never met: older people, younger people, homeless people.”

With the spectre of mortality hovering insistently over every aspect of life, it is not surprising that people had their priorities upended. Neal, 50, lost his job as an accountant in January 2020. He started applying for jobs in the same field. “I was into three figures; my hit rate was something like one interview for 25. I think I was so uninterested that it was coming across in my application. I was pretending to be interested in spreadsheets and ledgers when thousands of people were dying, and it just did not sit right.” He is now working in a psychiatric intensive care unit, earning just above the minimum wage, and says: “I should have done it decades ago. I’m a much better support worker than I ever was an accountant.”

This is a constant motif: everybody mentions spreadsheets; everyone wishes they had made the change decades ago. “For nine months, my partner and I existed on universal credit,” Neal says, “and that was it. It was tough, we had to make adjustments, pay things later, smooth things out. But I thought: ‘If we can exist on that …’”
The tyranny of work … Could change be its own reward? Photograph: Bob Scott/Getty Images

So why have we been swallowing these notions about work and value that were nonsense to begin with, and just getting sillier? We have known that the “higher mission” idea, whether it was emotional (being in a company that refers to itself as a “family”) or revolutionary (being “on” all the time in order to change the world), was, as Baker puts it, “just fake, just another way of getting people to work 24 hours a day. It combined with the email culture, of always being available. I remember when I got my BlackBerry, I was working for Cosmopolitan, it was the best thing ever … It was only a matter of months before I was doing emails on holiday.”

But a lot of status came with feeling so indispensable. Unemployment is a famous driver of misery, and overemployment, to be so needed, can feel very bolstering. Many people describe having been anxious about the loss of status before they left their jobs; more anxious than about the money, where you can at least count what you are likely to have and plan around it. As Laura puts it, “not being on a ladder any more, not being in a race: there is something in life, you should always be moving forward, always going up”. And, when it came to it, other people didn’t see them as diminished.

Katherine Trebeck, of the Wellbeing Economy Alliance, is keen to broaden the focus of the productivity conversation. “To be able to have the choice, to design your own goals for your own life, to develop your own sense of where you get status and esteem is a huge privilege; there’s a socio-economic gradient associated with that level of autonomy,” she says. In other words: you have to have a certain level of financial security before your own emotional needs are at all relevant.

“When I was at Oxfam, we worked with young mothers experiencing poverty,” Trebeck says. “Just the pressure to shield their kids from looking poor made them skimp on the food they were providing. Society was forcing them to take those decisions between hunger and stigma.” She is sceptical about individual solutions and is much more focused on system change. Whether we are at the bottom or in the middle of this ladder, we are all part of the same story.

Part of the scam of the productivity narrative is to separate us, so that the “unskilled” are voiceless, discredited by their lack of skill, while the “skilled” don’t have anything to complain about because if they want to know what’s tough, they should try being unskilled. But in reality we are very interconnected – especially if working in the public sector – and you can burn out just by seeing too closely what is going on with other people.

Pam, 50, moved with her husband from London to the Peak District. They were both educationalists, he a headteacher, she in special educational needs (SEN). She describes what drove their decision: “If you think about a school, it’s a microcosm of life, and there have been very limited resources. Certainly in SEN, the lack of funding was desperate. Some kids just go through absolute hell: trying to get a CAMHS [Child and Adolescent Mental Health Services] appointment is nigh on impossible, kids have to be literally suicidal for someone to say: ‘OK, we’ll see you in two months.’”

They moved before the pandemic, and she found a part-time job with the National Trust, before lockdown forced a restructure. She hopes to resume working in the heritage sector when it reopens. Her husband still does some consultancy, but the bedrock of their security, financially speaking, is that the move out of London allowed them to “annihilate the mortgage”.
A rewarding alternative … volunteers working at a foodbank in Earlsfield, south London. Photograph: Charlotte White/PA

Creative and academic work, putatively so different from profit-driven sectors, nevertheless exploits its employees using, if anything, a heightened version of the same narrative: if what you do is who you are, then you’re incredibly lucky to be doing this thoughtful/artistic thing, and really, you should be paying us. Elizabeth, 39, was a performer, then worked in a theatre. “My eldest sister used to be an archaeologist, and that sounds different, but it’s the same: another job where they want you to be incredibly credentialed, incredibly passionate. But they still want to pay you minimum wage and God forbid you have a baby.”

There is also what the management consultants would call an opportunity cost, of letting work dominate your sense of who you are. You could go a whole life thinking your thing was maths, when actually it was empathy. I asked everyone if they had any regrets about their careerist years. Baker said: “Are you asking if I wish I’d had children? That’s what people usually mean when they ask that.” It actually wasn’t what I meant: whether you have children or not, the sense of what you have lost to hyperproductivity is more ineffable, that there was a better person inside you that never saw daylight.

When the furlough scheme came in, Jennifer, 39, an academic, leapt at the opportunity to cut her hours without sacrificing any pay. “I thought there’d be a stampede, but I was the only one.” She makes this elegant observation: “The difference between trying 110% and trying 80% is often not that big to other people.”

If the past year has made us rethink what skill means, upturn our notions of the value we bring to the world around us, fall out of love with our employers and question productivity in its every particular, as an individual goal as well as a social one, well, this, as the young people say, could be quite major. Certainly, I would like to see Elon Musk try to rebut this new consciousness in a tweet.

Thursday 17 December 2020

Are poor countries poor because of their poor people? Economic History in Small Doses 5

Girish Menon*

A bus driver in Mumbai gets paid around Rs 50 per hour, whereas his equivalent in Cambridge gets paid £12 per hour. Converted at market exchange rates, the Cambridge driver is paid 24 times more than his Indian counterpart. Does that mean that John, the Cambridge driver, is 24 times more productive than Om, the Mumbai driver? If anything, Om is likely to be a much more skilled driver than John, because Om has to negotiate his way through bullock carts, rickshaws, bicycles and cows on the street.
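
The arithmetic behind the 24-times figure is worth making explicit. The sketch below (Python, illustrative only) uses the wages quoted above and assumes an exchange rate of roughly Rs 100 to the pound, which is what the 24-times claim implies.

# Back-of-the-envelope wage comparison. Wages are taken from the text;
# the exchange rate of ~Rs 100 per pound is an assumption implied by the 24x claim.
mumbai_wage_inr_per_hour = 50        # Om's pay, in rupees
cambridge_wage_gbp_per_hour = 12     # John's pay, in pounds
inr_per_gbp = 100                    # assumed market exchange rate

cambridge_wage_inr = cambridge_wage_gbp_per_hour * inr_per_gbp   # 1,200 rupees
ratio = cambridge_wage_inr / mumbai_wage_inr_per_hour            # 24.0

print(f"John earns the equivalent of Rs {cambridge_wage_inr:.0f} per hour, "
      f"about {ratio:.0f} times Om's Rs {mumbai_wage_inr_per_hour} per hour")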

The main reason why John is paid 24 times more than Om is protectionism: British workers are protected from competition from workers in India, and soon from those in the EU, through immigration control. (Technology has eroded this protection for the many white-collar jobs that can now be relocated offshore.) This form of protectionism goes unmentioned at the WTO (World Trade Organization), even as countries raise their barriers to the immigration of poor workers.

 Many people think that poor countries are poor because of their poor people. The rich people in poor countries typically blame their countries’ poverty on the ignorance, laziness and passivity of the poor. Arithmetically too, it is true that poor people pull down the national income average because of their large numbers.

Little do the rich people in poor countries realise that their countries are poor not because of their poor but because of themselves. The deeper reason John can be paid 24 times more than Om is that he works in a labour market alongside people who are far more than 24 times more productive than their Indian counterparts. The top managers, scientists and engineers in the UK are hundreds of times more productive than their Indian equivalents, so the UK's average national productivity ends up in the region of 24 times that of India.
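
A toy calculation, with entirely hypothetical numbers, shows how a small and hugely productive minority can pull a country's average productivity far above what its ordinary workers produce:

# Hypothetical two-sector illustration: in both countries, 90% of workers are
# equally productive; only the top 10% differ, yet the averages diverge sharply.
def average_productivity(groups):
    """groups: list of (share_of_workforce, relative_productivity) pairs."""
    return sum(share * productivity for share, productivity in groups)

country_a = [(0.9, 1.0), (0.1, 250.0)]   # 10% elite, 250x as productive
country_b = [(0.9, 1.0), (0.1, 2.0)]     # 10% elite, only 2x as productive

ratio = average_productivity(country_a) / average_productivity(country_b)
print(f"Ratio of average productivities: {ratio:.1f}x")   # roughly 24x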

In other words, poor people from poor countries are usually able to hold their own against their counterparts in rich countries. It is the rich of the poor countries who cannot do so, and it is their relatively low productivity that makes their country poor. So, instead of blaming their own poor for dragging the country down, the rich of the poor countries should ask themselves why they cannot pull up the productivity and innovation of their own country.

Of course, the rich in rich countries have no reason to be smug. They are beneficiaries of economies with better technology, better organized firms, better institutions and better physical infrastructure. Warren Buffett expressed it best:

 “I personally think that society is responsible for a very significant percentage of what I’ve earned. If you stick me down in the middle of Bangladesh or Peru or someplace, you’ll find out how much this talent is going to produce in the wrong kind of soil. I will be struggling thirty years later. I work in a system that happens to reward what I do well – disproportionately well.”

 

* Adapted from 23 Things They Don’t Tell You About Capitalism by Ha-Joon Chang

Wednesday 8 April 2020

Defining productivity in a pandemic may teach us a lesson

How should we measure the contribution of a teacher or a health worker during this crisis, asks Diane Coyle in The FT

One “P” word has been dominating economic policy discussions for some time now: not “pandemic”, but “productivity”. Now that coronavirus has dealt an unprecedented blow to economies everywhere, policymakers are asking how it will affect productivity at a national level. 

The long-term effects of Covid-19 are unknown — they depend on the length of time for which economic activity will have to be suspended. The longer lockdowns last, the greater the hit to output growth and the higher unemployment will rise.

Productivity — the output the economy gains for the resources and effort it expends — matters because it is what drives improvements in living standards: better health; longer lives; greater comfort. Investment, innovation and skills are the key ingredients, though the recipe is still a mystery.  

In the UK, the pandemic will certainly cause a short-term fall in private-sector productivity. This is not only because many people are unwell or struggling to work at home around children and pets, but also because of a sharp decline in output. Employment is falling too, but many businesses are keeping workers on their books so labour input will not decline by as much. In general, productivity falls when output falls. 
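
The mechanics are simple: labour productivity is output divided by labour input, so if output drops sharply while firms keep staff on the books, measured productivity must fall. A minimal sketch with made-up numbers:

# Labour productivity = output / hours worked (illustrative numbers only).
def labour_productivity(output, hours):
    return output / hours

before = labour_productivity(output=100.0, hours=100.0)   # 1.00
# Output drops 20%, but furlough-style schemes keep most workers on the books,
# so labour input falls only 5%.
during = labour_productivity(output=80.0, hours=95.0)      # about 0.84

print(f"Productivity change: {(during / before - 1) * 100:.0f}%")   # about -16%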

In the public sector, measuring productivity is hard. For services such as health and education, the Office for National Statistics looks at both activity and quality — such as the number of pupils and their exam grades, or the number of operations and health outcomes. But, at the best of times, these measures depend on other factors. 

How should we think of the productivity of a teacher preparing lessons for online delivery, with all the challenges that involves, and what will be the effect on pupils’ attainment? It is easy to think of new measures, such as the number of online lessons delivered, but hard to imagine pupil outcomes not suffering. 

As for medical staff, who would argue their productivity has not rocketed in recent weeks? But for many patients the outcomes that are measured will sadly be tragic. The biggest “productivity” boost may come from a new vaccine. 

Public investment in infrastructure or green technologies will ultimately help productivity, but financial pain may force businesses to retrench. Business investment in the UK has been sluggish anyway, falling in 2018 and rising just 0.6 per cent in 2019. It is hard to foresee anything other than a big fall from the £50bn-or-so-a-quarter invested last year.

Will supply chains unravel? The division of labour and specialisation that comes with outsourcing has driven gains in manufacturing productivity since the 1980s, but it depends on frictionless logistics and freight. Keeping that system going through lockdowns will take significant international co-ordination, which seems unlikely. 

Some recent work suggests that even quite small shocks can cause networks to fall apart. This one will reverberate as waves of contagion hit countries at varying times. One response would be for importing companies to diversify supply chains. A less benign one — in productivity terms — would be a shift to reshoring production at home.

The pandemic and its aftermath will raise profound questions. Productivity involves a more-for-less (or, at least, more-for-the-same) mindset — hence the just-in-time systems and tight logistics operations. Companies may rethink the need for buffers as economic insurance. Inventories could rise, increasing business costs. Suppliers closer to home could be found, again at higher cost. 

Perhaps the definition of economic wellbeing will also change. Conventional economic output matters, as people now losing their incomes know all too well. But so do social support networks and fair access to services. Without them, everyone is more vulnerable. Prosperity is more than productivity.

Tuesday 7 May 2019

Red Meat Republic - The Story of Beef

Exploitation and predatory pricing drove the transformation of the US beef industry – and created the model for modern agribusiness. By Joshua Specht in The Guardian 


The meatpacking mogul Jonathan Ogden Armour could not abide socialist agitators. It was 1906, and Upton Sinclair had just published The Jungle, an explosive novel revealing the grim underside of the American meatpacking industry. Sinclair’s book told the tale of an immigrant family’s toil in Chicago’s slaughterhouses, tracing the family’s physical, financial and emotional collapse. The Jungle was not Armour’s only concern. The year before, the journalist Charles Edward Russell’s book The Greatest Trust in the World had detailed the greed and exploitation of a packing industry that came to the American dining table “three times a day … and extorts its tribute”.

In response to these attacks, Armour, head of the enormous Chicago-based meatpacking firm Armour & Co, took to the Saturday Evening Post to defend himself and his industry. Where critics saw filth, corruption and exploitation, Armour saw cleanliness, fairness and efficiency. If it were not for “the professional agitators of the country”, he claimed, the nation would be free to enjoy an abundance of delicious and affordable meat.

Armour and his critics could agree on this much: they lived in a world unimaginable 50 years before. In 1860, most cattle lived, died and were consumed within a few hundred miles’ radius. By 1906, an animal could be born in Texas, slaughtered in Chicago and eaten in New York. Americans rich and poor could expect to eat beef for dinner. The key aspects of modern beef production – highly centralised, meatpacker-dominated and low-cost – were all pioneered during that period.

For Armour, cheap beef and a thriving centralised meatpacking industry were the consequence of emerging technologies such as the railroad and refrigeration coupled with the business acumen of a set of honest and hard-working men like his father, Philip Danforth Armour. According to critics, however, a capitalist cabal was exploiting technological change and government corruption to bankrupt traditional butchers, sell diseased meat and impoverish the worker.

Ultimately, both views were correct. The national market for fresh beef was the culmination of a technological revolution, but it was also the result of collusion and predatory pricing. The industrial slaughterhouse was a triumph of human ingenuity as well as a site of brutal labour exploitation. Industrial beef production, with all its troubling costs and undeniable benefits, reflected seemingly contradictory realities.

Beef production would also help drive far-reaching changes in US agriculture. Fresh-fruit distribution began with the rise of the meatpackers’ refrigerator cars, which they rented to fruit and vegetable growers. Production of wheat, perhaps the US’s greatest food crop, bore the meatpackers’ mark. In order to manage animal feed costs, Armour & Co and Swift & Co invested heavily in wheat futures and controlled some of the country’s largest grain elevators. In the early 20th century, an Armour & Co promotional map announced that “the greatness of the United States is founded on agriculture”, and depicted the agricultural products of each US state, many of which moved through Armour facilities.

Beef was a paradigmatic industry for the rise of modern industrial agriculture, or agribusiness. As much as a story of science or technology, modern agriculture is a compromise between the unpredictability of nature and the rationality of capital. This was a lurching, violent process that saw meatpackers displace the risks of blizzards, drought, disease and overproduction on to cattle ranchers. Today’s agricultural system works similarly. In poultry, processors like Perdue and Tyson use an elaborate system of contracts and required equipment and feed purchases to maximise their own profits while displacing risk on to contract farmers. This is true with crop production as well. As with 19th-century meatpacking, relatively small actors conduct the actual growing and production, while companies like Monsanto and Cargill control agricultural inputs and market access.

The transformations that remade beef production between the end of the American civil war in 1865 and the passage of the Federal Meat Inspection Act in 1906 stretched from the Great Plains to the kitchen table. Before the civil war, cattle raising was largely regional, and in most cases, the people who managed cattle out west were the same people who owned them. Then, in the 1870s and 80s, improved transport, bloody victories over the Plains Indians, and the American west’s integration into global capital markets sparked a ranching boom. Meanwhile, Chicago meatpackers pioneered centralised food processing. Using an innovative system of refrigerator cars and distribution centres, they began to distribute fresh beef nationwide. Millions of cattle were soon passing through Chicago’s slaughterhouses each year. By 1890, the Big Four meatpacking companies – Armour & Co, Swift & Co, Morris & Co and the GH Hammond Co – directly or indirectly controlled the majority of the nation’s beef and pork.

But in the 1880s, the big Chicago meatpackers faced determined opposition at every stage from slaughter to sale. Meatpackers fought with workers as they imposed a brutally exploitative labour regime. Meanwhile, attempts to transport freshly butchered beef faced opposition from railroads who found higher profits transporting live cattle east out of Chicago and to local slaughterhouses in eastern cities. Once pre-slaughtered and partially processed beef – known as “dressed beef” – reached the nation’s many cities and towns, the packers fought to displace traditional butchers and woo consumers sceptical of eating meat from an animal slaughtered a continent away.

The consequences of each of these struggles persist today. A small number of firms still control most of the country’s – and by now the world’s – beef. They draw from many comparatively small ranchers and cattle feeders, and depend on a low-paid, mostly invisible workforce. The fact that this set of relationships remains so stable, despite the public’s abstract sense that something is not quite right, is not the inevitable consequence of technological change but the direct result of the political struggles of the late 19th century.

In the slaughterhouse, someone was always willing to take your place. This could not have been far from the mind of 14-year-old Vincentz Rutkowski as he stooped, knife in hand, in a Swift & Co facility in summer 1892. For up to 10 hours each day, Vincentz trimmed tallow from cattle paunches. The job required strong workers who were low to the ground, making it ideal for boys like Rutkowski, who had the beginnings of the strength but not the size of grown men. For the first two weeks of his employment, Rutkowski shared his job with two other boys. As they became more skilled, one of the boys was fired. Another few weeks later, the other was also removed, and Rutkowski was expected to do the work of three people.

The morning that final co-worker left, on 30 June, Rutkowski fell behind the disassembly line’s frenetic pace. After just three hours of working alone, the boy failed to dodge a carcass swinging toward him. It struck his knife hand, driving the tool into his left arm near the elbow. The knife cut muscle and tendon, leaving Rutkowski with lifelong injuries.

The labour regime that led to Rutkowski’s injury was integral to large-scale meatpacking. A packinghouse was a masterpiece of technological and organisational achievement, but that was not enough to slaughter millions of cattle annually. Packing plants needed cheap, reliable and desperate labour. They found it via the combination of mass immigration and a legal regime that empowered management, checked the nascent power of unions and provided limited liability for worker injury. The Big Four’s output depended on worker quantity over worker quality.

Meatpacking lines, pioneered in the 1860s in Cincinnati’s pork packinghouses, were the first modern production lines. The innovation was that they kept products moving continuously, eliminating downtime and requiring workers to synchronise their movements to keep pace. This idea was enormously influential. In his memoirs, Henry Ford explained that his idea for continuous motion assembly “came in a general way from the overhead trolley that the Chicago packers use in dressing beef”.


 A Swift and Company meatpacking house in Chicago, circa 1906. Photograph: Granger Historical Picture Archive/Alamy

Packing plants relied on a brilliant intensification of the division of labour. This division increased productivity because it simplified slaughter tasks. Workers could then be trained quickly, and because the tasks were also synchronised, everyone had to match the pace of the fastest worker.

When cattle first entered one of these slaughterhouses, they encountered an armed man walking toward them on an overhead plank. Whether by a hammer swing to the skull or a spear thrust to the animal’s spinal column, the (usually achieved) goal was to kill with a single blow. Assistants chained the animal’s legs and dragged the carcass from the room. The carcass was hoisted into the air and brought from station to station along an overhead rail.

Next, a worker cut the animal’s throat and drained and collected its blood while another group began skinning the carcass. Even this relatively simple process was subdivided throughout the period. Initially the work of a pair, nine different workers handled skinning by 1904. Once the carcass was stripped, gutted and drained of blood, it went into another room, where highly trained butchers cut the carcass into quarters. These quarters were stored in giant refrigerated rooms to await distribution.

But profitability was not just about what happened inside slaughterhouses. It also depended on what was outside: throngs of men and women hoping to find a day’s or a week’s employment. An abundant labour supply meant the packers could easily replace anyone who balked at paltry salaries or, worse yet, tried to unionise. Similarly, productivity increases heightened the risk of worker injury, and therefore were only effective if people could be easily replaced. Fortunately for the packers, late 19th-century Chicago was full of people desperate for work.

Seasonal fluctuations and the vagaries of the nation’s cattle markets further conspired to marginalise slaughterhouse labour. Though refrigeration helped the meatpackers “defeat the seasons” and secure year-round shipping, packing remained seasonal. Packers had to reckon with cattle’s reproductive cycles, and distribution in hot weather was more expensive. The number of animals processed varied day to day and month to month. For packinghouse workers, the effect was a world in which an individual day’s labour might pay relatively well but busy days were punctuated with long stretches of little or no work. The least skilled workers might only find a few weeks or months of employment at a time.

The work was so competitive and the workers so desperate that, even when they had jobs, they often had to wait, without pay, if there were no animals to slaughter. Workers would be fired if they did not show up at a specified time before 9am, but then might wait, unpaid, until 10am or 11am for a shipment. If the delivery was very late, work might continue until late into the night.

Though the division of labour and throngs of unemployed people were crucial to operating the Big Four’s disassembly lines, these factors were not sufficient to maintain a relentless production pace. This required intervention directly on the line. Fortunately for the packers, they could exploit a core aspect of continuous-motion processing: if one person went faster, everyone had to go faster. The meatpackers used pace-setters to force other workers to increase their speed. The packers would pay this select group – roughly one in 10 workers – higher wages and offer secure positions that they only kept if they maintained a rapid pace, forcing the rest of the line to keep up. These pace-setters were resented by their co-workers, and were a vital management tool.

Close supervision of foremen was equally important. Management kept statistics on production-line output, and overseers who slipped in production could lose their jobs. This encouraged foremen to use tactics that management did not want to explicitly support. According to one retired foreman, he was “always trying to cut down wages in every possible way … some of [the foremen] got a commission on all expenses they could save below a certain point”. Though union officials vilified foremen, their jobs were only marginally less tenuous than those of their underlings.


 Union Stock Yard in Chicago in 1909. Photograph: Science History Images/Alamy

The effectiveness of de-skilling on the disassembly line rested on an increase in the wages of a few highly skilled positions. Though these workers individually made more money, their employers secured a precipitous decrease in average wages. Previously, a gang composed entirely of general-purpose butchers might all be paid 35 cents an hour. In the new regime, a few highly specialised butchers would receive 50 cents or more an hour, but the majority of other workers would be paid much less than 35 cents. Highly paid workers were given the only jobs in which costly mistakes could be made – damage to hides or expensive cuts of meat – protecting against mistakes or sabotage from the irregularly employed workers. The packers also believed (sometimes erroneously) that the highly paid workers – popularly known as the “butcher aristocracy” – would be more loyal to management and less willing to cooperate with unionisation attempts.
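
To see how the average could fall even as a few wages rose, consider a hypothetical ten-man gang; the 35-cent and 50-cent rates come from the text, while the two-to-eight split and the 20-cent rate for the remaining hands are illustrative assumptions:

# Hypothetical ten-man gang, illustrating how de-skilling cut the average wage
# even while a few specialised butchers earned more.
old_gang = [0.35] * 10               # everyone a general-purpose butcher
new_gang = [0.50] * 2 + [0.20] * 8   # two specialists, eight low-paid hands

old_avg = sum(old_gang) / len(old_gang)   # $0.35 an hour
new_avg = sum(new_gang) / len(new_gang)   # $0.26 an hour

print(f"Average hourly wage: ${old_avg:.2f} -> ${new_avg:.2f}")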

The overall trend was an incredible intensification of output. Splitters, one of the most skilled positions, provide a good example. The economist John Commons wrote that in 1884, “five splitters in a certain gang would get out 800 cattle in 10 hours, or 16 per hour for each man, the wages being 45 cents. In 1894 the speed had been increased so that four splitters got out 1,200 in 10 hours, or 30 per hour for each man – an increase of nearly 100% in 10 years.” Even as the pace increased, the process of de-skilling ensured that wages were constantly moving downward, forcing employees to work harder for less money.
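
Commons's figures can be checked directly; the numbers below are taken from the quote:

# Checking John Commons's splitter figures.
cattle_1884, splitters_1884, hours = 800, 5, 10
cattle_1894, splitters_1894 = 1200, 4

rate_1884 = cattle_1884 / (splitters_1884 * hours)   # 16 cattle per man-hour
rate_1894 = cattle_1894 / (splitters_1894 * hours)   # 30 cattle per man-hour

increase = (rate_1894 / rate_1884 - 1) * 100
print(f"{rate_1884:.0f} -> {rate_1894:.0f} cattle per man-hour, an increase of "
      f"about {increase:.0f}%")   # ~88%, which Commons rounds to 'nearly 100%'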

The fact that meatpacking’s profitability depended on a brutal labour regime meant conflicts between labour and management were ongoing, and at times violent. For workers, strikes during the 1880s and 90s were largely unsuccessful. This was the result of state support for management, a willing pool of replacement workers and extreme hostility to any attempts to organise. At the first sign of unrest, Chicago packers would recruit replacement workers from across the US and threaten to permanently fire and blacklist anyone associated with labour organisers. But state support mattered most of all; during an 1886 fight, for instance, authorities “garrisoned over 1,000 men … to preserve order and protect property”. Even when these troops did not clash with strikers, it had a chilling effect on attempts to organise. Ultimately, packinghouse workers could not organise effectively until the very end of the 19th century.

The genius of the disassembly line was not merely in creating productivity gains through the division of labour; it was also that it simplified labour enough that the Big Four could benefit from a growing surplus of workers and a business-friendly legal regime. If the meatpackers needed purely skilled labour, they could not exploit desperate throngs outside their gates. If a new worker could be trained in hours and government was willing to break strikes and limit injury liability, workers became disposable. This enabled the dangerous – and profitable – increases in production speed that maimed Vincentz Rutkowski.

Centralisation of cattle slaughter in Chicago promised high profits. Chicago’s stockyards had started as a clearinghouse for cattle – a point from which animals were shipped live to cities around the country. But when an animal is shipped live, almost 40% of the travelling weight is blood, bones, hide and other inedible parts. The small slaughterhouses and butchers that bought live animals in New York or Boston could sell some of these by-products to tanners or fertiliser manufacturers, but their ability to do so was limited. If the animals could be slaughtered in Chicago, the large packinghouses could realise massive economies of scale on the by-products. In fact, these firms could undersell local slaughterhouses on the actual meat and make their profits on the by-products.
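
A rough sketch, with a hypothetical freight rate and round-number weights, shows why shipping dressed beef rather than live animals cut the freight cost per pound of saleable meat, even before counting by-product revenue:

# Illustrative comparison: shipping a live animal means paying freight on the
# blood, bones and hide as well as the meat. Weights and rate are hypothetical.
live_weight_lb = 1000
edible_fraction = 0.6          # the text: roughly 40% of travelling weight is inedible
freight_per_lb = 0.01          # hypothetical freight rate, dollars per pound shipped

dressed_weight_lb = live_weight_lb * edible_fraction

cost_per_lb_beef_live = (live_weight_lb * freight_per_lb) / dressed_weight_lb
cost_per_lb_beef_dressed = (dressed_weight_lb * freight_per_lb) / dressed_weight_lb

print(f"Freight per lb of saleable beef, shipped live:    ${cost_per_lb_beef_live:.4f}")
print(f"Freight per lb of saleable beef, shipped dressed: ${cost_per_lb_beef_dressed:.4f}")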

This model only became possible with refinements in refrigerated shipping technology, starting in the 1870s. Yet the fact that technology created a possibility did not make its adoption inevitable. Refrigeration sparked a nearly decade-long conflict between the meatpackers and the railroads. American railroads had invested heavily in railcars and other equipment for shipping live cattle, and fought dressed-beef shipment tonne by tonne, charging more to move a given weight of dressed beef than to move a similar weight of live cattle. They justified this difference by claiming their goal was to provide the same final cost for beef to consumers – what the railroads called a “principle of neutrality”.

Since beef from animals slaughtered locally was more expensive than Chicago dressed beef, the railroads would charge the Chicago packers more to even things out. This would protect railroad investments by eliminating the packers’ edge, and it could all be justified as “neutral”. Though this succeeded for a time, the packers would defeat this strategy by taking a circuitous route along Canada’s Grand Trunk Railway, a line that was happy to accept dressed-beef business it had no chance of securing otherwise.

Eventually, American railroads abandoned their differential pricing as they saw the collapse of live cattle shipping and became greedy for a piece of the burgeoning dressed-beef trade. But even this was not enough to secure the dominance of the Chicago houses. They also had to contend with local butchers.

In 1889 Henry Barber entered Ramsey County, Minnesota, with 100lb of contraband: fresh beef from an animal slaughtered in Chicago. Barber was no fly-by-night butcher, and was well aware of an 1889 law requiring all meat sold in Minnesota to be inspected locally prior to slaughter. Shortly after arriving, he was arrested, convicted and sentenced to 30 days in jail. But with the support of his employer, Armour & Co, Barber aggressively challenged the local inspection measure.

 
A cattle stockyard in Texas in the 1960s. Photograph: ClassicStock/Alamy

Barber’s arrest was part of a plan to provoke a fight over the Minnesota law, which Armour & Co had lobbied against since it was first drawn up. In federal court, Barber’s lawyers alleged that the statute under which he was convicted violated federal authority over interstate commerce, as well as the US constitution’s privileges and immunities clause. The case would eventually reach the supreme court.

At trial, the state argued that without local, on-the-hoof inspection it was impossible to know if meat had come from a diseased animal. Local inspection was therefore a reasonable part of the state’s police power. Of course, if this argument was upheld, the Chicago houses would no longer be able to ship their goods to any unfriendly state. In response, Barber’s counsel argued that the Minnesota law was a protectionist measure that discriminated against out-of-state butchers. There was no reason meat could not be adequately inspected in Chicago before being sold elsewhere. In Minnesota v Barber (1890), the supreme court ruled the statute unconstitutional and ordered Barber’s release. Armour & Co would go on to dominate the local market.

The Barber ruling was a pivotal moment in a longer fight on the part of the Big Four to secure national distribution. The Minnesota law, and others like it across the country, were fronts in a war waged by local butchers to protect their trade against the encroachment of the “dressed-beef men”. The rise of the Chicago meatpackers was not a gradual process of newer practices displacing old, but a wrenching process of big packers strong-arming and bankrupting smaller competitors. The Barber decision made these fights possible, but it did not make victory inevitable. It was on the back of hundreds of small victories – in rural and urban communities across the US – that the packers built their enormous profits.

Armour and the other big packers did not want to deal directly with customers. That required knowledge of local markets and represented a considerable amount of risk. Instead, they hoped to replace wholesalers, who slaughtered cattle for sale to retail butchers. The Chicago houses wanted local butchers to focus exclusively on selling meat; the packers would handle the rest.

When the packers first entered an area, they wooed a respected butcher. If the butcher would agree to buy from the Chicago houses, he could secure extremely generous rates. But if the local butcher refused these advances, the packers declared war. For example, when the Chicago houses entered Pittsburgh, they approached the veteran butcher William Peters. When he refused to work with Armour & Co, Peters later explained, the Chicago firm’s agent told him: “Mr Peters, if you butchers don’t take hold of it [dressed beef], we are going to open shops throughout the city.” Still, Peters resisted and Armour went on to open its own shops, underselling Pittsburgh’s butchers. Peters told investigators that he and his colleagues “are working for glory now. We do not work for any profit … we have been working for glory for the past three or four years, ever since those fellows came into our town”. Meanwhile, Armour’s share of the Pittsburgh market continued to grow.

Facing these kinds of tactics in cities around the country, local butchers formed protective associations to fight the Chicago houses. Though many associations were local, the Butchers’ National Protective Association of the United States of America aspired to “unite in one brotherhood all butchers and persons engaged in dealing in butchers’ stock”. Organised in 1887, the association pledged to “protect their common interests and those of the general public” through a focus on sanitary conditions. Health concerns were an issue on which traditional butchers could oppose the Chicago houses while appealing to consumers’ collective good. They argued that the Big Four “disregard the public good and endanger the health of the people by selling, for human food, diseased, tainted and other unwholesome meat”. The association further promised to oppose price manipulation of a “staple and indispensable article of human food”.

These associations pushed what amounted to a protectionist agenda using food contamination as a justification. On the state and local level, associations demanded local inspection before slaughter, as was the case with the Minnesota law that Henry Barber challenged. Decentralising slaughter would make wholesale butchering again dependent on local knowledge that the packers could not acquire from Chicago.

But again the packers successfully challenged these measures in the courts. Though the specifics varied by case, judges generally affirmed the argument that local, on-the-hoof inspection violated the constitution’s interstate commerce clause, and often accepted that inspection did not need to be local to ensure safe food. Animals could be inspected in Chicago before slaughter and then the meat itself could be inspected locally. This approach would address public fears about sanitary meat, but without a corresponding benefit to local butchers. Lacking legal recourse and finding little support from consumers excited about low-cost beef, local wholesalers lost more and more ground to the Chicago houses until they disappeared almost entirely.

Upton Sinclair’s The Jungle would become the most famous protest novel of the 20th century. By revealing brutal labour exploitation and stomach-turning slaughterhouse filth, the novel helped spur the passage of the Federal Meat Inspection Act and the Pure Food and Drug Act in 1906. But The Jungle’s heart-wrenching critique of industrial capitalism was lost on readers more worried about the rat faeces that, according to Sinclair, contaminated their sausage. Sinclair later observed: “I aimed at the public’s heart, and by accident I hit it in the stomach.” He hoped for socialist revolution, but had to settle for accurate food labelling.

The industry’s defence against striking workers, angry butchers and bankrupt ranchers – namely, that the new system of industrial production served a higher good – resonated with the public. Abstractly, Americans were worried about the plight of slaughterhouse workers, but they were also wary of those same workers marching in the streets. Similarly, they cared about the struggles of ranchers and local butchers, but also had to worry about their wallets. If packers could provide low prices and reassure the public that their meat was safe, consumers would be happy.

The Big Four meatpacking firms came to control the majority of the US’s beef within a fairly brief period – about 15 years – as a set of relationships that once appeared unnatural began to appear inevitable. Intense de-skilling in slaughterhouse labour only became accepted once organised labour was thwarted, leaving packinghouse labour largely invisible to this day. The slaughter of meat in one place for consumption and sale elsewhere only ceased to appear “artificial and abnormal” once butchers’ protective associations disbanded, and once lawmakers and the public accepted that this centralised industrial system was necessary to provide cheap beef to the people.

These developments are taken for granted now, but they were the product of struggles that could have resulted in radically different standards of production. The beef industry that was established in this period would shape food production throughout the 20th century. There were more major shifts – ranging from trucking-driven decentralisation to the rise of fast food – but the broad strokes would remain the same. Much of the environmental and economic risk of food production would be displaced on to struggling ranchers and farmers, while processors and packers would make money in good times and bad. Benefit to an abstract consumer good would continue to justify the industry’s high environmental and social costs.

Today, most local butchers have gone bankrupt and marginal ranchers have had little choice but to accept their marginality. In the US, an increasingly punitive immigration regime makes slaughterhouse work ever more precarious, and “ag-gag” laws that define animal-rights activism as terrorism keep slaughterhouses out of the public eye. The result is that our means of producing our food can seem inevitable, whatever creeping sense of unease consumers might feel. But the history of the beef industry reminds us that this method of producing food is a question of politics and political economy, rather than technology and demographics. Alternate possibilities remain hazy, but if we understand this story as one of political economy, we might be able to fulfil Armour & Company’s old credo – “We feed the world” – using a more equitable system.

Friday 13 April 2018

How much is an hour worth? The war over the minimum wage

Peter C Baker in The Guardian


No idea in economics provokes more furious argument than the minimum wage. Every time a government debates whether to raise the lowest amount it is legal to pay for an hour of labour, a bitter and emotional battle is sure to follow – rife with charges of ignorance, cruelty and ideological bias. In order to understand this fight, it is necessary to understand that every minimum-wage law is about more than just money. To dictate how much a company must pay its workers is to tinker with the beating heart of the employer-employee relationship, a central component of life under capitalism. This is why the dispute over these laws and their effects – which has raged for decades – is so acrimonious: it is ultimately a clash between competing visions of politics and economics. 

In the media, this debate almost always has two clearly defined sides. Those who support minimum-wage increases argue that when businesses are forced to pay a higher rate to workers on the lowest wages, those workers will earn more and have better lives as a result. Opponents of the minimum wage argue that increasing it will actually hurt low-wage workers: when labour becomes more expensive, they insist, businesses will purchase less of it. If minimum wages go up, some workers will lose their jobs, and others will lose hours in jobs they already have. Thanks to government intervention in the market, according to this argument, the workers struggling most will end up struggling even more.

This debate has flared up with new ferocity over the past year, as both sides have trained their firepower on the city of Seattle – where labour activists have won some of the most dramatic minimum-wage increases in decades, hiking the hourly pay for thousands of workers from $9.47 to $15, with future increases automatically pegged to inflation. Seattle’s $15 is the highest minimum wage in the US, and almost double the federal minimum of $7.25. This fact alone guaranteed that partisans from both sides of the great minimum-wage debate would be watching closely to see what happened.

But what turned the Seattle minimum wage into national news – and the subject of hundreds of articles – wasn’t just the hourly rate. It was a controversial, inconclusive verdict on the impact of the new law – or, really, two verdicts, delivered in two competing academic papers that reached opposite conclusions. One study, by economists at the University of Washington (UW), suggested that the sharp increase in Seattle’s minimum wage had reduced employment opportunities and lowered the average pay of the poorest workers, just as its critics had predicted. The other study, by economists at the University of California, Berkeley, claimed that a policy designed to boost worker income had done exactly that.

The duelling academic papers launched a flotilla of opinion columns, as pundits across the US picked over the economic studies to declare that the data was on their side – or that the data on their side was the better data, untainted by ideology or prejudice. In National Review, the country’s most prominent rightwing magazine, Kevin D Williamson wrote that the UW study had proven yet again “that the laws of supply and demand apply to the labor market”. Of course, he added, “everyone already knew that”.

Over on the left, a headline in the Nation declared: “No, Seattle’s $15 Minimum Wage Is Not Hurting Workers.” Citing the Berkeley study, Michelle Chen wrote: “What happens when wages go up? Workers make more money.” The business magazine Forbes ran two opposing articles: one criticising the UW study (“Why It’s Utter BS”), and another criticising liberals for ignoring the UW study in favour of the Berkeley study (“These People are Shameless”). This kind of thing – furious announcements of vindication from both sides – was everywhere, and soon followed by yet another round of stories summarising the first round of arguments.

When historians of the future consider our 21st-century debates about the minimum wage, one of the first things they will notice is that, despite the bitterness of the disagreement, the background logic is almost identical. Some commentators think the minimum wage should obviously go up. Some think all minimum-wage laws are harmful. Others concede we may need a minimum wage, but disagree about how high it should be or whether it should be the same everywhere – or whether its goals could be better accomplished by other measures, such as tax rebates for low-income workers.

But beneath all this conflict, there is a single, widely shared assumption: that the only important measure of the success of a minimum wage is whether economic studies show that it has increased the total earnings of low-wage workers – without this increase being outweighed by a cost in jobs or hours.

It is no coincidence that this framing tracks closely with the way the minimum wage is typically discussed by academic economists. In the US’s national organs of respectable public discourse – New York Times op-eds, Vox podcasts and Atlantic explainers – the minimum-wage debate is conducted almost entirely by economists or by journalists steeped in the economics literature. At first glance, this seems perfectly natural, just as it may seem completely natural that the debate is framed exclusively in terms of employment and pay. After all, the minimum wage is obviously an economic policy: shouldn’t economists be the people best equipped to discuss its effects?

But to historians of the future, this may well appear as a telling artifact of our age. Just imagine, for a moment, combing through a pile of articles debating slavery, or child labour, in which almost every participant spoke primarily in the specialised language of market exchange and incentives, and buttressed their points by wielding competing spreadsheets, graphs and statistical formulas. This would be, I think we can all agree, a discussion that was limited to the point of irrelevance. Our contemporary minimum-wage debates are similarly blinkered. In their reflexive focus on just a few variables, they risk skipping over the fundamental question: how do we value work? And is the answer determined by us – by politics and politicians – or by the allegedly immutable laws of economics?

In the last four years, some of the most effective activists in America have been the “Fight for $15” campaigners pushing to raise the minimum wage – whose biggest victory so far has come in Seattle. Thanks to their efforts – widely viewed as a hopelessly lost cause when they began – significant minimum-wage increases have been implemented in cities and states across the US. These same activists are laying plans to secure more increases in this November’s midterm elections. The Democratic party, following the lead of Bernie Sanders, has made a $15 minimum part of its official national platform. US businesses and their lobbyists, historically hostile to all minimum-wage increases but well aware of their robust popularity, are gearing up to fight back with PR campaigns and political talking points that paint the minimum wage as harmful to low-wage workers, especially young workers in need of job experience.

In the UK, Jeremy Corbyn has pledged that a Labour government would raise the national minimum wage to £10 “within months” of taking office. (It is currently on schedule to rise slowly to £9 by 2020 – a rise that some on the right have criticised, citing Seattle as evidence that it will eliminate jobs.) In recent years, EU policymakers have raised the possibility of an EU-wide minimum-wage scheme. All this activity – combined with concern about rising economic inequality and stagnating wages – means the minimum wage is being studied and debated with an intensity not seen for years. But this is a debate unlikely to be resolved by economic studies, because it ultimately hinges on questions that transcend economics.

So what are we really talking about when we talk about the minimum wage?

The first minimum-wage laws of the modern industrial era were passed in New Zealand and Australia in the first decades of the 20th century, with the goal of improving the lives and working conditions of sweatshop workers. As news of these laws spread, reformers in the US sought to copy them. Like today’s minimum-wage proponents, these early reformers insisted that a minimum wage would increase the incomes of the poorest, most precarious workers. But they were also explicit about their desire to protect against capitalism’s worst tendencies. Without government regulation, they argued, there was nothing to stop companies from exploiting poor workers who needed jobs in order to eat – and had no unions to fight on their behalf.

In the field of economics, the concern that a state-administered minimum wage – also known as a wage floor – could backfire by reducing jobs or hours had been around since John Stuart Mill at least. But for many years, it was not necessarily the dominant view. Many mainstream economists supported the introduction of a minimum wage in the US, especially a group known as “institutionalists”, who felt economists should be less interested in abstract models and more focused on how businesses operated in the real world. At the time, many economists, institutionalist and otherwise, thought minimum-wage laws would likely boost worker health and efficiency, reduce turnover costs, and – by putting more cash in workers’ pockets – stimulate spending that would keep the wheels of the economy spinning.

During the Great Depression, these arguments found a prominent champion in President Franklin Roosevelt, who openly declared his desire to reshape the American economy by driving out “parasitic” firms that built worker penury into their business models. “No business which depends for existence on paying less than living wages to its workers has any right to continue in this country,” he said in 1933.

Inevitably, this vision had its dissenters, especially among business owners, for whom minimum-wage increases represented an immediate and unwelcome increase in costs, and more generally, a limit on their agency as profit-seekers. At a 1937 Congressional hearing on the proposed Fair Labor Standards Act (FLSA) – which enacted the first federal minimum wage, the 40-hour work week and the ban on child labour – a representative of one of the US’s most powerful business lobby groups, the National Association of Manufacturers, testified that a minimum wage was the first step toward totalitarianism: “Call it Bolshevism or communism, if you will. Call it socialism, Nazism, fascism or what you will. Each says to the people that they must bow to the will of the state.”

Despite these objections, the FLSA passed in 1938, setting a nationwide minimum wage of $0.25 per hour (the equivalent of $4.45 today). Many industries were exempt at first, including those central to the southern economy, and those that employed high proportions of racial minorities and women. In subsequent decades, more and more of these loopholes were closed.

But as the age of Roosevelt and his New Deal gave way to that of Reagan, the field of economics turned decisively against the minimum wage – one part of a much larger political and cultural tilt toward all things “free market”. A central factor in this shift was the increasing prominence of neoclassical price theory, a set of powerful models that illuminated how well-functioning markets respond to the forces of supply and demand, to generate prices that strike, under ideal conditions, the most efficient balance possible between the preferences of consumers and producers, buyers and sellers.

Viewed through the lens of the basic neoclassical model, to set a minimum wage is to interfere with the “natural” marriage of market forces, and therefore to legislatively eliminate jobs that free agents would otherwise have been perfectly willing to take. Low-wage workers could lose income, teenagers could lose opportunities for work experience, consumer prices could rise and the overall output of the economy could be reduced. The temptation to shackle the invisible hand might be powerful, but was to be resisted, for the good of all.

Throughout the 70s, studies of the minimum wage’s effects were few and far between – certainly just a small fraction of today’s vast literature on the subject. Hardly anyone thought it was a topic that required much study. Economists understood that there were indeed rare conditions in which employers could get away with paying workers less than the “natural” market price of their labour, due to insufficiently high competition among employers. Under these conditions (known as monopsonies), raising the minimum wage could actually increase employment, by drawing more people into the workforce. But monopsonies were widely thought to be exceptionally unusual – only found in markets for very specialised labour, such as professional athletes or college professors. Economists knew the minimum wage as one thing only: a job killer.

In 1976, the prominent economist George Stigler, a longtime critic of the minimum wage on neoclassical grounds, boasted that “one evidence of the professional integrity of the economist is the fact that it is not possible to enlist good economists to defend protectionist programs or minimum wage laws”. He was right. According to a 1979 study in the American Economic Review, the main journal of the American Economic Association, 90% of economists identified minimum-wage laws as a source of unemployment.

“The minimum wage has caused more misery and unemployment than anything since the Great Depression,” claimed Reagan during his 1980 presidential campaign. In many ways, Reagan’s governing philosophy (like Margaret Thatcher’s) was a grossly simplified, selectively applied version of neoclassical price theory, slapped with a broad brush on to any aspect of American life that Republicans wanted to set free from regulatory interference or union pressure. Since becoming law in 1938, the US federal minimum wage had been raised by Congress 15 times, generally keeping pace with inflation. Once Reagan was president, he blocked any new increases, letting the nationwide minimum be eroded by inflation. By the time he left office, the federal minimum was $3.35, and stood at its lowest value to date, relative to the median national income.

Today, invectives against Reaganomics (and support for minimum-wage increases) are a commonplace in liberal outlets such as the New York Times. But in 1987, the Times ran an editorial titled “The Right Minimum Wage: $0.00”, informing its readers – not inaccurately, at the time – that “there’s a virtual consensus among economists that the minimum wage is an idea whose time has passed”. Minimum-wage increases, the paper’s editorial board argued, “would price working poor people out of the job market”. In service of this conclusion, they cited not a single study.

But the neoclassical consensus was eventually shattered. The first crack in the facade was a series of studies published in the mid-90s by two young economists, David Card and Alan Krueger. Through the 1980s and into the 90s, many US states had responded to the stagnant federal minimum wage by passing laws that boosted their local minimum wages above what national law required. Card and Krueger conducted what they called “natural experiments” to investigate the impact of these state-level increases. In their best-known study, they examined hiring and firing decisions at fast-food restaurants on both sides of the border between New Jersey, which had just raised its wage floor, and Pennsylvania, which had not. Their controversial conclusion was that New Jersey’s higher wage had not caused any decrease in employment.
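
The logic of that border comparison – what economists call a difference-in-differences estimate – can be stated in miniature: subtract the change on the Pennsylvania side (which captures background trends affecting both states) from the change on the New Jersey side, and what remains is an estimate of the wage rise’s effect. The short Python sketch below, which uses invented employment figures rather than Card and Krueger’s actual data, shows the arithmetic.

# Illustrative difference-in-differences calculation in the spirit of
# Card and Krueger's border comparison. All figures are invented for
# this example; they are NOT taken from the actual study.

def diff_in_diff(treated_before, treated_after, control_before, control_after):
    """Change in the treated group minus change in the control group."""
    return (treated_after - treated_before) - (control_after - control_before)

# Hypothetical average full-time-equivalent staff per fast-food restaurant.
nj_before, nj_after = 19.0, 19.5   # New Jersey: wage floor raised
pa_before, pa_after = 22.0, 21.0   # Pennsylvania: wage floor unchanged

effect = diff_in_diff(nj_before, nj_after, pa_before, pa_after)
print(f"Estimated employment effect: {effect:+.1f} FTEs per restaurant")
# Here the estimate is positive; a result at or above zero is the kind of
# finding that challenged the assumption that a higher minimum wage must
# reduce employment.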

In Myth and Measurement, the duo’s book summarising their findings, they assailed the existing body of minimum-wage research, arguing that serious flaws had been overlooked by a field eager to confirm the broad reach of neoclassical price theory, and willing to ignore the many ways in which the labour market might differ from markets in consumer goods. (For one thing, they suggested, it was likely that monopsony conditions were much more common in the low-wage labour market than had been previously assumed – allowing employers, rather than “the market”, to dictate wages.) The book was dedicated to Richard Lester, an economist from the institutionalist school who argued in the 1940s that neoclassical models often failed to accurately describe how businesses behave in the real world.

Card and Krueger’s work went off like a bomb in the field of economics. The Clinton administration was happy to cite their findings in support of a push, which was eventually successful, to raise the federal minimum to $5.15. But defenders of the old consensus fought back.

In the Wall Street Journal, the Nobel prize-winning economist James M Buchanan asserted that people willing to give credence to the Myth and Measurement studies were “camp-following whores”. For economists to advance such heretical claims about the minimum wage, Buchanan argued, was the equivalent of a physicist arguing that “water runs uphill” (which, I must note, is not uncommon in man-made plumbing and irrigation systems). High-pitched public denunciations like Buchanan’s were just the tip of the disciplinary iceberg. More than a decade later, Card recalled that he subsequently avoided the subject, in part because many of his fellow economists “became very angry or disappointed. They thought that in publishing our work we were being traitors to the cause of economics as a whole.”

There were some shortcomings in Card and Krueger’s initial work, but their findings inspired droves of economists to start conducting empirical studies of minimum-wage increases. Over time, they developed new statistical techniques to make those studies more precise and robust. After several generations of such studies, there is now considerable agreement among economists that, in available historical examples, increases in the minimum wage have not substantially reduced employment. But this newer consensus is far short of the near-unanimity of the 1980s. There are prominent dissenters who insist that the field’s new tolerance for minimum wages is politically expedient wishful thinking – that the data, when properly analysed, still confirms the old predictions of neoclassical theory. And every new study from one side of the debate still generates a rapid response from the other team, in both the specialist academic literature and the wider media.

What has returned the minimum wage to the foreground of US politics is not the slowly shifting discourse of academic economists, but the efforts of the Fight for $15 and its new brand of labour activism. The traditional template for US labour organising was centred on unions – on workers pooling their power to collectively negotiate better contracts with their employers. But in the past four decades, the weakening of US labour law and the loss of jobs in industries that were once bastions of union strength have made traditional unions harder to form, less powerful and easier to break, especially in low-wage service industries.

These conditions have given birth to what is often called “alt-labour”: a wide variety of groups and campaigns (many of them funded or supported by traditional unions) that look more like activist movements. Campaigns such as the Fight for $15 often voice support for unionisation as an ideal (and their union backers would like the additional members), but in the meantime, alt-labour groups seek to address worker grievances through more public means: the courts, elections and protest actions, including “wildcat” strikes.

In November 2012, some 200 non-unionised workers at fast-food chain restaurants in New York City walked off the job and marched through the streets to broadcast two central demands: the ability to form a union and a $15 minimum wage. (At the time, New York’s minimum wage was $7.25, the same as the national minimum.) The marches also sought to emphasise the fact that, contrary to persistent stereotype, minimum-wage jobs are not held exclusively, or even primarily, by teenagers working for pocket money or job experience; many of the participants were adults attempting to provide for families. The march, the largest of its kind in fast-food history, was coordinated with help from one of the US’s largest and most politically active unions, the Service Employees International Union. Soon the SEIU was helping fast-food workers stage similar walkouts across the country. The Fight for $15 had begun.

As the campaign gathered steam – earning widespread media coverage, helping secure minimum-wage increases in many cities and states, and putting the issue back into the national political conversation – the media turned to economists for their opinion. Their responses illustrated the extent to which the old neoclassical consensus had been upended, but also the ways in which it remained the same.

The old economic consensus insisted that the only good minimum wage was no minimum wage; the new consensus recognises that this is not the case. Increasingly, following Card and Krueger, economists recognise that monopsonistic conditions, in which there is little competition among purchasers of labour, are more common than once thought. If competition among low-wage employers is not as high as it “should” be, wages – like those of fast-food workers – can be “unnaturally” suppressed. A minimum wage is therefore accepted as a tweak necessary to correct this flaw. For economists, the “correct” minimum wage is the one whose benefits (in higher hourly pay) can be predicted, on the weight of past studies, to outweigh its costs (in lost jobs and hours) – one that gives the average worker more money without significantly reducing the number of available jobs.

But this meant that almost no economists, even staunch defenders of minimum-wage increases, would endorse the central demand of the Fight for $15. A hike of that size, they pointed out, was considerably more drastic than any increase in the minimum wage they had previously analysed – and therefore, by the standards of the field, too risky to be endorsed. Arindrajit Dube, a professor at the University of Massachusetts, and perhaps contemporary economics’ most prominent defender of minimum-wage increases, cautioned that $15 might be fine for a prosperous coastal city, but it could end up incurring dangerously high costs in poorer parts of the country. Alan Krueger himself came out against setting a federal target of $15, arguing in a New York Times op-ed that such a high wage floor was “beyond the range studied in past research”, and therefore “could well be counterproductive”.

Of course, these economists may be right. But if all minimum-wage policy had been held to this standard, the US federal minimum wage would not exist to begin with – since the initial jump, from $0 to $0.25, was certainly well “beyond the range studied in past research”.

Almost exactly a year after fast-food workers first walked off the job in New York City, launching the Fight for $15, the country’s first $15 minimum wage became law in SeaTac, Washington, a city of fewer than 30,000 people, known mostly (if at all) as the home of Seattle’s major airport, Seattle-Tacoma International. It was an emblematic victory for “alt-labour”: for years, poorly paid airport ground-crew workers had been trying and failing to form a union, stymied by legal technicalities. With SEIU help, these workers launched a campaign to hold a public referendum on a $15 wage – not expecting to win, but in the hope that the negative publicity would put pressure on the airlines that flew through SeaTac. But in November 2013, the city’s residents – by a slim margin of 77 votes – passed the country’s highest minimum wage.

That same day, a socialist economist named Kshama Sawant won a seat on Seattle’s City Council. Sawant had made a $15 minimum wage a central plank of her campaign. Afraid of being outflanked from the left in one of the most proudly liberal cities in the US, most of her fellow council candidates and both major mayoral candidates endorsed the idea, too. (At the time, the city’s minimum wage was $9.47.) On 2 June 2014, the city council – hoping to avoid a public referendum on the matter – unanimously approved the increase to $15, to be phased in over three years, with future increases pegged to inflation.

The furious Seattle minimum-wage debate of last summer was ostensibly about the $15 rate. But the subject of those competing studies was actually the city’s intermediate increase, at the start of 2016, from the 2015 minimum of $11, to either $13 – for large businesses with more than 500 employees – or $12, for smaller ones. (Businesses that provided their employees with healthcare were allowed to pay less.)

When a group of researchers at the University of Washington (UW) released a paper analysing this incremental hike in June 2017, their conclusion appeared to uphold the predictions of neoclassical theory and throw cold water on the Fight for $15. Yes, low-wage Seattle workers now earned more per hour in 2016 than in 2015. But, the paper argued, having become more expensive to hire, they were being hired less often, and for fewer hours, with the overall reduction in hours outweighing the jump in hourly rates. According to their calculations, the average low-wage worker in Seattle made $1,500 less in 2016 than the year before, even though the city was experiencing an economic boom.
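
The arithmetic behind that conclusion is straightforward: if hours fall by a larger proportion than the hourly rate rises, total pay falls. A toy Python calculation, using invented wages and hours rather than the UW paper’s estimates, shows the mechanism.

# Illustrative only: a higher hourly wage combined with fewer hours can
# still mean lower total pay. These wages and hours are invented for the
# example; they are not figures from the UW study.

old_wage, old_hours = 11.00, 30    # hourly rate and weekly hours before the increase
new_wage, new_hours = 13.00, 24    # higher rate, but fewer hours, afterwards

old_annual = old_wage * old_hours * 52
new_annual = new_wage * new_hours * 52

print(f"Before: ${old_annual:,.0f}   After: ${new_annual:,.0f}   "
      f"Change: {new_annual - old_annual:+,.0f}")
# Hours fell by 20% while the wage rose by roughly 18%, so annual earnings
# drop even though every hour worked now pays more.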

Some of the funding for the University of Washington researchers had come from the Seattle city council. (The group has released several other papers tracking the minimum wage’s effects, and plans to release at least 20 more in the years to come.) But after city officials read a draft of the study, they sought a second opinion from the Center on Wage and Employment Dynamics at the University of California, Berkeley – a research group long associated with support for minimum-wage increases. The Berkeley economists had been preparing their own study of Seattle’s minimum wage, which reached very different conclusions. At the city’s request, they accelerated its release, so it would come out before the more negative UW paper. And after the UW paper was released, Michael Reich, one of the Berkeley study’s lead authors, published a letter directly criticising its methods and dismissing its conclusions.



It was around this point that the op-ed salvos started flying in both directions. The conditions for widespread, contentious coverage could hardly have been more perfect: supporters of the Fight for $15 and its detractors each had one study to trumpet and one to dismiss.

Conservatives leaped to portray liberals as delusional utopians who would keep commissioning scientific findings until they got one they liked. Some proponents of the Fight for $15, meanwhile, scoured the internet for any sign that Jacob Vigdor, who led the UW study, had a previous bias against the minimum wage.

Critics of the UW study pointed out that it had used payroll data only from businesses with a single location – thus excluding larger businesses and chains such as Domino’s and Starbucks, which were most likely to cope with the short-term local shock in labour costs (and, plausibly, to absorb some of the work that may have been lost at smaller businesses). The Berkeley study, on the other hand, relied solely on data from the restaurant industry, which critics contended did not represent the city’s low-wage economy as a whole.

But on one point, almost everyone agreed. Both studies were measuring the one thing that really mattered: whether the higher minimum wage led to fewer working hours for low-wage workers, and if so, whether the loss in hours had counteracted the increase in pay.

This approach revealed a fundamental continuity between the post-Card and Krueger consensus and the neoclassical orthodoxy it had replaced. When Roosevelt pushed for America’s first minimum wage, he was confident that capitalists would deal with the temporary price shock by doing what capitalists do best: relentlessly seeking out new ways to save costs elsewhere. He rejected the idea that a functioning economy simply must contain certain types of jobs, or that particular industries were intrinsically required to be poorly compensated or exploitative.

Economies and jobs are, to some extent, what we decide to make them. In developed economies like the US and the UK, it is common to lament the disappearance of “good jobs” in manufacturing and their replacement by “bad” low-wage work in service industries. But much of what was “good” about those manufacturing jobs was made that way over time by concessions won and regulations demanded by labour activists. Today, there is no natural reason that the exploding class of service jobs must be as “bad” as they often are.

The Fight for $15 has not notched its victories by convincing libertarian economists that they are wrong; it has won because more and more Americans work bad jobs – poorly paid jobs, unrewarding jobs, insecure jobs – and they are willing to try voting some of that badness out of existence.

This willingness is not the product of hours spent reading the post-Card and Krueger economic literature. It has much more to do with an intuitive understanding that – in an economy defined by historically high levels of worker productivity on the one hand, and skyrocketing but unevenly distributed profit on the other – some significantly better arrangement must be possible, and that new rules might help nudge us in the right direction, steering employers’ profit-seeking energies towards other methods of cutting costs besides miserably low pay. But we should not expect that there will be a study that proves ahead of time how this will work – just as Roosevelt could not prove his conjecture that the US economy did not have an existential dependence on impoverished sweatshop labour.

Last November, I spent several days in Seattle, mostly talking with labour activists and low-wage workers, including fast-food employees, restaurant waiters and seasonal employees at CenturyLink Field, the city’s American football (and soccer) stadium. In all of these conversations, people talked about the higher minimum wage with palpable pride and enthusiasm. Crystal Thompson, a 36-year-old Domino’s supervisor (she was recently promoted from phone operator), told me she still loved looking at pictures from Seattle’s Fight for $15 marches: proof that even the poorest workers could shut down traffic across a major city and make their demands heard. “I wasn’t even a voter before,” she told me. In fact, more than one person said that since the higher wage had passed, they were on the lookout for the next fight to join.

The more people I talked to, the more difficult it was to keep seeing the minimum-wage debate through the narrow lens of the economics literature – where it is analysed as a discrete policy option, a dial to be turned up or down, with the correct level to be determined by experts. Again and again, my conversations with workers naturally drifted from the minimum wage to other battles about work and pay in Seattle. Since passing the $15 minimum wage, the city had instituted new laws mandating paid sick and family leave, set legal limits on unpredictable shift scheduling, and funded the creation of an office of labour investigators to track down violators of these new rules. (One dark footnote to any conversation about the minimum wage is the fact that, without effective enforcement, many employers regularly opt not to pay it. Another dark footnote is that minimum wage law does not apply to the rapidly growing number of workers classified as “independent contractors”, many of whom toil in the gig economy.)

It was obvious in Seattle that all these victories were intertwined – that victory in one battle had provided energy and momentum for the next – and that all of these advances for labour took the form of limits, imposed by politics, on the latitude allowed to employers in the name of profit-seeking.

Toward the end of my visit, I went to see Jacob Vigdor, the economist who was the lead author of the UW study arguing that Seattle’s minimum wage was actually costing low-wage workers money. He told me he hadn’t ever expected to find himself at the centre of a national storm about wage policy. “I managed to spend 18 years of my career successfully staying away from the minimum wage,” he said. “And then for a while there it kind of took over my life.”

He wanted to defend the study from its critics on the economic left – but he also wanted to stress that his group’s findings were tentative, and insufficiently detailed to make a final ruling about the impact of the minimum wage in Seattle or anywhere else. “This is not enough information to really make a normative call about this minimum-wage policy,” he said.

The UW paper itself is equally explicit on this front, something its many public proponents have been all too willing to forget. But it wasn’t just pundits who took liberties with interpreting the results: in August 2017, the Republican governor of Illinois explicitly cited the paper when vetoing a $15 minimum-wage bill. That same month, the Republican governor of Missouri also cited the UW study, while signing a law to block cities within the state from raising their own minimum wages. Thanks in large part to the efforts of business lobbyists, 27 states have passed “pre-emption” laws that stop cities and counties from raising their wage floors. (Vigdor has since acknowledged, on Twitter, that it was disingenuous for the governors to cite his study to justify their “politically motivated” decisions.)

Much like my conversations with low-wage workers across the city, talking to Vigdor ultimately left me feeling that, when examined closely, the minimum-wage discourse playing out in the field of economics – and, by extension, across the media landscape – had startlingly little direct relevance to anything at all other than itself. I mentioned to Vigdor that, walking around Seattle, I’d seen a surprising number of restaurants advertising an immediate, urgent need for basic help: dishwashers, busboys, kitchen staff. This had motivated me to go digging in state employment statistics, where I learned that in 2016 and 2017, restaurants across Seattle recorded a consistent need for several thousand more employees than they could find. How did this square with the idea that the higher minimum wage had led to low-wage workers losing work?

“That’s a story about labour supply,” Vigdor said. “Our labour supply is drying up.” Amazon and other tech companies, he said, were drawing in lots of high-skilled, high-wage workers. These transplants were rapidly driving up rents, making the city unlivable for workers at the bottom of the economic food chain, a dynamic exacerbated by the city’s relatively small stock of publicly subsidised low-income housing. 

These downward pressures on the labour supply, Vigdor pointed out, were essentially independent of the minimum wage. “The minimum wage [increase] is maybe just accelerating something that was bound to happen anyway,” he said.

This was not the sort of thing I had expected to hear from the author of the study that launched a hundred vitriolic assaults on the $15 minimum wage. “A million online op-ed writers’ heads just exploded,” I said.

Vigdor laughed ruefully. “Well, we’re going to be studying this for a long time.”

A few days earlier, I met with Kshama Sawant, the socialist economist who had been so instrumental in passing the $15 wage. She was eager to make sure I had read the Berkeley study, and that I had seen all the criticisms of the UW study. But her most impassioned argument wasn’t about the studies – and it was one that Roosevelt would have found very familiar.

“Look, if it were true that the economic system we have today can’t even bring our most poverty-stricken workers to a semi-decent standard of living – and $15 is not even a living wage, by the way – then why would we defend it?” She paused. “That would be straightforward evidence that we need a better system.”

Sunday 1 October 2017

The pendulum swings against privatisation

Evidence suggests that ending state ownership works in some markets but not others


Tim Harford in The Financial Times


Political fashions can change quickly, as a glance at almost any western democracy will tell you. The pendulum of the politically possible swings back and forth. Nowhere is this more obvious than in the debates over privatisation and nationalisation. 


In the late 1940s, experts advocated nationalisation on a scale hard to imagine today. Arthur Lewis thought the government should run the phone system, insurance and the car industry. James Meade wanted to socialise iron, steel and chemicals; both men later won Nobel memorial prizes in economics. 

They were in tune with the times: the British government ended up owning not only utilities and heavy industry but airlines, travel agents and even the removal company, Pickfords. The pendulum swung back in the 1980s and early 1990s, as Margaret Thatcher and John Major began an ever more ambitious series of privatisations, concluding with water, electricity and the railways. The world watched, and often followed suit. 

Was it all worth it? The question arises because the pendulum is swinging back again: Jeremy Corbyn, the bookies’ favourite to be the next UK prime minister, wants to renationalise the railways, electricity, water and gas. (He has not yet mentioned Pickfords.) Furthermore, he cites these ambitions as a reason to withdraw from the European single market. 


That is odd, since there is nothing in single market rules to prevent state ownership of railways and utilities — the excuse seems to be yet another Eurosceptic myth, the leftwing reflection of rightwing tabloids moaning about banana regulation. Since the entire British political class has lost its mind over Brexit, it would be unfair to single out Mr Corbyn on those grounds. 

Still, he has reopened a debate that long seemed settled, and piqued my interest. Did privatisation work? Proponents sometimes mention the galvanising effect of the profit motive, or the entrepreneurial spirit of private enterprise. Opponents talk of fat cats and selling off the family silver. Realists might prefer to look at the evidence, and the ambitious UK programme has delivered plenty of that over the years. 

There is no reason for a government to own Pickfords, but the calculus of privatisation is more subtle when it comes to natural monopolies — markets that are broadly immune to competition. If I am not satisfied with what Pickfords has to offer me when I move home, I am not short of options. But the same is not true of the Royal Mail: if I want to write to my MP then the big red pillar box at the end of the street is really the only game in town. 

Competition does sometimes emerge in unlikely-seeming circumstances. British Telecom seemed to have an iron grip on telephone services in the UK — as did AT&T in the US. The grip melted away in the face of regulation and, more importantly, technological change. 

Railways seem like a natural monopoly, yet there are two separate railway lines from my home town of Oxford into London, and two separate railway companies will sell me tickets for the journey. They compete with two bus companies; competition can sometimes seem irrepressible. 

But the truth is that competition has often failed to bloom, even when one might have expected it. If I run a bus service at 20 and 50 minutes past the hour, then a competitor can grab my business without competing on price by running a service at 19 and 49 minutes past the hour. Customers will not be well served by that. 

Meanwhile electricity and phone companies offer bewildering tariffs, and it is hard to see how water companies will ever truly compete with each other; the logic of geography suggests otherwise. 

All this matters because the broad lesson of the great privatisation experiment is that it has worked well when competition has been unleashed, but less well when a government-run business has been replaced by a government-regulated monopoly. 

A few years ago, the economist David Parker assembled a survey of post-privatisation performance studies. The most striking thing is the diversity of results. Sometimes productivity soared. Sometimes investors and managers skimmed off all the cream. Revealingly, performance often leapt in the year or two before privatisation, suggesting that state-owned enterprises could be well-run when the political will existed — but that political will was often absent. 

My overall reading of the evidence is that privatisation tended to improve profitability, productivity and pricing — but the gains were neither vast nor guaranteed. Electricity privatisation was a success; water privatisation was a disappointment. Privatised railways now serve vastly more passengers than British Rail did. That is a success story but it looks like a failure every time your nose is crushed up against someone’s armpit on the 18:09 from London Victoria. 

The evidence suggests this conclusion: the picture is mixed, the details matter, and you can get results if you get the execution right. Our politicians offer a different conclusion: the picture is stark, the details are irrelevant, and we metaphorically execute not our policies but our opponents. The pendulum swings — but shows no sign of pausing in the centre.