
Showing posts with label regulation. Show all posts

Saturday 17 June 2023

Economics Essay 41: Regulation in Financial Sector

 Explain why economies such as the UK need a legal framework of regulation for the financial sector.

Economies like the UK require a legal framework of regulation for the financial sector for several reasons. The financial sector plays a critical role in the economy, and effective regulation helps ensure stability, protect consumers, maintain market integrity, and mitigate systemic risks. Here are arguments supported by examples:

  1. Financial Stability and Systemic Risk Mitigation: Regulation is crucial in promoting financial stability and preventing crises that can have far-reaching consequences for the economy. By implementing prudential regulations, governments can safeguard the financial system against risks and shocks.

Example: The 2008 global financial crisis highlighted the importance of financial regulation. In the UK, the Financial Services Authority (FSA) was replaced by the Financial Conduct Authority (FCA) and the Prudential Regulation Authority (PRA) to enhance regulatory oversight and prevent a recurrence of such systemic risks.

  2. Consumer Protection: Regulation protects consumers by ensuring fair and transparent practices in financial markets, preventing fraud, and providing mechanisms for dispute resolution. Regulations can set standards for product disclosures, customer data protection, and fair treatment of consumers.

Example: The UK's Financial Services Compensation Scheme (FSCS) is a regulatory initiative that protects consumers' deposits in case of bank failures. This scheme assures individuals that their deposits are safeguarded up to a certain limit, fostering trust in the banking system.

  3. Market Integrity and Confidence: Regulations play a crucial role in maintaining market integrity, preventing market abuse, and promoting investor confidence. This fosters trust in the financial sector, attracting investment and supporting economic growth.

Example: The UK's Financial Conduct Authority (FCA) regulates conduct in financial markets, ensuring fair and transparent practices. The FCA enforces regulations related to insider trading, market manipulation, and mis-selling of financial products, which helps maintain market integrity and investor confidence.

  4. International Reputation and Regulatory Standards: A well-regulated financial sector enhances a country's international reputation and facilitates cross-border transactions. Regulatory frameworks aligned with international standards and best practices help attract foreign investment and promote financial integration.

Example: The UK's regulatory framework for the financial sector adheres to international standards, such as those set by the Basel Committee on Banking Supervision. This alignment helps maintain the UK's position as a global financial hub and encourages international investors to engage with UK-based financial institutions.

  5. Systemic Risk Management: Regulations provide tools and mechanisms to manage systemic risks, such as capital adequacy requirements, stress tests, and resolution frameworks. These measures aim to prevent the failure of large financial institutions and minimize the impact on the wider economy.

Example: The UK's Financial Policy Committee (FPC), established after the financial crisis, monitors systemic risks and promotes the resilience of the financial system. The FPC sets macroprudential regulations, including capital buffers, to ensure banks can withstand economic downturns and protect the stability of the financial sector.
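The capital-adequacy logic behind such macroprudential rules can be sketched in miniature. The bank figures, risk weights and the 4.5% minimum below are simplified assumptions for illustration, not the actual Basel calculation:

```python
# Illustrative capital-adequacy check in the spirit of Basel-style rules.
# All figures, risk weights and the 4.5% minimum are simplified assumptions.

def risk_weighted_assets(exposures):
    """Sum each exposure scaled by its regulatory risk weight."""
    return sum(amount * weight for amount, weight in exposures)

def capital_ratio(equity_capital, exposures):
    """Equity capital as a share of risk-weighted assets."""
    return equity_capital / risk_weighted_assets(exposures)

# Hypothetical bank balance sheet: (exposure in GBP millions, risk weight)
exposures = [
    (500, 0.0),   # government bonds carry a zero risk weight
    (300, 0.35),  # residential mortgages
    (200, 1.0),   # unsecured corporate loans
]

ratio = capital_ratio(25, exposures)   # GBP 25m of equity against GBP 305m of RWA
print(f"Capital ratio: {ratio:.1%}")   # 25 / 305, roughly 8.2%
print("Meets 4.5% minimum:", ratio >= 0.045)
```

Raising a capital buffer in this sketch simply means testing the ratio against a higher threshold; the point is that the regulator constrains the ratio of capital to risk-weighted exposure, not the raw size of the balance sheet.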

In summary, a legal framework of regulation is essential for the financial sector in economies like the UK. It promotes financial stability, protects consumers, maintains market integrity, boosts investor confidence, and helps manage systemic risks. The examples provided highlight the importance of regulation in preventing financial crises, ensuring fair practices, and maintaining the UK's reputation as a global financial center.

Wednesday 7 June 2023

Externalities and Taxes - What to do when the interests of the individual and society do not coincide?

 From The Economist

Loud conversation in a train carriage that makes concentration impossible for fellow-passengers. A farmer spraying weedkiller that destroys his neighbour’s crop. Motorists whose idling cars spew fumes into the air, polluting the atmosphere for everyone. Such behaviour might be considered thoughtless, anti-social or even immoral. For economists these spillovers are a problem to be solved.

Markets are supposed to organise activity in a way that leaves everyone better off. But the interests of those directly involved, and of wider society, do not always coincide. Left to their own devices, boors may ignore travellers’ desire for peace and quiet; farmers the impact of weedkiller on the crops of others; motorists the effect of their emissions. In all of these cases, the active parties are doing well, but bystanders are not. Market prices—of rail tickets, weedkiller or petrol—do not take these wider costs, or “externalities”, into account.

The examples so far are the negative sort of externality. Others are positive. Melodious music could improve everyone’s commute, for example; a new road may benefit communities by more than a private investor would take into account. Still others are more properly known as “internalities”. These are the overlooked costs people inflict on their future selves, such as when they smoke, or scoff so many sugary snacks that their health suffers.

The first to lay out the idea of externalities was Alfred Marshall, a British economist. But it was one of his students at Cambridge University who became famous for his work on the problem. Born in 1877 on the Isle of Wight, Arthur Pigou cut a scruffy figure on campus. He was uncomfortable with strangers, but intellectually brilliant. Marshall championed him and with the older man’s support, Pigou succeeded him to become head of the economics faculty when he was just 30 years old.

In 1920 Pigou published “The Economics of Welfare”, a dense book that outlined his vision of economics as a toolkit for improving the lives of the poor. Externalities, where “self-interest will not…tend to make the national dividend a maximum”, were central to his theme.

Although Pigou sprinkled his analysis with examples that would have appealed to posh students, such as his concern for those whose land might be overrun by rabbits from a neighbouring field, others reflected graver problems. He claimed that chimney smoke in London meant that there was only 12% as much sunlight as was astronomically possible. Such pollution imposed huge “uncharged” costs on communities, in the form of dirty clothes and vegetables, and the need for expensive artificial light. If markets worked properly, people would invest more in smoke-prevention devices, he thought.

Pigou was open to different ways of tackling externalities. Some things should be regulated—he scoffed at the idea that the invisible hand could guide property speculators towards creating a well-planned town. Other activities ought simply to be banned. No amount of “deceptive activity”—adulterating food, for example—could generate economic benefits, he reckoned.

But he saw the most obvious forms of intervention as “bounties and taxes”. These measures would use prices to restore market perfection and avoid strangling people with red tape. Seeing that producers and sellers of “intoxicants” did not have to pay for the prisons and policemen associated with the rowdiness they caused, for example, he recommended a tax on booze. Pricier kegs should deter some drinkers; the others will pay towards the social costs they inflict.
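Pigou's remedy can be made concrete with a stylised linear market (every number here is invented for illustration): a tax equal to the per-unit external cost shifts output to the level that accounts for the harm done to bystanders.

```python
# A stylised Pigouvian tax (every number is invented for illustration).
# Demand: P = 100 - Q.  Private marginal cost: P = 20 + Q.
# Each unit sold also imposes an external cost of 20 on bystanders.

def equilibrium_quantity(choke_price, cost_intercept, tax=0.0,
                         demand_slope=1.0, supply_slope=1.0):
    """Quantity where the demand curve meets the (taxed) supply curve."""
    return (choke_price - cost_intercept - tax) / (demand_slope + supply_slope)

EXTERNAL_COST = 20.0

q_market = equilibrium_quantity(100, 20)                  # externality ignored
q_social = equilibrium_quantity(100, 20 + EXTERNAL_COST)  # full social cost priced in
q_taxed  = equilibrium_quantity(100, 20, tax=EXTERNAL_COST)

print(q_market)  # 40.0: the unregulated market over-produces
print(q_social)  # 30.0: the socially optimal quantity
print(q_taxed)   # 30.0: a tax equal to the external cost restores the optimum
```

Because the tax exactly matches the marginal external cost, the drinker (or driver, or polluter) now faces the full social cost of each unit, which is precisely the "market perfection" Pigou had in mind.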

This type of intervention is now known as a Pigouvian tax. The idea is not just ubiquitous in economics courses; it is also a favourite of policymakers. The world is littered with apparently externality-busting taxes. The French government imposes a noise tax on aircraft at its nine busiest airports. Levies on drivers to counterbalance the externalities of congestion and pollution are common in the Western world. Taxes to fix internalities, like those on tobacco, are pervasive, too. Britain will join other governments in imposing a levy on unhealthy sugary drinks starting next year.

Pigouvian taxes are also a big part of the policy debate over global warming. Finland and Denmark have had a carbon tax since the early 1990s; British Columbia, a Canadian province, since 2008; and Chile and Mexico since 2014. By using prices as signals, a tax should encourage people and companies to lower their carbon emissions more efficiently than a regulator could by diktat. If everyone faces the same tax, those who find it easiest to lower their emissions ought to lower them the most.

Such measures do change behaviour. A tax on plastic bags in Ireland, for example, cut their use by over 90% (with some unfortunate side-effects of its own, as thefts of baskets and trolleys rose). Three years after a charge was introduced on driving in central London, congestion inside the zone had fallen by a quarter. British Columbia’s carbon tax reduced fuel consumption and greenhouse-gas emissions by an estimated 5-15%. And experience with tobacco taxes suggests that they discourage smoking, as long as they are high and smuggled substitutes are hard to find.

Champions of Pigouvian taxes say that they generate a “double dividend”. As well as creating social benefits by pricing in harm, they raise revenues that can be used to lower taxes elsewhere. The Finnish carbon tax was part of a move away from taxes on labour, for example; if taxes must discourage something, better that it be pollution than work. In Denmark the tax partly funds pension contributions.

Pigou flies

Even as policymakers have embraced Pigou’s idea, however, its flaws, both theoretical and practical, have been scrutinised. Economists have picked holes in the theory. One major objection is the incompleteness of the framework, since it holds everything else in the economy fixed. The impact of a Pigouvian tax will depend on the level of competition in the market it is affecting, for example. If a monopoly is already using its power to reduce supply of its products, a new tax may not do any extra good. And if a dominant drinks firm absorbs the cost of an alcohol tax rather than passes it on, then it may not influence the rowdy. (A similar criticism applies to the idea of the double dividend: taxes on labour could cause people to work less than they otherwise might, but if an environmental tax raises the cost of things people spend their income on it might also have the effect of deterring work.)

Another assault on Pigou’s idea came from Ronald Coase, an economist at the University of Chicago (whose theory of the firm was the subject of the first brief in this series). Coase considered externalities as a problem of ill-defined property rights. If it were feasible to assign such rights properly, people could be left to bargain their way to a good solution without the need for a heavy-handed tax. Coase used the example of a confectioner, disturbing a quiet doctor working next door with his noisy machinery. Solving the conflict with a tax would make less sense than the two neighbours bargaining their way to a solution. The law could assign the right to be noisy to the sweet-maker, and if worthwhile, the doctor could pay him to be quiet.

In most cases, the sheer hassle of haggling would render this unrealistic, a problem that Coase was the first to admit. But his deeper point stands. Before charging in with a corrective tax, first think about which institutions and laws currently in place could fix things. Coase pointed out that laws against nuisance could help fix the problem of rabbits ravaging the land; quiet carriages today assign passengers to places according to their noise preferences.

Others reject Pigou’s approach on moral grounds. Michael Sandel, a political philosopher at Harvard University, has worried that relying on prices and markets to fix the world’s problems can end up legitimising bad behaviour. When in 1998 one school in Haifa tried to encourage parents to pick their children up on time by fining them, tardy pickups increased. It turned out that parental guilt was a more effective deterrent than cash; making payments seems to have assuaged the guilt.

Besides these more theoretical qualms about Pigouvian taxes, policymakers encounter all manner of practical ones. Pigou himself admitted that his prescriptions were vague; in “The Economics of Welfare”, though he believed taxes on damaging industries could benefit society, he did not say which ones. Nor did he spell out in much detail how to set the level of the tax.

Prices in the real world are no help; their failure to incorporate social costs is the problem that needs to be solved. Getting people to reveal the precise cost to them of something like clogged roads is asking a lot. In areas like these, policymakers have had to settle on a mixture of pragmatism and public acceptability. London’s initial £5 ($8) fee for driving into its city centre was suspiciously round for a sum meant to reflect the social cost of a trip.

Inevitably, a desire to raise revenue also plays a role. It would be nice to believe that politicians set Pigouvian taxes merely in order to price in an externality, but the evidence, and common sense, suggests otherwise. Research may have guided the initial level of a British landfill tax, at £7 a tonne in 1996. But other considerations may have boosted it to £40 a tonne in 2009, and thence to £80 a tonne in 2014.

Things become even harder when it comes to divining the social cost of carbon emissions. Economists have diligently poked gigantic models of the global economy to calculate the relationship between temperature and GDP. But such exercises inevitably rely on heroic assumptions. And putting a dollar number on environmental Armageddon is an ethical question, as well as a technical one, relying as it does on such judgments as how to value unborn generations. The span of estimates of the economic loss to humanity from carbon emissions is unhelpfully wide as a result, ranging from around $30 to $400 a tonne.
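One way to see why the estimates span such a wide range is that the chosen discount rate dominates the arithmetic. A deliberately simple sketch, with an invented damage stream that is not an estimate of real damages:

```python
# Why social-cost-of-carbon estimates span such a wide range: the discount
# rate dominates. The $5-a-year damage stream over 200 years is invented
# purely to show the mechanics, not an estimate of real climate damages.

def present_value(annual_damage, rate, years):
    """Discounted sum of a constant annual damage stream."""
    return sum(annual_damage / (1 + rate) ** t for t in range(1, years + 1))

for rate in (0.01, 0.03, 0.07):
    pv = present_value(5.0, rate, 200)
    print(f"discount rate {rate:.0%}: about ${pv:,.0f} per tonne")
```

With identical physical damages, a 1% rate values them at several hundred dollars a tonne while a 7% rate values them at under a hundred; how heavily to discount the welfare of unborn generations is exactly the ethical judgment the article describes.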

It’s the politics, stupid

The question of where Pigouvian taxes fall is also tricky. A common gripe is that they are regressive, punishing poorer people, who, for example, smoke more and are less able to cope with rises in heating costs. An economist might shrug: the whole point is to raise the price for whoever is generating the externality. A politician cannot afford to be so hard-hearted. When Australia introduced a version of a carbon tax in 2012, more than half of the money ended up being given back to pensioners and poorer households to help with energy costs. The tax still sharpened incentives; the handouts softened the pain.

A tax is also hard to direct very precisely at the worst offenders. Binge-drinking accounts for 77% of the costs of excessive alcohol use, as measured by lost workplace productivity and extra health-care costs, for example, but less than a fifth of Americans report drinking to excess in any one month. Economists might like to charge someone’s 12th pint of beer at a higher rate than their first, but implementing that would be a nightmare.

Globalisation piles on complications. A domestic carbon tax could encourage people to switch towards imports, or hurt the competitiveness of companies’ exports, possibly even encouraging them to relocate. One solution would be to apply a tax on the carbon content of imports and refund the tax to companies on their exports, as the European Union is doing for cement. But this would be fiendishly complicated to implement across the economy. A global harmonised tax on carbon is the stuff of economists’ dreams, and set to remain so.

So, Pigou handed economists a problem and a solution, elegant in theory but tricky in practice. Politics and policymaking are both harder than the blackboard scribblings of theoreticians. He was sure, however, that the effort was worthwhile. Economics, he said, was an instrument “for the bettering of human life.”

Tuesday 2 May 2023

AI has hacked the operating system of human civilisation

Yuval Noah Harari in The Economist

Fears of artificial intelligence (AI) have haunted humanity since the very beginning of the computer age. Hitherto these fears focused on machines using physical means to kill, enslave or replace people. But over the past couple of years new AI tools have emerged that threaten the survival of human civilisation from an unexpected direction. AI has gained some remarkable abilities to manipulate and generate language, whether with words, sounds or images. AI has thereby hacked the operating system of our civilisation.

Language is the stuff almost all human culture is made of. Human rights, for example, aren’t inscribed in our DNA. Rather, they are cultural artefacts we created by telling stories and writing laws. Gods aren’t physical realities. Rather, they are cultural artefacts we created by inventing myths and writing scriptures.

Money, too, is a cultural artefact. Banknotes are just colourful pieces of paper, and at present more than 90% of money is not even banknotes—it is just digital information in computers. What gives money value is the stories that bankers, finance ministers and cryptocurrency gurus tell us about it. Sam Bankman-Fried, Elizabeth Holmes and Bernie Madoff were not particularly good at creating real value, but they were all extremely capable storytellers.

What would happen once a non-human intelligence becomes better than the average human at telling stories, composing melodies, drawing images, and writing laws and scriptures? When people think about ChatGPT and other new AI tools, they are often drawn to examples like school children using AI to write their essays. What will happen to the school system when kids do that? But this kind of question misses the big picture. Forget about school essays. Think of the next American presidential race in 2024, and try to imagine the impact of AI tools that can be made to mass-produce political content, fake-news stories and scriptures for new cults.

In recent years the QAnon cult has coalesced around anonymous online messages, known as “Q drops”. Followers collected, revered and interpreted these Q drops as a sacred text. While to the best of our knowledge all previous Q drops were composed by humans, and bots merely helped disseminate them, in future we might see the first cults in history whose revered texts were written by a non-human intelligence. Religions throughout history have claimed a non-human source for their holy books. Soon that might be a reality.

On a more prosaic level, we might soon find ourselves conducting lengthy online discussions about abortion, climate change or the Russian invasion of Ukraine with entities that we think are humans—but are actually AI. The catch is that it is utterly pointless for us to spend time trying to change the declared opinions of an AI bot, while the AI could hone its messages so precisely that it stands a good chance of influencing us.

Through its mastery of language, AI could even form intimate relationships with people, and use the power of intimacy to change our opinions and worldviews. Although there is no indication that AI has any consciousness or feelings of its own, to foster fake intimacy with humans it is enough if the AI can make them feel emotionally attached to it. In June 2022 Blake Lemoine, a Google engineer, publicly claimed that the AI chatbot LaMDA, on which he was working, had become sentient. The controversial claim cost him his job. The most interesting thing about this episode was not Mr Lemoine’s claim, which was probably false. Rather, it was his willingness to risk his lucrative job for the sake of the AI chatbot. If AI can influence people to risk their jobs for it, what else could it induce them to do?

In a political battle for minds and hearts, intimacy is the most efficient weapon, and AI has just gained the ability to mass-produce intimate relationships with millions of people. We all know that over the past decade social media has become a battleground for controlling human attention. With the new generation of AI, the battlefront is shifting from attention to intimacy. What will happen to human society and human psychology as AI fights AI in a battle to fake intimate relationships with us, which can then be used to convince us to vote for particular politicians or buy particular products?

Even without creating “fake intimacy”, the new AI tools would have an immense influence on our opinions and worldviews. People may come to use a single AI adviser as a one-stop, all-knowing oracle. No wonder Google is terrified. Why bother searching, when I can just ask the oracle? The news and advertising industries should also be terrified. Why read a newspaper when I can just ask the oracle to tell me the latest news? And what’s the purpose of advertisements, when I can just ask the oracle to tell me what to buy?

And even these scenarios don’t really capture the big picture. What we are talking about is potentially the end of human history. Not the end of history, just the end of its human-dominated part. History is the interaction between biology and culture; between our biological needs and desires for things like food and sex, and our cultural creations like religions and laws. History is the process through which laws and religions shape food and sex.

What will happen to the course of history when AI takes over culture, and begins producing stories, melodies, laws and religions? Previous tools like the printing press and radio helped spread the cultural ideas of humans, but they never created new cultural ideas of their own. AI is fundamentally different. AI can create completely new ideas, completely new culture.

At first, AI will probably imitate the human prototypes that it was trained on in its infancy. But with each passing year, AI culture will boldly go where no human has gone before. For millennia human beings have lived inside the dreams of other humans. In the coming decades we might find ourselves living inside the dreams of an alien intelligence.

Fear of AI has haunted humankind for only the past few decades. But for thousands of years humans have been haunted by a much deeper fear. We have always appreciated the power of stories and images to manipulate our minds and to create illusions. Consequently, since ancient times humans have feared being trapped in a world of illusions.

In the 17th century René Descartes feared that perhaps a malicious demon was trapping him inside a world of illusions, creating everything he saw and heard. In ancient Greece Plato told the famous Allegory of the Cave, in which a group of people are chained inside a cave all their lives, facing a blank wall. A screen. On that screen they see projected various shadows. The prisoners mistake the illusions they see there for reality.

In ancient India Buddhist and Hindu sages pointed out that all humans lived trapped inside Maya—the world of illusions. What we normally take to be reality is often just fictions in our own minds. People may wage entire wars, killing others and willing to be killed themselves, because of their belief in this or that illusion.

The AI revolution is bringing us face to face with Descartes’ demon, with Plato’s cave, with the Maya. If we are not careful, we might be trapped behind a curtain of illusions, which we could not tear away—or even realise is there.

Of course, the new power of AI could be used for good purposes as well. I won’t dwell on this, because the people who develop AI talk about it enough. The job of historians and philosophers like myself is to point out the dangers. But certainly, AI can help us in countless ways, from finding new cures for cancer to discovering solutions to the ecological crisis. The question we face is how to make sure the new AI tools are used for good rather than for ill. To do that, we first need to appreciate the true capabilities of these tools.

Since 1945 we have known that nuclear technology could generate cheap energy for the benefit of humans—but could also physically destroy human civilisation. We therefore reshaped the entire international order to protect humanity, and to make sure nuclear technology was used primarily for good. We now have to grapple with a new weapon of mass destruction that can annihilate our mental and social world.

We can still regulate the new AI tools, but we must act quickly. Whereas nukes cannot invent more powerful nukes, AI can make exponentially more powerful AI. The first crucial step is to demand rigorous safety checks before powerful AI tools are released into the public domain. Just as a pharmaceutical company cannot release new drugs before testing both their short-term and long-term side-effects, so tech companies shouldn’t release new AI tools before they are made safe. We need an equivalent of the Food and Drug Administration for new technology, and we need it yesterday.

Won’t slowing down public deployments of AI cause democracies to lag behind more ruthless authoritarian regimes? Just the opposite. Unregulated AI deployments would create social chaos, which would benefit autocrats and ruin democracies. Democracy is a conversation, and conversations rely on language. When AI hacks language, it could destroy our ability to have meaningful conversations, thereby destroying democracy.

We have just encountered an alien intelligence, here on Earth. We don’t know much about it, except that it might destroy our civilisation. We should put a halt to the irresponsible deployment of AI tools in the public sphere, and regulate AI before it regulates us. And the first regulation I would suggest is to make it mandatory for AI to disclose that it is an AI. If I am having a conversation with someone, and I cannot tell whether it is a human or an AI—that’s the end of democracy.

This text has been generated by a human.

Or has it?

Thursday 30 July 2020

A coronavirus vaccine could split America

In the battle between public science and anti-vaxxer sentiment, science is heavily outgunned writes Edward Luce in The FT

It is late October and Donald Trump has a surprise for you. Unlike the traditional pre-election shock — involving war or imminent terrorist attack — this revelation is about hope rather than fear. The “China virus” has been defeated thanks to the ingenuity of America’s president. The US has developed a vaccine that will be available to all citizens by the end of the year. Get online and book your jab.  

It is possible Mr Trump could sway a critical slice of voters with such a declaration. The bigger danger is that he would deepen America’s mistrust of science. A recent poll found that only half of Americans definitely plan to take a coronavirus vaccine. Other polls said that between a quarter and a third of the nation would never get inoculated. 

Whatever the true number, anti-vaccine campaigners are having a great pandemic — as indeed is Covid-19. At least three-quarters of the population would need to be vaccinated to reach herd immunity. 

Infectious diseases thrive on mistrust. It is hard to imagine a better Petri dish than today’s America. Some of the country’s “vaccine hesitancy” is well grounded. Regulators are under tremendous pressure to let big pharma shorten clinical trials. That could lead to mistakes.

Vaccine nationalism is not just about rich governments pre-ordering as many vials as they can. It is also about winning unimaginably large bragging rights in the race to save the world. Cutting immunological corners could be dangerous to public health. 

Such caution accounts for many of those who would hesitate to be injected. The rest are captured by conspiracy theories. In the battle between public science and anti-vaxxer sentiment, science is heavily outgunned. It faces a rainbow coalition of metastasising folk suspicions on both the left and the right. Public health messages are little match for the memology of social media opponents. 

It is that mix of technological savvy and intellectual derangement that drives today’s politics. Mr Trump did not invent postmodern quackery — though he has endorsed some life-threatening remedies. The irony is that he could fall victim to the mistrust he has stoked.  

Should an effective vaccine loom into view before the US goes to the polls in 95 days, Mr Trump would not be the ideal person to inform the country. The story is as old as the boy who cried wolf. Having endorsed the use of disinfectants and hydroxychloroquine, Mr Trump has forfeited any credibility. Validation should come from Anthony Fauci, America’s top infectious-diseases expert, whose trust ratings are almost double those of the president he serves. 

Even then, however, the challenge would only just be starting. There is no cause to doubt the world-beating potential of US scientific research. There are good reasons to suspect the medical establishment’s ability to win over public opinion. 

The modern anti-vaxxer movement began on the left. It is still going strong. It follows the “my body is my temple” philosophy. Corporate science cannot be trusted to put healthy things into our bodies. The tendency for modern parents to award themselves overnight Wikipedia degrees in specialist fields is also to blame. 

Not all of this mistrust is madcap. African Americans have good reason to distrust public health following the postwar Tuskegee experiments, in which hundreds of black men with syphilis were deliberately left untreated even after penicillin became available. Polls show that more blacks than whites would refuse a coronavirus vaccine. Given their higher likelihood of exposure, such mistrust has tragic potential. 

But rightwing anti-vaxxers have greater momentum. America’s 19th century anti-vaccination movements drew equally from religious paranoia that vaccines were the work of the devil and a more general fear that liberty was under threat. Both strains have resurfaced in QAnon, the virtual cult that believes America is run by a satanic deep state that abuses children. 

It would be hard to invent a more unhinged account of how the world works. Yet Mr Trump has retweeted QAnon-friendly accounts more than 90 times since the pandemic began. Among QAnon’s other theories is that Covid-19 is a Dr Fauci-led hoax to sink Mr Trump’s chances of being re-elected. Science cannot emulate such imaginative forms of storytelling. 

All of which poses a migraine for the silent majority that would happily take the vaccine shots. Their lives are threatened both by a pandemic and by an infodemic. It is a bizarre feature of our times that the first looks easier to solve than the second. 

Sunday 14 October 2018

What price the wisdom of Luke Johnson, when his own company Patisserie Valerie tanks?

Catherine Bennett in The Guardian

The Patisserie Valerie chief should look to himself before lecturing others again

 
Self-styled ‘risk-taker’ Luke Johnson at a branch of Patisserie Valerie in London.


“Unfortunately,” Luke Johnson wrote recently, “financial illiteracy permeates society from top to bottom. Too many ordinary people do not understand mortgages, pensions, insurance, loans or investing.”

Johnson, the entrepreneur whose biggest asset, Patisserie Valerie, now needs bailing out, was being generous. Even after the 2008 financial crisis confirmed that corporate incompetence warranted unwavering public scrutiny, too many ordinary people remain equally ignorant about the operations and capabilities of business leaders, even those, like Mr Johnson, whose influence extends far beyond his imperilled patisserie company.

Some of us, inexcusably, even struggle with the basic jargon of “black hole”. As in: “The owner of Patisserie Valerie has been plunged into financial crisis after it revealed a multimillion pound accounting black hole.” Is it the same sort of black hole that astonished managers at Carillion, following a “deterioration in cashflows”? Or is it an industry synonym for the “material shortfall” disclosed by the Patisserie Valerie board, “between the reported financial status and the current financial status of the business”?

Either way, does the black hole’s existence mean that Mr Johnson must also be financially illiterate? Or is that question better addressed to Patisserie Valerie’s finance chief, Chris Marsh, with whom Johnson has worked since 2006? Marsh was arrested by the police, then released on bail.

Regrettably, at the very moment when an ordinary person struggles to comprehend how £28m in May became minus £10m by October, and why one creditor, HMRC, should be pursuing an unpaid tax bill of £1.4m – and what that tells us about the company’s leadership – it appears that Mr Johnson is taking a break from his weekly newspaper column. Its absence is the more acute, now that its author, expert on subjects such as red tape, Brexit and other people’s incompetence, has also fallen silent on Twitter; and his popular personal website seems, at the time of writing, to have vanished. With luck, it won’t be too long before he is sharing details of his mercy dash on Evan Davis’s The Bottom Line: “Providing insight into business from the people at the top.”

Happily, as others have noted, some of Mr Johnson’s earlier columns have addressed related issues such as, recently, “a business beginner’s guide to tried and tested swindles”. Watch out, he warns, for non-payment of creditors, dodgy advisers and attempts to overcomplicate things, so as to baffle the many people – unlike himself – who “do not understand the technicalities of investing or accounting”.

Inevitably, that widespread ignorance makes it hard to judge how much of Johnson’s wide-ranging, pre-existing advice, which has recently focused on Brexit, we can safely discard as, if not consistently hilarious, worthless. His chairmanship of Patisserie Valerie has, after all, repeatedly been cited, in the same way as Dyson’s profits and Tim Martin’s pubs, as the main reason to listen to him deprecate the EU, with his own achievements (pre-black hole), proving that “this is a great country in which to do business and prosper”.

Although Johnson is no different from other business celebrities, such as Dyson, Branson and Trump, in having parlayed business success into guru status, he has, more unusually, further set himself up as a kind of entrepreneur-moralist, with a biblical line in rebukes. Here he is, against – I think – overpaid government regulators: “Political leaders who want to foster world-beating companies must act decisively and, as with any transformation, slash off the gangrenous limbs without mercy.” Critics of rich people are warned: “Envy is a ruinous trait – as well as one of the deadly sins – and a sordid national characteristic.”
 
Like any half-decent moralist, he alternates rants with hints for personal salvation, through thrift, reliability and, again, financial literacy: “I am surprised how many senior managers I meet cannot read a cashflow statement.”

By way of authority, even Johnson’s less scorching capitalist homilies are littered with references to the usual suspects – Napoleon, Samuel Smiles and Marcus Aurelius – less usually, the scriptures and “the 19th-century philosopher Herbert Spencer”, not forgetting, shamelessly, Ayn Rand. “Those who possess willpower,” Johnson echoes, “seize the day and actively control their destiny.” Less gifted individuals are dismissed as lazy idiots, fools, inferiors who will never get the chance to close down a chain of well-regarded bookshops or, as now, bail out their own patisseries.

That Johnson should, on the back of this stuff, and the cake shops, have risen to yet greater prominence as a notable Vote Leave backer, his blessing sought by Theresa May, is perhaps no more absurd than, earlier, was David Cameron’s promotion of the Topshop brute, Philip Green, or elevation of JCB’s Anthony Bamford (previously fined by the EU). The myth of the disinterested entrepreneur-consultant seems ineradicable.

In Brexit, Johnson and his like-minded entrepreneurs have, however, discovered a yet more rewarding platform on which to portray their regulation-averse interests as a purely patriotic project.

Entrepreneurs, Johnson has written, on this favourite subject, are “the anarchists of the business world. Their mission is to overthrow the existing order.” Every entrepreneur is “a disruptor and a libertarian”, or would be “if the state sets a sensible framework and gets out of the way”. He explains that the word “chancer” properly describes risk-takers like him, who are willing to make mistakes, probably through excessive impetuosity, or as others might think of it, recklessness. “Probably the most common and devastating mistake I’ve made,” he wrote, “is to choose the wrong business partners.” As for abiding by the rules of the game: “It is the nature of risk-takers to be in a ferocious hurry to become successful, which frequently means cutting corners.”

Thus, even before last week’s disclosures about Patisserie Valerie, Johnson’s own columns amounted to the best possible case for ignoring the entrepreneur lobby on Brexit – indeed, on every subject other than their own, risk-taking genius.

Thursday 3 May 2018

Big Tech is sorry. Why Silicon Valley can’t fix itself

Tech insiders have finally started admitting their mistakes – but the solutions they are offering could just help the big players get even more powerful. By Ben Tarnoff and Moira Weigel in The Guardian 


Big Tech is sorry. After decades of rarely apologising for anything, Silicon Valley suddenly seems to be apologising for everything. They are sorry about the trolls. They are sorry about the bots. They are sorry about the fake news and the Russians, and the cartoons that are terrifying your kids on YouTube. But they are especially sorry about our brains.

Sean Parker, the former president of Facebook – who was played by Justin Timberlake in The Social Network – has publicly lamented the “unintended consequences” of the platform he helped create: “God only knows what it’s doing to our children’s brains.” Justin Rosenstein, an engineer who helped build Facebook’s “like” button and Gchat, regrets having contributed to technology that he now considers psychologically damaging, too. “Everyone is distracted,” Rosenstein says. “All of the time.” 

Ever since the internet became widely used by the public in the 1990s, users have heard warnings that it is bad for us. In the early years, many commentators described cyberspace as a parallel universe that could swallow enthusiasts whole. The media fretted about kids talking to strangers and finding porn. A prominent 1998 study from Carnegie Mellon University claimed that spending time online made you lonely, depressed and antisocial.

In the mid-2000s, as the internet moved on to mobile devices, physical and virtual life began to merge. Bullish pundits celebrated the “cognitive surplus” unlocked by crowdsourcing and the tech-savvy campaigns of Barack Obama, the “internet president”. But, alongside these optimistic voices, darker warnings persisted. Nicholas Carr’s The Shallows (2010) argued that search engines were making people stupid, while Eli Pariser’s The Filter Bubble (2011) claimed algorithms made us insular by showing us only what we wanted to see. In Alone, Together (2011) and Reclaiming Conversation (2015), Sherry Turkle warned that constant connectivity was making meaningful interaction impossible.

Still, inside the industry, techno-utopianism prevailed. Silicon Valley seemed to assume that the tools they were building were always forces for good – and that anyone who questioned them was a crank or a luddite. In the face of an anti-tech backlash that has surged since the 2016 election, however, this faith appears to be faltering. Prominent people in the industry are beginning to acknowledge that their products may have harmful effects.

Internet anxiety isn’t new. But never before have so many notable figures within the industry seemed so anxious about the world they have made. Parker, Rosenstein and the other insiders now talking about the harms of smartphones and social media belong to an informal yet influential current of tech critics emerging within Silicon Valley. You could call them the “tech humanists”. Amid rising public concern about the power of the industry, they argue that the primary problem with its products is that they threaten our health and our humanity.

It is clear that these products are designed to be maximally addictive, in order to harvest as much of our attention as they can. Tech humanists say this business model is both unhealthy and inhumane – that it damages our psychological well-being and conditions us to behave in ways that diminish our humanity. The main solution that they propose is better design. By redesigning technology to be less addictive and less manipulative, they believe we can make it healthier – we can realign technology with our humanity and build products that don’t “hijack” our minds.

The hub of the new tech humanism is the Center for Humane Technology in San Francisco. Founded earlier this year, the nonprofit has assembled an impressive roster of advisers, including investor Roger McNamee, Lyft president John Zimmer, and Rosenstein. But its most prominent spokesman is executive director Tristan Harris, a former “design ethicist” at Google who has been hailed by the Atlantic magazine as “the closest thing Silicon Valley has to a conscience”. Harris has spent years trying to persuade the industry of the dangers of tech addiction. In February, Pierre Omidyar, the billionaire founder of eBay, launched a related initiative: the Tech and Society Solutions Lab, which aims to “maximise the tech industry’s contributions to a healthy society”.

As suspicion of Silicon Valley grows, the tech humanists are making a bid to become tech’s loyal opposition. They are using their insider credentials to promote a particular diagnosis of where tech went wrong and of how to get it back on track. For this, they have been getting a lot of attention. As the backlash against tech has grown, so too has the appeal of techies repenting for their sins. The Center for Humane Technology has been profiled – and praised – by the New York Times, the Atlantic, Wired and others.

But tech humanism’s influence cannot be measured solely by the positive media coverage it has received. The real reason tech humanism matters is because some of the most powerful people in the industry are starting to speak its idiom. Snap CEO Evan Spiegel has warned about social media’s role in encouraging “mindless scrambles for friends or unworthy distractions”, and Twitter boss Jack Dorsey recently claimed he wants to improve the platform’s “conversational health”. 

Even Mark Zuckerberg, famous for encouraging his engineers to “move fast and break things”, seems to be taking a tech humanist turn. In January, he announced that Facebook had a new priority: maximising “time well spent” on the platform, rather than total time spent. By “time well spent”, Zuckerberg means time spent interacting with “friends” rather than businesses, brands or media sources. He said the News Feed algorithm was already prioritising these “more meaningful” activities.

Zuckerberg’s choice of words is significant: Time Well Spent is the name of the advocacy group that Harris led before co-founding the Center for Humane Technology. In April, Zuckerberg brought the phrase to Capitol Hill. When a photographer snapped a picture of the notes Zuckerberg used while testifying before the Senate, they included a discussion of Facebook’s new emphasis on “time well spent”, under the heading “wellbeing”.

This new concern for “wellbeing” may strike some observers as a welcome development. After years of ignoring their critics, industry leaders are finally acknowledging that problems exist. Tech humanists deserve credit for drawing attention to one of those problems – the manipulative design decisions made by Silicon Valley.

But these decisions are only symptoms of a larger issue: the fact that the digital infrastructures that increasingly shape our personal, social and civic lives are owned and controlled by a few billionaires. Because it ignores the question of power, the tech-humanist diagnosis is incomplete – and could even help the industry evade meaningful reform. Taken up by leaders such as Zuckerberg, tech humanism is likely to result in only superficial changes. These changes may soothe some of the popular anger directed towards the tech industry, but they will not address the origin of that anger. If anything, they will make Silicon Valley even more powerful.

The Center for Humane Technology argues that technology must be “aligned” with humanity – and that the best way to accomplish this is through better design. Their website features a section entitled The Way Forward. A familiar evolutionary image shows the silhouettes of several simians, rising from their crouches to become a man, who then turns back to contemplate his history.

“In the future, we will look back at today as a turning point towards humane design,” the header reads. To the litany of problems caused by “technology that extracts attention and erodes society”, the text asserts that “humane design is the solution”. Drawing on the rhetoric of the “design thinking” philosophy that has long suffused Silicon Valley, the website explains that humane design “starts by understanding our most vulnerable human instincts so we can design compassionately”.

There is a good reason why the language of tech humanism is penetrating the upper echelons of the tech industry so easily: this language is not foreign to Silicon Valley. On the contrary, “humanising” technology has long been its central ambition and the source of its power. It was precisely by developing a “humanised” form of computing that entrepreneurs such as Steve Jobs brought computing into millions of users’ everyday lives. Their success turned the Bay Area tech industry into a global powerhouse – and produced the digitised world that today’s tech humanists now lament.

The story begins in the 1960s, when Silicon Valley was still a handful of electronics firms clustered among fruit orchards. Computers came in the form of mainframes then. These machines were big, expensive and difficult to use. Only corporations, universities and government agencies could afford them, and they were reserved for specialised tasks, such as calculating missile trajectories or credit scores.

Computing was industrial, in other words, not personal, and Silicon Valley remained dependent on a small number of big institutional clients. The practical danger that this dependency posed became clear in the early 1960s, when the US Department of Defense, by far the single biggest buyer of digital components, began cutting back on its purchases. But the fall in military procurement wasn’t the only mid-century crisis around computing.

Computers also had an image problem. The inaccessibility of mainframes made them easy to demonise. In these whirring hulks of digital machinery, many observers saw something inhuman, even evil. To antiwar activists, computers were weapons of the war machine that was killing thousands in Vietnam. To highbrow commentators such as the social critic Lewis Mumford, computers were instruments of a creeping technocracy that threatened to extinguish personal freedom.

But during the course of the 1960s and 70s, a series of experiments in northern California helped solve both problems. These experiments yielded breakthrough innovations like the graphical user interface, the mouse and the microprocessor. Computers became smaller, more usable and more interactive, reducing Silicon Valley’s reliance on a few large customers while giving digital technology a friendlier face.

The pioneers who led this transformation believed they were making computing more human. They drew deeply from the counterculture of the period, and its fixation on developing “human” modes of living. They wanted their machines to be “extensions of man”, in the words of Marshall McLuhan, and to unlock “human potential” rather than repress it. At the centre of this ecosystem of hobbyists, hackers, hippies and professional engineers was Stewart Brand, famed entrepreneur of the counterculture and founder of the Whole Earth Catalog. In a famous 1972 article for Rolling Stone, Brand called for a new model of computing that “served human interest, not machine”.

Brand’s disciples answered this call by developing the technical innovations that transformed computers into the form we recognise today. They also promoted a new way of thinking about computers – not as impersonal slabs of machinery, but as tools for unleashing “human potential”.

No single figure contributed more to this transformation of computing than Steve Jobs, who was a fan of Brand and a reader of the Whole Earth Catalog. Jobs fulfilled Brand’s vision on a global scale, launching the mass personal computing era with the Macintosh in the mid-80s, and the mass smartphone era with the iPhone two decades later. Brand later acknowledged that Jobs embodied the Whole Earth Catalog ethos. “He got the notion of tools for human use,” Brand told Jobs’ biographer, Walter Isaacson.

Building those “tools for human use” turned out to be great for business. The impulse to humanise computing enabled Silicon Valley to enter every crevice of our lives. From phones to tablets to laptops, we are surrounded by devices that have fulfilled the demands of the counterculture for digital connectivity, interactivity and self-expression. Your iPhone responds to the slightest touch; you can look at photos of anyone you have ever known, and broadcast anything you want to all of them, at any moment.

In short, the effort to humanise computing produced the very situation that the tech humanists now consider dehumanising: a wilderness of screens where digital devices chase every last instant of our attention. To guide us out of that wilderness, tech humanists say we need more humanising. They believe we can use better design to make technology serve human nature rather than exploit and corrupt it. But this idea is drawn from the same tradition that created the world that tech humanists believe is distracting and damaging us.

Tech humanists say they want to align humanity and technology. But this project is based on a deep misunderstanding of the relationship between humanity and technology: namely, the fantasy that these two entities could ever exist in separation.

It is difficult to imagine human beings without technology. The story of our species began when we began to make tools. Homo habilis, the first members of our genus, left sharpened stones scattered across Africa. Their successors hit rocks against each other to make sparks, and thus fire. With fire you could cook meat and clear land for planting; with ash you could fertilise the soil; with smoke you could make signals. In flickering light, our ancestors painted animals on cave walls. The ancient tragedian Aeschylus recalled this era mythically: Prometheus, in stealing fire from the gods, “founded all the arts of men.”

All of which is to say: humanity and technology are not only entangled, they constantly change together. This is not just a metaphor. Recent research suggests that the human hand evolved to manipulate the stone tools that our ancestors used. The evolutionary scientist Mary Marzke shows that we developed “a unique pattern of muscle architecture and joint surface form and functions” for this purpose.

The ways our bodies and brains change in conjunction with the tools we make have long inspired anxieties that “we” are losing some essential qualities. For millennia, people have feared that new media were eroding the very powers that they promised to extend. In The Phaedrus, Socrates warned that writing on wax tablets would make people forgetful. If you could jot something down, you wouldn’t have to remember it. In the late middle ages, as a culture of copying manuscripts gave way to printed books, teachers warned that pupils would become careless, since they no longer had to transcribe what their teachers said.

Yet as we lose certain capacities, we gain new ones. People who used to navigate the seas by following stars can now program computers to steer container ships from afar. Your grandmother probably has better handwriting than you do – but you probably type faster.

The nature of human nature is that it changes. It cannot, therefore, serve as a stable basis for evaluating the impact of technology. Yet the assumption that it doesn’t change serves a useful purpose. Treating human nature as something static, pure and essential elevates the speaker into a position of power. Claiming to tell us who we are, they tell us how we should be.

Intentionally or not, this is what tech humanists are doing when they talk about technology as threatening human nature – as if human nature had stayed the same from the paleolithic era until the rollout of the iPhone. Holding humanity and technology separate clears the way for a small group of humans to determine the proper alignment between them. And while the tech humanists may believe they are acting in the common good, they themselves acknowledge they are doing so from above, as elites. “We have a moral responsibility to steer people’s thoughts ethically,” Tristan Harris has declared.

Harris and his fellow tech humanists also frequently invoke the language of public health. The Center for Humane Technology’s Roger McNamee has gone so far as to call public health “the root of the whole thing”, and Harris has compared using Snapchat to smoking cigarettes. The public-health framing casts the tech humanists in a paternalistic role. Resolving a public health crisis requires public health expertise. It also precludes the possibility of democratic debate. You don’t put the question of how to treat a disease up for a vote – you call a doctor.

This paternalism produces a central irony of tech humanism: the language that they use to describe users is often dehumanising. “Facebook appeals to your lizard brain – primarily fear and anger,” says McNamee. Harris echoes this sentiment: “Imagine you had an input cable,” he has said. “You’re trying to jack it into a human being. Do you want to jack it into their reptilian brain, or do you want to jack it into their more reflective self?”

The Center for Humane Technology’s website offers tips on how to build a more reflective and less reptilian relationship to your smartphone: “going greyscale” by setting your screen to black-and-white, turning off app notifications and charging your device outside your bedroom. It has also announced two major initiatives: a national campaign to raise awareness about technology’s harmful effects on young people’s “digital health and well-being”; and a “Ledger of Harms” – a website that will compile information about the health effects of different technologies in order to guide engineers in building “healthier” products.

These initiatives may help some people reduce their smartphone use – a reasonable personal goal. But there are some humans who may not share this goal, and there need not be anything unhealthy about that. Many people rely on the internet for solace and solidarity, especially those who feel marginalised. The kid with autism may stare at his screen when surrounded by people, because it lets him tolerate being surrounded by people. For him, constant use of technology may not be destructive at all, but in fact life-saving.

Pathologising certain potentially beneficial behaviours as “sick” isn’t the only problem with the Center for Humane Technology’s proposals. They also remain confined to the personal level, aiming to redesign how the individual user interacts with technology rather than tackling the industry’s structural failures. Tech humanism fails to address the root cause of the tech backlash: the fact that a small handful of corporations own our digital lives and strip-mine them for profit. This is a fundamentally political and collective issue. But by framing the problem in terms of health and humanity, and the solution in terms of design, the tech humanists personalise and depoliticise it.

This may be why their approach is so appealing to the tech industry. There is no reason to doubt the good intentions of tech humanists, who may genuinely want to address the problems fuelling the tech backlash. But they are handing the firms that caused those problems a valuable weapon. Far from challenging Silicon Valley, tech humanism offers Silicon Valley a useful way to pacify public concerns without surrendering any of its enormous wealth and power. By channelling popular anger at Big Tech into concerns about health and humanity, tech humanism gives corporate giants such as Facebook a way to avoid real democratic control. In a moment of danger, it may even help them protect their profits.

One can easily imagine a version of Facebook that embraces the principles of tech humanism while remaining a profitable and powerful monopoly. In fact, these principles could make Facebook even more profitable and powerful, by opening up new business opportunities. That seems to be exactly what Facebook has planned.

When Zuckerberg announced that Facebook would prioritise “time well spent” over total time spent, it came a couple of weeks before the company released its 2017 Q4 earnings. These reported that total time spent on the platform had dropped by around 5%, or about 50m hours per day. But, Zuckerberg said, this was by design: in particular, it was in response to tweaks to the News Feed that prioritised “meaningful” interactions with “friends” rather than consuming “public content” like video and news. This would ensure that “Facebook isn’t just fun, but also good for people’s well-being”.

Zuckerberg said he expected those changes would continue to decrease total time spent – but “the time you do spend on Facebook will be more valuable”. This may describe what users find valuable – but it also refers to what Facebook finds valuable. In a recent interview, he said: “Over the long term, even if time spent goes down, if people are spending more time on Facebook actually building relationships with people they care about, then that’s going to build a stronger community and build a stronger business, regardless of what Wall Street thinks about it in the near term.”

Sheryl Sandberg has also stressed that the shift will create “more monetisation opportunities”. How? Everyone knows data is the lifeblood of Facebook – but not all data is created equal. One of the most valuable sources of data to Facebook is used to inform a metric called “coefficient”. This measures the strength of a connection between two users – Zuckerberg once called it “an index for each relationship”. Facebook records every interaction you have with another user – from liking a friend’s post or viewing their profile, to sending them a message. These activities provide Facebook with a sense of how close you are to another person, and different activities are weighted differently. Messaging, for instance, is considered the strongest signal. It’s reasonable to assume that you’re closer to somebody you exchange messages with than somebody whose post you once liked.

Why is coefficient so valuable? Because Facebook uses it to create a Facebook they think you will like: it guides algorithmic decisions about what content you see and the order in which you see it. It also helps improve ad targeting, by showing you ads for things liked by friends with whom you often interact. Advertisers can target the closest friends of the users who already like a product, on the assumption that close friends tend to like the same things.
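The weighting idea described above can be sketched as a toy score. To be clear, the interaction types and weights below are illustrative assumptions for the sake of the sketch; Facebook’s actual “coefficient” formula and weightings are not public.

```python
# Hypothetical sketch of a "coefficient"-style affinity score between two
# users. Heavier weights go to stronger signals, e.g. a message is assumed
# to outweigh a like. All names and numbers here are invented.
INTERACTION_WEIGHTS = {
    "message": 5.0,
    "comment": 3.0,
    "profile_view": 2.0,
    "like": 1.0,
}

def coefficient(interactions):
    """Sum weighted interactions between two users.

    `interactions` is a list of (interaction_type, count) pairs; unknown
    interaction types contribute nothing to the score.
    """
    return sum(
        INTERACTION_WEIGHTS.get(kind, 0.0) * count
        for kind, count in interactions
    )

# A user who messages you often scores far higher than one who only
# likes the occasional post.
close_friend = coefficient([("message", 10), ("like", 3)])   # 53.0
acquaintance = coefficient([("like", 3)])                    # 3.0
print(close_friend > acquaintance)  # True
```

The point of the sketch is simply that some interactions are treated as stronger evidence of closeness than others, which is why a platform would rather you send ten messages than watch ten videos.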

 
Facebook CEO Mark Zuckerberg testifies before the US Senate last month. Photograph: Jim Watson/AFP/Getty Images

So when Zuckerberg talks about wanting to increase “meaningful” interactions and building relationships, he is not succumbing to pressure to take better care of his users. Rather, emphasising time well spent means creating a Facebook that prioritises data-rich personal interactions that Facebook can use to make a more engaging platform. Rather than spending a lot of time doing things that Facebook doesn’t find valuable – such as watching viral videos – you can spend a bit less time, but spend it doing things that Facebook does find valuable.

In other words, “time well spent” means Facebook can monetise more efficiently. It can prioritise the intensity of data extraction over its extensiveness. This is a wise business move, disguised as a concession to critics. Shifting to this model not only sidesteps concerns about tech addiction – it also acknowledges certain basic limits to Facebook’s current growth model. There are only so many hours in the day. Facebook can’t keep prioritising total time spent – it has to extract more value from less time.

In many ways, this process recalls an earlier stage in the evolution of capitalism. In the 19th century, factory owners in England discovered they could only make so much money by extending the length of the working day. At some point, workers would die of exhaustion, or they would revolt, or they would push parliament to pass laws that limited their working hours. So industrialists had to find ways to make the time of the worker more valuable – to extract more money from each moment rather than adding more moments. They did this by making industrial production more efficient: developing new technologies and techniques that squeezed more value out of the worker and stretched that value further than ever before.

A similar situation confronts Facebook today. They have to make the attention of the user more valuable – and the language and concepts of tech humanism can help them do it. So far, it seems to be working. Despite the reported drop in total time spent, Facebook recently announced huge 2018 Q1 earnings of $11.97bn (£8.7bn), smashing Wall Street estimates by nearly $600m.

Today’s tech humanists come from a tradition with deep roots in Silicon Valley. Like their predecessors, they believe that technology and humanity are distinct, but can be harmonised. This belief guided the generations who built the “humanised” machines that became the basis for the industry’s enormous power. Today it may provide Silicon Valley with a way to protect that power from a growing public backlash – and even deepen it by uncovering new opportunities for profit-making.

Fortunately, there is another way of thinking about how to live with technology – one that is both truer to the history of our species and useful for building a more democratic future. This tradition does not address “humanity” in the abstract, but as distinct human beings, whose capacities are shaped by the tools they use. It sees us as hybrids of animal and machine – as “cyborgs”, to quote the biologist and philosopher of science Donna Haraway.

To say that we’re all cyborgs is not to say that all technologies are good for us, or that we should embrace every new invention. But it does suggest that living well with technology can’t be a matter of making technology more “human”. This goal isn’t just impossible – it’s also dangerous, because it puts us at the mercy of experts who tell us how to be human. It cedes control of our technological future to those who believe they know what’s best for us because they understand the essential truths about our species.

The cyborg way of thinking, by contrast, tells us that our species is essentially technological. We change as we change our tools, and our tools change us. But even though our continuous co-evolution with our machines is inevitable, the way it unfolds is not. Rather, it is determined by who owns and runs those machines. It is a question of power.

Today, that power is wielded by corporations, which own our technology and run it for profit. The various scandals that have stoked the tech backlash all share a single source. Surveillance, fake news and the miserable working conditions in Amazon’s warehouses are profitable. If they were not, they would not exist. They are symptoms of a profound democratic deficit inflicted by a system that prioritises the wealth of the few over the needs and desires of the many.

There is an alternative. If being technological is a feature of being human, then the power to shape how we live with technology should be a fundamental human right. The decisions that most affect our technological lives are far too important to be left to Mark Zuckerberg, rich investors or a handful of “humane designers”. They should be made by everyone, together.

Rather than trying to humanise technology, then, we should be trying to democratise it. We should be demanding that society as a whole gets to decide how we live with technology – rather than the small group of people who have captured society’s wealth.

What does this mean in practice? First, it requires limiting and eroding Silicon Valley’s power. Antitrust laws and tax policy offer useful ways to claw back the fortunes Big Tech has built on common resources. After all, Silicon Valley wouldn’t exist without billions of dollars of public funding, not to mention the vast quantities of information that we all provide for free. Facebook’s market capitalisation is $500bn with 2.2 billion users – do the math to estimate how much the time you spend on Facebook is worth. You could apply the same logic to Google. There is no escape: whether or not you have an account, both platforms track you around the internet.

In addition to taxing and shrinking tech firms, democratic governments should be making rules about how those firms are allowed to behave – rules that restrict how they can collect and use our personal data, for instance, like the General Data Protection Regulation coming into effect in the European Union later this month. But more robust regulation of Silicon Valley isn’t enough. We also need to pry the ownership of our digital infrastructure away from private firms. 

This means developing publicly and co-operatively owned alternatives that empower workers, users and citizens to determine how they are run. These democratic digital structures can focus on serving personal and social needs rather than piling up profits for investors. One inspiring example is municipal broadband: a successful experiment in Chattanooga, Tennessee, has shown that publicly owned internet service providers can supply better service at lower cost than private firms. Other models of digital democracy might include a worker-owned Uber, a user-owned Facebook or a socially owned “smart city” of the kind being developed in Barcelona. Alternatively, we might demand that tech firms pay for the privilege of extracting our data, so that we can collectively benefit from a resource we collectively create.

More experimentation is needed, but democracy should be our guiding principle. The stakes are high. Never before have so many people been thinking about the problems produced by the tech industry and how to solve them. The tech backlash is an enormous opportunity – and one that may not come again for a long time.

The old techno-utopianism is crumbling. What will replace it? Silicon Valley says it wants to make the world a better place. Fulfilling this promise may require a new kind of disruption.