
Showing posts with label Google. Show all posts

Thursday, 22 April 2021

The European Super League is the perfect metaphor for global capitalism

From elite football to tech giants, our lives are increasingly governed by ‘free’ markets that turn out to be rigged, writes Larry Elliott in The Guardian

‘The organisers of the ESL have taken textbook free-market capitalism and turned it on its head.’ Graffiti showing the Juventus president, Andrea Agnelli, near the headquarters of the Italian Football Federation in Rome. Photograph: Filippo Monteforte/AFP/Getty Images


Back in the days of the Soviet Union, it was common to hear people on the left criticise the Kremlin for pursuing the wrong kind of socialism. There was nothing wrong with the theory, they said, rather the warped form of it conducted behind the iron curtain. 

The same argument has surfaced this week amid the furious response to the now-aborted plans to form a European Super League for 20 football clubs, only this time from the right. Free-market purists say they hate the idea because it is the wrong form of capitalism.

They are both right and wrong about that. Free-market capitalism is supposed to work through competition, which means no barriers to entry for new, innovative products. In football’s case, that would be a go-ahead small club with a manager trying radical new training methods and fielding a crop of players it had nurtured itself or invested in through the transfer market. The league-winning Derby County and Nottingham Forest teams developed by Brian Clough in the 1970s would be an example of this.

Supporters of free-market capitalism say that the system can tolerate inequality provided there is the opportunity to better yourself. They are opposed to cartels and firms that use their market power to protect themselves from smaller and nimbler rivals. Nor do they like rentier capitalism, which is where people can make large returns from assets they happen to own but without doing anything themselves.

The organisers of the ESL have taken textbook free-market capitalism and turned it on its head. Having 15 of the 20 places guaranteed for the founder members represents a colossal barrier to entry and clearly stifles competition. There is not much chance of “creative destruction” if an elite group of clubs can entrench their position by trousering the bulk of the TV receipts that their matches will generate. Owners of the clubs are classic rentier capitalists.

Where the free-market critics of the ESL are wrong is in thinking the ESL is some sort of aberration, a one-off deviation from established practice, rather than a metaphor for what global capitalism has become: an edifice built on piles of debt where the owners of businesses say they love competition but do everything they can to avoid it. Just as the top European clubs have feeder teams that they can exploit for new talent, so the US tech giants have been busy buying up anything that looks like providing competition. It is why Google has bought a slew of rival online advertising vendors and why Facebook bought Instagram and WhatsApp.

For those who want to understand how the economics of football have changed, a good starting point is The Glory Game, a book Hunter Davies wrote about his life behind the scenes with Tottenham Hotspur, one of the wannabe members of the ESL, in the 1971-72 season. (Full disclosure: I am a Spurs season ticket holder.)

Davies’s book devotes a chapter to the directors of Spurs in the early 1970s, who were all lifelong supporters of the club and who received no payment for their services. They lived in Enfield, not in the Bahamas, which is where the current owner, Joe Lewis, resides as a tax exile. These were not radical men. They could not conceive of there ever being women on the board; they opposed advertising inside the ground and were only just coming round to the idea of a club shop to sell official Spurs merchandise. They were conservative in all senses of the word.

In the intervening half century, the men who made their money out of nuts and bolts and waste paper firms in north London have been replaced by oligarchs and hedge funds. TV, barely mentioned in The Glory Game, has arrived with its billions of pounds in revenue. Facilities have improved and the players are fitter, stronger and much better paid than those of the early 1970s. In very few sectors of modern Britain can it be said that the workers receive the full fruits of their labours: the Premier League is one of them.

Even so, the model is not really working and would have worked even less well had the ESL come about. And it goes a lot deeper than greed, something that can hardly be said to be new to football.

No question, greed is part of the story, because for some clubs the prospect of sharing an initial €3.5bn (£3bn) pot of money was just too tempting given their debts, but there was also a problem with the product on offer.

Some of the competitive verve has already been sucked out of football thanks to the concentration of wealth. In the 1970s, there was far more chance of a less prosperous club having their moment of glory: not only did Derby and Forest win the league, but Sunderland, Southampton and Ipswich won the FA Cup. Fans can accept the despair of defeat if they can occasionally hope for the thrill of victory, but the ESL was essentially a way for an elite to insulate itself against the risk of failure.

By presenting their half-baked idea in the way they did, the ESL clubs committed one of capitalism’s cardinal sins: they damaged their own brand. Companies – especially those that rely on loyalty to their product – do that at their peril, not least because it forces politicians to respond. Supporters have power and so do governments, if they choose to exercise it.

The ESL has demonstrated that global capitalism operates on the basis of rigged markets not free markets, and those running the show are only interested in entrenching existing inequalities. It was a truly bad idea, but by providing a lesson in economics to millions of fans it may have performed a public service.

Sunday, 29 October 2017

Dons, donors and the murky business of funding universities

John Lloyd in The Financial Times

The University of Oxford is in constant need of money — and it takes an approach to raising it that oscillates between the severe and the relaxed. Those familiar with its procedures say many would-be donors have been turned away. No names are given, outside of senior common room gossip. “Oxford doesn’t need to compromise,” says Sir Anthony Seldon, vice-chancellor of the independent Buckingham University. “People want to be associated with it.” But that confident sense that the great universities will do the right thing has been called into question by a Swedish academic who has thrown down the gauntlet to one of Oxford’s most prominent donors. 

For many centuries the deal has been clear: donations buy gratitude and even a named chair or library, but no rights to influence the running of the institution. In return, barring evidence of illegality, the university will not probe the funder’s finances. “You don’t have to like sponsors,” says the Canadian scholar Margaret MacMillan, an admired contemporary historian and former warden of St Antony’s College, Oxford. “But if they don’t interfere with your teaching and your choice of colleagues, then the rest is their own affair.” 

The Rhodes scholarship is a case in point. It began in 1902 with a bequest from Cecil Rhodes, the enthusiastic imperialist who argued that Anglo-Saxons deserved to be the dominant global race. His scholarship was founded to bring “the whole of the uncivilised world under British rule”, by funding young men to Oxford. Two years ago a South African Rhodes scholar, Ntokozo Qwabe, started a campaign to recognise the “colonial genocide” underpinning Rhodes’ wealth. He called for the removal of his statue from Oriel College, Rhodes’ alma mater. The campaign escalated, but the university and college resisted and the statue still stands. 


A more recent bequest, beginning with £20m in 1985 and rising to over £50m, is that of the Syrian-born businessman Wafic Saïd to Oxford’s business school, which bears his name. With high-level contacts in the Saudi royal family, Saïd had helped to arrange the Al-Yamamah contracts between Saudi Arabia and British Aerospace and other UK companies from the mid-1980s onwards, worth some £44bn. In the 2000s it emerged that millions of pounds had been paid to senior Saudi royals to smooth the deal. BAE agreed to pay over £250m in the US in 2010, after the Department of Justice found it guilty of “intentionally failing to put appropriate anti-bribery preventive measures in place”. No wrongdoing was proved against Saïd, who said he had received no commissions for assisting in the deal. Last year, he opened legal proceedings against Barclays Bank, which had forced him to close several accounts, and had told him he was no longer welcome as a customer. (He later dropped the lawsuit after the bank apologised and confirmed the closures had been a business decision that was not based on any wrongdoing in relation to account activity.) 

The traditional argument justifying such relationships is ensuring a robust division between gift and subsequent influence. The economist and FT commentator John Kay, the first director of the Saïd Business School (1997-99), says he takes “a relaxed view of the relationship between the leadership of a university or college and the donor. It’s rare to have a very rich donor who has accumulated his wealth by simple hard work and dedication to honest business. There’s often something like monopolistic practices. You cannot accept stolen money, but who is to decide what is stolen? Money from the oligarchs? From Nigerian businessmen?” 

Seldon at Buckingham adds, “Even if it’s bad money, it can serve good causes. The key thing is that there are no conditions attached, and that there is a clear statement of the establishment of a firewall between the money and the decisions of the institute. We would all be in a pickle if we were to be morally pure.” 

Bo Rothstein, a former professor at Oxford University who resigned his post in protest against one of its funders.

But moral purity has come to Oxford, in the shape of Professor Bo Rothstein. A fellow of Nuffield College, Rothstein is a Swedish sociologist whose work has centred on ethical issues, most recently on studies of corruption in government. In 2015, he joined the faculty at the Blavatnik School of Government — where, in early August, he learnt that Len Blavatnik, the billionaire Ukrainian-born businessman whose £75m gift had founded the school, had given $1m to help finance Donald Trump’s inauguration. Blavatnik, one of Britain’s richest men, was knighted this year for services to philanthropy: he has given large sums to the Tate Modern and, with the New York Academy of Sciences, has established the Blavatnik Award for Young Scientists. After a pause for reflection, Rothstein resigned in protest. 

Rothstein believes Trump is an existential danger to western values. To become entangled financially with such a man in any way, he argues, is an affront to both universal and university values. “I teach about the importance of rights,” he says. “How am I to explain to a student why I am giving legitimacy, by teaching at the school, to one who gives money to him? It’s impossible.” How far would you take this argument, he adds. “Would you take money from one who was a Nazi? Would you have a Hermann Göring chair of aviation?” 

Rothstein’s challenge to Oxford’s see-no-evil consensus has brought him into conflict with one of the university’s brightest stars, the economist and international-relations scholar Professor Ngaire Woods. It was Woods who conceived of and secured the funding for the Blavatnik School of Government, becoming its founding director in 2011. 

She disputes almost everything Rothstein says about the immediate aftermath of his resignation — something he refers to as an “excommunication”, because he was asked to leave the school very soon after his resignation. On the central issue, she says that “we do not tell our donors how to exercise their political points of view; they do not tell us how to run the institute. Len Blavatnik has never said anything to me about what I should do or how I should teach. Never. Not once. There was a representative of the donor on the building committee for the institute, as is the Oxford practice, and that has been the extent of it.” 

Woods believes Rothstein had not grasped the difference between supporting Trump’s campaign and giving to the inauguration. “Lots of people give money to the inauguration, because it can’t be paid for from government funds,” she points out. “You cannot seriously think that the institute is in some way linked to Trump. We teach our students to try to get the facts right, to reason and to learn from diversity. We recently held a ‘challenges to government’ conference, in which all the issues of governance were debated. We have an open, argumentative centre.” 

Rothstein’s campaign has been a lonely one, not least given that established opinion in Oxford is squarely against him. Macmillan says that “to give money for Trump’s inaugural was quite legal, a perfectly sensible thing to do. I think he [Rothstein] put himself in an indefensible position.” Kay commends Rothstein for having the courage of his convictions, and “not engaging in a protest which costs nothing in the way of harm to the protester, but accepting the damage this will do to his position”, but he believes he was wrong to act as he did. At Buckingham, Seldon says: “I have sympathy for what he has done, but if the Blavatnik institute gets money that is unattached, and it’s clear there must be no influence, then that is OK.” 

This consensual view is anathema to Rothstein, a Luther among Renaissance popes. “Trump is a very serious threat to liberal democracy. My colleagues think it’s not too serious. Some say we shouldn’t oppose him head on, but we should just give the platform to strong liberals and democrats. But I am not keen on that. It’s trying to take a middle course which, with Trump, now you cannot take.” 

Rothstein sees the infamous case involving the London School of Economics as proving his point. In 2008, the LSE’s Global Governance Centre accepted a donation of £1.5m from Saif al-Islam Gaddafi, son of the long-time despot of Libya, Muammer Gaddafi. 

Amid charges that a PhD had been awarded to Saif improperly, and after a speech in Tripoli in 2011 in which he promised “rivers of blood” to flow if protests against his father’s regime did not stop, the LSE acknowledged it had erred in pursuing the relationship and in taking the money. The then-director, Sir Howard Davies, resigned. Says Rothstein: “There are of course donors whose behaviour you cannot just ignore and say, ‘Well, it’s their business.’ ” 

Rothstein has at least one prominent supporter in the academy, back in his native Sweden: the president of the Stockholm School of Economics, Lars Strannegård. “I think he was right to do it. Things which a year ago were thought not even to be allowed to be said are now daily announced from the White House. This strikes at the core of what universities do. It is like when you dip a watercolour brush into water — the first time it is slightly darkened, then more, and more until it is completely dark.” 

The Blavatnik affair finds an echo in the 1951 CP Snow novel, The Masters. Set in a Cambridge college in 1937, it concerns a struggle over the election of a new master — the two main contestants being an establishment figure seeking to bolster his chances by attracting a donation, and a radical scientist determined the college should take a stance against the steady advance of fascism. 

But universities are now far from Snow’s times. Those who now run them attest to a much more harried life than in the past. The state has retreated from full funding — universities charge fees, and most have created units that raise money — but it now expects higher teaching and research standards. At the same time as the universities have come under more intense financial pressure, their student bodies have become more combative. Aside from the “Rhodes Must Fall” campaign, there has been a rash of “no platforming” incidents in which controversial speakers have been barred from appearing on campus. 

More threatening still are the campaigns to force universities to divest themselves of investments considered unethical. Cambridge University has ceased investment in coal and tar sand “heavy” oil, but the pressure to go further is intensifying. Many students, faculty members and influential figures including Rowan Williams, former Archbishop of Canterbury and now master of Magdalene College, are calling for Cambridge to divest from all energy companies. So far the university has resisted, but Nick Butler, a former senior executive in BP and now a visiting professor at King’s College London, believes the tide runs against them. “The universities don’t want to be told what to do with their money,” he says. “But I think that, since the protests will continue, more and more will give in.” 

Money is power, but so is a university, especially one as storied as Oxford. Large donors are not always kept at arm’s length, and influences can be subtle, a question of implicit understandings more than explicit direction. They can also be fruitful: as a co-founder of the university’s Reuters Institute for the Study of Journalism (2006), whose funding comes largely from the Thomson Reuters Foundation, I think it right that Reuters representatives sit on the committees of the institute — balancing those who represent the interests of the university. The idea was, in part, to have the academy and the journalism trade interact and inform each other — not always without friction, but always with benefits. 

Donorship is an increasingly complex business in the digital age. What once might have been a campus kerfuffle can become a global furore. Last month, Washington DC saw such a dispute when a scholar named Barry Lynn was fired from the New America Foundation think-tank (not attached to a university) by its chief executive, Anne-Marie Slaughter. Lynn had written a statement about Google and “other dominant platform monopolists” and called for more robust antitrust action against them. Google, a major funder of the foundation, complained via Eric Schmidt, executive chairman of its parent company, Alphabet, according to the New York Times. Slaughter, a former director of policy planning at the State Department, at first called the Times’ report false, then backtracked. She later conceded: “There are unavoidable tensions the minute you take corporation funding or foreign government funding.” 

Universities must now manage tensions more actively than before; in doing so, all make deals and ethical zigzags. The two main protagonists in this updated CP Snow imbroglio deserve each other, for both are driven: Woods, by a desire to fashion her school into a world centre for the study of good governance; Rothstein, by a hyperactive political conscience that demanded a demonstrative act, essential to dramatise the scale of the disaster that, he believes, the Trump presidency presages. Two beliefs clash: one, that continuing to offer a rational-liberal education will maintain and expand rational-liberal governance; the other, that these very assumptions are being destroyed, and that larger protest must be made. Both are, at root, principled. Both cannot be right.

Thursday, 12 October 2017

Data is not the new oil

How do you know when a pithy phrase or seductive idea has become fashionable in policy circles? When The Economist devotes a briefing to it.


Amol Rajan in BBC

In a briefing and accompanying editorial earlier this summer, that distinguished newspaper (it's a magazine, but still calls itself a newspaper, and I'm happy to indulge such eccentricity) argued that data is today what oil was a century ago.

As The Economist put it, "A new commodity spawns a lucrative, fast-growing industry, prompting anti-trust regulators to step in to restrain those who control its flow." Never mind that data isn't particularly new (though the volume may be) - this argument does, at first glance, have much to recommend it.

Just as a century ago those who got to the oil in the ground were able to amass vast wealth, establish near monopolies, and build the future economy on their own precious resource, so data companies like Facebook and Google are able to do similar now. With oil in the 20th century, a consensus eventually grew that it would be up to regulators to intervene and break up the oligopolies - or oiliogopolies - that threatened an excessive concentration of power.

Many impressive thinkers have detected similarities between data today and oil in yesteryear. John Thornhill, the Financial Times's Innovation Editor, has used the example of Alaska to argue that data companies should pay a universal basic income, another idea that has become highly fashionable in policy circles.

A drilling crew poses for a photograph at Spindletop Hill in Beaumont, Texas, where the first Texas oil gusher was discovered in 1901. Photograph: Getty Images

At first I was taken by the parallels between data and oil. But now I'm not so sure. As I argued in a series of tweets last week, there are such important differences between data today and oil a century ago that the comparison, while catchy, risks spreading a misunderstanding of how these new technology super-firms operate - and what to do about their power.

The first big difference is one of supply. There is a finite amount of oil in the ground, albeit that is still plenty, and we probably haven't found all of it. But data is virtually infinite. Its supply is super-abundant. In terms of basic supply, data is more like sunlight than oil: there is so much of it that our principal concern should be more what to do with it than where to find more, or how to share that which we've already found.

Data can also be re-used, and the same data can be used by different people for different reasons. Say I created a new email address. I might use it to register for a music service, where I left a footprint of my taste in music; a social media platform on which I upload photos of my baby son; and a search engine, where I indulge my fascination with reggae.

If, through that email address, a data company were able to access information about me or my friends, the music service, the social network and the search engine might all benefit from that one email address and all that is connected to it. This is different from oil. If a major oil company gets to an oil field in, say, Texas, it alone will have control of the oil there - and once the oil is used up, it's gone.


Legitimate fears

This points to another key difference: who controls the commodity. There are very legitimate fears about the use and abuse of personal data online - for instance, by foreign powers trying to influence elections. And very few people have a really clear idea about the digital footprint they have left online. If they did know, they might become obsessed with security. I know a few data fanatics who own several phones and indulge data-savvy habits, such as avoiding all text messages in favour of WhatsApp, which is encrypted.

But data is something which - in theory if not in practice - the user can control, and which ideally - though again the practice falls well short - spreads by consent. Going back to that oil company, it's largely up to them how they deploy the oil in the ground beneath Texas: how many barrels they take out every day, what price they sell it for, who they sell it to.

With my email address, it's up to me whether to give it to that music service, social network, or search engine. If I don't want people to know that I have an unhealthy obsession with bands such as The Wailers, The Pioneers and The Ethiopians, I can keep digitally schtum.

Now, I realise that in practice, very few people feel they have control over their personal data online; and retrieving your data isn't exactly easy. If I tried to reclaim, or wipe from the face of the earth, all the personal data that I've handed over to data companies, it'd be a full time job for the rest of my life and I'd never actually achieve it. That said, it is largely as a result of my choices that these firms have so much of my personal data.

Servers for data storage in Hafnarfjordur, Iceland, which is trying to make a name for itself in the business of data centres - warehouses that consume enormous amounts of energy to store the information of 3.2 billion internet users. Photograph: Getty Images

The final key difference is that the data industry is much faster to evolve than the oil industry was. Innovation is in the very DNA of big data companies, some of whose lifespans are pitifully short. As a result, regulation is much harder. That briefing in The Economist actually makes the point well that a previous model of regulation may not necessarily work for these new companies, who are forever adapting. That is not to say they should not be regulated; rather, that regulating them is something we haven't yet worked out how to do.

It is because the debate over regulation of these companies is so live that I think we need to interrogate superficially attractive ideas such as 'data is the new oil'. In fact, whereas finite but plentiful oil supplied a raw material for the industrial economy, data is a super-abundant resource in a post-industrial economy. Data companies increasingly control, and redefine, the nature of our public domain, rather than power our transport, or heat our homes.

Data today has something important in common with oil a century ago. But the tech titans are more media moguls than oil barons.

Sunday, 3 September 2017

Silicon Valley has been humbled. But its schemes are as dangerous as ever

Sex scandals, rows over terrorism, fears for its impact on social policy: the backlash against Big Tech has begun. Where will it end?


Evgeny Morozov in The Guardian


Just a decade ago, Silicon Valley pitched itself as a savvy ambassador of a newer, cooler, more humane kind of capitalism. It quickly became the darling of the elite, of the international media, and of that mythical, omniscient tribe: the “digital natives”. While an occasional critic – always easy to dismiss as a neo-Luddite – did voice concerns about their disregard for privacy or their geeky, almost autistic aloofness, public opinion was firmly on the side of technology firms.

Silicon Valley was the best that America had to offer; tech companies frequently occupied – and still do – top spots on lists of the world’s most admired brands. And there was much to admire: a highly dynamic, innovative industry, Silicon Valley has found a way to convert scrolls, likes and clicks into lofty political ideals, helping to export freedom, democracy and human rights to the Middle East and north Africa. Who knew that the only thing thwarting the global democratic revolution was capitalism’s inability to capture and monetise the eyeballs of strangers?

How things have changed. An industry once hailed for fuelling the Arab spring is today repeatedly accused of abetting Islamic State. An industry that prides itself on diversity and tolerance is now regularly in the news for cases of sexual harassment as well as the controversial views of its employees on matters such as gender equality. An industry that built its reputation on offering us free things and services is now regularly assailed for making other things – housing, above all – more expensive.

The Silicon Valley backlash is on. These days, one can hardly open a major newspaper – including such communist rags as the Financial Times and the Economist – without stumbling on passionate calls that demand curbs on the power of what is now frequently called “Big Tech”, from reclassifying digital platforms as utility companies to even nationalising them.

Meanwhile, Silicon Valley’s big secret – that the data produced by users of digital platforms often has economic value exceeding the value of the services rendered – is now also out in the open. Free social networking sounds like a good idea – but do you really want to surrender your privacy so that Mark Zuckerberg can run a foundation to rid the world of the problems that his company helps to perpetuate? Not everyone is so sure any longer. The Teflon industry is Teflon no more: the dirt thrown at it finally sticks – and this fact is lost on nobody.

Much of the brouhaha has caught Silicon Valley by surprise. Its ideas – disruption as a service, radical transparency as a way of being, an entire economy of gigs and shares – still dominate our culture. However, its global intellectual hegemony is built on shaky foundations: it stands on the post-political can-do allure of TED talks much more than on wonky thinktank reports and lobbying memorandums.

This is not to say that technology firms do not dabble in lobbying – here Alphabet is on a par with Goldman Sachs – nor to imply that they don’t steer academic research. In fact, on many tech policy issues it’s now difficult to find unbiased academics who have not received some Big Tech funding. Those who go against the grain find themselves in a rather precarious situation, as was recently shown by the fate of the Open Markets project at New America, an influential thinktank in Washington: its strong anti-monopoly stance appears to have angered New America’s chairman and major donor, Eric Schmidt, executive chairman of Alphabet. As a result, it was spun off from the thinktank.

Nonetheless, Big Tech’s political influence is not at the level of Wall Street or Big Oil. It’s hard to argue that Alphabet wields as much power over global technology policy as the likes of Goldman Sachs do over global financial and economic policy. For now, influential politicians – such as José Manuel Barroso, the former president of the European Commission – prefer to continue their careers at Goldman Sachs, not at Alphabet; it is also the former, not the latter, that fills vacant senior posts in Washington.

This will surely change. It’s obvious that the cheerful and utopian chatterboxes who make up TED talks no longer contribute much to boosting the legitimacy of the tech sector; fortunately, there’s a finite supply of bullshit on this planet. Big digital platforms will thus seek to acquire more policy leverage, following the playbook honed by the tobacco, oil and financial firms.

There are, however, two additional factors worth considering in order to understand where the current backlash against Big Tech might lead. First of all, short of a major privacy disaster, digital platforms will continue to be the world’s most admired and trusted brands – not least because they contrast so favourably with your average telecoms company or your average airline (say what you will of their rapaciousness, but tech firms don’t generally drag their customers off their flights).

And it is technology firms – American companies but also Chinese – that create the false impression that the global economy has recovered and everything is back to normal. Since January, the valuations of just four firms – Alphabet, Amazon, Facebook and Microsoft – have grown by an amount greater than the entire GDP of oil-rich Norway. Who would want to see this bubble burst? Nobody; in fact, those in power would rather see it grow some more.

The cultural power of Silicon Valley can be gleaned from the simple fact that no sensible politician dares to go to Wall Street for photo ops; everyone goes to Palo Alto to unveil their latest pro-innovation policy. Emmanuel Macron wants to turn France into a startup, not a hedge fund. There’s no other narrative in town that makes centrist, neoliberal policies look palatable and inevitable at the same time; politicians, however angry they might sound about Silicon Valley’s monopoly power, do not really have an alternative project. It’s not just Macron: from Italy’s Matteo Renzi to Canada’s Justin Trudeau, all mainstream politicians who have claimed to offer a clever break with the past also offer an implicit pact with Big Tech – or, at least, its ideas – in the future.

Second, Silicon Valley, being the home of venture capital, is good at spotting global trends early on. Its cleverest minds had sensed the backlash brewing before the rest of us. They also made the right call in deciding that wonky memos and thinktank reports won’t quell our discontent, and that many other problems – from growing inequality to the general unease about globalisation – will eventually be blamed on an industry that did little to cause them.

Silicon Valley’s brightest minds realised they needed bold proposals – a guaranteed basic income, a tax on robots, experiments with fully privatised cities to be run by technology companies outside of government jurisdiction – that would sow doubt in the minds of those who might have otherwise opted for conventional anti-monopoly legislation. If technology firms can play a constructive role in funding our basic income, if Alphabet or Amazon can run Detroit or New York with the same efficiency that they run their platforms, if Microsoft can infer signs of cancer from our search queries: should we really be putting obstacles in their way?

In the boldness and vagueness of its plans to save capitalism, Silicon Valley might out-TED the TED talks. There are many reasons why such attempts won’t succeed in their grand mission even if they would make these firms a lot of money in the short term and help delay public anger by another decade. The main reason is simple: how could one possibly expect a bunch of rent-extracting enterprises with business models that are reminiscent of feudalism to resuscitate global capitalism and to establish a new New Deal that would constrain the greed of capitalists, many of whom also happen to be the investors behind these firms?

Data might seem infinite but there’s no reason to believe that the enormous profits made from it would simply smooth over the many contradictions of the current economic system. A self-proclaimed caretaker of global capitalism, Silicon Valley is much more likely to end up as its undertaker.

Wednesday, 30 August 2017

We need to nationalise Google, Facebook and Amazon. Here’s why

A crisis is looming. These monopoly platforms hoovering up our data have no competition: they’re too big to serve the public interest

Nick Srnicek in The Guardian


For the briefest moment in March 2014, Facebook’s dominance looked under threat. Ello, amid much hype, presented itself as the non-corporate alternative to Facebook. According to the manifesto accompanying its public launch, Ello would never sell your data to third parties, rely on advertising to fund its service, or require you to use your real name.

The hype fizzled out as Facebook continued to expand. Yet Ello’s rapid rise and fall is symptomatic of our contemporary digital world and the monopoly-style power accruing to the 21st century’s new “platform” companies, such as Facebook, Google and Amazon. Their business model lets them siphon off revenues and data at an incredible pace, and consolidate themselves as the new masters of the economy. Monday brought another giant leap as Amazon raised the prospect of an international grocery price war by slashing prices on its first day in charge of the organic retailer Whole Foods.

The platform – an infrastructure that connects two or more groups and enables them to interact – is crucial to these companies’ power. None of them focuses on making things in the way that traditional companies once did. Instead, Facebook connects users, advertisers, and developers; Uber, riders and drivers; Amazon, buyers and sellers.

Reaching a critical mass of users is what makes these businesses successful: the more users, the more useful to users – and the more entrenched – they become. Ello’s rapid downfall occurred because it never reached the critical mass of users required to prompt an exodus from Facebook – whose dominance means that even if you’re frustrated by its advertising and tracking of your data, it’s still likely to be your first choice because that’s where everyone is, and that’s the point of a social network. Likewise with Uber: it makes sense for riders and drivers to use the app that connects them with the biggest number of people, regardless of the sexism of Travis Kalanick, the former chief executive, or the ugly ways in which it controls drivers, or the failures of the company to report serious sexual assaults by its drivers.

Network effects generate momentum that not only helps these platforms survive controversy, but makes it incredibly difficult for insurgents to replace them.

As a result, we have witnessed the rise of increasingly formidable platform monopolies. Google, Facebook and Amazon are the most important in the west. (China has its own tech ecosystem.) Google controls search, Facebook rules social media, and Amazon leads in e-commerce. And they are now exerting their power over non-platform companies – a tension likely to be exacerbated in the coming decades. Look at the state of journalism: Google and Facebook rake in record ad revenues through sophisticated algorithms; newspapers and magazines see advertisers flee, mass layoffs, the shuttering of expensive investigative journalism, and the collapse of major print titles like the Independent. A similar phenomenon is happening in retail, with Amazon’s dominance undermining old department stores.

Our growing reliance on data adds a further twist to these companies’ power. Data is quickly becoming the 21st-century version of oil – a resource essential to the entire global economy, and the focus of intense struggle to control it. Platforms, as spaces in which two or more groups interact, provide what is in effect an oil rig for data. Every interaction on a platform becomes another data point that can be captured and fed into an algorithm. In this sense, platforms are the only business model built for a data-centric economy.

More and more companies are coming to realise this. We often think of platforms as a tech-sector phenomenon, but the truth is that they are becoming ubiquitous across the economy. Uber is the most prominent example, turning the staid business of taxis into a trendy platform business. Siemens and GE, two powerhouses of the 20th century, are fighting it out to develop a cloud-based system for manufacturing. Monsanto and John Deere, two established agricultural companies, are trying to figure out how to incorporate platforms into farming and food production.


And this poses problems. At the heart of platform capitalism is a drive to extract more data in order to survive. One way is to get people to stay on your platform longer. Facebook is a master at using all sorts of behavioural techniques to foster addictions to its service: how many of us scroll absentmindedly through Facebook, barely aware of it?

Another way is to expand the apparatus of extraction. This helps to explain why Google, ostensibly a search engine company, is moving into the consumer internet of things (Home/Nest), self-driving cars (Waymo), virtual reality (Daydream/Cardboard), and all sorts of other personal services. Each of these is another rich source of data for the company, and another point of leverage over their competitors.

Others have simply bought up smaller companies: Facebook has swallowed Instagram ($1bn), WhatsApp ($19bn), and Oculus ($2bn), while investing in drone-based internet, e-commerce and payment services. It has even developed a tool that warns when a start-up is becoming popular and a possible threat. Google itself is among the most prolific acquirers of new companies, at some stages purchasing a new venture every week. The picture that emerges is of increasingly sprawling empires designed to vacuum up as much data as possible.

But here we get to the real endgame: artificial intelligence (or, less glamorously, machine learning). Some enjoy speculating about wild futures involving a Terminator-style Skynet, but the more realistic challenges of AI are far closer. In the past few years, every major platform company has turned its focus to investing in this field. As the head of corporate development at Google recently said, “We’re definitely AI first.”





All the dynamics of platforms are amplified once AI enters the equation: the insatiable appetite for data, and the winner-takes-all momentum of network effects. And there is a virtuous cycle here: more data means better machine learning, which means better services and more users, which means more data. Currently Google is using AI to improve its targeted advertising, and Amazon is using AI to improve its highly profitable cloud computing business. As one AI company takes a significant lead over competitors, these dynamics are likely to propel it to an increasingly powerful position.
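The feedback loop described here can be sketched as a toy simulation. Every number in it is an illustrative assumption, not a measurement: two platforms start nearly level, new users gravitate more than proportionally towards whichever platform already holds more data, and the early leader’s edge compounds.

```python
# Toy model of the feedback loop: more data -> better services -> more users
# -> more data. All parameters here are illustrative assumptions.

def simulate(steps=50, new_users_per_step=100):
    data = {"leader": 1100.0, "rival": 1000.0}  # leader starts 10% ahead
    for _ in range(steps):
        # New users prefer the platform with more data, and more than
        # proportionally so (squaring the data stocks stands in for the
        # winner-takes-all pull of network effects). Each new user then
        # contributes one more unit of data.
        weight_l = data["leader"] ** 2
        weight_r = data["rival"] ** 2
        data["leader"] += new_users_per_step * weight_l / (weight_l + weight_r)
        data["rival"] += new_users_per_step * weight_r / (weight_l + weight_r)
    return data

result = simulate()
print(result["leader"] / result["rival"])  # the initial 10% edge has widened
```

Run with proportional (rather than superlinear) preference, the gap never grows; it is the amplifying pull of network effects that turns a small head start into dominance.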

What’s the answer? We’ve only begun to grasp the problem, but in the past, natural monopolies like utilities and railways that enjoy huge economies of scale and serve the common good have been prime candidates for public ownership. The solution to our newfangled monopoly problem lies in this sort of age-old fix, updated for our digital age. It would mean taking back control over the internet and our digital infrastructure, instead of allowing them to be run in the pursuit of profit and power. Tinkering with minor regulations while AI firms amass power won’t do. If we don’t take over today’s platform monopolies, we risk letting them own and control the basic infrastructure of 21st-century society.

Thursday, 24 August 2017

Silicon Valley siphons our data like oil. But the deepest drilling has just begun

Ben Tarnoff in The Guardian


What if a cold drink cost more on a hot day?

Customers in the UK will soon find out. Recent reports suggest that three of the country’s largest supermarket chains are rolling out surge pricing in select stores. This means that prices will rise and fall over the course of the day in response to demand. Buying lunch at lunchtime will be like ordering an Uber at rush hour.
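As a rough illustration of how surge pricing works, here is a minimal sketch; the demand-ratio formula and the price cap are hypothetical, not any supermarket’s actual algorithm.

```python
# A minimal surge-pricing sketch: the price scales with current demand
# relative to a baseline, capped so it never more than doubles. Both the
# formula and the cap are illustrative assumptions.

def surge_price(base_price: float, demand: float, baseline: float,
                cap: float = 2.0) -> float:
    # Never discount below the base price, never exceed the cap.
    multiplier = min(max(demand / baseline, 1.0), cap)
    return round(base_price * multiplier, 2)

# A cold drink at a quiet mid-morning versus the lunchtime rush:
print(surge_price(1.00, demand=40, baseline=100))   # 1.0
print(surge_price(1.00, demand=180, baseline=100))  # 1.8
```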

This may sound pretty drastic, but far more radical changes are on the horizon. About a week before that report, Amazon announced its $13.7bn purchase of Whole Foods. A company that has spent its whole life killing physical retailers now owns more than 460 stores in three countries.

Amazon isn’t abandoning online retail for brick-and-mortar. Rather, it’s planning to fuse the two. It’s going to digitize our daily lives in ways that make surge-pricing your groceries look primitive by comparison. It’s going to expand Silicon Valley’s surveillance-based business model into physical space, and make money from monitoring everything we do.

Silicon Valley is an extractive industry. Its resource isn’t oil or copper, but data. Companies harvest this data by observing as much of our online activity as they can. This activity might take the form of a Facebook like, a Google search, or even how long your mouse hovers in a particular part of your screen. Alone, these traces may not be particularly meaningful. By pairing them with those of millions of others, however, companies can discover patterns that help determine what kind of person you are – and what kind of things you might buy.

These patterns are highly profitable. Silicon Valley uses them to sell you products or to sell you to advertisers. But feeding the algorithms that produce these patterns requires a steady stream of data. And while that data is certainly abundant, it’s not infinite.

A hundred years ago, you could dig a hole in Texas and strike oil. Today, fossil fuel companies have to build drilling platforms many miles offshore. The tech industry faces a similar fate. Its wildcat days are over: most of the data that lies closest to the surface is already claimed. Together, Facebook and Google receive a staggering 76% of online advertising revenue in the United States.

An Amazon Go ‘smart’ store in Seattle. The company’s acquisition of Whole Foods signals a desire to fuse online surveillance with brick-and-mortar business. Photograph: Paul Gordon/Zuma Press / eyevine

To increase profits, Silicon Valley must extract more data. One method is to get people to spend more time online: build new apps, and make them as addictive as possible. Another is to get more people online. This is the motivation for Facebook’s Free Basics program, which provides a limited set of internet services for free in underdeveloped regions across the globe, in the hopes of harvesting data from the world’s poor.

But these approaches leave large reservoirs of data untapped. After all, we can only spend so much time online. Our laptops, tablets, smartphones, and wearables see a lot of our lives – but not quite everything. For Silicon Valley, however, anything less than total knowledge of its users represents lost revenue. Any unmonitored moment is a missed opportunity.

Amazon is going to show the industry how to monitor more moments: by making corporate surveillance as deeply embedded in our physical environment as it is in our virtual one. Silicon Valley already earns vast sums of money from watching what we do online. Soon it’ll earn even more money from watching what we do offline.

It’s easy to picture how this will work, because the technology already exists. Late last year, Amazon built a “smart” grocery store in Seattle. You don’t have to wait in a checkout line to buy something – you just grab it and walk out of the store. Sensors detect what items you pick up, and you’re charged when you leave.



Amazon is keen to emphasize the customer benefits: nobody likes waiting in line to pay for groceries, or fumbling with one’s wallet at the register. But the same technology that automates away the checkout line will enable Amazon to track every move a customer makes.

Imagine if your supermarket watched you as closely as Facebook or Google does. It would know not only which items you bought, but how long you lingered in front of which products and your path through the store. This data holds valuable lessons about your personality and your preferences – lessons that Amazon will use to sell you more stuff, online and off.

Supermarkets aren’t the only places these ideas will be put into practice. Surveillance can transform any physical space into a data mine. And the most data-rich environment, the one that contains the densest concentration of insights into who you are, is your home.

That’s why Amazon has aggressively promoted the Echo, a small speaker that offers a Siri-like voice-activated assistant called Alexa. Alexa can tell you the weather, read you the news, make you a to-do list, and perform any number of other tasks. It is a very good listener. It faithfully records your interactions and transmits them back to Amazon for analysis. In fact, it may be recording not only your interactions, but absolutely everything.

Putting a listening device in your living room is an excellent way for Amazon to learn more about you. Another is conducting aerial surveillance of your house. In late July, Amazon obtained a patent for drones that spy on people’s homes as they make deliveries. An example included in Amazon’s patent filing is roof repair: the drone that drops a package on your doorstep might notice your roof is falling apart, and that observation could result in a recommendation for a repair service. Amazon is still testing its delivery drones. But if and when they start flying, it’s safe to assume they’ll be scraping data from the outside of our homes as diligently as the Echo does from the inside.



Amazon is likely to face some resistance as it colonizes more of our lives. People may not love the idea of their supermarkets spying on them, or every square inch of their homes being fed to an algorithm. But one should never underestimate how rapidly norms can be readjusted when capital requires it.

A couple of decades ago, letting a company read your mail and observe your social interactions and track your location would strike many, if not most, as a breach of privacy. Today, these are standard, even banal, aspects of using the internet. It’s worth considering what further concessions will come to feel normal in the next 20 years, as Silicon Valley is forced to dig deeper into our lives for data.

Tech’s apologists will say that consumers can always opt out: if you object to a company’s practices, don’t use its services. But in our new era of monopoly capitalism, consumer choice is a meaningless concept. Companies like Google and Facebook and Amazon dominate the digital sphere – you can’t avoid them.

The only solution is political. As consumers we’re nearly powerless, but as citizens, we can demand more democratic control of our data. Data is a common good. We make it together, and we make it meaningful together, since useful patterns only emerge from collecting and analyzing large quantities of it.

No reasonable person would let the mining industry unilaterally decide how to extract and refine a resource, or where to build its mines. Yet somehow we let the tech industry make all these decisions and more, with practically no public oversight. A company that yanks copper out of an earth that belongs to everyone should be governed in everyone’s interest. So should a company that yanks data out of every crevice of our collective lives.


Thursday, 19 January 2017

How statistics lost their power

William Davies in The Guardian


In theory, statistics should help settle arguments. They ought to provide stable reference points that everyone – no matter what their politics – can agree on. Yet in recent years, divergent levels of trust in statistics have become one of the key schisms that have opened up in western liberal democracies. Shortly before the November presidential election, a study in the US discovered that 68% of Trump supporters distrusted the economic data published by the federal government. In the UK, a research project by Cambridge University and YouGov looking at conspiracy theories discovered that 55% of the population believes that the government “is hiding the truth about the number of immigrants living here”.

Rather than defusing controversy and polarisation, it seems as if statistics are actually stoking them. Antipathy to statistics has become one of the hallmarks of the populist right, with statisticians and economists chief among the various “experts” that were ostensibly rejected by voters in 2016. Not only are statistics viewed by many as untrustworthy; there also appears to be something almost insulting or arrogant about them. Reducing social and economic issues to numerical aggregates and averages seems to violate some people’s sense of political decency.

Nowhere is this more vividly manifest than with immigration. The thinktank British Future has studied how best to win arguments in favour of immigration and multiculturalism. One of its main findings is that people often respond warmly to qualitative evidence, such as the stories of individual migrants and photographs of diverse communities. But statistics – especially regarding alleged benefits of migration to Britain’s economy – elicit quite the opposite reaction. People assume that the numbers are manipulated and dislike the elitism of resorting to quantitative evidence. Presented with official estimates of how many immigrants are in the country illegally, a common response is to scoff. Far from increasing support for immigration, British Future found, pointing to its positive effect on GDP can actually make people more hostile to it. GDP itself has come to seem like a Trojan horse for an elitist liberal agenda. Sensing this, politicians have now largely abandoned discussing immigration in economic terms.

All of this presents a serious challenge for liberal democracy. Put bluntly, the British government – its officials, experts, advisers and many of its politicians – does believe that immigration is on balance good for the economy. The British government did believe that Brexit was the wrong choice. The problem is that the government is now engaged in self-censorship, for fear of provoking people further.

This is an unwelcome dilemma. Either the state continues to make claims that it believes to be valid and is accused by sceptics of propaganda, or else, politicians and officials are confined to saying what feels plausible and intuitively true, but may ultimately be inaccurate. Either way, politics becomes mired in accusations of lies and cover-ups.

The declining authority of statistics – and the experts who analyse them – is at the heart of the crisis that has become known as “post-truth” politics. And in this uncertain new world, attitudes towards quantitative expertise have become increasingly divided. From one perspective, grounding politics in statistics is elitist, undemocratic and oblivious to people’s emotional investments in their community and nation. It is just one more way that privileged people in London, Washington DC or Brussels seek to impose their worldview on everybody else. From the opposite perspective, statistics are quite the opposite of elitist. They enable journalists, citizens and politicians to discuss society as a whole, not on the basis of anecdote, sentiment or prejudice, but in ways that can be validated. The alternative to quantitative expertise is less likely to be democracy than an unleashing of tabloid editors and demagogues to provide their own “truth” of what is going on across society.


Is there a way out of this polarisation? Must we simply choose between a politics of facts and one of emotions, or is there another way of looking at this situation? One way is to view statistics through the lens of their history. We need to try and see them for what they are: neither unquestionable truths nor elite conspiracies, but rather as tools designed to simplify the job of government, for better or worse. Viewed historically, we can see what a crucial role statistics have played in our understanding of nation states and their progress. This raises the alarming question of how – if at all – we will continue to have common ideas of society and collective progress, should statistics fall by the wayside.

In the second half of the 17th century, in the aftermath of prolonged and bloody conflicts, European rulers adopted an entirely new perspective on the task of government, focused upon demographic trends – an approach made possible by the birth of modern statistics. Since ancient times, censuses had been used to track population size, but these were costly and laborious to carry out and focused on citizens who were considered politically important (property-owning men), rather than society as a whole. Statistics offered something quite different, transforming the nature of politics in the process.

Statistics were designed to give an understanding of a population in its entirety, rather than simply to pinpoint strategically valuable sources of power and wealth. In the early days, this didn’t always involve producing numbers. In Germany, for example (from where we get the term Statistik), the challenge was to map disparate customs, institutions and laws across an empire of hundreds of micro-states. What characterised this knowledge as statistical was its holistic nature: it aimed to produce a picture of the nation as a whole. Statistics would do for populations what cartography did for territory.

Equally significant was the inspiration of the natural sciences. Thanks to standardised measures and mathematical techniques, statistical knowledge could be presented as objective, in much the same way as astronomy. Pioneering English demographers such as William Petty and John Graunt adapted mathematical techniques to estimate population changes, for which they were hired by Oliver Cromwell and Charles II.

The emergence in the late 17th century of government advisers claiming scientific authority, rather than political or military acumen, represents the origins of the “expert” culture now so reviled by populists. These path-breaking individuals were neither pure scholars nor government officials, but hovered somewhere between the two. They were enthusiastic amateurs who offered a new way of thinking about populations that privileged aggregates and objective facts. Thanks to their mathematical prowess, they believed they could calculate what would otherwise require a vast census to discover.

There was initially only one client for this type of expertise, and the clue is in the word “statistics”. Only centralised nation states had the capacity to collect data across large populations in a standardised fashion and only states had any need for such data in the first place. Over the second half of the 18th century, European states began to collect more statistics of the sort that would appear familiar to us today. Casting an eye over national populations, states became focused upon a range of quantities: births, deaths, baptisms, marriages, harvests, imports, exports, price fluctuations. Things that would previously have been registered locally and variously at parish level became aggregated at a national level.

New techniques were developed to represent these indicators, which exploited both the vertical and horizontal dimensions of the page, laying out data in matrices and tables, just as merchants had done with the development of standardised book-keeping techniques in the late 15th century. Organising numbers into rows and columns offered a powerful new way of displaying the attributes of a given society. Large, complex issues could now be surveyed simply by scanning the data laid out geometrically across a single page.

These innovations carried extraordinary potential for governments. By simplifying diverse populations down to specific indicators, and displaying them in suitable tables, governments could circumvent the need to acquire broader detailed local and historical insight. Of course, viewed from a different perspective, this blindness to local cultural variability is precisely what makes statistics vulgar and potentially offensive. Regardless of whether a given nation had any common cultural identity, statisticians would assume some standard uniformity or, some might argue, impose that uniformity upon it.

Not every aspect of a given population can be captured by statistics. There is always an implicit choice in what is included and what is excluded, and this choice can become a political issue in its own right. The fact that GDP only captures the value of paid work, thereby excluding the work traditionally done by women in the domestic sphere, has made it a target of feminist critique since the 1960s. In France, it has been illegal to collect census data on ethnicity since 1978, on the basis that such data could be used for racist political purposes. (This has the side-effect of making systemic racism in the labour market much harder to quantify.)

Despite these criticisms, the aspiration to depict a society in its entirety, and to do so in an objective fashion, has meant that various progressive ideals have been attached to statistics. The image of statistics as a dispassionate science of society is only one part of the story. The other part is about how powerful political ideals became invested in these techniques: ideals of “evidence-based policy”, rationality, progress and nationhood grounded in facts, rather than in romanticised stories.

Since the high-point of the Enlightenment in the late 18th century, liberals and republicans have invested great hope that national measurement frameworks could produce a more rational politics, organised around demonstrable improvements in social and economic life. The great theorist of nationalism, Benedict Anderson, famously described nations as “imagined communities”, but statistics offer the promise of anchoring this imagination in something tangible. Equally, they promise to reveal what historical path the nation is on: what kind of progress is occurring? How rapidly? For Enlightenment liberals, who saw nations as moving in a single historical direction, this question was crucial.

The potential of statistics to reveal the state of the nation was seized in post-revolutionary France. The Jacobin state set about imposing a whole new framework of national measurement and national data collection. The world’s first official bureau of statistics was opened in Paris in 1800. Uniformity of data collection, overseen by a centralised cadre of highly educated experts, was an integral part of the ideal of a centrally governed republic, which sought to establish a unified, egalitarian society.

From the Enlightenment onwards, statistics played an increasingly important role in the public sphere, informing debate in the media, providing social movements with evidence they could use. Over time, the production and analysis of such data became less dominated by the state. Academic social scientists began to analyse data for their own purposes, often entirely unconnected to government policy goals. By the late 19th century, reformers such as Charles Booth in London and WEB Du Bois in Philadelphia were conducting their own surveys to understand urban poverty.


 

To recognise how statistics have been entangled in notions of national progress, consider the case of GDP. GDP is an estimate of the sum total of a nation’s consumer spending, government spending, investments and trade balance (exports minus imports), which is represented in a single number. This is fiendishly difficult to get right, and efforts to calculate this figure began, like so many mathematical techniques, as a matter of marginal, somewhat nerdish interest during the 1930s. It was only elevated to a matter of national political urgency by the second world war, when governments needed to know whether the national population was producing enough to keep up the war effort. In the decades that followed, this single indicator, though never without its critics, took on a hallowed political status, as the ultimate barometer of a government’s competence. Whether GDP is rising or falling is now virtually a proxy for whether society is moving forwards or backwards.
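The expenditure sum described above can be written out directly; the figures below are made-up placeholders, not real national accounts.

```python
# GDP by the expenditure method, as described in the paragraph above:
# consumer spending + investment + government spending + trade balance
# (exports minus imports). All figures are illustrative placeholders.

def gdp(consumption, investment, government, exports, imports):
    return consumption + investment + government + (exports - imports)

# A hypothetical economy running a small trade deficit:
print(gdp(consumption=1500, investment=400, government=600,
          exports=700, imports=800))  # 2400
```

The deceptive simplicity of the formula is the point: each of its five terms is itself the product of vast, contestable measurement exercises.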

Or take the example of opinion polling, an early instance of statistical innovation occurring in the private sector. During the 1920s, statisticians developed methods for identifying a representative sample of survey respondents, so as to glean the attitudes of the public as a whole. This breakthrough, which was first seized upon by market researchers, soon led to the birth of opinion polling. This new industry immediately became the object of public and political fascination, as the media reported on what this new science told us about what “women” or “Americans” or “manual labourers” thought about the world.
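The insight behind representative sampling can be made concrete with the textbook margin-of-error formula for a sample proportion (a standard approximation, not any pollster’s proprietary method): the error shrinks with the square root of the sample size, which is why a sample of about a thousand people can estimate national opinion to within a few percentage points.

```python
# Margin of error (at 95% confidence) for an estimated proportion p from a
# simple random sample of size n. The 1.96 factor is the standard normal
# quantile for a 95% confidence level; p = 0.5 is the worst case.

import math

def margin_of_error(p: float, n: int) -> float:
    return 1.96 * math.sqrt(p * (1 - p) / n)

for n in (100, 1000, 10000):
    print(n, round(margin_of_error(0.5, n), 3))
# 100  -> 0.098 (about +/- 10 points)
# 1000 -> 0.031 (about +/- 3 points)
```

Note the diminishing returns: going from 1,000 to 10,000 respondents only tightens the estimate by about two points, which is why commercial polls rarely bother with larger samples.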

Nowadays, the flaws of polling are endlessly picked apart. But this is partly due to the tremendous hopes that have been invested in polling since its origins. It is only to the extent that we believe in mass democracy that we are so fascinated or concerned by what the public thinks. But for the most part it is thanks to statistics, and not to democratic institutions as such, that we can know what the public thinks about specific issues. We underestimate how much of our sense of “the public interest” is rooted in expert calculation, as opposed to democratic institutions.

As indicators of health, prosperity, equality, opinion and quality of life have come to tell us who we are collectively and whether things are getting better or worse, politicians have leaned heavily on statistics to buttress their authority. Often, they lean too heavily, stretching evidence too far, interpreting data too loosely, to serve their cause. But that is an inevitable hazard of the prevalence of numbers in public life, and need not necessarily trigger the type of wholehearted rejections of expertise that we have witnessed recently.

In many ways, the contemporary populist attack on “experts” is born out of the same resentment as the attack on elected representatives. In talking of society as a whole, in seeking to govern the economy as a whole, both politicians and technocrats are believed to have “lost touch” with how it feels to be a single citizen in particular. Both statisticians and politicians have fallen into the trap of “seeing like a state”, to use a phrase from the anarchist political thinker James C Scott. Speaking scientifically about the nation – for instance in terms of macroeconomics – is an insult to those who would prefer to rely on memory and narrative for their sense of nationhood, and are sick of being told that their “imagined community” does not exist.

On the other hand, statistics (together with elected representatives) did an adequate job of supporting a credible public discourse for decades, if not centuries. What changed?

The crisis of statistics is not quite as sudden as it might seem. For roughly 350 years, the great achievement of statisticians has been to reduce the complexity and fluidity of national populations into manageable, comprehensible facts and figures. Yet in recent decades, the world has changed dramatically, thanks to the cultural politics that emerged in the 1960s and the reshaping of the global economy that began soon after. It is not clear that the statisticians have always kept pace with these changes. Traditional forms of statistical classification and definition are coming under strain from more fluid identities, attitudes and economic pathways. Efforts to represent demographic, social and economic changes in terms of simple, well-recognised indicators are losing legitimacy.

Consider the changing political and economic geography of nation states over the past 40 years. The statistics that dominate political debate are largely national in character: poverty levels, unemployment, GDP, net migration. But the geography of capitalism has been pulling in somewhat different directions. Plainly globalisation has not rendered geography irrelevant. In many cases it has made the location of economic activity far more important, exacerbating the inequality between successful locations (such as London or San Francisco) and less successful locations (such as north-east England or the US rust belt). The key geographic units involved are no longer nation states. Rather, it is cities, regions or individual urban neighbourhoods that are rising and falling.

The Enlightenment ideal of the nation as a single community, bound together by a common measurement framework, is harder and harder to sustain. If you live in one of the towns in the Welsh valleys that was once dependent on steel manufacturing or mining for jobs, politicians talking of how “the economy” is “doing well” are likely to breed additional resentment. From that standpoint, the term “GDP” fails to capture anything meaningful or credible.

When macroeconomics is used to make a political argument, this implies that the losses in one part of the country are offset by gains somewhere else. Headline-grabbing national indicators, such as GDP and inflation, conceal all sorts of localised gains and losses that are less commonly discussed by national politicians. Immigration may be good for the economy overall, but this does not mean that there are no local costs at all. So when politicians use national indicators to make their case, they implicitly assume some spirit of patriotic mutual sacrifice on the part of voters: you might be the loser on this occasion, but next time you might be the beneficiary. But what if the tables are never turned? What if the same city or region wins over and over again, while others always lose? On what principle of give and take is that justified?

In Europe, the currency union has exacerbated this problem. The indicators that matter to the European Central Bank (ECB), for example, are those representing half a billion people. The ECB is concerned with the inflation or unemployment rate across the eurozone as if it were a single homogeneous territory, at the same time as the economic fate of European citizens is splintering in different directions, depending on which region, city or neighbourhood they happen to live in. Official knowledge becomes ever more abstracted from lived experience, until that knowledge simply ceases to be relevant or credible.

The privileging of the nation as the natural scale of analysis is one of the inbuilt biases of statistics that years of economic change have eaten away at. Another inbuilt bias that is coming under increasing strain is classification. Part of the job of statisticians is to classify people by putting them into a range of boxes that the statistician has created: employed or unemployed, married or unmarried, pro-Europe or anti-Europe. So long as people can be placed into categories in this way, it becomes possible to discern how far a given classification extends across the population.

This can involve somewhat reductive choices. To count as unemployed, for example, a person has to report to a survey that they are involuntarily out of work, even if it may be more complicated than that in reality. Many people move in and out of work all the time, for reasons that might have as much to do with health and family needs as labour market conditions. But thanks to this simplification, it becomes possible to identify the rate of unemployment across the population as a whole.

Here’s a problem, though. What if many of the defining questions of our age are not answerable in terms of the proportion of people affected, but in terms of the intensity with which they are affected? Unemployment is one example. The fact that Britain got through the Great Recession of 2008-13 without unemployment rising substantially is generally viewed as a positive achievement. But the focus on “unemployment” masked the rise of underemployment, that is, people not getting enough work, or being employed at a level below the one they are qualified for. This currently accounts for around 6% of the “employed” labour force. Then there is the rise of the self-employed workforce, where the divide between “employed” and “involuntarily unemployed” makes little sense.

This is not a criticism of bodies such as the Office for National Statistics (ONS), which does now produce data on underemployment. But so long as politicians continue to deflect criticism by pointing to the unemployment rate, the experiences of those struggling to get enough work or to live on their wages go unrepresented in public debate. It wouldn’t be all that surprising if these same people became suspicious of policy experts and the use of statistics in political debate, given the mismatch between what politicians say about the labour market and the lived reality.

The rise of identity politics since the 1960s has put additional strain on such systems of classification. Statistical data is only credible if people will accept the limited range of demographic categories that are on offer, which are selected by the expert not the respondent. But where identity becomes a political issue, people demand to define themselves on their own terms, where gender, sexuality, race or class is concerned.

Opinion polling may be suffering for similar reasons. Polls have traditionally captured people’s attitudes and preferences, on the reasonable assumption that people will behave accordingly. But in an age of declining political participation, it is not enough simply to know which box someone would prefer to put an “X” in. One also needs to know whether they feel strongly enough about doing so to bother. And when it comes to capturing such fluctuations in emotional intensity, polling is a clumsy tool.
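The point can be made concrete with a toy poll in which each (hypothetical) respondent reports both a preference and a self-assessed likelihood of actually turning out to vote: the headline split and the turnout-weighted split can tell different stories.

```python
# Hypothetical respondents: stated preference plus self-reported
# likelihood of turning out to vote (0.0 - 1.0)
respondents = [
    {"choice": "A", "turnout": 0.9},
    {"choice": "A", "turnout": 0.3},
    {"choice": "B", "turnout": 0.8},
    {"choice": "B", "turnout": 0.7},
]

def share(choice, weighted):
    """Share for a choice, optionally weighted by turnout likelihood."""
    if weighted:
        total = sum(r["turnout"] for r in respondents)
        return sum(r["turnout"] for r in respondents
                   if r["choice"] == choice) / total
    return sum(r["choice"] == choice for r in respondents) / len(respondents)

print(f"raw share for A:      {share('A', weighted=False):.0%}")  # 50%
print(f"weighted share for A: {share('A', weighted=True):.0%}")   # 44%
```

The raw figures show a dead heat; once lukewarm supporters are discounted, it is not. Estimating that turnout weight is precisely the part pollsters find hardest.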

Statistics have faced criticism regularly over their long history. The challenges that identity politics and globalisation present to them are not new either. Why then do the events of the past year feel quite so damaging to the ideal of quantitative expertise and its role in political debate?

In recent years, a new way of quantifying and visualising populations has emerged that potentially pushes statistics to the margins, ushering in a different era altogether. Statistics, collected and compiled by technical experts, are giving way to data that accumulates by default, as a consequence of sweeping digitisation. Traditionally, statisticians have known which questions they wanted to ask regarding which population, then set out to answer them. By contrast, data is automatically produced whenever we swipe a loyalty card, comment on Facebook or search for something on Google. As our cities, cars, homes and household objects become digitally connected, the amount of data we leave in our trail will grow even greater. In this new world, data is captured first and research questions come later.

In the long term, the implications of this will probably be as profound as the invention of statistics was in the late 17th century. The rise of “big data” provides far greater opportunities for quantitative analysis than any amount of polling or statistical modelling. But it is not just the quantity of data that is different. It represents an entirely different type of knowledge, accompanied by a new mode of expertise.

First, there is no fixed scale of analysis (such as the nation) nor any settled categories (such as “unemployed”). These vast new data sets can be mined in search of patterns, trends, correlations and emergent moods. It becomes a way of tracking the identities that people bestow upon themselves (such as “#ImwithCorbyn” or “entrepreneur”) rather than imposing classifications upon them. This is a form of aggregation suitable to a more fluid political age, in which not everything can be reliably referred back to some Enlightenment ideal of the nation state as guardian of the public interest.
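A minimal sketch of this style of aggregation, assuming a hypothetical stream of posts: rather than assigning respondents to boxes defined in advance, it simply counts the labels people apply to themselves.

```python
import re
from collections import Counter

# Hypothetical stream of posts: the categories emerge from the data itself
posts = [
    "Proud to be an #entrepreneur building something new",
    "#ImwithCorbyn at the rally today",
    "Another day, another pitch #entrepreneur #startup",
    "#ImwithCorbyn",
]

# Track the labels people bestow upon themselves, rather than
# imposing a classification scheme in advance
tags = Counter(tag.lower() for post in posts
               for tag in re.findall(r"#\w+", post))
print(tags.most_common(3))
```

No statistician decided in advance that "#ImwithCorbyn" was a category worth measuring; the data volunteered it.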

Second, the majority of us are entirely oblivious to what all this data says about us, either individually or collectively. There is no equivalent of an Office for National Statistics for commercially collected big data. We live in an age in which our feelings, identities and affiliations can be tracked and analysed with unprecedented speed and sensitivity – but there is nothing that anchors this new capacity in the public interest or public debate. There are data analysts who work for Google and Facebook, but they are not “experts” of the sort who generate statistics and who are now so widely condemned. The anonymity and secrecy of the new analysts potentially makes them far more politically powerful than any social scientist.

A company such as Facebook has the capacity to carry out quantitative social science on hundreds of millions of people, at very low cost. But it has very little incentive to reveal the results. In 2014, when Facebook researchers published results of a study of “emotional contagion” that they had carried out on their users – in which they altered news feeds to see how it affected the content that users then shared in response – there was an outcry that people were being unwittingly experimented on. So, from Facebook’s point of view, why go to all the hassle of publishing? Why not just do the study and keep quiet?

What is most politically significant about this shift from a logic of statistics to one of data is how comfortably it sits with the rise of populism. Populist leaders can heap scorn upon traditional experts, such as economists and pollsters, while trusting in a different form of numerical analysis altogether. Such politicians rely on a new, less visible elite, who seek out patterns from vast data banks, but rarely make any public pronouncements, let alone publish any evidence. These data analysts are often physicists or mathematicians, whose skills are not developed for the study of society at all. This, for example, is the worldview propagated by Dominic Cummings, former adviser to Michael Gove and campaign director of Vote Leave. “Physics, mathematics and computer science are domains in which there are real experts, unlike macro-economic forecasting,” Cummings has argued.

Figures close to Donald Trump, such as his chief strategist Steve Bannon and the Silicon Valley billionaire Peter Thiel, are closely acquainted with cutting-edge data analytics techniques, via companies such as Cambridge Analytica, on whose board Bannon sits. During the presidential election campaign, Cambridge Analytica drew on various data sources to develop psychological profiles of millions of Americans, which it then used to help Trump target voters with tailored messaging.

This ability to develop and refine psychological insights across large populations is one of the most innovative and controversial features of the new data analysis. As techniques of “sentiment analysis”, which detect the mood of large numbers of people by tracking indicators such as word usage on social media, become incorporated into political campaigns, the emotional allure of figures such as Trump will become amenable to scientific scrutiny. In a world where the political feelings of the general public are becoming this traceable, who needs pollsters?
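A crude illustration of the principle, using a toy word-usage lexicon (real sentiment systems rely on far larger lexicons or trained models, but the basic idea of scoring mood from word counts is the same):

```python
import re

# Toy sentiment lexicon -- invented for illustration
POSITIVE = {"great", "win", "strong", "hope"}
NEGATIVE = {"rigged", "disaster", "weak", "fear"}

def sentiment(text):
    """Score a text: +1 per positive word, -1 per negative word."""
    words = re.findall(r"[a-z]+", text.lower())
    return (sum(w in POSITIVE for w in words)
            - sum(w in NEGATIVE for w in words))

print(sentiment("A great win, and real hope"))          # 3
print(sentiment("The system is rigged -- a disaster"))  # -2
```

Run continuously over millions of social media posts, even a scorer this crude yields a mood index that updates far faster than any poll.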

Few social findings arising from this kind of data analytics ever end up in the public domain. This means that it does very little to help anchor political narrative in any shared reality. With the authority of statistics waning, and nothing stepping into the public sphere to replace it, people can live in whatever imagined community they feel most aligned to and willing to believe in. Where statistics can be used to correct faulty claims about the economy or society or population, in an age of data analytics there are few mechanisms to prevent people from giving way to their instinctive reactions or emotional prejudices. On the contrary, companies such as Cambridge Analytica treat those feelings as things to be tracked.

But even if there were an Office for Data Analytics, acting on behalf of the public and government as the ONS does, it is not clear that it would offer the kind of neutral perspective that liberals today are struggling to defend. The new apparatus of number-crunching is well suited to detecting trends, sensing the mood and spotting things as they bubble up. It serves campaign managers and marketers very well. It is less well suited to making the kinds of unambiguous, objective, potentially consensus-forming claims about society that statisticians and economists are paid for.

In this new technical and political climate, it will fall to the new digital elite to identify the facts, projections and truth amid the rushing stream of data that results. Whether indicators such as GDP and unemployment continue to carry political clout remains to be seen, but if they don’t, it won’t necessarily herald the end of experts, still less the end of truth. The question to be taken more seriously, now that numbers are being constantly generated behind our backs and beyond our knowledge, is where the crisis of statistics leaves representative democracy.

On the one hand, it is worth recognising the capacity of long-standing political institutions to fight back. Just as “sharing economy” platforms such as Uber and Airbnb have recently been thwarted by legal rulings (Uber being compelled to recognise drivers as employees, Airbnb being banned altogether by some municipal authorities), privacy and human rights law represents a potential obstacle to the extension of data analytics. What is less clear is how the benefits of digital analytics might ever be offered to the public, in the way that many statistical data sets are. Bodies such as the Open Data Institute, co-founded by Tim Berners-Lee, campaign to make data publicly available, but have little leverage over the corporations where so much of our data now accumulates. Statistics began life as a tool through which the state could view society, but gradually developed into something that academics, civic reformers and businesses had a stake in. But for many data analytics firms, secrecy surrounding methods and sources of data is a competitive advantage that they will not give up voluntarily.

A post-statistical society is a potentially frightening proposition, not because it would lack any forms of truth or expertise altogether, but because it would drastically privatise them. Statistics are one of many pillars of liberalism, indeed of Enlightenment. The experts who produce and use them have become painted as arrogant and oblivious to the emotional and local dimensions of politics. No doubt there are ways in which data collection could be adapted to reflect lived experiences better. But the battle that will need to be waged in the long term is not between an elite-led politics of facts versus a populist politics of feeling. It is between those still committed to public knowledge and public argument and those who profit from the ongoing disintegration of those things.