
Thursday 2 July 2020

What's wrong with WhatsApp

As social media has become more inhospitable, the appeal of private online groups has grown. But they hold their own dangers – to those both inside and out. By William Davies in The Guardian


In the spring, as the virus swept across the world and billions of people were compelled to stay at home, the popularity of one social media app rose more sharply than any other. By late March, usage of WhatsApp around the world had grown by 40%. In Spain, where the lockdown was particularly strict, it rose by 76%. In those early months, WhatsApp – which hovers neatly between the space of email, Facebook and SMS, allowing text messages, links and photos to be shared between groups – was a prime conduit through which waves of news, memes and mass anxiety travelled.

At first, many of the new uses were heartening. Mutual aid groups sprang up to help the vulnerable. Families and friends used the app to stay close, sharing their fears and concerns in real time. Yet by mid-April, the role that WhatsApp was playing in the pandemic looked somewhat darker. A conspiracy theory about the rollout of 5G, which originated long before Covid-19 had appeared, now claimed that mobile phone masts were responsible for the disease. Across the UK, people began setting fire to 5G masts, with 20 arson attacks over the Easter weekend alone.

WhatsApp, along with Facebook and YouTube, was a key channel through which the conspiracy theory proliferated. Some feared that the very same community groups created during March were now accelerating the spread of the 5G conspiracy theory. Meanwhile, the app was also enabling the spread of fake audio clips, such as a widely shared recording in which someone who claimed to work for the NHS reported that ambulances would no longer be sent to assist people with breathing difficulties.

This was not the first time that WhatsApp had been embroiled in controversy. While the “fake news” scandals surrounding the 2016 electoral upsets in the UK and US were more focused upon Facebook – which owns WhatsApp – subsequent electoral victories for Jair Bolsonaro in Brazil and Narendra Modi in India were aided by incendiary WhatsApp messaging, exploiting the vast reach of the app in these countries. In India, there have also been reports of riots and at least 30 deaths linked to rumours circulating on WhatsApp. India’s Ministry of Information and Broadcasting has sought ways of regulating WhatsApp content, though this has led to new controversies about government infringement on civil liberties.


 
Brazil’s president Jair Bolsonaro with a printout of an opponent’s WhatsApp message about him. Photograph: Ueslei Marcelino/Reuters

As ever, there is a risk of pinning too much blame for complex political crises on an inert technology. WhatsApp has also taken some steps to limit its use as a vehicle for misinformation. In March, a WhatsApp spokesperson told the Washington Post that the company had “engaged health ministries around the world to provide simple ways for citizens to receive accurate information about the virus”. But even away from such visible disruptions, WhatsApp does seem to be an unusually effective vehicle for sowing distrust in public institutions and processes.

A WhatsApp group can exist without anyone outside the group knowing of its existence, who its members are or what is being shared, while end-to-end encryption makes it immune to surveillance. Back in Britain’s pre-Covid-19 days, when Brexit and Jeremy Corbyn were the issues that provoked the most feverish political discussions, speculation and paranoia swirled around such groups. Media commentators who defended Corbyn were often accused of belonging to a WhatsApp group of “outriders”, co-ordinated by Corbyn’s office, which supposedly told them what line to take. Meanwhile, the Conservative party’s pro-Brexit European Research Group was said to be chiefly sustained in the form of a WhatsApp group, whose membership was never public. Secretive coordination – both real and imagined – does not strengthen confidence in democracy.

WhatsApp groups can not only breed suspicion among the public, but also manufacture a mood of suspicion among their own participants. As also demonstrated by closed Facebook groups, discontents – not always well-founded – accumulate in private before boiling over in public. The capacity to circulate misinformation and allegations is becoming greater than the capacity to resolve them.

The political threat of WhatsApp is the flipside of its psychological appeal. Unlike so many other social media platforms, WhatsApp is built to secure privacy. On the plus side, this means intimacy with those we care about and an ability to speak freely; on the negative side, it injects an ethos of secrecy and suspicion into the public sphere. As Facebook, Twitter and Instagram become increasingly theatrical – every gesture geared to impress an audience or deflect criticism – WhatsApp has become a sanctuary from a confusing and untrustworthy world, where users can speak more frankly. As trust in groups grows, so it is withdrawn from public institutions and officials. A new common sense develops, founded on instinctive suspicion towards the world beyond the group.

The ongoing rise of WhatsApp, and its challenge to both legacy institutions and open social media, poses a profound political question: how do public institutions and discussions retain legitimacy and trust once people are organised into closed and invisible communities? The risk is that a vicious circle ensues, in which private groups circulate ever more information and disinformation to discredit public officials and public information, and our alienation from democracy escalates.

When WhatsApp was bought by Facebook in 2014 for $19bn, it was one of the most valuable tech acquisitions in history. At the time, WhatsApp brought 450 million users with it. In February this year, it hit 2 billion users worldwide – and that is even before its lockdown surge – making it by far the most widely used messenger app, and the second most commonly used app after Facebook itself. In many countries, it is now the default means of digital communication and social coordination, especially among younger people.

The features that would later allow WhatsApp to become a conduit for conspiracy theory and political conflict were never integral to SMS, and have more in common with email: the creation of groups and the ability to forward messages. The ability to forward messages from one group to another – recently limited in response to Covid-19-related misinformation – makes for a potent informational weapon. Groups were initially limited in size to 100 people, but this was later increased to 256. That’s small enough to feel exclusive, but if 256 people forward a message on to another 256 people, 65,536 will have received it.
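
To make that arithmetic concrete, here is a minimal sketch in Python, assuming the idealised case in which every recipient belongs to a full group of 256 and each round of forwarding reaches entirely new people (in reality groups overlap, so actual reach is lower):

    GROUP_SIZE = 256  # WhatsApp's group-size cap at the time

    def people_reached(forward_rounds, group_size=GROUP_SIZE):
        """Total recipients after each member of every reached group
        forwards the message once to a new, non-overlapping group."""
        return group_size ** (forward_rounds + 1)

    print(people_reached(0))  # 256 – the original group
    print(people_reached(1))  # 65536 – one round of forwarding, as above
    print(people_reached(2))  # 16777216 – two rounds

The exponential growth is the point: each extra round multiplies the audience by the size of a group, which is why capping how many times a message can be forwarded curbs its spread so sharply.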

Groups originate for all sorts of purposes – a party, organising amateur sport, a shared interest – but then take on a life of their own. There can be an anarchic playfulness about this, as a group takes on its own set of in-jokes and traditions. In a New York Magazine piece last year, under the headline “Group chats are making the internet fun again”, the technology critic Max Read argued that groups have become “an outright replacement for the defining mode of social organization of the past decade: the platform-centric, feed-based social network.”

It’s understandable that in order to relax, users need to know they’re not being overheard – though there is a less playful side to this. If groups are perceived as a place to say what you really think, away from the constraints of public judgement or “political correctness”, then it follows that they are also where people turn to share prejudices or more hateful expressions that are unacceptable (or even illegal) elsewhere. Santiago Abascal, the leader of the Spanish far-right party Vox, has defined his party as one willing to “defend what Spaniards say on WhatsApp”.

 
A WhatsApp newspaper ad in India warning about fake information on its service. Photograph: Prakash Singh/AFP/Getty Images

A different type of group emerges where its members are all users of the same service, such as a school, a housing block or a training programme. A potential problem here is one of negative solidarity, in which feelings of community are deepened by turning against the service in question. Groups of this sort typically start from a desire to pool information – students staying in touch about deadlines, say – but can swiftly become a means of discrediting the institution they cluster around. Initial murmurs of dissatisfaction can escalate rapidly, until the group has forged an identity around a spirit of resentment and alienation, which can then be impossible to dislodge with countervailing evidence.

Faced with the rise of new technologies, one option for formal organisations and associations is to follow people to their preferred platform. In March, the government introduced a WhatsApp-based information service about Covid-19, with an automated chatbot. But groups themselves can be an unreliable means of getting crucial information to people. Anecdotal evidence from local political organisers and trade union reps suggests that, despite the initial efficiency of WhatsApp groups, their workload often increases because of the escalating number of sub-communities, each of which needs to be contacted separately. Schools desperately seek to get information out to parents, only to discover that unless it appears in precisely the right WhatsApp group, it doesn’t register. The age of the message board, be it physical or digital, where information can be posted once for anyone who needs it, is over.

WhatsApp’s ‘broadcast list’ function, which allows messages to be sent to multiple recipients who are invisible to one another (like email’s ‘bcc’ line), alleviates some of the problems of groups taking on a life of their own. But even then, lists can only include people who are already mutual contacts of the list-owner. The problem, from the point of view of institutions, is that WhatsApp use seems fuelled by a preference for informal, private communication as such. University lecturers are frequently baffled by the discovery that many students and applicants don’t read email. If email is going into decline, WhatsApp does not seem to be a viable alternative when it comes to sharing verified information as widely and inclusively as possible.

Groups are great for brief bursts of humour or frustration, but, by their very nature, far less useful for supporting the circulation of public information. To understand why this is the case, we have to think about the way in which individuals can become swayed and influenced once they belong to a group.

The internet has brought with it its own litany of social pathologies and threats. Trolling, flaming, doxing, cancelling and pile-ons are all risks that go with socialising within a vast open architecture. “Open” platforms such as Twitter are reminders that much social activity tends to be aimed at a small and select community, but can be rendered comical or shameful when exposed to a different community altogether.

As any frequent user of WhatsApp or a closed Facebook group will recognise, the moral anxiety associated with groups is rather different. If the worry in an open network is of being judged by some outside observer, be it one’s boss or an extended family member, in a closed group it is of saying something that goes against the codes that anchor the group’s identity. Groups can rapidly become dominated by a certain tone or worldview that is uncomfortable to challenge and nigh-impossible to dislodge. WhatsApp is a machine for generating feelings of faux pas, as comments linger in a group’s feed, waiting for a response.

This means that while groups can generate high levels of solidarity, which can in principle be put to powerful political effect, it also becomes harder to express disagreement within the group. If, for example, an outspoken and popular member of a neighbourhood WhatsApp group begins to circulate misinformation about health risks, the general urge to maintain solidarity means that their messages are likely to be met with approval and thanks. When a claim or piece of content shows up in a group, there may be many members who view it as dubious; the question is whether they have the confidence to say as much. Meanwhile, the less sceptical can simply forward it on. It’s not hard, then, to understand why WhatsApp is a powerful distributor of “fake news” and conspiracy theories.

As on open social platforms, one of the chief ways of building solidarity on WhatsApp is to posit some injustice or enemy that threatens the group and its members. In the most acute examples, conspiracy theories are unleashed against political opponents, to the effect that they are paedophiles or secret affiliates of foreign powers. Such plausibly deniable practices swirled around the fringes of the successful election campaigns of Modi, Bolsonaro and Donald Trump, and across multiple platforms.


A security message on WhatsApp. Photograph: Thomas White/Reuters

But what makes WhatsApp potentially more dangerous than public social media are the higher levels of trust and honesty that are often present in private groups. It is a truism that nobody is as happy as they appear on Facebook, as attractive as they appear on Instagram or as angry as they appear on Twitter, which spawns a growing weariness with such endless performance. By contrast, closed groups are where people take off their public masks and let their critical guard down. Neither anonymity (a precondition of most trolling) nor celebrity is on offer. The speed with which rumours circulate on WhatsApp is partly a reflection of how altruistic and uncritical people can be in groups. Most of the time, people seem to share false theories about Covid-19 not with the intention of doing harm, but precisely out of concern for other group members. Anti-vaxx, anti-5G or anti-Hillary rumours combine an identification of an enemy with a strong internal sense of solidarity. Nevertheless, they add to the sense that the world is hostile and dangerous.

There is one particular pattern of group chat that can manufacture threats and injustices out of thin air. It tends to start with one participant speculating that they are being let down or targeted by some institution or rival group – be it a public service, business or cultural community – whereupon a second participant agrees. By this stage, it becomes risky for anyone else to defend the institution or group in question, and immediately a new enemy and a new resentment are born. Instantly, the warnings and denunciations emanating from within the group take on a level of authenticity that cannot be matched by the entity that is now the object of derision.

But what if the first contributor has misunderstood or misread something, or had a very stressful day and needs to let off steam? And what if the second is merely agreeing so as to make the first one feel better? And what if the other members are either too distracted, too inhibited or too exhausted to say anything to oppose this fresh indignation? This needn’t snowball into the forms of conspiracy theory that produce riots or arson attacks. But even in milder forms, it makes the job of communicating official information – occasionally life-saving information – far more troublesome than it was just a decade ago. Information about public services and health risks is increasingly having to penetrate a thicket of overlapping groups, many of which may have developed an instinctive scepticism to anything emanating from the “mainstream”.

Part of the challenge for institutions is that there is often a strange emotional comfort in the shared feeling of alienation and passivity. “We were never informed about that”, “nobody consulted us”, “we are being ignored”. These are dominant expressions of our political zeitgeist. As WhatsApp has become an increasingly common way of encountering information and news, a vicious circle can ensue: the public world seems ever more distant, impersonal and fake, and the private group becomes a space of sympathy and authenticity.

This is a new twist in the evolution of the social internet. Since the 90s, the internet has held out a promise of connectivity, openness and inclusion, only to then confront inevitable threats to privacy, security and identity. By contrast, groups make people feel secure and anchored, but also help to fragment civil society into separate cliques, unknown to one another. This is the outcome of more than 20 years of ideological battles over what sort of social space the internet should be.

For a few years at the dawn of the millennium, the O’Reilly Emerging Technology Conferences (or ETech) were a crucible in which a new digital world was imagined and debated. Launched by the west coast media entrepreneur Tim O’Reilly and hosted annually around California, the conferences attracted a mixture of geeks, gurus, designers and entrepreneurs, brought together more in a spirit of curiosity than of commerce. In 2005, O’Reilly coined the term “web 2.0” to describe a new wave of websites that connected users with each other, rather than with existing offline institutions. Later that year, the domain name facebook.com was purchased by a 21-year-old Harvard student, and the age of the giant social media platforms was born.

Within this short window of time, we can see competing ideas of what a desirable online community might look like. The more idealistic tech gurus who attended ETech insisted that the internet should remain an open public space, albeit one in which select communities could cluster for their own particular purposes, such as creating open-source software projects or Wikipedia entries. The untapped potential of the internet, they believed, was for greater democracy. But for companies such as Facebook, the internet presented an opportunity to collect data about users en masse. The internet’s potential was for greater surveillance. The rise of the giant platforms from 2005 onwards suggested the latter view had won out. And yet, in a strange twist, we are now witnessing a revival of anarchic, self-organising digital groups – only now, in the hands of Facebook as well. The two competing visions have collided.

 
Mark Zuckerberg talking about privacy at a Facebook conference in 2019. Photograph: Amy Osborne/AFP/Getty Images

To see how this story unfolded, it’s worth going back to 2003. At the ETech conference that year, a keynote speech was given by the web enthusiast and writer Clay Shirky, now an academic at New York University, which surprised its audience by declaring that the task of designing successful online communities had little to do with technology at all. The talk looked back at one of the most fertile periods in the history of social psychology, and was entitled “A group is its own worst enemy”.

Shirky drew on the work of the British psychoanalyst and psychologist Wilfred Bion, who, together with Kurt Lewin, was one of the pioneers of the study of “group dynamics” in the 40s. The central proposition of this school was that groups possess psychological properties that exist independently of their individual members. In groups, people find themselves behaving in ways that they never would if left to their own devices.

Like Stanley Milgram’s notorious series of experiments to test obedience in the early 60s – in which some participants were persuaded to administer apparently painful electric shocks to others – the mid-20th century concern with group dynamics grew in the shadow of the political horrors of the 30s and 40s, which had posed grave questions about how individuals come to abandon their ordinary sense of morality. Lewin and Bion posited that groups possess distinctive personalities, which emerge organically through the interaction of their members, independently of what rules they might have been given, or what individuals might rationally do alone.

With the dawn of the 60s, and its more individualistic political hopes, psychologists’ interest in groups started to wane. The assumption that individuals are governed by conformity fell by the wayside. When Shirky introduced Bion’s work at the O’Reilly conference in 2003, he was going out on a limb. What he correctly saw was that, in the absence of any explicit structures or rules, online communities were battling against many of the disruptive dynamics that fascinated the psychologists of the 40s.

Shirky highlighted one area of Bion’s work in particular: how groups can spontaneously sabotage their own stipulated purpose. The beauty of early online communities, such as listservs, message boards and wikis, was their spirit of egalitarianism, humour and informality. But these same properties often worked against them when it came to actually getting anything constructive done, and could sometimes snowball into something obstructive or angry. Once the mood of a group was diverted towards jokes, disruption or hostility towards another group, it became very difficult to wrest it back.

Bion’s concerns originated in fear of humanity’s darker impulses, but the vision Shirky was putting to his audience that day was a more optimistic one. If the designers of online spaces could preempt disruptive “group dynamics”, he argued, then it might be possible to support cohesive, productive online communities that remained open and useful at the same time. Like a well-designed park or street, a well-designed online space might nurture healthy sociability without the need for policing, surveillance or closure to outsiders. Between one extreme of anarchic chaos (constant trolling) and another of strict moderation and regulation of discussion (acceding to an authority figure), thinking in terms of group dynamics held out the promise of a social web that was still largely self-organising, but also relatively orderly.

But there was another solution to this same problem waiting in the wings, which would turn out to be world-changing in its consequences: forget group dynamics, and focus on reputation dynamics instead. If someone online has a certain set of offline attributes, such as a job title, an album of tagged photos, a list of friends and an email address, they will behave themselves in ways that are appropriate to all of these fixed public identifiers. Add more and more surveillance into the mix, both by one’s peers and by corporations, and the problem of spontaneous group dynamics disappears. It is easier to hold on to your self-control and your conscience if you are publicly visible, including to friends, extended family and colleagues.

For many of the Californian pioneers of cyberculture, who cherished online communities as an escape from the values and constraints of capitalist society, Zuckerberg’s triumph represents an unmitigated defeat. Corporations were never meant to seize control of this space. As late as 2005, the hope was that the social web would be built around democratic principles and bottom-up communities. Facebook abandoned all of that, by simply turning the internet into a multimedia telephone directory.

The last ETech was held in 2009. Within a decade, Facebook was being accused of pushing liberal democracy to the brink and even destroying truth itself. But as the demands of social media have become more onerous, with each of us curating a profile and projecting an identity, the lure of the autonomous group has resurfaced once again. In some respects, Shirky’s optimistic concern has now become today’s pessimistic one. Partly thanks to WhatsApp, the unmoderated, self-governing, amoral collective – larger than a conversation, smaller than a public – has become a dominant and disruptive political force in our society, much as figures such as Bion and Lewin feared.

Conspiracy theories and paranoid group dynamics were features of political life long before WhatsApp arrived. It makes no sense to blame the app for their existence, any more than it makes sense to blame Facebook for Brexit. But by considering the types of behaviour and social structures that technologies enable and enhance, we get a better sense of some of society’s characteristics and ailments. What are the general tendencies that WhatsApp helps to accelerate?

First of all, there is the problem of conspiracies in general. WhatsApp is certainly an unbeatable conduit for circulating conspiracy theories, but we must also admit that it seems to be an excellent tool for facilitating genuinely conspiratorial behaviour. One of the great difficulties when considering conspiracy theories in today’s world is that, regardless of WhatsApp, some conspiracies turn out to be true: consider Libor-fixing, phone-hacking, or efforts by Labour party officials to thwart Jeremy Corbyn’s electoral prospects. These all happened, but anyone suggesting them would have sounded like a conspiracy theorist until evidence later confirmed them.

A communication medium that connects groups of up to 256 people, without any public visibility, operating via the phones in their pockets, is, by its very nature, well-suited to supporting secrecy. Obviously not every group chat counts as a “conspiracy”. But it makes the question of how society coheres, who is associated with whom, into a matter of speculation – something that involves a trace of conspiracy theory. In that sense, WhatsApp is not just a channel for the circulation of conspiracy theories, but offers content for them as well. The medium is the message.

The full political potential of WhatsApp has not been witnessed in the UK. To date, it has not served as an effective political campaigning tool, partly because users seem reluctant to join large groups with people they don’t know. However, the influence – imagined or real – of WhatsApp groups within Westminster and the media undoubtedly contributes to the deepening sense that public life is a sham, behind which lurk invisible networks through which power is coordinated. WhatsApp has become a kind of “backstage” of public life, where it is assumed people articulate what they really think and believe in secret. This is a sensibility that has long fuelled conspiracy theories, especially antisemitic ones. Invisible WhatsApp groups now offer a modern update to the type of “explanation” that once revolved around Masonic lodges or the Rothschilds.

Away from the world of party politics and news media, there is the prospect of a society organised as a tapestry of overlapping cliques, each with their own internal norms. Groups are less likely to encourage heterodoxy or risk-taking, and more likely to inculcate conformity, albeit often to a set of norms hostile to those of the “mainstream”, whether that be the media, politics or professional public servants simply doing their jobs. In the safety of the group, it becomes possible to have one’s cake and eat it, to be simultaneously radical and orthodox, hyper-sceptical and yet unreflective.

For all the benefits that WhatsApp offers in helping people feel close to others, its rapid ascendancy is one further sign of how a common public world – based upon verified facts and recognised procedures – is disintegrating. WhatsApp is well equipped to support communications on the margins of institutions and public discussion: backbenchers plotting coups, parents gossiping about teachers, friends sharing edgy memes, journalists circulating rumours, family members forwarding on unofficial medical advice. A society that only speaks honestly on the margins like this will find it harder to sustain the legitimacy of experts, officials and representatives who, by definition, operate in the spotlight. Meanwhile, distrust, alienation and conspiracy theories become the norm, chipping away at the institutions that might hold us together.

Monday 31 December 2018

We tell ourselves we choose our own life course, but is this ever true? The role of universities and advertising explored

By abetting the ad industry, universities are leading us into temptation, when they should be enlightening us, writes George Monbiot in The Guardian

 

To what extent do we decide? We tell ourselves we choose our own life course, but is this ever true? If you or I had lived 500 years ago, our worldview, and the decisions we made as a result, would have been utterly different. Our minds are shaped by our social environment, in particular the belief systems projected by those in power: monarchs, aristocrats and theologians then; corporations, billionaires and the media today.

Humans, the supremely social mammals, are ethical and intellectual sponges. We unconsciously absorb, for good or ill, the influences that surround us. Indeed, the very notion that we might form our own minds is a received idea that would have been quite alien to most people five centuries ago. This is not to suggest we have no capacity for independent thought. But to exercise it, we must – consciously and with great effort – swim against the social current that sweeps us along, mostly without our knowledge. 

Also watch: The Day The Universe Changed

Surely, though, even if we are broadly shaped by the social environment, we control the small decisions we make? Sometimes. Perhaps. But here, too, we are subject to constant influence, some of which we see, much of which we don’t. And there is one major industry that seeks to decide on our behalf. Its techniques get more sophisticated every year, drawing on the latest findings in neuroscience and psychology. It is called advertising.

Every month, new books on the subject are published with titles like The Persuasion Code: How Neuromarketing Can Help You Persuade Anyone, Anywhere, Anytime. While many are doubtless overhyped, they describe a discipline that is rapidly closing in on our minds, making independent thought ever harder. More sophisticated advertising meshes with digital technologies designed to eliminate agency.

Earlier this year, the child psychologist Richard Freed explained how new psychological research has been used to develop social media, computer games and phones with genuinely addictive qualities. He quoted a technologist who boasts, with apparent justification: “We have the ability to twiddle some knobs in a machine learning dashboard we build, and around the world hundreds of thousands of people are going to quietly change their behaviour in ways that, unbeknownst to them, feel second-nature but are really by design.”

The purpose of this brain hacking is to create more effective platforms for advertising. But the effort is wasted if we retain our ability to resist it. Facebook, according to a leaked report, carried out research – shared with an advertiser – to determine when teenagers using its network feel insecure, worthless or stressed. These appear to be the optimum moments for hitting them with a micro-targeted promotion. Facebook denied that it offered “tools to target people based on their emotional state”.

We can expect commercial enterprises to attempt whatever lawful ruses they can pull off. It is up to society, represented by government, to stop them, through the kind of regulation that has so far been lacking. But what puzzles and disgusts me even more than this failure is the willingness of universities to host research that helps advertisers hack our minds. The Enlightenment ideal, which all universities claim to endorse, is that everyone should think for themselves. So why do they run departments in which researchers explore new means of blocking this capacity?


 ‘Facebook, according to a leaked report, developed tools to determine when teenagers using its network feel insecure, worthless or stressed.’ Photograph: Alamy Stock Photo

I ask because, while considering the frenzy of consumerism that rises beyond its usual planet-trashing levels at this time of year, I recently stumbled across a paper that astonished me. It was written by academics at public universities in the Netherlands and the US. Their purpose seemed to me starkly at odds with the public interest. They sought to identify “the different ways in which consumers resist advertising, and the tactics that can be used to counter or avoid such resistance”.

Among the “neutralising” techniques it highlighted were “disguising the persuasive intent of the message”; distracting our attention by using confusing phrases that make it harder to focus on the advertiser’s intentions; and “using cognitive depletion as a tactic for reducing consumers’ ability to contest messages”. This means hitting us with enough advertisements to exhaust our mental resources, breaking down our capacity to think.

Intrigued, I started looking for other academic papers on the same theme, and found an entire literature. There were articles on every imaginable aspect of resistance, and helpful tips on overcoming it. For example, I came across a paper that counsels advertisers on how to rebuild public trust when the celebrity they work with gets into trouble. Rather than dumping this lucrative asset, the researchers advised that the best means to enhance “the authentic persuasive appeal of a celebrity endorser” whose standing has slipped is to get them to display “a Duchenne smile”, otherwise known as “a genuine smile”. It precisely anatomised such smiles, showed how to spot them, and discussed the “construction” of sincerity and “genuineness”: a magnificent exercise in inauthentic authenticity.




Another paper considered how to persuade sceptical people to accept a company’s corporate social responsibility claims, especially when these claims conflict with the company’s overall objectives. (An obvious example is ExxonMobil’s attempts to convince people that it is environmentally responsible, because it is researching algal fuels that could one day reduce CO2 – even as it continues to pump millions of barrels of fossil oil a day). I hoped the paper would recommend that the best means of persuading people is for a company to change its practices. Instead, the authors’ research showed how images and statements could be cleverly combined to “minimise stakeholder scepticism”.

A further paper discussed advertisements that work by stimulating Fomo – fear of missing out. It noted that such ads work through “controlled motivation”, which is “anathema to wellbeing”. Fomo ads, the paper explained, tend to cause significant discomfort to those who notice them. It then went on to show how an improved understanding of people’s responses “provides the opportunity to enhance the effectiveness of Fomo as a purchase trigger”. One tactic it proposed is to keep stimulating the fear of missing out, during and after the decision to buy. This, it suggested, will make people more susceptible to further ads on the same lines.

Yes, I know: I work in an industry that receives most of its income from advertising, so I am complicit in this too. But so are we all. Advertising – with its destructive impacts on the living planet, our peace of mind and our free will – sits at the heart of our growth-based economy. This gives us all the more reason to challenge it. Among the places in which the challenge should begin are universities, and the academic societies that are supposed to set and uphold ethical standards. If they cannot swim against the currents of constructed desire and constructed thought, who can?

Thursday 12 October 2017

Data is not the new oil

How do you know when a pithy phrase or seductive idea has become fashionable in policy circles? When The Economist devotes a briefing to it.


Amol Rajan for the BBC

In a briefing and accompanying editorial earlier this summer, that distinguished newspaper (it's a magazine, but still calls itself a newspaper, and I'm happy to indulge such eccentricity) argued that data is today what oil was a century ago.

As The Economist put it, "A new commodity spawns a lucrative, fast-growing industry, prompting anti-trust regulators to step in to restrain those who control its flow." Never mind that data isn't particularly new (though the volume may be) – this argument does, at first glance, have much to recommend it.

Just as a century ago those who got to the oil in the ground were able to amass vast wealth, establish near monopolies, and build the future economy on their own precious resource, so data companies like Facebook and Google are able to do similar now. With oil in the 20th century, a consensus eventually grew that it would be up to regulators to intervene and break up the oligopolies – or oiliogopolies – that threatened an excessive concentration of power.

Many impressive thinkers have detected similarities between data today and oil in yesteryear. John Thornhill, the Financial Times's Innovation Editor, has used the example of Alaska to argue that data companies should pay a universal basic income, another idea that has become highly fashionable in policy circles.

A drilling crew poses for a photograph at Spindletop Hill in Beaumont, Texas, where the first Texas oil gusher was discovered in 1901. Photograph: Getty Images

At first I was taken by the parallels between data and oil. But now I'm not so sure. As I argued in a series of tweets last week, there are such important differences between data today and oil a century ago that the comparison, while catchy, risks spreading a misunderstanding of how these new technology super-firms operate – and what to do about their power.

The first big difference is one of supply. There is a finite amount of oil in the ground, albeit that is still plenty, and we probably haven't found all of it. But data is virtually infinite. Its supply is super-abundant. In terms of basic supply, data is more like sunlight than oil: there is so much of it that our principal concern should be more what to do with it than where to find more, or how to share that which we've already found.

Data can also be re-used, and the same data can be used by different people for different reasons. Say I invent a new email address. I might use it to register for a music service, where I leave a footprint of my taste in music; a social media platform, on which I upload photos of my baby son; and a search engine, where I indulge my fascination with reggae.

If, through that email address, a data company were able to access information about me or my friends, the music service, the social network and the search engine might all benefit from that one email address and all that is connected to it. This is different from oil. If a major oil company gets to an oil field in, say, Texas, it alone will have control of the oil there – and once it has used it up, it's gone.
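
As a toy illustration of that non-rivalrous quality, here is a short Python sketch; the services, the email address and the records are all invented for the example:

    # Hypothetical records held by three separate services,
    # all keyed on the same invented email address.
    EMAIL = "listener@example.com"

    music_service = {EMAIL: ["reggae", "The Wailers", "The Pioneers"]}
    social_network = {EMAIL: ["baby_photo_1.jpg", "baby_photo_2.jpg"]}
    search_engine = {EMAIL: ["history of reggae", "The Ethiopians"]}

    # Each service can draw on the same identifier independently;
    # unlike a barrel of oil, reading the data does not use it up.
    combined_profile = {
        "music_tastes": music_service[EMAIL],
        "photos": social_network[EMAIL],
        "searches": search_engine[EMAIL],
    }
    print(combined_profile)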


Legitimate fears

This points to another key difference: who controls the commodity. There are very legitimate fears about the use and abuse of personal data online – for instance, by foreign powers trying to influence elections. And very few people have a really clear idea about the digital footprint they have left online. If they did know, they might become obsessed with security. I know a few data fanatics who own several phones and indulge data-savvy habits, such as avoiding all text messages in favour of WhatsApp, which is encrypted.

But data is something which – in theory if not in practice – the user can control, and which ideally – though again the practice falls well short – spreads by consent. Going back to that oil company, it's largely up to them how they deploy the oil in the ground beneath Texas: how many barrels they take out every day, what price they sell it for, who they sell it to.

With my email address, it's up to me whether to give it to that music service, social network, or search engine. If I don't want people to know that I have an unhealthy obsession with bands such as The Wailers, The Pioneers and The Ethiopians, I can keep digitally schtum.

Now, I realise that in practice, very few people feel they have control over their personal data online; and retrieving your data isn't exactly easy. If I tried to reclaim, or wipe from the face of the earth, all the personal data that I've handed over to data companies, it'd be a full-time job for the rest of my life and I'd never actually achieve it. That said, it is largely as a result of my choices that these firms have so much of my personal data.

Servers for data storage in Hafnarfjordur, Iceland, which is trying to make a name for itself in the business of data centres – warehouses that consume enormous amounts of energy to store the information of 3.2 billion internet users. Photograph: Getty Images

The final key difference is that the data industry is much faster to evolve than the oil industry was. Innovation is in the very DNA of big data companies, some of whose lifespans are pitifully short. As a result, regulation is much harder. That briefing in The Economist actually makes the point well that a previous model of regulation may not necessarily work for these new companies, which are forever adapting. That is not to say they should not be regulated; rather, that regulating them is something we haven't yet worked out how to do.

It is because the debate over regulation of these companies is so live that I think we need to interrogate superficially attractive ideas such as 'data is the new oil'. In fact, whereas finite but plentiful oil supplied a raw material for the industrial economy, data is a super-abundant resource in a post-industrial economy. Data companies increasingly control, and redefine, the nature of our public domain, rather than power our transport, or heat our homes.

Data today has something important in common with oil a century ago. But the tech titans are more media moguls than oil barons.

Tuesday 19 September 2017

If engineers are allowed to rule the world...

How technology is making our minds redundant.

Franklin Foer in The Guardian

All the values that Silicon Valley professes are the values of the 60s. The big tech companies present themselves as platforms for personal liberation. Everyone has the right to speak their mind on social media, to fulfil their intellectual and democratic potential, to express their individuality. Where television had been a passive medium that rendered citizens inert, Facebook is participatory and empowering. It allows users to read widely, think for themselves and form their own opinions.

We can’t entirely dismiss this rhetoric. There are parts of the world, even in the US, where Facebook emboldens citizens and enables them to organise themselves in opposition to power. But we shouldn’t accept Facebook’s self-conception as sincere, either. Facebook is a carefully managed top-down system, not a robust public square. It mimics some of the patterns of conversation, but that’s a surface trait.

In reality, Facebook is a tangle of rules and procedures for sorting information, rules devised by the corporation for the ultimate benefit of the corporation. Facebook is always surveilling users, always auditing them, using them as lab rats in its behavioural experiments. While it creates the impression that it offers choice, in truth Facebook paternalistically nudges users in the direction it deems best for them, which also happens to be the direction that gets them thoroughly addicted. It’s a phoniness that is most obvious in the compressed, historic career of Facebook’s mastermind.

Mark Zuckerberg is a good boy, but he wanted to be bad, or maybe just a little bit naughty. The heroes of his adolescence were the original hackers. These weren’t malevolent data thieves or cyberterrorists. Zuckerberg’s hacker heroes were disrespectful of authority. They were technically virtuosic, infinitely resourceful nerd cowboys, unbound by conventional thinking. In the labs of the Massachusetts Institute of Technology (MIT) during the 60s and 70s, they broke any rule that interfered with building the stuff of early computing, such marvels as the first video games and word processors. With their free time, they played epic pranks, which happened to draw further attention to their own cleverness – installing a living cow on the roof of a Cambridge dorm; launching a weather balloon, which miraculously emerged from beneath the turf, emblazoned with “MIT”, in the middle of a Harvard-Yale football game.

The hackers’ archenemies were the bureaucrats who ran universities, corporations and governments. Bureaucrats talked about making the world more efficient, just like the hackers. But they were really small-minded paper-pushers who fiercely guarded the information they held, even when that information yearned to be shared. When hackers clearly engineered better ways of doing things – a box that enabled free long-distance calls, an instruction that might improve an operating system – the bureaucrats stood in their way, wagging an unbending finger. The hackers took aesthetic and comic pleasure in outwitting the men in suits.

When Zuckerberg arrived at Harvard in the fall of 2002, the heyday of the hackers had long passed. They were older guys now, the stuff of good tales, some stuck in twilight struggles against The Man. But Zuckerberg wanted to hack, too, and with that old-time indifference to norms. In high school he picked the lock that prevented outsiders from fiddling with AOL’s code and added his own improvements to its instant messaging program. As a college sophomore he hatched a site called Facemash – with the high-minded purpose of determining the hottest kid on campus. Zuckerberg asked users to compare images of two students and then determine the better-looking of the two. The winner of each pairing advanced to the next round of his hormonal tournament. To cobble this site together, Zuckerberg needed photos. He purloined those from the servers of the various Harvard houses. “One thing is certain,” he wrote on a blog as he put the finishing touches on his creation, “and it’s that I’m a jerk for making this site. Oh well.”

His brief experimentation with rebellion ended with his apologising to a Harvard disciplinary panel, as well as to campus women’s groups, and mulling strategies to redeem his soiled reputation. In the years since, he has shown that defiance really wasn’t his natural inclination. His distrust of authority was such that he sought out Don Graham, then the venerable chairman of the Washington Post company, as his mentor. After he started Facebook, he shadowed various giants of corporate America so that he could study their managerial styles up close.

Still, Zuckerberg’s juvenile fascination with hackers never died – or rather, he carried it forward into his new, more mature incarnation. When he finally had a corporate campus of his own, he procured a vanity address for it: One Hacker Way. He designed a plaza with the word “HACK” inlaid into the concrete. In the centre of his office park, he created an open meeting space called Hacker Square. This is, of course, the venue where his employees gather for all-night Hackathons. As he told a group of would-be entrepreneurs, “We’ve got this whole ethos that we want to build a hacker culture.”

Plenty of companies have similarly appropriated hacker culture – hackers are the ur-disrupters – but none have gone as far as Facebook. By the time Zuckerberg began extolling the virtues of hacking, he had stripped the name of most of its original meaning and distilled it into a managerial philosophy that contains barely a hint of rebelliousness. Hackers, he told one interviewer, were “just this group of computer scientists who were trying to quickly prototype and see what was possible. That’s what I try to encourage our engineers to do here.” To hack is to be a good worker, a responsible Facebook citizen – a microcosm of the way in which the company has taken the language of radical individualism and deployed it in the service of conformism.

Zuckerberg claimed to have distilled that hacker spirit into a motivational motto: “Move fast and break things.” The truth is that Facebook moved faster than Zuckerberg could ever have imagined. His company was, as we all know, a dorm-room lark, a thing he ginned up in a Red Bull–induced fit of sleeplessness. As his creation grew, it needed to justify its new scale to its investors, to its users, to the world. It needed to grow up fast. Over the span of its short life, the company has caromed from self-description to self-description. It has called itself a tool, a utility and a platform. It has talked about openness and connectedness. And in all these attempts at defining itself, it has managed to clarify its intentions.

Facebook creators Mark Zuckerberg and Chris Hughes at Harvard in May 2004. Photograph: Rick Friedman/Corbis via Getty

Though Facebook will occasionally talk about the transparency of governments and corporations, what it really wants to advance is the transparency of individuals – or what it has called, at various moments, “radical transparency” or “ultimate transparency”. The theory holds that the sunshine of sharing our intimate details will disinfect the moral mess of our lives. With the looming threat that our embarrassing information will be broadcast, we’ll behave better. And perhaps the ubiquity of incriminating photos and damning revelations will prod us to become more tolerant of one another’s sins. “The days of you having a different image for your work friends or co-workers and for the other people you know are probably coming to an end pretty quickly,” Zuckerberg has said. “Having two identities for yourself is an example of a lack of integrity.”

The point is that Facebook has a strong, paternalistic view on what’s best for you, and it’s trying to transport you there. “To get people to this point where there’s more openness – that’s a big challenge. But I think we’ll do it,” Zuckerberg has said. He has reason to believe that he will achieve that goal. With its size, Facebook has amassed outsized powers. “In a lot of ways Facebook is more like a government than a traditional company,” Zuckerberg has said. “We have this large community of people, and more than other technology companies we’re really setting policies.”

Without knowing it, Zuckerberg is the heir to a long political tradition. Over the last 200 years, the west has been unable to shake an abiding fantasy, a dream sequence in which we throw out the bum politicians and replace them with engineers – rule by slide rule. The French were the first to entertain this notion in the bloody, world-churning aftermath of their revolution. A coterie of the country’s most influential philosophers (notably, Henri de Saint-Simon and Auguste Comte) were genuinely torn about the course of the country. They hated all the ancient bastions of parasitic power – the feudal lords, the priests and the warriors – but they also feared the chaos of the mob. To split the difference, they proposed a form of technocracy – engineers and assorted technicians would rule with beneficent disinterestedness. Engineers would strip the old order of its power, while governing in the spirit of science. They would impose rationality and order.

This dream has captivated intellectuals ever since, especially Americans. The great sociologist Thorstein Veblen was obsessed with installing engineers in power and, in 1921, wrote a book making his case. His vision briefly became a reality. In the aftermath of the first world war, American elites were aghast at all the irrational impulses unleashed by that conflict – the xenophobia, the racism, the urge to lynch and riot. And when the realities of economic life had grown so complicated, how could politicians possibly manage them? Americans of all persuasions began yearning for the salvific ascendance of the most famous engineer of his time: Herbert Hoover. In 1920, Franklin D Roosevelt – who would, of course, go on to replace him in 1932 – organised a movement to draft Hoover for the presidency.

The Hoover experiment, in the end, hardly realised the happy fantasies about the Engineer King. A very different version of this dream, however, has come to fruition, in the form of the CEOs of the big tech companies. We’re not ruled by engineers, not yet, but they have become the dominant force in American life – the highest, most influential tier of our elite.

There’s another way to describe this historical progression. Automation has come in waves. During the industrial revolution, machinery replaced manual workers. At first, machines required human operators. Over time, machines came to function with hardly any human intervention. For centuries, engineers automated physical labour; our new engineering elite has automated thought. They have perfected technologies that take over intellectual processes, that render the brain redundant. Or, as the former Google and Yahoo executive Marissa Mayer once argued, “You have to make words less human and more a piece of the machine.” Indeed, we have begun to outsource our intellectual work to companies that suggest what we should learn, the topics we should consider, and the items we ought to buy. These companies can justify their incursions into our lives with the very arguments that Saint-Simon and Comte articulated: they are supplying us with efficiency; they are imposing order on human life.

Nobody better articulates the modern faith in engineering’s power to transform society than Zuckerberg. He told a group of software developers, “You know, I’m an engineer, and I think a key part of the engineering mindset is this hope and this belief that you can take any system that’s out there and make it much, much better than it is today. Anything, whether it’s hardware or software, a company, a developer ecosystem – you can take anything and make it much, much better.” The world will improve, if only Zuckerberg’s reason can prevail – and it will.

The precise source of Facebook’s power is algorithms. That’s a concept repeated dutifully in nearly every story about the tech giants, yet it remains fuzzy at best to users of those sites. From the moment of the algorithm’s invention, it was possible to see its power, its revolutionary potential. The algorithm was developed in order to automate thinking, to remove difficult decisions from the hands of humans, to settle contentious debates.

The essence of the algorithm is entirely uncomplicated. Textbooks compare algorithms to recipes – a series of precise steps that can be followed mindlessly. This is different from equations, which have one correct result. Algorithms merely capture the process for solving a problem and say nothing about where those steps ultimately lead.

These recipes are the crucial building blocks of software. Programmers can’t simply order a computer to, say, search the internet. They must give the computer a set of specific instructions for accomplishing that task. These instructions must take the messy human activity of looking for information and transpose that into an orderly process that can be expressed in code. First do this … then do that. The process of translation, from concept to procedure to code, is inherently reductive. Complex processes must be subdivided into a series of binary choices. There’s no equation to suggest a dress to wear, but an algorithm could easily be written for that – it will work its way through a series of either/or questions (morning or night, winter or summer, sun or rain), with each choice pushing to the next.
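
Since the passage notes that an algorithm could easily be written for that, here is a minimal sketch of one in Python, using the passage's own either/or questions; the outfits returned are invented purely for illustration:

    def suggest_dress(time_of_day, season, weather):
        """Work through a chain of either/or questions,
        each answer pushing on to the next choice."""
        if time_of_day == "night":    # morning or night?
            return "evening dress"
        if season == "winter":        # winter or summer?
            return "wool dress and coat"
        if weather == "rain":         # sun or rain?
            return "summer dress with a raincoat"
        return "light sundress"

    print(suggest_dress("morning", "summer", "sun"))  # light sundress

Nothing in it is an equation with one correct result; the function simply encodes a sequence of binary choices – exactly the reductive translation from concept to procedure described above.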

For the first decades of computing, the term “algorithm” wasn’t much mentioned. But as computer science departments began sprouting across campuses in the 60s, the term acquired a new cachet. Its vogue was the product of status anxiety. Programmers, especially in the academy, were anxious to show that they weren’t mere technicians. They began to describe their work as algorithmic, in part because it tied them to one of the greatest of all mathematicians – the Persian polymath Muhammad ibn Musa al-Khwarizmi, or as he was known in Latin, Algoritmi. During the 12th century, translations of al-Khwarizmi introduced Arabic numerals to the west; his treatises pioneered algebra and trigonometry. By describing the algorithm as the fundamental element of programming, the computer scientists were attaching themselves to a grand history. It was a savvy piece of name-dropping: See, we’re not arrivistes, we’re working with abstractions and theories, just like the mathematicians!

A statue of the mathematician Muhammad ibn Musa al-Khwarizmi in Uzbekistan. Photograph: Alamy

There was sleight of hand in this self-portrayal. The algorithm may be the essence of computer science – but it’s not precisely a scientific concept. An algorithm is a system, like plumbing or a military chain of command. It takes knowhow, calculation and creativity to make a system work properly. But some systems, like some armies, are much more reliable than others. A system is a human artefact, not a mathematical truism. The origins of the algorithm are unmistakably human, but human fallibility isn’t a quality that we associate with it. When algorithms reject a loan application or set the price for an airline flight, they seem impersonal and unbending. The algorithm is supposed to be devoid of bias, intuition, emotion or forgiveness.

Silicon Valley’s algorithmic enthusiasts were immodest about describing the revolutionary potential of their objects of affection. Algorithms were always interesting and valuable, but advances in computing made them infinitely more powerful. The big change was the cost of computing: it collapsed, just as the machines themselves sped up and were tied into a global network. Computers could stockpile massive troves of unsorted data – and algorithms could attack this data to find patterns and connections that would escape human analysts. In the hands of Google and Facebook, these algorithms grew ever more powerful. As they went about their searches, they accumulated more and more data. Their machines assimilated all the lessons of past searches, using them to deliver the desired results ever more precisely.

For the entirety of human existence, the creation of knowledge was a slog of trial and error. Humans would dream up theories of how the world worked, then would examine the evidence to see whether their hypotheses survived or crashed upon their exposure to reality. Algorithms upend the scientific method – the patterns emerge from the data, from correlations, unguided by hypotheses. They remove humans from the whole process of inquiry. Writing in Wired, Chris Anderson, then editor-in-chief, argued: “We can stop looking for models. We can analyse the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.”

On one level, this is undeniable. Algorithms can translate languages without understanding words, simply by uncovering the patterns that undergird the construction of sentences. They can find coincidences that humans might never even think to seek. Walmart’s algorithms found that people desperately buy strawberry Pop-Tarts as they prepare for massive storms.
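
A toy version of that hypothesis-free pattern-hunting might look like the sketch below. The shopping logs are invented; nothing about Pop-Tarts goes in, the association simply falls out of the counts.

from collections import Counter

# Invented transaction logs: baskets bought on storm days v ordinary days.
storm_day_baskets = [["water", "pop_tarts", "batteries"],
                     ["pop_tarts", "bread"],
                     ["water", "pop_tarts"]]
normal_day_baskets = [["milk", "bread"], ["milk", "eggs"], ["bread"]]

storm_counts = Counter(item for basket in storm_day_baskets for item in basket)
normal_counts = Counter(item for basket in normal_day_baskets for item in basket)

# Flag items far more common before storms; the threshold is arbitrary.
for item, count in storm_counts.items():
    storm_rate = count / len(storm_day_baskets)
    normal_rate = normal_counts[item] / len(normal_day_baskets)
    if storm_rate > 2 * normal_rate:
        print(item, "spikes before storms")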

Still, even as an algorithm mindlessly implements its procedures – and even as it learns to see new patterns in the data – it reflects the minds of its creators, the motives of its trainers. Amazon and Netflix use algorithms to make recommendations about books and films. (One-third of purchases on Amazon come from these recommendations.) These algorithms seek to understand our tastes, and the tastes of like-minded consumers of culture. Yet the algorithms make fundamentally different recommendations. Amazon steers you to the sorts of books that you’ve seen before. Netflix directs users to the unfamiliar. There’s a business reason for this difference. Blockbuster movies cost Netflix more to stream. Greater profit arrives when you decide to watch more obscure fare. Computer scientists have an aphorism that describes how algorithms relentlessly hunt for patterns: they talk about torturing the data until it confesses. Yet this metaphor contains unexamined implications. Data, like victims of torture, tells its interrogator what it wants to hear.
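
A sketch can show how two recommenders may share one notion of taste and still steer users apart. The catalogue, the scores and the cost term below are all invented, and neither company’s production system looks like this; the point is only that a single business term in the objective changes the recommendation.

# Two invented recommenders over an invented two-item catalogue.
catalogue = {
    "blockbuster": {"match": 0.9, "streaming_cost": 0.8},
    "obscure_gem": {"match": 0.7, "streaming_cost": 0.1},
}

def familiar_first(items):
    # steer towards whatever best matches your history
    return max(items, key=lambda title: items[title]["match"])

def cheap_to_serve_first(items):
    # trade a little taste-match for margin
    return max(items,
               key=lambda title: items[title]["match"] - items[title]["streaming_cost"])

print(familiar_first(catalogue))        # -> blockbuster
print(cheap_to_serve_first(catalogue))  # -> obscure_gem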

Like economics, computer science has its preferred models and implicit assumptions about the world. When programmers are taught algorithmic thinking, they are told to venerate efficiency as a paramount consideration. This is perfectly understandable. An algorithm with an ungainly number of steps will gum up the machinery, and a molasses-like server is a useless one. But efficiency is also a value. When we speed things up, we’re necessarily cutting corners; we’re generalising.
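
One hedged illustration of that corner-cutting, with randomly generated stand-in data: the fast path reads a thousand records instead of a million, and pays for its speed with a small error.

import random

# A million invented records; the "slow" path touches every one.
records = [random.gauss(50, 10) for _ in range(1_000_000)]
exact = sum(records) / len(records)

# The "fast" path generalises from a sample - quicker, slightly wrong.
sample = random.sample(records, 1_000)
approx = sum(sample) / len(sample)

print(round(exact, 2), round(approx, 2))  # close, but not identical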

Algorithms can be gorgeous expressions of logical thinking, not to mention a source of ease and wonder. They can track down copies of obscure 19th-century tomes in a few milliseconds; they put us in touch with long-lost elementary school friends; they enable retailers to deliver packages to our doors in a flash. Very soon, they will guide self-driving cars and pinpoint cancers growing in our innards. But to do all these things, algorithms are constantly taking our measure. They make decisions about us and on our behalf. The problem is that when we outsource thinking to machines, we are really outsourcing thinking to the organisations that run the machines.

Mark Zuckerberg disingenuously poses as a friendly critic of algorithms. That’s how he implicitly contrasts Facebook with his rivals across the way at Google. Over in Larry Page’s shop, the algorithm is king – a cold, pulseless ruler. There’s not a trace of life force in its recommendations, and very little apparent understanding of the person keying a query into its engine. Facebook, in his flattering self-portrait, is a respite from this increasingly automated, atomistic world. “Every product you use is better off with your friends,” he says.

What he is referring to is Facebook’s news feed. Here’s a brief explanation for the sliver of humanity who have apparently resisted Facebook: the news feed is the continuously updated stream of status updates, articles and photos that your friends have posted to Facebook. The news feed is meant to be fun, but it is also geared to solve one of the essential problems of modernity – our inability to sift through the ever-growing, always-looming mounds of information. Who better, the theory goes, to recommend what we should read and watch than our friends? Zuckerberg has boasted that the news feed turned Facebook into a “personalised newspaper”.

Unfortunately, our friends can do only so much to winnow things for us. It turns out they like to share a lot. If we simply read their musings and followed links to articles, we might be only a little less overwhelmed than before – or perhaps even deeper underwater. So Facebook makes its own choices about what we should read. The company’s algorithms sort the thousands of things a Facebook user could possibly see down to a smaller batch of choice items, and then, within those few dozen items, they decide what we might like to read first.
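
In outline, and only in outline, that sorting pass resembles the sketch below. The three signals and their weights are hypothetical stand-ins for the vastly larger set the real system is said to consult.

# A drastically simplified feed-ranking pass. Signals and weights
# are hypothetical; the real system reportedly weighs 100,000-odd.
weights = {"closeness_to_poster": 2.0, "past_clicks_on_topic": 1.5, "recency": 1.0}

def score(post):
    return sum(weights[signal] * post[signal] for signal in weights)

candidate_posts = [
    {"id": "wedding_photo", "closeness_to_poster": 0.9,
     "past_clicks_on_topic": 0.2, "recency": 0.5},
    {"id": "news_link", "closeness_to_poster": 0.1,
     "past_clicks_on_topic": 0.8, "recency": 0.9},
]

feed = sorted(candidate_posts, key=score, reverse=True)
print([post["id"] for post in feed])  # -> wedding_photo first

On this toy model, the dials of the coming paragraphs are simply the weights: nudge recency from 1.0 to 3.0 and the whole feed reorders itself.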

Algorithms are, by definition, invisibilia. But we can usually sense their presence – that somewhere in the distance, we’re interacting with a machine. That’s what makes Facebook’s algorithm so powerful. Many users – 60%, according to the best research – are completely unaware of its existence. And even if they did know of its influence, it wouldn’t really matter. Facebook’s algorithm couldn’t be more opaque. It has grown into an almost unknowable tangle of sprawl. The algorithm interprets more than 100,000 “signals” to make its decisions about what users see. Some of these signals apply to all Facebook users; some reflect users’ particular habits and the habits of their friends. Perhaps Facebook no longer fully understands its own tangle of algorithms – the code, all 60m lines of it, is a palimpsest, where engineers add layer upon layer of new commands.

To grasp the abstraction of this algorithm, imagine one of those earliest computers, with its nervously blinking lights and long rows of dials. To tweak the algorithm, the engineers turn a knob a click or two. The engineers are constantly making small adjustments here and there, so that the machine performs to their satisfaction. With even the gentlest caress of the metaphorical dial, Facebook changes what its users see and read. It can make our friends’ photos more or less ubiquitous; it can punish posts filled with self-congratulatory musings and banish what it deems to be hoaxes; it can promote video rather than text; it can favour articles from the likes of the New York Times or BuzzFeed, if it so desires. Or, if we want to be melodramatic about it, we could say that Facebook is constantly tinkering with how its users view the world – always tinkering with the quality of news and opinion that it allows to break through the din, adjusting the quality of political and cultural discourse in order to hold the attention of users for a few more beats.

But how do the engineers know which dial to twist and how hard? There’s a whole discipline, data science, to guide the writing and revision of algorithms. Facebook has a team, poached from academia, to conduct experiments on users. It’s a statistician’s sexiest dream – some of the largest data sets in human history, the ability to run trials on mathematically meaningful cohorts. When Cameron Marlow, the former head of Facebook’s data science team, described the opportunity, he began twitching with ecstatic joy. “For the first time,” Marlow said, “we have a microscope that not only lets us examine social behaviour at a very fine level that we’ve never been able to see before, but allows us to run experiments that millions of users are exposed to.”

Facebook’s headquarters in Menlo Park, California. Photograph: Alamy

Facebook likes to boast about the fact of its experimentation more than the details of the actual experiments themselves. But there are examples that have escaped the confines of its laboratories. We know, for example, that Facebook sought to discover whether emotions are contagious. To conduct this trial, Facebook attempted to manipulate the mental state of its users. For one group, Facebook excised the positive words from the posts in the news feed; for another group, it removed the negative words. Each group, it concluded, wrote posts that echoed the mood of the posts it had reworded. This study was roundly condemned as invasive, but it is not so unusual. As one member of Facebook’s data science team confessed: “Anyone on that team could run a test. They’re always trying to alter people’s behaviour.”
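
Mechanically, the manipulation described was simple. Here is a sketch assuming a tiny invented wordlist; the actual study worked from large standard sentiment lexicons, and at the scale of hundreds of thousands of feeds.

# Invented three-word lexicon; the real study used standard wordlists.
POSITIVE_WORDS = {"happy", "great", "love"}

def strip_words(post, words):
    # remove target words from a post, as in the trial described above
    return " ".join(w for w in post.split() if w.lower() not in words)

print(strip_words("So happy about this great weekend", POSITIVE_WORDS))
# -> "So about this weekend"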

There’s no doubting the emotional and psychological power possessed by Facebook – or, at least, Facebook doesn’t doubt it. It has bragged about how it increased voter turnout (and organ donation) by subtly amping up the social pressures that compel virtuous behaviour. Facebook has even touted the results from these experiments in peer-reviewed journals: “It is possible that more of the 0.60% growth in turnout between 2006 and 2010 might have been caused by a single message on Facebook,” said one study published in Nature in 2012. No other company has made such claims about its ability to shape democracy – and for good reason. It’s too much power to entrust to a corporation.

The many Facebook experiments add up. The company believes that it has unlocked social psychology and acquired a deeper understanding of its users than they possess of themselves. Facebook can predict users’ race, sexual orientation, relationship status and drug use on the basis of their “likes” alone. It’s Zuckerberg’s fantasy that this data might be analysed to uncover the mother of all revelations, “a fundamental mathematical law underlying human social relationships that governs the balance of who and what we all care about”. That is, of course, a goal in the distance. In the meantime, Facebook will keep probing – constantly testing to see what we crave and what we ignore, a never-ending campaign to improve Facebook’s capacity to give us the things that we want and things we don’t even know we want. Whether the information is true or concocted, authoritative reporting or conspiratorial opinion, doesn’t really seem to matter much to Facebook. The crowd gets what it wants and deserves.
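
The shape of that likes-to-traits inference can be caricatured in a few lines. The labelled users and likes below are invented, and the published research used regression models over millions of like-vectors rather than a nearest-match lookup, but the logic is recognisably the same.

# Invented training rows: (set of page likes, known relationship status).
training = [
    ({"motorbikes", "energy_drinks"}, "single"),
    ({"wedding_planning", "home_decor"}, "in_relationship"),
    ({"craft_beer", "dating_apps"}, "single"),
]

def predict_status(user_likes):
    # vote by overlap with the most similar labelled user
    best = max(training, key=lambda row: len(row[0] & user_likes))
    return best[1]

print(predict_status({"energy_drinks", "gaming"}))  # -> single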

The automation of thinking: we’re in the earliest days of this revolution, of course. But we can see where it’s heading. Algorithms have retired many of the bureaucratic, clerical duties once performed by humans – and they will soon begin to replace more creative tasks. At Netflix, algorithms suggest the genres of movies to commission. Some news wires use algorithms to write stories about crime, baseball games and earthquakes – the most rote journalistic tasks. Algorithms have produced fine art and composed symphonic music, or at least approximations of them.

It’s a terrifying trajectory, especially for those of us in these lines of work. If algorithms can replicate the process of creativity, then there’s little reason to nurture human creativity. Why bother with the tortuous, inefficient process of writing or painting if a computer can produce something seemingly as good and in a painless flash? Why nurture the overinflated market for high culture when it could be so abundant and cheap? No human endeavour has resisted automation, so why should creative endeavours be any different?

The engineering mindset has little patience for the fetishisation of words and images, for the mystique of art, for moral complexity or emotional expression. It views humans as data, components of systems, abstractions. That’s why Facebook has so few qualms about performing rampant experiments on its users. The whole effort is to make human beings predictable – to anticipate their behaviour, which makes them easier to manipulate. With this sort of cold-blooded thinking, so divorced from the contingency and mystery of human life, it’s easy to see how long-standing values begin to seem like an annoyance – why a concept such as privacy would carry so little weight in the engineer’s calculus, why the inefficiencies of publishing and journalism seem so eminently disruptable.

Facebook would never put it this way, but algorithms are meant to erode free will, to relieve humans of the burden of choosing, to nudge them in the right direction. Algorithms fuel a sense of omnipotence, the condescending belief that our behaviour can be altered, without our even being aware of the hand guiding us, in a superior direction. That’s always been a danger of the engineering mindset, as it moves beyond its roots in building inanimate stuff and begins to design a more perfect social world. We are the screws and rivets in the grand design.

Sunday 3 September 2017

Silicon Valley has been humbled. But its schemes are as dangerous as ever

Sex scandals, rows over terrorism, fears for its impact on social policy: the backlash against Big Tech has begun. Where will it end?


Evgeny Morozov in The Guardian


Just a decade ago, Silicon Valley pitched itself as a savvy ambassador of a newer, cooler, more humane kind of capitalism. It quickly became the darling of the elite, of the international media, and of that mythical, omniscient tribe: the “digital natives”. While an occasional critic – always easy to dismiss as a neo-Luddite – did voice concerns about their disregard for privacy or their geeky, almost autistic aloofness, public opinion was firmly on the side of technology firms.

Silicon Valley was the best that America had to offer; tech companies frequently occupied – and still do – top spots on lists of the world’s most admired brands. And there was much to admire: a highly dynamic, innovative industry, Silicon Valley had found a way to convert scrolls, likes and clicks into lofty political ideals, helping to export freedom, democracy and human rights to the Middle East and north Africa. Who knew that the only thing thwarting the global democratic revolution was capitalism’s inability to capture and monetise the eyeballs of strangers?

How things have changed. An industry once hailed for fuelling the Arab spring is today repeatedly accused of abetting Islamic State. An industry that prides itself on diversity and tolerance is now regularly in the news for cases of sexual harassment as well as the controversial views of its employees on matters such as gender equality. An industry that built its reputation on offering us free things and services is now regularly assailed for making other things – housing, above all – more expensive.

The Silicon Valley backlash is on. These days, one can hardly open a major newspaper – including such communist rags as the Financial Times and the Economist – without stumbling on passionate calls that demand curbs on the power of what is now frequently called “Big Tech”, from reclassifying digital platforms as utility companies to even nationalising them.

Meanwhile, Silicon Valley’s big secret – that the data produced by users of digital platforms often has economic value exceeding the value of the services rendered – is now also out in the open. Free social networking sounds like a good idea – but do you really want to surrender your privacy so that Mark Zuckerberg can run a foundation to rid the world of the problems that his company helps to perpetuate? Not everyone is so sure any longer. The Teflon industry is Teflon no more: the dirt thrown at it finally sticks – and this fact is lost on nobody.

Much of the brouhaha has caught Silicon Valley by surprise. Its ideas – disruption as a service, radical transparency as a way of being, an entire economy of gigs and shares – still dominate our culture. However, its global intellectual hegemony is built on shaky foundations: it rests on the post-political can-do allure of TED talks far more than on wonky thinktank reports and lobbying memorandums.

This is not to say that technology firms do not dabble in lobbying – here Alphabet is on a par with Goldman Sachs – nor to imply that they don’t steer academic research. In fact, on many tech policy issues it’s now difficult to find unbiased academics who have not received some Big Tech funding. Those who go against the grain find themselves in a rather precarious situation, as was recently shown by the fate of the Open Markets project at New America, an influential thinktank in Washington: its strong anti-monopoly stance appears to have angered New America’s chairman and major donor, Eric Schmidt, executive chairman of Alphabet. As a result, it was spun off from the thinktank.

Nonetheless, Big Tech’s political influence is not at the level of Wall Street or Big Oil. It’s hard to argue that Alphabet wields as much power over global technology policy as the likes of Goldman Sachs do over global financial and economic policy. For now, influential politicians – such as José Manuel Barroso, the former president of the European Commission – prefer to continue their careers at Goldman Sachs, not at Alphabet; it is also the former, not the latter, that fills vacant senior posts in Washington.

This will surely change. It’s obvious that the cheerful and utopian chatterboxes who make up TED talks no longer contribute much to boosting the legitimacy of the tech sector; fortunately, there’s a finite supply of bullshit on this planet. Big digital platforms will thus seek to acquire more policy leverage, following the playbook honed by the tobacco, oil and financial firms.

There are, however, two additional factors worth considering in order to understand where the current backlash against Big Tech might lead. First of all, short of a major privacy disaster, digital platforms will continue to be the world’s most admired and trusted brands – not least because they contrast so favourably with your average telecoms company or your average airline (say what you will of their rapaciousness, but tech firms don’t generally drag their customers off their flights).

And it is technology firms – American companies but also Chinese – that create the false impression that the global economy has recovered and everything is back to normal. Since January, the valuations of just four firms – Alphabet, Amazon, Facebook and Microsoft – have grown by an amount greater than the entire GDP of oil-rich Norway. Who would want to see this bubble burst? Nobody; in fact, those in power would rather see it grow some more.

The cultural power of Silicon Valley can be gleaned from the simple fact that no sensible politician dares to go to Wall Street for photo ops; everyone goes to Palo Alto to unveil their latest pro-innovation policy. Emmanuel Macron wants to turn France into a startup, not a hedge fund. There’s no other narrative in town that makes centrist, neoliberal policies look palatable and inevitable at the same time; politicians, however angry they might sound about Silicon Valley’s monopoly power, do not really have an alternative project. It’s not just Macron: from Italy’s Matteo Renzi to Canada’s Justin Trudeau, all mainstream politicians who have claimed to offer a clever break with the past also offer an implicit pact with Big Tech – or, at least, its ideas – in the future.

Second, Silicon Valley, being the home of venture capital, is good at spotting global trends early on. Its cleverest minds had sensed the backlash brewing before the rest of us. They also made the right call in deciding that wonky memos and thinktank reports won’t quell our discontent, and that many other problems – from growing inequality to the general unease about globalisation – will eventually be blamed on an industry that did little to cause them.

Silicon Valley’s brightest minds realised they needed bold proposals – a guaranteed basic income, a tax on robots, experiments with fully privatised cities to be run by technology companies outside of government jurisdiction – that would sow doubt in the minds of those who might otherwise have opted for conventional anti-monopoly legislation. If technology firms can play a constructive role in funding our basic income, if Alphabet or Amazon can run Detroit or New York with the same efficiency that they run their platforms, if Microsoft can infer signs of cancer from our search queries: should we really be putting obstacles in their way?

In the boldness and vagueness of its plans to save capitalism, Silicon Valley might out-TED the TED talks. There are many reasons why such attempts won’t succeed in their grand mission even if they would make these firms a lot of money in the short term and help delay public anger by another decade. The main reason is simple: how could one possibly expect a bunch of rent-extracting enterprises with business models that are reminiscent of feudalism to resuscitate global capitalism and to establish a new New Deal that would constrain the greed of capitalists, many of whom also happen to be the investors behind these firms?

Data might seem infinite but there’s no reason to believe that the enormous profits made from it would simply smooth over the many contradictions of the current economic system. A self-proclaimed caretaker of global capitalism, Silicon Valley is much more likely to end up as its undertaker.

Wednesday 30 August 2017

We need to nationalise Google, Facebook and Amazon. Here’s why

A crisis is looming. These monopoly platforms hoovering up our data have no competition: they’re too big to serve the public interest

Nick Srnicek in The Guardian


For the briefest moment in March 2014, Facebook’s dominance looked under threat. Ello, amid much hype, presented itself as the non-corporate alternative to Facebook. According to the manifesto accompanying its public launch, Ello would never sell your data to third parties, rely on advertising to fund its service, or require you to use your real name.

The hype fizzled out as Facebook continued to expand. Yet Ello’s rapid rise and fall is symptomatic of our contemporary digital world and the monopoly-style power accruing to the 21st century’s new “platform” companies, such as Facebook, Google and Amazon. Their business model lets them siphon off revenues and data at an incredible pace, and consolidate themselves as the new masters of the economy. Monday brought another giant leap as Amazon raised the prospect of an international grocery price war by slashing prices on its first day in charge of the organic retailer Whole Foods.

The platform – an infrastructure that connects two or more groups and enables them to interact – is crucial to these companies’ power. None of them focuses on making things in the way that traditional companies once did. Instead, Facebook connects users, advertisers, and developers; Uber, riders and drivers; Amazon, buyers and sellers.

Reaching a critical mass of users is what makes these businesses successful: the more users, the more useful to users – and the more entrenched – they become. Ello’s rapid downfall occurred because it never reached the critical mass of users required to prompt an exodus from Facebook – whose dominance means that even if you’re frustrated by its advertising and tracking of your data, it’s still likely to be your first choice because that’s where everyone is, and that’s the point of a social network. Likewise with Uber: it makes sense for riders and drivers to use the app that connects them with the biggest number of people, regardless of the sexism of Travis Kalanick, the former chief executive, or the ugly ways in which it controls drivers, or the failures of the company to report serious sexual assaults by its drivers.

Network effects generate momentum that not only helps these platforms survive controversy, but makes it incredibly difficult for insurgents to replace them.
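
A toy simulation makes that momentum visible. Assume, purely for illustration, that each newcomer joins a platform with probability proportional to its current size:

import random

# Invented starting sizes; each of 10,000 newcomers picks a platform
# with probability proportional to how big it already is.
sizes = {"incumbent": 100, "insurgent": 10}
for _ in range(10_000):
    pick = random.choices(list(sizes), weights=list(sizes.values()))[0]
    sizes[pick] += 1

print(sizes)  # the early lead compounds; the insurgent rarely closes the gap

Run it a few times: the incumbent’s head start compounds almost without fail, which is why Ello-style insurgents so rarely break through.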

As a result, we have witnessed the rise of increasingly formidable platform monopolies. Google, Facebook and Amazon are the most important in the west. (China has its own tech ecosystem.) Google controls search, Facebook rules social media, and Amazon leads in e-commerce. And they are now exerting their power over non-platform companies – a tension likely to be exacerbated in the coming decades. Look at the state of journalism: Google and Facebook rake in record ad revenues through sophisticated algorithms, while newspapers and magazines watch advertisers flee and endure mass layoffs, the shuttering of expensive investigative journalism and the collapse of major print titles like the Independent. A similar phenomenon is happening in retail, with Amazon’s dominance undermining old department stores.

These companies’ power over our reliance on data adds a further twist. Data is quickly becoming the 21st-century version of oil – a resource essential to the entire global economy, and the focus of an intense struggle for control. Platforms, as spaces in which two or more groups interact, provide what is in effect an oil rig for data. Every interaction on a platform becomes another data point that can be captured and fed into an algorithm. In this sense, platforms are the only business model built for a data-centric economy.

More and more companies are coming to realise this. We often think of platforms as a tech-sector phenomenon, but the truth is that they are becoming ubiquitous across the economy. Uber is the most prominent example, turning the staid business of taxis into a trendy platform business. Siemens and GE, two powerhouses of the 20th century, are fighting it out to develop a cloud-based system for manufacturing. Monsanto and John Deere, two established agricultural companies, are trying to figure out how to incorporate platforms into farming and food production.


And this poses problems. At the heart of platform capitalism is a drive to extract more data in order to survive. One way is to get people to stay on your platform longer. Facebook is a master at using all sorts of behavioural techniques to foster addictions to its service: how many of us scroll absentmindedly through Facebook, barely aware of it?

Another way is to expand the apparatus of extraction. This helps to explain why Google, ostensibly a search engine company, is moving into the consumer internet of things (Home/Nest), self-driving cars (Waymo), virtual reality (Daydream/Cardboard), and all sorts of other personal services. Each of these is another rich source of data for the company, and another point of leverage over its competitors.

Others have simply bought up smaller companies: Facebook has swallowed Instagram ($1bn), WhatsApp ($19bn), and Oculus ($2bn), while investing in drone-based internet, e-commerce and payment services. It has even developed a tool that warns when a start-up is becoming popular and a possible threat. Google itself is among the most prolific acquirers of new companies, at some stages purchasing a new venture every week. The picture that emerges is of increasingly sprawling empires designed to vacuum up as much data as possible.

But here we get to the real endgame: artificial intelligence (or, less glamorously, machine learning). Some enjoy speculating about wild futures involving a Terminator-style Skynet, but the more realistic challenges of AI are far closer. In the past few years, every major platform company has turned its focus to investing in this field. As the head of corporate development at Google recently said, “We’re definitely AI first.”


All the dynamics of platforms are amplified once AI enters the equation: the insatiable appetite for data, and the winner-takes-all momentum of network effects. And there is a virtuous cycle here: more data means better machine learning, which means better services and more users, which means more data. Currently Google is using AI to improve its targeted advertising, and Amazon is using AI to improve its highly profitable cloud computing business. As one AI company takes a significant lead over competitors, these dynamics are likely to propel it to an increasingly powerful position.
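
The cycle can be caricatured in a few lines of simulation. Every number below is invented; the only point is that a small head start in data compounds.

# Invented flywheel: users generate data, and the data-richer platform
# fields the better service, peeling off 5% of its rival's users a year.
leader_users, rival_users = 1_100, 1_000
leader_data = rival_data = 0

for year in range(1, 6):
    leader_data += leader_users   # each user contributes data
    rival_data += rival_users
    if leader_data > rival_data:  # more data -> better service -> migration
        moved = rival_users // 20
        leader_users += moved
        rival_users -= moved
    print(year, leader_users, rival_users)  # the gap widens every year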

What’s the answer? We’ve only begun to grasp the problem, but in the past, natural monopolies like utilities and railways that enjoy huge economies of scale and serve the common good have been prime candidates for public ownership. The solution to our newfangled monopoly problem lies in this sort of age-old fix, updated for our digital age. It would mean taking back control over the internet and our digital infrastructure, instead of allowing them to be run in the pursuit of profit and power. Tinkering with minor regulations while AI firms amass power won’t do. If we don’t take over today’s platform monopolies, we risk letting them own and control the basic infrastructure of 21st-century society.