
Friday 4 May 2018

Pakistan's Extraordinary Times

Najam Sethi in The Friday Times: “Extraordinary times”


We live in extraordinary times. There are over 100 TV channels and over 5000 newspapers, magazines and news websites in the country. Yet, on Press Freedom Day, Thursday May 3, the shackles that bind us and the gags that silence us must be recorded.

We cannot comment freely on the machinations of the Miltablishment without being roughed up or “disappeared”. We cannot comment freely on the utterances and decisions of the judges without being jailed for contempt. We cannot comment freely on the motives that drive the Pashtun Tahaffuz Movement and other rights-based groups without being berated for anti-state behaviour. We cannot comment freely on the “protests” and “dharnas” of militant religious parties and groups without being accused of “blasphemy” and threatened with death. And so on. The price of freedom is high. There have been over 150 attacks on journalists in the last twelve months, one-third of them in Islamabad, the seat of the “democratically” elected, pro-media government.

We live in extraordinary times. With less than one month to go in the term of the present government, we still do not know who the interim prime minister and chief ministers will be, or whether general elections will be held on time or whether these will be rigged or free and fair.


----Also watch

India and its 'free press'

Yashwant Sinha: "Without a Blink, I Will Ask People to Vote the BJP Out of Power"

--------

We live in extraordinary times. The “hidden hand” is everywhere and nowhere at the same time, pulling the plug on dissenters. For over four years, the democratically elected PMLN government in Balochistan was alive and kicking. One day, suddenly, it was gone in a puff of smoke, replaced by a motley crew of pro-Miltablishment “representatives”. For over three decades, the MQM was alive and kicking. One day, it was splintered into three groups, each vying for the favours of the Miltablishment. For over two decades, Nawaz Sharif was the President of the PMLN and thrice elected prime minister of Pakistan. One day, he was gone for ever. And so on.

We live in extraordinary times. For over five decades, the Peoples Party of the Bhuttos was the main liberal, anti-Miltablishment party in the country. Now, under the Zardaris, it is solidly on the side of the Miltablishment. For over seven decades, the Muslim League has been the main pro-Miltablishment party of the country. Now, under Nawaz Sharif, it is the main anti-Miltablishment party in Pakistan. Indeed, Mr Sharif was long the blue-eyed boy of the Miltablishment. Now he is its chief nemesis.

We live in extraordinary times. A massive political engineering exercise is under way today to thwart some parties and politicians and prop up others. Such attempts were made in the past too, but always under the umbrella of martial law and PCO judges. What is unprecedented in the current exercise is the bid to achieve the ends of martial law by “other” means. An unaccountable judiciary is the mask behind which lurks the Miltablishment. The judges have taken no new oath. Nor is the order of the day “provisional”.

We live in extraordinary times. The liberal and secular supporters of the PPP are in disarray. Some have sullenly retreated into a damning silence. Many have plonked their hearts in the freezer and are queuing up to vote for Nawaz Sharif because he is the sole anti-establishment leader in the country. A clutch is ever ready to join the ranks of rights groups protesting “state” highhandedness or injustice, like the PTM. We are in the process of completing the circle that began with the left-wing, anti-establishment party of Zulfikar Ali Bhutto and is ending with the right-wing, pro-establishment party of Imran Khan. The “caring socialist-fascism” of the PPP in the 1970s has morphed into the “uncaring capitalist-fascism” of the PTI today. The middle-class, cheery, internationalist “hopefuls” of yesteryear have been swept aside by the middle-class, angry, nationalist “fearfuls” of today.

We live in extraordinary times. In the first two decades of Pakistan, we stumbled from one civil-military bureaucrat to another without an organic constitution or free and fair elections. In the third decade, we lost half the country because of the political engineering of the first two decades but managed to cobble together a democratic constitution in its aftermath. Trouble arose when we violated the constitutional rules of democracy and paid the price of martial law in the fourth decade. In the fifth, we reeled from one engineered election and government to another until we were engulfed by another martial law in the sixth. In the seventh, we vowed to stick together under a Charter of Democracy but joined hands with the Miltablishment to violate the rules of the game. Now, after sacrificing two elected prime ministers at the altar of “justice”, we are back at the game of political engineering in the new decade.

Pakistan is more internally disunited today than ever before. It has more external enemies today than ever before. It is more economically, demographically and environmentally challenged today than ever before. The more it experiments with engineered political change, the worse it becomes. We live in extraordinary times.

Thursday 3 May 2018

Big Tech is sorry. Why Silicon Valley can’t fix itself

Tech insiders have finally started admitting their mistakes – but the solutions they are offering could just help the big players get even more powerful. By Ben Tarnoff and Moira Weigel in The Guardian 


Big Tech is sorry. After decades of rarely apologising for anything, Silicon Valley suddenly seems to be apologising for everything. They are sorry about the trolls. They are sorry about the bots. They are sorry about the fake news and the Russians, and the cartoons that are terrifying your kids on YouTube. But they are especially sorry about our brains.

Sean Parker, the former president of Facebook – who was played by Justin Timberlake in The Social Network – has publicly lamented the “unintended consequences” of the platform he helped create: “God only knows what it’s doing to our children’s brains.” Justin Rosenstein, an engineer who helped build Facebook’s “like” button and Gchat, regrets having contributed to technology that he now considers psychologically damaging, too. “Everyone is distracted,” Rosenstein says. “All of the time.” 

Ever since the internet became widely used by the public in the 1990s, users have heard warnings that it is bad for us. In the early years, many commentators described cyberspace as a parallel universe that could swallow enthusiasts whole. The media fretted about kids talking to strangers and finding porn. A prominent 1998 study from Carnegie Mellon University claimed that spending time online made you lonely, depressed and antisocial.

In the mid-2000s, as the internet moved on to mobile devices, physical and virtual life began to merge. Bullish pundits celebrated the “cognitive surplus” unlocked by crowdsourcing and the tech-savvy campaigns of Barack Obama, the “internet president”. But, alongside these optimistic voices, darker warnings persisted. Nicholas Carr’s The Shallows (2010) argued that search engines were making people stupid, while Eli Pariser’s The Filter Bubble (2011) claimed algorithms made us insular by showing us only what we wanted to see. In Alone Together (2011) and Reclaiming Conversation (2015), Sherry Turkle warned that constant connectivity was making meaningful interaction impossible.

Still, inside the industry, techno-utopianism prevailed. Silicon Valley seemed to assume that the tools they were building were always forces for good – and that anyone who questioned them was a crank or a luddite. In the face of an anti-tech backlash that has surged since the 2016 election, however, this faith appears to be faltering. Prominent people in the industry are beginning to acknowledge that their products may have harmful effects.

Internet anxiety isn’t new. But never before have so many notable figures within the industry seemed so anxious about the world they have made. Parker, Rosenstein and the other insiders now talking about the harms of smartphones and social media belong to an informal yet influential current of tech critics emerging within Silicon Valley. You could call them the “tech humanists”. Amid rising public concern about the power of the industry, they argue that the primary problem with its products is that they threaten our health and our humanity.

It is clear that these products are designed to be maximally addictive, in order to harvest as much of our attention as they can. Tech humanists say this business model is both unhealthy and inhumane – that it damages our psychological well-being and conditions us to behave in ways that diminish our humanity. The main solution that they propose is better design. By redesigning technology to be less addictive and less manipulative, they believe we can make it healthier – we can realign technology with our humanity and build products that don’t “hijack” our minds.

The hub of the new tech humanism is the Center for Humane Technology in San Francisco. Founded earlier this year, the nonprofit has assembled an impressive roster of advisers, including investor Roger McNamee, Lyft president John Zimmer, and Rosenstein. But its most prominent spokesman is executive director Tristan Harris, a former “design ethicist” at Google who has been hailed by the Atlantic magazine as “the closest thing Silicon Valley has to a conscience”. Harris has spent years trying to persuade the industry of the dangers of tech addiction. In February, Pierre Omidyar, the billionaire founder of eBay, launched a related initiative: the Tech and Society Solutions Lab, which aims to “maximise the tech industry’s contributions to a healthy society”.

As suspicion of Silicon Valley grows, the tech humanists are making a bid to become tech’s loyal opposition. They are using their insider credentials to promote a particular diagnosis of where tech went wrong and of how to get it back on track. For this, they have been getting a lot of attention. As the backlash against tech has grown, so too has the appeal of techies repenting for their sins. The Center for Humane Technology has been profiled – and praised – by the New York Times, the Atlantic, Wired and others.

But tech humanism’s influence cannot be measured solely by the positive media coverage it has received. The real reason tech humanism matters is because some of the most powerful people in the industry are starting to speak its idiom. Snap CEO Evan Spiegel has warned about social media’s role in encouraging “mindless scrambles for friends or unworthy distractions”, and Twitter boss Jack Dorsey recently claimed he wants to improve the platform’s “conversational health”. 

Even Mark Zuckerberg, famous for encouraging his engineers to “move fast and break things”, seems to be taking a tech humanist turn. In January, he announced that Facebook had a new priority: maximising “time well spent” on the platform, rather than total time spent. By “time well spent”, Zuckerberg means time spent interacting with “friends” rather than businesses, brands or media sources. He said the News Feed algorithm was already prioritising these “more meaningful” activities.

Zuckerberg’s choice of words is significant: Time Well Spent is the name of the advocacy group that Harris led before co-founding the Center for Humane Technology. In April, Zuckerberg brought the phrase to Capitol Hill. When a photographer snapped a picture of the notes Zuckerberg used while testifying before the Senate, they included a discussion of Facebook’s new emphasis on “time well spent”, under the heading “wellbeing”.

This new concern for “wellbeing” may strike some observers as a welcome development. After years of ignoring their critics, industry leaders are finally acknowledging that problems exist. Tech humanists deserve credit for drawing attention to one of those problems – the manipulative design decisions made by Silicon Valley.

But these decisions are only symptoms of a larger issue: the fact that the digital infrastructures that increasingly shape our personal, social and civic lives are owned and controlled by a few billionaires. Because it ignores the question of power, the tech-humanist diagnosis is incomplete – and could even help the industry evade meaningful reform. Taken up by leaders such as Zuckerberg, tech humanism is likely to result in only superficial changes. These changes may soothe some of the popular anger directed towards the tech industry, but they will not address the origin of that anger. If anything, they will make Silicon Valley even more powerful.

The Center for Humane Technology argues that technology must be “aligned” with humanity – and that the best way to accomplish this is through better design. Their website features a section entitled The Way Forward. A familiar evolutionary image shows the silhouettes of several simians, rising from their crouches to become a man, who then turns back to contemplate his history.

“In the future, we will look back at today as a turning point towards humane design,” the header reads. To the litany of problems caused by “technology that extracts attention and erodes society”, the text asserts that “humane design is the solution”. Drawing on the rhetoric of the “design thinking” philosophy that has long suffused Silicon Valley, the website explains that humane design “starts by understanding our most vulnerable human instincts so we can design compassionately”.

There is a good reason why the language of tech humanism is penetrating the upper echelons of the tech industry so easily: this language is not foreign to Silicon Valley. On the contrary, “humanising” technology has long been its central ambition and the source of its power. It was precisely by developing a “humanised” form of computing that entrepreneurs such as Steve Jobs brought computing into millions of users’ everyday lives. Their success turned the Bay Area tech industry into a global powerhouse – and produced the digitised world that today’s tech humanists now lament.

The story begins in the 1960s, when Silicon Valley was still a handful of electronics firms clustered among fruit orchards. Computers came in the form of mainframes then. These machines were big, expensive and difficult to use. Only corporations, universities and government agencies could afford them, and they were reserved for specialised tasks, such as calculating missile trajectories or credit scores.

Computing was industrial, in other words, not personal, and Silicon Valley remained dependent on a small number of big institutional clients. The practical danger that this dependency posed became clear in the early 1960s, when the US Department of Defense, by far the single biggest buyer of digital components, began cutting back on its purchases. But the fall in military procurement wasn’t the only mid-century crisis around computing.

Computers also had an image problem. The inaccessibility of mainframes made them easy to demonise. In these whirring hulks of digital machinery, many observers saw something inhuman, even evil. To antiwar activists, computers were weapons of the war machine that was killing thousands in Vietnam. To highbrow commentators such as the social critic Lewis Mumford, computers were instruments of a creeping technocracy that threatened to extinguish personal freedom.

But during the course of the 1960s and 70s, a series of experiments in northern California helped solve both problems. These experiments yielded breakthrough innovations like the graphical user interface, the mouse and the microprocessor. Computers became smaller, more usable and more interactive, reducing Silicon Valley’s reliance on a few large customers while giving digital technology a friendlier face.

The pioneers who led this transformation believed they were making computing more human. They drew deeply from the counterculture of the period, and its fixation on developing “human” modes of living. They wanted their machines to be “extensions of man”, in the words of Marshall McLuhan, and to unlock “human potential” rather than repress it. At the centre of this ecosystem of hobbyists, hackers, hippies and professional engineers was Stewart Brand, famed entrepreneur of the counterculture and founder of the Whole Earth Catalog. In a famous 1972 article for Rolling Stone, Brand called for a new model of computing that “served human interest, not machine”.

Brand’s disciples answered this call by developing the technical innovations that transformed computers into the form we recognise today. They also promoted a new way of thinking about computers – not as impersonal slabs of machinery, but as tools for unleashing “human potential”.

No single figure contributed more to this transformation of computing than Steve Jobs, who was a fan of Brand and a reader of the Whole Earth Catalog. Jobs fulfilled Brand’s vision on a global scale, launching the mass personal computing era with the Macintosh in the mid-80s, and the mass smartphone era with the iPhone two decades later. Brand later acknowledged that Jobs embodied the Whole Earth Catalog ethos. “He got the notion of tools for human use,” Brand told Jobs’ biographer, Walter Isaacson.

Building those “tools for human use” turned out to be great for business. The impulse to humanise computing enabled Silicon Valley to enter every crevice of our lives. From phones to tablets to laptops, we are surrounded by devices that have fulfilled the demands of the counterculture for digital connectivity, interactivity and self-expression. Your iPhone responds to the slightest touch; you can look at photos of anyone you have ever known, and broadcast anything you want to all of them, at any moment.

In short, the effort to humanise computing produced the very situation that the tech humanists now consider dehumanising: a wilderness of screens where digital devices chase every last instant of our attention. To guide us out of that wilderness, tech humanists say we need more humanising. They believe we can use better design to make technology serve human nature rather than exploit and corrupt it. But this idea is drawn from the same tradition that created the world that tech humanists believe is distracting and damaging us.

Tech humanists say they want to align humanity and technology. But this project is based on a deep misunderstanding of the relationship between humanity and technology: namely, the fantasy that these two entities could ever exist in separation.

It is difficult to imagine human beings without technology. The story of our species began when we began to make tools. Homo habilis, the first members of our genus, left sharpened stones scattered across Africa. Their successors hit rocks against each other to make sparks, and thus fire. With fire you could cook meat and clear land for planting; with ash you could fertilise the soil; with smoke you could make signals. In flickering light, our ancestors painted animals on cave walls. The ancient tragedian Aeschylus recalled this era mythically: Prometheus, in stealing fire from the gods, “founded all the arts of men.”

All of which is to say: humanity and technology are not only entangled, they constantly change together. This is not just a metaphor. Recent research suggests that the human hand evolved to manipulate the stone tools that our ancestors used. The evolutionary scientist Mary Marzke shows that we developed “a unique pattern of muscle architecture and joint surface form and functions” for this purpose.

The ways our bodies and brains change in conjunction with the tools we make have long inspired anxieties that “we” are losing some essential qualities. For millennia, people have feared that new media were eroding the very powers that they promised to extend. In The Phaedrus, Socrates warned that writing on wax tablets would make people forgetful. If you could jot something down, you wouldn’t have to remember it. In the late middle ages, as a culture of copying manuscripts gave way to printed books, teachers warned that pupils would become careless, since they no longer had to transcribe what their teachers said.

Yet as we lose certain capacities, we gain new ones. People who used to navigate the seas by following stars can now program computers to steer container ships from afar. Your grandmother probably has better handwriting than you do – but you probably type faster.

The nature of human nature is that it changes. It cannot, therefore, serve as a stable basis for evaluating the impact of technology. Yet the assumption that it doesn’t change serves a useful purpose. Treating human nature as something static, pure and essential elevates the speaker into a position of power. Claiming to tell us who we are, they tell us how we should be.

Intentionally or not, this is what tech humanists are doing when they talk about technology as threatening human nature – as if human nature had stayed the same from the paleolithic era until the rollout of the iPhone. Holding humanity and technology separate clears the way for a small group of humans to determine the proper alignment between them. And while the tech humanists may believe they are acting in the common good, they themselves acknowledge they are doing so from above, as elites. “We have a moral responsibility to steer people’s thoughts ethically,” Tristan Harris has declared.

Harris and his fellow tech humanists also frequently invoke the language of public health. The Center for Humane Technology’s Roger McNamee has gone so far as to call public health “the root of the whole thing”, and Harris has compared using Snapchat to smoking cigarettes. The public-health framing casts the tech humanists in a paternalistic role. Resolving a public health crisis requires public health expertise. It also precludes the possibility of democratic debate. You don’t put the question of how to treat a disease up for a vote – you call a doctor.

This paternalism produces a central irony of tech humanism: the language that they use to describe users is often dehumanising. “Facebook appeals to your lizard brain – primarily fear and anger,” says McNamee. Harris echoes this sentiment: “Imagine you had an input cable,” he has said. “You’re trying to jack it into a human being. Do you want to jack it into their reptilian brain, or do you want to jack it into their more reflective self?”

The Center for Humane Technology’s website offers tips on how to build a more reflective and less reptilian relationship to your smartphone: “going greyscale” by setting your screen to black-and-white, turning off app notifications and charging your device outside your bedroom. It has also announced two major initiatives: a national campaign to raise awareness about technology’s harmful effects on young people’s “digital health and well-being”; and a “Ledger of Harms” – a website that will compile information about the health effects of different technologies in order to guide engineers in building “healthier” products.

These initiatives may help some people reduce their smartphone use – a reasonable personal goal. But there are some humans who may not share this goal, and there need not be anything unhealthy about that. Many people rely on the internet for solace and solidarity, especially those who feel marginalised. The kid with autism may stare at his screen when surrounded by people, because it lets him tolerate being surrounded by people. For him, constant use of technology may not be destructive at all, but in fact life-saving.

Pathologising certain potentially beneficial behaviours as “sick” isn’t the only problem with the Center for Humane Technology’s proposals. They also remain confined to the personal level, aiming to redesign how the individual user interacts with technology rather than tackling the industry’s structural failures. Tech humanism fails to address the root cause of the tech backlash: the fact that a small handful of corporations own our digital lives and strip-mine them for profit. This is a fundamentally political and collective issue. But by framing the problem in terms of health and humanity, and the solution in terms of design, the tech humanists personalise and depoliticise it.

This may be why their approach is so appealing to the tech industry. There is no reason to doubt the good intentions of tech humanists, who may genuinely want to address the problems fuelling the tech backlash. But they are handing the firms that caused those problems a valuable weapon. Far from challenging Silicon Valley, tech humanism offers Silicon Valley a useful way to pacify public concerns without surrendering any of its enormous wealth and power. By channelling popular anger at Big Tech into concerns about health and humanity, tech humanism gives corporate giants such as Facebook a way to avoid real democratic control. In a moment of danger, it may even help them protect their profits.

One can easily imagine a version of Facebook that embraces the principles of tech humanism while remaining a profitable and powerful monopoly. In fact, these principles could make Facebook even more profitable and powerful, by opening up new business opportunities. That seems to be exactly what Facebook has planned.

When Zuckerberg announced that Facebook would prioritise “time well spent” over total time spent, it came a couple of weeks before the company released its 2017 Q4 earnings. These reported that total time spent on the platform had dropped by around 5%, or about 50m hours per day. But, Zuckerberg said, this was by design: in particular, it was in response to tweaks to the News Feed that prioritised “meaningful” interactions with “friends” rather than consuming “public content” like video and news. This would ensure that “Facebook isn’t just fun, but also good for people’s well-being”.

Zuckerberg said he expected those changes would continue to decrease total time spent – but “the time you do spend on Facebook will be more valuable”. This may describe what users find valuable – but it also refers to what Facebook finds valuable. In a recent interview, he said: “Over the long term, even if time spent goes down, if people are spending more time on Facebook actually building relationships with people they care about, then that’s going to build a stronger community and build a stronger business, regardless of what Wall Street thinks about it in the near term.”

Sheryl Sandberg has also stressed that the shift will create “more monetisation opportunities”. How? Everyone knows data is the lifeblood of Facebook – but not all data is created equal. One of the most valuable sources of data to Facebook is used to inform a metric called “coefficient”. This measures the strength of a connection between two users – Zuckerberg once called it “an index for each relationship”. Facebook records every interaction you have with another user – from liking a friend’s post or viewing their profile, to sending them a message. These activities provide Facebook with a sense of how close you are to another person, and different activities are weighted differently. Messaging, for instance, is considered the strongest signal. It’s reasonable to assume that you’re closer to somebody you exchange messages with than somebody whose post you once liked.
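To make the mechanics concrete, here is a minimal sketch of how a weighted “closeness” score of this general shape could be computed. The interaction names and weights below are purely illustrative assumptions for the sake of example; Facebook’s actual coefficient metric and its weights are not public.

from collections import Counter

# Hypothetical weights: messaging counts most heavily, a profile view least.
# These names and numbers are illustrative guesses, not Facebook's actual
# (non-public) coefficient weights.
INTERACTION_WEIGHTS = {
    "message": 5.0,
    "comment": 2.0,
    "like": 1.0,
    "profile_view": 0.5,
}

def closeness_score(interactions):
    """Return a weighted sum of the interactions recorded between two users."""
    counts = Counter(interactions)
    return sum(INTERACTION_WEIGHTS.get(kind, 0.0) * n for kind, n in counts.items())

# Two messages and a like outweigh ten profile views.
print(closeness_score(["message", "message", "like"]))  # 11.0
print(closeness_score(["profile_view"] * 10))           # 5.0

Under any weighting of this shape, nudging users towards message-heavy, “meaningful” interactions yields more of the highest-value signals per minute of attention.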

Why is coefficient so valuable? Because Facebook uses it to create a Facebook they think you will like: it guides algorithmic decisions about what content you see and the order in which you see it. It also helps improve ad targeting, by showing you ads for things liked by friends with whom you often interact. Advertisers can target the closest friends of the users who already like a product, on the assumption that close friends tend to like the same things.

 
Facebook CEO Mark Zuckerberg testifies before the US Senate last month. Photograph: Jim Watson/AFP/Getty Images

So when Zuckerberg talks about wanting to increase “meaningful” interactions and building relationships, he is not succumbing to pressure to take better care of his users. Rather, emphasising time well spent means creating a Facebook that prioritises data-rich personal interactions that Facebook can use to make a more engaging platform. Rather than spending a lot of time doing things that Facebook doesn’t find valuable – such as watching viral videos – you can spend a bit less time, but spend it doing things that Facebook does find valuable.

In other words, “time well spent” means Facebook can monetise more efficiently. It can prioritise the intensity of data extraction over its extensiveness. This is a wise business move, disguised as a concession to critics. Shifting to this model not only sidesteps concerns about tech addiction – it also acknowledges certain basic limits to Facebook’s current growth model. There are only so many hours in the day. Facebook can’t keep prioritising total time spent – it has to extract more value from less time.

In many ways, this process recalls an earlier stage in the evolution of capitalism. In the 19th century, factory owners in England discovered they could only make so much money by extending the length of the working day. At some point, workers would die of exhaustion, or they would revolt, or they would push parliament to pass laws that limited their working hours. So industrialists had to find ways to make the time of the worker more valuable – to extract more money from each moment rather than adding more moments. They did this by making industrial production more efficient: developing new technologies and techniques that squeezed more value out of the worker and stretched that value further than ever before.

A similar situation confronts Facebook today. They have to make the attention of the user more valuable – and the language and concepts of tech humanism can help them do it. So far, it seems to be working. Despite the reported drop in total time spent, Facebook recently announced huge 2018 Q1 earnings of $11.97bn (£8.7bn), smashing Wall Street estimates by nearly $600m.

Today’s tech humanists come from a tradition with deep roots in Silicon Valley. Like their predecessors, they believe that technology and humanity are distinct, but can be harmonised. This belief guided the generations who built the “humanised” machines that became the basis for the industry’s enormous power. Today it may provide Silicon Valley with a way to protect that power from a growing public backlash – and even deepen it by uncovering new opportunities for profit-making.

Fortunately, there is another way of thinking about how to live with technology – one that is both truer to the history of our species and useful for building a more democratic future. This tradition does not address “humanity” in the abstract, but as distinct human beings, whose capacities are shaped by the tools they use. It sees us as hybrids of animal and machine – as “cyborgs”, to quote the biologist and philosopher of science Donna Haraway.

To say that we’re all cyborgs is not to say that all technologies are good for us, or that we should embrace every new invention. But it does suggest that living well with technology can’t be a matter of making technology more “human”. This goal isn’t just impossible – it’s also dangerous, because it puts us at the mercy of experts who tell us how to be human. It cedes control of our technological future to those who believe they know what’s best for us because they understand the essential truths about our species.

The cyborg way of thinking, by contrast, tells us that our species is essentially technological. We change as we change our tools, and our tools change us. But even though our continuous co-evolution with our machines is inevitable, the way it unfolds is not. Rather, it is determined by who owns and runs those machines. It is a question of power.

Today, that power is wielded by corporations, which own our technology and run it for profit. The various scandals that have stoked the tech backlash all share a single source. Surveillance, fake news and the miserable working conditions in Amazon’s warehouses are profitable. If they were not, they would not exist. They are symptoms of a profound democratic deficit inflicted by a system that prioritises the wealth of the few over the needs and desires of the many.

There is an alternative. If being technological is a feature of being human, then the power to shape how we live with technology should be a fundamental human right. The decisions that most affect our technological lives are far too important to be left to Mark Zuckerberg, rich investors or a handful of “humane designers”. They should be made by everyone, together.

Rather than trying to humanise technology, then, we should be trying to democratise it. We should be demanding that society as a whole gets to decide how we live with technology – rather than the small group of people who have captured society’s wealth.

What does this mean in practice? First, it requires limiting and eroding Silicon Valley’s power. Antitrust laws and tax policy offer useful ways to claw back the fortunes Big Tech has built on common resources. After all, Silicon Valley wouldn’t exist without billions of dollars of public funding, not to mention the vast quantities of information that we all provide for free. Facebook’s market capitalisation is $500bn with 2.2 billion users – do the math to estimate how much the time you spend on Facebook is worth. You could apply the same logic to Google. There is no escape: whether or not you have an account, both platforms track you around the internet.
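To take that invitation to do the math literally: a rough, back-of-the-envelope division of the $500bn market capitalisation across 2.2 billion users comes to roughly $227 of market value per user, a crude proxy for what the attention and data of each account are deemed to be worth.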

In addition to taxing and shrinking tech firms, democratic governments should be making rules about how those firms are allowed to behave – rules that restrict how they can collect and use our personal data, for instance, like the General Data Protection Regulation coming into effect in the European Union later this month. But more robust regulation of Silicon Valley isn’t enough. We also need to pry the ownership of our digital infrastructure away from private firms. 

This means developing publicly and co-operatively owned alternatives that empower workers, users and citizens to determine how they are run. These democratic digital structures can focus on serving personal and social needs rather than piling up profits for investors. One inspiring example is municipal broadband: a successful experiment in Chattanooga, Tennessee, has shown that publicly owned internet service providers can supply better service at lower cost than private firms. Other models of digital democracy might include a worker-owned Uber, a user-owned Facebook or a socially owned “smart city” of the kind being developed in Barcelona. Alternatively, we might demand that tech firms pay for the privilege of extracting our data, so that we can collectively benefit from a resource we collectively create.

More experimentation is needed, but democracy should be our guiding principle. The stakes are high. Never before have so many people been thinking about the problems produced by the tech industry and how to solve them. The tech backlash is an enormous opportunity – and one that may not come again for a long time.

The old techno-utopianism is crumbling. What will replace it? Silicon Valley says it wants to make the world a better place. Fulfilling this promise may require a new kind of disruption.

Why ‘Sufism’ is not what it is made out to be

Zahra Sabri in The Dawn

In a variety of Islamic political contexts around the world today, we see ‘Sufi’ ideas being invoked as a call to return to a deeper, more inward-directed (and more peaceful) mode of religious experience as compared to the one that results in outward-oriented political engagements that are often seen as negative and violent. A hundred years ago, it would not have been uncommon to hear western or West-influenced native voices condemn Islamic mysticism (often described problematically in English as ‘Sufism’) as one of the major sources of inertia and passivity within Muslim societies. Yet new political contingencies, especially after 9/11, have led to this same phenomenon being described as ‘the soft face of Islam’, with observers such as British writer William Dalrymple referring to a vaguely defined group of people called ‘the Sufis’ as ‘our’ best friends vis-à-vis the danger posed by Taliban-like forces.

We seem to be in a situation where journalistic discourse and policy debates celebrate idealised notions of Islamic mysticism with its enthralling music, inspiring poetry and the transformative/liberating potential of the ‘message’ of the great mystics. These mystics are clearly differentiated from more ‘closed-minded’ and ‘orthodox’ representatives of the faith such as preachers (mullahs), theologians (fuqaha) and other types of ulema.

On the other hand, when we trace the institutional legacy of these great mystics (walis/shaikhs) and spiritual guides (pirs) down to their present-day spiritual heirs, we find out that they are often all too well-entrenched in the social and political status quo. The degree of their sociopolitical influence has even become electorally quantifiable since the introduction of parliamentary institutions during colonial times. Pirs in Pakistan have been visible as powerful party leaders (Pir Pagara), ministers (Shah Mahmood Qureshi) and even prime ministers (Yousaf Raza Gillani). Even more traditional religious figures, such as Pir Hameeduddin Sialvi (who recently enjoyed media attention for threatening to withdraw support from the ruling party over a religious issue that unites many types of religious leaders), not only exercise considerable indirect influence over the vote but have also served as members of various legislative forums.

It is, therefore, unclear what policymakers mean when they call for investment in the concepts and traditions of ‘Sufi Islam’. Is it an appeal for the promotion of a particular kind of religious ethic through the public education system? Or is it a call for raising the public profile of little known faqirs and dervishes and for strengthening the position of existing sajjada-nishins (hereditary representatives of pirs and mystics and the custodians of their shrines), many of whom already enjoy a high level of social and political prominence and influence? Or are policymakers referring to some notion of Islamic mysticism that has remained very much at the level of poetic utterance or philosophical discourse — that is, at the level of the ideal rather than at the level of reality as lived and experienced by Muslims over centuries?

The salience of idealised notions of Islamic mysticism in various policy circles today makes it interesting to examine the historical relations that mystic groups within Islamic societies have had with the ruling classes and the guardians of religious law. What has the typical relationship among kings, ulema and mystics been, for example, in regions such as Central Asia, Anatolia, Persia and Mughal India that fall in a shared Persianate cultural and intellectual zone? Has tasawwuf (Islamic mysticism) historically been a passive or apolitical force in society, or have prominent mystics engaged with politics and society in ways that are broadly comparable to the way other kinds of religious representatives have done so?

It is instructive to turn first to the life of an Islamic mystic who is perhaps more celebrated and widely recognised than any other: Maulana Jalaluddin Rumi (d. 1273). He lived in Konya in modern-day Turkey. The fame of his mystic verse has travelled far and wide, but what is less widely known is that he had received a thorough training in fiqh (Islamic law).

Historical accounts show that he had studied the Quran and fiqh at a very high level in some of the most famous madrasas in Aleppo and Damascus. Later, he served as a teacher of fiqh at several madrasas. In this, he appears to have followed his father, who was a religious scholar at a princely court in Anatolia and taught at an institution that blended the functions of a madrasa and those of a khanqah, demonstrating how fluid the relationship between an Islamic law college and a mystic lodge could be in Islamic societies. Even madrasas built exclusively for training ulema have often been paired with khanqahs for centuries.


Jahangir showing preference to shaikhs over kings | Courtesy purchase, Charles Lang Freer Endowment


Biographers have described how Rumi’s legal opinions were frequently sought on a variety of subjects. As a spiritual guide and preacher, he regularly delivered the Friday sermon (khutba), achieving popularity as an acclaimed speaker and attracting a considerable number of disciples from all parts of society. His followers included merchants and artisans as well as members of the ruling class. His lectures were attended by both women and men in Konya. For much of this time, he was also composing his renowned poetry and becoming identified with his own style of sama’a and dance, which sometimes drew criticism from other ulema, many of whom nevertheless continued to revere him.

It is evident from Rumi’s letters that he also had extremely close relations with several Seljuk rulers, even referring to one of them as ‘son’. It was not rare for him to advise these rulers on various points of statesmanship and make recommendations (for instance, on relations with infidel powers) in light of religious strictures and political expediencies. He is also known to have written letters to introduce his disciples and relatives to men of position and influence who could help them professionally or socially. Unlike his religious sermons and ecstatic poetry, these letters follow the conventions typically associated with correspondence addressed to nobles and state officials.

All this contradicts the idea that mystics (mashaikh) are always firmly resistant to interacting with rulers. The stereotypical image of mystics is one where they are far too caught up in contemplation of the divine to have anything to do with the mundane political affairs of the world. Yet in sharp contrast to this image, many prominent mystics in Islamic history have played eminent roles in society and politics.

This holds true not only for the descendants of prominent mystics who continue to wield considerable sociopolitical influence in Muslim countries such as today’s Egypt and Pakistan but also for the mashaikh in whose names various mystical orders were originally founded. These mashaikh evidently lived very much in the world, not unlike nobles and kings and many classes of the ulema. 

Rumi’s life also offers evidence that the two worlds of khanqah and madrasa, often considered vastly different from each other, all too often overlap in terms of their functions. Regardless of the impressions created by mystic poetry’s derogatory allusions to the zahid (zealous ascetic), wa‘iz (preacher) or shaikh (learned religious scholar), there is little practical reason to see mystics on the whole as being fundamentally opposed to other leaders and representatives of religion. In fact, right through until modern times, we have seen ulema and mashaikh work in tandem with each other in the pursuit of shared religio-political objectives, the Khilafat movement in British India being just one such example among many of their collaborations.

Rumi’s activities are indicative of a nearly ubiquitous pattern of political involvement by prominent mystics in various Islamic societies. In Central Asia, support from the mashaikh of the Naqshbandi mystical order (tariqa) seems to have become almost indispensable by the end of the 15th century for anyone aspiring to rule since the order had acquired deep roots within the population at large. The attachment of Timurid and Mughal rulers to the Naqshbandi order is well known. The Shaybanid rulers of Uzbek origin also had deep ties with the order, and Naqshbandi mashaikh tended to play a prominent role in mediating between Mughal and Uzbek rulers.

Naqshbandis are somewhat unusual among Sufi orders in their historical inclination to involve themselves in political affairs and to favour fellowship (suhbat) over seclusion (khalwat), yet political interventions are not rare even among other orders.

Shaikh Moeenuddin Chishti Ajmeri | Courtesy trustees of the Chester Beatty library, Dublin



Closer to home, Shaikh Bahauddin Zakariya (d. 1262), a Suhrawardi mystic, is reported to have negotiated the peaceful surrender of Multan to the Mongols, giving 10,000 dinars in cash to the invading army’s commander in return for securing the lives and properties of the citizens. Suhrawardis, indeed, have long believed in making attempts to influence rulers to take religiously correct decisions. Bahauddin Zakariya was very close to Sultan Iltutmish of the Slave Dynasty of Delhi and was given the official post of Shaikhul Islam. He openly sided with the sultan when Nasiruddin Qabacha, the governor of Multan, conspired to overthrow him.

It is widely known that the Mughal king Jahangir was named after Shaikh Salim Chishti (d. 1572) but what is less well known is that his great-grandfather Babar’s name ‘Zahiruddin Muhammad’ was chosen by Naqshbandi shaikh Khwaja Ubaidullah Ahrar (d. 1490), who wielded tremendous political power in Central Asia. The shaikh’s son later asked Babar to defend Samarkand against the Uzbeks. When Babar fell ill in India many years later, he versified one of Khwaja Ahrar’s works in order to earn the shaikh’s blessings for his recovery.

Even after Babar lost control of his Central Asian homeland and India became his new dominion, he and his descendants maintained strong ties with Central Asian Naqshbandi orders such as Ahrars, Juybaris and Dahbidis. This affiliation was not limited to the spiritual level. It also translated into important military and administrative posts at the Mughal court being awarded to generations of descendants of Naqshbandi shaikhs.

The offspring of these shaikhs also often became favoured marriage partners for royal princesses, thus becoming merged with the nobility itself. One of Babar’s daughters as well as one of Humayun’s was given in marriage to the descendants of Naqshbandi shaikhs. The two emperors also married into the family of the shaikhs of Jam in Khurasan. Akbar’s mother, Hamida Banu (Maryam Makani), was descended from the renowned shaikh Ahmad-e-Jam (d. 1141).

In India, Mughal princes and kings also established important relationships with several other mystical orders such as the Chishtis and Qadris. In particular, the Shattari order (that originated in Persia) grew to have significant influence over certain Mughal kings. It seems to have been a common tendency among members of the Mughal household to pen hagiographical tributes to their spiritual guides. Dara Shikoh, for example, wrote tazkirahs (biographies) of his spiritual guide Mian Mir (d. 1635) and other Qadri shaikhs. His sister Jahanara wrote about the Chishti shaikhs of Delhi.

So great was the royal reverence for mystics that several Mughal emperors, like their counterparts outside India, wanted to be buried beside the graves of prominent shaikhs. Aurangzeb, for example, was buried beside a Chishti shaikh, Zainuddin Shirazi (d. 1369). Muhammad Shah’s grave in Delhi is near that of another Chishti shaikh, Nizamuddin Auliya (d. 1325).

Like several other Mughal and Islamic rulers, Aurangzeb showed devotion to a number of different mystical orders (Chishtis, Shattaris and Naqshbandis) at various points in his life. The emperor is reported to have sought the blessings of Naqshbandis during his war of succession with his brother Dara Shikoh. Naqshbandi representatives not only committed themselves to stay by his side in the battle but they also vowed to visit Baghdad to pray at the tomb of Ghaus-e-Azam Abdul Qadir Jilani (d. 1166) for his victory. They similarly promised to mobilise the blessings of the ulema and mashaikh living in the holy city of Makkah in his favour.
Mughal prince Parvez talking to a holy man | Courtesy purchase — Charles Lang Freer Endowment



The combined spiritual and temporal power of influential mashaikh across various Islamic societies meant that rulers were eager to seek their political support and spiritual blessings for the stability and longevity of their rule. Benefits accrued to both sides. The mashaikh’s approval and support bolstered the rulers’ political position, and financial patronage by rulers and wealthy nobles, in turn, served to strengthen the social and economic position of mashaikh who often grew to be powerful landowners. The estates and dynasties left behind by these shaikhs frequently outlasted those of their royal patrons.

This is not to say that every prominent mystic had equally intimate ties with rulers. Some mashaikh (particularly among Chishtis) are famous for refusing to meet kings and insisting on remaining aloof from the temptations of worldly power. Shaikh Nizamuddin Auliya’s response to Alauddin Khilji’s repeated requests for an audience is well known: “My house has two doors. If the Sultan enters by one, I will make my exit by the other.” In effect, however, even these avowedly aloof mashaikh often benefited from access to the corridors of royal power via their disciples among the royal household and high state officials.

The relationship between sultans and mashaikh was also by no means always smooth. From time to time, there was a real breakdown in their ties. Shaikhs faced the prospect of being exiled, imprisoned or even executed if their words or actions threatened public order or if they appeared to be in a position to take over the throne. The example of Shaikh Ahmad Sirhindi (d. 1624) is famous. He was imprisoned by Jahangir for a brief period reportedly because his disquietingly elevated claims about his own spiritual rank threatened to disrupt public order. Several centuries earlier, Sidi Maula was executed by Jalaluddin Khilji, who suspected the shaikh of conspiring to seize his throne.

It is not only through influence over kings and statesmen that Islamic mystical orders have historically played a political role. Some of them are known to have launched direct military campaigns. Contrary to a general notion in contemporary popular discourse that ‘Sufism’ somehow automatically means ‘peace’, some Islamic mystical orders have had considerable military recruiting potential.

The Safaviyya mystical order of Ardabil in modern-day Iranian Azerbaijan offers a prominent example of this. Over the space of almost two centuries, this originally Sunni mystical order transformed itself into a fighting force. With the help of his army of Qizilbash disciples, the first Safavid ruler Shah Ismail I established an enduring Shia empire in 16th-century Iran.

In modern times, Pir Pagara’s Hurs in Sindh during the British period offer another example of a pir’s devotees becoming a trained fighting force. It is not difficult to find other examples in Islamic history of mashaikh who urged sultans to wage wars, accompanied sultans on military expeditions and inspired their disciples to fight in the armies of favoured rulers. Some are believed to have personally participated in armed warfare.

Maulana Jalaluddin Rumi distributing sweetmeats to disciples | Courtesy Museum of Fine Arts, Boston



To speak of a persistent difference between the positions of ulema and mystics on the issue of war or jihad would be, thus, a clear mistake. ‘Sufism’ on the whole is hardly outside the mainstream of normative Islam on this issue, as on others.

Another popular misconception is to speak of ‘Sufism’ as something peculiar to the South Asian experience of Islam or deem it to be some indigenously developed, soft ‘variant’ of Islam that is different from the ‘harder’ forms of the religion prevalent elsewhere. Rituals associated with piri-muridi (master-disciple) relationships and visits to dargahs can, indeed, display the influence of local culture and differ significantly from mystical rituals in other countries and regions.

However, the main trends and features defining Islamic mysticism in South Asia remain pointedly similar to those characterising Islamic mysticism in the Middle East and Central Asia. As British scholar Nile Green points out, “What is often seen as being in some way a typically South Asian characteristic of Islam – the emphasis on a cult of Sufi shrines – was in fact one of the key practices and institutions of a wider Islamic cultural system to be introduced to South Asia at an early period ... It is difficult to understand the history of Sufism in South Asia without reference to the several lengthy and distinct patterns of immigration into South Asia of holy men from different regions of the wider Muslim world, chiefly from Arabia, the fertile crescent, Iran and Central Asia.”

It is a fact that all the major mystical orders in South Asia have their origins outside this region. Even the Chishti order, which has come to be associated more closely with South Asia than with any other region, originated in Chisht near Herat in modern-day Afghanistan. These interregional connections have consistently been noted and celebrated by masters and disciples connected with mystic orders over time. Shaikh Ali al-Hujweri (d. circa 1072-77), who migrated from Ghazna in Afghanistan to settle in Lahore, is known and revered as Data Ganj Bakhsh. Yet this does not mean that the status of high-ranking shaikhs who lived far away from the Subcontinent is lower than his in any way. Even today, the cult of Ghaus-e-Azam of Baghdad continues to be popular in South Asia.

For anyone who has the slightest acquaintance with Muslim history outside the Subcontinent, it would be difficult to defend the assertion – one that we hear astoundingly often in both lay and academic settings in South Asia – that ‘Sufi Islam’ is somehow particular to Sindh or Punjab in specific or to the Indian subcontinent more broadly. It is simply not possible to understand the various strands of Islamic mysticism in our region without reference to their continual interactions with the broader Islamic world.

What is mystical experience, after all? The renowned Iranian scholar Abdolhossein Zarrinkoub defines it as an “attempt to attain direct and personal communication with the godhead” and argues that mysticism is as old as humanity itself and cannot be confined to any race or religion.

It would, therefore, be quite puzzling if Islamic mysticism had flowered only in the Indian subcontinent and in no other Muslim region, as some of our intellectuals seem to assert. Islamic mysticism in South Asia owes as much to influences from Persia, Central Asia and the Arab lands as do most other aspects of Islam in our region. These influences are impossible to ignore when we study the lives and works of the mystics themselves.

As Shaikh Ahmad Sirhindi (Mujaddid-e-Alf-e-Sani) wrote in the 16th-17th century: “We ... Muslims of India ... are so much indebted to the ulema and Sufis (mashaikh) of Transoxiana (Mawara un-Nahr) that it cannot be conveyed in words. It was the ulema of the region who strove to correct the beliefs [of Muslims] to make them consistent with the sound beliefs and opinions of the followers of the Prophet’s tradition and the community (Ahl-e-Sunna wa’l-Jama’a). It was they who reformed the religious practices [of the Muslims] according to Hanafi law. The travels of the great Sufis (may their graves be hallowed) on the path of this sublime Sufi order have been introduced to India by this blessed region.” *

These influences were not entirely one-way. We see that the Mujaddidi order (developed in India by Shaikh Ahmad Sirhindi as an offshoot of the Naqshbandi order) went on to exert a considerable influence in Central Asia and Anatolia. This demonstrates once again how interconnected these regions had been at the intellectual, literary and commercial levels before the advent of colonialism.
Dancing dervishes | Courtesy purchase, Rogers Fund and the Kevorkian Foundation Gift, 1955



This essay has been an attempt to dispel four myths about Islamic mysticism. The first myth is that there is a wide gap between the activities of the mystic khanqah and those of the scholarly madrasa (and that there is, thus, a vast difference between ‘Sufi’ Islam and normative/mainstream Sunni Islam). The second myth is that mystics are ‘passive’, apolitical and withdrawn from the political affairs of their time. The third myth is that mystics across the board are intrinsically ‘peaceful’ and opposed to armed jihad or warfare. The last myth is that Islamic mysticism is a phenomenon particular to, or intrinsically more suited to, the South Asian environment as compared to other Islamic lands.

All four points are worth bearing in mind in any meaningful policy discussion of the limits and possibilities of harnessing Islamic mysticism for political interventions in Muslim societies such as today’s Pakistan. It is important to recognise that when we make an argument for promoting mystical Islam in this region, we are in effect making an argument for the promotion of mainstream Sunni (mostly Hanafi) Islam in its historically normative form.

Wednesday 2 May 2018

Turmoil in Indian Courts - Arun Shourie


Rudd’s career lays bare the new rules of power: crash around and cash out

The ex-home secretary’s rise and fall is typical of an inexperienced elite that regards ordinary people with contempt, writes Aditya Chakrabortty in The Guardian

At least one consolation remains for Amber Rudd. Drummed out of the Home Office, she can now spend more time in her constituency of Hastings: the same seaside resort she found irresistible because “I wanted to be within two hours of London, and I could see we were going to win it”. Yet Rudd loves her electorate, rhapsodising about some of them as people “who prefer to be on benefits by the seaside … they’re moving down here to have easier access to friends and drugs and drink”.

Relax. I come neither to praise nor to bury Rudd, but to analyse her. Or, rather, to place her in context. What stands out about this latest crash-and-burn is how well it represents the current Westminster elite, even down to the contempt for the poor sods who vote for them.

Rudd exemplifies a political class light on expertise and principle, yet heavy on careerism and happy to ruin lives. All the key traits are here. In a dizzying ascent, she went from rookie MP in 2010 to secretary of state for energy in 2015, before being put in charge of the Home Office the very next year. Lewis Hamilton would kill for such an accelerant, yet it leaves no time to master detail, such as your own department’s targets. Since 2014 Sajid Javid, Rudd’s replacement, has hopped from culture to business to local government, rarely staying in any post for more than a year. Margaret Thatcher kept her cabinet ministers at one department for most of a parliamentary term, but this stepping-stone culture turns urgent national problems – such as police funding and knife crime – into PR firefighting.
Another hallmark of this set is the disposability of its values. Cameron hugs Arctic huskies, then orders aides to “get rid of all the green crap”. As for Rudd, the May cabinet’s big liberal, she vowed to force companies to reveal the numbers of their foreign staff, stoking the embers of racism in a tawdry bid to boost her standing with Tory activists. Praised by Osborne for her “human” touch, she was revealed this week privately moaning about “bed-blocking” in British detention centres.

And when things get sticky, you put your officials in the line of fire. During the Brexit referendum, Osborne revved up the Treasury to generate apocalyptic scenarios about the cost of leaving. While doomsday never came, his tactic caused incalculable damage both to the standing of economists and to the civil service’s reputation for impartiality. Rudd settled for trashing her own officials for their “appalling” treatment of Windrush-era migrants.

None of these traits are entirely new, nor are they the sole preserve of the blue team. At the fag end of Gordon Brown’s government, the sociologist Aeron Davis studied the 49 politicians on both frontbenches. They split readily into two types. An older lot had spent an average of 15 years in business or law or campaigning before going into parliament – then debated and amended and sat on select committees for another nine years before reaching the cabinet.

The younger bunch had pre-Westminster careers that typically came to little more than seven years, often spent at thinktanks or as ministerial advisers. They took a mere three years to vault into cabinet ranks. This isn’t “professionalisation”. It is nothing less than the creation of a new Westminster caste: a group of self-styled leaders with no proof of prowess and nothing in common with their voters. May’s team is stuffed full of them. After conducting more than 350 interviews with frontbench politicians, civil servants, FTSE chief executives and top financiers, Davis has collected his insights in a book. The argument is summed up in its title: Reckless Opportunists.

Davis depicts a political and business elite that can’t be bothered about the collective good or even its own institutions – because it cannot see further than the next job opportunity. In this environment, you promise anything for poll ratings, even if it’s an impossible pledge to get net migration down to the tens of thousands.

Good coverage matters more than a track record – because at the top of modern Britain no one sticks around for too long. Of the 25 permanent secretaries in Whitehall, Davis finds that 11 have been in post less than two years. Company bosses now typically spend less than five years in the top job, down from eight years in 2010. Over that same period, their pay has shot up from 120 times the average salary to 160 times. Bish bash bosh!

There is one field that revels in such short-termism: the City. What emerges from Reckless Opportunists is the degree to which City values have infected the rest of the British elite. Chief executives are judged by how much cash they return to shareholders, even if that means slashing spending on research and investment. Ministers either come from finance (Rudd, Javid) or end up working for it (Osborne and his advisers).

Promise the earth and leave it to the next mug to deliver. Crash around, cash out and move on to the next job. State these new mantras, and you see how Jeremy Corbyn, whatever his other faults, can’t conform to them. You can also see how he poses such a threat to a political-business elite reared on them.

Soon after May moved into No 10, she famously declared: “If you believe you are a citizen of the world, you are a citizen of nowhere. You don’t understand what citizenship means.” The press wrote it up as her threat to migrants. Yet the more I think about it, the more accurately I believe it describes her own shiny-faced team, her own poisonous politics, her own self-serving elite.

Tuesday 1 May 2018

Should politicians be replaced by experts?

In the age of Trump and Brexit, some people say that democracy is fatally flawed and we should be ruled by ‘those who know best’. Here’s why that’s not very clever. David Runciman in The Guardian

Democracy is tired, vindictive, self-deceiving, paranoid, clumsy and frequently ineffectual. Much of the time it is living on past glories. This sorry state of affairs reflects what we have become. But current democracy is not who we are. It is just a system of government, which we built, and which we could replace. So why don’t we replace it with something better?

This line of argument has grown louder in recent years, as democratic politics has become more unpredictable and, to many, deeply alarming in its outcomes. First Brexit, then Donald Trump, plus the rise of populism and the spread of division, have started a tentative search for plausible alternatives. But the rival systems we see around us have a very limited appeal. The unlovely forms of 21st-century authoritarianism can at best provide only a partial, pragmatic alternative to democracy. The world’s strongmen still pander to public opinion, and in the case of competitive authoritarian regimes such as the ones in Hungary and Turkey, they persist with the rigmarole of elections. From Trump to Recep Tayyip Erdoğan is not much of a leap into a brighter future.

There is a far more dogmatic alternative, which has its roots in the 19th century. Why not ditch the charade of voting altogether? Stop pretending to respect the views of ordinary people – it’s not worth it, since the people keep getting it wrong. Respect the experts instead! This is the truly radical option. So should we try it?

The name for this view of politics is epistocracy: the rule of the knowers. It is directly opposed to democracy, because it argues that the right to participate in political decision-making depends on whether or not you know what you are doing. The basic premise of democracy has always been that it doesn’t matter how much you know: you get a say because you have to live with the consequences of what you do. In ancient Athens, this principle was reflected in the practice of choosing office-holders by lottery. Anyone could do it because everyone – well, everyone who wasn’t a woman, a foreigner, a pauper, a slave or a child – counted as a member of the state. With the exception of jury service in some countries, we don’t choose people at random for important roles any more. But we do uphold the underlying idea by letting citizens vote without checking their suitability for the task.

Critics of democracy – starting with Plato – have always argued that it means rule by the ignorant, or worse, rule by the charlatans that the ignorant people fall for. Living in Cambridge, a passionately pro-European town and home to an elite university, I heard echoes of that argument in the aftermath of the Brexit vote. It was usually uttered sotto voce – you have to be a brave person to come out as an epistocrat in a democratic society – but it was unquestionably there. Behind their hands, very intelligent people muttered to each other that this is what you get if you ask a question that ordinary people don’t understand. Dominic Cummings, the author of the “Take Back Control” slogan that helped win the referendum, found that his critics were not so shy about spelling it out to his face. Brexit happened, they told him, because the wicked people lied to the stupid people. So much for democracy.

To say that democrats want to be ruled by the stupid and the ignorant is unfair. No defender of democracy has ever claimed that stupidity or ignorance are virtues in themselves. But it is true that democracy doesn’t discriminate on the grounds of a lack of knowledge. It considers the ability to think intelligently about difficult questions a secondary consideration. The primary consideration is whether an individual is implicated in the outcome. Democracy asks only that the voters should be around long enough to suffer for their own mistakes.

The question that epistocracy poses is: why don’t we discriminate on the basis of knowledge? What’s so special about letting everyone take part? Behind it lies the intuitively appealing thought that, instead of living with our mistakes, we should do everything in our power to prevent them in the first place – then it wouldn’t matter who has to take responsibility.

This argument has been around for more than 2,000 years. For most of that time, it has been taken very seriously. The consensus until the end of the 19th century was that democracy is usually a bad idea: it is just too risky to put power in the hands of people who don’t know what they are doing. Of course, that was only the consensus among intellectuals. We have little way of knowing what ordinary people thought about the question. Nobody was asking them.

Over the course of the 20th century, the intellectual consensus was turned around. Democracy established itself as the default condition of politics, its virtues far outweighing its weaknesses. Now the events of the 21st century have revived some of the original doubts. Democracies do seem to be doing some fairly stupid things at present. Perhaps no one will be able to live with their mistakes. In the age of Trump, climate change and nuclear weapons, epistocracy has teeth again.

So why don’t we give more weight to the views of the people who are best qualified to evaluate what to do? Before answering that question, it is important to distinguish between epistocracy and something with which it is often confused: technocracy. They are different. Epistocracy means rule by the people who know best. Technocracy is rule by mechanics and engineers. A technocrat is someone who understands how the machinery works.

In November 2011, Greek democracy was suspended and an elected government was replaced by a cabinet of experts, tasked with stabilising the collapsing Greek economy before new elections could be held. This was an experiment in technocracy, however, not epistocracy. The engineers in this case were economists. Even highly qualified economists often haven’t a clue what’s best to do. What they know is how to operate a complex system that they have been instrumental in building – so long as it behaves the way it is meant to. Technocrats are the people who understand what’s best for the machine. But keeping the machine running might be the worst thing we could do. Technocrats won’t help with that question.

Both representative democracy and pragmatic authoritarianism have plenty of space for technocracy. Increasingly, each system has put decision-making capacity in the hands of specially trained experts, particularly when it comes to economic questions. Central bankers wield significant power in a wide variety of political systems around the world. For that reason, technocracy is not really an alternative to democracy. Like populism, it is more of an add-on. What makes epistocracy different is that it prioritises the “right” decision over the technically correct decision. It tries to work out where we should be going. A technocrat can only tell us how we should get there.

How would epistocracy function in practice? The obvious difficulty is knowing who should count as the knowers. There is no formal qualification for being a general expert. It is much easier to identify a suitable technocrat. Technocracy is more like plumbing than philosophy. When Greece went looking for economic experts to sort out its financial mess, it headed to Goldman Sachs and the other big banks, since that is where the technicians were congregated. When a machine goes wrong, the people responsible for fixing it often have their fingerprints all over it already.

Historically, some epistocrats have tackled the problem of identifying who knows best by advocating non-technical qualifications for politics. If there were such a thing as the university of life, that’s where these epistocrats would want political decision-makers to get their higher degrees. But since there is no such university, they often have to make do with cruder tests of competence. The 19th-century philosopher John Stuart Mill argued for a voting system that granted varying numbers of votes to different classes of people depending on what jobs they did. Professionals and other highly educated individuals would get six or more votes each; farmers and traders would get three or four; skilled labourers would get two; unskilled labourers would get one. Mill also pushed hard for women to get the vote, at a time when that was a deeply unfashionable view. He did not do this because he thought women were the equals of men. It was because he thought some women, especially the better educated, were superior to most men. Mill was a big fan of discrimination, so long as it was on the right grounds.

To 21st-century eyes, Mill’s system looks grossly undemocratic. Why should a lawyer get more votes than a labourer? Mill’s answer would be to turn the question on its head: why should a labourer get the same number of votes as a lawyer? Mill was no simple democrat, but he was no technocrat either. Lawyers didn’t qualify for their extra votes because politics placed a special premium on legal expertise. No, lawyers got their extra votes because what’s needed are people who have shown an aptitude for thinking about questions with no easy answers. Mill was trying to stack the system to ensure as many different points of view as possible were represented. A government made up exclusively of economists or legal experts would have horrified him. The labourer still gets a vote. Skilled labourers get two. But even though a task like bricklaying is a skill, it is a narrow one. What was needed was breadth. Mill believed that some points of view carried more weight simply because they had been exposed to more complexity along the way.

Jason Brennan, a very 21st-century philosopher, has tried to revive the epistocratic conception of politics, drawing on thinkers like Mill. In his 2016 book Against Democracy, Brennan insists that many political questions are simply too complex for most voters to comprehend. Worse, the voters are ignorant about how little they know: they lack the ability to judge complexity because they are so attached to simplistic solutions that feel right to them.

Brennan writes: “Suppose the United States had a referendum on whether to allow significantly more immigrants into the country. Knowing whether this is a good idea requires tremendous social scientific knowledge. One needs to know how immigration tends to affect crime rates, domestic wages, immigrants’ welfare, economic growth, tax revenues, welfare expenditures and the like. Most Americans lack this knowledge; in fact, our evidence is that they are systematically mistaken.”

In other words, it’s not just that they don’t know; it’s not even that they don’t know that they don’t know; it’s that they are wrong in ways that reflect their unwavering belief that they are right.

 
Some philosophers advocate exams for voters, to ‘screen out citizens who are badly misinformed’. Photograph: David Jones/PA

Brennan doesn’t have Mill’s faith that we can tell how well-equipped someone is to tackle a complex question by how difficult that person’s job is. There is too much chance and social conditioning involved. He would prefer an actual exam, to “screen out citizens who are badly misinformed or ignorant about the election, or who lack basic social scientific knowledge”. Of course, this just pushes the fundamental problem back a stage without resolving it: who gets to set the exam? Brennan teaches at a university, so he has little faith in the disinterested qualities of most social scientists, who have their own ideologies and incentives. He has also seen students cramming for exams, which can produce its own biases and blind spots. Still, he thinks Mill was right to suggest that the further one advances up the educational ladder, the more votes one should get: five extra votes for finishing high school, another five for a bachelor’s degree, and five more for a graduate degree.
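Purely as arithmetic, the graded franchise Brennan floats is easy to pin down. Here is a minimal sketch in Python: the weights – five extra votes for each completed stage of formal education – come from the text above, while the Voter record and the tallying function are illustrative assumptions of my own, not anything Brennan specifies.

```python
# Toy illustration of a graded franchise: one baseline vote, plus five extra
# votes per completed stage of formal education (high school, bachelor's,
# graduate). The weights follow the figures quoted in the text; the data
# structures around them are invented for illustration.
from dataclasses import dataclass

EXTRA_VOTES = {"none": 0, "high_school": 5, "bachelors": 10, "graduate": 15}

@dataclass
class Voter:
    choice: str      # the candidate or option the voter picks
    education: str   # "none" | "high_school" | "bachelors" | "graduate"

def tally(voters: list[Voter]) -> dict[str, int]:
    """Weight each ballot by 1 + the extra votes attached to the voter's education."""
    totals: dict[str, int] = {}
    for v in voters:
        totals[v.choice] = totals.get(v.choice, 0) + 1 + EXTRA_VOTES[v.education]
    return totals

print(tally([Voter("A", "none"), Voter("A", "none"), Voter("B", "graduate")]))
# Two unweighted ballots for A (2 votes) are outweighed by one graduate ballot for B (16 votes).
```

The toy example makes the distributional point plain: under such weights, a single highly educated voter can cancel out a roomful of less educated ones.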

Brennan is under no illusions about how provocative this case is today, 150 years after Mill made it. In the middle of the 19th century, the idea that political status should track social and educational standing was barely contentious; today, it is barely credible. Brennan also has to face the fact that contemporary social science provides plenty of evidence that the educated are just as subject to groupthink as other people, sometimes even more so. The political scientists Larry Bartels and Christopher Achen point this out in their 2016 book Democracy for Realists: “The historical record leaves little doubt that the educated, including the highly educated, have gone wrong in their moral and political thinking as often as everyone else.” Cognitive biases are no respecters of academic qualifications. How many social science graduates would judge the question about immigration according to the demanding tests that Brennan lays out, rather than according to what they would prefer to believe? The irony is that if Brennan’s voter exam were to ask whether the better-educated deserve more votes, the technically correct answer might be no. It would depend on who was marking it.

However, in one respect Brennan insists that the case for epistocracy has grown far stronger since Mill made it. That is because Mill was writing at the dawn of democracy. Mill published his arguments in the run-up to what became the Second Reform Act of 1867, which doubled the size of the franchise in Britain to nearly 2.5 million voters (out of a general population of 30 million). Mill’s case for epistocracy was based on his conviction that over time it would merge into democracy. The labourer who gets one vote today would get more tomorrow, once he had learned how to use his vote wisely. Mill was a great believer in the educative power of democratic participation.

Brennan thinks we now have 100-plus years of evidence that Mill was wrong. Voting is bad for us. It doesn’t make people better informed. If anything, it makes them stupider, because it dignifies their prejudices and ignorance in the name of democracy. “Political participation is not valuable for most people,” Brennan writes. “On the contrary, it does most of us little good and instead tends to stultify and corrupt us. It turns us into civic enemies who have grounds to hate one another.” The trouble with democracy is that it gives us no reason to become better informed. It tells us we are fine as we are. And we’re not.

In the end, Brennan’s argument is more historical than philosophical. If we were unaware of how democracy would turn out, it might make sense to cross our fingers and assume the best of it. But he insists that we do know, and so we have no excuse to keep kidding ourselves. Brennan thinks that we should regard epistocrats like himself as being in the same position as democrats were in the mid-19th century. What he is championing is anathema to many people, as democracy was back then. Still, we took a chance on democracy, waiting to see how it would turn out. Why shouldn’t we take a chance on epistocracy, now we know how the other experiment went? Why do we assume that democracy is the only experiment we are ever allowed to run, even after it has run out of steam?

It’s a serious question, and it gets to how the longevity of democracy has stifled our ability to think about the possibility of something different. What was once a seemingly reckless form of politics has become a byword for caution. And yet there are still good reasons to be cautious about ditching it. Epistocracy remains the reckless idea. There are two dangers in particular.

The first is that we set the bar too high in politics by insisting on looking for the best thing to do. Sometimes it is more important to avoid the worst. Even if democracy is often bad at coming up with the right answers, it is good at unpicking the wrong ones. Moreover, it is good at exposing people who think they always know best. Democratic politics assumes there is no settled answer to any question and it ensures that is the case by allowing everyone a vote, including the ignorant. The randomness of democracy – which remains its essential quality – protects us against getting stuck with truly bad ideas. It means that nothing will last for long, because something else will come along to disrupt it.

Epistocracy is flawed because of the second part of the word rather than the first – this is about power (kratos) as much as it is about knowledge (episteme). Fixing power to knowledge risks creating a monster that can’t be deflected from its course, even when it goes wrong – which it will, since no one and nothing is infallible. Not knowing the right answer is a great defence against people who believe that their knowledge makes them superior.

Brennan’s response to this argument (a version of which is made by David Estlund in his 2007 book Democratic Authority) is to turn it on its head. Since democracy is a form of kratos, too, he says, why aren’t we concerned about protecting individuals from the incompetence of the demos just as much as from the arrogance of the epistocrats? But these are not the same kinds of power. Ignorance and foolishness don’t oppress in the same way that knowledge and wisdom do, precisely because they are incompetent: the demos keeps changing its mind.

The democratic case against epistocracy is a version of the democratic case against pragmatic authoritarianism. You have to ask yourself where you’d rather be when things go wrong. Maybe things will go wrong quicker and more often in a democracy, but that is a different issue. Rather than thinking of democracy as the least worst form of politics, we could think of it as the best when at its worst. It is the difference between Winston Churchill’s famous dictum and a similar one from Alexis de Tocqueville a hundred years earlier that is less well-known but more apposite. More fires get started in a democracy, de Tocqueville said, but more fires get put out, too.

The recklessness of epistocracy is also a function of the historical record that Brennan uses to defend it. A century or more of democracy may have uncovered its failings, but it has also taught us that we can live with them. We are used to the mess and attached to the benefits. Being an epistocrat like Mill before democracy had got going is very different from being one now that democracy is well established. We now know what we know, not just about democracy’s failings, but about our tolerance for its incompetences.

The great German sociologist Max Weber, writing at the turn of the 20th century, took it for granted that universal suffrage was a dangerous idea, because of the way that it empowered the mindless masses. But he argued that once it had been granted, no sane politician should ever think about taking it away: the backlash would be too terrible. The only thing worse than letting everyone vote is telling some people that they no longer qualify. Never mind who sets the exam, who is going to tell us that we’ve failed? Mill was right: democracy comes after epistocracy, not before. You can’t run the experiment in reverse.

The cognitive biases that epistocracy is meant to rescue us from are what will ultimately scupper it. Loss aversion makes it more painful to be deprived of something we have that doesn’t always work than something we don’t have that might. It’s like the old joke. Q: “Do you know the way to Dublin?” A: “Well, I wouldn’t start from here.” How do we get to a better politics? Well, maybe we shouldn’t start from here. But here is where we are.

Still, there must be other ways of trying to inject more wisdom into democratic politics than an exam. This is the 21st century: we have new tools to work with. If many of the problems with democracy derive from the business of politicians hawking for votes at election time, which feeds noise and bile into the decision-making process, perhaps we should try to simulate what people would choose under more sedate and reflective conditions. For instance, it may be possible to extrapolate from what is known about voters’ interests and preferences what they ought to want if they were better able to access the knowledge they needed. We could run mock elections that replicate the input from different points of view, as happens in real elections, but which strip out all the distractions and distortions of democracy in action.

Brennan suggests the following: “We can administer surveys that track citizens’ political preferences and demographic characteristics, while testing their basic objective political knowledge. Once we have this information, we can simulate what would happen if the electorate’s demographics remained unchanged, but all citizens were able to get perfect scores on tests of objective political knowledge. We can determine, with a strong degree of confidence, what ‘We the People’ would want, if only ‘We the People’ understood what we were talking about.”
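What Brennan describes is, in statistical terms, a counterfactual prediction exercise: fit a model of stated preferences on demographics plus measured political knowledge, then ask what the same electorate would prefer if every knowledge score were set to the maximum. A minimal sketch follows, using synthetic survey data; the column names, model choice and numbers are illustrative assumptions, not Brennan’s actual methodology or dataset.

```python
# Sketch of an "enlightened preferences" simulation: model support for a policy
# as a function of demographics and a knowledge score, then predict support
# with every respondent's knowledge score set to the maximum.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical survey: demographics, a 0-10 knowledge score, and a yes/no preference.
survey = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "income": rng.normal(30_000, 10_000, n),
    "education_years": rng.integers(9, 21, n),
    "knowledge": rng.integers(0, 11, n),
})
# Simulated stated preference (in a real study this would be the survey response).
logit = 0.02 * (survey["age"] - 50) + 0.15 * (survey["knowledge"] - 5)
survey["supports_policy"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

features = ["age", "income", "education_years", "knowledge"]
model = LogisticRegression(max_iter=1000).fit(survey[features], survey["supports_policy"])

# Counterfactual: same demographics, but everyone gets a perfect knowledge score.
informed = survey[features].copy()
informed["knowledge"] = 10

print("Actual support:    %.1f%%" % (100 * survey["supports_policy"].mean()))
print("Simulated support: %.1f%%" % (100 * model.predict_proba(informed)[:, 1].mean()))
```

The gap between the two printed figures is, in effect, the “enlightened preference” correction: what the model claims ‘We the People’ would want if we understood what we were talking about.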

Democratic dignity – the idea that all citizens should be allowed to express their views and have them taken seriously by politicians – goes out the window under such a system. We are each reduced to data points in a machine-learning exercise. But, according to Brennan, the outcomes should improve.

In 2017, a US-based digital technology company called Kimera Systems announced that it was close to developing an AI named Nigel, whose job was to help voters know how they should vote in an election, based on what it already knew of their personal preferences. Its creator, Mounir Shita, declared: “Nigel tries to figure out your goals and what reality looks like to you and is constantly assimilating paths to the future to reach your goals. It’s constantly trying to push you in the right direction.”

 
‘Politicians don’t care what we actually want. They care what they can persuade us we want’ … Donald Trump in Michigan last week. Photograph: Chirag Wakaskar/SOPA/Rex/Shutterstock

This is the more personalised version of what Brennan is proposing, with some of the democratic dignity plugged back in. Nigel is not trying to work out what’s best for everyone, only what’s best for you. It accepts your version of reality. Yet Nigel understands that you are incapable of drawing the correct political inferences from your preferences. You need help, from a machine that has seen enough of your personal behaviour to understand what it is you are after. Siri recommends books you might like. Nigel recommends political parties and policy positions.

Would this be so bad? To many people it instinctively sounds like a parody of democracy because it treats us like confused children. But to Shita it is an enhancement of democracy because it takes our desires seriously. Democratic politicians don’t much care what it is that we actually want. They care what it is they can persuade us we want, so they can better appeal to it. Nigel puts the voter first. At the same time, by protecting us from our own confusion and inattention, Nigel strives to improve our self-understanding. Brennan’s version effectively gives up on Mill’s original idea that voting might be an educative experience. Shita hasn’t given up. Nigel is trying to nudge us along the path to self-knowledge. We might end up learning who we really are.

The fatal flaw with this approach, however, is that we risk learning only who it is we think we are, or who it is we would like to be. Worse, it is who we would like to be now, not who or what we might become in the future. Like focus groups, Nigel provides a snapshot of a set of attitudes at a moment in time. The danger of any system of machine learning is that it produces feedback loops. By restricting the dataset to our past behaviour, Nigel teaches us nothing about what other people think, or even about other ways of seeing the world. Nigel simply mines the archive of our attitudes for the most consistent expression of our identities. If we lean left, we will end up leaning further left. If we lean right, we will end up leaning further right. Social and political division would widen. Nigel is designed to close the circle in our minds.

There are technical fixes for feedback loops. Systems can be adjusted to inject alternative points of view, to notice when data is becoming self-reinforcing or simply to randomise the evidence. We can shake things up to lessen the risk that we get set in our ways. For instance, Nigel could make sure that we visit websites that challenge rather than reinforce our preferences. Alternatively, on Brennan’s model, the aggregation of our preferences could seek to take account of the likelihood that Nigel had exaggerated rather than tempered who we really are. A Nigel of Nigels – a machine that helps other machines to better align their own goals – could try to strip out the distortions from the artificial democracy we have built. After all, Nigel is our servant, not our master. We can always tell him what to do.
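One of those fixes – randomising the evidence so the loop cannot fully close – is easy to picture in code. Below is a minimal sketch of an epsilon-greedy recommender that mostly serves content matching a user’s past leanings but, some fraction of the time, deliberately injects an item from a different point of view. The item pool, the “lean” labels and the exploration rate are illustrative assumptions, not anything Nigel’s makers have described.

```python
# Epsilon-greedy recommendation: exploit the user's dominant lean most of the
# time, but explore other leans with probability epsilon to keep the feedback
# loop from closing.
import random

ARTICLES = {
    "left":   ["L1", "L2", "L3"],
    "right":  ["R1", "R2", "R3"],
    "centre": ["C1", "C2", "C3"],
}

def recommend(user_history: list[str], epsilon: float = 0.2) -> str:
    """Pick from the lean the user has clicked most, exploring other leans with prob. epsilon."""
    counts = {lean: sum(item in items for item in user_history)
              for lean, items in ARTICLES.items()}
    dominant = max(counts, key=counts.get)
    if random.random() < epsilon:
        # Inject an alternative point of view instead of reinforcing the dominant one.
        other_leans = [lean for lean in ARTICLES if lean != dominant]
        return random.choice(ARTICLES[random.choice(other_leans)])
    return random.choice(ARTICLES[dominant])

# Example: a user who has only ever clicked left-leaning pieces will still,
# one time in five, be shown something from outside that bubble.
print(recommend(["L1", "L2", "L1"]))
```

The catch, as the next paragraph argues, is that someone has to choose the value of epsilon and decide what counts as an “alternative” viewpoint – and that someone is the engineer, not the voter.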

But that is the other fundamental problem with 21st-century epistocracy: we won’t be the ones telling Nigel what to do. It will be the technicians who have built the system. They are the experts we rely on to rescue us from feedback loops. For this reason, it is hard to see how 21st-century epistocracy can avoid collapsing back into technocracy. When things go wrong, the knowers will be powerless to correct for them. Only the engineers who built the machines have that capacity, which means that it will be the engineers who have the power.

In recent weeks, we have been given a glimpse of what rule by engineers might look like. It is not an authoritarian nightmare of oppression and violence. It is a picture of confusion and obfuscation. The power of engineers never fully comes out into the open, because most people don’t understand what it is they do. The sight of Mark Zuckerberg, perched on his cushion, batting off the ignorant questions of the people’s representatives in Congress is a glimpse of a technocratic future in which democracy meets its match. But this is not a radical alternative to democratic politics. It is simply a distortion of it.