
Saturday 5 May 2018

Is your job pointless?

David Graeber in The Guardian

Copying and pasting emails. Inventing meaningless tasks for others. Just looking busy. Why do so many people feel their work is completely unnecessary?



 

Shoot me now: does your job do anyone any good? Illustration: Igor Bastidas


One day, the wall shelves in my office collapsed. This left books scattered all over the floor and a jagged, half-dislocated metal frame that once held the shelves in place dangling over my desk. I’m a professor of anthropology at a university. A carpenter appeared an hour later to inspect the damage, and announced gravely that, as there were books all over the floor, safety rules prevented him from entering the room or taking further action. I would have to stack the books and not touch anything else, whereupon he would return at the earliest available opportunity.

The carpenter never reappeared. Each day, someone in the anthropology department would call, often multiple times, to ask about the fate of the carpenter, who always turned out to have something extremely pressing to do. By the time a week was out, it had become apparent that there was one man employed by buildings and grounds whose entire job it was to apologise for the fact that the carpenter hadn’t come. He seemed a nice man. Still, it’s hard to imagine he was particularly happy with his work life.



Everyone is familiar with the sort of jobs that don’t seem, to the outsider, really to do much of anything: HR consultants, communications coordinators, PR researchers, financial strategists, corporate lawyers or the sort of people who spend their time staffing committees that discuss the problem of unnecessary committees. What if these jobs really are useless, and those who hold them are actually aware of it? Could there be anything more demoralising than having to wake up in the morning five out of seven days of one’s adult life to perform a task that one believes does not need to be performed, is simply a waste of time or resources, or even makes the world worse? There are plenty of surveys about whether people are happy at work, but what about whether people feel their jobs have any good reason to exist? I decided to investigate this phenomenon by drawing on more than 250 testimonies from people around the world who felt they once had, or now have, what I call a bullshit job.


What is a bullshit job?

The defining feature is this: a bullshit job is one so completely pointless that even the person who has to perform it every day cannot convince themselves there’s a good reason for them to be doing it. They may not be able to admit this to their co-workers – often, there are very good reasons not to do so – but they are convinced the job is pointless nonetheless.

Bullshit jobs are not just jobs that are useless; typically, there has to be some degree of pretence and fraud involved as well. The employee must feel obliged to pretend that there is, in fact, a good reason their job exists, even if, privately, they find such claims ridiculous.

When people speak of bullshit jobs, they are generally referring to employment that involves being paid to work for someone else, either on a waged or salaried basis (most would include paid consultancies). Obviously, there are many self-employed people who manage to get money from others by means of falsely pretending to provide them with some benefit or service (normally we call them grifters, scam artists, charlatans or frauds), just as there are self-employed people who get money off others by doing or threatening to do them harm (normally we refer to them as muggers, burglars, extortionists or thieves). In the first case, at least, we can definitely speak of bullshit, but not of bullshit jobs, because these aren’t “jobs”, properly speaking. A con job is an act, not a profession. People do sometimes speak of professional burglars, but this is just a way of saying that theft is the burglar’s primary source of income.

These considerations allow us to formulate what I think can serve as a final working definition of a bullshit job: a form of paid employment that is so completely pointless, unnecessary, or pernicious that even the employee cannot justify its existence, even though, as part of the conditions of employment, the employee feels obliged to pretend that this is not the case.


The five types of bullshit job

Flunkies

They are given some minor task to justify their existence, but this is really just a pretext: in reality, flunky jobs are those that exist only or primarily to make someone else look or feel important. A classic flunky is someone like Steve, who told me, “I just graduated, and my new ‘job’ basically consists of my boss forwarding emails to me with the message: ‘Steve refer to the below’, and I reply that the email is inconsequential or spam.”


Doormen are the most obvious example. They perform the same function in the houses of the very rich that electronic intercoms have performed for everyone else since at least the 1950s. In some countries, such as Brazil, some buildings still have uniformed elevator operators whose entire job is to push the button for you. Further examples are receptionists and front-desk personnel at places that obviously don’t need them. Other flunkies provide a badge of importance. These include cold callers, who make contact with potential clients on the understanding that the broker for whom they work is so busy making money that they need an assistant to make this call.

Goons

These are people whose jobs have an aggressive element but, crucially, who exist only because other people also employ people in these roles. The most obvious example of this is national armed forces. Countries need armies only because other countries have armies; if no one had an army, armies would not be needed. But the same can be said of most lobbyists, PR specialists, telemarketers and corporate lawyers.

Goons find their jobs objectionable not just because they feel they lack positive value, but also because they see them as essentially manipulative and aggressive. These include a lot of call-centre employees: “You’re making an active negative contribution to people’s day,” explained one anonymous testimony. “I called people up to hawk them useless shit: specifically, access to their ‘credit score’ that they could obtain for free elsewhere, but that we were offering, with some mindless add-ons, for £6.99 a month.”

Duct-tapers

These employees’ jobs exist only because of a glitch or fault in the organisation; they are there to solve a problem that ought not to exist. The most obvious examples of duct-tapers are those whose job it is to undo the damage done by sloppy or incompetent superiors.

Many duct-taper jobs are the result of a glitch in the system that no one has bothered to correct – tasks that could easily be automated, for instance, but haven’t been either because no one has got around to it, or because the manager wants to maintain as many subordinates as possible, or because of some structural confusion.

Magda’s job required her to proofread research reports written by her company’s star researcher-statistician. “The man didn’t know the first thing about statistics, and he struggled to produce grammatically correct sentences. I’d reward myself with a cake if I found a coherent paragraph. I lost 12lb working in that company. My job was to convince him to undertake a major reworking of every report he produced. Of course, he would never agree to correct anything, so I would then have to take the report to the company directors. They were statistically illiterate, too, but, being the directors, they could drag things out even more.”

Box-tickers

These employees exist only or primarily to allow an organisation to be able to claim it is doing something that, in fact, it is not doing. The most miserable thing about box-ticking jobs is that the employee is usually aware that not only does the box-ticking exercise do nothing towards accomplishing its ostensible purpose, but also it undermines it, because it diverts time and resources away from the purpose itself.

We’re all familiar with box-ticking as a form of government. If a government’s employees are caught doing something very bad – taking bribes, for instance, or shooting citizens at traffic lights – the first reaction is invariably to create a “fact-finding commission” to get to the bottom of things. This serves two functions. First of all, it’s a way of insisting that, aside from a small group of miscreants, no one had any idea that any of this was happening (this, of course, is rarely true); second, it’s a way of implying that once all the facts are in, someone will definitely do something about it (this usually isn’t true, either).



Local government has been described as little more than an endless sequence of box-ticking rituals revolving around monthly “target figures”. There are all sorts of ways that private companies employ people to be able to tell themselves they are doing something that they aren’t really doing. Many large corporations, for instance, maintain their own in-house magazines or even television channels, the ostensible purpose of which is to keep employees up to date on interesting news and developments, but which, in fact, exist for almost no reason other than to allow executives to experience that warm and pleasant feeling that comes when you see a favourable story about yourself in the media.

Taskmasters

These fall into two groups. Type one comprises those whose role consists entirely of assigning work to others. This job can be considered bullshit if the taskmaster believes there is no need for their intervention, and that if they were not there, underlings would be perfectly capable of carrying on by themselves.

Whereas the first variety of taskmaster is merely useless, the second variety does actual harm. These are taskmasters whose primary role is to create bullshit tasks for others to do, to supervise bullshit, or even to create entirely new bullshit jobs.

A taskmaster may spend at least 75% of their time allocating tasks and monitoring if the underling is doing them, even though they have absolutely no reason to believe the underlings in question would behave any differently if they weren’t there.

“Strategic mission statements” (or, even worse, “strategic vision documents”) instil a particular terror in academics. These are the primary means by which corporate management techniques – setting up quantifiable methods for assessing performance, forcing teachers and scholars to spend more and more of their time assessing and justifying what they do, and less and less time actually doing it – are insinuated into academic life.

I should add that there is really only one class of people who not only deny their jobs are pointless, but also express outright hostility to the very idea that our economy is rife with bullshit jobs. These are – predictably enough – business owners and others in charge of hiring and firing. No one, they insist, would ever spend company money on an employee who wasn’t needed. All the people who are convinced their jobs are worthless must be deluded, or self-important, or simply don’t understand their real function, which is fully visible only to those above. One might be tempted to conclude from this response that this is one class of people who genuinely don’t realise their own jobs are bullshit. 


Do you have a bullshit job?

These holders of bullshit jobs testify to the misery that can ensue when the only challenge you can overcome in your work is the challenge of coming to terms with the fact that you are not, in fact, presented with any challenges; when the only way you can exercise your powers is in coming up with creative ways to cover up the fact that you cannot exercise your powers; of managing the fact that you have, completely against your choosing, been turned into a parasite and a fraud. All wanted to remain anonymous:

Guarding an empty room

“I worked as a museum guard for a global security company in a museum where one exhibition room was left unused. My job was to guard that empty room, ensuring no museum guests touched the, well, nothing in the room and ensuring nobody set any fires. To keep my mind sharp and attention undivided, I was forbidden any form of mental stimulation, like books, phones, etc. As nobody was ever there, I sat still and twiddled my thumbs for seven and a half hours, waiting for the fire alarm to sound. If it did, I was to calmly stand up and walk out. That was it.”

Copying and pasting

“I was given one responsibility: watching an inbox that received emails in a certain form from employees asking for tech help, and copying and pasting them into a different form. Not only was this a textbook example of an automatable job, it actually used to be automated. There was some disagreement between managers that led to a standardisation that nullified the automation.”

Looking busy

“I was hired as a temp but not assigned any duties. I was told it was very important that I stay busy, but I wasn’t to play games or surf the web. My primary function seemed to be occupying a chair and contributing to the decorum of the office. At first, this seemed pretty easy, but I quickly discovered that looking busy when you aren’t is one of the least pleasant office activities imaginable. In fact, after two days, it was clear that this was going to be the worst job I had ever had. I installed Lynx, a text-only web browser that basically looks like a DOS [disk-operating system] window. No images, just monospaced text on an endless black background. My absentminded browsing of the internet now appeared to be the work of a skilled technician, the web browser a terminal into which diligently typed commands signalled my endless productivity.”

Sitting in the right place


“I work in a college dormitory during the summer. I have worked at this job for three years, and at this point it is still unclear to me what my actual duties are. Primarily, it seems that my job consists of physically occupying space at the front desk. While engaged in this, I am free to ‘pursue my own projects’, which I take to mean mainly creating rubber band balls out of rubber bands I find in the cabinets. When I am not busy with this, I might be checking the office email account (I have basically no training or administrative power, of course, so all I can do is forward these emails to my boss), moving packages from the door, where they get dropped off, to the package room, answering phone calls (again, I know nothing and rarely answer a question to the caller’s satisfaction), or finding ketchup packets from 2005 in the desk drawers. For these duties, I am paid $14 an hour.”

Into the brave new age of irrationality

The assault on rationality is part of a concerted political strategy, writes Sanjay Rajoura in The Hindu


Much has been written and said about the assault on liberal arts under way in India since the new political era dawned. But the real assault is on science and rationality. And it has not been difficult to mount this attack.

For long, India as a nation has proudly claimed to be a society of belief. And Indians like to assert that faith is a ‘way of life’ here. Terms such as modernity, rational thinking and scientific analysis are often frowned upon, and misdiagnosed as disrespect to Indian culture.


Freshly minted spokesmodel

In recent years, we have entered a new era. I call it the Era of Irrationality. The new Chief Minister of Tripura, Biplab Kumar Deb, is the freshly minted spokesmodel of this bold, new era.

There appears to be a relay race among people in public positions, each one making an astonishingly ridiculous claim and then passing on the baton. Mr. Deb’s claim that the Internet existed in the times of the Mahabharata is the latest. But there have been several others before it: that Ganesh was the first example of plastic surgery, that Darwin’s theory of evolution is hokum because nobody has seen monkeys turning into humans, and that Stephen Hawking had said the Vedas have a theory superior to Einstein’s E = mc2.

Such statements have made us the laughing stock of the global scientific community. But more importantly, they also undermine the significant scientific achievements we have made post-Independence.

We cannot even dismiss these as random remarks by the fringe, the babas and the sadhus. These claims are often made by public officials (it’s another matter that the babas and sadhus are now occupying several public offices). The assault on rationality is a consequence of a concerted strategy of political forces. As rational thinking thins, the same political forces fatten.

We Indians have never really adopted the scientific temper, irrespective of our education. It’s evident from our obsession with crackpot sciences such as astrology and palmistry in our daily lives. However, in the past four years, the belief in pseudo-sciences has gained a political fig leaf as have tall, unverifiable claims on science.

The cultivation of scientific temper involves asking questions and demanding empirical evidence. It has no place for blind faith. The ruling political dispensation is uncomfortable with questioning Indians. But at the same time, it also wants to come across as a dispensation that champions a 21st century modern India. Therein lies a catch-22 situation.

So, they have devised a devious strategy to invest in the culture of blind belief. They already have a willing constituency. Ludicrous statements like those mentioned above — made by leaders in positions of power with alarming frequency — go on to legitimise and boost the Era of Irrationality.

An unscientific society makes the job of an incompetent ruler a lot easier. No questions are asked; not even basic ones. The ruler has to just make a claim and the believers will worship him. Rather than conforming, a truly rational community often questions disparity, exploitation, persecution on the basis of caste, religion or gender. It demands answers and accountability for such violations, which are often based on irrational whims. Hence rationality must be on top of the casualty list followed quickly by the minorities, Dalits, women, liberals. For the ‘Irrationality project’ to succeed, the ruler needs a willing suspension of disbelief on a mass scale.


Science v. technology

The vigour with which the government is making an assault on the scientific temper only confirms that it is actually frightened of it. This is the reason why authoritarian regimes are often intolerant of those who champion the spirit of science, but encourage scientists who will launch satellites and develop nuclear weapons — even as they break coconuts, chant hymns and press “Enter” with their fingers laden with auspicious stones.

These ‘techno-scientists’ are what I call ‘the DJs of the scientific community’. And they are often the establishment’s yes-men and yes-women.

The founders of the Constitution were aware of this. Hence the words “scientific temper” and “the spirit of inquiry and reform” find a place in the Constitution, along with “secular” (belatedly), “equality” and “rights”. To dismantle secularism, dilute equality and push back rights, it is imperative to destroy a scientific temperament.

The indoctrination against the scientific temper begins very early in our lives. It starts in our families and communities, where young minds are aggressively discouraged from questioning authority. An upper-caste child, for example, may be forced to follow customs which, among other things, include practising and subscribing to the age-old caste system. The same methodology is used to impose fixed gender, sexual and religious identities. As a result, we are hardwired to be casteist, majoritarian and misogynist.

The final step in the ‘Irrationality project’ is to inject, with regularity, preposterous, over-the-top claims about the nation’s past. This effectively blurs our vision of the present.

The world is busy studying string theory, the god particle in a cyclotron, quantum mechanics. But we are busy expanding our chest size with claims of a fantastic yore.

Why is ignorance of science acceptable?

Janan Ganesh in The FT

Stephen Hawking’s final research paper clarifies his idea of a “multiverse”. I think. Published posthumously this week, it explores whether the same laws of physics obtain in all the parallel universes that were the Big Bang’s supposed offspring. Apparently. The paper envisages a plural but finite number of universes rather than a limitless amount. It says here. 


I do not begin to know how to engage with this material. Nor could I say more than a sentence or two about how aeroplanes achieve flight, or distinguish mass from weight, or name a chemical compound outside those two biggies, H2O and CO2. Not only can I not do calculus, I cannot tell you with much confidence what it is. 

For all this ignorance of the sciences, society treats me as a thoughtful person, rewards me with a line of work that is sometimes hard to distinguish from recreation and invites me to politico-media parties, where I catch up with people who, I promise you, make me look like a Copley Medalist. 

In 1959, CP Snow spoke of “two cultures”, the humanities and the sciences, the first blind to the second in a way that is not reciprocated. When his cultured friends laughed at scientists who did not know their way around the Shakespearean canon, he invited them to recite the Second Law of Thermodynamics. This should be no great ask for anyone of moderately rounded learning, he thought, but they were stumped, and peeved to be tested. It was more in despair than in mischief that he rolled out this parlour game of an evening. 

The subsequent trend of events — the space race, the energy crisis, the computer age — should have embarrassed those steeped exclusively in the humanities into meeting science halfway with a hybrid or “third” culture. In the likes of Ian McEwan, who smuggles scientific ideas into his novels, and Steven Pinker, who has tried to establish a scientific basis for literary style, there are some willing brokers of an intellectual concordat out there. 

Yet almost six decades on from Snow’s intervention, near-perfect ignorance of the natural world is still no bar to life as a sophisticate. In Britain, especially, scientific geniuses have always had to coexist with a culture that holds them to be somehow below stairs. This is not the principled anti-science of the Romantics or the hyper-religious. The laws of physics are not being doubted here. It is “just” an aesthetic distaste. 

We can guess at the costs of this distaste in a world already tilting to economies that do not share a bit of it. In this vision of the future, China and India are to the west what Snow said Germany and America were to late-Victorian Britain: profiteers of our own decadent neglect of the hard sciences. But what if the stakes are higher than mere material decline? 

Since the populist shocks of 2016, there has been fighting talk about the preciousness of facts and the urgency of their defence. It just tends to be tactical — a call for the regulation of Facebook, perhaps, or a more vigilant, news-buying citizenry. 

If something as basic as truth is faltering, the cause might be deeper than the habits and technologies of the past decade. The longer-term estrangement of humanities and science seems more like it. A culture that does not punish scientific ignoramuses, and instead hands us the keys to public life, is likely to be vulnerable and credulous — a sucker for any passing nonsense. 

It is not the content of scientific knowledge so much as the scientific method itself that helps to inoculate against ideology and hysteria. Doubt, evidence, falsifiability, the provisional status of all knowledge: these are priceless habits of mind, but you can go far in Britain and other rich democracies without much formal grounding in them. 

The Eloi-and-Morlocks split between the cultured and the scientific, the latter toiling unseen as a necessary evil, is too one-sided for the wider good. It should be a mortifying faux pas to profess ignorance of Hawking’s work in polite company. In his own country, it borders on a boast.

Friday 4 May 2018

Pakistan's Extraordinary Times

Najam Sethi in The Friday Times


We live in extraordinary times. There are over 100 TV channels and over 5000 newspapers, magazines and news websites in the country. Yet, on Press Freedom Day, Thursday May 3, the shackles that bind us and the gags that silence us must be recorded.

We cannot comment freely on the machinations of the Miltablishment without being roughed up or “disappeared”. We cannot comment freely on the utterances and decisions of the judges without being jailed for contempt. We cannot comment freely on the motives that drive the Pashtun Tahaffuz Movement and other rights-based groups without being berated for anti-state behavior. We cannot comment freely on the “protests” and “dharnas” of militant religious parties and groups without being accused of “blasphemy” and threatened with death. And so on. The price of freedom is high. There have been over 150 attacks on journalists in the last twelve months, one-third in Islamabad, the seat of the “democratically” elected, pro-media government.

We live in extraordinary times. With less than one month to go in the term of the present government, we still do not know who the interim prime minister and chief ministers will be, or whether general elections will be held on time or whether these will be rigged or free and fair.



We live in extraordinary times. The “hidden hand” is everywhere and nowhere at the same time, pulling the plug on dissenters. For over four years, the democratically elected PMLN government in Balochistan was alive and kicking. One day, suddenly, it was gone in a puff of smoke, replaced by a motley crew of pro-Miltablishment “representatives”. For over three decades, the MQM was alive and kicking. One day, it was splintered into three groups, each vying for the favours of the Miltablishment. For over two decades, Nawaz Sharif was the President of the PMLN and thrice elected prime minister of Pakistan. One day he was no more for ever. And so on.

We live in extraordinary times. For over five decades, the Peoples Party of the Bhuttos was the main liberal, anti-Miltablishment party in the country. Now, under the Zardaris, it is solidly on the side of the Miltablishment. For over seven decades, the Muslim League has been the main pro-Miltablishment party of the country. Now, under Nawaz Sharif, it is the main anti-Miltablishment party in Pakistan. Indeed, for long Mr Sharif was the blue-eyed boy of the Miltablishment. Now he is its chief nemesis.

We live in extraordinary times. A massive political engineering exercise is being held today to thwart some parties and politicians and prop up others. Such attempts were made in the past too but always under the umbrella of martial law and PCO judges. What is unprecedented in the current exercise is the bid to achieve the ends of martial law by “other” means. An unaccountable judiciary is the mask behind which lurks the Miltablishment. The judges have taken no new oath. Nor is the order of the day “provisional”.

We live in extraordinary times. The liberal and secular supporters of the PPP are in disarray. Some have sullenly retreated into a damning silence. Many have plonked their hearts in the freezer and are queuing up to vote for Nawaz Sharif because he is the sole anti-establishment leader in the country. A clutch is ever ready to join the ranks of rights-groups protesting “state” highhandedness or injustice, like the PTM. We are in the process of completing the circle that began with the left-wing, anti-establishment, party of Zulfikar Ali Bhutto and is ending with the right-wing, pro-establishment, party of Imran Khan. The “caring socialist-fascism” of the PPP in the 1970s has morphed into the “uncaring capitalist-fascism” of the PTI today. The middle-class, cheery, internationalist “hopefuls” of yesteryear have been swept aside by the middle-class, angry, nationalist “fearfuls” of today.

We live in extraordinary times. In the first two decades of Pakistan, we stumbled from one civil-military bureaucrat to another without an organic constitution or free and fair elections. In the third decade, we lost half the country because of the political engineering of the first two decades but managed to cobble together a democratic constitution in its aftermath. Trouble arose when we violated the constitutional rules of democracy and paid the price of martial law in the fourth decade. In the fifth, we reeled from one engineered election and government to another until we were engulfed by another martial law in the sixth. In the seventh, we vowed to stick together under a Charter of Democracy but joined hands with the Miltablishment to violate the rules of the game. Now, after sacrificing two elected prime ministers at the altar of “justice”, we are back at the game of political engineering in the new decade.

Pakistan is more internally disunited today than ever before. It has more external enemies today than ever before. It is more economically, demographically and environmentally challenged today than ever before. The more it experiments with engineered political change, the worse it becomes. We live in extraordinary times.

Thursday 3 May 2018

Big Tech is sorry. Why Silicon Valley can’t fix itself

Tech insiders have finally started admitting their mistakes – but the solutions they are offering could just help the big players get even more powerful. By Ben Tarnoff and Moira Weigel in The Guardian 


Big Tech is sorry. After decades of rarely apologising for anything, Silicon Valley suddenly seems to be apologising for everything. They are sorry about the trolls. They are sorry about the bots. They are sorry about the fake news and the Russians, and the cartoons that are terrifying your kids on YouTube. But they are especially sorry about our brains.

Sean Parker, the former president of Facebook – who was played by Justin Timberlake in The Social Network – has publicly lamented the “unintended consequences” of the platform he helped create: “God only knows what it’s doing to our children’s brains.” Justin Rosenstein, an engineer who helped build Facebook’s “like” button and Gchat, regrets having contributed to technology that he now considers psychologically damaging, too. “Everyone is distracted,” Rosenstein says. “All of the time.” 

Ever since the internet became widely used by the public in the 1990s, users have heard warnings that it is bad for us. In the early years, many commentators described cyberspace as a parallel universe that could swallow enthusiasts whole. The media fretted about kids talking to strangers and finding porn. A prominent 1998 study from Carnegie Mellon University claimed that spending time online made you lonely, depressed and antisocial.

In the mid-2000s, as the internet moved on to mobile devices, physical and virtual life began to merge. Bullish pundits celebrated the “cognitive surplus” unlocked by crowdsourcing and the tech-savvy campaigns of Barack Obama, the “internet president”. But, alongside these optimistic voices, darker warnings persisted. Nicholas Carr’s The Shallows (2010) argued that search engines were making people stupid, while Eli Pariser’s The Filter Bubble (2011) claimed algorithms made us insular by showing us only what we wanted to see. In Alone Together (2011) and Reclaiming Conversation (2015), Sherry Turkle warned that constant connectivity was making meaningful interaction impossible.

Still, inside the industry, techno-utopianism prevailed. Silicon Valley seemed to assume that the tools they were building were always forces for good – and that anyone who questioned them was a crank or a luddite. In the face of an anti-tech backlash that has surged since the 2016 election, however, this faith appears to be faltering. Prominent people in the industry are beginning to acknowledge that their products may have harmful effects.

Internet anxiety isn’t new. But never before have so many notable figures within the industry seemed so anxious about the world they have made. Parker, Rosenstein and the other insiders now talking about the harms of smartphones and social media belong to an informal yet influential current of tech critics emerging within Silicon Valley. You could call them the “tech humanists”. Amid rising public concern about the power of the industry, they argue that the primary problem with its products is that they threaten our health and our humanity.

It is clear that these products are designed to be maximally addictive, in order to harvest as much of our attention as they can. Tech humanists say this business model is both unhealthy and inhumane – that it damages our psychological well-being and conditions us to behave in ways that diminish our humanity. The main solution that they propose is better design. By redesigning technology to be less addictive and less manipulative, they believe we can make it healthier – we can realign technology with our humanity and build products that don’t “hijack” our minds.

The hub of the new tech humanism is the Center for Humane Technology in San Francisco. Founded earlier this year, the nonprofit has assembled an impressive roster of advisers, including investor Roger McNamee, Lyft president John Zimmer, and Rosenstein. But its most prominent spokesman is executive director Tristan Harris, a former “design ethicist” at Google who has been hailed by the Atlantic magazine as “the closest thing Silicon Valley has to a conscience”. Harris has spent years trying to persuade the industry of the dangers of tech addiction. In February, Pierre Omidyar, the billionaire founder of eBay, launched a related initiative: the Tech and Society Solutions Lab, which aims to “maximise the tech industry’s contributions to a healthy society”.

As suspicion of Silicon Valley grows, the tech humanists are making a bid to become tech’s loyal opposition. They are using their insider credentials to promote a particular diagnosis of where tech went wrong and of how to get it back on track. For this, they have been getting a lot of attention. As the backlash against tech has grown, so too has the appeal of techies repenting for their sins. The Center for Humane Technology has been profiled – and praised – by the New York Times, the Atlantic, Wired and others.

But tech humanism’s influence cannot be measured solely by the positive media coverage it has received. The real reason tech humanism matters is because some of the most powerful people in the industry are starting to speak its idiom. Snap CEO Evan Spiegel has warned about social media’s role in encouraging “mindless scrambles for friends or unworthy distractions”, and Twitter boss Jack Dorsey recently claimed he wants to improve the platform’s “conversational health”. 

Even Mark Zuckerberg, famous for encouraging his engineers to “move fast and break things”, seems to be taking a tech humanist turn. In January, he announced that Facebook had a new priority: maximising “time well spent” on the platform, rather than total time spent. By “time well spent”, Zuckerberg means time spent interacting with “friends” rather than businesses, brands or media sources. He said the News Feed algorithm was already prioritising these “more meaningful” activities.

Zuckerberg’s choice of words is significant: Time Well Spent is the name of the advocacy group that Harris led before co-founding the Center for Humane Technology. In April, Zuckerberg brought the phrase to Capitol Hill. When a photographer snapped a picture of the notes Zuckerberg used while testifying before the Senate, they included a discussion of Facebook’s new emphasis on “time well spent”, under the heading “wellbeing”.

This new concern for “wellbeing” may strike some observers as a welcome development. After years of ignoring their critics, industry leaders are finally acknowledging that problems exist. Tech humanists deserve credit for drawing attention to one of those problems – the manipulative design decisions made by Silicon Valley.

But these decisions are only symptoms of a larger issue: the fact that the digital infrastructures that increasingly shape our personal, social and civic lives are owned and controlled by a few billionaires. Because it ignores the question of power, the tech-humanist diagnosis is incomplete – and could even help the industry evade meaningful reform. Taken up by leaders such as Zuckerberg, tech humanism is likely to result in only superficial changes. These changes may soothe some of the popular anger directed towards the tech industry, but they will not address the origin of that anger. If anything, they will make Silicon Valley even more powerful.

The Center for Humane Technology argues that technology must be “aligned” with humanity – and that the best way to accomplish this is through better design. Their website features a section entitled The Way Forward. A familiar evolutionary image shows the silhouettes of several simians, rising from their crouches to become a man, who then turns back to contemplate his history.

“In the future, we will look back at today as a turning point towards humane design,” the header reads. In answer to the litany of problems caused by “technology that extracts attention and erodes society”, the text asserts that “humane design is the solution”. Drawing on the rhetoric of the “design thinking” philosophy that has long suffused Silicon Valley, the website explains that humane design “starts by understanding our most vulnerable human instincts so we can design compassionately”.

There is a good reason why the language of tech humanism is penetrating the upper echelons of the tech industry so easily: this language is not foreign to Silicon Valley. On the contrary, “humanising” technology has long been its central ambition and the source of its power. It was precisely by developing a “humanised” form of computing that entrepreneurs such as Steve Jobs brought computing into millions of users’ everyday lives. Their success turned the Bay Area tech industry into a global powerhouse – and produced the digitised world that today’s tech humanists now lament.

The story begins in the 1960s, when Silicon Valley was still a handful of electronics firms clustered among fruit orchards. Computers came in the form of mainframes then. These machines were big, expensive and difficult to use. Only corporations, universities and government agencies could afford them, and they were reserved for specialised tasks, such as calculating missile trajectories or credit scores.

Computing was industrial, in other words, not personal, and Silicon Valley remained dependent on a small number of big institutional clients. The practical danger that this dependency posed became clear in the early 1960s, when the US Department of Defense, by far the single biggest buyer of digital components, began cutting back on its purchases. But the fall in military procurement wasn’t the only mid-century crisis around computing.

Computers also had an image problem. The inaccessibility of mainframes made them easy to demonise. In these whirring hulks of digital machinery, many observers saw something inhuman, even evil. To antiwar activists, computers were weapons of the war machine that was killing thousands in Vietnam. To highbrow commentators such as the social critic Lewis Mumford, computers were instruments of a creeping technocracy that threatened to extinguish personal freedom.

But during the course of the 1960s and 70s, a series of experiments in northern California helped solve both problems. These experiments yielded breakthrough innovations like the graphical user interface, the mouse and the microprocessor. Computers became smaller, more usable and more interactive, reducing Silicon Valley’s reliance on a few large customers while giving digital technology a friendlier face.

The pioneers who led this transformation believed they were making computing more human. They drew deeply from the counterculture of the period, and its fixation on developing “human” modes of living. They wanted their machines to be “extensions of man”, in the words of Marshall McLuhan, and to unlock “human potential” rather than repress it. At the centre of this ecosystem of hobbyists, hackers, hippies and professional engineers was Stewart Brand, famed entrepreneur of the counterculture and founder of the Whole Earth Catalog. In a famous 1972 article for Rolling Stone, Brand called for a new model of computing that “served human interest, not machine”.

Brand’s disciples answered this call by developing the technical innovations that transformed computers into the form we recognise today. They also promoted a new way of thinking about computers – not as impersonal slabs of machinery, but as tools for unleashing “human potential”.

No single figure contributed more to this transformation of computing than Steve Jobs, who was a fan of Brand and a reader of the Whole Earth Catalog. Jobs fulfilled Brand’s vision on a global scale, launching the mass personal computing era with the Macintosh in the mid-80s, and the mass smartphone era with the iPhone two decades later. Brand later acknowledged that Jobs embodied the Whole Earth Catalog ethos. “He got the notion of tools for human use,” Brand told Jobs’ biographer, Walter Isaacson.

Building those “tools for human use” turned out to be great for business. The impulse to humanise computing enabled Silicon Valley to enter every crevice of our lives. From phones to tablets to laptops, we are surrounded by devices that have fulfilled the demands of the counterculture for digital connectivity, interactivity and self-expression. Your iPhone responds to the slightest touch; you can look at photos of anyone you have ever known, and broadcast anything you want to all of them, at any moment.

In short, the effort to humanise computing produced the very situation that the tech humanists now consider dehumanising: a wilderness of screens where digital devices chase every last instant of our attention. To guide us out of that wilderness, tech humanists say we need more humanising. They believe we can use better design to make technology serve human nature rather than exploit and corrupt it. But this idea is drawn from the same tradition that created the world that tech humanists believe is distracting and damaging us.

Tech humanists say they want to align humanity and technology. But this project is based on a deep misunderstanding of the relationship between humanity and technology: namely, the fantasy that these two entities could ever exist in separation.

It is difficult to imagine human beings without technology. The story of our species began when we began to make tools. Homo habilis, the first members of our genus, left sharpened stones scattered across Africa. Their successors hit rocks against each other to make sparks, and thus fire. With fire you could cook meat and clear land for planting; with ash you could fertilise the soil; with smoke you could make signals. In flickering light, our ancestors painted animals on cave walls. The ancient tragedian Aeschylus recalled this era mythically: Prometheus, in stealing fire from the gods, “founded all the arts of men.”

All of which is to say: humanity and technology are not only entangled, they constantly change together. This is not just a metaphor. Recent research suggests that the human hand evolved to manipulate the stone tools that our ancestors used. The evolutionary scientist Mary Marzke shows that we developed “a unique pattern of muscle architecture and joint surface form and functions” for this purpose.

The ways our bodies and brains change in conjunction with the tools we make have long inspired anxieties that “we” are losing some essential qualities. For millennia, people have feared that new media were eroding the very powers that they promised to extend. In The Phaedrus, Socrates warned that writing on wax tablets would make people forgetful. If you could jot something down, you wouldn’t have to remember it. In the late middle ages, as a culture of copying manuscripts gave way to printed books, teachers warned that pupils would become careless, since they no longer had to transcribe what their teachers said.

Yet as we lose certain capacities, we gain new ones. People who used to navigate the seas by following stars can now program computers to steer container ships from afar. Your grandmother probably has better handwriting than you do – but you probably type faster.

The nature of human nature is that it changes. It cannot, therefore, serve as a stable basis for evaluating the impact of technology. Yet the assumption that it doesn’t change serves a useful purpose. Treating human nature as something static, pure and essential elevates the speaker into a position of power. Claiming to tell us who we are, they tell us how we should be.

Intentionally or not, this is what tech humanists are doing when they talk about technology as threatening human nature – as if human nature had stayed the same from the paleolithic era until the rollout of the iPhone. Holding humanity and technology separate clears the way for a small group of humans to determine the proper alignment between them. And while the tech humanists may believe they are acting in the common good, they themselves acknowledge they are doing so from above, as elites. “We have a moral responsibility to steer people’s thoughts ethically,” Tristan Harris has declared.

Harris and his fellow tech humanists also frequently invoke the language of public health. The Center for Humane Technology’s Roger McNamee has gone so far as to call public health “the root of the whole thing”, and Harris has compared using Snapchat to smoking cigarettes. The public-health framing casts the tech humanists in a paternalistic role. Resolving a public health crisis requires public health expertise. It also precludes the possibility of democratic debate. You don’t put the question of how to treat a disease up for a vote – you call a doctor.

This paternalism produces a central irony of tech humanism: the language that they use to describe users is often dehumanising. “Facebook appeals to your lizard brain – primarily fear and anger,” says McNamee. Harris echoes this sentiment: “Imagine you had an input cable,” he has said. “You’re trying to jack it into a human being. Do you want to jack it into their reptilian brain, or do you want to jack it into their more reflective self?”

The Center for Humane Technology’s website offers tips on how to build a more reflective and less reptilian relationship to your smartphone: “going greyscale” by setting your screen to black-and-white, turning off app notifications and charging your device outside your bedroom. It has also announced two major initiatives: a national campaign to raise awareness about technology’s harmful effects on young people’s “digital health and well-being”; and a “Ledger of Harms” – a website that will compile information about the health effects of different technologies in order to guide engineers in building “healthier” products.

These initiatives may help some people reduce their smartphone use – a reasonable personal goal. But there are some humans who may not share this goal, and there need not be anything unhealthy about that. Many people rely on the internet for solace and solidarity, especially those who feel marginalised. The kid with autism may stare at his screen when surrounded by people, because it lets him tolerate being surrounded by people. For him, constant use of technology may not be destructive at all, but in fact life-saving.

Pathologising certain potentially beneficial behaviours as “sick” isn’t the only problem with the Center for Humane Technology’s proposals. They also remain confined to the personal level, aiming to redesign how the individual user interacts with technology rather than tackling the industry’s structural failures. Tech humanism fails to address the root cause of the tech backlash: the fact that a small handful of corporations own our digital lives and strip-mine them for profit. This is a fundamentally political and collective issue. But by framing the problem in terms of health and humanity, and the solution in terms of design, the tech humanists personalise and depoliticise it.

This may be why their approach is so appealing to the tech industry. There is no reason to doubt the good intentions of tech humanists, who may genuinely want to address the problems fuelling the tech backlash. But they are handing the firms that caused those problems a valuable weapon. Far from challenging Silicon Valley, tech humanism offers Silicon Valley a useful way to pacify public concerns without surrendering any of its enormous wealth and power. By channelling popular anger at Big Tech into concerns about health and humanity, tech humanism gives corporate giants such as Facebook a way to avoid real democratic control. In a moment of danger, it may even help them protect their profits.

One can easily imagine a version of Facebook that embraces the principles of tech humanism while remaining a profitable and powerful monopoly. In fact, these principles could make Facebook even more profitable and powerful, by opening up new business opportunities. That seems to be exactly what Facebook has planned.

When Zuckerberg announced that Facebook would prioritise “time well spent” over total time spent, it came a couple of weeks before the company released its 2017 Q4 earnings. These reported that total time spent on the platform had dropped by around 5%, or about 50m hours per day. But, Zuckerberg said, this was by design: in particular, it was in response to tweaks to the News Feed that prioritised “meaningful” interactions with “friends” rather than consuming “public content” like video and news. This would ensure that “Facebook isn’t just fun, but also good for people’s well-being”.

Zuckerberg said he expected those changes would continue to decrease total time spent – but “the time you do spend on Facebook will be more valuable”. This may describe what users find valuable – but it also refers to what Facebook finds valuable. In a recent interview, he said: “Over the long term, even if time spent goes down, if people are spending more time on Facebook actually building relationships with people they care about, then that’s going to build a stronger community and build a stronger business, regardless of what Wall Street thinks about it in the near term.”

Sheryl Sandberg has also stressed that the shift will create “more monetisation opportunities”. How? Everyone knows data is the lifeblood of Facebook – but not all data is created equal. One of the most valuable sources of data to Facebook is used to inform a metric called “coefficient”. This measures the strength of a connection between two users – Zuckerberg once called it “an index for each relationship”. Facebook records every interaction you have with another user – from liking a friend’s post or viewing their profile, to sending them a message. These activities provide Facebook with a sense of how close you are to another person, and different activities are weighted differently. Messaging, for instance, is considered the strongest signal. It’s reasonable to assume that you’re closer to somebody you exchange messages with than somebody whose post you once liked.
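To make the weighting idea concrete, here is a minimal, purely illustrative sketch of such a score. The interaction types and weights below are invented for illustration; Facebook’s actual coefficient model is proprietary, and nothing here should be read as its real implementation.

```python
# Illustrative sketch only: the interaction types and weights are invented.
# The idea, as described above, is a weighted tally of interactions between
# two users, with messaging counted as the strongest signal of closeness.

WEIGHTS = {
    "message": 5.0,       # strongest signal, per the article
    "comment": 2.0,
    "like": 1.0,
    "profile_view": 0.5,
}

def coefficient(interactions):
    """Score the closeness of two users from (interaction_type, count) pairs."""
    return sum(WEIGHTS.get(kind, 0.0) * count for kind, count in interactions)

# A pair who message each other scores higher than a pair who only trade likes.
print(coefficient([("message", 12), ("like", 3)]))   # 63.0
print(coefficient([("like", 20)]))                    # 20.0
```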

Why is coefficient so valuable? Because Facebook uses it to create a Facebook they think you will like: it guides algorithmic decisions about what content you see and the order in which you see it. It also helps improve ad targeting, by showing you ads for things liked by friends with whom you often interact. Advertisers can target the closest friends of the users who already like a product, on the assumption that close friends tend to like the same things.

 
Facebook CEO Mark Zuckerberg testifies before the US Senate last month. Photograph: Jim Watson/AFP/Getty Images

So when Zuckerberg talks about wanting to increase “meaningful” interactions and building relationships, he is not succumbing to pressure to take better care of his users. Rather, emphasising time well spent means creating a Facebook that prioritises data-rich personal interactions that Facebook can use to make a more engaging platform. Rather than spending a lot of time doing things that Facebook doesn’t find valuable – such as watching viral videos – you can spend a bit less time, but spend it doing things that Facebook does find valuable.

In other words, “time well spent” means Facebook can monetise more efficiently. It can prioritise the intensity of data extraction over its extensiveness. This is a wise business move, disguised as a concession to critics. Shifting to this model not only sidesteps concerns about tech addiction – it also acknowledges certain basic limits to Facebook’s current growth model. There are only so many hours in the day. Facebook can’t keep prioritising total time spent – it has to extract more value from less time.

In many ways, this process recalls an earlier stage in the evolution of capitalism. In the 19th century, factory owners in England discovered they could only make so much money by extending the length of the working day. At some point, workers would die of exhaustion, or they would revolt, or they would push parliament to pass laws that limited their working hours. So industrialists had to find ways to make the time of the worker more valuable – to extract more money from each moment rather than adding more moments. They did this by making industrial production more efficient: developing new technologies and techniques that squeezed more value out of the worker and stretched that value further than ever before.

A similar situation confronts Facebook today. They have to make the attention of the user more valuable – and the language and concepts of tech humanism can help them do it. So far, it seems to be working. Despite the reported drop in total time spent, Facebook recently announced huge 2018 Q1 earnings of $11.97bn (£8.7bn), smashing Wall Street estimates by nearly $600m.

Today’s tech humanists come from a tradition with deep roots in Silicon Valley. Like their predecessors, they believe that technology and humanity are distinct, but can be harmonised. This belief guided the generations who built the “humanised” machines that became the basis for the industry’s enormous power. Today it may provide Silicon Valley with a way to protect that power from a growing public backlash – and even deepen it by uncovering new opportunities for profit-making.

Fortunately, there is another way of thinking about how to live with technology – one that is both truer to the history of our species and useful for building a more democratic future. This tradition does not address “humanity” in the abstract, but as distinct human beings, whose capacities are shaped by the tools they use. It sees us as hybrids of animal and machine – as “cyborgs”, to quote the biologist and philosopher of science Donna Haraway.

To say that we’re all cyborgs is not to say that all technologies are good for us, or that we should embrace every new invention. But it does suggest that living well with technology can’t be a matter of making technology more “human”. This goal isn’t just impossible – it’s also dangerous, because it puts us at the mercy of experts who tell us how to be human. It cedes control of our technological future to those who believe they know what’s best for us because they understand the essential truths about our species.

The cyborg way of thinking, by contrast, tells us that our species is essentially technological. We change as we change our tools, and our tools change us. But even though our continuous co-evolution with our machines is inevitable, the way it unfolds is not. Rather, it is determined by who owns and runs those machines. It is a question of power.

Today, that power is wielded by corporations, which own our technology and run it for profit. The various scandals that have stoked the tech backlash all share a single source. Surveillance, fake news and the miserable working conditions in Amazon’s warehouses are profitable. If they were not, they would not exist. They are symptoms of a profound democratic deficit inflicted by a system that prioritises the wealth of the few over the needs and desires of the many.

There is an alternative. If being technological is a feature of being human, then the power to shape how we live with technology should be a fundamental human right. The decisions that most affect our technological lives are far too important to be left to Mark Zuckerberg, rich investors or a handful of “humane designers”. They should be made by everyone, together.

Rather than trying to humanise technology, then, we should be trying to democratise it. We should be demanding that society as a whole gets to decide how we live with technology – rather than the small group of people who have captured society’s wealth.

What does this mean in practice? First, it requires limiting and eroding Silicon Valley’s power. Antitrust laws and tax policy offer useful ways to claw back the fortunes Big Tech has built on common resources. After all, Silicon Valley wouldn’t exist without billions of dollars of public funding, not to mention the vast quantities of information that we all provide for free. Facebook’s market capitalisation is $500bn with 2.2 billion users – do the math to estimate how much the time you spend on Facebook is worth. You could apply the same logic to Google. There is no escape: whether or not you have an account, both platforms track you around the internet.
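Taking the two figures quoted above at face value, here is a rough sketch of the back-of-the-envelope calculation the passage invites – a crude proxy for scale, not a precise valuation of anyone's attention:

```python
# Rough estimate of what the market implicitly values each Facebook user at,
# using only the figures quoted above (early-2018 market cap and user count).
market_cap_usd = 500e9   # ~$500bn market capitalisation
users = 2.2e9            # ~2.2 billion users

value_per_user = market_cap_usd / users
print(f"Implied market value per user: ${value_per_user:,.0f}")  # roughly $227
```

On those numbers, each user accounts for roughly $227 of market value – a sense of the stakes behind "do the math".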

In addition to taxing and shrinking tech firms, democratic governments should be making rules about how those firms are allowed to behave – rules that restrict how they can collect and use our personal data, for instance, like the General Data Protection Regulation coming into effect in the European Union later this month. But more robust regulation of Silicon Valley isn’t enough. We also need to pry the ownership of our digital infrastructure away from private firms. 

This means developing publicly and co-operatively owned alternatives that empower workers, users and citizens to determine how they are run. These democratic digital structures can focus on serving personal and social needs rather than piling up profits for investors. One inspiring example is municipal broadband: a successful experiment in Chattanooga, Tennessee, has shown that publicly owned internet service providers can supply better service at lower cost than private firms. Other models of digital democracy might include a worker-owned Uber, a user-owned Facebook or a socially owned “smart city” of the kind being developed in Barcelona. Alternatively, we might demand that tech firms pay for the privilege of extracting our data, so that we can collectively benefit from a resource we collectively create.

More experimentation is needed, but democracy should be our guiding principle. The stakes are high. Never before have so many people been thinking about the problems produced by the tech industry and how to solve them. The tech backlash is an enormous opportunity – and one that may not come again for a long time.

The old techno-utopianism is crumbling. What will replace it? Silicon Valley says it wants to make the world a better place. Fulfilling this promise may require a new kind of disruption.

Why ‘Sufism’ is not what it is made out to be

Zahra Sabri in The Dawn

In a variety of Islamic political contexts around the world today, we see ‘Sufi’ ideas being invoked as a call to return to a deeper, more inward-directed (and more peaceful) mode of religious experience as compared to the one that results in outward-oriented political engagements that are often seen as negative and violent. A hundred years ago, it would not have been uncommon to hear western or West-influenced native voices condemn Islamic mysticism (often described problematically in English as ‘Sufism’) as one of the major sources of inertia and passivity within Muslim societies. Yet new political contingencies, especially after 9/11, have led to this same phenomenon being described as ‘the soft face of Islam’, with observers such as British writer William Dalrymple referring to a vaguely defined group of people called ‘the Sufis’ as ‘our’ best friends vis-à-vis the danger posed by Taliban-like forces.

We seem to be in a situation where journalistic discourse and policy debates celebrate idealised notions of Islamic mysticism with its enthralling music, inspiring poetry and the transformative/liberating potential of the ‘message’ of the great mystics. These mystics are clearly differentiated from more ‘closed-minded’ and ‘orthodox’ representatives of the faith such as preachers (mullahs), theologians (fuqaha) and other types of ulema.

On the other hand, when we trace the institutional legacy of these great mystics (walis/shaikhs) and spiritual guides (pirs) down to their present-day spiritual heirs, we find out that they are often all too well-entrenched in the social and political status quo. The degree of their sociopolitical influence has even become electorally quantifiable since the introduction of parliamentary institutions during colonial times. Pirs in Pakistan have been visible as powerful party leaders (Pir Pagara), ministers (Shah Mahmood Qureshi) and even prime ministers (Yousaf Raza Gillani). Even more traditional religious figures, such as Pir Hameeduddin Sialvi (who recently enjoyed media attention for threatening to withdraw support from the ruling party over a religious issue that unites many types of religious leaders), not only exercise considerable indirect influence over the vote but have also served as members of various legislative forums.

It is, therefore, unclear what policymakers mean when they call for investment in the concepts and traditions of ‘Sufi Islam’. Is it an appeal for the promotion of a particular kind of religious ethic through the public education system? Or is it a call for raising the public profile of little known faqirs and dervishes and for strengthening the position of existing sajjada-nishins (hereditary representatives of pirs and mystics and the custodians of their shrines), many of whom already enjoy a high level of social and political prominence and influence? Or are policymakers referring to some notion of Islamic mysticism that has remained very much at the level of poetic utterance or philosophical discourse — that is, at the level of the ideal rather than at the level of reality as lived and experienced by Muslims over centuries?

The salience of idealised notions of Islamic mysticism in various policy circles today makes it interesting to examine the historical relations that mystic groups within Islamic societies have had with the ruling classes and the guardians of religious law. What has the typical relationship among kings, ulema and mystics been, for example, in regions such as Central Asia, Anatolia, Persia and Mughal India that fall in a shared Persianate cultural and intellectual zone? Has tasawwuf (Islamic mysticism) historically been a passive or apolitical force in society, or have prominent mystics engaged with politics and society in ways that are broadly comparable to the way other kinds of religious representatives have done so?

It is instructive to turn first to the life of an Islamic mystic who is perhaps more celebrated and widely recognised than any other: Maulana Jalaluddin Rumi (d. 1273). He lived in Konya in modern-day Turkey. The fame of his mystic verse has travelled far and wide, but what is less widely known is that he had received a thorough training in fiqh (Islamic law).

Historical accounts show that he had studied the Quran and fiqh at a very high level in some of the most famous madrasas in Aleppo and Damascus. Later, he served as a teacher of fiqh at several madrasas. In this, he appears to have followed his father, who was a religious scholar at a princely court in Anatolia and taught at an institution that blended the functions of a madrasa with those of a khanqah, demonstrating how fluid the relationship between an Islamic law college and a mystic lodge could be in Islamic societies. Even madrasas built exclusively for training ulema have often been paired with khanqahs for centuries.


Jahangir showing preference to shaikhs over kings | Courtesy purchase, Charles Lang Freer Endowment


Biographers have described how Rumi’s legal opinions were frequently sought on a variety of subjects. As a spiritual guide and preacher, he regularly delivered the Friday sermon (khutba), achieving popularity as an acclaimed speaker and attracting a considerable number of disciples from all parts of society. His followers included merchants and artisans as well as members of the ruling class. His lectures were attended by both women and men in Konya. For much of this time, he was also composing his renowned poetry and becoming identified with his own style of sama’a and dance, which sometimes drew criticism from other ulema, many of whom nevertheless continued to revere him.

It is evident from Rumi’s letters that he also had extremely close relations with several Seljuk rulers, even referring to one of them as ‘son’. It was not rare for him to advise these rulers on various points of statesmanship and make recommendations (for instance, on relations with infidel powers) in light of religious strictures and political expediencies. He is also known to have written letters to introduce his disciples and relatives to men of position and influence who could help them professionally or socially. Unlike his religious sermons and ecstatic poetry, these letters follow the conventions typically associated with correspondence addressed to nobles and state officials.

All this contradicts the idea that mystics (mashaikh) are always firmly resistant to interacting with rulers. The stereotypical image of mystics is one where they are far too caught up in contemplation of the divine to have anything to do with the mundane political affairs of the world. Yet in sharp contrast to this image, many prominent mystics in Islamic history have played eminent roles in society and politics.

This holds true not only for the descendants of prominent mystics who continue to wield considerable sociopolitical influence in Muslim countries such as today’s Egypt and Pakistan but also for the mashaikh in whose names various mystical orders were originally founded. These mashaikh evidently lived very much in the world, not unlike nobles and kings and many classes of the ulema. 

Rumi’s life also offers evidence that the two worlds of khanqah and madrasa, often considered vastly different from each other, all too often overlap in terms of their functions. Regardless of the impressions created by mystic poetry’s derogatory allusions to the zahid (zealous ascetic), wa‘iz (preacher) or shaikh (learned religious scholar), there is little practical reason to see mystics on the whole as being fundamentally opposed to other leaders and representatives of religion. In fact, right through until modern times, we have seen ulema and mashaikh work in tandem with each other in the pursuit of shared religio-political objectives, the Khilafat movement in British India being just one such example among many of their collaborations.

Rumi’s activities are indicative of a nearly ubiquitous pattern of political involvement by prominent mystics in various Islamic societies. In Central Asia, support from the mashaikh of the Naqshbandi mystical order (tariqa) seems to have become almost indispensable by the end of the 15th century for anyone aspiring to rule, since the order had acquired deep roots within the population at large. The attachment of Timurid and Mughal rulers to the Naqshbandi order is well known. The Shaybanid rulers of Uzbek origin also had deep ties with the order, and Naqshbandi mashaikh tended to play a prominent role in mediating between Mughal and Uzbek rulers.

Naqshbandis are somewhat unusual among Sufi orders in their historical inclination towards involving themselves in political affairs, and for favouring fellowship (suhbat) over seclusion (khalwat), yet political interventions are not rare even among other orders.

Shaikh Moeenuddin Chishti Ajmeri | Courtesy trustees of the Chester Beatty library, Dublin



Closer to home, Shaikh Bahauddin Zakariya (d. 1262), a Suhrawardi mystic, is reported to have negotiated the peaceful surrender of Multan to the Mongols, giving 10,000 dinars in cash to the invading army’s commander in return for securing the lives and properties of the citizens. Suhrawardis, indeed, have long believed in making attempts to influence rulers to take religiously correct decisions. Bahauddin Zakariya was very close to Sultan Iltutmish of the Slave Dynasty of Delhi and was given the official post of Shaikhul Islam. He openly sided with the sultan when Nasiruddin Qabacha, the governor of Multan, conspired to overthrow him.

It is widely known that the Mughal king Jahangir was named after Shaikh Salim Chishti (d. 1572) but what is less well known is that his great-grandfather Babar’s name ‘Zahiruddin Muhammad’ was chosen by Naqshbandi shaikh Khwaja Ubaidullah Ahrar (d. 1490), who wielded tremendous political power in Central Asia. The shaikh’s son later asked Babar to defend Samarkand against the Uzbeks. When Babar fell ill in India many years later, he versified one of Khwaja Ahrar’s works in order to earn the shaikh’s blessings for his recovery.

Even after Babar lost control of his Central Asian homeland and India became his new dominion, he and his descendants maintained strong ties with Central Asian Naqshbandi orders such as Ahrars, Juybaris and Dahbidis. This affiliation was not limited to the spiritual level. It also translated into important military and administrative posts at the Mughal court being awarded to generations of descendants of Naqshbandi shaikhs.

The offspring of these shaikhs also often became favoured marriage partners for royal princesses, thus becoming merged with the nobility itself. One of Babar’s daughters as well as one of Humayun’s was given in marriage to the descendants of Naqshbandi shaikhs. The two emperors also married into the family of the shaikhs of Jam in Khurasan. Akbar’s mother, Hamida Banu (Maryam Makani), was descended from the renowned shaikh Ahmad-e-Jam (d. 1141).

In India, Mughal princes and kings also established important relationships with several other mystical orders such as the Chishtis and Qadris. In particular, the Shattari order (that originated in Persia) grew to have significant influence over certain Mughal kings. It seems to have been a common tendency among members of the Mughal household to pen hagiographical tributes to their spiritual guides. Dara Shikoh, for example, wrote tazkirahs (biographies) of his spiritual guide Mian Mir (d. 1635) and other Qadri shaikhs. His sister Jahanara wrote about the Chishti shaikhs of Delhi.

So great was the royal reverence for mystics that several Mughal emperors, like their counterparts outside India, wanted to be buried beside the graves of prominent shaikhs. Aurangzeb, for example, was buried beside a Chishti shaikh, Zainuddin Shirazi (d. 1369). Muhammad Shah’s grave in Delhi is near that of another Chishti shaikh, Nizamuddin Auliya (d. 1325).

Like several other Mughal and Islamic rulers, Aurangzeb showed devotion to a number of different mystical orders (Chishtis, Shattaris and Naqshbandis) at various points in his life. The emperor is reported to have sought the blessings of Naqshbandis during his war of succession with his brother Dara Shikoh. Naqshbandi representatives not only committed themselves to stay by his side in the battle but also vowed to visit Baghdad to pray at the tomb of Ghaus-e-Azam Abdul Qadir Jilani (d. 1166) for his victory. They similarly promised to mobilise the blessings of the ulema and mashaikh living in the holy city of Makkah in his favour.

Mughal prince Parvez talking to a holy man | Courtesy purchase — Charles Lang Freer Endowment



The combined spiritual and temporal power of influential mashaikh across various Islamic societies meant that rulers were eager to seek their political support and spiritual blessings for the stability and longevity of their rule. Benefits accrued to both sides. The mashaikh’s approval and support bolstered the rulers’ political position, and financial patronage by rulers and wealthy nobles, in turn, served to strengthen the social and economic position of mashaikh who often grew to be powerful landowners. The estates and dynasties left behind by these shaikhs frequently outlasted those of their royal patrons.

This is not to say that every prominent mystic had equally intimate ties with rulers. Some mashaikh (particularly among Chishtis) are famous for refusing to meet kings and insisting on remaining aloof from the temptations of worldly power. Shaikh Nizamuddin Auliya’s response to Alauddin Khilji’s repeated requests for an audience is well known: “My house has two doors. If the Sultan enters by one, I will make my exit by the other.” In effect, however, even these avowedly aloof mashaikh often benefited from access to the corridors of royal power via their disciples among the royal household and high state officials.

The relationship between sultans and mashaikh was also by no means always smooth. From time to time, there was a real breakdown in their ties. Shaikhs faced the prospect of being exiled, imprisoned or even executed if their words or actions threatened public order or if they appeared to be in a position to take over the throne. The example of Shaikh Ahmad Sirhindi (d. 1624) is famous. He was imprisoned by Jahangir for a brief period reportedly because his disquietingly elevated claims about his own spiritual rank threatened to disrupt public order. Several centuries earlier, Sidi Maula was executed by Jalaluddin Khilji, who suspected the shaikh of conspiring to seize his throne.

It is not only through influence over kings and statesmen that Islamic mystical orders have historically played a political role. Some of them are known to have launched direct military campaigns. Contrary to a general notion in contemporary popular discourse that ‘Sufism’ somehow automatically means ‘peace’, some Islamic mystical orders have had considerable military recruiting potential.

The Safaviyya mystical order of Ardabil, in modern-day Iranian Azerbaijan, offers a prominent example of this. Over the space of almost two centuries, this originally Sunni mystical order transformed itself into a fighting force. With the help of his army of Qizilbash disciples, the first Safavid ruler, Shah Ismail I, established an enduring Shia empire in 16th-century Iran.

In modern times, Pir Pagara’s Hurs in Sindh during the British period offer another example of a pir’s devotees becoming a trained fighting force. It is not difficult to find other examples in Islamic history of mashaikh who urged sultans to wage wars, accompanied sultans on military expeditions and inspired their disciples to fight in the armies of favoured rulers. Some are believed to have personally participated in armed warfare.

Maulana Jalaluddin Rumi distributing sweetmeats to disciples | Courtesy Museum of Fine Arts, Boston



To speak of a persistent difference between the positions of ulema and mystics on the issue of war or jihad would be, thus, a clear mistake. ‘Sufism’ on the whole is hardly outside the mainstream of normative Islam on this issue, as on others.

Another popular misconception is to speak of ‘Sufism’ as something peculiar to the South Asian experience of Islam or deem it to be some indigenously developed, soft ‘variant’ of Islam that is different from the ‘harder’ forms of the religion prevalent elsewhere. Rituals associated with piri-muridi (master-disciple) relationships and visits to dargahs can, indeed, display the influence of local culture and differ significantly from mystical rituals in other countries and regions.

However, the main trends and features defining Islamic mysticism in South Asia remain pointedly similar to those characterising Islamic mysticism in the Middle East and Central Asia. As British scholar Nile Green points out, “What is often seen as being in some way a typically South Asian characteristic of Islam – the emphasis on a cult of Sufi shrines – was in fact one of the key practices and institutions of a wider Islamic cultural system to be introduced to South Asia at an early period ... It is difficult to understand the history of Sufism in South Asia without reference to the several lengthy and distinct patterns of immigration into South Asia of holy men from different regions of the wider Muslim world, chiefly from Arabia, the fertile crescent, Iran and Central Asia.”

It is a fact that all the major mystical orders in South Asia have their origins outside this region. Even the Chishti order, which has come to be associated more closely with South Asia than with any other region, originated in Chisht near Herat in modern-day Afghanistan. These interregional connections have consistently been noted and celebrated by masters and disciples connected with mystic orders over time. Shaikh Ali al-Hujweri (d. circa 1072-77), who migrated from Ghazna in Afghanistan to settle in Lahore, is known and revered as Data Ganj Bakhsh. Yet this does not mean that the status of high ranking shaikhs who lived far away from the Subcontinent is lower than his in any way. Even today, the cult of Ghaus-e-Azam of Baghdad continues to be popular in South Asia.

For anyone who has the slightest acquaintance with Muslim history outside the Subcontinent, it would be difficult to defend the assertion – one that we hear astoundingly often in both lay and academic settings in South Asia – that ‘Sufi Islam’ is somehow particular to Sindh or Punjab specifically, or to the Indian subcontinent more broadly. It is simply not possible to understand the various strands of Islamic mysticism in our region without reference to their continual interactions with the broader Islamic world.

What is mystical experience, after all? The renowned Iranian scholar Abdolhossein Zarrinkoub defines it as an “attempt to attain direct and personal communication with the godhead” and argues that mysticism is as old as humanity itself and cannot be confined to any race or religion.

It would, therefore, be quite puzzling if Islamic mysticism had flowered only in the Indian subcontinent and in no other Muslim region, as some of our intellectuals seem to assert. Islamic mysticism in South Asia owes as much to influences from Persia, Central Asia and the Arab lands as do most other aspects of Islam in our region. These influences are impossible to ignore when we study the lives and works of the mystics themselves.

As Shaikh Ahmad Sirhindi (Mujaddid-e-Alf-e-Sani) wrote in the 16th-17th century: “We ... Muslims of India ... are so much indebted to the ulema and Sufis (mashaikh) of Transoxiana (Mawara un-Nahr) that it cannot be conveyed in words. It was the ulema of the region who strove to correct the beliefs [of Muslims] to make them consistent with the sound beliefs and opinions of the followers of the Prophet’s tradition and the community (Ahl-e-Sunna wa’l-Jama’a). It was they who reformed the religious practices [of the Muslims] according to Hanafi law. The travels of the great Sufis (may their graves be hallowed) on the path of this sublime Sufi order have been introduced to India by this blessed region.” *

These influences were not entirely one-way. We see that the Mujaddidi order (developed in India by Shaikh Ahmad Sirhindi as an offshoot of the Naqshbandi order) went on to exert a considerable influence in Central Asia and Anatolia. This demonstrates once again how interconnected these regions had been at the intellectual, literary and commercial levels before the advent of colonialism.

Dancing dervishes | Courtesy purchase, Rogers Fund and the Kevorkian Foundation Gift, 1955



This essay has been an attempt to dispel four myths about Islamic mysticism. The first myth is that there is a wide gap between the activities of the mystic khanqah and those of the scholarly madrasa (and that there is, thus, a vast difference between ‘Sufi’ Islam and normative/mainstream Sunni Islam). The second myth is that mystics are ‘passive’, apolitical and withdrawn from the political affairs of their time. The third myth is that mystics across the board are intrinsically ‘peaceful’ and opposed to armed jihad or warfare. The last myth is that Islamic mysticism is a phenomenon particular to, or intrinsically more suited to, the South Asian environment as compared to other Islamic lands.

All four of these points are worth taking into consideration in any meaningful policy discussion of the limits and possibilities of harnessing Islamic mysticism for political interventions in Muslim societies such as today’s Pakistan. It is important to be conscious that when we make an argument for promoting mystical Islam in this region, we are, in effect, making an argument for promoting mainstream Sunni (mostly Hanafi) Islam in its historically normative form.