
Thursday 4 October 2018

Finance, the media and a catastrophic breakdown in trust

John Authers in The Financial Times


Finance is all about trust. JP Morgan, patriarch of the banking dynasty, told Congress in the 1912 hearings that led to the foundation of the US Federal Reserve that the first thing in credit was “character, before money or anything else. Money cannot buy it.

“A man I do not trust could not get money from me on all the bonds in Christendom,” he added. “I think that is the fundamental basis of business.” He was right. More than a century later, it is ever clearer that, without trust, finance collapses. That is no less true now, when quadrillions change hands in electronic transactions across the globe, than it was when men such as Morgan dominated markets trading face to face. 

And that is a problem. Trust has broken down throughout society. From angry lynch mobs on social media to the fracturing of the western world’s political establishment, this is an accepted fact of life, and it is not merely true of politics. Over the past three decades, trust in markets has evaporated. 

In 1990, when I started at the Financial Times, trust in financiers and the media who covered them was, if anything, excessive. Readers were deferential towards the FT and, particularly, the stone-tablet certainties of the Lex column, which since the 1930s has dispensed magisterial and anonymous investment advice in finely chiselled 300-word notes. 

Trainee bankers in the City of London were required to read Lex before arriving at the office. If we said it, it must be true. Audience engagement came in handwritten letters, often in green ink. Once, a reader pointed out a minor error and ended: “The FT cannot be wrong, can it?” I phoned him and discovered this was not sarcasm. The FT was and is exclusively produced by human beings, but it had not occurred to him that we were capable of making a mistake. 

Back then, we made easy profits publishing page after page of almost illegible share price tables. One colleague had started in the 1960s as “Our Actuary” — his job was to calculate, using a slide rule, the value of the FTSE index after the market closed. 

Then came democratisation. As the 1990s progressed, the internet gave data away for free. Anyone with money could participate in the financial world without relying on the old intermediaries. If Americans wanted to shift between funds or countries, new online “fund supermarkets” sprang up to let them move their pension fund money as much as they liked.

Technology also broke the hold of bankers over finance, replacing it with the invisible hand of capital markets. No longer did banks’ lending officers decide on loans for businesses or mortgages; those decisions instead rested in the markets for mortgage-backed securities, corporate paper and junk bonds. Meanwhile, banks were merged, deregulated and freed to re-form themselves. 

But the sense of democratisation did not last. The crises that rent the financial world in twain, from the dotcom bubble in 2000 through to the 2008 Lehman debacle and this decade’s eurozone sovereign debt crisis, ensured instead that trust broke down. That collapse appears to me to be total: in financial institutions, in the markets and, most painfully for me, in the financial media. Once our word was accepted unquestioningly (which was unhealthy); now, information is suspect just because it comes from us, which is possibly even more unhealthy. 

To explain this, let me tell the story of the most contentious trip to the bank I have ever made. 

Two days after Lehman Brothers declared bankruptcy, in September 2008, I went on an anxious walk to my local bank branch. Working in New York, I had recently sold my flat in London and a large sum had just landed in my account at Citibank — far more than the insured limit, which at that point was $100,000. 

It did not seem very safe there. Overnight, the Federal Reserve had spent $85bn to bail out the huge insurance company AIG, which had unwisely guaranteed much credit now sitting on banks’ books. Were AIG to go south, taking its guarantees with it, many banks would suddenly find themselves with worthless assets and become insolvent. 

Meanwhile, a money market fund had “broken the buck”. Money market funds were treated like bank accounts by their clients. They switch money between very safe short-term bonds, trying to find higher rates than a deposit account can offer. Each share in the fund is worth $1, interest is distributed and the price cannot dip below $1. As the funds did not pay premiums for deposit insurance, they could pay higher interest rates for no perceived extra risk. 

Thus there was outright panic when a large money market fund admitted that it held Lehman Brothers bonds, that its price must drop to 97 cents and that it was freezing access to the fund. Across the US, investors rushed to pull their money out of almost anything with any risk attached to it, and poured it into the safest investments they could find — gold and very short-term US government debt (Treasury bills, or T-bills). This was an old-fashioned bank run, but happening where the general public could not see it. Panic was only visible to those who understood the arcana of T-bill yields.
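For readers unfamiliar with the mechanics, the arithmetic behind “breaking the buck” is simple: the fund promises a fixed $1 per share, so any write-down on its holdings shows up directly in the per-share value. The Python sketch below illustrates the calculation with deliberately round, hypothetical figures rather than the actual fund's balance sheet.

# A minimal sketch of the arithmetic behind "breaking the buck".
# All figures are hypothetical round numbers, not the actual fund's holdings.

shares_outstanding = 10_000_000_000   # shares issued at a target value of $1.00 each
portfolio_value = 10_000_000_000.0    # dollars of short-term paper backing those shares
written_off = 300_000_000.0           # hypothetical loss on defaulted bonds

nav_before = portfolio_value / shares_outstanding
nav_after = (portfolio_value - written_off) / shares_outstanding

print(f"Value per share before the loss: ${nav_before:.2f}")   # 1.00
print(f"Value per share after the loss:  ${nav_after:.2f}")    # 0.97, the fund has "broken the buck"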

Our headline that day read “Panic grips credit markets” under a banner about the “banking crisis” in red letters. 

There was no time to do anything complicated with my own money. Once I reached my lunch hour, I went to our local Citi branch, with a plan to take out half my money and put it into rival bank Chase, whose branch was next door. This would double the amount of money that was insured. 

This is how I recounted what happened next, in a column for the FT last month: 

“We were in Midtown Manhattan, surrounded by investment banking offices. At Citi, I found a long queue, all well-dressed Wall Streeters. They were doing the same as me. Next door, Chase was also full of anxious-looking bankers. Once I reached the relationship officer, who was great, she told me that she and her opposite number at Chase had agreed a plan of action. I need not open an account at another bank. Using bullet points, she asked if I was married and had children. Then she opened accounts for each of my children in trust and a joint account with my wife. In just a few minutes I had quadrupled my deposit insurance coverage. I was now exposed to Uncle Sam, not Citi. With a smile, she told me she had been doing this all morning. Neither she nor her friend at Chase had ever had requests to do this until that week.” 

Ten years on, this is my most vivid memory of the crisis. The implications were clear: Wall Streeters, who understood what was going on, felt they had to shore up their money in insured deposits. The bank run in the trading rooms was becoming visible in the bank branches down below. 
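The deposit insurance arithmetic the relationship officer was exploiting is worth spelling out. The then $100,000 limit applied per depositor, per bank, per account ownership category, so spreading one balance across several categories multiplies the protected amount. The sketch below is a simplified illustration under that assumption; the account categories and balances are hypothetical, and the FDIC's actual rules are more detailed.

# Simplified illustration of per-ownership-category deposit insurance at the 2008 limit.
# The categories and balances below are hypothetical; the FDIC's real rules are more detailed.

INSURED_LIMIT = 100_000  # per depositor, per bank, per ownership category (2008 level)

accounts = {
    "individual account": 100_000,
    "joint account with spouse": 100_000,
    "in-trust account for child 1": 100_000,
    "in-trust account for child 2": 100_000,
}

insured = sum(min(balance, INSURED_LIMIT) for balance in accounts.values())
print(f"Insured coverage at one bank: ${insured:,}")  # $400,000, four times the single-account limit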

In normal circumstances, the tale of the bank branch would have made an ideal anecdote with which to lead our coverage, perhaps with a photo of the queue of anxious bankers. Low T-bill yields sound dry and lack visual appeal; what I had just seen looked like a bank run. (Although technically it was not — nobody I saw was taking out money.) 

But these were not normal circumstances, and I never seriously considered writing about it. Banks are fragile constructs. By design, they have more money lent out than they keep to cover deposits. A self-fulfilling loss of confidence can force a bank out of business, even if it is perfectly well run. In a febrile environment, I thought an image of a Manhattan bank run would be alarmist. I wrote a piece invoking a breakdown in trust between banks and described the atmosphere as “panic”, but did not mention the bank branch. Ten years later, with the anniversary upon us, I thought it would be an interesting anecdote to dramatise the crisis. 

In the distrustful and embittered world of 2018, the column about what I saw and why I chose not to write about it provoked a backlash that amazed me. Hundreds of responses poured in. Opinion was overwhelmingly against me. 

One email told me: “Your decision to save yourself while neglecting your readership is unforgivable and in the very nature of the elitist Cal Hockley of the Titanic scrambling for a lifeboat at the expense of others in need.” One commenter on FT.com wrote: “This reads like Ford trying to explain why pardoning Nixon was the right thing to do.” 

“I have re-read the article, and the comments, a couple of times,” wrote another. “And I realised that it actually makes me want to vomit, as I realise what a divide there is between you and I, between the people of the establishment like yourself, and the ordinary schmucks like myself. The current system is literally sickening and was saved for those who have something to protect, at the expense of those who they are exploiting.” 

Feedback carried on and on in this vein. How could we in the media ever be trusted if we did not tell the whole truth? Who were we to edit the facts and the truth that were presented? Why were we covering up for our friends in the banks? Newspaper columns attacking me for my hypocrisy popped up across the world, from France to Singapore. 

I found the feedback astonishing and wrong-headed. But I am now beginning to grasp the threads of the problem. Most important is the death of belief in the media as an institution that edits and clarifies or chooses priorities. Newspapers had to do this. There was only so much space in the paper each day. Editing was their greatest service to society. 

Much the same was true of nightly half-hour news broadcasts in pre-cable television. But now, the notion of self-censorship is alien and suspect. People expect “the whole truth”. The idea of news organisations with long-standing cultures and staffed by trained professionals deciding what is best to publish appears bankrupt. We are not trusted to do this, and not just because of politicians crying “fake news”. 

Rather, the rise of social media has redefined all other media. If the incident in the Citi branch were to happen today, someone would put a photo of it on Facebook and Twitter. It might or might not go viral. But it would be out there, without context or explanation. The journalistic duty I felt to be responsible and not foment panic is now at an end. This is dangerous. 

Another issue is distrust of bankers. Nobody ever much liked “fat cats”, but this pickled into hatred as bankers avoided personal criminal punishment for their roles in the crisis. Bank bailouts were, I still think, necessary to protect depositors. But they are now largely perceived merely as protecting bankers. My self-censorship seemed to be an effort to help my friends the bankers, not to shield depositors from a panic. 

Then there is inequality. In my column, I said that I “happened to have a lot of money in my account” but made no mention of selling my London flat. People assumed that if I had several hundred thousand dollars sitting in a bank account, I must be very rich. That, in many eyes, made my actions immoral. Once I entered the FT website comments thread to explain where the money had come from, some thought this changed everything. It was “important information”. “In the article where moral questions [were] raised, the nature of the capital should have been explained better,” one commenter said. 

The hidden premise was that if I were rich, I would not have been morally entitled to protect my money ahead of others lacking the information I was privy to. Bear in mind that to read this piece, it was necessary to subscribe to the FT. 

Put these factors together, and you have a catastrophic breakdown in trust. How did we get here? 

The democratisation of finance in the 1990s was healthy. Transparency revealed excessive fees that slowly began to fall. For us at the FT, in many ways an entrenched monopoly, this meant lost advertising and new competition from cable TV, data providers and an array of online services. 

But that democratisation was tragically mishandled and regulators let go of the reins far too easily. In 1999, as the Nasdaq index shot to the sky, the share prices of new online financial media groups such as thestreet.com shot up with them. On US television, ads for online brokers showed fictional truck drivers apparently buying their own island with the proceeds of their earnings from trading on the internet. By 2000, when I spent time at business school, MBA students day-traded on their laptops in class, oblivious to what their professors were saying. 

Once that bubble burst, the pitfalls of rushed democratisation were painfully revealed. Small savers had been sucked into the bubble at the top, and sustained bad losses. 

Trust then died with the credit crisis of 2008 and its aftermath. The sheer injustice of the ensuing government cuts and mass layoffs, which deepened inequality and left many behind while leaving perpetrators unpunished, ensured this. 

The public also lost their trust in journalists as their guides in dealing with this. We were held to have failed to warn the public of the impending crisis in 2008. I think this is unfair; the FT and many other outlets were loudly sceptical and had been giving the problems of US subprime lenders blanket coverage for two years before Lehman Brothers went down. In the earlier dotcom bubble, however, I think the media has more of a case to answer — that boom was lucrative for us and many were too credulous, helping the bubble to inflate. 

Further, new media robbed journalists of our mystique. In 1990, readers had no idea what we looked like. Much of the FT, including all its stock market coverage, was written anonymously. The only venue for our work was on paper and the only way to respond (apart from the very motivated, who used the telephone) was also on paper. The rule of thumb was that for every letter we received, at least another hundred readers felt the same way. 

Now, almost everything in the paper that expresses an opinion carries a photo. Once my photo appeared above my name on the old Short View column, my feedback multiplied maybe fivefold. The buzzword was to be “multimodal”, regaling readers with the same ideas in multiple formats. In 2007 we started producing video. 

My readers became my viewers, watching me speak to them on screen every day, and my feedback jumped again. Answering emails from readers took over my mornings. Often these would start “Dear John”, or even just “John”, as though from people who knew me. So much for our old mystique. 

By 2010, social media was a fact of life. Writing on Twitter, journalists’ social network of choice, became part of the job. People expected us to interact with them. This sounds good. We were transparent and interactive in a way we had not been before. But it became part of my job to get into arguments with strangers, who stayed anonymous, in a 140-character medium that made the expression of any nuance impossible. 

Meanwhile, the FT hosted social media of its own. Audience engagement became a buzzword. If readers commented, we talked back. Starting in 2012, I started debating with readers and I learnt a lot. FT readers are often specialists, and they helped me understand some arcane subject matter. Once, an intense discussion with well over a hundred entries on the subject of cyclically adjusted price/earnings multiples (don’t ask) yielded all the research I needed to write a long feature. 

Now, following Twitter, comments below the line are degenerating into a cesspit of anger and disinformation. Where once I debated with specialists, now I referee nasty political arguments or take the abuse myself. The status of the FT and its competitors in the financial media as institutions entrusted with the task of giving people a sound version of the truth now appears, to many, to be totally out of date. 

Even more dangerously for the future, the markets and their implicit judgments have been brought into the realm of politics (and not just by President Trump). This was not true even 20 years ago; when Al Gore faced off against George W Bush in 2000, only months after the dotcom bubble burst, neither candidate made much of an issue of it. 

But now, following Lehman, people understand that decisions made in capital markets matter. That makes markets part of the political battlefield; not just how to interpret them, but even the actual market numbers are now open to question. 

Brexit rammed this home to me. During the 2016 referendum campaign, Remainers argued that voting to leave would mean a disastrous hit for sterling. This was not exactly Project Fear; whether or not you thought Brexit was a good idea, it was obvious that it would initially weaken the pound. A weaker currency can be good news — the pound’s humiliating exit from the EU’s exchange rate mechanism in 1992, for example, set the scene for an economic boom throughout the late 1990s. 

But reporting on the pound on the night of the referendum was a new and different experience. Sitting in New York as the results came in through the British night, I had to write comments and make videos, while trying to master my emotions about the huge decision that my home country had just taken. Sterling fell more than 10 per cent against the dollar in a matter of minutes — more than double its previous greatest fall in the many decades that it had been allowed to float, bringing it to its lowest level in more than three decades. Remarkably, that reaction by foreign exchange traders has stood up; after two more years of political drama, the pound has wavered but remains slightly below the level at which it settled on referendum night.
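That fall is nothing more exotic than a day-on-day percentage change in the exchange rate. A minimal sketch of the calculation follows; it assumes pandas, and the closing prices are illustrative stand-ins rather than the actual GBP/USD series.

# Illustrative calculation of daily moves in sterling around the referendum.
# The closing prices are stand-in values, not the actual GBP/USD series.
import pandas as pd

prices = pd.Series(
    [1.4877, 1.4800, 1.4650, 1.5018, 1.3229],
    index=pd.to_datetime(["2016-06-20", "2016-06-21", "2016-06-22",
                          "2016-06-23", "2016-06-24"]),
)

daily_moves = prices.pct_change() * 100          # day-on-day percentage change
worst_day = daily_moves.idxmin()
print(daily_moves.round(2))
print(f"Largest one-day fall: {daily_moves.min():.1f}% on {worst_day.date()}")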

As I left, at 1am in New York, with London waking up for the new day, I tweeted a chart of daily moves in sterling since 1970, showing that the night’s fall dwarfed anything previously seen. It went viral, which was not surprising. But the nature of the response was amazing. It was a factual chart with a neutral accompanying message. It was treated as a dubious claim. 

“LOL got that wrong didn’t you . . . oops!” (There was nothing wrong with it.) “Pretty sure it was like that last month. Scaremongering again.” (No, it was a statement of fact and nothing like this had happened ever, let alone the previous month.) 

“Scaremongering. Project Fear talking us down. This is nothing to do with Brexit, it’s to do with the PM cowardice resignation.” (I had made the tweet a matter of hours before David Cameron resigned.) 

The reaction showed a willingness to doubt empirical facts. Many also felt that the markets themselves were being political and not just trying to put money where it would make the greatest return. “Bankers punish Britons for their audacity in believing they should have political control of their own country.” (Forex traders in the US and Asia were probably not thinking about this.) 

“It will recover, this is what uncertainty does. Also the rich bitter people upset about Brexit.” (Rich and bitter people were unlikely to make trades that they thought would make them poorer, and most of that night’s trading was by foreigners more dispassionate than Britons could be at that point.) 

So it continued for days. Thanks to the sell-off in sterling, the UK stock market did not perform that badly (unless you compared it with others, which showed that its performance was lousy). Whether the market really disliked the Brexit vote became a topic of hot debate, which it has remained — even as the market verdict, that Brexit is very bad news if not a disaster, becomes ever clearer. 

After Brexit, of course, came Trump. The US president takes the stock market as a gauge of his performance, and any upward move as a political endorsement — while his followers treat any fall, or any prediction of a fall by pundits such as me, as a political attack. The decade in which central banks have bought assets in an open attempt to move markets plays into the narrative that markets are political creations. 

This is the toxic loss of trust that now vitiates finance. Once lost, trust is very hard to retrieve, which is alarming. It is also not clear what the financial media can do about it, beyond redoubling our efforts to do a good job. 

All the most obvious policy responses come with dangers. Regulating social media from its current sick and ugly state would have advantages but would also be the thin end of a very long wedge. Greater transparency and political oversight for central banks might rebuild confidence but at the risk of politicising institutions we desperately need to maintain independence from politicians. And an overhaul of the prosecutorial system for white-collar crime, to avert the scandalous way so many miscreants escaped a reckoning a decade ago, might work wonders for bolstering public trust — but not if it led to scapegoating or show trials. 

On one thing, I remain gloomily clear. Without trust in financial institutions themselves, or those who work in them, or the media who cover them, the next crisis could be far more deadly than the last. Just ask JP Morgan.

Do not blame accounting rules for the financial crisis

Hans Hoogervorst in The Financial Times

Ten years after the outbreak of the financial crisis, there are still persistent arguments about the role that accounting standards may have played in its genesis.

Some critics of International Financial Reporting Standards argue that they gave an overly rosy picture of banks’ balance sheets before the crisis and are still not prudent enough despite improvements since then. These same critics also argue that excessive reliance on fair value accounting, which reflects an asset’s current market value, has encouraged untimely recognition of unrealised profits.

They want to require banks to make upfront provisions for all expected lifetime losses on loans and, presumably, a return to good old historical cost accounting, which values assets at the price they were initially purchased.

Though superficially appealing, these changes would weaken prudent accounting, rather than strengthen it.

The British bank HBOS, which collapsed and was taken over by Lloyds Banking Group during the crisis, has been presented as an example of failing pre-crisis accounting standards. The truth is that HBOS met bank regulators’ capital requirements, and its financial statements clearly showed that its balance sheet was supported by no more than 3.3 per cent of equity. For investors who cared to look, the IFRS standards did a quite decent job of making crystal clear that many banks had wafer-thin capital levels and were accidents waiting to happen.
However, the crisis did reveal that the existing standards gave banks too much leeway to delay recognition of inevitable loan losses. In response, the International Accounting Standards Board developed an “expected loss model” that significantly lowered the thresholds for recognising loan losses. The new standard, IFRS 9, requires banks to initially set aside a moderate provision for loan losses on all loans. This prevents them from recognising too much profit up front. Then if a loan experiences a significant increase in credit risk, all the losses that can be expected over the lifetime of the loan must be recognised immediately. Normally, that will happen long before actual default.
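To make the staging concrete, the sketch below shows the textbook expected-credit-loss arithmetic (probability of default times loss given default times exposure) for a performing loan and for one whose credit risk has significantly increased. The probabilities and loss rates are invented for illustration; real IFRS 9 models use term structures of default risk, discounting and forward-looking scenarios.

# Illustrative expected-credit-loss (ECL) arithmetic in the spirit of IFRS 9 staging.
# The probabilities and loss rates below are invented for illustration only.

def expected_loss(pd: float, lgd: float, exposure: float) -> float:
    """ECL = probability of default x loss given default x exposure at default."""
    return pd * lgd * exposure

exposure = 1_000_000.0   # outstanding loan balance

# Stage 1 (performing loan): a moderate provision based on defaults expected
# over the next 12 months, which stops too much profit being booked up front.
stage1 = expected_loss(pd=0.01, lgd=0.40, exposure=exposure)

# Stage 2 (significant increase in credit risk): provision for losses expected
# over the remaining lifetime of the loan, normally long before actual default.
stage2 = expected_loss(pd=0.15, lgd=0.40, exposure=exposure)

print(f"Stage 1 provision: ${stage1:,.0f}")   # $4,000
print(f"Stage 2 provision: ${stage2:,.0f}")   # $60,000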

In developing this standard, the IASB did consider whether to require banks to recognise full lifetime losses from day one. We rejected this approach for several reasons.

First, accounting standards are designed to reflect economic reality as closely as possible. Banks do not suffer losses on the very first day a loan has been made, so recording a full lifetime loss immediately is counter-intuitive. Moreover, in bad economic times, when earnings are already depressed, banks would have an incentive to cut back on new lending in order to avoid having to recognise large day one losses. Just when you need it most, the economy would probably be starved of credit.

Second, future losses are notoriously difficult to predict, so any model based on expected losses many years later would be subjective. Before the crisis, Spanish regulators required their banks to provision for bad times on the basis of lifetime expected losses. But their lenders underestimated and were still overwhelmed by the tide of bad loans. This kind of accounting also tempts banks to overstate losses in good times, creating reserves that could be released in bad times. That may seem prudent at first but could mask deteriorating performance in a later period, when investors are most in need of reliable information.

Critics also allege that IFRS has been too enamoured of fair value accounting. In fact, banks value almost all of their loan portfolios at cost, so the historical cost method remains much more pervasive.

Fears that fair value accounting leads to improper early profit recognition are also overblown. IFRS 9 prohibits companies from doing that when quoted prices in active markets are not available and the quality of earnings is highly uncertain. Moreover, fair value accounting is often quicker at identifying losses than cost accounting. That is why banks lobbied so actively against it during the crisis.

This does not mean that the accounting standards are infallible. Accounting is highly dependent on the exercise of judgement and is therefore more an art than a science. Good standards limit the room for mistakes or abuse, but can never entirely eliminate them. The capital markets are full of risks that accounting cannot possibly predict. This is certainly the case now, with markets swimming in debt and overpriced assets. For accounting standards to do their job properly, we need management to own up to the facts — and auditors, regulators and investors to be vigilant.

Wednesday 3 October 2018

Our cult of personality is leaving real life in the shade

George Monbiot in The Guardian

By reducing politics to a celebrity obsession – from Johnson to Trump to Corbyn – the media misdirects and confuses us 

Illustration: Ben Jennings


What kind of people would you expect the newspapers to interview most? Those with the most to say, perhaps, or maybe those with the richest and weirdest experiences. Might it be philosophers, or detectives, or doctors working in war zones, refugees, polar scientists, street children, firefighters, base jumpers, activists, writers or free divers? No. It’s actors. I haven’t conducted an empirical study, but I would guess that between a third and a half of the major interviews in the newspapers feature people who make their living by adopting someone else’s persona and speaking someone else’s words.

This is such a bizarre phenomenon that, if it hadn’t crept up on us slowly, we would surely find it astounding. But it seems to me symbolic of the way the media works. Its problem runs deeper than fake news. What it offers is news about a fake world.

I am not proposing that the papers should never interview actors, or that they have no wisdom of their own to impart. But the remarkable obsession with this trade blots out other voices. One result is that an issue is not an issue until it has been voiced by an actor. Climate breakdown, refugees, human rights, sexual assault: none of these issues, it seems, can surface until they go Hollywood.

This is not to disparage the actors who have helped bring them to mainstream attention, least of all the brave and brilliant women who exposed Harvey Weinstein and popularised the #MeToo movement. But many other brave and brilliant women stood up to say the same thing – and, because they were not actors, remained unheard. The #MeToo movement is widely assumed to have begun a year ago, with Weinstein’s accusers. But it actually started in 2006, when the motto was coined by the activist Tarana Burke. She and the millions of others who tried to speak out were not, either literally or metaphorically, in the spotlight.

At least actors serve everyone. But the next most-interviewed category, according to my unscientific survey, could be filed as “those who serve the wealthy”: restaurateurs, haute couturists, interior designers and the like, lionised and thrust into our faces as if we were their prospective clients. This is a world of make-believe, in which we are induced to imagine we are participants rather than mere gawpers.

The spotlight effect is bad enough on the culture pages. It’s worse when the same framing is applied to politics. Particularly during party conference season, but at other times of the year as well, public issues are cast as private dramas. Brexit, which is likely to alter the lives of everyone in Britain, is reduced to a story about whether or not Theresa May will keep her job. Who cares? Perhaps, by now, not even Theresa May.

Neither May nor Jeremy Corbyn can carry the weight of the personality cults that the media seeks to build around them. They are diffident and awkward in public, and appear to writhe in the spotlight. Both parties grapple with massive issues, and draw on the work of hundreds in formulating policy, tactics and presentation. Yet these huge and complex matters are reduced to the drama of one person’s struggle. Everyone, in the media’s viewfinder, becomes an actor. Reality is replaced by representation.

Even when political reporting is not reduced to personality, political photography is. An article might offer depth and complexity, but is illustrated with a photo of one of the 10 politicians whose picture must be attached to every news story. Where is the public clamour to see yet another image of May – let alone Boris Johnson? The pictures, like the actors, blot out our view of other people, and induce us to forget that these articles discuss the lives of millions, not the life of one.

The media’s failure of imagination and perspective is not just tiresome: it’s dangerous. There is a particular species of politics that is built entirely around personalities. It is a politics in which substance, evidence and analysis are replaced by symbols, slogans and sensation. It is called fascism. If you construct political narratives around the psychodramas of politicians, even when they don’t invite it, you open the way for those who can play this game more effectively.

Already this reporting style has led to the rise of people who, though they are not fascists, have demagogic tendencies. Johnson, Nigel Farage and Jacob Rees-Mogg are all, like Donald Trump, reality TV stars. The reality TV on which they feature is not The Apprentice, but Question Time and other news and current affairs programmes. In the media circus, the clowns have the starring roles. And clowns in politics are dangerous.

The spotlight effect allows the favoured few to set the agenda. Almost all the most critical issues remain in the darkness beyond the circle of light. Every day, thousands of pages are published and thousands of hours broadcast by the media. But scarcely any of this space and time is made available for the matters that really count: environmental breakdown, inequality, exclusion, the subversion of democracy by money. In a world of impersonation, we obsess about trivia. A story carried by BBC News last week was headlined “Meghan closes a car door”.

The BBC has just announced that two of its programmes will start covering climate change once a week. Given the indifference and sometimes outright hostility with which it has treated people trying to raise this issue over the past 20 years, this is progress. But business news, though less important than environmental collapse, is broadcast every minute, partly because it is treated as central by the people who run the media and partly because it is of pressing interest to those within the spotlight. We see what they want us to see. The rest remains in darkness.

The task of all journalists is to turn off the spotlight, roll up the blinds and see what’s lurking at the back of the room. There are some magnificent examples of how this can be done, such as the Windrush scandal reporting, by the Guardian’s Amelia Gentleman and others. This told the story of people who live far from where the spotlight falls. The articles were accompanied by pictures of victims rather than of the politicians who had treated them so badly: their tragedies were not supplanted by someone else’s drama. Yet these stories were told with such power that they forced even those within the spotlight to respond.

The task of all citizens is to understand what we are seeing. The world as portrayed is not the world as it is. The personification of complex issues confuses and misdirects us, ensuring that we struggle to comprehend and respond to our predicaments. This, it seems, is often the point.


Thursday 27 September 2018

How did Sri Rama's idols suddenly appear in Babri Masjid on 22 December 1949?

Krishna Jha and Dhirendra K Jha in The Wire.In


The night was almost over. Ayodhya was still numb with sleep. Piercing through the quiet, a young sadhu, drenched in sweat, came scampering from Hanumangarhi, a fortress-like Hindu religious establishment housing over five hundred sadhus in Ayodhya. He had been sent to summon Satyendra Das to his guru, Abhiram Das, who seemed to be breathing his last. Those were the early hours of 3 December 1981, and a curtain was coming down over a few forgotten pages of history.

Dharam Das, the other disciple who stayed with Abhiram Das in his one-room tenement, the asan in Hanumangarhi, had asked for him so that they could be with their guru in his last moments. The news did not come as a shock. Satyendra Das had been almost awaiting the moment, since he had known for long that his guru was nearing the end of his journey. He had been at his bedside the whole day and the signs were not encouraging. Even when he had left Abhiram Das’s asan to get a breather after hours of tending to the terminally ill, he had a premonition that his guru – the man who had led a small band of Hindus to surreptitiously plant the idol of Lord Rama in Babri Masjid on yet another December night three decades ago – might not live long. After he had come away from the bedside, unwilling but tired to the bones, Satyendra Das was restless and unable to sleep. He dreaded the moment, yet knew that someone would knock on his doors with the news any time, and when it came, he responded fast, wrapped a quilt around himself and ran out along with the young sadhu who had come to fetch him.

It was very cold outside. The winter night was fading into a dense fog that smothered everything in its folds. Nothing was visible. The duo, almost running in total invisibility, knew the nooks and crannies of Ayodhya like the back of their hands. As Satyendra Das arrived at the asan, he saw Abhiram Das lying in the middle of the room on a charpoy, surrounded by a few sadhus from Hanumangarhi. No one spoke; it was very quiet. Only Dharam Das moved close to him and murmured softly that their guru had passed away minutes before he had stepped in. Slowly, as the day began to break, devotees and disciples started pouring into the room. Soon, preparations for the last rites of the deceased were begun with the help of some residents of Hanumangarhi.

The rituals for the final journey of ascetics are not the same as those for non-ascetic Hindu grihasthas, particularly in north India. Sadhus, unlike Hindu grihasthas, are rarely cremated. There are two options: either their bodies are smeared with salt and buried sitting in a meditative posture or they are dropped down a sacred river tied with a rock or sacks full of sand. The fact that sadhus who take vows of complete renunciation are not cremated symbolizes their separation from the material world. The claim goes that cremation for sadhus is superfluous since they have already burnt their attachments through ascetic initiation, opting for a life of austerities and renunciation.

In Ayodhya, the normal ascetic practice has been to immerse the body of a sadhu in the Sarayu – the name given to the river only as long as it touches the shores of the town. Before and after Ayodhya, the river is known as the Ghaghara. The reason for this nomenclatural confusion lies in a particular Hindu belief. As mythology has turned Ayodhya into the birthplace of Lord Rama, the river flowing by it has also assumed the mythical name of Sarayu – the stream that is believed to have flowed through the kingdom of Lord Rama.

Back in Hanumangarhi, by the noon of 3 December 1981, Abhiram Das’s disciples and friends had completed all preparations and were ready to initiate the final rituals for the deceased. Outside the asan, the body of Abhiram Das had been placed on a platform made of bamboo in a seated posture, his face frozen into a mask of self-control, his eyes half-closed as if he were deep in meditation. A saffron piece of cloth that had the name of Lord Rama printed all over – a particular kind of cotton or silk material called ramnami – had been carefully wrapped around his body. A similar cloth covered three sides of the arch made out of split bamboo that rested on the hard bamboo platform holding the corpse. The bamboo structure – euphemistically called viman to symbolize the mythical transporter of souls to the heavenly realm – had been kept uncovered on one side to enable people to have a last glimpse of the deceased.

Slowly, a group of sadhus lifted the viman on their shoulders and climbed up the flight of stairs leading to the temple of Lord Hanuman in the centre of Hanumangarhi. At the temple, the group swelled further and as the viman was taken out of Hanumangarhi, the motley crowd accompanying it chanted, ‘Ramajanmabhoomi Uddharak amar rahen (Long live the saviour of the birth place of Rama).’

Three decades back, on the morning of 23 December 1949, the First Information Report (FIR) registered by Ayodhya Police following the planting of the idol of Lord Rama in Babri Masjid on the night before had named Abhiram Das as the prime accused. He had also been tried for the crime he and his friends had committed that night, but the case had remained inconclusive. In course of time, many Hindus in Ayodhya had started calling him Ramajanmabhoomi Uddharak.


Krishna Jha and Dhirendra K. Jha, Ayodhya – The Dark Night, HarperCollins

The slogan-shouting grew louder as the viman reached the entrance of Babri Masjid, where it was carefully laid down. The priests of Ramajanmabhoomi, the temple that operated inside Babri Masjid ever since the idol was planted in it, as well as those of nearby Hindu religious establishments already knew about the demise of the sadhu, and they came out and garlanded the corpse and paid their homage to the departed soul.

By and large, however, Ayodhya remained unaware of Abhiram Das’s death. Though some residents looked at this funeral procession with curiosity, for the majority it was the demise of yet another old sadhu. After three decades, the historical facts associated with the developments in 1949 had slipped into obscurity. The propaganda of All India Hindu Mahasabha and Rashtriya Swayamsevak Sangh (RSS) – that the idol had never been planted and Lord Rama had manifested Himself at His place of birth – had gained ground among devout Hindus by now, largely delinking Abhiram Das from what he had done in the dark hours of that fateful night. Booklets and pamphlets written by Hindu communalists during the intervening period had flooded the shops of Ayodhya and had gone a long way in reinforcing the myth of ‘divine exercise’. For legal reasons, even those who had a role in that surreptitious act found it convenient to let the myth grow and capture popular imagination. The law, after all, could catch human conspiracies, but a ‘divine exercise’ was beyond its reach. Yet, to a small group of Hindus in Ayodhya, Abhiram Das continued to remain till his death Ramajanmabhoomi Uddharak or simply Uddharak Baba.

Whatever be the case, the lack of interest among locals could not be missed by many present in the cortège as it wound down the narrow lanes of Ayodhya and moved towards the banks of the Sarayu. On the bank, where the cortège reached at around two that afternoon, those carrying the viman on their shoulders bent down to put their burden on the ground. The sadhu’s body was taken out of it, bathed in the river and, after being smeared with ghee all over, was wrapped in a fresh white cloth. Two sand-filled sacks were tied to the back of the body, one beneath the shoulder and the other under the waist, which was then gently laid out in the boat that sailed off the moment Satyendra Das, Dharam Das and three other sadhus of Hanumangarhi boarded it. Within minutes, the boat reached the centre of the river, where it was no longer shallow and which had traditionally been used for such water burials. Those present on the boat performed the final rites before lifting Abhiram Das’s body and casting it into the cool, calm waters of the Sarayu.

II

The indifferent response that Abhiram Das’s death evoked among the local populace in 1981 was at odds with the atmosphere the town had witnessed three decades ago, during the years following Independence. At that time, many in Ayodhya, as in several other parts of the country, had seen things differently. The communal frenzy which had accompanied the partition of India had intensely brutalized the atmosphere. No less important was the role played by organizations which saw the immediate aftermath of Partition as an opportunity to derail the secular project of independent India. The conspirators associated with these organizations and the conspiracies they hatched had already resulted in major national tragedies.

One such was the gruesome murder of Mahatma Gandhi on 30 January 1948. The hands that pumped bullets into the chest of the Mahatma were that of Nathuram Godse, but, as was proved later, the assassination was part of a conspiracy hatched by top Hindu Mahasabha leaders, led by V.D. Savarkar, whose prime objectives were to snatch political initiative from the Congress and destabilize all efforts to uphold secularism in India. The conspiracy to kill Gandhi could not remain hidden for long even though the trial, held immediately after the assassination, had failed to uncover its extent.

The surreptitious occupation of the Babri Masjid was an act planned by almost the same set of people about two years later – on the night of 22 December 1949. It was, in many ways, a reflection of the same brutalized atmosphere that saw Gandhi being murdered. Neither the conspirators nor their underlying objectives were different. In both instances, the conspirators belonged to the Hindu Mahasabha leadership – some of the prime movers of the planting of the idol had been the prime accused in the Gandhi murder case – and their objective this time too was to wrest the political centre stage from the Congress by provoking large-scale Hindu mobilization in the name of Lord Rama.

Yet the two incidents differed – as much in the modus operandi used by Hindu communalists as in the manner in which the government and the ruling party, the Congress, responded to them. While the Mahatma was killed in full public view in broad daylight, the Babri Masjid was converted into a temple secretly, in the dead of night. Apparently, the quick and massive government reprisal in the aftermath of Gandhi’s assassination had taught the Hindu Mahasabha leaders several lessons. One was to avoid confrontation with the government so that they could extract maximum political advantage out of their act. Another was to involve a section of the Congress that was sympathetic to their cause. So when, two years later, they set out to execute the Ayodhya project, they remained extremely careful, keeping themselves in the backstage until the mosque was actually impounded and ensuring a large-scale mobilization of Hindus in the immediate aftermath without wasting any time. Though the political objective they had planned through this act of communal aggression in Ayodhya could not be achieved in the manner they had hoped for, they greatly succeeded in keeping the story of the night and the conspiracy behind it a secret, for it never came out in its entirety.

Also, while the conspiracy to kill the Mahatma was probed thoroughly by a commission set up by the Government of India albeit two decades later, no such inquiry was conducted to unmask the plot and the plotters behind the forcible conversion of the Babri Masjid into a temple. As a result, an event that so remarkably changed the political discourse in India continues to be treated as a localized crime committed spontaneously by a handful of local people led, of course, by Abhiram Das, a local sadhu. It was, however, a well-planned conspiracy involving national-, provincial- and local-level leaders of the Hindu Mahasabha undertaken with the objective of reviving the party’s political fortunes that were lost in the aftermath of the Gandhi assassination.

Time has further pushed the secret story of the Hindu Mahasabha’s Ayodhya strategy into obscurity, leaving only what is most apparent for public debate. The unending process of litigation which it triggered completely shifted the focus away from that fateful night and has now become the basis of communal politics in the country. Incidentally, the most crucial part of the controversy – the hidden one – remains an ignored area of research. For instance, the White Paper on the Babri Masjid–Ramajanmabhoomi dispute of the Government of India dismissed the incident of 1949 – legally the root cause of the dispute – in just one paragraph. Issued in the aftermath of the demolition of the mosque on 6 December 1992, the document does not have more to say on the incident:


The controversy entered a new phase with the placing of idols in the disputed structure in December 1949. The premises were attached under Section 145 of the Code of Criminal Procedure. Civil suits were filed shortly thereafter. Interim orders in these civil suits restrained the parties from removing the idols or interfering with their worship. In effect, therefore, from December 1949 till December 6, 1992 the structure had not been used as a mosque.

It seems impertinent to say that so little is known about the night of 22–23 December 1949 since, in a sense, almost the entire dispute over the mosque emanates from the appearance of the idol of Rama inside that structure. Nevertheless, it is true that there has been little research by contemporary or later writers to fill the gap. This missing link of history remained out of focus till the issue was politically revived and strengthened by the Vishwa Hindu Parishad (VHP) in the mid-1980s. And by then the story of the night had been taken over by the politics of communalism and the debate over the proprietorship of the disputed land. 

But till Lord Rama ‘manifested’ Himself inside the Babri Masjid, all moves had sought to construct the temple at Ramachabutara, an elevated platform outside the inner courtyard of the mosque. Only after the idols were placed inside did the demand for converting the Muslim place of worship into a temple enter the legal arena. And yet the development of that night did not attract much attention in the media when it actually took place. No major newspaper or journal of the time gave it the kind of serious coverage it deserved even though the import of the development was not at all lost on Congress leaders like Jawaharlal Nehru, Sardar Vallabhbhai Patel, Govind Ballabh Pant and Akshay Brahmachary as well as Hindu Mahasabha president N.B. Khare, its vice-president V.G. Deshpande and its all India general secretary and president of the party’s UP unit Mahant Digvijai Nath.

The only journal that covered the events in detail was a local Hindi weekly in Ayodhya called Virakta. Its editor, Ramgopal Pandey ‘Sharad’, was a known Mahasabhaite. The kind of material that Virakta published had a pronounced Hindu communal bias, and it was hardly expected to carry objective reportage on the developments. If anything, this journal was the first to promote the theory of ‘divine exercise’ – though in bits and pieces – to explain the appearance of the idol of Lord Rama inside the mosque.

Later, Ramgopal Pandey ‘Sharad’ wrote a booklet in Hindi – Shree Ramjanmabhoomi Ka Rakta Ranjit Itihaas (The Blood-soaked History of the Birth Place of Lord Rama). In Ayodhya, this has remained the most popular and perhaps only available material on the subject ever since. Like Virakta, this booklet, too, explains the developments of that night in terms of divine intervention rather than as a communal tactic conceived and executed by the Mahasabha in collaboration with local communalists. This is what the booklet says:


Twenty-third December 1949 was a glorious day for India. On that day, after a long gap of about four hundred years, the birth place of Lord Rama was redeemed. The way developments happened [on the night before], it can be said that Lord Rama himself redeemed his place of birth.

While this theory was being used by communalists to explain the mystery of those dark hours, no serious attempt was made to explore the events of that night objectively, neither by the government nor by any institutions or individual researchers. Debunking the theory of ‘divine exercise’ is one thing (and there is no dearth of works in this regard), but unravelling the truth that was sought to be covered is something else.

Surely, part of the reason why the facts could not come out as and when they occurred – as happened in the case of Mahatma Gandhi’s assassination – had greatly to do with the power politics of the time. From the assassination of Gandhi in 1948 until the death of Sardar Vallabhbhai Patel in 1950, the Congress party was beset with an intense intra-party power struggle. Though it had witnessed factional fights earlier as well, there had always been an element of restraint under the influence of Mahatma Gandhi and the idealism of the freedom struggle. But as soon as these restraints disappeared, the fight between the two power blocs in the Congress – Hindu conservatives led by Patel and secularists led by Nehru – came out in the open.

The United Provinces, in particular, emerged as one of the main battlegrounds for these power blocs in the Congress, merely months after Gandhi’s assassination. Govind Ballabh Pant, the chief minister of the province (called prime minister before adoption of the Constitution on 26 January 1950), was a staunch loyalist of Patel. His desperation to remove all those who appeared to be potential challengers to his authority in the state Congress led him to align with Hindu revivalists in Ayodhya – a move that, apart from paying him dividends, greatly emboldened Mahasabhaites and set the ground for the eventual appearance of the idols at the Babri Masjid.

With the Hindu conservative faction of the Congress, in a bid to neutralize Nehru, openly trying to outsource political strength from communal elements outside the party, and the latter endeavouring to arrest this political drift and salvage its own position, there was hardly much time, or determination, to probe the misdeeds of the Mahasabhaites. This was even more so in the United Provinces where the government appeared to be more interested in protecting the Hindu communalists than bringing them to book.

By the time this battle was won by Nehru in late 1950, the incidents of the night of 22 December 1949 had got lost in legal thickets, and the mood of the nation had changed, with the secular fabric seemingly no longer threatened by Hindu revivalists. As the focus shifted following the promulgation of the Constitution of India on 26 January 1950, almost all the players of the Hindu Mahasabha’s Ayodhya strategy either lost their relevance or, in cases where some of them managed to remain in currency, their ability to break the secular equilibrium got severely restricted and their link with the night became part of this missing link of modern India’s history.

Trump has a point about globalisation

Larry Elliott in The Guardian


The president’s belief that the nation state can cure economic ills is not without merit


  
‘The stupendous growth posted by China over the past four decades has been the result of doing the opposite of what the globalisation textbooks recommend.’ Photograph: AFP/Getty Images


Once every three years the International Monetary Fund and the World Bank hold their annual meetings out of town. Instead of schlepping over to Washington, the gathering of finance ministers and central bank governors is hosted by a member state. Ever since the 2000 meeting in Prague was besieged by anti-globalisation rioters, the away fixtures have tended to be held in places that are hard to get to or where the regime tends to take a dim view of protest: Singapore, Turkey, Peru.

This year’s meeting will take place in a couple of weeks on the Indonesian island of Bali, where the IMF and the World Bank can be reasonably confident that the meetings will not be disrupted. At least not from the outside. The real threat no longer comes from balaclava-wearing anarchists throwing Molotov cocktails but from within. Donald Trump is now the one throwing the petrol bombs and for multilateral organisations like the IMF and World Bank, that poses a much bigger threat.

The US president put it this way in his speech to the United Nations on Tuesday: “We reject the ideology of globalism and we embrace the doctrine of patriotism.” For decades, the message from the IMF has been that breaking down the barriers to trade, allowing capital to move unhindered across borders and constraining the ability of governments to regulate multinational corporations was the way to prosperity. Now the most powerful man on the planet is saying something different: that the only way to remedy the economic and social ills caused by globalisation is through the nation state. Trump’s speech was mocked by fellow world leaders, but the truth is that he’s not a lone voice.

The world’s other big economic superpower – China – has never given up on the nation state. Xi Jinping likes to use the language of globalisation to make a contrast with Trump’s protectionism, but the stupendous growth posted by China over the past four decades has been the result of doing the opposite of what the globalisation textbooks recommend. The measures traditionally frowned upon by the IMF – state-run industries, subsidies, capital controls – have been central to Beijing’s managed capitalism. China has certainly not closed itself off from the global economy but has engaged on its own terms. When the communist regime wanted to move people out of the fields and into factories it did so through the mechanism of an undervalued currency, which made Chinese exports highly competitive. When the party decided that it wanted to move into more sophisticated, higher-tech manufacturing, it insisted that foreign companies wishing to invest in China share their intellectual property.

This sort of approach isn’t new. It was the way most western countries operated in the decades after the second world war, when capital controls, managed immigration and a cautious approach to removing trade barriers were seen as necessary if governments were to meet public demands for full employment and rising living standards. The US and the EU now say that China is not playing fair because it has been prospering with an economic strategy that is supposed not to work. There is some irony in this.

The idea that the nation state would wither away was based on three separate arguments. The first was that the barriers to the global free movement of goods, services, people and money were economically inefficient and that removing them would lead to higher levels of growth. This has not been the case. Growth has been weaker and less evenly shared.

The second was that governments couldn’t resist globalisation even if they wanted to. This was broadly the view once adopted by Bill Clinton and Tony Blair, and now kept alive by Emmanuel Macron. The message to displaced workers was that the power of the market was – rather like a hurricane or a blizzard – an irresistible force of nature. This has always been a dubious argument because there is no such thing as a pure free market. Globalisation has been shaped by political decisions, which for the past four decades have favoured the interests of capital over labour.
Finally, it was argued that the trans-national nature of modern capitalism made the nation state obsolete. Put simply, if economics was increasingly global then politics had to go global, too. There is clearly something in this because financial markets impose constraints on individual governments and it would be preferable for there to be a form of global governance pushing for stability and prosperity for all. The problem is that to the extent such an institutional mechanism exists, it has been captured by the globalists. That is as true of the EU as it is of the IMF.

So while the nation state is far from perfect, it is where an alternative to the current failed model will inevitably begin. Increasingly, voters are looking to the one form of government in which they do have a say to provide economic security. And if the mainstream parties are not prepared to offer what these voters want – a decently paid job, properly funded public services and controls on immigration – then they will look elsewhere, to parties or movements that will. This has proved a particular problem for the parties of the centre left – the Democrats in the US, New Labour in Britain, the SPD in Germany – that signed up to the idea that globalisation was an unstoppable force.

Jeremy Corbyn certainly does not accept the idea that the state is obsolete as an economic actor. His plan is to build a different sort of economy from the bottom up – locally and nationally. That will not be easy, but it beats the current, failed, top-down approach.

Tuesday 25 September 2018

Why western philosophy can only teach us so much

Julian Baggini in The Guardian

One of the great unexplained wonders of human history is that written philosophy first flowered entirely separately in different parts of the globe at more or less the same time. The origins of Indian, Chinese and ancient Greek philosophy, as well as Buddhism, can all be traced back to a period of roughly 300 years, beginning in the 8th century BC.

These early philosophies have shaped the different ways people worship, live and think about the big questions that concern us all. Most people do not consciously articulate the philosophical assumptions they have absorbed and are often not even aware that they have any, but assumptions about the nature of self, ethics, sources of knowledge and the goals of life are deeply embedded in our cultures and frame our thinking without our being aware of them.

Yet, for all the varied and rich philosophical traditions across the world, the western philosophy I have studied for more than 30 years – based entirely on canonical western texts – is presented as the universal philosophy, the ultimate inquiry into human understanding. Comparative philosophy – the study of two or more philosophical traditions – is left almost entirely to people working in anthropology or cultural studies. This abdication of interest assumes that comparative philosophy might help us to understand the intellectual cultures of India, China or the Muslim world, but not the human condition.

This has become something of an embarrassment for me. Until a few years ago, I knew virtually nothing about anything other than western philosophy, a tradition that stretches from the ancient Greeks to the great universities of Europe and the US. Yet, if you look at my PhD certificate or the names of the university departments where I studied, there is only one, unqualified, word: philosophy. Recently and belatedly, I have been exploring the great classical philosophies of the rest of the world, travelling across continents to encounter them first-hand. It has been the most rewarding intellectual journey of my life.

My philosophical journey has convinced me that we cannot understand ourselves if we do not understand others. Getting to know others requires avoiding the twin dangers of overestimating either how much we have in common or how much divides us. Our shared humanity and the perennial problems of life mean that we can always learn from and identify with the thoughts and practices of others, no matter how alien they might at first appear. At the same time, differences in ways of thinking can be both deep and subtle. If we assume too readily that we can see things from others’ points of view, we end up seeing them from merely a variation of our own.

To travel around the world’s philosophies is an opportunity to challenge the beliefs and ways of thinking we take for granted. By gaining greater knowledge of how others think, we can become less certain of the knowledge we think we have, which is always the first step to greater understanding.

Take the example of time. Around the world today, time is linear, ordered into past, present and future. Our days are organised by the progression of the clock, in the short to medium term by calendars and diaries, history by timelines stretching back over millennia. All cultures have a sense of past, present and future, but for much of human history this has been underpinned by a more fundamental sense of time as cyclical. The past is also the future, the future is also the past, the beginning also the end.

The dominance of linear time fits with an eschatological worldview in which all of human history is building up to a final judgment. This is perhaps why, over the centuries, it became the common-sense way of viewing time in the largely Christian west. When God created the world, he began a story with a beginning, a middle and an end. As Revelation puts it, while prophesying the end times, Jesus is this epic’s “Alpha and Omega, the beginning and the end, the first and the last”.

But there are other ways of thinking about time. Many schools of thought believe that the beginning and the end are and have always been the same because time is essentially cyclical. This is the most intuitively plausible way of thinking about eternity. When we imagine time as a line, we end up baffled: what happened before time began? How can a line go on without end? A circle allows us to visualise going backwards or forwards for ever, at no point coming up against an ultimate beginning or end.

Thinking of time cyclically made particular sense in premodern societies, where there was little innovation across generations and people lived lives much like those of their grandparents, their great-grandparents and their ancestors many generations back. Without change, progress was unimaginable. Meaning could therefore be found only in embracing the cycle of life and death and playing your part in it as best you could.


Perhaps this is why cyclical time appears to have been the human default. The Mayans, Incans and Hopi all viewed time in this way. Many non-western traditions contain elements of cyclical thinking about time, perhaps most evident in classical Indian philosophy. The Indian philosopher and statesman Sarvepalli Radhakrishnan wrote: “All the [orthodox] systems accept the view of the great world rhythm. Vast periods of creation, maintenance and dissolution follow each other in endless succession.” For example, a passage in the Rig Veda addressing Dyaus and Prithvi (heaven and earth) reads: “Which was the former, which of them the latter? How born? O sages, who discerns? They bear themselves all that has existence. Day and night revolve as on a wheel.”

East Asian philosophy is deeply rooted in the cycle of the seasons, part of a larger cycle of existence. This is particularly evident in Taoism, and is vividly illustrated by the surprising cheerfulness of the 4th century BC Taoist philosopher Zhuangzi when everyone thought he should have been mourning for his wife. At first, he explained, he was as miserable as anyone else. Then he thought back beyond her to the beginning of time itself: “In all the mixed-up bustle and confusion, something changed and there was qi. The qi changed and there was form. The form changed and she had life. Today there was another change and she died. It’s just like the round of four seasons: spring, summer, autumn and winter.”

In Chinese thought, wisdom and truth are timeless, and we do not need to go forward to learn, only to hold on to what we already have. As the 19th-century Scottish sinologist James Legge put it, Confucius did not think his purpose was “to announce any new truths, or to initiate any new economy. It was to prevent what had previously been known from being lost.” Mencius, similarly, criticised the princes of his day because “they do not put into practice the ways of the ancient kings”. Mencius also says, in the penultimate chapter of the eponymous collection of his conversations: “The superior man seeks simply to bring back the unchanging standard, and, that being correct, the masses are roused to virtue.” The very last chapter charts the ages between the great kings and sages.

A hybrid of cyclical and linear time operates in strands of Islamic thought. “The Islamic conception of time is based essentially on the cyclic rejuvenation of human history through the appearance of various prophets,” says Seyyed Hossein Nasr, professor emeritus of Islamic studies at George Washington University. Each cycle, however, also moves humanity forward, with each revelation building on the former – the dictation of the Qur’an to Muhammad being the last, complete testimony of God – until ultimately the series of cycles ends with the appearance of the Mahdi, who rules for 40 years before the final judgment.

The distinction between linear and cyclical time is therefore not always neat. The assumption of an either/or leads many to assume that oral philosophical traditions have straightforwardly cyclical conceptions of time. The reality is more complicated. Take Indigenous Australian philosophies. There is no single Australian first people with a shared culture, but there are enough similarities across the country for some tentative generalisations to be made about ideas that are common or dominant. The late anthropologist David Maybury-Lewis suggested that time in Indigenous Australian culture is neither cyclical nor linear; instead, it resembles the space-time of modern physics. Time is intimately linked to place in what he calls the “dreamtime” of “past, present, future all present in this place”.

“One lives in a place more than in a time,” is how Stephen Muecke puts it in his book Ancient and Modern: Time, Culture and Indigenous Philosophy. More important than the distinction between linear or cyclical time is whether time is separated from or intimately connected to place. Take, for example, how we conceive of death. In the contemporary west, death is primarily seen as the expiration of the individual, with the body as the locus, and the location of that body irrelevant. In contrast, Muecke says: “Many indigenous accounts of the death of an individual are not so much about bodily death as about a return of energy to the place of emanation with which it re-identifies.”

Such a way of thinking is especially alien to the modern west, where a pursuit of objectivity systematically downplays the particular, the specifically located. In a provocative and evocative sentence, Muecke says: “Let me suggest that longsightedness is a European form of philosophical myopia and that other versions of philosophy, indigenous perhaps, have a more lived-in and intimate association with societies of people and the way they talk about themselves.”

Muecke cites the Australian academic Tony Swain’s view that the concept of linear time is a kind of fall from place. “I’ve got a hunch that modern physics separated out those dimensions and worked on them, and so we produced time as we know it through a whole lot of experimental and theoretical activities,” Muecke told me. “If you’re not conceptually and experimentally separating those dimensions, then they would tend to flow together.” His indigenous friends talk less of time or place independently, but more of located events. The key temporal question is not “When did this happen?” but “How is this related to other events?”

That word related is important. Time and space have become theoretical abstractions in modern physics, but in human culture they are concrete realities. Nothing exists purely as a point on a map or a moment in time: everything stands in relation to everything else. So to understand time and space in oral philosophical traditions, we have to see them less as abstract concepts in metaphysical theories and more as living conceptions, part and parcel of a broader way of understanding the world, one that is rooted in relatedness. Hirini Kaa, a lecturer at the University of Auckland, says that “the key underpinning of Maori thought is kinship, the connectedness between humanity, between one another, between the natural environment”. He sees this as a form of spirituality. “The ocean wasn’t just water, it wasn’t something for us to be afraid of or to utilise as a commodity, but became an ancestor deity, Tangaroa. Every living thing has a life force.”

David Mowaljarlai, who was a senior lawman of the Ngarinyin people of Western Australia, once called this principle of connectivity “pattern thinking”. Pattern thinking suffuses the natural and the social worlds, which are, after all, in this way of thinking, part of one thing. As Muecke puts it: “The concept of connectedness is, of course, the basis of all kinship systems [...] Getting married, in this case, is not just pairing off, it is, in a way, sharing each other.”

The emphasis on connectedness and place leads to a way of thinking that runs counter to the abstract universalism developed to a certain extent in all the great written traditions of philosophy. Muecke describes as one of the “enduring [Indigenous Australian] principles” that “a way of being will be specific to the resources and needs of a time and place and that one’s conduct will be informed by responsibility specific to that place”. This is not an “anything goes” relativism, but a recognition that rights, duties and values exist only in actual human cultures, and their exact shape and form will depend on the nature of those situations.


This should be clear enough. But the tradition of western philosophy, in particular, has striven for a universality that glosses over differences of time and place. The word “university”, for example, even shares the same etymological root as “universal”. In such institutions, “the pursuit of truth recognises no national boundaries”, as one commentator observed. Place is so unimportant in western philosophy that, when I discovered it was the theme of the quinquennial East-West Philosophers’ Conference in 2016, I wondered if there was anything I could bring to the party at all. (I decided that the absence of place in western philosophy itself merited consideration.)

The universalist thrust has many merits. The refusal to accept any and every practice as a legitimate custom has bred a very good form of intolerance for the barbaric and unjust traditional practices of the west itself. Without this intolerance, we would still have slavery, torture, fewer rights for women and homosexuals, feudal lords and unelected parliaments. The universalist aspiration has, at its best, helped the west to transcend its own prejudices. At the same time, it has also legitimised some prejudices by confusing them with universal truths. The philosopher Kwame Anthony Appiah argues that the complaints of anti-universalists are not generally about universalism at all, but pseudo-universalism, “Eurocentric hegemony posing as universalism”. When this happens, intolerance for the indefensible becomes intolerance for anything that is different. The aspiration for the universal becomes a crude insistence on the uniform. Sensitivity is lost to the very different needs of different cultures at different times and places.

This “posing as universalism” is widespread and often implicit, with western concepts being taken as universal but Indian ones remaining Indian, Chinese remaining Chinese, and so on. To end this pretence, Jay L Garfield and Bryan W Van Norden propose that those departments of philosophy that refuse to teach anything from non-western traditions at least have the decency to call themselves departments of western philosophy.

The “pattern thinking” of Maori and Indigenous Australian philosophies could provide a corrective to the assumption that our values are the universal ones and that others are aberrations. It makes credible and comprehensible the idea that philosophy is never placeless and that thinking that is uprooted from any land soon withers and dies.

Mistrust of the universalist aspiration, however, can go too far. At the very least, there is a contradiction in saying there are no universal truths, since that is itself a universal claim about the nature of truth. The right view probably lies somewhere between the claims of naive universalists and those of defiant localists. There seems to be a sense in which even the universalist aspiration has to be rooted in something more particular. TS Eliot is supposed to have said: “Although it is only too easy for a writer to be local without being universal, I doubt whether a poet or novelist can be universal without being local, too.” To be purely universal is to inhabit an abstract universe too detached from the real world. But just as a novelist can touch on universals of the human condition through the particulars of a couple of characters and a specific story, so our different, regional philosophical traditions can shed light on more universal philosophical truths even though they approach them from their own specific angles.

We should not be afraid to ground ourselves in our own traditions, but we should not be bound by them. Gandhi put this poetically when he wrote: “I do not want my house to be walled in on all sides and my windows to be stuffed. I want the cultures of all lands to be blown about my house as freely as possible. But I refuse to be blown off my feet by any. I refuse to live in other people’s houses as an interloper, a beggar or a slave.”

In the west, the predominance of linear time is associated with the idea of progress that reached its apotheosis in the Enlightenment. Before this, argues the philosopher Anthony Kenny, “people looking for ideals had looked backwards in time, whether to the primitive church, or to classical antiquity, or to some mythical prelapsarian era. It was a key doctrine of the Enlightenment that the human race, so far from falling from some earlier eminence, was moving forward to a happier future.”

Kenny is expressing a popular view, but many see the roots of belief in progress deeper in the Christian eschatological religious worldview. “Belief in progress is a relic of the Christian view of history as a universal narrative,” claims John Gray. Secular thinkers, he says, “reject the idea of providence, but they continue to think humankind is moving towards a universal goal”, even though “the idea of progress in history is a myth created by the need for meaning”.

Whether faith in progress is an invention or an adaptation of the Enlightenment, the image of secular humanists naively believing humanity is on an irreversible, linear path of advancement seems to me a caricature of their more modest hope, based in history, that progress has occurred and that more is possible. As the historian Jonathan Israel says, Enlightenment ideas of progress “were usually tempered by a strong streak of pessimism, a sense of the dangers and challenges to which the human condition is subject”. He dismisses the idea that “Enlightenment thinkers nurtured a naive belief in man’s perfectibility” as a “complete myth conjured up by early 20th-century scholars unsympathetic to its claims”.

Nevertheless, Gray is right to point out that linear progress is a kind of default way of thinking about history in the modern west and that this risks blinding us to the ways in which gains can be lost, advances reversed. It also fosters a sense of the superiority of the present age over earlier, supposedly “less advanced”, times. Finally, it occludes the extent to which history doesn’t repeat itself but does rhyme.

The different ways in which philosophical traditions have conceived time turn out to be far from mere metaphysical curiosities. They shape the way we think about both our temporal place in history and our relation to the physical places in which we live. Time thus provides one of the easiest and clearest examples of how borrowing another way of thinking can bring a fresh perspective to our world. Sometimes, simply by changing the frame, the whole picture can look very different.