Showing posts with label trust.

Wednesday 7 November 2018

David Attenborough has betrayed the living world he loves

George Monbiot in The Guardian


By downplaying our environmental crisis, the presenter’s BBC films have generated complacency, confusion and ignorance

 
David Attenborough filming the BBC series Africa in the Suguta Valley, northern Kenya. Photograph: David Chancellor/BBC


Knowingly creating a false impression of the world: this is a serious matter. It is more serious still when the BBC does it, and yet worse when the presenter is “the most trusted man in Britain”. But, as his latest interview with the Observer reveals, David Attenborough sticks to his line that fully representing environmental issues is a “turn-off”.

His new series, Dynasties, will mention the pressures affecting wildlife, but Attenborough makes it clear that it will play them down. To do otherwise, he suggests, would be “proselytising” and “alarmist”. His series will be “a great relief from the political landscape which otherwise dominates our thoughts”. In light of the astonishing rate of collapse of the animal populations he features, alongside most of the rest of the world’s living systems – and when broadcasting as a whole has disgracefully failed to represent such truths – I don’t think such escapism is appropriate or justifiable.

It is not proselytising or alarmist to tell us the raw truth about what is happening to the world, however much it might discomfit us. Nor do I believe that revealing the marvels of nature automatically translates into environmental action, as the executive producer of Dynasties claims. I’ve come to believe it can have the opposite effect.

For many years, wildlife film-making has presented a pristine living world. It has created an impression of security and abundance, even in places afflicted by cascading ecological collapse. The cameras reassure us that there are vast tracts of wilderness in which wildlife continues to thrive. They cultivate complacency, not action.

You cannot do such a thing passively. Wildlife film-makers I know tell me that the effort to portray what looks like an untouched ecosystem becomes harder every year. They have to choose their camera angles ever more carefully to exclude the evidence of destruction, travel further to find the Edens they depict. They know – and many feel deeply uncomfortable about it – that they are telling a false story, creating a fairytale world that persuades us all is well, in the midst of an existential crisis. While many people, thanks in large part to David Attenborough, are now quite well informed about wildlife, we remain astonishingly ignorant about what is happening to it.

What makes Attenborough’s comments particularly odd is that they come just a year after the final episode of his Blue Planet II series triggered a massive effort to reduce plastic pollution. Though the programme made a complete dog’s breakfast of the issue, the response demonstrated a vast public appetite for information about the environmental crisis, and an urgent desire to act on it.

Since 1985, when I worked in the department that has made most of his programmes, I have pressed the BBC to reveal environmental realities, often with dismal results. In 1995 I spent several months with a producer, developing a novel and imaginative proposal for an environmental series. The producer returned from his meeting with the channel controller in a state of shock. “He just looked at the title and asked ‘Is this environment?’ I said yes. He said, ‘I’ve spent two years trying to get environment off this fucking channel. Why the fuck are you bringing me environment?’” 

I later discovered that this response was typical. The controllers weren’t indifferent. They were actively hostile. If you ask me whether the BBC or ExxonMobil has done more to frustrate environmental action in this country, I would say the BBC.

We all knew that only one person had the power to break this dam. For decades David Attenborough, a former channel controller widely seen as the living embodiment of the BBC, has been able to make any programme he wants. So where, we kept asking, was he? At last, in 2000, he presented an environmental series: State of the Planet.

It was an interesting and watchable series, but it left us with nowhere to go and nothing to do. Only in the last few seconds of the final episode was there a hint that structural forces might be at play: “Real success can only come if there’s a change in our societies, in our economics and in our politics.” But what change? What economics? What politics? He had given us no clues.

To make matters worse, it was sandwiched between further programmes of his about the wonders of nature, which created a strong impression of robust planetary health. He might have been describing two different worlds. Six years later he made another environmental series, The Truth About Climate Change. And this, in my view, was a total disaster.

It told us nothing about the driving forces behind climate breakdown. The only mention of fossil fuel companies was as part of the solution: “The people who extract fossil fuels like oil and gas have now come up with a way to put carbon dioxide back underground.” Apart from the general “we”, the only distinct force identified as responsible was the “1.3 billion Chinese”. That a large proportion of Chinese emissions are caused by manufacturing goods the west buys was not mentioned. The series immediately triggered a new form of climate denial: I was bombarded with people telling me there was no point in taking action in Britain because the Chinese were killing the planet.

If Attenborough’s environmentalism has a coherent theme, it is shifting the blame from powerful forces on to either society in general or the poor and weak. Sometimes it becomes pretty dark. In 2013 he told the Telegraph: “What are all these famines in Ethiopia? What are they about? They’re about too many people for too little land … We say, get the United Nations to send them bags of flour. That’s barmy.”

There had not been a famine in Ethiopia for 28 years, and the last one was caused not by an absolute food shortage but by civil war and government policies. His suggestion that food relief is counter-productive suggests he has read nothing on the subject since Thomas Malthus’s essay in 1798. But, cruel and ignorant as these comments were, they were more or less cost-free. By contrast, you do not remain a national treasure by upsetting powerful vested interests: look at the flak the outspoken wildlife and environmental presenter Chris Packham attracts for standing up to the hunting lobby.

I have always been entranced by Attenborough’s wildlife programmes, but astonished by his consistent failure to mount a coherent, truthful and effective defence of the living world he loves. His revelation of the wonders of nature has been a great public service. But withholding the knowledge we need to defend it is, I believe, a grave disservice.

Thursday 4 October 2018

Finance, the media and a catastrophic breakdown in trust

John Authers in The Financial Times


Finance is all about trust. JP Morgan, patriarch of the banking dynasty, told Congress, in the 1912 hearings that led to the foundation of the US Federal Reserve, that the first thing in credit was “character, before money or anything else. Money cannot buy it.

“A man I do not trust could not get money from me on all the bonds in Christendom,” he added. “I think that is the fundamental basis of business.” He was right. More than a century later, it is ever clearer that, without trust, finance collapses. That is no less true now, when quadrillions change hands in electronic transactions across the globe, than it was when men such as Morgan dominated markets trading face to face. 

And that is a problem. Trust has broken down throughout society. From angry lynch mobs on social media to the fracturing of the western world’s political establishment, this is an accepted fact of life, and it is not merely true of politics. Over the past three decades, trust in markets has evaporated. 

In 1990, when I started at the Financial Times, trust in financiers and the media who covered them was, if anything, excessive. Readers were deferential towards the FT and, particularly, the stone-tablet certainties of the Lex column, which since the 1930s has dispensed magisterial and anonymous investment advice in finely chiselled 300-word notes. 

Trainee bankers in the City of London were required to read Lex before arriving at the office. If we said it, it must be true. Audience engagement came in handwritten letters, often in green ink. Once, a reader pointed out a minor error and ended: “The FT cannot be wrong, can it?” I phoned him and discovered this was not sarcasm. The FT was and is exclusively produced by human beings, but it had not occurred to him that we were capable of making a mistake. 

Back then, we made easy profits publishing page after page of almost illegible share price tables. One colleague had started in the 1960s as “Our Actuary” — his job was to calculate, using a slide rule, the value of the FTSE index after the market closed. 

Then came democratisation. As the 1990s progressed, the internet gave data away for free. Anyone with money could participate in the financial world without relying on the old intermediaries. If Americans wanted to shift between funds or countries, new online “fund supermarkets” sprang up to let them move their pension fund money as much as they liked.

Technology also broke the hold of bankers over finance, replacing it with the invisible hand of capital markets. No longer did banks’ lending officers decide on loans for businesses or mortgages; those decisions instead rested in the markets for mortgage-backed securities, corporate paper and junk bonds. Meanwhile, banks were merged, deregulated and freed to re-form themselves. 

But the sense of democratisation did not last. The crises that rent the financial world in twain, from the dotcom bubble in 2000 through to the 2008 Lehman debacle and this decade’s eurozone sovereign debt crisis, ensured instead that trust broke down. That collapse appears to me to be total: in financial institutions, in the markets and, most painfully for me, in the financial media. Once our word was accepted unquestioningly (which was unhealthy); now, information is suspect just because it comes from us, which is possibly even more unhealthy. 

To explain this, let me tell the story of the most contentious trip to the bank I have ever made. 

Two days after Lehman Brothers declared bankruptcy, in September 2008, I went on an anxious walk to my local bank branch. Working in New York, I had recently sold my flat in London and a large sum had just landed in my account at Citibank — far more than the insured limit, which at that point was $100,000. 

It did not seem very safe there. Overnight, the Federal Reserve had spent $85bn to bail out the huge insurance company AIG, which had unwisely guaranteed much credit now sitting on banks’ books. Were AIG to go south, taking its guarantees with it, many banks would suddenly find themselves with worthless assets and become insolvent. 

Meanwhile, a money market fund had “broken the buck”. Money market funds were treated like bank accounts by their clients. They invested in very safe short-term bonds, trying to find higher rates than a deposit account could offer. Each share in a fund was worth $1, interest was distributed, and the price was never supposed to dip below $1. As the funds did not pay premiums for deposit insurance, they could pay higher interest rates for no perceived extra risk.

Thus there was outright panic when a large money market fund admitted that it held Lehman Brothers bonds, that its price must drop to 97 cents and that it was freezing access to the fund. Across the US, investors rushed to pull their money out of almost anything with any risk attached to it, and poured it into the safest investments they could find — gold and very short-term US government debt (Treasury bills, or T-bills). This was an old-fashioned bank run, but happening where the general public could not see it. Panic was only visible to those who understood the arcana of T-bill yields.
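The mechanics of “breaking the buck” are simple enough to sketch in a few lines of code. The numbers below are invented for illustration (the 3 per cent write-down is chosen only to reproduce the 97-cent price mentioned above); the arithmetic, though, is the real one: a fund’s share price is just its assets dividedded by its shares outstanding.

```python
# Illustrative sketch (invented numbers): how a money market fund
# "breaks the buck" when one of its holdings is written down.

def nav_per_share(assets: float, shares: float) -> float:
    """Net asset value per share: total assets / shares outstanding."""
    return assets / shares

fund_assets = 10_000_000_000.0         # $10bn in short-term paper (hypothetical)
shares_outstanding = 10_000_000_000.0  # shares issued at $1.00 each

# Suppose 3% of assets sit in bonds of a bankrupt issuer, now judged
# worthless (hypothetical figures, not the real fund's actual book).
writedown = 0.03 * fund_assets
nav = nav_per_share(fund_assets - writedown, shares_outstanding)

print(f"NAV per share: ${nav:.2f}")  # NAV per share: $0.97
print("Broke the buck" if nav < 1.00 else "Still at $1.00")
```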

Our headline that day read “Panic grips credit markets” under a banner about the “banking crisis” in red letters. 

There was no time to do anything complicated with my own money. Once I reached my lunch hour, I went to our local Citi branch, with a plan to take out half my money and put it into rival bank Chase, whose branch was next door. This would double the amount of money that was insured. 

This is how I recounted what happened next, in a column for the FT last month: 

“We were in Midtown Manhattan, surrounded by investment banking offices. At Citi, I found a long queue, all well-dressed Wall Streeters. They were doing the same as me. Next door, Chase was also full of anxious-looking bankers. Once I reached the relationship officer, who was great, she told me that she and her opposite number at Chase had agreed a plan of action. I need not open an account at another bank. Using bullet points, she asked if I was married and had children. Then she opened accounts for each of my children in trust and a joint account with my wife. In just a few minutes I had quadrupled my deposit insurance coverage. I was now exposed to Uncle Sam, not Citi. With a smile, she told me she had been doing this all morning. Neither she nor her friend at Chase had ever had requests to do this until that week.” 
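The arithmetic behind that quadrupling is worth spelling out. Here is a minimal sketch, assuming (as a simplification of the FDIC rules of the time) that the $100,000 limit applied separately to each ownership category at a single bank and, for the in-trust accounts, once per named beneficiary.

```python
# Minimal sketch of the "quadrupling" arithmetic. Assumes, as a
# simplification of the 2008-era FDIC rules, that the $100,000 limit
# applied separately per ownership category and, for in-trust
# accounts, per named beneficiary.

LIMIT = 100_000  # insured limit per depositor, per category, in 2008

def insured_total(has_individual: bool, has_joint: bool, trust_beneficiaries: int) -> int:
    total = LIMIT if has_individual else 0
    total += LIMIT if has_joint else 0    # the depositor's share of a joint account
    total += LIMIT * trust_beneficiaries  # one slice per named beneficiary
    return total

print(insured_total(True, False, 0))  # 100000: a single individual account
print(insured_total(True, True, 2))   # 400000: plus joint account, plus two children in trust
```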

Ten years on, this is my most vivid memory of the crisis. The implications were clear: Wall Streeters, who understood what was going on, felt they had to shore up their money in insured deposits. The bank run in the trading rooms was becoming visible in the bank branches down below. 

In normal circumstances, the tale of the bank branch would have made an ideal anecdote with which to lead our coverage, perhaps with a photo of the queue of anxious bankers. Low T-bill yields sound dry and lack visual appeal; what I had just seen looked like a bank run. (Although technically it was not — nobody I saw was taking out money.) 

But these were not normal circumstances, and I never seriously considered writing about it. Banks are fragile constructs. By design, they have more money lent out than they keep to cover deposits. A self-fulfilling loss of confidence can force a bank out of business, even if it is perfectly well run. In a febrile environment, I thought an image of a Manhattan bank run would be alarmist. I wrote a piece invoking a breakdown in trust between banks and described the atmosphere as “panic”, but did not mention the bank branch. Ten years later, with the anniversary upon us, I thought it would be an interesting anecdote to dramatise the crisis. 

In the distrustful and embittered world of 2018, the column about what I saw and why I chose not to write about it provoked a backlash that amazed me. Hundreds of responses poured in. Opinion was overwhelmingly against me. 

One email told me: “Your decision to save yourself while neglecting your readership is unforgivable and in the very nature of the elitist Cal Hockley of the Titanic scrambling for a lifeboat at the expense of others in need.” One commenter on FT.com wrote: “This reads like Ford trying to explain why pardoning Nixon was the right thing to do.” 

“I have re-read the article, and the comments, a couple of times,” wrote another. “And I realised that it actually makes me want to vomit, as I realise what a divide there is between you and I, between the people of the establishment like yourself, and the ordinary schmucks like myself. The current system is literally sickening and was saved for those who have something to protect, at the expense of those who they are exploiting.” 

Feedback carried on and on in this vein. How could we in the media ever be trusted if we did not tell the whole truth? Who were we to edit the facts and the truth that were presented? Why were we covering up for our friends in the banks? Newspaper columns attacking me for my hypocrisy popped up across the world, from France to Singapore. 

I found the feedback astonishing and wrong-headed. But I am now beginning to grasp the threads of the problem. Most important is the death of belief in the media as an institution that edits and clarifies or chooses priorities. Newspapers had to do this. There was only so much space in the paper each day. Editing was their greatest service to society. 

Much the same was true of nightly half-hour news broadcasts in pre-cable television. But now, the notion of self-censorship is alien and suspect. People expect “the whole truth”. The idea of news organisations with long-standing cultures and staffed by trained professionals deciding what is best to publish appears bankrupt. We are not trusted to do this, and not just because of politicians crying “fake news”. 

Rather, the rise of social media has redefined all other media. If the incident in the Citi branch were to happen today, someone would put a photo of it on Facebook and Twitter. It might or might not go viral. But it would be out there, without context or explanation. The journalistic duty I felt to be responsible and not foment panic is now at an end. This is dangerous. 

Another issue is distrust of bankers. Nobody ever much liked “fat cats”, but this pickled into hatred as bankers avoided personal criminal punishment for their roles in the crisis. Bank bailouts were, I still think, necessary to protect depositors. But they are now largely perceived merely as protecting bankers. My self-censorship seemed to be an effort to help my friends the bankers, not to shield depositors from a panic. 

Then there is inequality. In my column, I said that I “happened to have a lot of money in my account” but made no mention of selling my London flat. People assumed that if I had several hundred thousand dollars sitting in a bank account, I must be very rich. That, in many eyes, made my actions immoral. Once I entered the FT website comments thread to explain where the money had come from, some thought this changed everything. It was “important information”. “In the article where moral questions [were] raised, the nature of the capital should have been explained better,” one commenter said. 

The hidden premise was that if I were rich, I would not have been morally entitled to protect my money ahead of others lacking the information I was privy to. Bear in mind that to read this piece, it was necessary to subscribe to the FT. 

Put these factors together, and you have a catastrophic breakdown in trust. How did we get here? 

The democratisation of finance in the 1990s was healthy. Transparency revealed excessive fees that slowly began to fall. For us at the FT, in many ways an entrenched monopoly, this meant lost advertising and new competition from cable TV, data providers and an array of online services. 

But that democratisation was tragically mishandled and regulators let go of the reins far too easily. In 1999, as the Nasdaq index shot to the sky, the share prices of new online financial media groups such as thestreet.com shot up with them. On US television, ads for online brokers showed fictional truck drivers apparently buying their own island with the proceeds of their earnings from trading on the internet. By 2000, when I spent time at business school, MBA students day-traded on their laptops in class, oblivious to what their professors were saying. 

Once that bubble burst, the pitfalls of rushed democratisation were painfully revealed. Small savers had been sucked into the bubble at the top, and sustained bad losses. 

Trust then died with the credit crisis of 2008 and its aftermath. The sheer injustice of the ensuing government cuts and mass layoffs, which deepened inequality and left many behind while leaving perpetrators unpunished, ensured this. 

The public also lost their trust in journalists as their guides in dealing with this. We were held to have failed to warn the public of the impending crisis in 2008. I think this is unfair; the FT and many other outlets were loudly sceptical and had been giving the problems of US subprime lenders blanket coverage for two years before Lehman Brothers went down. In the earlier dotcom bubble, however, I think the media has more of a case to answer — that boom was lucrative for us and many were too credulous, helping the bubble to inflate. 

Further, new media robbed journalists of our mystique. In 1990, readers had no idea what we looked like. Much of the FT, including all its stock market coverage, was written anonymously. The only venue for our work was on paper and the only way to respond (apart from the very motivated, who used the telephone) was also on paper. The rule of thumb was that for every letter we received, at least another hundred readers felt the same way. 

Now, almost everything in the paper that expresses an opinion carries a photo. Once my photo appeared above my name on the old Short View column, my feedback multiplied maybe fivefold. The buzzword was to be “multimodal”, regaling readers with the same ideas in multiple formats. In 2007 we started producing video. 

My readers became my viewers, watching me speak to them on screen every day, and my feedback jumped again. Answering emails from readers took over my mornings. Often these would start “Dear John”, or even just “John”, as though from people who knew me. So much for our old mystique. 

By 2010, social media was a fact of life. Writing on Twitter, journalists’ social network of choice, became part of the job. People expected us to interact with them. This sounds good. We were transparent and interactive in a way we had not been before. But it became part of my job to get into arguments with strangers, who stayed anonymous, in a 140-character medium that made the expression of any nuance impossible. 

Meanwhile, the FT hosted social media of its own. Audience engagement became a buzzword. If readers commented, we talked back. From 2012, I debated with readers and learnt a lot. FT readers are often specialists, and they helped me understand some arcane subject matter. Once, an intense discussion with well over a hundred entries on the subject of cyclically adjusted price/earnings multiples (don’t ask) yielded all the research I needed to write a long feature.

Now, following Twitter, comments below the line are degenerating into a cesspit of anger and disinformation. Where once I debated with specialists, now I referee nasty political arguments or take the abuse myself. The status of the FT and its competitors in the financial media as institutions entrusted with the task of giving people a sound version of the truth now appears, to many, to be totally out of date. 

Even more dangerously for the future, the markets and their implicit judgments have been brought into the realm of politics (and not just by President Trump). This was not true even 20 years ago; when Al Gore faced off against George W Bush in 2000, only months after the dotcom bubble burst, neither candidate made much of an issue of it. 

But now, following Lehman, people understand that decisions made in capital markets matter. That makes markets part of the political battlefield; not just how to interpret them, but even the actual market numbers are now open to question. 

Brexit rammed this home to me. During the 2016 referendum campaign, Remainers argued that voting to leave would mean a disastrous hit for sterling. This was not exactly Project Fear; whether or not you thought Brexit was a good idea, it was obvious that it would initially weaken the pound. A weaker currency can be good news — the pound’s humiliating exit from Europe’s exchange rate mechanism in 1992, for example, set the scene for an economic boom throughout the late 1990s.

But reporting on the pound on the night of the referendum was a new and different experience. Sitting in New York as the results came in through the British night, I had to write comments and make videos, while trying to master my emotions about the huge decision that my home country had just taken. Sterling fell more than 10 per cent against the dollar in a matter of minutes — more than double its previous greatest fall in the many decades that it had been allowed to float, bringing it to its lowest level in more than three decades. Remarkably, that reaction by foreign exchange traders has stood up: two years of political drama later, the pound has wavered but remains slightly below the level at which it settled on referendum night.

As I left, at 1am in New York, with London waking up for the new day, I tweeted a chart of daily moves in sterling since 1970, showing that the night’s fall dwarfed anything previously seen. It went viral, which was not surprising. But the nature of the response was amazing. It was a factual chart with a neutral accompanying message. It was treated as a dubious claim. 

“LOL got that wrong didn’t you . . . oops!” (There was nothing wrong with it.) “Pretty sure it was like that last month. Scaremongering again.” (No, it was a statement of fact and nothing like this had happened ever, let alone the previous month.) 

“Scaremongering. Project Fear talking us down. This is nothing to do with Brexit, it’s to do with the PM cowardice resignation.” (I had made the tweet a matter of hours before David Cameron resigned.) 

The reaction showed a willingness to doubt empirical facts. Many also felt that the markets themselves were being political and not just trying to put money where it would make the greatest return. “Bankers punish Britons for their audacity in believing they should have political control of their own country.” (Forex traders in the US and Asia were probably not thinking about this.) 

“It will recover, this is what uncertainty does. Also the rich bitter people upset about Brexit.” (Rich and bitter people were unlikely to make trades that they thought would make them poorer, and most of that night’s trading was by foreigners more dispassionate than Britons could be at that point.) 

So it continued for days. Thanks to the sell-off in sterling, the UK stock market did not perform that badly (unless you compared it with others, which showed that its performance was lousy). Whether the market really disliked the Brexit vote became a topic of hot debate, which it has remained — even as the market verdict, that Brexit is very bad news if not a disaster, becomes ever clearer. 

After Brexit, of course, came Trump. The US president takes the stock market as a gauge of his performance, and any upward move as a political endorsement — while his followers treat any fall, or any prediction of a fall by pundits such as me, as a political attack. The decade in which central banks have bought assets in an open attempt to move markets plays into the narrative that markets are political creations. 

This is the toxic loss of trust that now vitiates finance. Once lost, trust is very hard to retrieve, which is alarming. It is also not clear what the financial media can do about it, beyond redoubling our efforts to do a good job. 

All the most obvious policy responses come with dangers. Regulating social media from its current sick and ugly state would have advantages but would also be the thin end of a very long wedge. Greater transparency and political oversight for central banks might rebuild confidence but at the risk of politicising institutions we desperately need to maintain independence from politicians. And an overhaul of the prosecutorial system for white-collar crime, to avert the scandalous way so many miscreants escaped a reckoning a decade ago, might work wonders for bolstering public trust — but not if it led to scapegoating or show trials. 

On one thing, I remain gloomily clear. Without trust in financial institutions themselves, or those who work in them, or the media who cover them, the next crisis could be far more deadly than the last. Just ask JP Morgan.

Wednesday 21 March 2018

Should the Big Four accountancy firms be split up?

Natasha Landell-Mills and Jim Peterson in The Financial Times

Yes — Separating audit from consulting would prevent conflicts of interest


Auditors are failing investors. The situation has become so dire that last week the head of the UK’s accounting watchdog said it was time to consider forcing audit firms to divest their substantial and lucrative consulting work, writes Natasha Landell-Mills. 

This shift from the Financial Reporting Council, which opposed the idea six years ago, is welcome. But breaking up the Big Four accountancy firms — PwC, KPMG, EY and Deloitte — can only be a first step. Lasting reform depends on auditors working for shareholders, not management. 

Auditors are supposed to underpin trust in financial markets. Major stock markets require listed companies to hire auditors to verify their accounts, providing reassurance to shareholders that material matters have been inspected and their capital is protected. In the UK, auditors must certify that the published numbers give a “true and fair view” of circumstances and income; that they have been prepared in accordance with accounting standards; and that they comply with company law. 

But audit is failing to meet investors’ expectations. The failure of Carillion, linked to aggressive accounting, is just the latest high profile example. And this is not just a UK phenomenon. The International Forum of Independent Audit Regulators found that 40 per cent of the audits it inspected were sub-standard. 

Multiple market failures need to be addressed. The most obvious problem is that audit quality is invisible to those whom it is intended to benefit: the shareholders. It is difficult to differentiate good and bad audits. Even with the introduction of extended auditor reports in the UK (and starting in 2019 in the US), formulaic notes about audit risks often hide more than they convey. 

Even when questions are raised about the quality of audits, shareholders almost always vote to retain auditors, with most receiving at least 95 per cent support. Last year, 97 per cent of Carillion shareholders voted to re-appoint KPMG. Lack of scrutiny creates space for conflicts of interest. Auditors who feel accountable to company executives rather than shareholders will be less likely to challenge them. These conflicts are exacerbated when audit firms also sell other services to management teams, particularly if that consultancy work is more profitable. 

The dominance of the Big Four in large company audits is another concern: when large and powerful firms are able to crowd out high quality competitors, the damage is lasting. 

Taken together, these failures have resulted in a dysfunctional audit market that needs a broad revamp. Splitting audit from consulting would prevent the most insidious conflict of interest. When non-audit work makes up around 80 per cent of fee income for the Big Four (and just over half of income from audit clients), the influence of this part of the business is huge. 

Current limits on consulting work have not eliminated this problem. They are often set too high or can be gamed, while auditors can still be influenced by the hope of winning non-audit work after they relinquish the audit mandate. 

There is quite simply no compelling reason why shareholders should accept these conflicts and the resulting risks to audit quality introduced by non-audit work. But other reforms are necessary. 

Auditors should provide meaningful disclosures about the risks they uncover. They need to verify that company accounts do not overstate performance and capital and that unrealised profits are disclosed. 

Engagement between shareholders and audit committees and auditors should become the norm, not the exception. Shareholders need to scrutinise accounting and audit performance, and use their votes to remove auditors or audit committee directors where performance is substandard. 

Finally, the accounting watchdogs must be far more robust on audit quality and impose meaningful sanctions. Even the best intentioned will struggle against a broken system. 


No — Lopping off advisory services would hurt performance 

The recent spate of large-scale corporate accounting scandals is deeply worrying and raises a familiar question: “Where were the auditors?” But the correct answer does not involve breaking up the four professional services firms that dominate auditing, writes Jim Peterson. 

Forcing Deloitte, EY, KPMG and PwC to shed their non-audit businesses would neither add competition nor boost smaller competitors. Lopping off the Big Four’s consulting and advisory services would degrade their performance, weaken them financially, and hamper their ability to meet the needs of their clients and the capital markets. 

Although the UK regulator is raising competition concerns, the root problem is global. The growth of the Big Four, operating in more than 100 countries, reflects their multinational clients’ needs for breadth of geographic presence and specialised industry expertise. 

 The yawning gap in size between the Big Four and their smaller peers has long since grown beyond closure: even the smallest, KPMG, took in $26.4bn in 2017, three times as much as BDO, its next nearest competitor. If pressed, risk managers of the smaller firms admit to lacking the skills and the risk tolerance even to consider bidding to audit a far-flung multinational. 

The suggestion that competition and choice would be increased by splitting up the Big Four is doubly unrealistic. Forcing them to spin off their non-auditing business would not create any new auditors. We would continue to see dilemmas like the one faced by BT last year when it set out to replace PwC after a £530m discrepancy was uncovered in the accounts of its Italian division. The UK telecoms group ended up picking KPMG for want of alternatives, even though BT’s chairman had previously been global chairman of KPMG. 

Similarly, Japan’s Toshiba tossed EY in favour of PwC in 2016, only to suffer disagreements with the second firm — this led to delays in its financial statements and an eventual qualified audit report. Wish as it might, Toshiba has no further choices, because of business-based conflicts on the part of Deloitte and KPMG. 

A split by industry sector — say, assigning auditing of banking and technology to Firm A-1, while manufacturing and energy go to new Firm A-2 — would be no better. Each sector would still be served by just four big firms. If each firm were split in half, the two smaller firms would struggle to amass the expertise, personnel and capital necessary to provide the level of service that big companies expect. 

Splitting auditing from advisory work is a solution in search of a problem. Many jurisdictions, including the UK, EU and US, restrict the ability of firms to cross-sell other services to their audit clients. Concerns about inherent conflicts of interest are overblown. 

The enthusiasm for cutting up the Big Four also fails to recognise how the world is changing. The rise of artificial intelligence, blockchain and robotics is reshaping the way information is gathered and verified. Auditors will need more — rather than less — expertise. 

Warehouse inventories, crop yields and wind farms will soon be surveyed rapidly and comprehensively in ways that could easily displace the tedious and partial sampling done for decades by squadrons of young audit staff. But to take advantage of these advances, auditors need to have the scale, the financial strength and the technical skills to develop and offer them. 

These tools will also deliver data that management needs for operational and strategic decision making. If auditors are to be barred from providing this kind of advisory work, the legitimacy of methods that have prevailed since the Victorian era is under threat. Investors will require some sort of audit function, but who would provide it? Splitting up the Big Four will achieve nothing if they fail and are replaced by arms of Amazon and Google. 

Auditors should be held accountable for their mistakes, but these issues are too complex for simplistic solutions. Rather than a quick amputation, we need a full-scale re-engineering of the current model with all of its parts.

Monday 16 October 2017

Britain is over-tolerant of monopolies

Jonathan Ford in The Financial Times


The old joke that asks why there is only one Monopolies Commission may no longer work now the watchdog has rechristened itself the Competition and Markets Authority. 

But perhaps it’s no coincidence that the UK’s trustbusters have dropped the word “monopoly” from their name. 

Contemporary Britain can seem oddly complacent in the face of declining competition. True, it is not the only country to face the uncomfortable concentrations of market power that new technology and global capital make possible. 

Many advanced economies struggle with the “winner takes all” nature of the internet. 

Large parts of the UK’s competition mechanism are in any case delegated to Brussels. But even so, the country often contrives to drop the trustbusting ball. 

Take the ongoing dispute between Transport for London and Uber over whether the car-booking service should retain its taxi licence. 

TfL is up in arms about safety standards. But the real scandal here is the way Uber has been allowed to hoover up the London taxi market.  

Almost unseen, the US company has been able to turn a price-regulated black cab monopoly into an unregulated one where it increasingly dominates the capital’s streets. 

Facts on market shares are hard to obtain, in part because of Uber’s un-transparent structure. 

Fares for its services are paid not to a UK company, but to a Netherlands vehicle, which remits only sufficient money to the UK subsidiary to cover its costs and pay vestigial amounts of tax. 

Nonetheless, it is clear that Uber has built a very substantial position in the five years since it received a licence; it remains the only app-based service to have been granted one.

The service has 40,000 drivers on its network, four times the number of black cab drivers. The second largest non-black cab private hire operator in London, Addison Lee, has just 4,800 drivers on its books. 

Compare that, for instance, to supposedly highly regulated Paris. There, customers have a choice of numerous app-based services, including Uber, Taxify, Allocab and Le Cab, as well as traditional regulated taxis. 

Travis Kalanick, Uber’s founder, may talk about London as the “Champions League of transportation”. But it is also one of the company’s top three cities worldwide in terms of profitability. Unsurprisingly, perhaps, given that in this “competition”, the authorities have excluded its main rivals.

Other app-based services such as Taxify have been unable so far to obtain licences. Perhaps Uber has been treated as a guinea pig by the regulators. But if so, that careless decision may have allowed it to steal an uncatchable head start. 

Taxis are not the only area where competition has been allowed to take a back seat. Take the concentration of market power that occurred in the banking sector after the financial crisis, largely prompted by the merger of Lloyds TSB and HBOS. 

The CMA has placed its faith in limp behavioural remedies and backed away from any muscular changes such as break-ups. 

Or the telecoms sector, where the regulator allowed BT, the old national network, to buy EE and create a preponderant mobile operator without proposing any material steps to redress its evident market power. 

A recent study by the Social Market Foundation shows how the cumulative effect of market concentration increasingly threatens consumers’ interests. 

Out of 10 key markets accounting for 40 per cent of consumer spending, it found that eight — including groceries, mobile phones, gas and current accounts — were concentrated, meaning they were dominated by a small number of large companies. 

Only the car industry and the mortgage market were genuinely open, with no single operator in the former sector controlling more than 15 per cent of the market. Meanwhile, in telephone landlines, BT has about 80 per cent. 
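What “concentrated” means here can be made precise. One standard yardstick among competition regulators (whichever measure the SMF itself used) is the Herfindahl-Hirschman Index: the sum of the squared percentage market shares, running from near zero for a fragmented market to 10,000 for a pure monopoly, with scores above about 2,500 conventionally treated as highly concentrated. A minimal sketch, with hypothetical shares loosely echoing the figures above:

```python
# Minimal sketch of the Herfindahl-Hirschman Index (HHI), a standard
# measure of market concentration: the sum of squared percentage
# shares. 10,000 = pure monopoly; above ~2,500 is conventionally
# treated as highly concentrated.

def hhi(shares_percent: list[float]) -> float:
    return sum(s * s for s in shares_percent)

# Hypothetical share splits (illustrative, not the SMF study's data):
fragmented = [15, 14, 13, 12, 11, 10, 10, 8, 7]  # like the car market: no firm above 15%
dominated = [80, 10, 5, 5]                       # like BT's ~80% of landlines

print(hhi(fragmented))  # 1168: unconcentrated
print(hhi(dominated))   # 6550: highly concentrated
```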

Concentration and competition are not the same thing. In some sectors, such as groceries, it can be possible to have both because of the ease of switching. 

But in many sectors the concentrated market power erodes competition to the detriment of consumers. 

The lack of competition in banking, for instance, costs customers £6bn a year, or £116 each, according to a competition inquiry in 2016. In the energy sector, another inquiry found that Britons are paying £1.7bn too much each year for their power. Despite official investigations galore, neither has been addressed. 

Like the famous line about empire, Britain appears to be acquiring oligopolies in a fit of absence of mind. It is a dangerous inattention. 

For these concentrations do not just hit consumers in the wallet. They exact a cost in terms of public loss of confidence in private business and free markets. A state that believed in either would bust more trusts.

Friday 12 August 2016

Trusts keep wealth in the hands of the few. It’s time to stop this tax abuse

Richard Murphy in The Guardian

If there is a name that is synonymous with tax avoidance in the UK, it is that of the Duke of Westminster. The duke in question was, admittedly, the second duke, who in 1936 won an infamous tax case that permitted him to pay his gardeners in a way that avoided a tax liability. He achieved abiding fame as a consequence of the opinion of Lord Tomlin, who in his judgment on that case said: “Every man is entitled if he can to order his affairs so that the tax attracted under the appropriate act is less than it otherwise would be. If he succeeds in ordering them so as to secure this result, then, however unappreciative the commissioners of Inland Revenue or his fellow taxpayers may be of his ingenuity, he cannot be compelled to pay an increased tax.”

That statement has, to a large degree, been both the foundation of and justification for all tax avoidance activity in the UK since. That this activity continues is evidenced by the fact that the sixth duke is said to have left an estate worth £9.9bn upon his death this week to his son and yet, despite the fact that inheritance tax is supposedly payable on all estates on death worth more than £325,000, it has been widely reported that very little tax will be due in this case. It seems that the sixth duke has put the second to shame: his forebear saved a few pounds on his wages bill while the sixth has avoided something approaching £4bn. He may in the process have even outdone the fifth duke, who argued that the fourth duke died of a war wound 23 years after he suffered it, to escape all charges on the estate in the 1960s.
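The “approaching £4bn” figure is easy to sanity-check. UK inheritance tax is charged at 40 per cent of an estate’s value above the £325,000 nil-rate band, so a wholly unsheltered £9.9bn estate would owe almost exactly that. A back-of-envelope sketch, ignoring every other relief and exemption:

```python
# Back-of-envelope check of the "approaching £4bn" figure, assuming the
# whole £9.9bn estate were taxed at the standard 40% rate above the
# £325,000 nil-rate band, with no other reliefs or exemptions.

ESTATE = 9_900_000_000
NIL_RATE_BAND = 325_000
IHT_RATE = 0.40

tax_due = IHT_RATE * (ESTATE - NIL_RATE_BAND)
print(f"£{tax_due:,.0f}")  # £3,959,870,000 -- roughly £4bn
```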

His likely motives for doing so can be easily summarised: there may be greed involved; a belief that the duke’s heirs are better entitled to this property than anyone else; and a hostility to any claim that the state might make on property that has been apparent in the UK aristocracy since the time of the Crusades.


The English legal concept of a trust is believed to have been developed during that era, when knights departing the country with no certainty of returning wanted to ensure that their land passed to those who they thought to be their rightful heirs without interference from the Crown. Trusts achieved that goal and the concept has remained in existence ever since, representing the continual struggle of those with wealth to subvert the rule of law that may apply to others but that they believe should not apply to them.

Recent political challenges have not ended the resulting abuse. Labour tried to introduce effective tax charges on inheritance in the 1970s, the Conservatives undermined them a decade later, and every subsequent attempt to tackle tax abuse using trusts (and Gordon Brown made many), has by and large left existing arrangements intact, only seeking to prevent abuse in new arrangements. As if to add insult to injury, the 2013 general anti-abuse rule, which was introduced by the coalition government and supposedly negated the decision by Lord Tomlin noted above, cannot be applied retrospectively: anything done by a duke before that date is outside of its scope.

So why has this tax avoidance been allowed to continue? First, it’s because no one in the UK has, since 1980, had the political will to tackle the use and abuse of trusts – even though continental Europe has shown it is perfectly possible to run an economy without them. Second, it’s down to the continuing power of the aristocracy and their chosen professional agents (lawyers, accountants, bankers and wealth managers) who have been willing to compromise themselves in exchange for fees to perpetuate the situation. And third, it’s because the Conservatives, in particular, have been keen to let the situation continue unchanged as they support the largely unfettered inheritance of substantial wealth. 

Another issue is that we know so little about trusts, even though they are at least as powerful as companies and even more commonly used for tax abuse. This is because of a mistaken perception of privacy, which should only be due to individuals, not to artificial arrangements created by law, which is what trusts are. This can be corrected: we need transparency, and that means a full register of trusts and their accounts on public record above modest financial limits, as for companies.

What can be done about this? In addition to the points already noted, the obvious solution is to abolish the inheritance tax reliefs that permit this tax avoidance, whether for trusts themselves or for those who own private companies and agricultural land. These reliefs assume that the children of the wealthy are the rightful and best next generation of managers of these assets, and so let the assets pass to them tax free, perpetuating wealth concentration in the process.

To put it another way, 800 years of claims by an elite to be above the law applicable to everyone else so that wealth can remain in the hands of the few has to be brought to an end. And if now is not the time to do it, I am really not sure when it will be.

Tuesday 28 June 2016

Why bad ideas refuse to die

Steven Poole in The Guardian

In January 2016, the rapper BoB took to Twitter to tell his fans that the Earth is really flat. “A lot of people are turned off by the phrase ‘flat earth’,” he acknowledged, “but there’s no way u can see all the evidence and not know … grow up.” At length the astrophysicist Neil deGrasse Tyson joined in the conversation, offering friendly corrections to BoB’s zany proofs of non-globism, and finishing with a sarcastic compliment: “Being five centuries regressed in your reasoning doesn’t mean we all can’t still like your music.”

Actually, it’s a lot more than five centuries regressed. Contrary to what we often hear, people didn’t think the Earth was flat right up until Columbus sailed to the Americas. In ancient Greece, the philosophers Pythagoras and Parmenides had already recognised that the Earth was spherical. Aristotle pointed out that you could see some stars in Egypt and Cyprus that were not visible at more northerly latitudes, and also that the Earth casts a curved shadow on the moon during a lunar eclipse. The Earth, he concluded with impeccable logic, must be round.

The flat-Earth view was dismissed as simply ridiculous – until very recently, with the resurgence of apparently serious flat-Earthism on the internet. An American named Mark Sargent, formerly a professional videogamer and software consultant, has had millions of views on YouTube for his Flat Earth Clues video series. (“You are living inside a giant enclosed system,” his website warns.) The Flat Earth Society is alive and well, with a thriving website. What is going on?

Many ideas have been brilliantly upgraded or repurposed for the modern age, and their revival seems newly compelling. Some ideas from the past, on the other hand, are just dead wrong and really should have been left to rot. When they reappear, what is rediscovered is a shambling corpse. These are zombie ideas. You can try to kill them, but they just won’t die. And their existence is a big problem for our normal assumptions about how the marketplace of ideas operates.

The phrase “marketplace of ideas” was originally used as a way of defending free speech. Just as traders and customers are free to buy and sell wares in the market, so freedom of speech ensures that people are free to exchange ideas, test them out, and see which ones rise to the top. Just as good consumer products succeed and bad ones fail, so in the marketplace of ideas the truth will win out, and error and dishonesty will disappear.

There is certainly some truth in the thought that competition between ideas is necessary for the advancement of our understanding. But the belief that the best ideas will always succeed is rather like the faith that unregulated financial markets will always produce the best economic outcomes. As the IMF chief Christine Lagarde put this standard wisdom laconically in Davos: “The market sorts things out, eventually.” Maybe so. But while we wait, very bad things might happen.

Zombies don’t occur in physical marketplaces – take technology, for example. No one now buys Betamax video recorders, because that technology has been superseded and has no chance of coming back. (The reason that other old technologies, such as the manual typewriter or the acoustic piano, are still in use is that, according to the preferences of their users, they have not been superseded.) So zombies such as flat-Earthism simply shouldn’t be possible in a well‑functioning marketplace of ideas. And yet – they live. How come?

One clue is provided by economics. It turns out that the marketplace of economic ideas itself is infested with zombies. After the 2008 financial crisis had struck, the Australian economist John Quiggin published an illuminating work called Zombie Economics, describing theories that still somehow shambled around even though they were clearly dead, having been refuted by actual events in the world. An example is the notorious efficient markets hypothesis, which holds, in its strongest form, that “financial markets are the best possible guide to the value of economic assets and therefore to decisions about investment and production”. That, Quiggin argues, simply can’t be right. Not only was the efficient markets hypothesis refuted by the global meltdown of 2007–8, in Quiggin’s view it actually caused it in the first place: the idea “justified, and indeed demanded, financial deregulation, the removal of controls on international capital flows, and a massive expansion of the financial sector. These developments ultimately produced the global financial crisis.”

Even so, an idea will have a good chance of hanging around as a zombie if it benefits some influential group of people. The efficient markets hypothesis is financially beneficial for bankers who want to make deals unencumbered by regulation. A similar point can be made about the privatisation of state-owned industry: it is seldom good for citizens, but is always a cash bonanza for those directly involved.

The marketplace of ideas, indeed, often confers authority through mere repetition – in science as well as in political campaigning. You probably know, for example, that the human tongue has regional sensitivities: sweetness is sensed on the tip, saltiness and sourness on the sides, and bitter at the back. At some point you’ve seen a scientific tongue map showing this – they appear in cookery books as well as medical textbooks. It’s one of those nice, slightly surprising findings of science that no one questions. And it’s rubbish.

 
A fantasy map of a flat earth. Photograph: Antar Dayal/Getty Images/Illustration Works

As the eminent professor of biology Stuart Firestein explained in his 2012 book Ignorance: How It Drives Science, the tongue-map myth arose because of a mistranslation of a 1901 German physiology textbook. Regions of the tongue are just “very slightly” more or less sensitive to each of the four basic tastes, but each can sense all of them. The translation “considerably overstated” the original author’s claims. And yet the mythical tongue map has endured for more than a century.

One of the paradoxes of zombie ideas, though, is that they can have positive social effects. The answer is not necessarily to suppress them, since even apparently vicious and disingenuous ideas can lead to illuminating rebuttal and productive research. Few would argue that a commercial marketplace needs fraud and faulty products. But in the marketplace of ideas, zombies can actually be useful. Or if not, they can at least make us feel better. That, paradoxically, is what I think the flat-Earthers of today are really offering – comfort.

Today’s rejuvenated flat-Earth philosophy, as promoted by rappers and YouTube videos, is not simply a recrudescence of pre-scientific ignorance. It is, rather, the mother of all conspiracy theories. The point is that everyone who claims the Earth is round is trying to fool you, and keep you in the dark. In that sense, it is a very modern version of an old idea.

As with any conspiracy theory, the flat-Earth idea is introduced by way of a handful of seeming anomalies, things that don’t seem to fit the “official” story. Have you ever wondered, the flat-Earther will ask, why commercial aeroplanes don’t fly over Antarctica? It would, after all, be the most direct route from South Africa to New Zealand, or from Sydney to Buenos Aires – if the Earth were round. But it isn’t. There is no such thing as the South Pole, so flying over Antarctica wouldn’t make any sense. Plus, the Antarctic treaty, signed by the world’s most powerful countries, bans any flights over it, because something very weird is going on there. So begins the conspiracy sell. Well, in fact, some commercial routes do fly over part of the continent of Antarctica. None fly over the South Pole itself because aviation rules require any aircraft taking such a route to carry expensive survival equipment for all passengers on board – which would obviously be prohibitive for a passenger jet.

OK, the flat-Earther will say, then what about the fact that photographs taken from mountains or hot-air balloons don’t show any curvature of the horizon? It is perfectly flat – therefore the Earth must be flat. Well, a reasonable person will respond, it looks flat because the Earth, though round, is really very big. But photographs taken from the International Space Station in orbit show a very obviously curved Earth.
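The reasonable person’s reply can even be quantified. For a sphere of the Earth’s radius, standard approximations (valid while the viewing height h is much smaller than the radius R) put the horizon at a distance of about √(2Rh), dipping below eye level by an angle of about √(2h/R). A quick sketch of why a mountaintop view looks flat while an orbital one does not:

```python
import math

R = 6_371_000.0  # mean Earth radius in metres

def horizon_distance(h: float) -> float:
    """Distance to the geometric horizon from height h metres: ~sqrt(2*R*h)."""
    return math.sqrt(2 * R * h)

def horizon_dip_degrees(h: float) -> float:
    """Angle by which the horizon sits below eye level: ~sqrt(2*h/R)."""
    return math.degrees(math.sqrt(2 * h / R))

# Eye level, a tall mountain, a high-altitude balloon, the ISS:
for h in (2, 3_000, 30_000, 400_000):
    print(f"{h:>7} m: horizon {horizon_distance(h) / 1000:6.0f} km away, "
          f"dips {horizon_dip_degrees(h):5.2f} degrees below eye level")
```

From 3,000 metres the horizon dips less than two degrees, which the eye reads as flat; from the International Space Station’s 400 kilometres it dips by roughly 20, and the curve is unmistakable.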

And here is where the conspiracy really gets going. To a flat-Earther, any photograph from the International Space Station is just a fake. So too are the famous photographs of the whole round Earth hanging in space that were taken on the Apollo missions. Of course, the Moon landings were faked too. This is a conspiracy theory that swallows other conspiracy theories whole. According to Mark Sargent’s “enclosed world” version of the flat-Earth theory, indeed, space travel had to be faked because there is actually an impermeable solid dome enclosing our flat planet. The US and USSR tried to break through this dome by nuking it in the 1950s: that’s what all those nuclear tests were really about.

 
Flat-Earthers regard as fake any photographs of the Earth that were taken on the Apollo missions. Photograph: Alamy

The intellectual dynamic here is one of rejection and obfuscation. A lot of ingenuity evidently goes into the elaboration of modern flat-Earth theories to keep them consistent. It is tempting to suppose that some of the leading writers (or, as fans call them, “researchers”) on the topic are cynically having some intellectual fun, but there are also a lot of true believers on the messageboards who find the notion of the “globist” conspiracy somehow comforting and consonant with their idea of how the world works. You might think that the really obvious question here, though, is: what purpose would such an incredibly elaborate and expensive conspiracy serve? What exactly is the point?

It seems to me that the desire to believe such stuff stems from a deranged kind of optimism about the capabilities of human beings. It is a dark view of human nature, to be sure, but it is also rather awe-inspiring to think of secret agencies so single-minded and powerful that they really can fool the world’s population over something so enormous. Even the pro-Brexit activists who warned one another on polling day to mark their crosses with a pen so that MI5 would not be able to erase their votes, were in a way expressing a perverse pride in the domination of Britain’s spookocracy. “I literally ran out of new tin hat topics to research and I STILL wouldn’t look at this one without embarrassment,” confesses Sargent on his website, “but every time I glanced at it there was something unresolved, and once I saw the near perfection of the whole plan, I was hooked.” It is rather beautiful. Bonkers, but beautiful. As the much more noxious example of Scientology also demonstrates, it is all too tempting to take science fiction for truth – because narratives always make more sense than reality.

We know that it’s a good habit to question received wisdom. Sometimes, though, healthy scepticism can run over into paranoid cynicism, and giant conspiracies seem oddly consoling. One reason why myths and urban legends hang around so long seems to be that we like simple explanations – such as that immigrants are to blame for crumbling public services – and are inclined to believe them. The “MMR causes autism” scare perpetrated by Andrew Wakefield, for example, had the apparent virtue of naming a concrete cause (vaccination) for a deeply worrying and little-understood syndrome (autism). Years after it was shown that there was nothing to Wakefield’s claims, there is still a strong and growing “anti-vaxxer” movement, particularly in the US, which poses a serious danger to public health. The benefits of immunisation, it seems, have been forgotten.

The yearning for simple explanations also helps to account for the popularity of outlandish conspiracy theories that paint a reassuring picture of all the world’s evils as being attributable to a cabal of supervillains. Maybe a secret society really is running the show – in which case the world at least has a weird kind of coherence. Hence, perhaps, the disappointed amazement among some of those who had not expected their protest votes for Brexit to count.

And what happens when the world of ideas really does operate as a marketplace? It happens to be the case that many prominent climate sceptics have been secretly funded by oil companies. The idea that there is some scientific controversy over whether burning fossil fuels has contributed in large part to the present global warming (there isn’t) is an idea that has been literally bought and sold, and remains extraordinarily successful. That, of course, is just a particularly dramatic example of the way all western democracies have been captured by industry lobbying and party donations, in which friendly consideration of ideas that increase the profits of business is simply purchased, like any other commodity. If the marketplace of ideas worked as advertised, not only would this kind of corruption be absent, it would be impossible in general for ideas to stay rejected for hundreds or thousands of years before eventually being revived. Yet that too has repeatedly happened.

While the return of flat-Earth theories is silly and rather alarming, it also illustrates some real and deep issues about human knowledge. The first issue is one of trust. How, after all, do you or I know that the Earth really is round? Essentially, we take it on trust. We may have experienced some common indications of it ourselves, but we accept the explanations of others. The experts all say the Earth is round; we believe them, and get on with our lives. Rejecting the economic consensus that Brexit would be bad for the UK, Michael Gove said that the British public had had enough of experts (or at least of experts who lurked in acronymically named organisations), but the truth is that we all depend on experts for most of what we think we know.

The second issue is that we cannot actually know for sure that the way the world appears to us is not the result of some giant conspiracy or deception. The modern flat-Earth theory comes quite close to an even more all-encompassing species of conspiracy theory. As some philosophers have argued, it is not entirely impossible that God created the whole universe, including fossils, ourselves and all our (false) memories, only five minutes ago. Or it might be the case that all my sensory impressions are being fed to my brain by a clever demon intent on deceiving me (Descartes) or by a virtual-reality program controlled by evil sentient artificial intelligences (The Matrix).

The resurgence of flat-Earth theory has also spawned many web pages that employ mathematics, science, and everyday experience to explain why the world actually is round. This is a boon for public education. And we should not give in to the temptation to conclude that belief in a conspiracy is prima facie evidence of stupidity. Evidently, conspiracies really happen. Members of al-Qaida really did conspire in secret to fly planes into the World Trade Center. And, as Edward Snowden revealed, the American and British intelligence services really did conspire in secret to intercept the electronic communications of millions of ordinary citizens. Perhaps the most colourful official conspiracy that we now know of happened in China. When the half-millennium-old Tiananmen Gate was found to be falling down in the 1960s, it was secretly replaced, bit by bit, with an exact replica – a successful conspiracy involving nearly 3,000 people, who kept the secret for years.

Indeed, a healthy openness to conspiracy may be said to underlie much honest intellectual inquiry. This is how the physicist Frank Wilczek puts it: “When I was growing up, I loved the idea that great powers and secret meanings lurk behind the appearance of things.” Newton’s grand idea of an invisible force (gravity) running the universe was definitely a cosmological conspiracy theory in this sense. Yes, many conspiracy theories are zombies – but so is the idea that conspiracies never happen.

 
‘When the half-millennium-old Tiananmen Gate was found to be falling down in the 1960s, it was secretly replaced, bit by bit, with an exact replica’ Photograph: Kevin Frayer/Getty Images

Things are better, one assumes, in the rarefied marketplace of scientific ideas. There, the revered scientific journals have rigorous editorial standards. Zombies and other market failures are thereby prevented. Not so fast. Remember the tongue map. It turns out that the marketplace of scientific ideas is not perfect either.

The scientific community operates according to the system of peer review, in which an article submitted to a journal will be sent out by the editor to several anonymous referees who are expert in the field and will give a considered view on whether the paper is worthy of publication, or will be worthy if revised. (In Britain, the Royal Society began to seek such reports in 1832.) The barriers to entry for the best journals in the sciences and humanities mean that – at least in theory – it is impossible to publish clownish, evidence-free hypotheses.

But there are increasing rumblings in the academic world itself that peer review is fundamentally broken – even that it actively suppresses good new ideas while letting through a multitude of very bad ones. “False positives and exaggerated results in peer-reviewed scientific studies have reached epidemic proportions in recent years,” reported Scientific American magazine in 2011. Indeed, the writer of that column, a professor of medicine named John Ioannidis, had previously published a famous paper titled Why Most Published Research Findings Are False. The issues, he noted, are particularly severe in healthcare research, in which conflicts of interest arise because studies are funded by large drug companies, but there is also a big problem in psychology.
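Ioannidis’s startling title rests on simple arithmetic: if most hypotheses that get tested are false to begin with, and studies are underpowered, then false positives can outnumber true positives among the “significant” results that reach print. Here is a minimal sketch of that calculation in Python – the prior probability, significance threshold and statistical power below are illustrative assumptions, not figures from his paper:

```python
# Back-of-the-envelope arithmetic behind "most published findings are false".
# All three numbers are illustrative assumptions, not Ioannidis's own figures.

prior = 0.1   # fraction of tested hypotheses that are actually true
alpha = 0.05  # significance threshold, i.e. the false-positive rate
power = 0.35  # chance a real effect is detected (often low in practice)

true_positives = prior * power          # real effects, correctly detected
false_positives = (1 - prior) * alpha   # non-effects that pass the test anyway

# Probability that a "significant" finding reflects a real effect:
ppv = true_positives / (true_positives + false_positives)
print(f"Chance a positive result is real: {ppv:.0%}")  # about 44% here
```

With these inputs, fewer than half of the positive results are true – and that is before publication bias, discussed below, makes the published record look even rosier than the research itself.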

Take the widely popularised idea of priming. In 1996, a paper was published claiming that experimental subjects who had been verbally primed to think of old age by being made to think about words such as bingo, Florida, grey, and wrinkles subsequently walked more slowly when they left the laboratory than those who had not been primed. It was a dazzling idea, and led to a flurry of other findings that priming could affect how well you did on a quiz, or how polite you were to a stranger. In recent years, however, researchers have become suspicious, and have been unable to reproduce the findings of many of the early studies. This is not definitive proof of falsity, but it does show that publication in a peer-reviewed journal is no guarantee of reliability. Psychology, some argue, is currently going through a crisis in replicability, which Daniel Kahneman has called a looming “train wreck” for the field as a whole.

Could priming be a future zombie idea? Well, most people think it unlikely that all such priming effects will be refuted, since there is now such a wide variety of studies on them. The more interesting problem is to work out what scientists call the idea’s “ecological validity” – that is, how well do the effects translate from the artificial simplicity of the lab situation to the ungovernable messiness of real life? This controversy in psychology just shows science working as it should – being self-correcting. One marketplace-of-ideas problem here, though, is that papers with surprising and socially intriguing results will be described throughout the media, and lauded as definitive evidence in popularising books, as soon as they are published, and long before awkward second questions begin to be asked.

It would be sensible, for a start, to make an apparently trivial rhetorical adjustment: retire the popular phrase “studies show …” and limit ourselves to phrases such as “studies suggest” or “studies indicate”. After all, “showing” strongly implies proving, which is all too rare an activity outside mathematics. Studies can always be reconsidered. That is part of their power.

Nearly every academic inquirer I talked to while researching this subject says that the interface of research with publishing is seriously flawed. Partly because the incentives are all wrong – a “publish or perish” culture rewards academics for quantity of published research over quality. And partly because of the issue of “publication bias”: the studies that get published are the ones that have yielded hoped-for results. Studies that fail to show what they hoped for end up languishing in desk drawers.
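The distorting effect of publication bias is easy to demonstrate with a toy simulation. The sketch below imagines a field studying an effect that does not actually exist, where only “significant” results reach a journal; the particular numbers (1,000 studies, 20 subjects each) are arbitrary choices for illustration:

```python
# Toy demonstration of publication bias: even when the true effect is zero,
# the studies that clear the significance bar all report sizeable effects.
import random
import statistics

random.seed(1)  # fixed seed so the run is repeatable

published = []
for _ in range(1000):  # 1,000 small studies of a non-existent effect
    sample = [random.gauss(0, 1) for _ in range(20)]  # true mean is 0
    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / len(sample) ** 0.5
    if abs(mean / se) > 1.96:  # roughly p < 0.05
        published.append(mean)  # only "positive" results get written up

print(f"{len(published)} of 1,000 studies published")
print(f"average published effect size: {statistics.mean(map(abs, published)):.2f}")
```

Around one study in twenty clears the bar by luck alone, and every one of those reports what looks like a solid effect. A reader with access only to the journals – and not to the desk drawers – would conclude the effect is real and substantial.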

One reform suggested by many people to counteract publication bias would be to encourage the publication of more “negative findings” – papers where a hypothesis was not backed up by the experiment performed. One problem, of course, is that such findings are not very exciting. Negative results do not make headlines. (And they sound all the duller for being called “negative findings”, rather than being framed as positive discoveries that some ideas won’t fly.)

The publication-bias issue is even more pressing in the field of medicine, where it is estimated that the results of around half of all trials conducted are never published at all, because their results are negative. “When half the evidence is withheld,” writes the medical researcher Ben Goldacre, “doctors and patients cannot make informed decisions about which treatment is best.” Accordingly, Goldacre has kickstarted a campaigning group named AllTrials to demand that all results be published.

When lives are not directly at stake, however, it might be difficult to publish more negative findings in other areas of science. One idea, floated by the Economist, is that “Journals should allocate space for ‘uninteresting’ work, and grant-givers should set aside money to pay for it.” It sounds splendid, to have a section in journals for tedious results, or maybe an entire journal dedicated to boring and perfectly unsurprising research. But good luck getting anyone to fund it.

The good news, though, is that some of the flaws in the marketplace of scientific ideas might be hidden strengths. It’s true that some people think peer review, at its glacial pace and with its bias towards the existing consensus, works to actively repress new ideas that are challenging to received opinion. Notoriously, for example, the paper that first announced the invention of graphene – a way of arranging carbon in a sheet only a single atom thick – was rejected by Nature in 2004 on the grounds that it was simply “impossible”. But that idea was too impressive to be suppressed; in fact, the authors of the graphene paper had it published in Science magazine only six months later. Most people have faith that very well-grounded results will find their way through the system. Yet it is right that doing so should be difficult. If this marketplace were more liquid and efficient, we would be overwhelmed with speculative nonsense. Even peremptory or aggressive dismissals of new findings have a crucial place in the intellectual ecology. Science would not be so robust a means of investigating the world if it eagerly embraced every shiny new idea that comes along. It has to put on a stern face and say: “Impress me.” Great ideas may well face a lot of necessary resistance, and take a long time to gain traction. And we wouldn’t wish things to be otherwise.

In many ways, then, the marketplace of ideas does not work as advertised: it is not efficient, there are frequent crashes and failures, and dangerous products often win out, to widespread surprise and dismay. It is important to rethink the notion that the best ideas reliably rise to the top: that itself is a zombie idea, which helps entrench powerful interests. Yet even zombie ideas can still be useful when they motivate energetic refutations that advance public education. Yes, we may regret that people often turn to the past to renew an old theory such as flat-Earthism, which really should have stayed dead. But some conspiracies are real, and science is always engaged in trying to uncover the hidden powers behind what we see. The resurrection of zombie ideas, as well as the stubborn rejection of promising new ones, can both be important mechanisms for the advancement of human understanding.

Tuesday 19 January 2016

People judge you based on 2 criteria when they first meet you

Jenna Goudreau in The Independent


People size you up in seconds, but what exactly are they evaluating?

Harvard Business School professor Amy Cuddy has been studying first impressions alongside fellow psychologists Susan Fiske and Peter Glick for more than 15 years, and has discovered patterns in these interactions.

In her new book, "Presence," Cuddy says people quickly answer two questions when they first meet you:

Can I trust this person?
Can I respect this person?


Psychologists refer to these dimensions as warmth and competence respectively, and ideally you want to be perceived as having both.

Interestingly, Cuddy says that most people, especially in a professional context, believe that competence is the more important factor. After all, they want to prove that they are smart and talented enough to handle your business.

But in fact warmth, or trustworthiness, is the most important factor in how people evaluate you. "From an evolutionary perspective," Cuddy says, "it is more crucial to our survival to know whether a person deserves our trust."

It makes sense when you consider that in caveman days it was more important to figure out if your fellow man was going to kill you and steal all your possessions than if he was competent enough to build a good fire.

While competence is highly valued, Cuddy says it is evaluated only after trust is established. And focusing too much on displaying your strength can backfire.

Cuddy says MBA interns are often so concerned about coming across as smart and competent that it can lead them to skip social events, not ask for help, and generally come off as unapproachable. 

These overachievers are in for a rude awakening when they don't get the job offer because nobody got to know and trust them as people.

"If someone you're trying to influence doesn't trust you, you're not going to get very far; in fact, you might even elicit suspicion because you come across as manipulative," Cuddy says.

"A warm, trustworthy person who is also strong elicits admiration, but only after you've established trust does your strength become a gift rather than a threat." 

Friday 25 September 2015

Bomb both sides in Syria and we’ll fix the country in a jiffy

We could also bomb Hell, and within a month the residents would say ‘We were better off under Satan’

Mark Steel in The Independent


Some people get confused by events in Syria, but they’re not that complicated. Quite simply, we need to bomb somewhere or other out there, like we should have done two years ago. Back then we should have dropped bombs to support the Isis rebels fighting against the evil Assad. But as we didn’t bother, we now need to put that right by bombing the Isis rebels, and protecting Assad.

Because if only we had bombed Assad back then, it would be much easier to bomb Isis and their allies now, as we would be one of their allies so we could bomb ourselves. And we could do that without the fuss of going all the way to Syria, which would cut down on carbon emissions as well.

Also, we could ask Isis if they had any bombs left over that we had given them, “as we need them back to bomb you please”.

The change has happened because back then, you may recall, Assad was so unspeakably evil he had gassed his own people. But now we have decided we support Assad, so I suppose we have found out the gas wasn’t so much a chemical weapon as a Syrian version of Febreze, which has left Aleppo with an alluring scent of lemon.

Former UN Secretary-General Kofi Annan warned against bombing, saying “Syria is not Libya, it won’t implode but explode beyond its borders.” So that might not be too cheery, if he is saying things will not necessarily go as smoothly as they have turned out in Libya.

If you were really fussy, you could look for another example of a western invasion in the Syria/Iraq region in the recent past, and find out how well that went. But where we went wrong in Libya and Iraq is that we only bombed one side.

This is the sort of pacifist behaviour that causes the trouble. We should have bombed all the different sides, to make sure we annihilate the right people.

Sometimes we have tried this to a certain extent, so at different times we have armed Assad and Gaddafi and Saddam and Bin Laden and then bombed them for using the bombs we had sold them. But it is not organised properly and leaves the poor sods confused.

Instead of supporting Arab dictators for 20 years, then opposing them for three, and then supporting them again, we should arrange it on a rota system. We could bomb them on Mondays, Wednesdays and Fridays, bomb their opponents on Tuesdays, Thursdays and Saturdays, and leave Sundays for US construction companies to make some money rebuilding the stuff we have bombed, so there is something new to bomb.

Otherwise we are left with the predicament Tony Blair finds himself in. He complains that we didn’t bomb Assad two years ago. But, in 2002, Blair invited Assad to stay at Buckingham Palace and praised his modernising outlook. If he had used my suggested system, he could have grovelled to him on Thursday, then bombed him in his bedroom on Friday. I’m sure the Queen wouldn’t have minded sleeping on a mate’s settee for a couple of weeks while builders repaired the damage.

The silly thing is, it’s now claimed there are secret units of the IRA – who have kept their weapons against the rules of the peace process. It would have kept them out of mischief if they had been asked to bomb Blair’s pals such as Assad and Gaddafi, as long as they did it on one of the agreed days, and it would have strengthened the Northern Ireland peace process as well.

There could also be a surprise element to which side we bomb, with vast commercial potential. Instead of the same predictable places popping up, there should be an international body that chooses the venue, with Sepp Blatter opening an envelope to reveal “next year the place we have to bomb as we can’t just do nothing is… Finland”.

Then, whenever someone suggests bombing Finland will make things worse, columnists and politicians and blokes in pubs can shout “well, we can’t do NOTHING”.

This argument, that we can’t do NOTHING, is powerful and well thought through, because it’s clear from Western military interventions in the Middle East that no matter how bad the situation is before we go there, we manage to make it worse. This must have taken immense planning in Libya, but was worth it because everyone seems to agree that most of the country looks back on their days under the foul, despotic, murderous tyranny of Gaddafi with a dreamy nostalgic affection.

We could bomb Hell, and within a month the residents would say “We were better off under Satan. At least he kept the demons under some sort of control.”

Maybe the problem is we are not entirely trusted. This goes to show what a touchy people they are out there. We do all we can to support the spread of democracy by arming the royal family of Saudi Arabia and the Amir of Kuwait and the honourable folk who rule Qatar, and go out of our way to support people with titles such as “Mighty Wizard of Eternal Vengeance and Holy uber-King who can make up laws as he goes along, Divinely Grand Swisher of the Majestic Whip and his Million Wives of Bahrain”, and the little sods still doubt our honourable intentions.

But now there is an even more urgent reason to back the bombing of somewhere or other, which is that we must do it for the refugees. The Sun newspaper, in particular, has been running a campaign urging that we “Do it for Aylan”, the three-year-old lad who drowned as his family fled from the horrors of Isis.

I suppose they must have spoken to Aylan’s family, who would have told The Sun that bombing somewhere or other is exactly what he would have wanted.