
Showing posts with label whistleblower. Show all posts

Monday, 6 May 2024

Why don’t auditors find fraud?

Stephen Foley in the FT 

For decades, investors have lamented how rarely external auditors uncover corporate fraud. From Enron to Wirecard, the cry after each scandal is, where were the auditors? The Association of Certified Fraud Examiners’ biennial report on how workplace fraud gets detected has typically shown auditors are the ones uncovering the wrongdoing only 4 per cent of the time. 

Bad news. The latest report out a few weeks ago said the number is down to 3 per cent. Whistleblower hotlines and other internal controls may have helped some companies themselves discover some malfeasance earlier, but what about when management is the perpetrator or a corporate culture is rotten? A survey of investors by the Center for Audit Quality, a trade group for large accounting firms, found that 57 per cent thought the current system “frequently” failed to detect illegal acts. 

Regulators fear auditors are failing in their role as a last line of defence for investors against corporate shenanigans. Audit firms argue that company executives are responsible for the accuracy of financial statements and that an auditor’s role is only to provide reasonable assurance — not a guarantee — that a financial statement is free from material misstatement. 

It is an argument that has prompted the US Securities and Exchange Commission’s chief accountant, Paul Munter, to exclaim to me on more than one occasion that he is fed up hearing from auditors what they do not do. 

But a series of proposals to clarify and extend auditors’ responsibilities has now been made. In the US, the Public Company Accounting Oversight Board is revamping rules on how auditors must look for and deal with evidence of a client’s non-compliance with laws and regulations (Noclar, in the jargon). The intent is to force auditors to cast a wider net for matters that could have a material effect on a company’s financials, even indirectly by leading to big fines or regulatory action that threatens the business. 

Audit firms have responded that they cannot be expected to make legal judgments, and that the huge amount of extra work implied by the Noclar proposal as currently drafted probably will not uncover anything significant that current procedures do not already. 

A narrower proposal in the UK — which says auditors do not have to probe every minor law or regulation, and can use management’s own compliance programmes as a starting point — has elicited almost as thunderous a set of comment letters in opposition. 

The latest move is by the International Auditing and Assurance Standards Board, which sets rules that are used as a template by scores of countries around the world. It has proposed strengthening standards on fraud detection to emphasise that auditors must look for financial misstatements that might not be “quantitatively material” but which might be “qualitatively material”, depending on who instigated the fraud and why it was perpetrated. 

The emergence of all these proposals is no coincidence, and it is not as if audit firms themselves do not see room for improvement. PwC last year promised it would overhaul its fraud detection procedures and probe its clients’ whistleblower programmes more closely, among other reforms to boost audit quality. PwC boss Tim Ryan tried and failed to get all the Big Four firms to make a common pledge on these issues. 

Other leaders still talk of an “expectations gap” between what investors want an audit to be and what it really is, as if it is the investors that need to be educated instead of the profession that needs to change. 

An alternative response to some of the current proposals would embrace them to strengthen the hand of auditors. They provide new justification to pry open clients’ businesses, push back on hostile finance chiefs and chief executives, and flag more matters of concern to directors, to investors or to the authorities — to follow through on the professional scepticism that is supposed to be at the heart of the auditors’ creed. There is room for agreement, even on the contentious Noclar proposal. 

Better still for auditors, there is evidence investors are willing to pay for a more robust service. The CAQ survey showed a majority would support auditors charging an additional 20 per cent or more to cover the extra work of rooting out non-compliance. 

A high-quality audit is sometimes called a “credence good”, because its value is difficult to calculate. But, as the shareholders of Enron, Wirecard and countless others will tell you, the cost of a bad audit can be so much more.

Tuesday, 30 May 2023

How to treat the Fraud Epidemic!

Kelly Richmond Pope in The Economist

It marks a spectacular fall from grace for a one-time Silicon Valley star. This week a court in California ruled that, Hail Mary appeals notwithstanding, Elizabeth Holmes must report to prison on May 30th to begin serving an 11-year sentence for fraud. Theranos, the startup Ms Holmes had founded in 2003, was worth $9bn at its peak but crashed after its much-vaunted blood-testing technology was shown not to work, and she ended up in the dock for deceiving investors.

Theranos is one of a long list of financial scandals that have made headlines in recent years. Also among these are the frauds at Wirecard, a German payments processor, and Abraaj, a Dubai-based private-equity firm, various crypto-heists, and a bonanza of misappropriation of government handouts to businesses during the covid-19 pandemic. So many frauds are there, and so big are the biggest, that pilfering a billion dollars does not guarantee a global headline. Chances are you haven’t heard of Outcome Health, a Chicago-based health-tech firm whose former CEO and president were recently convicted of defrauding clients, lenders and investors of roughly that amount of money.

Beneath the blockbuster frauds in the billions of dollars is an alarmingly long tail of smaller financial scams. Taken together, these add up to a huge global problem. Research by Crowe, a financial-advisory firm, and the University of Portsmouth, in England, suggests that fraud costs businesses and individuals across the world more than $5trn each year. That is nearly 60% of what the world spends annually on health care.

The drivers of fraud are many and complex. Sometimes it is down to pure greed. Sometimes it begins with a relatively innocuous attempt to paper over a small financial crack but spirals when that initial effort fails; some believe that’s how it started with Bernie Madoff’s giant Ponzi scheme. Market pressure and a desire to exceed analysts’ expectations can also play a part: after the global financial crisis of 2007-09, GE was fined $50m for artificially smoothing its profits to keep investors sweet. Accounting ruses like this, which fall in a grey area, are more common than outright fraud. Among tech startups there is even an established term for manipulating the numbers to buy you time to navigate the rocky road to financial respectability: “fake it till you make it.”

Fraud is an all-weather pursuit. Economic booms help fraudsters conceal creative accounting, such as exaggerated revenues. Recessions expose some of this wrongdoing, but they also spawn fresh shenanigans. As funding dries up, some owners and managers cook the books to stay in business. When survival is at stake, the line between what is acceptable and unacceptable when disclosing information or booking sales can become blurred.

World events can stoke fraud, too. At the height of the pandemic, an estimated $80bn of American taxpayer money handed out under the Paycheck Protection Programme, set up to assist struggling businesses, was stolen by fraudsters. The covid-induced increase in remote working has created new opportunities for miscreants. The 2022 KPMG Fraud Outlook concludes that the surge in working from home has reduced businesses’ ability to monitor employees’ behaviour. Geopolitics affects fraud, too. NATO countries experienced four times as many email-phishing attacks from Russia in 2022 as they did in 2020. Cybercrimes such as ransomware attacks have already transferred a staggering amount of wealth to illicit actors. The costs to businesses range from the theft of data, intellectual property and money to post-attack disruption, lost productivity and systems upgrades.

It is panglossian to think fraud can be eliminated, but more can be done to reduce it. Corporate boards and investors need to ask more questions. Investors are often too quick to take comfort from the presence of big names on the list of owners and directors. Some were clearly wowed by Theranos’s star-studded board, whose members included two former US secretaries of state and the ex-boss of Wells Fargo, a big bank.

Regulators need to be more sceptical, too. America’s Securities and Exchange Commission brushed aside a detailed and devastating analysis of Madoff’s business provided by a concerned fund manager, Harry Markopolos. Germany’s financial-markets regulator was similarly dismissive of the short-sellers and journalists who called out Wirecard.

The most effective change would be to do more to encourage whistleblowers. Uncovering falsified financial statements must start with someone noticing the fraudulent acts. When fraud happens, many people ask “Where were the auditors?”. But the question should be “Where were the whistleblowers?”

As important as sceptical investors, regulators and journalists can be, much fraud would be undetectable without someone on the inside willing to spill the beans. Research shows that more than 40% of frauds are discovered by a whistleblower. The Wirecard scandal came to light largely because of the bravery of Pav Gill, one of the company’s lawyers, who went to the press with his concerns. The Theranos fraud was brought to the attention of the authorities and the Wall Street Journal by whistleblowing employees (one of whom was the grandson of a former political bigwig on the board).

Too often, companies seek to silence whistleblowers, or portray them as mad, bad or both: Wirecard, for instance, fought back ferociously against Mr Gill’s allegations and the journalists who investigated them. Organisations need to create safe spaces where employees can voice their concerns about wrongdoing. Internal reporting channels need to be robust, and employees educated on how to use them. Creating an environment where whistleblowers are celebrated, not vilified, is critical. Companies should worry more about anyone who can circumvent the controls, such as senior leaders or star employees, than about those inclined to raise concerns.

Governments, too, could do more. Protections for whistleblowers have been recognised as part of international law since 2003 when the United Nations adopted the Convention Against Corruption, and this has since been ratified by 137 countries. In reality, legal protections are patchy. They are strongest in America, which offers bounties to whistleblowers who provide information that leads to fines or imprisonment. In much of Europe, and elsewhere, the law is still too soft on those who muzzle or retaliate against alarm-ringers.

Fraud can be reduced. But first we must better understand who commits it, educate people on how to report it, and then ensure that policies protect those who choose to come forward. Until we do, financial crime will remain a multi-trillion-dollar scourge.

Tuesday, 21 March 2023

From SVB to the BBC: why did no one see the crisis coming?

Michael Skapinker in The FT  

Silicon Valley Bank collapses after its investments in long-dated bonds made it vulnerable to interest rate rises. The BBC is thrown into chaos after suspending its top football pundit and colleagues abandon their posts in solidarity. JPMorgan Chase suffers reputational damage and lawsuits after keeping sex offender Jeffrey Epstein on as a client for five years after he pleaded guilty to soliciting prostitution, including from a minor. 

In all these cases, we can ask, as Queen Elizabeth II did on a visit to the London School of Economics during the global financial crisis in 2008: “Why did no one see it coming?” 

Did anyone in the BBC’s leadership ask whether, if they suspended Gary Lineker from presenting its top Saturday night football programme Match of the Day, other pundits might walk out too? Did SVB run through the risks attached to its investment policies if interest rates rose faster than expected? And why did JPMorgan accede to senior banker Jes Staley’s desire to keep Epstein on? These are dramatic examples of what can go wrong, but any organisation that fails to keep its possible risks under regular review could go the same way. 

All too often senior managers fail to consider the worst-case scenario. Why don’t they listen to doubters? 

Amy Edmondson, a professor at Harvard Business School, says sometimes it is because there are no doubters. Leadership groups become so locked into a “shared myth” that they ignore any suggestions they might be wrong. “We’ve got the well-known confirmation bias where we are predisposed to pick up signals, data, evidence that reinforce our current belief. And we will be filtering out disconfirming evidence,” she says. 

It is like taking the wrong route in a car. “You’re on the highway driving somewhere and you’re heading in the wrong direction, but you don’t know it until you’re just hit over the head by disconfirming data that you can’t miss: you suddenly cross a state line that you didn’t expect to cross.” 

This groupthink and confirmation bias is prevalent in the wider society, where people leap on any evidence to support their view on, for example, climate change, Edmondson says. “Oh my gosh, this is the coldest winter ever. What do you mean global warming?” 

In many cases, there are doubters, but they are either reluctant to raise their voices or, when they do, colleagues hesitate to join them. At JPMorgan, there were questions about Epstein. An internal email in 2010 asked: “Are you still comfortable with this client who is now a registered sex offender?” 

James Detert, a professor at the University of Virginia’s Darden School of Business, says evolution has hard-wired us not to deviate from our group. “If you think about our time on earth as a species, for most of it we lived in very small clans, bands, tribes, and our daily struggle was for survival, both around food security and physical safety. In that environment, if you were ostracised, you were going to die. There was no solo living in those days.” 

We carry this fear of being cast out into our workplaces, compounded by the experience of whistleblowers, who sometimes suffer retribution from their employers and are shunned by colleagues. Dissenters present their colleagues with an uncomfortable choice: either to view themselves as cowards for not speaking up too, or to regard the rebel as “some kind of crackpot”. The second is often easier. 

Isn’t the Lineker saga a counter-example? His colleagues supported him, forcing the BBC to quickly see how badly it had miscalculated. Detert says this was an unusual case. Celebrated footballers-turned-commentators are brands themselves, Lineker in particular. The BBC realised how much it needed him, and how easily he could have secured a contract with a rival. Usually, he says, rebels find themselves isolated. 

So what can leaders do to encourage doubters to speak up, to ensure they consider all the possible downsides of their strategies, and escape eventual humiliation or disaster? Detert is not a fan of appointing a “devil’s advocate” who is tasked with giving a contrary view. It is often clear that they are simply going through the motions. He prefers what he calls “joint evaluation”. As well as the preferred policy — investing in long-dated bonds, for example — senior managers should draw up a distinctively different policy and compare the two. This is more likely to show up the flaws in the preferred strategy. 

Simon Walker, whose roles have included head of communications at British Airways and spokesman for Queen Elizabeth, and Sue Williams, Scotland Yard’s former chief kidnap and hostage negotiator, told me at an event organised by the Financial Times’ business networking organisation that leaders should involve every function from communications to legal to HR when examining possible future crises. Detert agrees this can be valuable, provided the presence of often under-regarded departments such as HR is taken seriously. 

Leaders’ behaviour is a signal of whether they want staff to speak up. Edmondson says: “Leaders of organisations have to go out of their way to invite the dissenting view, the missed risk. Before we close down any conversation where there’s a decision, we need to say, without fail: ‘What are we missing?’ We say: ‘OK, let’s just say we’re wrong about this and it goes badly awry, what would have explained it?’” She recommends calling on people by name, asking what their thoughts are. 

Detert adds that office design can signal to staff that their thoughts are welcome: the leader sitting in open plan, or having bright stripes on the floor indicating the way to their office, or sitting at square tables without place names rather than at rectangular ones where their seat position makes it obvious they are in charge. 

How relevant are these workplace layouts when, post-lockdown, employees no longer come into the office every day? “That’s the $10mn question,” Detert says. On the one hand, remote working might be making it harder for leaders to read the signs that people are uneasy with a strategy. On the other, it could be that people find it easier to speak out from their own homes. They may also feel that other aspects of their lives, such as family, are now more important than work, which could encourage them to talk. 

Others think SVB’s relaxed remote-working culture, which meant senior executives were scattered across the US, contributed to its failure. Nicholas Bloom, a Stanford professor who has studied remote working, told the Financial Times: “It’s hard to have a challenging call over Zoom.” Hedging interest rate risk was more likely to come up over lunch or in small meetings. 

Leaders also need to persistently praise people who speak up. The penalties for doing so are often more obvious than the rewards. Those who keep their heads down are seldom blamed. As Warren Buffett said: “As a group, lemmings may have a rotten image, but no individual lemming has ever received bad press.”

Monday, 16 March 2020

How fighting an employer or becoming a whistleblower can lead to retaliation and undermining tactics

 Alicia Clegg in The FT

Caroline Barlow felt little emotion when she settled with the BBC last May and withdrew her employment tribunal claims over unequal pay and constructive dismissal. Just a crushing tiredness that left her shaky and sick and so disoriented that for a while she stopped driving. 

She now views her reaction as a kind of grieving, for her job and faith in an institution that she had revered. She entered the BBC’s pay review process suspecting that she was paid less than male heads of product doing jobs similar to her own, and received a 25 per cent rise, though with little explanation of how the figure was arrived at. So she used data protection law to view internal documents that indicated that even after the increase she would still be paid less. The assessors argued, without providing evidence, that she had skills she still needed to develop and the men had bigger roles. 

“Publicly the BBC was saying it had introduced a transparent process. Yet, it was made very clear to me that I’d only get salary information on my peers at a final tribunal hearing by court order,” she says. 

Like the journalist Carrie Gracie, who also challenged unequal pay at the BBC, Ms Barlow talks of her sense of entering a no-man’s-land of stonewalling and doublespeak, where evidence that she presented was watered down or selectively reported. She says that a strategic project described as “transforming” in a business case, for which she obtained executive committee sign-off, was trivialised as “a hygiene project” after she questioned her pay. She felt blocked by the slow progress of her grievance — she only received the outcome on her final day of employment — undermined in numerous small ways and made to feel unimportant. She became ill and was diagnosed with depression. 

Lawrence Davies, director of Equal Justice Solicitors, who acted for Ms Barlow, says such experiences are common. Most employers try to quash internal complaints to avoid exposing themselves legally, should the employee sue. Yet while employers uphold only 1 per cent of grievances, he says, 65-70 per cent of complainants who persevere to an employment tribunal ultimately win, though the strain can be immense. 

Kathy Ahern, a retired mental health nurse and academic, studied the psychological toll of challenging an employer after discovering that nurses who reported misconduct had strong beliefs about what it means to be a nurse. When they faced reprisals for putting patients before other loyalties they suffered overwhelming mental distress, not just because of what was done, but because the institutional reality gave the lie to everything that nursing codes of conduct teach. Another study, published in the journal Psychological Reports in 2019, found levels of anxiety and depression among whistleblowers are similar to those of cancer patients. 

Ms Ahern likens retaliatory employers to domestic abusers who psychologically manipulate or “gaslight” a partner to destroy their self-confidence and credibility. Tell-tale patterns, which she documents in a review paper published in the Journal of Perinatal & Neonatal Nursing in 2018, run the gamut from maliciously finding fault, to sustained campaigns of petty slights and obstructions, to seeding rumours that the victim is unhinged. 

Tom Mueller, author of Crisis of Conscience: Whistleblowing in an Age of Fraud, believes that while employers sometimes label whistleblowers as “crazy” simply to tarnish them, this may actually be how they see them. To “more negotiable” colleagues who know when to bend with the wind, they may come across as “unreasonable sticklers”, and end up friendless and questioning their own sanity. 

Margaret Oliver, a former detective with Greater Manchester Police, says that senior officers dismissed her as “unreasonable” and “too emotionally involved” when she voiced concerns about the conduct of two investigations into child sexual exploitation, Operation Augusta (2004-2005) and Operation Span (2010-2012). 

After returning from sick-leave, brought on by stress, she spotted an article in the staff newspaper in which GMP’s then chief constable urged officers to challenge police policies that their gut told them were wrong. She “took the scary step” of contacting him directly. But instead of meeting her, as she had suggested, she says he replied with a “bland email” promising that her concerns would be reviewed and passing her back down the command chain. 

Having got nowhere, she resigned in 2012 and went public with her allegations, prompting the Mayor of Greater Manchester to commission an independent review. In January this year phase one, covering the period to 2005, concluded that Operation Augusta, had, as she always alleged, been closed down prematurely and children at risk of sexual exploitation had been failed. Ms Oliver recently launched the Maggie Oliver Foundation to support abuse survivors, and also whistleblowers who, like her, have nowhere to turn. “I asked myself: ‘Is there something obvious to others that I’m not seeing? Or is what I’m seeing wrong and making me ill?’ I felt isolated,” she says. 

Isolation dogged whistleblower Aaron Westrick throughout a 14-year US legal battle concerning alleged corruption in the body armour industry that concluded, in 2018, with all the defendants ultimately making settlement payments. 

As research director at Second Chance Body Armor (since liquidated), Mr Westrick urged his employer to recall a line of defective bulletproof vests containing Zylon, a material manufactured by Japanese company Toyobo. Instead he says that he was frozen out, told by an HR officer accompanied by his employer’s attorney that he was “crazy,” sacked and maligned. “If there’s one word that describes being a whistleblower, it’s loneliness,” he says. “Even your friends don’t really get it.” 

Georgina Halford-Hall, chief executive of WhistleblowersUK, says the stress of fighting a bad employer is all-consuming. But, however difficult, it is important to continue doing the everyday things you enjoy. Drawing on personal experience, she recommends finding an independent mental health professional to offload on. “Don’t make every conversation with your partner and friends about your concerns, because that only isolates you further, making it likelier that you’ll end up behaving irrationally.” 

From a practical standpoint, the best way for society to support victims of retaliation is to pay their legal fees, says Peter van der Velden, senior researcher at CentERdata, a Dutch research institute, and lead investigator of the study published in Psychological Reports. “What we know from research is that financial problems are a main stressor, few people have money for a lawyer after losing their job.” Something organisations should consider doing, that might strengthen their culture, is to look for opportunities to hire former whistleblowers rather than giving them a wide berth, says Marianna Fotaki, professor of business ethics at the University of Warwick Business School. 

Ms Barlow says she still has “bad days”, though increasingly less so. Finding people who have had similar experiences, she says, is helping her rebuild her shattered sense of self. “It keeps your feet grounded in reality, not the manipulated version of reality that your employer wants you to believe.” 


The Choreography of Retaliation 

When organisations retaliate against employees, they tend to do so through a gradual piling on of pressure that pushes the individual to the point where they mistrust their own judgment, says Kathy Ahern. They become anxious, hypersensitive to threats and easy to cast as “overreacting, or simply disgruntled”. Some warning signs of what she terms a “gaslighting” pattern of retaliation include:

▪ Reassuring employees that their complaints are being investigated, while repeatedly stalling.

▪ Using euphemisms that diminish the person’s experience, such as “grey area” or “personality clash” for victimisation.

▪ Finding fault with a highly regarded employee who makes a complaint.

▪ Praising someone for reporting misconduct, while doing nothing to prevent reprisals.

▪ Encouraging an employee who has suffered retaliation to take sick leave or undergo a psychological evaluation, under the guise of offering support.

Monday, 19 June 2017

Balance of power deters would-be whistleblowers from rocking the boat

Sean Ingle in The Guardian


A couple of days ago I asked a UK Sport insider why more athletes do not go public with their concerns. “Put yourself in their shoes,” came the reply. “One path is potentially well rewarded. And then there’s another that comes after speaking out. If you are a rational person, do you want to travel down the road of a Brian Cookson or a Jess Varnish? There is a massive disincentive to rock the boat.”

One can see their point. Cookson, having enjoyed a long career in sports administration, is now president of the UCI, earning £235,000 a year. Varnish, having spoken out about the problems in British Cycling – and having been largely vindicated – finds herself marginalised and ostracised. At 26 she also knows her career in elite sport is probably over. What would you do?

Of course not every complaint is serious or justified. And nor is elite sport a place to hold hands round the campfire and sing kumbaya. But in a week where fresh and disturbing allegations about bullying in British Bobsleigh and child abuse in British Canoeing were heard there is an urgent need to tilt the balance in favour of whistleblowers and honest brokers.

Indeed, lost amid the flurry of reports into British Cycling last Wednesday was the damning verdict from the financial accountants Moore Stephens on UK Sport’s whistleblower policy. In their view it was inadequate: it needed to be “more robust”, “encourage a culture of openness” and “provide statutory protection from unfair dismissal for making a protected disclosure”. The question is how.

The main problem is that a vast amount of power lies with UK Sport and the heads of each sport – and very little with the athletes, who are subject to an annual review whereby their lottery money can be cut or stopped completely.

“Player power” is often heard of in football but for those in Olympic sports the power dynamic favours coaches and administrators – which hardly encourages athletes to question them.

One coach recently told of an athlete who made some modest but justified criticisms of his sport. A few months later his lottery funding was trimmed. Perhaps it was coincidental but his fellow athletes took away a lesson: he rocked the boat and lost out. As the coach explained: “A lot of signals are sent to people to say don’t misbehave and I am troubled by that. No one is saying that bullying and other such behaviour is widespread but there is an environment that does not allow enough checks and balances.”

One can imagine how vulnerable this leaves the athletes. One false move and their livelihood is toast. It does not help that Olympic athletes do not really have a strong union. Nominally there is the British Athletes Commission, which represents 1,400 Olympians and Paralympians, but few believe it has enough resources or independence to be as effective as it needs to be.

There is another factor at play, too. Many athletes want to stay in sport, either as a coach or administrator, when they quit the field of play. For those who pick up a reputation as a troublemaker the stink is hard to shake. As the former British bobsleigher Henry Nwume, who spoke to the BBC last week about problems inside his sport, told me: “You have everything to lose by talking. Athletes know that they run the risk of being attacked, discredited and blackballed. And that continues even after they retire. They fear positions that might have been opened for them will be closed. And they will become persona non grata.”

Whistleblowers also know their accounts are likely to be belittled by athletes inside the system. This is not necessarily malicious. Coaches tend to treat potential medallists better: one I spoke to admitted he was seen as a “golden child” by his performance director and never received – or even saw – the abuse that many of his friends got. So when Sir Bradley Wiggins or Sir Chris Hoy is asked if there was anything wrong with British Cycling, perhaps one should not be shocked when they say no.

One potential solution, put forward by Baroness Tanni Grey-Thompson’s duty of care review in April, is for an independent sports ombudsman – or duty of care quality commission – which is separate from UK Sport, to “maintain public confidence that sport is conducted ethically”. To me that makes sense. But change also has to come from within.

UK Sport deserves praise for lifting Britain from 36th in the medal table in 1996 to second in Rio last year. There are many smart people in the system, too. But it surely knows now that its tunnel‑vision focus on winning can breed the type of performance director or head coach who knows the main performance indicator is medals and so puts athlete welfare lower on the list of priorities. It does not help when UK Sport’s chief executive, Liz Nicholl, insists that “99% of this system is working really well” when increasingly the evidence suggests otherwise.
The best organisations do not just challenge themselves to be better. They allow themselves to be challenged in turn. In fact, they welcome it because they know being open and subject to rigorous examination helps them improve. Next month Katherine Grainger, a ferocious competitor with vast intellect, takes over as chair of UK Sport. How she responds to the mounting issues of athlete welfare, whilst keeping standards high, will surely define her tenure.

Tuesday, 7 February 2017

The hi-tech war on science fraud

Stephen Buranyi in The Guardian


One morning last summer, a German psychologist named Mathias Kauff woke up to find that he had been reprimanded by a robot. In an email, a computer program named Statcheck informed him that a 2013 paper he had published on multiculturalism and prejudice appeared to contain a number of incorrect calculations – which the program had catalogued and then posted on the internet for anyone to see. The problems turned out to be minor – just a few rounding errors – but the experience left Kauff feeling rattled. “At first I was a bit frightened,” he said. “I felt a bit exposed.”

Kauff wasn’t alone. Statcheck had read some 50,000 published psychology papers and checked the maths behind every statistical result it encountered. In the space of 24 hours, virtually every academic active in the field in the past two decades had received an email from the program, informing them that their work had been reviewed. Nothing like this had ever been seen before: a massive, open, retroactive evaluation of scientific literature, conducted entirely by computer.

Statcheck’s method was relatively simple, more like the mathematical equivalent of a spellchecker than a thoughtful review, but some scientists saw it as a new form of scrutiny and suspicion, portending a future in which the objective authority of peer review would be undermined by unaccountable and uncredentialed critics.
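Statcheck itself is an R package that parses APA-formatted test results (t, F, r, z and χ²) and recomputes each p-value from the reported statistic. A minimal, hypothetical Python sketch of the same idea, handling only z tests, might look like this (the function name and tolerance are illustrative, not Statcheck’s own):

```python
import re
from statistics import NormalDist

def check_z_result(reported: str, tolerance: float = 0.005):
    """Parse an APA-style 'z = 2.20, p = .028' string and recompute
    the two-tailed p-value from the reported test statistic."""
    m = re.match(r"z\s*=\s*(-?[\d.]+),\s*p\s*=\s*([\d.]+)", reported)
    if not m:
        raise ValueError("could not parse result string")
    z, reported_p = float(m.group(1)), float(m.group(2))
    # Two-tailed p: probability of |Z| exceeding the observed statistic.
    recomputed_p = 2 * (1 - NormalDist().cdf(abs(z)))
    consistent = abs(recomputed_p - reported_p) <= tolerance
    return round(recomputed_p, 3), consistent

print(check_z_result("z = 2.20, p = .028"))  # reported p matches
print(check_z_result("z = 2.20, p = .050"))  # reported p does not match
```

Like a spellchecker, this flags inconsistencies mechanically; it says nothing about whether a mismatch is a rounding slip, a typo, or something worse.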

Susan Fiske, the former head of the Association for Psychological Science, wrote an op-ed accusing “self-appointed data police” of pioneering a new “form of harassment”. The German Psychological Society issued a statement condemning the unauthorised use of Statcheck. The intensity of the reaction suggested that many were afraid that the program was not just attributing mere statistical errors, but some impropriety, to the scientists.

The man behind all this controversy was a 25-year-old Dutch scientist named Chris Hartgerink, based at Tilburg University’s Meta-Research Center, which studies bias and error in science. Statcheck was the brainchild of Hartgerink’s colleague Michèle Nuijten, who had used the program to conduct a 2015 study that demonstrated that about half of all papers in psychology journals contained a statistical error. Nuijten’s study was written up in Nature as a valuable contribution to the growing literature acknowledging bias and error in science – but she had not published an inventory of the specific errors it had detected, or the authors who had committed them. The real flashpoint came months later, when Hartgerink modified Statcheck with some code of his own devising, which catalogued the individual errors and posted them online – sparking uproar across the scientific community.

Hartgerink is one of only a handful of researchers in the world who work full-time on the problem of scientific fraud – and he is perfectly happy to upset his peers. “The scientific system as we know it is pretty screwed up,” he told me last autumn. Sitting in the offices of the Meta-Research Center, which look out on to Tilburg’s grey, mid-century campus, he added: “I’ve known for years that I want to help improve it.” Hartgerink approaches his work with a professorial seriousness – his office is bare, except for a pile of statistics textbooks and an equation-filled whiteboard – and he is appealingly earnest about his aims. His conversations tend to rapidly ascend to great heights, as if they were balloons released from his hands – the simplest things soon become grand questions of ethics, or privacy, or the future of science.

“Statcheck is a good example of what is now possible,” he said. The top priority, for Hartgerink, is something much more grave than correcting simple statistical miscalculations. He is now proposing to deploy a similar program that will uncover fake or manipulated results – which he believes are far more prevalent than most scientists would like to admit.

When it comes to fraud – or in the more neutral terms he prefers, “scientific misconduct” – Hartgerink is aware that he is venturing into sensitive territory. “It is not something people enjoy talking about,” he told me, with a weary grin. Despite its professed commitment to self-correction, science is a discipline that relies mainly on a culture of mutual trust and good faith to stay clean. Talking about its faults can feel like a kind of heresy. In 1981, when a young Al Gore led a congressional inquiry into a spate of recent cases of scientific fraud in biomedicine, the historian Daniel Kevles observed that “for Gore and for many others, fraud in the biomedical sciences was akin to pederasty among priests”.

The comparison is apt. The exposure of fraud directly threatens the special claim science has on truth, which relies on the belief that its methods are purely rational and objective. As the congressmen warned scientists during the hearings, “each and every case of fraud serves to undermine the public’s trust in the research enterprise of our nation”.

But three decades later, scientists still have only the crudest estimates of how much fraud actually exists. The currently accepted benchmark is a 2009 study by the Stanford researcher Daniele Fanelli that collated the results of 21 previous surveys given to scientists in various fields about research misconduct. The studies, which depended entirely on scientists honestly reporting their own misconduct, concluded that about 2% of scientists had falsified data at some point in their career.

If Fanelli’s estimate is correct, it seems likely that thousands of scientists are getting away with misconduct each year. Fraud – including outright fabrication, plagiarism and self-plagiarism – accounts for the majority of retracted scientific articles. But, according to RetractionWatch, which catalogues papers that have been withdrawn from the scientific literature, only 684 were retracted in 2015, while more than 800,000 new papers were published. If even just a few of the suggested 2% of scientific fraudsters – which, relying on self-reporting, is itself probably a conservative estimate – are active in any given year, the vast majority are going totally undetected. “Reviewers and editors, other gatekeepers – they’re not looking for potential problems,” Hartgerink said.

But if none of the traditional authorities in science are going to address the problem, Hartgerink believes that there is another way. If a program similar to Statcheck can be trained to detect the traces of manipulated data, and then make those results public, the scientific community can decide for itself whether a given study should still be regarded as trustworthy.

Hartgerink’s university, which sits at the western edge of Tilburg, a small, quiet city in the southern Netherlands, seems an unlikely place to try to correct this hole in the scientific process. The university is best known for its economics and business courses and does not have traditional lab facilities. But Tilburg was also the site of one of the biggest scientific scandals in living memory – and no one knows better than Hartgerink and his colleagues just how devastating individual cases of fraud can be.

In September 2010, the School of Social and Behavioral Science at Tilburg University appointed Diederik Stapel, a promising young social psychologist, as its new dean. Stapel was already popular with students for his warm manner, and with the faculty for his easy command of scientific literature and his enthusiasm for collaboration. He would often offer to help his colleagues, and sometimes even his students, by conducting surveys and gathering data for them.

As dean, Stapel appeared to reward his colleagues’ faith in him almost immediately. In April 2011 he published a paper in Science, the first study the small university had ever landed in that prestigious journal. Stapel’s research focused on what psychologists call “priming”: the idea that small stimuli can affect our behaviour in unnoticed but significant ways. “Could being discriminated against depend on such seemingly trivial matters as garbage on the streets?” Stapel’s paper in Science asked. He proceeded to show that white commuters at the Utrecht railway station tended to sit further away from visible minorities when the station was dirty. Similarly, Stapel found that white people were more likely to give negative answers on a quiz about minorities if they were interviewed on a dirty street, rather than a clean one.

Stapel had a knack for devising and executing such clever studies, cutting through messy problems to extract clean data. Since becoming a professor a decade earlier, he had published more than 100 papers, showing, among other things, that beauty product advertisements, regardless of context, prompted women to think about themselves more negatively, and that judges who had been primed to think about concepts of impartial justice were less likely to make racially motivated decisions.

His findings regularly reached the public through the media. The idea that huge, intractable social issues such as sexism and racism could be affected in such simple ways had a powerful intuitive appeal, and hinted at the possibility of equally simple, elegant solutions. If anything united Stapel’s diverse interests, it was this Gladwellian bent. His studies were often featured in the popular press, including the Los Angeles Times and New York Times, and he was a regular guest on Dutch television programmes.

But as Stapel’s reputation skyrocketed, a small group of colleagues and students began to view him with suspicion. “It was too good to be true,” a professor who was working at Tilburg at the time told me. (The professor, who I will call Joseph Robin, asked to remain anonymous so that he could frankly discuss his role in exposing Stapel.) “All of his experiments worked. That just doesn’t happen.”

A student of Stapel’s had mentioned to Robin in 2010 that some of Stapel’s data looked strange, so that autumn, shortly after Stapel was made dean, Robin proposed a collaboration with him, hoping to see his methods first-hand. Stapel agreed, and the data he returned a few months later, according to Robin, “looked crazy. It was internally inconsistent in weird ways; completely unlike any real data I had ever seen.” Meanwhile, as the student helped get hold of more datasets from Stapel’s former students and collaborators, the evidence mounted: more “weird data”, and identical sets of numbers copied directly from one study to another.
In August 2011, the whistleblowers took their findings to the head of the department, Marcel Zeelenberg, who confronted Stapel with the evidence. At first, Stapel denied the charges, but just days later he admitted what his accusers suspected: he had never interviewed any commuters at the railway station, no women had been shown beauty advertisements and no judges had been surveyed about impartial justice and racism.

Stapel hadn’t just tinkered with numbers, he had made most of them up entirely, producing entire datasets at home in his kitchen after his wife and children had gone to bed. His method was an inversion of the proper scientific method: he started by deciding what result he wanted and then worked backwards, filling out the individual “data” points he was supposed to be collecting.

On 7 September 2011, the university revealed that Stapel had been suspended. The media initially speculated that there might have been an issue with his latest study – announced just days earlier, showing that meat-eaters were more selfish and less sociable – but the problem went much deeper. Stapel’s students and colleagues were about to learn that his enviable skill with data was, in fact, a sham, and his golden reputation, as well as nearly a decade of results that they had used in their own work, were built on lies.

Chris Hartgerink was studying late at the library when he heard the news. The extent of Stapel’s fraud wasn’t clear by then, but it was big. Hartgerink, who was then an undergraduate in the Tilburg psychology programme, felt a sudden disorientation, a sense that something solid and integral had been lost. Stapel had been a mentor to him, hiring him as a research assistant and giving him constant encouragement. “This is a guy who inspired me to actually become enthusiastic about research,” Hartgerink told me. “When that reason drops out, what remains, you know?”

Hartgerink wasn’t alone; the whole university was stunned. “It was a really difficult time,” said one student who had helped expose Stapel. “You saw these people on a daily basis who were so proud of their work, and you know it’s just based on a lie.” Even after Stapel resigned, the media coverage was relentless. Reporters roamed the campus – first from the Dutch press, and then, as the story got bigger, from all over the world.

On 9 September, just two days after Stapel was suspended, the university convened an ad-hoc investigative committee of current and former faculty. To help determine the true extent of Stapel’s fraud, the committee turned to Marcel van Assen, a statistician and psychologist in the department. At the time, Van Assen was growing bored with his current research, and the idea of investigating the former dean sounded like fun to him. Van Assen had never much liked Stapel, believing that he relied more on the force of his personality than reason when running the department. “Some people believe him charismatic,” Van Assen told me. “I am less sensitive to it.”

Van Assen – who is 44, tall and rangy, with a mop of greying, curly hair – approaches his work with relentless, unsentimental practicality. When speaking, he maintains an amused, half-smile, as if he is joking. He once told me that to fix the problems in psychology, it might be simpler to toss out 150 years of research and start again; I’m still not sure whether or not he was serious.

To prove misconduct, Van Assen said, you must be a pitbull: biting deeper and deeper, clamping down not just on the papers, but the datasets behind them, the research methods, the collaborators – using everything available to bring down the target. He spent a year breaking down the 45 studies Stapel produced at Tilburg and cataloguing their individual aberrations, noting where the effect size – a standard measure of the difference between the two groups in an experiment – seemed suspiciously large, where sequences of numbers were copied, where variables were too closely related, or where variables that should have moved in tandem instead appeared adrift.
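Effect-size checks of this kind are simple to run. A minimal sketch, with hypothetical data and a hypothetical flagging threshold (the report does not specify which effect-size measure the committee used; Cohen’s d is one standard choice):

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d: difference in group means divided by the pooled
    standard deviation -- a standard measure of effect size."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# In social psychology, d above about 0.8 is conventionally "large";
# a run of experiments all reporting far higher values would stand out.
treatment = [5.1, 4.8, 5.6, 5.3, 4.9, 5.4]
control = [4.2, 4.5, 3.9, 4.4, 4.1, 4.3]
d = cohens_d(treatment, control)
print(f"d = {d:.2f}", "suspiciously large" if d > 1.5 else "plausible")
```

A single large effect proves nothing; it is the pattern across many studies, as in Stapel’s case, that raises suspicion.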

The committee released its final report in October 2012 and, based largely on its conclusions, 55 of Stapel’s publications were officially retracted by the journals that had published them. Stapel also returned his PhD to the University of Amsterdam. He is, by any measure, one of the biggest scientific frauds of all time. (RetractionWatch has him third on their all-time retraction leaderboard.) The committee also had harsh words for Stapel’s colleagues, concluding that “from the bottom to the top, there was a general neglect of fundamental scientific standards”. “It was a real blow to the faculty,” Jacques Hagenaars, a former professor of methodology at Tilburg, who served on the committee, told me.

By extending some of the blame to the methods and attitudes of the scientists around Stapel, the committee situated the case within a larger problem that was attracting attention at the time, which has come to be known as the “replication crisis”. For the past decade, the scientific community has been grappling with the discovery that many published results cannot be reproduced independently by other scientists – in spite of the traditional safeguards of publishing and peer-review – because the original studies were marred by some combination of unchecked bias and human error.

After the committee disbanded, Van Assen found himself fascinated by the way science is susceptible to error, bias, and outright fraud. Investigating Stapel had been exciting, and he had no interest in returning to his old work. Van Assen had also found a like mind, a new professor at Tilburg named Jelte Wicherts, who had a long history working on bias in science and who shared his attitude of upbeat cynicism about the problems in their field. “We simply agree, there are findings out there that cannot be trusted,” Van Assen said. They began planning a new sort of research group: one that would investigate the very practice of science.

Van Assen does not like assigning Stapel too much credit for the creation of the Meta-Research Center, which hired its first students in late 2012, but there is an undeniable symmetry: he and Wicherts have created, in Stapel’s old department, a platform to investigate the sort of “sloppy science” and misconduct that very department had been condemned for.

Hartgerink joined the group in 2013. “For many people, certainly for me, Stapel launched an existential crisis in science,” he said. After Stapel’s fraud was exposed, Hartgerink struggled to find “what could be trusted” in his chosen field. He began to notice how easy it was for scientists to subjectively interpret data – or manipulate it. For a brief time he considered abandoning a future in research and joining the police.


There are probably several very famous papers that have fake data, and very famous people who have done it


Van Assen, who Hartgerink met through a statistics course, helped put him on another path. Hartgerink learned that a growing number of scientists in every field were coming to agree that the most urgent task for their profession was to establish what results and methods could still be trusted – and that many of these people had begun to investigate the unpredictable human factors that, knowingly or not, knocked science off its course. What was more, he could be a part of it. Van Assen offered Hartgerink a place in his yet-unnamed research group. All of the current projects were on errors or general bias, but Van Assen proposed they go out and work closer to the fringes, developing methods that could detect fake data in published scientific literature.

“I’m not normally an expressive person,” Hartgerink told me. “But I said: ‘Hell, yes. Let’s do that.’”

Hartgerink and Van Assen believe not only that most scientific fraud goes undetected, but that the true rate of misconduct is far higher than 2%. “We cannot trust self-reports,” Van Assen told me. “If you ask people, ‘At the conference, did you cheat on your fiancee?’ – people will very likely not admit this.”

Uri Simonsohn, a psychology professor at the University of Pennsylvania’s Wharton School who gained notoriety as a “data vigilante” for exposing two serious cases of fraud in his field in 2012, believes that as much as 5% of all published research contains fraudulent data. “It’s not only in the periphery, it’s not only in the journals people don’t read,” he told me. “There are probably several very famous papers that have fake data, and very famous people who have done it.”
But as long as it remains undiscovered, there is a tendency for scientists to dismiss fraud in favour of more widely documented – and less seedy – issues. Even Arturo Casadevall, an American microbiologist who has published extensively on the rate, distribution, and detection of fraud in science, told me that despite his personal interest in the topic, my time would be better served investigating the broader issues driving the replication crisis. Fraud, he said, was “probably a relatively minor problem in terms of the overall level of science”.

This way of thinking goes back at least as far as scientists have been grappling with high-profile cases of misconduct. In 1983, Peter Medawar, the British immunologist and Nobel laureate, wrote in the London Review of Books: “The number of dishonest scientists cannot, of course, be known, but even if they were common enough to justify scary talk of ‘tips of icebergs’, they have not been so numerous as to prevent science’s having become the most successful enterprise (in terms of the fulfilment of declared ambitions) that human beings have ever engaged upon.”

From this perspective, as long as science continues doing what it does well – as long as genes are sequenced and chemicals classified and diseases reliably identified and treated – then fraud will remain a minor concern. But while this may be true in the long run, it may also be dangerously complacent. Scientific misconduct can cause serious harm, as in the case of patients treated by Paolo Macchiarini, a doctor at the Karolinska Institute in Sweden who allegedly misrepresented the effectiveness of an experimental surgical procedure he had developed. Macchiarini is currently being investigated by a Swedish prosecutor after several of the patients who received the procedure later died.

Even in the more mundane business of day-to-day research, scientists are constantly building on past work, relying on its solidity to underpin their own theories. If misconduct really is as widespread as Hartgerink and Van Assen think, then false results are strewn across scientific literature, like unexploded mines that threaten any new structure built over them. At the very least, if science is truly invested in its ideal of self-correction, it seems essential to know the extent of the problem.

But there is little motivation within the scientific community to ramp up efforts to detect fraud. Part of this has to do with the way the field is organised. Science isn’t a traditional hierarchy, but a loose confederation of research groups, institutions, and professional organisations. Universities are clearly central to the scientific enterprise, but they are not in the business of evaluating scientific results, and as long as fraud doesn’t become public they have little incentive to go after it. There is also the questionable perception, although widespread in the scientific community, that there are already measures in place that preclude fraud. When Gore and his fellow congressmen held their hearings 35 years ago, witnesses routinely insisted that science had a variety of self-correcting mechanisms, such as peer-review and replication. But, as the science journalists William Broad and Nicholas Wade pointed out at the time, the vast majority of cases of fraud are actually exposed by whistleblowers, and that holds true to this day.
And so the enormous task of keeping science honest is left to individual scientists in the hope that they will police themselves, and each other. “Not only is it not sustainable,” said Simonsohn, “it doesn’t even work. You only catch the most obvious fakers, and only a small share of them.” There is also the problem of relying on whistleblowers, who face the thankless and emotionally draining prospect of accusing their own colleagues of fraud. (“It’s like saying someone is a paedophile,” one of the students at Tilburg told me.) Neither Simonsohn nor any of the Tilburg whistleblowers I interviewed said they would come forward again. “There is no way we as a field can deal with fraud like this,” the student said. “There has to be a better way.”

In the winter of 2013, soon after Hartgerink began working with Van Assen, they began to investigate another social psychology researcher who they noticed was reporting suspiciously large effect sizes, one of the “tells” that doomed Stapel. When they requested that the researcher provide additional data to verify her results, she stalled – claiming that she was undergoing treatment for stomach cancer. Months later, she informed them that she had deleted all the data in question. But instead of contacting the researcher’s co-authors for copies of the data, or digging deeper into her previous work, they opted to let it go.

They had been thoroughly stonewalled, and they knew that trying to prosecute individual cases of fraud – the “pitbull” approach that Van Assen had taken when investigating Stapel – would never expose more than a handful of dishonest scientists. What they needed was a way to analyse vast quantities of data in search of signs of manipulation or error, which could then be flagged for public inspection without necessarily accusing the individual scientists of deliberate misconduct. After all, putting a fence around a minefield has many of the same benefits as clearing it, with none of the tricky business of digging up the mines.

As Van Assen had earlier argued in a letter to the journal Nature, the traditional approach to investigating other scientists was needlessly fraught – since it combined the messy task of proving that a researcher had intended to commit fraud with a much simpler technical problem: whether the data underlying their results was valid. The two issues, he argued, could be separated.

Scientists can commit fraud in a multitude of ways. In 1974, the American immunologist William Summerlin famously tried to pass off a patch of mouse skin darkened with permanent marker pen as a successful interspecies skin graft. But most instances are more mundane: the majority of fraud cases in recent years have emerged from scientists either falsifying images – deliberately mislabelling scans and micrographs – or fabricating or altering their recorded data. And scientists have used statistical tests to scrutinise each other’s data since at least the 1930s, when Ronald Fisher, the father of biostatistics, used a basic chi-squared test to suggest that Gregor Mendel, the father of genetics, had cherrypicked some of his data.
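Fisher’s kind of check is easy to reproduce. A sketch using Mendel’s published counts for round versus wrinkled peas (5,474 against 1,850) tested against the predicted 3:1 ratio, with the p-value obtained from the identity that a chi-squared variable with one degree of freedom is a squared standard normal:

```python
from math import erfc, sqrt

def chi_squared_1df(observed, expected_ratio):
    """Goodness-of-fit chi-squared for two categories (one degree of
    freedom), with the p-value computed via chi2(1) = Z**2."""
    total = sum(observed)
    expected = [total * r / sum(expected_ratio) for r in expected_ratio]
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    p = erfc(sqrt(chi2 / 2))  # P(chi2_1 > x) = P(|Z| > sqrt(x))
    return chi2, p

# Mendel's round-vs-wrinkled pea counts against the predicted 3:1 ratio.
chi2, p = chi_squared_1df([5474, 1850], expected_ratio=(3, 1))
print(f"chi2 = {chi2:.3f}, p = {p:.2f}")
```

One unremarkable p-value like this proves nothing on its own; Fisher’s argument was that across the aggregate of Mendel’s experiments the fit to theory was consistently too good to be chance.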

In 2014, Hartgerink and Van Assen started to sort through the variety of tests used in ad-hoc investigations of fraud in order to determine which were powerful and versatile enough to reliably detect statistical anomalies across a wide range of fields. After narrowing down a promising arsenal of tests, they hit a tougher problem. To prove that their methods work, Hartgerink and Van Assen have to show they can reliably distinguish false from real data. But research misconduct is relatively uncharted territory. Only a handful of cases come to light each year – a dismally small sample size – so it’s hard to get an idea of what constitutes “normal” fake data, what its features and particular quirks are. Hartgerink devised a workaround, challenging other academics to produce simple fake datasets, a sort of game to see if they could come up with data that looked real enough to fool the statistical tests, with an Amazon gift card as a prize.

By 2015, the Meta-Research group had expanded to seven researchers, and Hartgerink was helping his colleagues with a separate error-detection project that would become Statcheck. He was pleased with the study that Michèle Nuijten published that autumn, which used Statcheck to show that something like half of all published psychology papers appeared to contain calculation errors, but as he tinkered with the program and the database of psychology papers they had assembled, he found himself increasingly uneasy about what he saw as the closed and secretive culture of science.
When scientists publish papers in journals, they release only the data they wish to share. Critical evaluation of the results by other scientists – peer review – takes place in secret and the discussion is not released publicly. Once a paper is published, all comments, concerns, and retractions must go through the editors of the journal before they reach the public. There are good, or at least defensible, arguments for all of this. But Hartgerink is part of an increasingly vocal group that believes that the closed nature of science, with authority resting in the hands of specific gatekeepers – journals, universities, and funders – is harmful, and that a more open approach would better serve the scientific method.

Hartgerink realised that with a few adjustments to Statcheck, he could make public all the statistical errors it had exposed. He hoped that this would shift the conversation away from talk of broad, representative results – such as the proportion of studies that contained errors – and towards a discussion of the individual papers and their mistakes. The critique would be complete, exhaustive, and in the public domain, where the authors could address it; everyone else could draw their own conclusions.

In August 2016, with his colleagues’ blessing, he posted the full set of Statcheck results publicly on the anonymous science message board PubPeer. At first there was praise on Twitter and science blogs, which skew young and progressive – and then, condemnations, largely from older scientists, who feared an intrusive new world of public blaming and shaming. In December, after everyone had weighed in, Nature, a bellwether of mainstream scientific thought for more than a century, cautiously supported a future of automated scientific scrutiny in an editorial that addressed the Statcheck controversy without explicitly naming it. Its conclusion seemed to endorse Hartgerink’s approach, that “criticism itself must be embraced”.

In the same month, the Office of Research Integrity (ORI), an obscure branch of the US National Institutes of Health, awarded Hartgerink a small grant – about $100,000 – to pursue new projects investigating misconduct, including the completion of his program to detect fabricated data. For Hartgerink and Van Assen, who had not received any outside funding for their research, it felt like vindication.

Yet change in science comes slowly, if at all, Van Assen reminded me. The current push for more open and accountable science, of which they are a part, has “only really existed since 2011”, he said. It has captured an outsize share of the science media’s attention, and set laudable goals, but it remains a small, fragile outpost of true believers within the vast scientific enterprise. “I have the impression that many scientists in this group think that things are going to change,” Van Assen said. “Chris, Michèle, they are quite optimistic. I think that’s bias. They talk to each other all the time.”

When I asked Hartgerink what it would take to eradicate fraud from the scientific process entirely, he suggested that scientists make all of their data public, register the intentions of their work before conducting experiments to prevent post-hoc reasoning, and have their results checked by algorithms during and after the publishing process.

To any working scientist – currently enjoying nearly unprecedented privacy and freedom for a profession that is in large part publicly funded – Hartgerink’s vision would be an unimaginably draconian scientific surveillance state. For his part, Hartgerink believes the preservation of public trust in science requires nothing less – but in the meantime, he intends to pursue this ideal without the explicit consent of the entire scientific community, by investigating published papers and making the results available to the public.

Even scientists who have done similar work uncovering fraud have reservations about Van Assen and Hartgerink’s approach. In January, I met with Dr John Carlisle and Dr Steve Yentis at an anaesthetics conference that took place in London, near Westminster Abbey. In 2012, Yentis, then the editor of the journal Anaesthesia, asked Carlisle to investigate data from a researcher named Yoshitaka Fujii, who the community suspected was falsifying clinical trials. In time, Carlisle demonstrated that 168 of Fujii’s trials contained dubious statistical results. Yentis and the other journal editors contacted Fujii’s employers, who launched a full investigation. Fujii currently sits at the top of the RetractionWatch leaderboard with 183 retracted studies. By sheer numbers he is the biggest scientific fraud in recorded history.


Carlisle, who, like Van Assen, found that he enjoyed the detective work (“it takes a certain personality, or personality disorder”, he said), showed me his latest project, a larger-scale analysis of the rate of suspicious clinical trial results across multiple fields of medicine. He and Yentis discussed their desire to automate these statistical tests – which, in theory, would look a lot like what Hartgerink and Van Assen are developing – but they have no plans to make the results public; instead they envision that journal editors might use the tests to screen incoming articles for signs of possible misconduct.

“It is an incredibly difficult balance,” said Yentis, “you’re saying to a person, ‘I think you’re a liar.’ We have to decide how many fraudulent papers are worth one false accusation. How many is too many?”

With the introduction of programs such as Statcheck, and the growing desire to conduct as much of the critical conversation as possible in public view, Yentis expects a stormy reckoning with those very questions. “That’s a big debate that hasn’t happened,” he said, “and it’s because we simply haven’t had the tools.”

For all their dispassionate distance, when Hartgerink and Van Assen say that they are simply identifying data that “cannot be trusted”, they mean flagging papers and authors that fail their tests. And, as they learned with Statcheck, for many scientists, that will be indistinguishable from an accusation of deceit. When Hartgerink eventually deploys his fraud-detection program, it will flag up some very real instances of fraud, as well as many unintentional errors and false positives – and present all of the results in a messy pile for the scientific community to sort out. Simonsohn called it “a bit like leaving a loaded gun on a playground”.

When I put this concern to Van Assen, he told me it was certain that some scientists would be angered or offended by having their work and its possible errors exposed and discussed. He didn’t want to make anyone feel bad, he said – but he didn’t feel bad about it. Science should be about transparency, criticism, and truth.

“The problem, also with scientists, is that people think they are important, they think they have a special purpose in life,” he said. “Maybe you too. But that’s a human bias. I think when you look at it objectively, individuals don’t matter at all. We should only look at what is good for science and society.”

Tuesday, 9 February 2016

The curious case of Julian Assange

Editorial in The Hindu

Personal liberty still eludes WikiLeaks founder and Editor-in-Chief Julian Assange, despite a ruling by a United Nations legal panel that has declared his confinement “arbitrary and illegal”. The ruling of the Working Group on Arbitrary Detention — the authoritative UN body that pronounces on illegal detentions based on binding legal international instruments — has met with support but, not surprisingly, with a bitter backlash as well, notably from governments that have suffered incalculable damage from WikiLeaks’ relentless exposures. Sweden and Britain have rejected the panel’s findings outright, despite the fact that they are signatories to the International Covenant on Civil and Political Rights, the European Convention on Human Rights and the other treaties upon which the UN legal panel has based its recommendation. The same countries have in the past upheld rulings of the same panel on similar cases, such as the ‘arbitrary detention’ of the Myanmar leader Aung San Suu Kyi and former Maldives President Mohamed Nasheed. The British Foreign Secretary, Philip Hammond, has called the ruling “ridiculous”, and dismissed the distinguished panel as comprising “lay people, not lawyers”. As for the Swedish Prosecutor’s Office, it has declared that the UN body’s opinion “has no formal impact on the ongoing investigation, according to Swedish law”. In other words, both countries argue that his confinement is not arbitrary but self-imposed, and that he is at ‘liberty’ to step out, be arrested, and face the consequences.

The specific allegation of rape that Mr. Assange faces in Sweden must be seen in the larger international political context of his confinement. He has made it clear he is not fleeing Swedish justice, offering repeatedly to give evidence to the Swedish authorities, with the caveat that he be questioned at his refuge in London, either in person or by webcam. While he will have to prove his innocence, Mr. Assange is not being paranoid when he talks of his fear of extradition to the U.S.: Chelsea Manning, whose damning Iraq revelations were first carried on WikiLeaks, was held in a long pre-trial detention and sentenced to 35 years of imprisonment. The U.S. Department of Justice has confirmed on more than one occasion that there is a pending prosecution and grand jury investigation against him and WikiLeaks. Mr. Assange’s defence team argues that the Swedish police case is but a smokescreen for a larger political game plan centred on Washington, which is determined to root out whistle-blowers such as Mr. Assange, Edward Snowden and Chelsea Manning for exposing dirty state secrets. It was WikiLeaks that carried the shocking video evidence of the wholesale collateral murder of civilians by U.S.-led forces in Iraq and Afghanistan, in addition to thousands of pages of evidence of other violations of sovereignty and international law. By defying the UN panel’s carefully considered recommendation that Mr. Assange be freed and awarded compensation, Britain and Sweden are damaging their own international standing. They must reverse their untenable stand and do what law and decency dictate by allowing Mr. Assange an opportunity to prove his innocence without fearing extradition to the United States.

Tuesday, 10 March 2015

Top Australian surgeon advises female doctors to allow sexual harassment to get ahead

Lucy Clarke-Billings in The Independent

A senior surgeon has triggered controversy after telling junior female doctors to go along with sexual abuse at work for the sake of their careers. 

Australian vascular surgeon Dr Gabrielle McMullin drew criticism for comments made at the launch of her book, Pathways to Gender Equality.

Speaking in an ABC radio interview after the event, she said she encouraged women in her field to protect their climb up the professional ladder by “complying with requests” for sex.

The Sydney-based surgeon said sexism is so rife among her colleagues, young women should probably just accept unwanted sexual advances because speaking out would tarnish their reputations.

Dr McMullin, who studied medicine in Dublin, Ireland, said she stands by the comments she made on Friday but that her advice was “irony”.

"What I tell my trainees is that, if you are approached for sex, probably the safest thing to do in terms of your career is to comply with the request," she said after the launch.

Her shocking comments triggered angry reactions from sex abuse and domestic violence campaigners, who claimed her remarks were “appalling” and “irresponsible”.

Dr McMullin told ABC's AM program the story of Dr Caroline Tan, a young doctor who won a sexual harassment case in 2008 against a surgeon who forced himself on her while she was training at a Melbourne hospital.

Dr Tan didn't tell anyone what had happened until the surgeon started giving her reports that were so bad they threatened the career she had worked so hard for.

But Dr McMullin warned that complaining to the supervising body was the 'worst thing' trainees could do.

“Despite that victory, she has never been appointed to a public position in a hospital in Australasia,” she said. “Her career was ruined by this one guy asking for sex on this night.

“And realistically, she would have been much better to have given him a blow-job on that night.”

Dr McMullin's comments have been roundly criticised by others in the medical profession and in women’s rights groups. 

But she said many people had thanked her for speaking out and some had come forward with more appalling stories of their experiences.

She said her critics had misunderstood her stance.

"Of course I don't condone any form of sexual harassment and the advice that I gave to potential surgical trainees was irony, but unfortunately that is the truth at the moment, that women do not get supported if they make a complaint," she told the ABC.

"And that's where the problem is, so what I'm suggesting is that we need a solution for that problem, not to condone that behaviour.

"It's not dealt with properly, women still feel that their careers are compromised if they complain, just like rape victims are victimised if they complain," she said.

One victim, who did not want to be identified for fear of losing her job, told the ABC she experienced years of sexual harassment from a senior surgeon.

The victim said if she revealed her identity, she would not be considered a safe person to work with.

"If you complain... you'll be exposed, you'll be hung up to dry, you won't be able to work," she said.

"You'd be seen as a liability, that's my opinion. You absolutely would be seen as a liability moving forward.

"It's well and good that the legislation and laws say x, y and z but that wouldn't happen in practice. It would be unlikely to."

Kate Drummond, chair of the Women in Surgery committee at the Royal Australasian College of Surgeons, disagreed with this suggestion.

"I think we have robust processes, not only through the college for the trainees but also through the workplace," she told the ABC's The World Today program.

"I mean, these are people who work in hospitals and there are clear workplace processes to deal with these kinds of problems.

"And so I think there are parallel processes that we would encourage people to use and also to take the support of people like those of us in the Women in Surgery committee and we're very happy to strongly support these people."

Ms Drummond said there had been fewer than one complaint per year to the Women in Surgery committee regarding sexual harassment.