
Saturday 17 June 2023

Economics Essay 55: Unemployment

Explain why unemployment creates social and economic costs. 

Unemployment creates both social and economic costs due to its detrimental effects on individuals, communities, and the overall economy. Here are the reasons why unemployment has such consequences:

  1. Economic Costs:
     a. Loss of Output: Unemployment leads to a loss of productive resources and potential output in the economy. When individuals are jobless, their skills and talents remain underutilized, resulting in a decline in overall economic productivity. This loss of output translates into a decrease in the country's gross domestic product (GDP) and potential economic growth.
     b. Lower Tax Revenue: Unemployment reduces tax revenue for the government. Unemployed individuals pay fewer income taxes, and businesses experience lower profits, resulting in decreased tax collections. This reduction in tax revenue limits the government's ability to fund essential public services and investments in infrastructure, education, healthcare, and social welfare programs.
     c. Increased Government Spending: Unemployment often leads to increased government spending on unemployment benefits, welfare programs, and social assistance. These expenditures are necessary to provide support to unemployed individuals and their families, but they place a strain on public finances and can contribute to budget deficits and national debt.
     d. Reduced Consumer Spending: Unemployed individuals typically have lower disposable income, leading to a decrease in consumer spending. This reduction in aggregate demand can have a negative multiplier effect, affecting businesses across various sectors and leading to further job losses.

  2. Social Costs:
     a. Income Inequality and Poverty: Unemployment exacerbates income inequality and increases the risk of poverty. Without a steady income, individuals and families struggle to meet their basic needs, including housing, healthcare, and education. Long-term unemployment can push individuals into a cycle of poverty, making it challenging for them to escape.
     b. Social Exclusion and Marginalization: Unemployment can lead to social exclusion and feelings of marginalization. Individuals who are unable to find work may experience a loss of self-esteem, a sense of purpose, and a feeling of being disconnected from society. This can have detrimental effects on mental health and overall well-being.
     c. Strained Social Services: High unemployment rates put pressure on social services, such as healthcare and social assistance programs. Increased demand for these services coupled with limited resources can strain the capacity of social support systems, making it more difficult for individuals and families to access the assistance they need.
     d. Social Unrest and Crime: Prolonged unemployment can contribute to social unrest and an increase in crime rates. Frustration, desperation, and a lack of opportunities may drive some individuals to engage in illegal activities as a means of survival.
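The negative multiplier effect mentioned under the economic costs can be illustrated with a short numerical sketch. All figures below (a marginal propensity to consume of 0.8 and an initial fall in spending of 10, in billions) are hypothetical, chosen only to show the arithmetic of the simple Keynesian spending multiplier, k = 1/(1 - MPC):

```python
# Illustrative sketch of a negative multiplier effect.
# All figures are hypothetical, chosen only to show the arithmetic.

def spending_multiplier(mpc: float) -> float:
    """Simple Keynesian multiplier: k = 1 / (1 - MPC)."""
    return 1.0 / (1.0 - mpc)

mpc = 0.8            # assumed marginal propensity to consume
initial_fall = 10.0  # assumed initial fall in consumer spending, in billions

k = spending_multiplier(mpc)
total_fall = k * initial_fall

print(round(k, 2))           # 5.0
print(round(total_fall, 2))  # 50.0
```

With an assumed MPC of 0.8, each unit of lost income cuts spending by a further 0.8 units, so an initial fall of 10 billion ends up reducing aggregate demand by roughly 50 billion once the knock-on rounds of lost spending are counted.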

In conclusion, unemployment creates significant social and economic costs. The economic costs include a loss of output, reduced tax revenue, increased government spending, and a decrease in consumer spending. The social costs encompass income inequality, poverty, social exclusion, strained social services, and the potential for social unrest and crime. Addressing unemployment through policies and programs that promote job creation and support the unemployed is crucial to mitigate these costs and foster a more inclusive and prosperous society.

Friday 16 June 2023

Fallacies of Capitalism 7: The Rational Actor Fallacy

How does the "rational economic actor" fallacy overlook the role of cognitive biases, imperfect information, and bounded rationality in decision-making within a capitalist system? 

The "rational economic actor" fallacy assumes that individuals in a capitalist system always make decisions in a perfectly rational and self-interested manner. However, this belief overlooks the influence of cognitive biases, imperfect information, and bounded rationality, which can lead to suboptimal decision-making. Let's understand this concept with simple examples:

  1. Cognitive biases: Humans are prone to cognitive biases, which are systematic errors in thinking that affect decision-making. For example, the availability bias occurs when people rely on easily accessible information rather than considering a broader range of data. In a capitalist system, this bias can lead individuals to make decisions based on recent news or vivid examples rather than carefully analyzing all relevant information. This can result in suboptimal choices, such as investing in trendy but risky assets without considering their long-term potential.

  2. Imperfect information: In many economic transactions, individuals do not have access to complete and accurate information. For instance, when buying a used car, the seller may withhold information about the vehicle's hidden problems. This information asymmetry can lead to suboptimal decisions. Buyers, lacking complete knowledge, may overpay for a faulty car. In a capitalist system, imperfect information can distort market outcomes and hinder individuals from making fully rational choices.

  3. Bounded rationality: Bounded rationality recognizes that individuals have limited cognitive abilities to process information and make complex decisions. People often rely on simplifying heuristics and rules of thumb instead of undertaking thorough analysis. For example, when choosing a product, individuals may rely on brand reputation rather than researching all available options. In a capitalist system, bounded rationality can lead individuals to make decisions based on incomplete information or superficial analysis, resulting in suboptimal outcomes.

  4. Emotional influences: Human decision-making is also influenced by emotions, which can deviate from strict rationality. For example, investors may be driven by fear or greed during market fluctuations, leading to irrational investment decisions. In a capitalist system, emotional biases can contribute to market volatility and inefficient allocation of resources.

  5. Social influences: People's decisions are often influenced by social factors, such as peer pressure or social norms, which may override individual rationality. For instance, individuals may conform to popular trends or engage in conspicuous consumption to fit into a particular social group. In a capitalist system, social influences can drive individuals to make choices that prioritize social acceptance over their own best interests.

In summary, the "rational economic actor" fallacy overlooks the role of cognitive biases, imperfect information, bounded rationality, emotional influences, and social factors in decision-making within a capitalist system. Recognizing these limitations is crucial for understanding that individuals do not always act in perfectly rational and self-interested ways. Policymakers and market participants should consider these factors to design regulations, incentives, and interventions that account for the complexity of human decision-making and promote better outcomes in the capitalist system.

Monday 7 June 2021

Israel - A PSYCHOTIC BREAK FROM REALITY?

Nadeem F. Paracha in The Dawn 

Illustration by Abro


The New York Times, in its May 28, 2021 issue, published a collage of photographs of 67 children under the age of 18 who had been killed in the recent Israeli air attacks on Gaza and by Hamas on Tel Aviv. Two of the children had been killed in Israel by shrapnel from rockets fired by Hamas. It is only natural for any normal human being to ask, how can one kill children?

Similar collages appear every year on social media of the over 140 students who were mercilessly gunned down in 2014 at the Army Public School in Peshawar. The killings were carried out by the militant organisation the Tehreek-i-Taliban Pakistan (TTP). Most Pakistanis could not comprehend how even a militant group could massacre school children. But there were also those who questioned why the children were targeted.

The ‘why’ in this context is apparently understood at an individual level when certain individuals sexually assault children and often kill them. Psychologists are of the view that such individuals — paedophiles — are mostly men who have either suffered sexual abuse as children themselves, or are overwhelmed by certain psychological disorders that lead to developing questionable sexual urges.

In the 1982 anthology Behaviour Modification and Therapy, W.L. Marshall writes that paedophilia co-occurs with low self-esteem, depression and other personality disorders. These can be because of the individual’s own experiences as a sexually abused child or, according to the 2008 issue of the Journal of Psychiatric Research, paedophiles may have different brain structures which cause personality disorders and social failings, leading them to develop deviant sexual behaviours.

But why do some paedophiles end up murdering their young victims? This may be to eliminate the possibility of their victims naming them after the assault, or the young victims die because their bodies are still not developed to accommodate even the most basic sexual acts. According to a 1992 study by the Behavioural Science Unit of the Federal Bureau of Investigation (FBI) in the US, some paedophiles can also develop sadism as a disorder, which eventually compels them to derive pleasure by inflicting pain and killing their young victims.

Why did Israel kill so many children in its bombardment of Gaza? Could it be that it has something in common with apocalyptic terror groups, for whom killing children is simply collateral damage in a divinely ordained cosmic battle?

Now the question is, are modern-day governments, militaries and terrorist groups that knowingly massacre children, also driven by the same sadistic impulses? Do they extract pleasure from slaughtering children? It is possible that military massacres that include the death of a large number of children are acts of frustration and blind rage by soldiers made to fight wars that are being lost.

The March 1968 ‘My Lai massacre’, carried out by US soldiers in Vietnam, is a case in point. Over 500 people, including children, were killed in that incident. Even women carrying babies in their arms were shot dead. Just a month earlier, communist insurgents had attacked South Vietnamese cities held by US forces. The insurgents were driven out, but they were able to kill a large number of US soldiers. Also, the war in Vietnam had become unpopular in the US. Soldiers were dismayed by stories about returning US marines being insulted, ridiculed and rejected at home for fighting an unjust and immoral war.

Indeed, desperate armies have been known to kill the most vulnerable members of the enemy, such as children, in an attempt to psychologically compensate for their inability to fight effectively against their adult opponents. But what about the Israeli armed forces? What frustrations are they facing? They have successfully neutralised anti-Israel militancy. And the Palestinians and their supporters are no match against Israel’s war machine. So why did Israeli forces knowingly kill so many Palestinian children in Gaza?

A May 21, 2021 report published on the Al-Jazeera website quotes a Palestinian lawyer, Youssef al-Zayed, as saying that Israeli forces were ‘intentionally targeting minors to terrorise an entire generation from speaking out.’ Ever since 1987, Palestinian children have been in the forefront of protests against armed Israeli forces. The children are often armed with nothing more than stones.

What Israel is doing against its Arab population, and in the Palestinian Territories that are still largely under its control, can be called ‘democide.’ Coined by the American political scientist Rudolph Rummel, the word democide means acts of genocide by a government/state against a segment of its own population. Such acts constitute the systematic elimination of people belonging to minority religious or ethnic communities. According to Rummel, this is done because the persecuted communities are perceived as being ‘future threats’ by the dominant community.

So, do terrorist outfits such as TTP, Islamic State and Boko Haram, for example, who are known to also target children, do so because they see children as future threats?

In a 2018 essay for the Journal of Strategic Studies, the forensic psychologist Karl Umbrasas writes that terror outfits that kill indiscriminately can be categorised as ‘apocalyptic groups.’ According to Umbrasas, such groups operate like ‘apocalyptic cults’ and are not bound by the socio-political and moral restraints that compel non-apocalyptic militant outfits to focus their attacks on armed, non-civilian targets. Umbrasas writes that apocalyptic terror groups justify acts of indiscriminate destruction through their often distorted and violent interpretations of sacred texts.

Such groups are thus completely unrepentant about targeting even children. To them the children, too, are part of the problem that they are going to resolve through a ‘cosmic war.’ The idea of a cosmic war constitutes an imagined battle between metaphysical forces — good and evil — that is behind many cases of religion-related violence.

Interestingly, this was also how the Afghan civil war of the 1980s between Islamist groups and Soviet troops was framed by the US, Saudi Arabia and Pakistan. The cosmic bit evaporated for the three states after the departure of Soviet troops, but the idea of the cosmic conflict remained in the minds of various terror groups in the region.

The moral codes of apocalyptic terror groups transcend those of the modern world. So, for example, on May 9 this year, when a terrorist group targeted a girls’ school in Afghanistan, killing 80, it is likely it saw girl students as part of the evil side in the divinely ordained cosmic war that the group imagines itself to be fighting.

This indeed is the result of a psychotic break from reality. But it is a reality that apocalyptic terror outfits do not accept. To them, this reality is a social construct. The physical human body has no value in such misshapen metaphysical ideas. Therefore, even if a cosmic war requires the killing of children, it is just the destruction of bodies, no matter what their size.

Sunday 24 January 2021

On the Indian Farmers' Agitation for MSP

By Girish Menon


In this article I will try to explain the logic behind the Delhi protests by farmers demanding a Minimum Support Price (MSP).

Suppose you are a businessman who has produced, say, 1,000 units of a good but are able to sell only 10 units at your desired price. You are then left with an unsold stock of 990 units. You now have a choice:


Either keep them in storage and sell them to buyers who may come in the future and pay your asking price.


Or get rid of your unsold stock at whatever price the haggling buyers are willing to pay. 


If you decide on the storage option, then it follows that your goods are not perishable, their value does not diminish with age, you have adequate storage facilities, and you have the resources to continue living even while most of your goods remain unsold.


If you decide on the distress sale option, it could mean that your goods are perishable, and/or their value diminishes with age, and/or you don’t have storage facilities, and/or you are desperate to unload your stock because whatever money you get today is important for your survival.


Now consider a small farmer’s output. Such a farmer does not have the storage option available to him. Hence, he will have to sell his output to the intermediary at any price offered. This could mean a low price that results in a loss, or a high price that results in a profit for the farmer.


Whether the price is high or low depends on the volume produced by all farmers growing the same crop. And no farmer, at the moment he decides what crop to grow, can predict the price he is likely to get at harvest.


Thus a subsistence farmer, without storage facilities, is betting on the future price he could get at harvest time. This is a bet that destroys subsistence farmers from time to time when market prices turn really low due to a bumper harvest.


Subjecting subsistence farmers to ‘market forces’ means that some farmers will go bankrupt and be forced to leave their village for the city in search of a means of living. In many developed countries, governments have tried to prevent this exodus from the villages by intervening to ensure that farmers receive a decent return for their toil.


MSP is a government guarantee of a minimum price that protects farmers who cannot get their desired price at the market. The original draft of the farm law bills passed by the Indian Parliament makes no mention of MSP. Also, in states such as Punjab, some of the agitating farmers are already supported with an MSP by the state government, and they fear that the new bills will take away this protection.


This is a simple explanation of the demand for MSP.
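The argument can also be sketched in a few lines of code. This is only an illustration: the function name and all quantities and prices are hypothetical, and the MSP is modelled simply as a floor on the price the farmer receives per unit sold:

```python
# Minimal sketch of an MSP as a price floor on a farmer's revenue.
# All names and numbers here are hypothetical, for illustration only.

def farmer_revenue(quantity: float, market_price: float, msp: float = 0.0) -> float:
    """Revenue when the farmer is guaranteed at least the MSP per unit.

    With msp=0.0 there is effectively no floor, i.e. a pure market sale.
    """
    return quantity * max(market_price, msp)

q = 1000.0  # quintals harvested (assumed)

# A bumper harvest pushes the market price down to 12 per quintal.
print(farmer_revenue(q, market_price=12.0))            # 12000.0 (distress sale)
print(farmer_revenue(q, market_price=12.0, msp=18.0))  # 18000.0 (floor binds)

# When the market price of 22 exceeds the MSP, the floor is irrelevant.
print(farmer_revenue(q, market_price=22.0, msp=18.0))  # 22000.0
```

The guarantee changes the farmer’s payoff only when the market price falls below the floor, which is exactly the bumper-harvest scenario that bankrupts the subsistence farmer described above.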


It must also be remembered that:


  • Unlike the subsistence farmer, the middleman who buys the farmers’ output is usually part of a powerful cartel and enjoys far more market power than the farmer.

  • As depicted in ‘Peepli Live’, destitute farmers, if forced to leave their villages, will add to the supply of cheap labour in an era of already high unemployment.

  • These destitute farmers may squat on a city’s scarce public spaces and be an ‘eyesore’ to the better-off city dwellers.

  • Some farmers may even contemplate suicide, and this will produce less-than-desirable PR optics for any 'caring' government.



Friday 15 January 2021

Conspiracy theorists destroy a rational society: resist them

John Thornhill in The FT

Buzz Aldrin’s reaction to the conspiracy theorist who told him the moon landings never happened was understandable, if not excusable. The astronaut punched him in the face. 

Few things in life are more tiresome than engaging with cranks who refuse to accept evidence that disproves their conspiratorial beliefs — even if violence is not the recommended response. It might be easier to dismiss such conspiracy theorists as harmless eccentrics. But while that is tempting, it is in many cases wrong. 

As we have seen during the Covid-19 pandemic and in the mob assault on the US Congress last week, conspiracy theories can infect the real world — with lethal effect. Our response to the pandemic will be undermined if the anti-vaxxer movement persuades enough people not to take a vaccine. Democracies will not endure if lots of voters refuse to accept certified election results. We need to rebut unproven conspiracy theories. But how? 

The first thing to acknowledge is that scepticism is a virtue and critical scrutiny is essential. Governments and corporations do conspire to do bad things. The powerful must be effectively held to account. The US-led war against Iraq in 2003, to destroy weapons of mass destruction that never existed, is a prime example.  

The second is to re-emphasise the importance of experts, while accepting there is sometimes a spectrum of expert opinion. Societies have to base decisions on experts’ views in many fields, such as medicine and climate change, otherwise there is no point in having a debate. Dismissing the views of experts, as Michael Gove famously did during the Brexit referendum campaign, is to erode the foundations of a rational society. No sane passenger would board an aeroplane flown by an unqualified pilot.  

In extreme cases, societies may well decide that conspiracy theories are so harmful that they must suppress them. In Germany, for example, Holocaust denial is a crime. Social media platforms that do not delete such content within 24 hours of it being flagged are fined. 

In Sweden, the government is even establishing a national psychological defence agency to combat disinformation. A study published this week by the Oxford Internet Institute found “computational propaganda” is now being spread in 81 countries. 

Viewing conspiracy theories as political propaganda is the most useful way to understand them, according to Quassim Cassam, a philosophy professor at Warwick university who has written a book on the subject. In his view, many conspiracy theories support an implicit or explicit ideological goal: opposition to gun control, anti-Semitism or hostility to the federal government, for example. What matters to the conspiracy theorists is not whether their theories are true, but whether they are seductive. 

So, as with propaganda, conspiracy theories must be as relentlessly opposed as they are propagated. 

That poses a particular problem when someone as powerful as the US president is the one shouting the theories. Amid huge controversy, Twitter and Facebook have suspended Donald Trump’s accounts. But Prof Cassam says: “Trump is a mega disinformation factory. You can de-platform him and address the supply side. But you still need to address the demand side.” 

On that front, schools and universities should do more to help students discriminate fact from fiction. Behavioural scientists say it is more effective to “pre-bunk” a conspiracy theory — by enabling people to dismiss it immediately — than debunk it later. But debunking serves a purpose, too. 

As of 2019, there were 188 fact-checking sites in more than 60 countries. Their ability to inject facts into any debate can help sway those who are curious about conspiracy theories, even if they cannot convince true believers. 

Under intense public pressure, social media platforms are also increasingly filtering out harmful content and nudging users towards credible sources of information, such as medical bodies’ advice on Covid. 

Some activists have even argued for “cognitive infiltration” of extremist groups, suggesting that government agents should intervene in online chat rooms to puncture conspiracy theories. That may work in China but is only likely to backfire in western democracies, igniting an explosion of new conspiracy theories. 

Ultimately, we cannot reason people out of beliefs that they have not reasoned themselves into. But we can, and should, punish those who profit from harmful irrationality. There is a tried-and-tested method of countering politicians who peddle and exploit conspiracy theories: vote them out of office.

Tuesday 22 December 2020

Time spent in the pub is a wise investment

Sarah O'Connor in The FT


When I joined the Financial Times as a trainee in 2007, I spent a lot of time learning about credit default swaps and a similar amount of time in the pub. The CDS knowledge proved useful in the ensuing financial crisis, but 13 years later, I am glad of the hours spent in the pub too. 

It was how I got to know my colleagues, who taught me the FT’s folklore, its funny anecdotes and its subtle power dynamics. I just thought I was having fun, but an economist would have said I was building “social capital”, defined by the UK’s statistical office as “the extent and nature of our connections with others and the collective attitudes and behaviours between people that support a well-functioning, close-knit society”. 

Social capital is a fuzzy concept and hard to measure. But Covid-19 has made us think about who has it, who doesn’t, how we build it and how we lose it. 

I was near the end of my maternity leave when the pandemic started, so it has now been almost 18 months since I last worked in the office. I’m grateful every day for my store of social capital, which has helped me to stay connected, though I do get a twinge of anxiety with every new byline I don’t recognise. 

It has been much tougher for people starting out this year. If it is hard to maintain relationships via video calls, it is harder still to build them from scratch. I spoke recently to some senior accountants about their new crop of trainees. They were learning their trade, but there was no opportunity for general chit-chat before and after virtual meetings, and the trainees seemed to find it harder to ask “daft questions” in video calls than when “sitting round a table with a packet of biscuits”. 

Next year, employers will have to think creatively about how to help new employees “catch up” on forming social capital, especially in a world of “hybrid” work where people stay at home for several days a week. 

Inadequate social capital is a problem for organisations as well as individuals. Research suggests that social capital boosts efficiency by reducing transaction and monitoring costs. In other words, “society wastes resources when people distrust and are dishonest with each other”, according to Dimitri Zenghelis, leader of the Wealth Economy project at Cambridge university, which explores social and natural capital.  

I am often struck by the inefficiencies of distrustful workplaces. Companies using screenshot and mouse-tracking software can end up in a cat-and-mouse game with resentful workers using tech workarounds of their own. Employers who doubt the honesty and motivation of their staff compel line managers to hold “return to work” meetings with employees after every sickness absence, even of only a day. Factories and warehouses often have long queues at shift changes as staff go through scanners to prove they are not stealing. Covid-19 might push some employers further in this direction, particularly if they decide to use more offshore workers with whom they have no prior relationship. 

On the other hand, this year’s forced experiment with homeworking has made some employers realise their staff can be trusted to work productively without oversight. The key will be to hold on to that trust, and the efficiencies it brings, rather than slip back into old habits of micromanagement. 

Social capital matters for economies, too. For his book Extreme Economies, economist Richard Davies travelled to nine unusual places, from a refugee camp in Jordan to an Indonesian town destroyed by the 2004 tsunami. He was struck by how societies with higher social capital were more resilient when disaster struck. In Glasgow, by contrast, he argued that the replacement of tenement homes with tower blocks had dismantled the social capital of the people who lived there, making it harder for them to cope with economic decline. 

For both individuals and economies, social capital is an important buffer against unexpected hardships. Yet in the UK, where the Office for National Statistics has been trying to track various indicators of social capital over time, the trend has not been good. We exchanged favours or stopped to talk with our neighbours less often in 2017/18 than we did in 2011/12. Our sense of belonging to our neighbourhoods also fell. Parents became less likely to regularly give help to, and receive help from, their adult children. 

The pandemic has strained our ability to maintain the bonds between us, but it has also reminded us just how important they are. Any plan to “build back better” when the crisis ends should include plenty of time in the pub.

Saturday 15 April 2017

Telling children 'hard work gets you to the top' is simply a lie

Hashi Mohamed in The Guardian


I know about social mobility: I went to underperforming state schools, and am now a barrister. Could somebody take the same route today? It’s highly unlikely




Photograph: Mick Tsikas/AAP


It is a common promise made to the next generation. “If you work hard, and do the right thing, you will be able to get on in life.” I believe that it is a promise that we have no capacity to fulfil. And that’s because its underlying assumptions must be revisited.

Imagine a life lived in quads. You attend a highly prestigious school in which you dash from one quad to the next for your classes. You then continue on to yet another prestigious institution for your tertiary education, say Oxford or Cambridge University, and yet more quads with manicured lawns. Then you end up in the oasis of Middle Temple working as a barrister: more manicured lawns and, yes, you guessed it, more quads. You have clearly led a very square and straight life, effortlessly gliding from one world to the next with clear continuity, familiarity and ease.

Now contrast the above oasis with the overcrowded and under-performing schools of inner cities, going home to a bedroom which you share with many other siblings. A home you are likely to vacate when the council can’t house you there anymore. Perhaps a single-parent household where you have caring duties at a young age, or a household where no one works. A difficult neighbourhood where the poverty of ambition is palpable, stable families a rarity, and role models very scarce.




The former trajectory, in some or all its forms, is much more likely to lend itself to a more successful life in Britain. The latter means you may have the grades and talent, despite the odds, but you’re still lacking the crucial ingredients essential to succeeding. I don’t have to imagine much of this. I have experienced both of these extremes in my short lifetime.

My mother gave birth to 12 children. I arrived in London at the age of nine, speaking practically no English. I attended some of the worst performing schools in inner-city London and was raised exclusively on state benefits. Many years later I was lucky enough to attend Oxford on a full scholarship for my postgraduate degree. Now as a barrister I am a lifetime member of The Honourable Society of Lincoln’s Inn.

Is my route possible for anyone in the next generation with whom I share a similar background? I believe not. And this is not because they are any less able or less dedicated to succeed.

What I have learned in this short period of time is that the pervasive narrative of “if you work hard you will get on” is a complete myth. It is not true, and we need to stop saying it. “Working hard, and doing the right thing” barely gets you to the starting line, and it means something completely different depending on the context in which it is applied. So much more is required.

I have come to understand that the systems that underpin the top professions in Britain are set up to serve only a certain section of society: they’re readily identifiable by privileged backgrounds, particular schools and accents. To some this may seem obvious, so writing it may be superfluous. But it wasn’t obvious to me growing up, and it isn’t obvious to many others. The unwritten rules are rarely shared and “diversity” and “open recruitment” have tried but made little if any difference.

Those inside the system then naturally recruit in their own image. This then entrenches the lack of any potential for upward mobility and means that the vast majority are excluded.

As a form of short-term distraction, we are obsessed with elevating token success stories which distort the overall picture: the story of the Somali boy who got a place at Eton, or the girl from the East End who is now going to MIT. These stories may seem inspiring at first blush, but they skew the complex picture that exists in deprived communities. They perpetuate the simple notion that all that is required is working hard, and that everything else afterwards falls neatly into place. This simple ritual we seem to constantly engage in is therefore as much about setting up false hopes for other children as it is about privileged, middle-class-led institutions making themselves feel good.

The reality is that there are many like them trying hard to do better, but who may lack the environment to fully realise their potential. Are they worth less? When they are told to “dream big” and it will happen, who will tell them that their failure had nothing to do with a lack of vision? Real success, especially from their starting point, often boils down to a complex combination of circumstances: luck, sustained stability, the right teachers at the right time, and even the absence of grief at crucial, destabilising junctures.

Improving educational attainment is critical, and much progress has been made over the years. But it is not enough. Employers must see hiring youngsters from poorer backgrounds as good for business as well as for a fairer society. These young people must be given a real chance to succeed, in a non-judgmental and inclusive environment. Employers must do more to focus on potential rather than polish. More leadership and more risk-taking are required on this front.
Perversely, class and accent remain overwhelmingly important ways of judging intelligence in Britain. In France or Germany, for example, your accent rarely matters: your vocabulary and conjugation will give much more away, but never your accent, apart perhaps from regional ones. I don’t see this mindset shifting, so my advice to youngsters has remained the same: you need to adapt. You need to find the right way to speak to different people, at different times, in different contexts. This is not compromising who you are, but adapting to your surroundings.

We need to double down on improving the environments, both at home and at school, that continuously constrain potential. If the adage that hard work truly matters is to ring true, then we must do more – at all levels of society – to make it a reality.

Why rightwingers are desperate for Sweden to ‘fail’

Christian Christensen in The Guardian

Of course Sweden isn’t perfect, but those who love to portray it as teeming with terrorists and naive about reality are just cynical hypocrites

‘When terrible events take place, they are framed as evidence of the decline and fall of the European social democratic project, the failure of European immigration policies and of Swedish innocence lost.’ Photograph: Fredrik Sandberg/AFP/Getty Images



There are few countries in the world that have “lost their innocence” as many times as Sweden. Even before a suspected terrorist and Isis supporter killed four and injured many more in last week’s attack in central Stockholm, Sweden’s policies were being portrayed on the programmes of Fox News and pages of the Daily Mail as, at best, exercises in well-meaning-but-naive multiculturalism, and at worst terrorist appeasement.
So, when terrible events take place, they are framed as evidence of the decline and fall of the European social democratic project, the failure of European immigration policies and of Swedish innocence lost.

When Donald Trump argued against the intake of Syrian refugees to the US earlier this year, he used supposed problems in Sweden as part of his rationale. “You look at what’s happening last night in Sweden,” the president said at a rally in Florida in February. “Sweden. Who would believe this? Sweden. They took in large numbers. They’re having problems like they never thought possible.” The White House later clarified that Trump had been speaking about general “rising crime”, when he seemed to be describing a then non-existent terror attack.


The obsession with Sweden has a lot to do with the country’s history of taking in refugees and asylum seekers, combined with social democratic politics. Both are poison to the political right. When prime minister Olof Palme was shot walking home (without bodyguards) from a cinema in 1986, we were told that Swedish innocence and utopian notions of a non-violent society had come to an end. But Swedes miraculously regained their innocence, only to lose it again in 2003 when the popular foreign minister Anna Lindh (also without bodyguards) was stabbed to death in a Stockholm department store. This possession and dispossession of innocence – which some call naivety – has ebbed and flowed with the years.

The election to parliament and subsequent rise of the anti-immigration Sweden Democrats were discussed in similar terms, as was the decision in late 2015 by the Swedish government to halt the intake of refugees after a decades-long policy of humanitarian acceptance.

Yet the notion of a doe-eyed Sweden buffeted by the cruel winds of the real world is a nonsense. Sweden is an economic power – usually found near the top of rankings of innovative and competitive economies. Companies that are household names, from H&M to Ericsson and Skype, and the food-packaging giant Tetra Pak, are Swedish. It plays the capitalist game better than most (and not always in an ethical manner). The country is, per capita, one of the largest weapons exporters in the world. As for the argument that Swedes are in denial, unwilling to discuss the impact of immigration? This comes as news to citizens who see the issue addressed regularly in the Swedish media, most obviously in the context of the rise of the Sweden Democrats.

Between 2014 and 2016, Sweden received roughly 240,000 asylum seekers: far and away the most refugees per capita in Europe. But the process has not been smooth. Throughout 2016 and 2017, the issue of men leaving Sweden to fight for Isis has been a major story, as has the Swedish government’s perceived lack of preparation about what to do when these fighters return. There is also much debate on the practice of gender segregation in some Muslim schools in Sweden.

As Stockholm goes through a period of mourning for last week’s attack, it is worth asking: is Sweden the country divorced from reality? If we are speaking of naivety in relation to terrorism, a good place to start might be US foreign policy in the Middle East, and not Sweden’s humanitarian intake of the immigrants and refugees created (at least in part) as a result of that US policy.

Has Swedish immigration policy always been well thought-out? No. Is Sweden marked by social and economic divisions? Yes. But the presentation of Sweden as some kind of case study in failed utopianism often comes from those who talk a big game on democracy, human rights and equality, but who refuse to move beyond talk into action.
So, when pundits and experts opine on Swedish “innocence lost” it is worth remembering that Sweden has never been innocent. It is also worth remembering that Sweden was willing to put its money where its mouth was when it came to taking in refugees and immigrants fleeing the conflicts and instability fuelled by countries unwilling to deal with the consequences of their actions. This shirking of responsibility while condemning the efforts of others is far worse than being naive. It’s cynical hypocrisy.

Wednesday 15 February 2017

In an age of robots, schools are teaching our children to be redundant

Illustration by Andrzej Krauze


George Monbiot in The Guardian


In the future, if you want a job, you must be as unlike a machine as possible: creative, critical and socially skilled. So why are children being taught to behave like machines?

Children learn best when teaching aligns with their natural exuberance, energy and curiosity. So why are they dragooned into rows and made to sit still while they are stuffed with facts?

We succeed in adulthood through collaboration. So why is collaboration in tests and exams called cheating?

Governments claim to want to reduce the number of children being excluded from school. So why are their curriculums and tests so narrow that they alienate any child whose mind does not work in a particular way?

The best teachers use their character, creativity and inspiration to trigger children’s instinct to learn. So why are character, creativity and inspiration suppressed by a stifling regime of micromanagement?

There is, as Graham Brown-Martin explains in his book Learning {Re}imagined, a common reason for these perversities. Our schools were designed to produce the workforce required by 19th-century factories. The desired product was workers who would sit silently at their benches all day, behaving identically, to produce identical products, submitting to punishment if they failed to achieve the requisite standards. Collaboration and critical thinking were just what the factory owners wished to discourage.

As far as relevance and utility are concerned, we might as well train children to operate a spinning jenny. Our schools teach skills that are not only redundant but counter-productive. Our children suffer this life-defying, dehumanising system for nothing.



The less relevant the system becomes, the harder the rules must be enforced, and the greater the stress they inflict. One school’s current advertisement in the Times Educational Supplement asks: “Do you like order and discipline? Do you believe in children being obedient every time? … If you do, then the role of detention director could be for you.” Yes, many schools have discipline problems. But is it surprising when children, bursting with energy and excitement, are confined to the spot like battery chickens?

Teachers are now leaving the profession in droves, their training wasted and their careers destroyed by overwork and a spirit-crushing regime of standardisation, testing and top-down control. The less autonomy they are granted, the more they are blamed for the failures of the system. A major recruitment crisis beckons, especially in crucial subjects such as physics and design and technology. This is what governments call efficiency.

Any attempt to change the system, to equip children for the likely demands of the 21st century, rather than those of the 19th, is demonised by governments and newspapers as “social engineering”. Well, of course it is. All teaching is social engineering. At present we are stuck with the social engineering of an industrial workforce in a post-industrial era. Under Donald Trump’s education secretary, Betsy DeVos, and a nostalgic government in Britain, it’s likely only to become worse.




When they are allowed to apply their natural creativity and curiosity, children love learning. They learn to walk, to talk, to eat and to play spontaneously, by watching and experimenting. Then they get to school, and we suppress this instinct by sitting them down, force-feeding them with inert facts and testing the life out of them.

There is no single system for teaching children well, but the best ones have this in common: they open up rich worlds that children can explore in their own ways, developing their interests with help rather than indoctrination. For example, the Essa academy in Bolton gives every pupil an iPad, on which they create projects, share material with their teachers and each other, and can contact their teachers with questions about their homework. By reducing their routine tasks, this system enables teachers to give the children individual help.

Other schools have gone in the opposite direction, taking children outdoors and using the natural world to engage their interests and develop their mental and physical capacities (the Forest School movement promotes this method). But it’s not a matter of high-tech or low-tech; the point is that the world a child enters is rich and diverse enough to ignite their curiosity, and allow them to discover a way of learning that best reflects their character and skills.

There are plenty of teaching programmes designed to work with children, not against them. For example, the Mantle of the Expert encourages them to form teams of inquiry, solving an imaginary task – such as running a container port, excavating a tomb or rescuing people from a disaster – that cuts across traditional subject boundaries. A similar approach, called Quest to Learn, is based on the way children teach themselves to play games. To solve the complex tasks they’re given, they need to acquire plenty of information and skills. They do it with the excitement and tenacity of gamers.







The Reggio Emilia approach, developed in Italy, allows children to develop their own curriculum, based on what interests them most, opening up the subjects they encounter along the way with the help of their teachers. Ashoka Changemaker schools treat empathy as “a foundational skill on a par with reading and math”, and use it to develop the kind of open, fluid collaboration that, they believe, will be the 21st century’s key skill.

The first multi-racial school in South Africa, Woodmead, developed a fully democratic method of teaching, whose rules and discipline were overseen by a student council. Its integrated studies programme, like the new system in Finland, junked traditional subjects in favour of the students’ explorations of themes, such as gold, or relationships, or the ocean. Among its alumni are some of South Africa’s foremost thinkers, politicians and businesspeople.

In countries such as Britain and the United States, such programmes succeed despite the system, not because of it. Had these governments set out to ensure that children find learning difficult and painful, they could not have done a better job. Yes, let’s have some social engineering. Let’s engineer our children out of the factory and into the real world.

Tuesday 7 February 2017

The hi-tech war on science fraud

Stephen Buranyi in The Guardian


One morning last summer, a German psychologist named Mathias Kauff woke up to find that he had been reprimanded by a robot. In an email, a computer program named Statcheck informed him that a 2013 paper he had published on multiculturalism and prejudice appeared to contain a number of incorrect calculations – which the program had catalogued and then posted on the internet for anyone to see. The problems turned out to be minor – just a few rounding errors – but the experience left Kauff feeling rattled. “At first I was a bit frightened,” he said. “I felt a bit exposed.”

Kauff wasn’t alone. Statcheck had read some 50,000 published psychology papers and checked the maths behind every statistical result it encountered. In the space of 24 hours, virtually every academic active in the field in the past two decades had received an email from the program, informing them that their work had been reviewed. Nothing like this had ever been seen before: a massive, open, retroactive evaluation of scientific literature, conducted entirely by computer.

Statcheck’s method was relatively simple, more like the mathematical equivalent of a spellchecker than a thoughtful review, but some scientists saw it as a new form of scrutiny and suspicion, portending a future in which the objective authority of peer review would be undermined by unaccountable and uncredentialed critics.
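In rough terms, a Statcheck-style consistency check works like this: parse a reported result such as “t(28) = 2.20, p = .04”, recompute the p-value implied by the test statistic and degrees of freedom, and flag a mismatch at the reported precision. The sketch below is only an illustration of that idea, not Statcheck’s actual code (the real tool is an R package covering many more test types); the `check_reported` helper and its input format are my own, and the exact t-distribution p-value is computed with a standard incomplete-beta continued fraction.

```python
import math
import re

def _betacf(a, b, x, max_iter=200, eps=3e-12, fpmin=1e-300):
    # Continued fraction for the regularized incomplete beta function
    # (modified Lentz's method, as in standard numerical references).
    qab, qap, qam = a + b, a + 1.0, a - 1.0
    c, d = 1.0, 1.0 - qab * x / qap
    d = fpmin if abs(d) < fpmin else d
    d = 1.0 / d
    h = d
    for m in range(1, max_iter + 1):
        m2 = 2 * m
        aa = m * (b - m) * x / ((qam + m2) * (a + m2))
        d = 1.0 + aa * d
        d = fpmin if abs(d) < fpmin else d
        c = 1.0 + aa / c
        c = fpmin if abs(c) < fpmin else c
        d = 1.0 / d
        h *= d * c
        aa = -(a + m) * (qab + m) * x / ((a + m2) * (qap + m2))
        d = 1.0 + aa * d
        d = fpmin if abs(d) < fpmin else d
        c = 1.0 + aa / c
        c = fpmin if abs(c) < fpmin else c
        d = 1.0 / d
        delta = d * c
        h *= delta
        if abs(delta - 1.0) < eps:
            break
    return h

def _betai(a, b, x):
    # Regularized incomplete beta function I_x(a, b).
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    ln_bt = (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
             + a * math.log(x) + b * math.log(1.0 - x))
    bt = math.exp(ln_bt)
    if x < (a + 1.0) / (a + b + 2.0):
        return bt * _betacf(a, b, x) / a
    return 1.0 - bt * _betacf(b, a, 1.0 - x) / b

def t_two_sided_p(t, df):
    # Two-sided p-value P(|T_df| >= |t|) for a Student t statistic.
    return _betai(df / 2.0, 0.5, df / (df + t * t))

def check_reported(result):
    # Parse a reported result like "t(28) = 2.20, p = .04" and compare
    # the recomputed p-value with the reported one at its precision.
    m = re.match(r"t\((\d+)\)\s*=\s*([-\d.]+),\s*p\s*=\s*(\.\d+)", result)
    df, t, p_reported = int(m.group(1)), float(m.group(2)), float(m.group(3))
    decimals = len(m.group(3)) - 1          # digits after the decimal point
    p_computed = t_two_sided_p(t, df)
    consistent = abs(p_computed - p_reported) < 0.5 * 10 ** -decimals + 1e-9
    return round(p_computed, decimals), consistent

print(check_reported("t(28) = 2.20, p = .04"))   # consistent
print(check_reported("t(28) = 2.20, p = .01"))   # flagged as inconsistent
```

Like a spellchecker, this catches only internal inconsistencies between numbers on the page; it says nothing about whether the underlying data are real.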

Susan Fiske, the former head of the Association for Psychological Science, wrote an op-ed accusing “self-appointed data police” of pioneering a new “form of harassment”. The German Psychological Society issued a statement condemning the unauthorised use of Statcheck. The intensity of the reaction suggested that many were afraid that the program was not just attributing mere statistical errors, but some impropriety, to the scientists.

The man behind all this controversy was a 25-year-old Dutch scientist named Chris Hartgerink, based at Tilburg University’s Meta-Research Center, which studies bias and error in science. Statcheck was the brainchild of Hartgerink’s colleague Michèle Nuijten, who had used the program to conduct a 2015 study that demonstrated that about half of all papers in psychology journals contained a statistical error. Nuijten’s study was written up in Nature as a valuable contribution to the growing literature acknowledging bias and error in science – but she had not published an inventory of the specific errors it had detected, or the authors who had committed them. The real flashpoint came months later, when Hartgerink modified Statcheck with some code of his own devising, which catalogued the individual errors and posted them online – sparking uproar across the scientific community.

Hartgerink is one of only a handful of researchers in the world who work full-time on the problem of scientific fraud – and he is perfectly happy to upset his peers. “The scientific system as we know it is pretty screwed up,” he told me last autumn. Sitting in the offices of the Meta-Research Center, which look out on to Tilburg’s grey, mid-century campus, he added: “I’ve known for years that I want to help improve it.” Hartgerink approaches his work with a professorial seriousness – his office is bare, except for a pile of statistics textbooks and an equation-filled whiteboard – and he is appealingly earnest about his aims. His conversations tend to rapidly ascend to great heights, as if they were balloons released from his hands – the simplest things soon become grand questions of ethics, or privacy, or the future of science.

“Statcheck is a good example of what is now possible,” he said. The top priority, for Hartgerink, is something much more grave than correcting simple statistical miscalculations. He is now proposing to deploy a similar program that will uncover fake or manipulated results – which he believes are far more prevalent than most scientists would like to admit.

When it comes to fraud – or in the more neutral terms he prefers, “scientific misconduct” – Hartgerink is aware that he is venturing into sensitive territory. “It is not something people enjoy talking about,” he told me, with a weary grin. Despite its professed commitment to self-correction, science is a discipline that relies mainly on a culture of mutual trust and good faith to stay clean. Talking about its faults can feel like a kind of heresy. In 1981, when a young Al Gore led a congressional inquiry into a spate of recent cases of scientific fraud in biomedicine, the historian Daniel Kevles observed that “for Gore and for many others, fraud in the biomedical sciences was akin to pederasty among priests”.

The comparison is apt. The exposure of fraud directly threatens the special claim science has on truth, which relies on the belief that its methods are purely rational and objective. As the congressmen warned scientists during the hearings, “each and every case of fraud serves to undermine the public’s trust in the research enterprise of our nation”.

But three decades later, scientists still have only the most crude estimates of how much fraud actually exists. The current accepted standard is a 2009 study by the Stanford researcher Daniele Fanelli that collated the results of 21 previous surveys given to scientists in various fields about research misconduct. The studies, which depended entirely on scientists honestly reporting their own misconduct, concluded that about 2% of scientists had falsified data at some point in their career.

If Fanelli’s estimate is correct, it seems likely that thousands of scientists are getting away with misconduct each year. Fraud – including outright fabrication, plagiarism and self-plagiarism – accounts for the majority of retracted scientific articles. But, according to RetractionWatch, which catalogues papers that have been withdrawn from the scientific literature, only 684 were retracted in 2015, while more than 800,000 new papers were published. If even just a few of the suggested 2% of scientific fraudsters – which, relying on self-reporting, is itself probably a conservative estimate – are active in any given year, the vast majority are going totally undetected. “Reviewers and editors, other gatekeepers – they’re not looking for potential problems,” Hartgerink said.
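The article’s own figures make the detection gap concrete. As a back-of-the-envelope sketch (the naive assumption that fraudulent authors publish at the average rate is mine, not the article’s):

```python
# Rough arithmetic behind the detection gap described above.
papers_2015 = 800_000      # new papers published in 2015 (article's figure)
retractions_2015 = 684     # papers retracted in 2015 (RetractionWatch figure)
fraud_rate = 0.02          # Fanelli's self-reported falsification rate

# Naive assumption: fraudulent scientists publish at the average rate,
# so roughly 2% of new papers would involve falsified data.
implied_fraudulent = papers_2015 * fraud_rate      # 16,000 papers
detected_fraction = retractions_2015 / implied_fraudulent

print(f"implied fraudulent papers: {implied_fraudulent:.0f}")
print(f"retractions as a share of those: {detected_fraction:.1%}")
```

Even this overstates detection, since most retractions in a given year concern papers published earlier, and not all retractions involve fraud.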

But if none of the traditional authorities in science are going to address the problem, Hartgerink believes that there is another way. If a program similar to Statcheck can be trained to detect the traces of manipulated data, and then make those results public, the scientific community can decide for itself whether a given study should still be regarded as trustworthy.

Hartgerink’s university, which sits at the western edge of Tilburg, a small, quiet city in the southern Netherlands, seems an unlikely place to try and correct this hole in the scientific process. The university is best known for its economics and business courses and does not have traditional lab facilities. But Tilburg was also the site of one of the biggest scientific scandals in living memory – and no one knows better than Hartgerink and his colleagues just how devastating individual cases of fraud can be.

In September 2010, the School of Social and Behavioral Science at Tilburg University appointed Diederik Stapel, a promising young social psychologist, as its new dean. Stapel was already popular with students for his warm manner, and with the faculty for his easy command of scientific literature and his enthusiasm for collaboration. He would often offer to help his colleagues, and sometimes even his students, by conducting surveys and gathering data for them.

As dean, Stapel appeared to reward his colleagues’ faith in him almost immediately. In April 2011 he published a paper in Science, the first study the small university had ever landed in that prestigious journal. Stapel’s research focused on what psychologists call “priming”: the idea that small stimuli can affect our behaviour in unnoticed but significant ways. “Could being discriminated against depend on such seemingly trivial matters as garbage on the streets?” Stapel’s paper in Science asked. He proceeded to show that white commuters at the Utrecht railway station tended to sit further away from visible minorities when the station was dirty. Similarly, Stapel found that white people were more likely to give negative answers on a quiz about minorities if they were interviewed on a dirty street, rather than a clean one.

Stapel had a knack for devising and executing such clever studies, cutting through messy problems to extract clean data. Since becoming a professor a decade earlier, he had published more than 100 papers, showing, among other things, that beauty product advertisements, regardless of context, prompted women to think about themselves more negatively, and that judges who had been primed to think about concepts of impartial justice were less likely to make racially motivated decisions.

His findings regularly reached the public through the media. The idea that huge, intractable social issues such as sexism and racism could be affected in such simple ways had a powerful intuitive appeal, and hinted at the possibility of equally simple, elegant solutions. If anything united Stapel’s diverse interests, it was this Gladwellian bent. His studies were often featured in the popular press, including the Los Angeles Times and New York Times, and he was a regular guest on Dutch television programmes.

But as Stapel’s reputation skyrocketed, a small group of colleagues and students began to view him with suspicion. “It was too good to be true,” a professor who was working at Tilburg at the time told me. (The professor, who I will call Joseph Robin, asked to remain anonymous so that he could frankly discuss his role in exposing Stapel.) “All of his experiments worked. That just doesn’t happen.”

A student of Stapel’s had mentioned to Robin in 2010 that some of Stapel’s data looked strange, so that autumn, shortly after Stapel was made Dean, Robin proposed a collaboration with him, hoping to see his methods first-hand. Stapel agreed, and the data he returned a few months later, according to Robin, “looked crazy. It was internally inconsistent in weird ways; completely unlike any real data I had ever seen.” Meanwhile, as the student helped get hold of more datasets from Stapel’s former students and collaborators, the evidence mounted: more “weird data”, and identical sets of numbers copied directly from one study to another.
In August 2011, the whistleblowers took their findings to the head of the department, Marcel Zeelenberg, who confronted Stapel with the evidence. At first, Stapel denied the charges, but just days later he admitted what his accusers suspected: he had never interviewed any commuters at the railway station, no women had been shown beauty advertisements and no judges had been surveyed about impartial justice and racism.

Stapel hadn’t just tinkered with numbers, he had made most of them up entirely, producing entire datasets at home in his kitchen after his wife and children had gone to bed. His method was an inversion of the proper scientific method: he started by deciding what result he wanted and then worked backwards, filling out the individual “data” points he was supposed to be collecting.

On 7 September 2011, the university revealed that Stapel had been suspended. The media initially speculated that there might have been an issue with his latest study – announced just days earlier, showing that meat-eaters were more selfish and less sociable – but the problem went much deeper. Stapel’s students and colleagues were about to learn that his enviable skill with data was, in fact, a sham, and his golden reputation, as well as nearly a decade of results that they had used in their own work, were built on lies.

Chris Hartgerink was studying late at the library when he heard the news. The extent of Stapel’s fraud wasn’t clear by then, but it was big. Hartgerink, who was then an undergraduate in the Tilburg psychology programme, felt a sudden disorientation, a sense that something solid and integral had been lost. Stapel had been a mentor to him, hiring him as a research assistant and giving him constant encouragement. “This is a guy who inspired me to actually become enthusiastic about research,” Hartgerink told me. “When that reason drops out, what remains, you know?”

Hartgerink wasn’t alone; the whole university was stunned. “It was a really difficult time,” said one student who had helped expose Stapel. “You saw these people on a daily basis who were so proud of their work, and you know it’s just based on a lie.” Even after Stapel resigned, the media coverage was relentless. Reporters roamed the campus – first from the Dutch press, and then, as the story got bigger, from all over the world.

On 9 September, just two days after Stapel was suspended, the university convened an ad-hoc investigative committee of current and former faculty. To help determine the true extent of Stapel’s fraud, the committee turned to Marcel van Assen, a statistician and psychologist in the department. At the time, Van Assen was growing bored with his current research, and the idea of investigating the former dean sounded like fun to him. Van Assen had never much liked Stapel, believing that he relied more on the force of his personality than reason when running the department. “Some people believe him charismatic,” Van Assen told me. “I am less sensitive to it.”

Van Assen – who is 44, tall and rangy, with a mop of greying, curly hair – approaches his work with relentless, unsentimental practicality. When speaking, he maintains an amused, half-smile, as if he is joking. He once told me that to fix the problems in psychology, it might be simpler to toss out 150 years of research and start again; I’m still not sure whether or not he was serious.

To prove misconduct, Van Assen said, you must be a pitbull: biting deeper and deeper, clamping down not just on the papers, but the datasets behind them, the research methods, the collaborators – using everything available to bring down the target. He spent a year breaking down the 45 studies Stapel produced at Tilburg and cataloguing their individual aberrations, noting where the effect size – a standard measure of the difference between the two groups in an experiment – seemed suspiciously large, where sequences of numbers were copied, where variables were too closely related, or where variables that should have moved in tandem instead appeared adrift.

The committee released its final report in October 2012 and, based largely on its conclusions, 55 of Stapel’s publications were officially retracted by the journals that had published them. Stapel also returned his PhD to the University of Amsterdam. He is, by any measure, one of the biggest scientific frauds of all time. (RetractionWatch has him third on their all-time retraction leaderboard.) The committee also had harsh words for Stapel’s colleagues, concluding that “from the bottom to the top, there was a general neglect of fundamental scientific standards”. “It was a real blow to the faculty,” Jacques Hagenaars, a former professor of methodology at Tilburg, who served on the committee, told me.

By extending some of the blame to the methods and attitudes of the scientists around Stapel, the committee situated the case within a larger problem that was attracting attention at the time, which has come to be known as the “replication crisis”. For the past decade, the scientific community has been grappling with the discovery that many published results cannot be reproduced independently by other scientists – in spite of the traditional safeguards of publishing and peer-review – because the original studies were marred by some combination of unchecked bias and human error.

After the committee disbanded, Van Assen found himself fascinated by the way science is susceptible to error, bias, and outright fraud. Investigating Stapel had been exciting, and he had no interest in returning to his old work. Van Assen had also found a like mind, a new professor at Tilburg named Jelte Wicherts, who had a long history working on bias in science and who shared his attitude of upbeat cynicism about the problems in their field. “We simply agree, there are findings out there that cannot be trusted,” Van Assen said. They began planning a new sort of research group: one that would investigate the very practice of science.

Van Assen does not like assigning Stapel too much credit for the creation of the Meta-Research Center, which hired its first students in late 2012, but there is an undeniable symmetry: he and Wicherts have created, in Stapel’s old department, a platform to investigate the sort of “sloppy science” and misconduct that very department had been condemned for.

Hartgerink joined the group in 2013. “For many people, certainly for me, Stapel launched an existential crisis in science,” he said. After Stapel’s fraud was exposed, Hartgerink struggled to find “what could be trusted” in his chosen field. He began to notice how easy it was for scientists to subjectively interpret data – or manipulate it. For a brief time he considered abandoning a future in research and joining the police.




Van Assen, who Hartgerink met through a statistics course, helped put him on another path. Hartgerink learned that a growing number of scientists in every field were coming to agree that the most urgent task for their profession was to establish what results and methods could still be trusted – and that many of these people had begun to investigate the unpredictable human factors that, knowingly or not, knocked science off its course. What was more, he could be a part of it. Van Assen offered Hartgerink a place in his yet-unnamed research group. All of the current projects were on errors or general bias, but Van Assen proposed they go out and work closer to the fringes, developing methods that could detect fake data in published scientific literature.

“I’m not normally an expressive person,” Hartgerink told me. “But I said: ‘Hell, yes. Let’s do that.’”

Hartgerink and Van Assen believe not only that most scientific fraud goes undetected, but that the true rate of misconduct is far higher than 2%. “We cannot trust self-reports,” Van Assen told me. “If you ask people, ‘At the conference, did you cheat on your fiancee?’ – people will very likely not admit this.”

Uri Simonsohn, a psychology professor at the University of Pennsylvania’s Wharton School who gained notoriety as a “data vigilante” for exposing two serious cases of fraud in his field in 2012, believes that as much as 5% of all published research contains fraudulent data. “It’s not only in the periphery, it’s not only in the journals people don’t read,” he told me. “There are probably several very famous papers that have fake data, and very famous people who have done it.”

But as long as it remains undiscovered, there is a tendency for scientists to dismiss fraud in favour of more widely documented – and less seedy – issues. Even Arturo Casadevall, an American microbiologist who has published extensively on the rate, distribution, and detection of fraud in science, told me that despite his personal interest in the topic, my time would be better spent investigating the broader issues driving the replication crisis. Fraud, he said, was “probably a relatively minor problem in terms of the overall level of science”.

This way of thinking goes back at least as far as scientists have been grappling with high-profile cases of misconduct. In 1983, Peter Medawar, the British immunologist and Nobel laureate, wrote in the London Review of Books: “The number of dishonest scientists cannot, of course, be known, but even if they were common enough to justify scary talk of ‘tips of icebergs’, they have not been so numerous as to prevent science’s having become the most successful enterprise (in terms of the fulfilment of declared ambitions) that human beings have ever engaged upon.”

From this perspective, as long as science continues doing what it does well – as long as genes are sequenced and chemicals classified and diseases reliably identified and treated – then fraud will remain a minor concern. But while this may be true in the long run, it may also be dangerously complacent. Furthermore, scientific misconduct can cause serious harm, as, for instance, in the case of patients treated by Paolo Macchiarini, a doctor at the Karolinska Institute in Sweden who allegedly misrepresented the effectiveness of an experimental surgical procedure he had developed. Macchiarini is currently being investigated by a Swedish prosecutor after several of the patients who received the procedure later died.

Even in the more mundane business of day-to-day research, scientists are constantly building on past work, relying on its solidity to underpin their own theories. If misconduct really is as widespread as Hartgerink and Van Assen think, then false results are strewn across scientific literature, like unexploded mines that threaten any new structure built over them. At the very least, if science is truly invested in its ideal of self-correction, it seems essential to know the extent of the problem.

But there is little motivation within the scientific community to ramp up efforts to detect fraud. Part of this has to do with the way the field is organised. Science isn’t a traditional hierarchy, but a loose confederation of research groups, institutions, and professional organisations. Universities are clearly central to the scientific enterprise, but they are not in the business of evaluating scientific results, and as long as fraud doesn’t become public they have little incentive to go after it. There is also the questionable, though widespread, perception within the scientific community that measures are already in place to preclude fraud. When Gore and his fellow congressmen held their hearings 35 years ago, witnesses routinely insisted that science had a variety of self-correcting mechanisms, such as peer review and replication. But, as the science journalists William Broad and Nicholas Wade pointed out at the time, the vast majority of cases of fraud are actually exposed by whistleblowers, and that holds true to this day.

And so the enormous task of keeping science honest is left to individual scientists in the hope that they will police themselves, and each other. “Not only is it not sustainable,” said Simonsohn, “it doesn’t even work. You only catch the most obvious fakers, and only a small share of them.” There is also the problem of relying on whistleblowers, who face the thankless and emotionally draining prospect of accusing their own colleagues of fraud. (“It’s like saying someone is a paedophile,” one of the students at Tilburg told me.) Neither Simonsohn nor any of the Tilburg whistleblowers I interviewed said they would come forward again. “There is no way we as a field can deal with fraud like this,” the student said. “There has to be a better way.”

In the winter of 2013, soon after Hartgerink began working with Van Assen, they began to investigate another social psychology researcher who they noticed was reporting suspiciously large effect sizes, one of the “tells” that doomed Stapel. When they requested that the researcher provide additional data to verify her results, she stalled – claiming that she was undergoing treatment for stomach cancer. Months later, she informed them that she had deleted all the data in question. But instead of contacting the researcher’s co-authors for copies of the data, or digging deeper into her previous work, they opted to let it go.

They had been thoroughly stonewalled, and they knew that trying to prosecute individual cases of fraud – the “pitbull” approach that Van Assen had taken when investigating Stapel – would never expose more than a handful of dishonest scientists. What they needed was a way to analyse vast quantities of data in search of signs of manipulation or error, which could then be flagged for public inspection without necessarily accusing the individual scientists of deliberate misconduct. After all, putting a fence around a minefield has many of the same benefits as clearing it, with none of the tricky business of digging up the mines.

As Van Assen had earlier argued in a letter to the journal Nature, the traditional approach to investigating other scientists was needlessly fraught – since it combined the messy task of proving that a researcher had intended to commit fraud with a much simpler technical problem: whether the data underlying their results was valid. The two issues, he argued, could be separated.

Scientists can commit fraud in a multitude of ways. In 1974, the American immunologist William Summerlin famously tried to pass off a patch of mouse skin darkened with a permanent marker as a successful interspecies skin graft. But most instances are more mundane: the majority of fraud cases in recent years have emerged from scientists either falsifying images – deliberately mislabelling scans and micrographs – or fabricating or altering their recorded data. And scientists have used statistical tests to scrutinise each other’s data since at least the 1930s, when Ronald Fisher, the father of biostatistics, used a basic chi-squared test to suggest that Gregor Mendel, the father of genetics, had cherrypicked some of his data.
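
Fisher's check is easy to reproduce. The sketch below (plain Python, an illustration rather than a reconstruction of Fisher's full analysis) runs a chi-squared goodness-of-fit test on Mendel's published counts of 5,474 round and 1,850 wrinkled pea seeds against the predicted 3:1 ratio:

```python
import math

def chi2_gof(observed, expected):
    """Chi-squared goodness-of-fit statistic for observed vs expected counts."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def chi2_sf_df1(x):
    """P(X > x) for a chi-squared variable with 1 degree of freedom.

    A chi-squared(1) variable is the square of a standard normal, so
    P(X > x) = P(|Z| > sqrt(x)) = erfc(sqrt(x / 2)).
    """
    return math.erfc(math.sqrt(x / 2))

# Mendel's counts of round vs wrinkled seeds; his theory predicts 3:1.
observed = [5474, 1850]
total = sum(observed)
expected = [total * 3 / 4, total / 4]

stat = chi2_gof(observed, expected)   # about 0.26
p = chi2_sf_df1(stat)                 # about 0.61: a very comfortable fit
print(f"chi-squared = {stat:.3f}, p = {p:.3f}")
```

A single high p-value proves nothing on its own; Fisher's suspicion arose because, aggregated across nearly all of Mendel's experiments, the data sat implausibly close to the theoretical ratios.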

In 2014, Hartgerink and Van Assen started to sort through the variety of tests used in ad-hoc investigations of fraud in order to determine which were powerful and versatile enough to reliably detect statistical anomalies across a wide range of fields. After narrowing down a promising arsenal of tests, they hit a tougher problem. To prove that their methods work, Hartgerink and Van Assen have to show they can reliably distinguish false from real data. But research misconduct is relatively uncharted territory. Only a handful of cases come to light each year – a dismally small sample size – so it’s hard to get an idea of what constitutes “normal” fake data, what its features and particular quirks are. Hartgerink devised a workaround, challenging other academics to produce simple fake datasets, a sort of game to see if they could come up with data that looked real enough to fool the statistical tests, with an Amazon gift card as a prize.
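
The article does not say which tests made their shortlist, but one classic check used in ad-hoc fraud investigations, and a natural candidate for such an arsenal, is terminal-digit analysis: the last digits of genuine multi-digit measurements should be roughly uniform, while invented numbers often over-use a few favourite digits. A minimal sketch, with datasets invented purely for illustration:

```python
from collections import Counter

def last_digit_chi2(values):
    """Chi-squared statistic for uniformity of the last digits of integer data.

    Under the null hypothesis (genuine, noisy measurements) each digit
    0-9 is about equally likely; fabricated numbers tend to favour a few
    digits, inflating the statistic (compare it against a chi-squared
    distribution with 9 degrees of freedom).
    """
    counts = Counter(abs(v) % 10 for v in values)
    expected = len(values) / 10
    return sum((counts.get(d, 0) - expected) ** 2 / expected for d in range(10))

# Invented data whose 'author' favours last digits 5 and 7...
suspicious = [135, 247, 315, 457, 525, 617, 735, 847, 915, 1057] * 20
# ...versus data whose last digits cover all ten values evenly.
plausible = [130 + i for i in range(200)]

print(last_digit_chi2(suspicious))  # far above the df = 9 expectation
print(last_digit_chi2(plausible))
```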

By 2015, the Meta-Research group had expanded to seven researchers, and Hartgerink was helping his colleagues with a separate error-detection project that would become Statcheck. He was pleased with the study that Michèle Nuijten published that autumn, which used Statcheck to show that something like half of all published psychology papers appeared to contain calculation errors, but as he tinkered with the program and the database of psychology papers they had assembled, he found himself increasingly uneasy about what he saw as the closed and secretive culture of science.

When scientists publish papers in journals, they release only the data they wish to share. Critical evaluation of the results by other scientists – peer review – takes place in secret and the discussion is not released publicly. Once a paper is published, all comments, concerns, and retractions must go through the editors of the journal before they reach the public. There are good, or at least defensible, arguments for all of this. But Hartgerink is part of an increasingly vocal group that believes that the closed nature of science, with authority resting in the hands of specific gatekeepers – journals, universities, and funders – is harmful, and that a more open approach would better serve the scientific method.

Hartgerink realised that with a few adjustments to Statcheck, he could make public all the statistical errors it had exposed. He hoped that this would shift the conversation away from talk of broad, representative results – such as the proportion of studies that contained errors – and towards a discussion of the individual papers and their mistakes. The critique would be complete, exhaustive, and in the public domain, where the authors could address it; everyone else could draw their own conclusions.
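
Statcheck itself is an R package that scans APA-formatted results, but its core check – re-deriving a p-value from the reported test statistic and degrees of freedom and comparing it with the reported p – can be sketched in a few lines. Everything below (the report format, the tolerance, the function names) is illustrative rather than Statcheck's actual implementation; the t survival function is obtained by numerically integrating the t density using only the standard library:

```python
import math
import re

def t_density(x, df):
    """Probability density of Student's t with df degrees of freedom."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def t_two_sided_p(t, df, steps=10_000):
    """Two-sided p-value: 1 - 2 * integral of the density from 0 to |t|.

    Composite Simpson's rule; accurate enough for a consistency check.
    """
    t = abs(t)
    h = t / steps
    s = t_density(0, df) + t_density(t, df)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * t_density(i * h, df)
    return 1 - 2 * (s * h / 3)

def check_report(report, tolerance=0.005):
    """Parse a result like 't(28) = 2.20, p = .036' and recompute the p-value."""
    m = re.match(r"t\((\d+)\)\s*=\s*([\d.]+),\s*p\s*=\s*(\.\d+)", report)
    if m is None:
        raise ValueError(f"unrecognised report: {report!r}")
    df, t, p_reported = int(m.group(1)), float(m.group(2)), float(m.group(3))
    p = t_two_sided_p(t, df)
    return abs(p - p_reported) <= tolerance, p

ok1, p1 = check_report("t(28) = 2.20, p = .036")   # consistent
ok2, p2 = check_report("t(28) = 2.20, p = .005")   # flagged: p doesn't match t
print(ok1, ok2, round(p1, 3))
```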

In August 2016, with his colleagues’ blessing, he posted the full set of Statcheck results publicly on the anonymous science message board PubPeer. At first there was praise on Twitter and science blogs, which skew young and progressive – and then, condemnations, largely from older scientists, who feared an intrusive new world of public blaming and shaming. In December, after everyone had weighed in, Nature, a bellwether of mainstream scientific thought for more than a century, cautiously supported a future of automated scientific scrutiny in an editorial that addressed the Statcheck controversy without explicitly naming it. Its conclusion seemed to endorse Hartgerink’s approach, that “criticism itself must be embraced”.

In the same month, the Office of Research Integrity (ORI), an obscure branch of the US National Institutes of Health, awarded Hartgerink a small grant – about $100,000 – to pursue new projects investigating misconduct, including the completion of his program to detect fabricated data. For Hartgerink and Van Assen, who had not received any outside funding for their research, it felt like vindication.

Yet change in science comes slowly, if at all, Van Assen reminded me. The current push for more open and accountable science, of which they are a part, has “only really existed since 2011”, he said. It has captured an outsize share of the science media’s attention, and set laudable goals, but it remains a small, fragile outpost of true believers within the vast scientific enterprise. “I have the impression that many scientists in this group think that things are going to change,” Van Assen said. “Chris, Michèle, they are quite optimistic. I think that’s bias. They talk to each other all the time.”

When I asked Hartgerink what it would take to totally eradicate fraud from the scientific process, he suggested that scientists make all of their data public, register the intentions of their work before conducting experiments to prevent post-hoc reasoning, and have their results checked by algorithms during and after the publishing process.

To any working scientist – currently enjoying nearly unprecedented privacy and freedom for a profession that is in large part publicly funded – Hartgerink’s vision would be an unimaginably draconian scientific surveillance state. For his part, Hartgerink believes the preservation of public trust in science requires nothing less – but in the meantime, he intends to pursue this ideal without the explicit consent of the entire scientific community, by investigating published papers and making the results available to the public.

Even scientists who have done similar work uncovering fraud have reservations about Van Assen and Hartgerink’s approach. In January, I met with Dr John Carlisle and Dr Steve Yentis at an anaesthetics conference that took place in London, near Westminster Abbey. In 2012, Yentis, then the editor of the journal Anaesthesia, asked Carlisle to investigate data from a researcher named Yoshitaka Fujii, who the community suspected was falsifying clinical trials. In time, Carlisle demonstrated that 168 of Fujii’s trials contained dubious statistical results. Yentis and the other journal editors contacted Fujii’s employers, who launched a full investigation. Fujii currently sits at the top of the RetractionWatch leaderboard with 183 retracted studies. By sheer numbers, he is the most prolific scientific fraudster on record.

Carlisle, who, like Van Assen, found that he enjoyed the detective work (“it takes a certain personality, or personality disorder”, he said), showed me his latest project, a larger-scale analysis of the rate of suspicious clinical trial results across multiple fields of medicine. He and Yentis discussed their desire to automate these statistical tests – which, in theory, would look a lot like what Hartgerink and Van Assen are developing – but they have no plans to make the results public; instead they envision that journal editors might use the tests to screen incoming articles for signs of possible misconduct.

“It is an incredibly difficult balance,” said Yentis. “You’re saying to a person, ‘I think you’re a liar.’ We have to decide how many fraudulent papers are worth one false accusation. How many is too many?”

With the introduction of programs such as Statcheck, and the growing desire to conduct as much of the critical conversation as possible in public view, Yentis expects a stormy reckoning with those very questions. “That’s a big debate that hasn’t happened,” he said, “and it’s because we simply haven’t had the tools.”

For all their dispassionate distance, when Hartgerink and Van Assen say that they are simply identifying data that “cannot be trusted”, they mean flagging papers and authors that fail their tests. And, as they learned with Statcheck, for many scientists, that will be indistinguishable from an accusation of deceit. When Hartgerink eventually deploys his fraud-detection program, it will flag up some very real instances of fraud, as well as many unintentional errors and false positives – and present all of the results in a messy pile for the scientific community to sort out. Simonsohn called it “a bit like leaving a loaded gun on a playground”.

When I put this concern to Van Assen, he told me it was certain that some scientists would be angered or offended by having their work and its possible errors exposed and discussed. He didn’t want to make anyone feel bad, he said – but he didn’t feel bad about it. Science should be about transparency, criticism, and truth.

“The problem, also with scientists, is that people think they are important, they think they have a special purpose in life,” he said. “Maybe you too. But that’s a human bias. I think when you look at it objectively, individuals don’t matter at all. We should only look at what is good for science and society.”