
Monday 25 August 2014

The Myth of Common Sense: Why The Social World Is Less Obvious Than It Seems

“Mankind, it seems, makes a poorer performance of government than of almost any other human activity.”
-Barbara Tuchman, The March of Folly
“This is not rocket science”
-Bill Frist on fixing health care, The New York Times
As these two quotes illustrate, there is something strangely conflicted about contemporary views on government and policy. On one hand, many people are in apparent agreement that government frequently accomplishes less than it ought to, sometimes embarrassingly so. Yet on the other hand, many of these same people are also of the opinion that the failings of government do not imply any great difficulty of the problems themselves—that they are not rocket science, as it were.
Typically the conflict is resolved with reference to the presumed incompetence, pigheadedness, or outright corruption of our leaders. If only we elected the right people, gave them the right incentives, and—above all—if only our political leaders exhibited a little more common sense, everything would be alright. That both “we” and “they” consistently fail to follow these simple steps proves only that common sense is not nearly common enough.
There may be some truth to this attitude. But as a sociologist, I’ve also learned to be skeptical of common sense, especially when it is invoked as the solution to complex social problems.
Sociology has had to confront the criticism that it has “discovered” little that an intelligent person couldn’t have figured out on his or her own.
Sociology, of course, has its own conflicted history with common sense. For almost as long as it has existed, that is, sociology has had to confront the criticism that it has “discovered” little that an intelligent person couldn’t have figured out on his or her own.
Why is it, for example, that most social groups, from friendship circles to workplaces, are so homogenous in terms of race, education level, and even gender? Why do some things become popular and not others? How much does the media influence society? Is more choice better or worse? Do taxes stimulate the economy?
Social scientists have struggled with all these questions for generations, and continue to do so. Yet many people feel they could answer these questions themselves—simply by examining their own experience. Unlike for problems in physics and biology, therefore, where we need experts to tell us what is true, when the topic is human or social behavior, we’re all “experts,” so we trust our own opinions at least as much as we trust those of social scientists.
Nor is this tendency necessarily a bad reaction—any theory should be consistent with empirical reality, and in the case of social science, that reality includes everyday experience. But not everything about the social world is transparent from common sense alone—in part because not everything that seems like common sense turns out to be true, and in part because common sense is extremely good at making the world seem more orderly than it really is.
Common sense isn’t anything like a scientific theory of the world.
As sociologists are fond of pointing out, common sense isn’t anything like a scientific theory of the world. Rather it is a hodge-podge of accumulated advice, experiences, aphorisms, norms, received wisdom, inherited beliefs, and introspection that is neither coherent nor even internally self-consistent. Birds of a feather flock together, but opposites also attract. Two minds are better than one, except when too many cooks spoil the broth. Does absence make the heart grow fonder, or is out of sight out of mind?  At what point does try, try again turn into flogging a dead horse? And if experience is the best teacher, when should one also maintain a beginner’s mind?
The problem with common sense is not that it isn’t sensible, but that what is sensible turns out to depend on lots of other features of the situation. And in general, it’s impossible to know which of these many potential features are relevant until after the fact (a fundamental problem that philosophers and cognitive scientists call the “frame problem”).
Nevertheless, once we do know the answer, it is almost always possible to pick and choose from our wide selection of common-sense statements about the world to produce something that sounds likely to be true. And because we only ever have to account for one outcome at a time (because we can ignore the “counterfactuals,” things that could’ve happened, but didn’t), it is always possible to construct an account of what did happen that not only makes sense, but also sounds like a causal story.
To take a common example, think about how we explain success. Why is the Mona Lisa the most famous painting in the world?  Why did J.K. Rowling‘s Harry Potter books sell over 300 million copies?  And why is Madonna the most successful female musical artist of all time? Now that we know who these superstars are, their success seems easy to explain—common sense even. They simply outperformed the competition. Whether they did that through pure genius, clever marketing, or sheer tenacity is a matter of debate (you be the judge), but in the end, it doesn’t really matter.  In the competitive marketplace of ideas, a product succeeds because it represents what people want—otherwise, they wouldn’t have devoted their scarce time, money, and attention to it. Right?
Well, sort of. But if that's true, why are superstars so hard to pick out in advance? Why did several children's publishers reject the initial Harry Potter manuscript? Why did no one pay much attention to the Mona Lisa for nearly 400 years? And why did music critics dismiss the early Madonna as an attention seeker with limited talent? Whatever their personal preferences, how could they have failed to understand the demands of the marketplace, which after all is precisely what they are rewarded for doing?
Nor is it just the critics who get their predictions wrong—marketers can’t seem to figure out what will sell either. If they could, they wouldn’t waste their efforts on the vast majority of books, movies, and albums that lose money. So what explains why some cultural products are stunningly successful, while most aren’t; and why at the same time, no one, including the experts, seems to be able to predict which is which?
A few years ago, my students and I studied exactly this question by setting up an experiment in which roughly 15,000 participants were asked to listen to, rate, and download songs by unknown bands off a website we created. Some of the participants had to make their decisions independently, while others had information about which of the songs other people liked.  We found two results. First, in the “social influence” condition, popular songs were more popular (and unpopular songs less popular) than in the independent case. But second, it became harder to predict which particular songs would be the most popular.
What these results suggest is that in the real world, where social influence is much stronger than in our artificial experiment, enormous differences in success may indeed be due to small, random fluctuations early on in an artist’s career, which then get amplified by a process of cumulative advantage—a “rich-get-richer” phenomenon that is thought to arise in many social systems.
The random fluctuations arising from social influence were larger than those arising from differences in quality.
A critical feature of this experiment was our ability to create multiple “worlds” in which randomly assigned groups of people could create different versions of history in parallel with each other. By observing how popular the same song became in different worlds, we could measure directly how much of its success can be attributed to some intrinsic “quality,” and how much results from random chance. We found that although, on average, good songs do better than bad songs, the random fluctuations arising from social influence were larger than those arising from differences in quality.
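To make the cumulative-advantage idea concrete, here is a minimal simulation sketch in Python. It is purely illustrative and not the code or data from our actual experiment: the quality weights, the number of listeners, and the choice rule are all assumptions made for the example. Each "world" starts with the same songs, but listeners are more likely to pick songs that already have downloads, so small early differences get amplified and different worlds can crown different winners.

```python
import random

# Illustrative sketch of cumulative advantage ("rich-get-richer"), not the
# actual experiment's code: in each "world", listeners choose among the same
# songs, but a song's chance of being picked grows with its download count.
# Quality weights and the choice rule are assumptions made for this example.

def run_world(qualities, n_listeners=2000, influence=0.5, seed=None):
    rng = random.Random(seed)
    downloads = [0] * len(qualities)
    for _ in range(n_listeners):
        # A song's appeal mixes intrinsic quality with social proof
        # (how many prior listeners downloaded it).
        weights = [q + influence * d for q, d in zip(qualities, downloads)]
        pick = rng.choices(range(len(qualities)), weights=weights)[0]
        downloads[pick] += 1
    return downloads

qualities = [1.0, 1.2, 1.5, 2.0]  # the same songs in every world
for world in range(4):
    print(f"world {world}: downloads = {run_world(qualities, seed=world)}")

# With influence > 0, the winner often differs from world to world even
# though the songs are identical; with influence = 0 (the "independent"
# condition), download counts track quality much more closely.
```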
The problem with this explanation, however, is that in real life we never get to experience these multiple histories—only the one history that we have lived through. So although one can argue that Madonna or Harry Potter or even Shakespeare may owe their success more to random chance and cumulative advantage than to intrinsic superiority, it is impossible to refute the common sense view that history took the path that it did because the winners embodied precisely the greatness that we attribute to them. And for a Madonna or a Harry Potter or a Shakespeare fan, that is typically the end of the argument.
Common sense, in other words, is extremely good at making the world seem sensible, quickly classifying believable information as old news, rejecting explanations that don’t coincide with experience, and ignoring counterfactuals. Viewed this way, common sense starts to seem less like a way to understand the world, than a way to survive without having to understand it.
That may have been a perfectly fine design for most of evolutionary history, where humans lived in small groups and could safely ignore most of what was going on in the world. But increasingly the problems of the modern world—distributions of wealth, sustainable development, public health—require us to understand cause and effect in complex systems, with consequences unfolding over years or decades. And for these kinds of problems, there’s no reason to believe that common sense is much of a guide at all.
Fortunately, in recent years the explosive growth of the Internet has brought with it the ability to measure the actions and interactions of millions of people simultaneously. Not just social networking services like Facebook, Twitter and YouTube, but email interactions, instant messaging, and Internet phone calls all provide measurable traces of the person-to-person interactions that have always been at the core of social life, but have until recently been invisible to science.
Already this flood of data has generated enormous interest in the research community, with thousands of physicists and computer scientists beginning to pay attention to problems traditionally in the domain of the social sciences. It’s tempting to look back at past technological breakthroughs, such as the telescope or the microscope, when new instrumentation made the previously invisible visible, and wonder if perhaps social science is on the edge of its own scientific revolution.
Social problems, that is, must be viewed not as the subject of rhetorical debates, but as scientific problems.
But if we are to make use of these impressive new capabilities to address the kinds of problems that governments have historically failed to solve, we also need to think differently about the problems themselves. Social problems, that is, must be viewed not as the subject of rhetorical debates, but as scientific problems, in the sense that some combination of theory, data, and experiment can provide useful insights beyond that which can be derived through intuition and experience alone.
Clearly we’re a long way from a world in which cause and effect in social and economic systems can be established with the level of certainty we’ve come to expect from the physical sciences. In fact, the world of human behavior is sufficiently complicated and unpredictable that no matter how long or hard we try, we will always be stuck with some level of uncertainty, in which case leaders will have to do what they’ve always done and make the best decisions they can under the circumstances.
It sounds like a lot of effort for an uncertain payoff, but curing cancer has also proven to be an enormously complex undertaking, far more resistant to medical science than was once thought, yet no one is throwing up their hands on that one. It is time to apply to social problems the same admirable resolve to understand the world—no matter how long it takes—that we display in our struggles with the important problems of physical and medical science.

Thursday 29 December 2011

Just because you're an atheist doesn't make you rational

Once you make it your primary aim to refute the existence of God, you miss what's really fundamental
Having followed the latest debate about religion, I'd say the conclusion is obvious that the only thing as disturbing as the religious is the modern atheist. I'd noticed this before, after I was slightly critical of Richard Dawkins and received piles of fuming replies that made me think that what his followers would like is to scientifically create an eternity in laboratory conditions so that they could burn me there for all of it.
It's not the rationality that's alarming, it's the smugness. Instead of trying to understand religion, if the modern atheist met a peasant in a village in Namibia, he'd shriek: "Of course, GOD didn't create light, it's a mixture of waves and particles you idiot, it's OBVIOUS."

The connection between the religious and the modern atheist was illustrated after the death of Christopher Hitchens, when it was reported that "tributes were led by Tony Blair". I know you can't dictate who leads your tributes, and it's probable that when Blair's press office suggested that he made one to someone who'd passed on, he said: "Oh, which dictator I used to go on holiday with has died NOW?"

But the commendation was partly Hitchens's fault. Because the difference between the modern atheist and the Enlightenment thinkers who fought the church in the 18th century is that back then they didn't make opposition to religion itself their driving ideology. They opposed the lack of democracy justified by the idea that a king was God's envoy on earth, and they wished for a rational understanding of the solar system, rather than one based on an order ordained by God that matched the view that everyone in society was born into a fixed status.

But once you make it your primary aim to refute the existence of God, you can miss what's really fundamental. For example, the ex-canon of St Paul's, presumably a believer unless he managed to fudge the issue in the interview, was on the radio this week explaining why he resigned in support of the protesters outside his old cathedral. He spoke with inspiring compassion, but was interrupted by an atheist who declared the Christian project is doomed because we're scientifically programmed to look after ourselves at the expense of anyone else. So the only humane rational scientific thought to have was "GO Christian, GO, Big up for the Jesus posse."

Similarly, Hitchens appears to have become obsessed with defying religion, and so made himself one of the most enthusiastic supporters of a war he saw as being against the craziness of Islam. But the war wasn't about God or Allah, it was about more earthly matters, which the people conducting that war understood. And, as that war became predictably disastrous, they were grateful for whatever support they could find. And so a man dedicated to disproving GOD was praised in his death by the soppiest, sickliest, most irrational, hypocritical Christian of them all.

So the only thing I know for certain is that I would become a Christian, if I could just get round the fact that there is no GOD.

Saturday 10 September 2011

Graduates in Science, Engineering and Maths are more versatile than others

The versatility of science graduates should be celebrated, not criticised. What's the problem if science graduates end up in alternative careers? If anything, we need more of it.

Imran Khan, guardian.co.uk, Friday 9 September 2011 13.33 BST

'If you study engineering, physics or chemistry as your first degree, you're almost 90% likely to be in either full-time employment or further study three years later.' Photograph: Martin Shields/Alamy

The Guardian reported that "only about half of all science graduates find work that requires their scientific knowledge" – a fact that "casts doubt on the government's drive to encourage teenagers to study [science]". Yet year on year, the Confederation of British Industry (CBI) reports that its members are finding it difficult to get enough staff with science, technology, engineering and maths (Stem) skills. This year more than two in five employers had trouble. The Science Council has just released a report showing that a fifth of our workforce is employed in a scientific role. So what's going on?

The concerns come from the paper, Is there a shortage of scientists? A re-analysis of supply for the UK. Its author suggests there is no shortage of scientists and engineers in the UK, despite what the CBI says and contrary to the messages of successive governments. However, both the paper and the Guardian's reporting are based on some pretty odd assumptions. While it's true that about half of Stem graduates end up in careers outside science, that's not an argument to say that too many young people are studying science.

For a start, a Stem degree is a fantastic preparation for a huge range of careers. We should celebrate that fact, not mourn it. Statistics show (table 7) that if you study engineering, physics or chemistry as your first degree, you're almost 90% likely to be in either full-time employment or further study three years later. Those figures compare with 73% for the creative arts, and 78% for languages and historical or philosophical studies. The average across all graduates is just above 80%. That's because a Stem degree gives you a huge range of skills that are in demand in a wide variety of jobs, not just in science. Isn't that a good thing? We could "fix" it by training science graduates to be useless in the wider economy, but at the moment we have a higher education sector that is successfully producing young people equipped with highly transferable skills.

Moreover, what's the problem if Stem graduates end up in careers outside science and engineering? If anything, we need more of it. We're crying out for more scientists and engineers to teach in schools, get into politics and the civil service, and become involved in running companies. The scientific method should be more embedded in society, not less. In the UK, we have only two MPs with a PhD. China, the most populous country and fastest growing economy in the world, has been led for the past eight years by two men who are professional engineers. I'm not saying it's better – but wouldn't it be nice to have some diversity among all the lawyers and economists?

We don't worry when law graduates don't become lawyers, history graduates don't become historians, or English graduates don't become … er … So why be concerned about the versatile engineer or chemist? True, we do need more people going into research and development if the UK is to successfully rebalance its economy. To achieve that we must increase investment in research and skills so that employers have a reason to come here, and in turn attract our science and engineering graduates into science and engineering jobs. Yes, each company and lab leader will be looking for the very best staff, so with the best will in the world you're not going to get every single engineering graduate into their first-choice profession. But how is that different from any other type of graduate?

It's a shame that the Guardian's report focused on the misleading figures when there was much else of value in the study. We see that there is far too much social and gender stratification in the people who actually go into science and engineering. This is unacceptable, given the benefits that those subjects give to their students. It's 2011, and yet we still only have around one in 10 female graduate engineers. You're more likely to take science and maths A-levels if you attend an independent school, with pupils at state-maintained schools over-represented in arts and humanities subjects instead.

There is emphatically still a need for more scientists and engineers – and, far from retrenching support for science and engineering, we should be concentrating on making these subjects more accessible to everyone.