
Showing posts with label deception. Show all posts

Friday 11 August 2023

Economics for Dummies: Unveiling the Truth Behind Government Claims on Inflation

ChatGPT

Inflation, a ubiquitous economic phenomenon, wields a significant impact on the purchasing power of individuals and the stability of economies. Governments often tout their achievements in taming inflation, but a deeper examination reveals a nuanced reality. This essay explores the intricacies of inflation, clarifies the deceptive nature of government claims regarding inflation reduction, and provides illustrative examples to shed light on the distinction between inflation rates and actual price changes.

The Inflation Mirage: When governments proudly announce the reduction of inflation from 10% to 5%, it is not a declaration of falling prices but rather a claim of moderating the rate at which prices increase. Inflation is not a direct measure of price levels but a gauge of how quickly those levels are changing. Imagine a roller coaster; if it slows down from an extreme speed to a slower one, it is not moving backward, merely decelerating its forward motion.

Understanding the Steps: To comprehend the mechanics, consider a hypothetical good priced at £1 at the end of 2021. If the inflation rate for 2022 is 10% and 5% for 2023, the price evolution can be broken down into stages.

Step 1: 2022 Inflation Price at the end of 2021 = £1 Inflation in 2022 = 10% Price after 2022 inflation = £1 * (1 + 0.10) = £1.10

Step 2: 2023 Inflation Price after 2022 inflation = £1.10 Inflation in 2023 = 5% Price after 2023 inflation = £1.10 * (1 + 0.05) = £1.155
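The two steps can be sketched as a short calculation (a minimal illustration using the hypothetical £1 good and the 10% and 5% rates from the example above):

```python
# Compound a starting price through successive annual inflation rates.
def compound_price(start_price, annual_rates):
    price = start_price
    for rate in annual_rates:
        # Each year's inflation applies to the already-raised price,
        # which is why a lower rate still means higher prices.
        price *= 1 + rate
    return round(price, 4)

# £1 at the end of 2021, 10% inflation in 2022, then 5% in 2023.
print(compound_price(1.00, [0.10, 0.05]))  # → 1.155
```

Note that the 5% year still pushes the price up, from £1.10 to £1.155; a falling inflation rate never means falling prices unless the rate itself goes negative.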

The Price Illusion: Governments' claims of lowering inflation from 10% to 5% create an illusion of prices falling. However, the reality is that while the rate of price increase has slowed down, prices are still ascending. This can be compared to a marathon runner who has reduced their speed; they are still moving forward, just not as swiftly as before.

Deconstructing Government Claims: Governments may employ such claims for various reasons, including instilling confidence in economic policies or promoting their efforts to stabilize the economy. However, this communication can lead to misunderstanding and misinterpretation by the public. For instance, an individual may perceive a 5% inflation rate as a signal to expect a decrease in their expenses, only to find that their cost of living continues to rise, albeit at a slightly slower pace.

Examples:

  1. Real Estate: If a government announces a reduction in inflation from 10% to 5%, potential homebuyers might anticipate lower house prices. However, the reality could be that property prices are still increasing, but at a diminished rate. This could affect individuals' decisions regarding homeownership and mortgage commitments.


  2. Consumer Goods: A consumer who witnesses a lower inflation rate might believe that their monthly grocery bills will decrease. Yet, the prices of essential commodities may still be rising, putting pressure on their household budget.

The distinction between inflation rates and actual price changes is a crucial concept that citizens must grasp to make informed financial decisions. Governments' claims of lowering inflation, while important for economic stability, should not be misconstrued as a signal of falling prices. Recognizing the difference between a decrease in the rate of price escalation and a true decline in prices is pivotal in navigating the complex landscape of personal finance and economic planning.

Sunday 23 April 2023

The Confidence Game......2

 There’s a likely apocryphal story about the French poet Jacques Prevert. One day he was walking past a blind man who held up a sign “Blind man without a pension”. He stopped to chat. How was it going? Were people helpful? “Not great”, the man replied.


“Could I borrow your sign?” Prevert asked. The blind man nodded.


The poet took the sign, flipped it over and wrote a message.


The next day, he again walked past the blind man, “How is it going now?” he asked. “Incredible,” the man replied. “I’ve never received so much money in my life.”


On the sign, Prevert had written: “Spring is coming, but I won’t see it.”


Give us a compelling story, and we open up. Scepticism gives way to belief. The same approach that makes a blind man’s cup overflow with donations can make us more receptive to almost any persuasive message, for good or for ill.


When we step into a magic show, we come in actively wanting to be fooled. We want deception to cover our eyes and make our world a tiny bit more fantastical, more awesome than it was before. And the magician, in many ways, uses the exact same approaches as the confidence man - only without the destruction of the con’s end game. “Magic is a kind of a conscious, willing con,” says Michael Shermer, a science historian and writer. “You’re not being foolish to fall for it. If you don’t fall for it, the magician is doing something wrong.” 


At their root, magic tricks and confidence games share the same fundamental principle: a manipulation of our beliefs. Magic operates at the most basic level of visual perception, manipulating how we see and experience reality. It changes for an instant what we think possible, quite literally taking advantage of our eyes’ and brains’ foibles to create an alternative version of the world. A con does the same thing, but can go much deeper. Long cons, the kind that take weeks, months or even years to unfold, manipulate reality at a higher level, playing with our most basic beliefs about humanity and the world.


The real confidence game feeds on the desire for magic, exploiting our endless taste for an existence that is more extraordinary and somehow more meaningful.


When we fall for a con, we aren’t actively seeking deception - or at least we don’t think we are. As long as the desire for magic, for a reality that is somehow greater than our everyday existence remains, the confidence game will thrive.


Extracted from The Confidence Game by Maria Konnikova


Friday 25 February 2022

Deception and destruction can still blind the enemy

From The Economist

There are four ways for those who would hide to fight back against those trying to find them: destruction, deafening, disappearance and deception. Technological approaches to all of those options will be used to counter the advantages that bringing more sensors to the battlespace offers. As with the sensors, what those technologies achieve will depend on the tactics used.

Destruction is straightforward: blow up the sensor. Missiles which home in on the emissions from radars are central to establishing air superiority; one of the benefits of stealth, be it that of an F-35 or a Harop drone, lies in getting close enough to do so reliably.

Radar has to reveal itself to work, though. Passive systems can be both trickier to sniff out and cheaper to replace. Theatre-level air-defence systems are not designed to spot small drones carrying high-resolution smartphone cameras, and would be an extraordinarily expensive way of blowing them up.

But the ease with which American drones wandered the skies above Iraq, Afghanistan and other post-9/11 war zones has left a mistaken impression about the survivability of UAVs. Most Western armies have not had to worry about things attacking them from the sky since the Korean war ended in 1953. Now that they do, they are investing in short-range air defences. Azerbaijan’s success in Nagorno-Karabakh was in part down to the Armenians not being up to snuff in this regard. Armed forces without many drones—which is still most of them—will find their stocks quickly depleted if used against a seasoned, well-equipped force.

Stocks will surely increase if it becomes possible to field more drones for the same price. And low-tech drones which can be used as flying IEDs will make things harder when fighting irregular forces. But anti-drone options should get better too. Stephen Biddle of Columbia University argues that the trends making drones more capable will make anti-drone systems better, too. Such systems actually have an innate advantage, he suggests; they look up into the sky, in which it is hard to hide, while drones look down at the ground, where shelter and camouflage are more easily come by. And small motors cannot lift much by way of armour.

Moving from cheap sensors to the most expensive, satellites are both particularly valuable in terms of surveillance and communication and very vulnerable. America, China, India and Russia, all of which would rely on satellites during a war, have all tested ground-launched anti-satellite missiles in the past two decades; some probably also have the ability to kill one satellite with another. The degree to which they are ready to gouge out each other’s eyes in the sky will be a crucial indicator of escalation should any of those countries start fighting each other. Destroying satellites used to detect missile launches could presage a pre-emptive nuclear strike—and for that very reason could bring one about.

Everybody has a plan until they get punched in the face

Satellites are also vulnerable to sensory overload, as are all sensors. Laser weapons which blind humans are outlawed by international agreement but those that blind cameras are not; nor are microwave beams which fry electronics. America says that Russia tries to dazzle its orbiting surveillance systems with lasers on a regular basis.

The ability to jam, overload or otherwise deafen the other side’s radar and radios is the province of electronic warfare (EW). It is a regular part of military life to probe your adversaries’ EW capabilities when you get a chance. The deployment of American and Russian forces close to each other in northern Syria provided just such an opportunity. “They are testing us every day,” General Raymond Thomas, then head of American special forces, complained in 2018, “knocking our communications down” and going so far as “disabling” America’s own EC-130 electronic-warfare planes.

In Green Dagger, an exercise held in California last October, an American Marine Corps regiment was tasked with seizing a town and two villages defended by an opposing force cobbled together from other American marines, British and Dutch commandos and Emirati special forces. It struggled to do so. When small teams of British commandos attacked the regiment’s rear areas, paralysing its advance, the marines were hard put to target them before they moved, says Jack Watling of the Royal United Services Institute, a think-tank in London. One reason was the commandos’ effective EW attacks on the marines’ command posts.

Just as what sees can be blinded and what hears, deafened, what tries to understand can be confused. Britain’s national cyber-strategy, published in December, explicitly says that one task of the country’s new National Cyber Force, a body staffed by spooks and soldiers, is to “disrupt online and communications systems”. Armies that once manoeuvred under air cover will now need to do so under “cyber-deception cover”, says Ed Stringer, a retired air marshal who led recent reforms in British military thinking. “There’s a point at which the screens of the opposition need to go a bit funny,” says Mr Stringer, “not so much that they immediately spot what you’re doing but enough to distract and confuse.” In time the lines between EW, cyber-offence and psychological operations seem set to blur.

The ability to degrade the other side’s sensors, interrupt its communications and mess with its head does not replace old-fashioned camouflage and newfangled stealth; they remain the bread and butter of a modern military. Tanks are covered in foliage; snipers wear ghillie suits. Warplanes use radiation-absorbent material and angled surfaces so as not to reflect radio waves back to the radar that sent them. Russia has platoons dedicated to spraying the air with aerosols designed to block ultraviolet, infrared and radar waves. During their recent border stand-off, India and China both employed camouflage designed to confuse sensors with a broader spectral range than the human eye.

According to Mr Biddle, over the past 30 years “cover and concealment”, along with other tactics, have routinely allowed forces facing American precision weapons to avoid major casualties. He points to the examples of al-Qaeda at the Battle of Tora Bora in eastern Afghanistan in 2001 and Saddam Hussein’s Republican Guard in 2003, both of whom were overrun in close combat rather than through long-range strikes. Weapons get more lethal, he says, but their targets adapt.

Hiding is made easier by the fact that the seekers’ new capabilities, impressive as they may be, are constrained by the realities of budgets and logistics. Not everything armies want can be afforded; not everything they procure can be put into the field in a timely manner. In real operations, as opposed to PowerPoint presentations, sensor coverage is never unlimited.

“There is no way that we're going to be able to see everything, all of the time, everywhere,” says a British general. “It's just physically impossible. And therefore there will always be something that can happen without us seeing it.” In the Green Dagger exercise the attacking marine regiment lacked thermal-imaging equipment and did not have prompt access to satellite pictures. It was a handicap, but a realistic one. Rounding up commandos was not the regiment’s “main effort”, in military parlance. It might well not have been kitted out for it.

When hiding is hard, it helps to increase the number of things the enemy has to look at. “With modern sensors…it is really, really difficult to avoid being detected,” says Petter Bedoire, the chief technology officer for Saab, a Swedish arms company. “So instead you need to saturate your adversaries’ sensors and their situational awareness.” A system looking at more things will make more mistakes. Stretch it far enough and it could even collapse, as poorly configured servers do when hackers mount “denial of service” attacks designed to overwhelm them with internet traffic.

Dividing your forces is a good way to increase the cognitive load. A lot of small groups are harder to track and target than a few big ones, as the commandos in Green Dagger knew. What is more, if you take shots at one group you reveal some of your shooters to the rest. The less valuable each individual target is, the bigger an issue that becomes.

Decoys up the ante. During the first Gulf war Saddam Hussein unleashed his arsenal of Scud missiles on Bahrain, Israel and Saudi Arabia. The coalition Scud hunters responsible for finding the small (on the scale of a vast desert) mobile missile launchers he was using seemed to have all the technology they might wish for: satellites that could spot the thermal-infrared signature of a rocket launch, aircraft bristling with radar and special forces spread over tens of thousands of square kilometres acting as spotters. Nevertheless an official study published two years later concluded that there was no “indisputable” proof that America had struck any launchers at all “as opposed to high-fidelity decoys”.

One of the advantages data fusion offers seekers is that it demands more of decoys; in surveillance aircraft electronic emissions, radar returns and optical images can now be displayed on a single screen, highlighting any discrepancies between an object’s visual appearance and its electronic signature. But decoy-making has not stood still. Iraq’s fake Scuds looked like the real thing to UN observers just 25 metres away; verisimilitude has improved “immensely” since then, particularly in the past decade, says Steen Bisgaard, the founder of GaardTech, an Australian company which builds replica vehicles to serve as both practice targets and decoys.

Mr Bisgaard says he can sell you a very convincing mobile simulacrum of a British Challenger II tank, one with a turret and guns that move, the heat signature of a massive diesel engine and a radio transmitter that works at military wavelengths, all for less than a 20th of the £5m a real tank would set you back. Shipped in a flat pack it can be assembled in an hour or so.

Seeing a tank suddenly appear somewhere, rather than driving there, would be something of a giveaway. But manoeuvre can become part of the mimicry. Rémy Hemez, a French army officer, imagines a future where armies deploy large “robotic decoy formations using AI to move along and create a diversion”. Simulating a build-up like the one which Russia has emplaced on Ukraine’s border is still beyond anyone’s capabilities. But decoys and deception—in which Russia’s warriors are well versed—can be used to confuse.

Disappearance and deception often have synergy. Stealth technologies do not need to make an aircraft completely invisible. Just making its radar cross-section small enough that a cheap little decoy can mimic it is a real advantage. The same applies, mutatis mutandis, to submarines. If you build lots of intercontinental-ballistic-missile silos but put ICBMs into only a few—a tactic China may be exploring—an enemy will have to use hundreds of its missiles to be sure of getting a dozen or so of yours.

Shooting at decoys is not just a waste of material. It also reveals where your shooters are. Silent Impact, a 155mm artillery shell produced by SRC, an American firm, can transmit electronic signals as if it were a radar or a weapons platform as it flies through the sky and settles to the ground under a parachute. Any enemy who takes the bait reveals the position of their guns.

The advent of AI should offer new ways of telling the real from the fake; but it could also offer new opportunities for deception. The things that make an AI say “Tank!” may be quite different to what humans think of as tankiness, thus unmasking decoys that fool humans. At the same time the AI may ignore features which humans consider blindingly obvious. Benjamin Jensen of American University tells the story of marines training against a high-end sentry camera equipped with object-recognition software. The first marines, who tried to sneak up by crawling low, were quickly detected. Then one of them grabbed a piece of tree bark, placed it in front of his face and walked right up to the camera unmolested. The system saw nothing out of the ordinary about an ambulatory plant.

The problem is that AIs, and their masters, learn. In time they will rumble such hacks. Basing a subsequent all-out assault on Birnam Wood tactics would be to risk massacre. “You can always beat the algorithm once by radical improvisation,” says Mr Jensen. “But it's hard to know when that will happen.”

The advantages of staying put

Similar uncertainties will apply more widely. Everyone knows that sensors and autonomous platforms can get cheaper and cheaper, that computing at the edge can reduce strain on the capacity of data systems, and that all this can make kill chains shorter. But the rate of progress—both your progress, and your adversaries’—is hard to gauge. Who has the advantage will often not be known until the forces contest the battlespace.

The unpredictability extends beyond who will win particular fights. It spreads out to the way in which fighting will best be done. Over the past century military thinking has contrasted attrition, which wears down the opponent’s resources in a frontal slugfest, and manoeuvre, which seeks to use fast moving forces to disrupt an enemy’s decision-making, logistics and cohesion. Manoeuvre offers the possibility of victory without the wholesale destruction of the enemies’ forces, and in the West it has come to hold the upper hand, with attrition often seen as a throwback to a more primitive age.

That is a mistake, argues Franz-Stefan Gady of the International Institute for Strategic Studies, a think-tank. Surviving in an increasingly transparent battlespace may well be possible. But it will take effort. Both attackers who want to take ground and defenders who wish to hold it will need to build “complex multiple defensive layers” around their positions, including air defences, electronic countermeasures and sensors of their own. Movement will still be necessary—but it will be dispersed. Consolidated manoeuvres big and sweeping enough to generate “shock and awe” will be slowed down by unwieldy aerial electromagnetic umbrellas and advertise themselves in advance, thereby producing juicy targets.

The message of Azerbaijan’s victory is not that blitzkrieg has been reborn and “the drone will always get through”. It is that preparation and appropriate tactics matter as much as ever, and you need to know what to prepare against. The new technologies of hide and seek will sometimes—if Mr Gady is right, often—favour the defence. A revolution in sensors, data and decision-making built to make targeting easier and kill chains quicker may yet result in a form of warfare that is slower, harder and messier.

Sunday 8 July 2018

Why are modern batsmen weak against legspin in the short formats?

Ian Chappell in Cricinfo


It's not only the range of strokes that has dramatically evolved in short-format batting but also the mental approach. Contrast the somnambulistic approach of Essex's Brian Ward in a 1969 40-over game with England's record-breaking assault on the Australian bowling at Trent Bridge recently.

Ward decided that Somerset offspinner Brian Langford was the danger man in the opposition attack, and eight consecutive maidens resulted, handing the bowler the never-to-be-repeated figures of 8-8-0-0. On the other hand England's batsmen this year displayed no such inhibitions in rattling up 481 off 50 overs, and Australia's bowlers, headed by Andrew Tye, with 9-0-100-0, were pummelled.

Nevertheless one thing has remained constant in the short formats: a wariness around spin bowling, although currently it's more likely to be the wrist variety than fingerspin.

The list of successful wristspinners in short-format cricket is growing rapidly and there have been some outstanding recent performances. Afghanistan's Rashid Khan was the joint leading wicket-taker in the BBL; England's Adil Rashid (along with spin-bowling companion Moeen Ali) took the most wickets in the recent whitewash of Australia; and in successive T20Is against England, India's duo of Yuzvendra Chahal and Kuldeep Yadav have claimed the rare distinction of a five-wicket haul. It's a trail of destruction that would have gladdened the heart of Bill "Tiger" O'Reilly, a great wristspinner himself and the most insistent promoter of the art there has ever been.

Wristspinners are extremely successful in the shorter formats and are being eagerly sought after for the many T20 leagues. Their enormous success is mostly down to the deception they provide, since they are able to turn it from both leg and off with only a minimal change of action. Kuldeep provided a perfect example when he bamboozled both Jonny Bairstow and Joe Root with successive wrong'uns in the opening T20 at Old Trafford.

The fact that Bairstow - a wicketkeeper by trade - was deceived by the wrong'un is symptomatic of a malaise that is sweeping international batting - a general inability to read wristspinners. This failing is not only the root cause of wicket loss from mishits but also contributes to a desirable bowling economy rate for the bowlers, as batsmen are hesitant to attack a delivery they are unsure about. This inability to read wristspinners is mystifying.

If a batsman watches the ball out of the hand, the early warning signals are available. A legbreak is delivered with the back of the hand turned towards the bowler's face, while with the wrong'un, it's facing the batsman. As a further indicator, the wrong'un, because it's bowled out of the back of the hand, has a slightly loftier trajectory. Final confirmation is provided by the seam position, which is tilted towards first slip for the legspinner, and leg slip for the wrong'un. Any batsman waiting to pick the delivery off the pitch is depriving himself of scoring opportunities and putting his wicket in danger.

When Shane Warne was at his devastating peak, fans marvelled at his repertoire and said it was the main reason for his success. "Picking him is the easy part," I explained, "it's playing him that's difficult."

Richie Benaud, another master of the art, summed up spin bowling best: "It's the subtle variations," he proffered, "that bring the most success."

O'Reilly was not only an aggressive leggie but also a wily one, and he bent his back leg when he wanted to vary his pace. This action altered his release point without slowing his arm speed, and consequently it was difficult for the batsman to detect the subtle variation.

This type of information is crucial to successful batsmanship, but following Kuldeep's demolition job, Jos Buttler said it might take one or two games for English batsmen to get used to the left-armer. This is an indictment of the current system for developing young batsmen, where you send them into international battle minus a few important tools.

Thursday 8 February 2018

A simple guide to statistics in the age of deception

Tim Harford in The Financial Times

“The best financial advice for most people would fit on an index card.” That’s the gist of an offhand comment in 2013 by Harold Pollack, a professor at the University of Chicago. Pollack’s bluff was duly called, and he quickly rushed off to find an index card and scribble some bullet points — with respectable results. 


When I heard about Pollack’s notion — he elaborated upon it in a 2016 book — I asked myself: would this work for statistics, too? There are some obvious parallels. In each case, common sense goes a surprisingly long way; in each case, dizzying numbers and impenetrable jargon loom; in each case, there are stubborn technical details that matter; and, in each case, there are people with a sharp incentive to lead us astray. 

The case for everyday practical numeracy has never been more urgent. Statistical claims fill our newspapers and social media feeds, unfiltered by expert judgment and often designed as a political weapon. We do not necessarily trust the experts — or more precisely, we may have our own distinctive view of who counts as an expert and who does not.  

Nor are we passive consumers of statistical propaganda; we are the medium through which the propaganda spreads. We are arbiters of what others will see: what we retweet, like or share online determines whether a claim goes viral or vanishes. If we fall for lies, we become unwittingly complicit in deceiving others. On the bright side, we have more tools than ever to help weigh up what we see before we share it — if we are able and willing to use them. 

In the hope that someone might use it, I set out to write my own postcard-sized citizens’ guide to statistics. Here’s what I learnt. 

Professor Pollack’s index card includes advice such as “Save 20 per cent of your money” and “Pay your credit card in full every month”. The author Michael Pollan offers dietary advice in even pithier form: “Eat Food. Not Too Much. Mostly Plants.” Quite so, but I still want a cheeseburger.  

However good the advice Pollack and Pollan offer, it’s not always easy to take. The problem is not necessarily ignorance. Few people think that Coca-Cola is a healthy drink, or believe that credit cards let you borrow cheaply. Yet many of us fall into some form of temptation or other. That is partly because slick marketers are focused on selling us high-fructose corn syrup and easy credit. And it is partly because we are human beings with human frailties. 

With this in mind, my statistical postcard begins with advice about emotion rather than logic. When you encounter a new statistical claim, observe your feelings. Yes, it sounds like a line from Star Wars, but we rarely believe anything because we’re compelled to do so by pure deduction or irrefutable evidence. We have feelings about many of the claims we might read — anything from “inequality is rising” to “chocolate prevents dementia”. If we don’t notice and pay attention to those feelings, we’re off to a shaky start. 

What sort of feelings? Defensiveness. Triumphalism. Righteous anger. Evangelical fervour. Or, when it comes to chocolate and dementia, relief. It’s fine to have an emotional response to a chart or shocking statistic — but we should not ignore that emotion, or be led astray by it. 

There are certain claims that we rush to tell the world, others that we use to rally like-minded people, still others we refuse to believe. Our belief or disbelief in these claims is part of who we feel we are. “We all process information consistent with our tribe,” says Dan Kahan, professor of law and psychology at Yale University. 

In 2005, Charles Taber and Milton Lodge, political scientists at Stony Brook University, New York, conducted experiments in which subjects were invited to study arguments around hot political issues. Subjects showed a clear confirmation bias: they sought out testimony from like-minded organisations. For example, subjects who opposed gun control would tend to start by reading the views of the National Rifle Association. Subjects also showed a disconfirmation bias: when the researchers presented them with certain arguments and invited comment, the subjects would quickly accept arguments with which they agreed, but devote considerable effort to disparage opposing arguments.  

Expertise is no defence against this emotional reaction; in fact, Taber and Lodge found that better-informed experimental subjects showed stronger biases. The more they knew, the more cognitive weapons they could aim at their opponents. “So convenient a thing it is to be a reasonable creature,” commented Benjamin Franklin, “since it enables one to find or make a reason for everything one has a mind to do.” 

This is why it’s important to face up to our feelings before we even begin to process a statistical claim. If we don’t at least acknowledge that we may be bringing some emotional baggage along with us, we have little chance of discerning what’s true. As the physicist Richard Feynman once commented, “You must not fool yourself — and you are the easiest person to fool.” 

The second crucial piece of advice is to understand the claim. That seems obvious. But all too often we leap to disbelieve or believe (and repeat) a claim without pausing to ask whether we really understand what the claim is. To quote Douglas Adams’s philosophical supercomputer, Deep Thought, “Once you know what the question actually is, you’ll know what the answer means.” 

For example, take the widely accepted claim that “inequality is rising”. It seems uncontroversial, and urgent. But what does it mean? Racial inequality? Gender inequality? Inequality of opportunity, of consumption, of educational attainment, of wealth? Within countries or across the globe? 

Even given a narrower claim, “inequality of income before taxes is rising” (and you should be asking yourself, since when?), there are several different ways to measure this. One approach is to compare the income of people at the 90th percentile and the 10th percentile, but that tells us nothing about the super-rich, nor the ordinary people in the middle. An alternative is to examine the income share of the top 1 per cent — but this approach has the opposite weakness, telling us nothing about how the poorest fare relative to the majority.  
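The two measures really can disagree on the same data. A minimal sketch (the toy income distribution and the nearest-rank percentile convention are illustrative choices, not any statistics agency's official methodology):

```python
# Two common, non-equivalent summaries of income inequality.

def percentile(values, p):
    # Nearest-rank percentile on a sorted copy (a simple convention;
    # official statistics typically use interpolated variants).
    xs = sorted(values)
    k = round(p / 100 * (len(xs) - 1))
    return xs[k]

def ratio_90_10(incomes):
    # Income at the 90th percentile over income at the 10th:
    # says nothing about the super-rich or the middle.
    return percentile(incomes, 90) / percentile(incomes, 10)

def top_1pc_share(incomes):
    # Share of total income going to the top 1% of earners:
    # says nothing about how the poorest fare.
    xs = sorted(incomes, reverse=True)
    n = max(1, len(xs) // 100)
    return sum(xs[:n]) / sum(xs)

incomes = list(range(1, 101))   # toy distribution: incomes 1, 2, ..., 100
print(ratio_90_10(incomes))     # roughly 8.2
print(top_1pc_share(incomes))   # roughly 0.02
```

A policy that enriched only the top earner would move the second number but not the first; one that squeezed the poorest would do the reverse. Knowing which measure a claim uses is half of understanding the claim.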

There is no single right answer — nor should we assume that all the measures tell a similar story. In fact, there are many true statements that one can make about inequality. It may be worth figuring out which one is being made before retweeting it. 

Perhaps it is not surprising that a concept such as inequality turns out to have hidden depths. But the same holds true of more tangible subjects, such as “a nurse”. Are midwives nurses? Health visitors? Should two nurses working half-time count as one nurse? Claims over the staffing of the UK’s National Health Service have turned on such details. 

All this can seem like pedantry — or worse, a cynical attempt to muddy the waters and suggest that you can prove anything with statistics. But there is little point in trying to evaluate whether a claim is true if one is unclear what the claim even means. 

Imagine a study showing that kids who play violent video games are more likely to be violent in reality. Rebecca Goldin, a mathematician and director of the statistical literacy project STATS, points out that we should ask questions about concepts such as “play”, “violent video games” and “violent in reality”. Is Space Invaders a violent game? It involves shooting things, after all. And are we measuring a response to a questionnaire after 20 minutes’ play in a laboratory, or murderous tendencies in people who play 30 hours a week? “Many studies won’t measure violence,” says Goldin. “They’ll measure something else such as aggressive behaviour.” Just like “inequality” or “nurse”, these seemingly common sense words hide a lot of wiggle room. 

Two particular obstacles to our understanding are worth exploring in a little more detail. One is the question of causation. “Taller children have a higher reading age,” goes the headline. This may summarise the results of a careful study about nutrition and cognition. Or it may simply reflect the obvious point that eight-year-olds read better than four-year-olds — and are taller. Causation is philosophically and technically a knotty business but, for the casual consumer of statistics, the question is not so complicated: just ask whether a causal claim is being made, and whether it might be justified. 

Returning to this study about violence and video games, we should ask: is this a causal relationship, tested in experimental conditions? Or is this a broad correlation, perhaps because the kind of thing that leads kids to violence also leads kids to violent video games? Without clarity on this point, we don’t really have anything but an empty headline.  

We should never forget, either, that all statistics are a summary of a more complicated truth. For example, what’s happening to wages? With tens of millions of wage packets being paid every month, we can only ever summarise — but which summary? The average wage can be skewed by a small number of fat cats. The median wage tells us about the centre of the distribution but ignores everything else. 

Or we might look at the median increase in wages, which isn’t the same thing as the increase in the median wage — not at all. In a situation where the lowest and highest wages are increasing while the middle sags, it’s quite possible for the median pay rise to be healthy while median pay falls.  

Sir Andrew Dilnot, former chair of the UK Statistics Authority, warns that an average can never convey the whole of a complex story. “It’s like trying to see what’s in a room by peering through the keyhole,” he tells me.  

In short, “you need to ask yourself what’s being left out,” says Mona Chalabi, data editor for The Guardian US. That applies to the obvious tricks, such as a vertical axis that’s been truncated to make small changes look big. But it also applies to the less obvious stuff — for example, why does a graph comparing the wages of African-Americans with those of white people not also include data on Hispanic or Asian-Americans? There is no shame in leaving something out. No chart, table or tweet can contain everything. But what is missing can matter. 

Channel the spirit of film noir: get the backstory. Of all the statistical claims in the world, this particular stat fatale appeared in your newspaper or social media feed, dressed to impress. Why? Where did it come from? Why are you seeing it?  

Sometimes the answer is little short of a conspiracy: a PR company wanted to sell ice cream, so paid a penny-ante academic to put together the “equation for the perfect summer afternoon”, pushed out a press release on a quiet news day, and won attention in a media environment hungry for clicks. Or a political donor slung a couple of million dollars at an ideologically sympathetic think-tank in the hope of manufacturing some talking points. 

Just as often, the answer is innocent but unedifying: publication bias. A study confirming what we already knew — smoking causes cancer — is unlikely to make news. But a study with a surprising result — maybe smoking doesn’t cause cancer after all — is worth a headline. The new study may have been rigorously conducted but is probably wrong: one must weigh it up against decades of contrary evidence. 

Publication bias is a big problem in academia. The surprising results get published; the follow-up studies finding no effect tend to appear in lesser journals, if they appear at all. It is an even bigger problem in the media — and perhaps bigger yet in social media. Increasingly, we see a statistical claim because people like us thought it was worth a Like on Facebook. 

David Spiegelhalter, president of the Royal Statistical Society, proposes what he calls the “Groucho principle”. Groucho Marx famously resigned from a club — if they’d accept him as a member, he reasoned, it couldn’t be much of a club. Spiegelhalter feels the same about many statistical claims that reach the headlines or the social media feed. He explains, “If it’s surprising or counter-intuitive enough to have been drawn to my attention, it is probably wrong.”  

OK. You’ve noted your own emotions, checked the backstory and understood the claim being made. Now you need to put things in perspective. A few months ago, a horrified citizen asked me on Twitter whether it could be true that in the UK, seven million disposable coffee cups were thrown away every day.  

I didn’t have an answer. (A quick internet search reveals countless repetitions of the claim, but no obvious source.) But I did have an alternative question: is that a big number? The population of the UK is 65 million. If one person in 10 used a disposable cup each day, that would do the job.  
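That sanity check is a single line of arithmetic, using only the rough population figure above:

```python
# Is 7 million cups a day a big number for a country of 65 million?
uk_population = 65_000_000
claimed_cups_per_day = 7_000_000

cups_per_person = claimed_cups_per_day / uk_population
print(round(cups_per_person, 2))  # about 0.11: one person in ten, per day
```

Nothing about this proves the claim, but it shows the number is at least plausible on its face.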

Many numbers mean little until we can compare them with a more familiar quantity. It is much more informative to know how many coffee cups a typical person discards than to know how many are thrown away by an entire country. And more useful still to know whether the cups are recycled (usually not, alas) or what proportion of the country’s waste stream is disposable coffee cups (not much, is my guess, but I may be wrong).  

So we should ask: how big is the number compared with other things I might intuitively understand? How big is it compared with last year, or five years ago, or 30? It’s worth a look at the historical trend, if the data are available.  

Finally, beware “statistical significance”. There are various technical objections to the term, some of which are important. But the simplest point to appreciate is that a number can be “statistically significant” while being of no practical importance. Particularly in the age of big data, it’s possible for an effect to clear this technical hurdle of statistical significance while being tiny. 

One study was able to demonstrate that children exposed to a heatwave in the womb went on to earn less as adults. The finding was statistically significant. But the impact was trivial: $30 in lost income per year. Just because a finding is statistically robust does not mean it matters; the word “significance” obscures that. 
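The general point can be sketched with a simple two-sample z-test. The numbers here are invented, not taken from the heatwave study: a fixed $30-a-year effect against a $5,000 standard deviation is nowhere near significant in a small sample, yet sails past the 5 per cent threshold once the sample is enormous, while remaining just as trivial.

```python
import math

def z_statistic(effect, sd, n):
    """z for a difference in means between two groups of size n each."""
    standard_error = math.sqrt(2 * sd**2 / n)
    return effect / standard_error

def two_sided_p(z):
    """Two-sided p-value for a normally distributed test statistic."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# The same trivial $30-a-year effect, against a $5,000 standard
# deviation, in a small study and an enormous one:
p_small = two_sided_p(z_statistic(30, 5_000, 1_000))
p_huge = two_sided_p(z_statistic(30, 5_000, 2_000_000))

print(round(p_small, 2))  # about 0.89: nowhere near significant
print(p_huge < 0.05)      # True: "statistically significant", still $30
```

Big data makes the second situation routine: the hurdle of significance gets easier to clear as n grows, even when the effect itself stays negligible.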

In an age of computer-generated images of data clouds, some of the most charming data visualisations are hand-drawn doodles by the likes of Mona Chalabi and the cartoonist Randall Munroe. But there is more to these pictures than charm: Chalabi uses the wobble of her pen to remind us that most statistics have a margin of error. A computer plot can confer the illusion of precision on what may be a highly uncertain situation. 

“It is better to be vaguely right than exactly wrong,” wrote Carveth Read in Logic (1898), and excessive precision can lead people astray. On the eve of the US presidential election in 2016, the political forecasting website FiveThirtyEight gave Donald Trump a 28.6 per cent chance of winning. In some ways that is impressive, because other forecasting models gave Trump barely any chance at all. But how could anyone justify the decimal point on such a forecast? No wonder many people missed the basic message, which was that Trump had a decent shot. “One in four” would have been a much more intuitive guide to the vagaries of forecasting.

Exaggerated precision has another cost: it makes numbers needlessly cumbersome to remember and to handle. So, embrace imprecision. The budget of the NHS in the UK is about £10bn a month. The national income of the United States is about $20tn a year. One can be much more precise about these things, but carrying the approximate numbers around in my head lets me judge pretty quickly when — say — a £50m spending boost or a $20bn tax cut is noteworthy, or a rounding error. 

My favourite rule of thumb is that since there are 65 million people in the UK and people tend to live a bit longer than 65, the size of a typical cohort — everyone retiring or leaving school in a given year — will be nearly a million people. Yes, it’s a rough estimate — but vaguely right is often good enough. 
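The cohort rule of thumb is itself a one-liner, using only the rough figures in the text:

```python
# Rough cohort size: population divided by a typical lifespan.
uk_population = 65_000_000
typical_lifespan = 70  # "a bit longer than 65", roughly

cohort_size = uk_population // typical_lifespan
print(cohort_size)  # 928571, i.e. "nearly a million"
```

Vaguely right: the precise answer varies year to year with birth rates and migration, but the order of magnitude is what matters for a quick judgement.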

Be curious. Curiosity is bad for cats, but good for stats. Curiosity is a cardinal virtue because it encourages us to work a little harder to understand what we are being told, and to enjoy the surprises along the way.  

This is partly because almost any statistical statement raises questions: who claims this? Why? What does this number mean? What’s missing? We have to be willing — in the words of UK statistical regulator Ed Humpherson — to “go another click”. If a statistic is worth sharing, isn’t it worth understanding first? The digital age is full of informational snares — but it also makes it easier to look a little deeper before our minds snap shut on an answer.  

While curiosity gives us the motivation to ask another question or go another click, it gives us something else, too: a willingness to change our minds. For many of the statistical claims that matter, we have already reached a conclusion. We already know what our tribe of right-thinking people believe about Brexit, gun control, vaccinations, climate change, inequality or nationalisation — and so it is natural to interpret any statistical claim as either a banner to wave, or a threat to avoid.  

Curiosity can put us into a better frame of mind to engage with statistical surprises. If we treat them as mysteries to be resolved, we are more likely to spot statistical foul play, but we are also more open-minded when faced with rigorous new evidence. 

In research with Asheley Landrum, Katie Carpenter, Laura Helft and Kathleen Hall Jamieson, Dan Kahan has discovered that people who are intrinsically curious about science — they exist across the political spectrum — tend to be less polarised in their response to questions about politically sensitive topics. We need to treat surprises as a mystery rather than a threat.  

Isaac Asimov is thought to have said, “The most exciting phrase in science isn’t ‘Eureka!’, but ‘That’s funny…’” The quip points to an important truth: if we treat the open question as more interesting than the neat answer, we’re on the road to becoming wiser.  

In the end, my postcard has 50-ish words and six commandments. Simple enough, I hope, for someone who is willing to make an honest effort to evaluate — even briefly — the statistical claims that appear in front of them. That willingness, I fear, is what is most in question.  

“Hey, Bill, Bill, am I gonna check every statistic?” said Donald Trump, then presidential candidate, when challenged by Bill O’Reilly about a grotesque lie that he had retweeted about African-Americans and homicides. And Trump had a point — sort of. He should, of course, have got someone to check a statistic before lending his megaphone to a false and racist claim. We all know by now that he simply does not care. 

But Trump’s excuse will have struck a chord with many, even those who are aghast at his contempt for accuracy (and much else). He recognised that we are all human. We don’t check everything; we can’t. Even if we had all the technical expertise in the world, there is no way that we would have the time. 

My aim is more modest. I want to encourage us all to make the effort a little more often: to be open-minded rather than defensive; to ask simple questions about what things mean, where they come from and whether they would matter if they were true. And, above all, to show enough curiosity about the world to want to know the answers to some of these questions — not to win arguments, but because the world is a fascinating place. 

Sunday 6 March 2016

How can we know ourselves?

 Questioner: How can we know ourselves?
Jiddu Krishnamurti: You know your face because you have often looked at it reflected in the mirror. Now, there is a mirror in which you can see yourself entirely - not your face, but all that you think, all that you feel, your motives, your appetites, your urges and fears. That mirror is the mirror of relationship: the relationship between you and your parents, between you and your teachers, between you and the river, the trees, the earth, between you and your thoughts. Relationship is a mirror in which you can see yourself, not as you would wish to be, but as you are. I may wish, when looking in an ordinary mirror, that it would show me to be beautiful, but that does not happen because the mirror reflects my face exactly as it is and I cannot deceive myself. Similarly, I can see myself exactly as I am in the mirror of my relationship with others. I can observe how I talk to people: most politely to those who I think can give me something, and rudely or contemptuously to those who cannot. I am attentive to those I am afraid of. I get up when important people come in, but when the servant enters I pay no attention. So, by observing myself in relationship, I have found out how falsely I respect people, have I not? And I can also discover myself as I am in my relationship with the trees and the birds, with ideas and books.
You may have all the academic degrees in the world, but if you don't know yourself you are a most stupid person. To know oneself is the very purpose of all education. Without self-knowledge, merely to gather facts or take notes so that you can pass examinations is a stupid way of existence. You may be able to quote the Bhagavad Gita, the Upanishads, the Koran and the Bible, but unless you know yourself you are like a parrot repeating words. Whereas, the moment you begin to know yourself, however little, there is already set going an extraordinary process of creativeness. It is a discovery to suddenly see yourself as you actually are: greedy, quarrelsome, angry, envious, stupid. To see the fact without trying to alter it, just to see exactly what you are is an astonishing revelation. From there you can go deeper and deeper, infinitely, because there is no end to self-knowledge.
Through self-knowledge you begin to find out what is God, what is truth, what is that state which is timeless. Your teacher may pass on to you the knowledge which he received from his teacher, and you may do well in your examinations, get a degree and all the rest of it; but, without knowing yourself as you know your own face in the mirror, all other knowledge has very little meaning. Learned people who don't know themselves are really unintelligent; they don't know what thinking is, what life is. That is why it is important for the educator to be educated in the true sense of the word, which means that he must know the workings of his own mind and heart, see himself exactly as he is in the mirror of relationship. Self-knowledge is the beginning of wisdom. In self-knowledge is the whole universe; it embraces all the struggles of humanity.

Wednesday 6 August 2014

Cricket - The fear of the ringer

 

Jonathan Wilson in Cricinfo
Slow straight bowling can become infused with mystery and terror when you think you're facing a ringer  © PA Photos
Cricket, probably more than any other sport, encourages the ringer. Everybody who has ever played at any kind of amateur level knows that Sunday morning feeling, either calling round mates and mates of mates to see if anybody fancies making up the numbers, or getting an unexpected phone call from somebody you last saw in a bar at university ten years earlier seeing if you fancy a game.
It happens in other sports as well, of course, but cricket, as an individual sport dressed in a team game's clothing, seems more conducive to the ringer. A footballer or a hockey player suddenly introduced to an unfamiliar team will stand out a mile, the holistic nature of those sports meaning he won't be making a run he needs to, or he'll be providing cover where none is needed. In cricket, though, you pick up the ball and bowl, or pick up the bat and bat, and, apart from knowing the idiosyncrasies of how other batsmen run or the vagaries of who fields best where, you can essentially just get on with it. 
Even better, because of the self-regulatory element of cricket, the way a batsman can retire, or a bowler can be taken off if he's bowling so well he threatens to unbalance the game, it doesn't really matter if there's one player who's far better than everybody else. It doesn't really matter if there's one player who's far worse: even good players score ducks, so the weak link doesn't stand out as he would in another sport.
The best ringer I ever played with was the West Indies offspinner Omari Banks who, aged 16 or 17, for reasons I can't recall, joined our college team for a tour. He was an up-and-coming star, we were told, a bowler who was expected to play Test cricket sooner rather than later.
A first glance was confusing. He belted the ball miles and clearly had a superb eye, but his frequently short offbreaks were remarkably unthreatening. He must be a quick taking it easy on us, we thought; five years later, he was taking three wickets (for lots) and scoring 47 not out as West Indies chased down 418 to beat Australia in Antigua. There was something rather comforting in that: he'd seemed far more like a batsman than a bowler to us.
Clean though his hitting had been, the truth is Banks had been a little bit of a disappointment to us. Hearing we were getting a West Indies bowler, we'd assumed we could play along and then chuck him the ball as soon as a partnership began to get annoying, effectively guaranteeing wins.
Absurdly, the following year, I found myself cast unwittingly in the Banks role - in relative terms; nobody would ever have pretended I was on the verge of a Test debut. I'd just finished my Masters and was temping at a data entry centre in Sunderland. A mate was working at the City of Newcastle Development Agency and called me one day to see if I fancied playing against British Airways the following day. By starting work early and taking only 15 minutes for lunch, I was able to get up to Ponteland, to a bleak field near the airport, in time to play.
"What do you do?" the captain asked. The honest answer would have been, "Nothing very well," but I grunted, "Bits and pieces."
He nodded and, having won the toss and opted to bat, asked me to open. I had occasionally opened for my college Second XI as an undergrad, so it didn't seem that odd, although at Durham I'd tended to bat at seven or eight for the Graduate Society. On a horrible, sticky pitch, I ground my way to 27, at which point, having heard the grumbles from the boundary, I slashed at a wide one and was caught at deep cover. My slow start having forced others to play overly aggressively, I ended up top-scoring as we made 90-odd in 20 overs.
That, I assumed, was that. I fielded at backward point and took a catch, but the game seemed to be drifting slowly away from us when the captain suddenly asked me to bowl the 13th over. This seemed very strange, but I wasn't going to say no. The batsman was set, had scored 30 or so, and looked far better than anybody else in the game.
My first ball, a pushed through offbreak, was blocked. The second he clubbed through midwicket for four, although it had turned a little and it had come slightly off the inside edge. The third ball I tossed up, it didn't turn, he played for spin that wasn't there and chipped it to cover. "Thinking cricket," said the captain, apparently in the belief there'd been some element of skill or planning in what had just happened.
What happened next was mystifying. The new batsman blocked out the over. They blocked out the 15th over as well. Ludicrously I had figures of 2-1-4-1. Suddenly they needed over a run a ball. In my third over, the batsman, having to force the pace, came down the track, yorked himself and was stumped. Two balls later the new batsman did the same thing. We ended up winning by 12 runs and, without really knowing how, I'd taken 3 for 14.
It later turned out my mate had rather oversold me, or rather, our captain had assumed the level of college cricket at my university was rather higher than it was. After I'd batted so sluggishly, he'd assumed I must be a bowler and so had decided to give me four overs at the death. He'd even let on to the opposing captain that I was a ringer, with a suitably inflated suggestion of my abilities. When I'd then fortuitously dismissed their best player, it confirmed their fears, which explained the nine successive balls nobody had tried to hit. Slow straight bowling had become infused with mystery and terror.
None of it was real, of course. The wickets had been conjured by fear of the ringer. It was a valuable lesson: pretend you know what you're doing, and opponents might just destroy themselves by believing you.

Wednesday 28 November 2012

Tobacco companies ordered to admit they lied over smoking danger



US judge says tobacco firms must spend their own money on a public campaign admitting deception about the risks of smoking
guardian.co.uk
The public advertisements are to be published in various media for as long as two years Photograph: Alex Segre/Alamy
Major tobacco companies who spent decades denying they lied to the US public about the dangers of cigarettes must spend their own money on a public advertising campaign saying they did lie, a federal judge ruled on Tuesday.
The ruling sets out what might be the harshest sanction to come out of a historic case that the Justice Department brought in 1999 accusing the tobacco companies of racketeering.
US District judge Gladys Kessler wrote that the new advertising campaign would be an appropriate counterweight to the companies' "past deception" dating to at least 1964.
The advertisements are to be published in various media for as long as two years.
Details of the campaign - like how much it will cost and which media will be involved - are still to be determined and could lead to another prolonged fight.

Kessler's ruling on Tuesday, which the companies could try to appeal, aims to finalise the wording of five different statements the companies will be required to use.
One of them begins: "A federal court has ruled that the defendant tobacco companies deliberately deceived the American public by falsely selling and advertising low tar and light cigarettes as less harmful than regular cigarettes."
Another statement includes the wording: "Smoking kills, on average, 1,200 Americans. Every day."
The wording was applauded by health advocates who have waited years for tangible results from the case.
"Requiring the tobacco companies to finally tell the truth is a small price to pay for the devastating consequences of their wrongdoing," said Matthew Myers, president of the Campaign for Tobacco-Free Kids, an anti-tobacco group in Washington.
"These statements do exactly what they should do. They're clear, to the point, easy to understand, no legalese, no scientific jargon, just the facts," said Ellen Vargyas, general counsel for the American Legacy Foundation.
The largest cigarette companies in the United States spent $8.05 billion in 2010 to advertise and promote their products, down from $12.5 billion in 2006, according to a report issued in September by the Federal Trade Commission.
The major tobacco companies, which fought having to use words like "deceived" in the statements, citing concern for their rights of free speech, had a muted response.
"We are reviewing the judge's ruling and considering next steps," said Bryan Hatchell, a spokesman for Reynolds American Inc.
Philip Morris USA, a unit of Altria Group Inc, is studying the decision, a spokesman said.
The Justice Department, which urged the strong language, was pleased with the ruling, a spokesman said.
Kessler's ruling considered whether the advertising campaign - known as "corrective statements" - would violate the companies' rights, given that the companies never agreed with her 2006 decision that they violated racketeering law.
But she concluded the statements were allowed because the final wording is "purely factual" and not controversial.
She likened the advertising campaign to other statements that US officials have forced wayward companies to make.
The Federal Trade Commission, she wrote, once ordered a seller of supposed "cancer remedies" to send a letter on its own letterhead to customers telling them the commission had found its advertising to be deceptive.
"The government regularly requires wrongdoers to make similar disclosures in a number of different contexts," Kessler wrote.