
Showing posts with label probability. Show all posts

Monday, 6 July 2020

This pandemic has exposed the uselessness of orthodox economics

Post Covid-19, our priority should be to build resilient systems explicitly designed to withstand worst-case scenarios, opines Jonathan Aldred in The Guardian.

 
‘Framing the future in terms of probabilities gives us the illusion of knowledge and control, which is extraordinarily tempting, but it’s all hubris.’ Photograph: Daniel Sorabji/AFP via Getty Images


Even before the pandemic came along, the world economy faced a set of deepening crises: a climate emergency, extreme inequality and huge disruption to the world of work, with robots and AI systems replacing humans.

Conventional economic theories have had little to offer. On the contrary, they have acted like a cage around our thinking, vetoing a range of progressive policy ideas as unaffordable, counter-productive, incompatible with free markets, and so on. Worse than that, economics has led us, in a subtle, insidious way, to internalise a set of values and ways of seeing the world that prevents us even imagining various forms of radical change.

Since economic orthodoxy is so completely embedded in our thinking, escape from it demands more than a short-term spending splurge to prevent immediate economic collapse, vital though that is. We must dig deeper to uncover the economic roots of the mess we’re in. Putting it more positively, what do we want from post-coronavirus economics?

Mainstream economics has taught us that the only rational way to deal with an uncertain future is to quantify it, by assigning a probability to every possibility. But even with the best expertise in the world, our knowledge often falls far short. Frequently we struggle to predict which outcomes are more likely. Worse still, there may be outcomes we haven’t even considered, futures that no one had imagined, as the pandemic has so vividly shown.

Framing the future in terms of probabilities gives us the illusion of knowledge and control, which is extraordinarily tempting, but it’s all hubris. In the run-up to the 2007 financial crisis, bankers were proud of their models. Then that August, the Goldman Sachs chief financial officer admitted the bank had spotted huge price moves in some financial markets, several times in one week. Yet according to its models, each of these moves was supposed to be less likely than winning the UK national lottery jackpot 21 times in a row. World events sometimes demand humility.
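To get a feel for how small such model-implied probabilities are, here is a rough back-of-the-envelope sketch in Python. The 1-in-13,983,816 figure is the jackpot odds of the old 6/49 UK lottery draw — my assumption for illustration, not a number from the article:

```python
# Assumed odds of winning one 6/49 jackpot: 1 in 13,983,816.
jackpot_p = 1 / 13_983_816

# The models priced some market moves as less likely than this:
p_21_in_a_row = jackpot_p ** 21
print(p_21_in_a_row)  # on the order of 1e-150
```

A probability that small is not a forecast in any meaningful sense; it is a sign the model's assumptions have broken down.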

There are clear lessons here for how to address the climate emergency: rather than focusing on the average climate impacts predicted by mathematical models that depend on probabilistic knowledge that is highly unreliable, we must think seriously about worst-case scenarios, and take steps to avoid them. Yet economic orthodoxy pushes us away from precautionary action. If mainstream economics has a single overarching objective or principle, it is efficiency.

Efficiency means getting the most “bang for your buck”, the most benefit for every pound spent. Any other course of action is wasteful, surely? But eliminating waste implies eliminating excess capacity, and we now see the consequences of that in health systems worldwide. Our obsession with efficiency, if it means failing to plan for a pandemic or a climate emergency, will cost lives.

Our priority should be resilience, not efficiency. We need to build resilient systems and economies that are explicitly designed to withstand worst-case scenarios – and have a fighting chance of coping with unforeseen disasters too.

Ultimately the problem with economic orthodoxy lies in how it frames our values and priorities. Decisions must always be about trade-offs – the weighing up of costs against benefits, ideally measured through prices in markets. If we take our ignorance about the future seriously, this cost-benefit calculus should not even get started. Because costs outweighing benefits is the oldest excuse for not taking precautions – and is a recipe for disaster when the benefits, or the costs of inaction, are vastly undervalued.

Cost-benefit thinking also leads us to assume that all values can be expressed in monetary terms. Many politicians and business leaders fixate on statements such as “a 2°C rise in average global temperature will reduce GDP by up to 2%”, as though a fall in GDP measures the true costs of the climate emergency.

In practice, this thinking means that the value of everything is measured by how much people offer to pay for it. Since the rich can always pay more than the poor, priorities get skewed towards the desires of the rich, away from the needs of the poor. So more money is spent on R&D for anti-wrinkle creams than for malaria treatments. Big Pharma has been relatively uninterested in developing vaccines, because a vaccination programme only works if the poor get vaccinated too, which limits the price manufacturers can charge.

We might seem to be beyond that now: the world has woken up, and rich countries will spend "whatever it takes" to tackle the pandemic. But Covid-19 vaccine research – and countless other fields of medical research with the potential to save as many lives in the long term – needs continuous, reliable funding over many years. Once the market sees better profit elsewhere, funding will be cut, and the researchers will retire or move on, their experience lost.

Economic orthodoxy supports the narrative that this pandemic is a unique disaster no one could have prepared for, and with no wider lessons for economics and politics. This story suits some of the world’s billionaires, but it’s not true. There is an alternative: the pandemic provides further evidence that to tackle the climate emergency, inequality and any emerging crises, we must re-think our economics from the bottom up.

Monday, 28 July 2014

How we misunderstand risk in sport

Aggression, defence, success, failure, innovation - they are all about our willingness to take risks and how we judge them
Ed Smith in Cricinfo 
July 28, 2014

Same risk, different outcome: when a batsman goes after a bowler, he could end up being dismissed or hitting a six © Getty Images

The World Bank recently asked me to give a speech at a forum in London called "Understanding Risk". Initially, I was unsure how I could approach the subject. How could I, an ex-sportsman turned writer, address financial experts on the question of risk?
On reflection, I realised there is another profession, followed around the world and relentlessly scrutinised, that relies almost entirely on the assessment of risk. Without risk, there can be no reward. Without risk, there are no triumphs. Without risk, there can be no progress.
And yet this entire profession, this whole sphere of human endeavour, doesn't really understand risk at all. It uses the term sloppily, even incorrectly. It criticises good risks and celebrates bad risks. It cannot distinguish between probabilities and outcomes.
It has changed its approach to risk, swapping one flawed approach for the opposite mistake. In the old amateur days, when it was run and managed like an old boys' club, there was little or no calculation of risk - merely unscientific anecdotes and old wives' tales. But the brave new dawn of social science didn't prove any better. In fact, it might be even worse: people put too much faith in maths, metrics and quantification. It has lurched from old boys' network to pseudo-science - without pausing en route where it ought permanently to reside: with the acknowledgment that risk requires subjective but expert judgement. There is no perfect formula. If there were, everyone with a brain would succeed.
The sphere I describe, of course, is not finance or banking but professional sport. Sporting strategy - sometimes analytical and planned, sometimes instant and intuitive - always revolves around the assessment of risk. Taking risks is what sportsmen do for a living. And yet the analysis of risk does not match this practical reality. We usually talk in clichés not truths, often criticising good risks and praising bad risk-taking.
Here are four ways the sports world often misinterprets risk.

Risk is everywhere

In cricket, every attacking shot played by a batsman carries an element of risk, no matter how small. Stop playing shots and you cannot score runs. "You miss 100% of the shots you don't take," as Wayne Gretzky, the greatest ice hockey player of all time, put it.
And it is amazing, when you stop playing attacking shots, how much better bowlers bowl. Effective risk-taking has an intimidatory effect. Total risk-aversion has the opposite effect: it emboldens your opponent, making him feel safe and relaxed.
In football, when a midfield player advances up the pitch, he is trying to orchestrate a goal while also reducing his own team's defensive protection. In risking creating a goal, he increases the risk of conceding one. Defenders, too, constantly weigh risks. Pressing the opposition, trying to get the ball back from them, is a risk. In moving up the pitch without possession, you create space behind you - space the opposition can exploit if they are good enough to keep the ball and get past you.
But the alternative - safety-first defending - brings risks of another kind. If you never press, and always retreat into the safety of deep defensive organisation, then you rarely regain the ball. You dig your own trench, unable to threaten or frighten the opposition, merely sitting there waiting for the next wave of attack.
Tennis is all about risk. With your groundstrokes, if you are determined never to lose a point by hitting the ball long, not even once, then sadly you won't play with enough depth to make life difficult for your opponent. You will make zero errors and still lose.
And when it's your turn to return, if you never run round your backhand in the hope of hitting a forehand winner, then you will allow your opponent to settle into a comfortable serving rhythm. In the pursuit of good returning, you have to risk getting aced. You have to risk failure in the short term to give yourself a chance in the long term. You have to dare to be great.

Being right is not the same thing as events turning out well

You can be right and fail. You can be wrong and succeed.
Sport rarely allows for this. We say that winning "justified the decision", a classic failure to distinguish between ex ante and ex post thinking. Instead, the real question should be: would I do the same thing again, given the information I had at the time? Coaches and captains often make the right calls and lose. And they often make the wrong calls and win. It is stupid to judge a man's judgement on a sample size of one event.
The same point applies to risks taken by players. An unthinking tribal fan will shout "hero" when a risk-taking batsman hits a six, then scream "idiot" when the same shot ends up in a fielder's hands.
What a champion to take on the bowler! What a fool to take such a risk! The inconsistency here is not the batsman's, it is the spectator's. Coward/hero, fool/champion, disgrace/legend. The same risk can lead to either assessment.

Many crucial risks are invisible 

There are risks that no one sees that still have to be taken. Critics delude themselves that the only form of bravery in sport is guts and determination. At least as important is nerve, or, put differently, the capacity to endure risk imperceptibly.
When I was commentating with Sourav Ganguly at Lord's last week, he told me that Virender Sehwag used to shout, "He missed a four!" while he was in the dressing room watching team-mates batting. Ganguly quite rightly added that missing an opportunity to do something good is just as much of a mistake as making a visible error.
Many teams imperceptibly yield an advantage through timidity, fearfulness, and anxiety about standing out for the wrong reasons - an advantage they never subsequently reverse.
During the last Ashes series, I used this column to develop the metaphor of looking at sport as an old-fashioned battlefield. As the front lines engage and each army tries to advance, the direction of travel will be determined by tiny acts of skill and bravery - and equally imperceptible acts of risk aversion.
Somewhere on the front line, an infantryman inches a foot closer to his ally, hiding his own shield slightly behind his friend's. Hence one man becomes fractionally safer - but if the action is repeated a thousand times, the front line becomes significantly narrower and weaker as a whole. No one individual can be singled out as a hopeless failure. But the group suffers a collective diminution.
So it is in sport. When a batsman fails to hit a half-volley for four because he is too cautious, an opportunity is wasted to exploit an advantage offered to his team.
We talk a great deal about momentum, but not enough about how momentum is created. Once the whole army is retreating, even the bravest soldiers can fail to hold the line. We talk of courage when the tide has already turned. So in place of the usual clichés, "out-fought", "out-toughed", "out-hungered", I have a simpler word: outplayed. Or, even better, "quietly, perhaps indiscernibly, defeated by superior risk-taking".

The essential risk of being prepared to look silly

This is how sport moves forward. In 1968, a professional athlete had a crazy idea. Madder still, he had this idea just before the tournament event of his life. He wanted to rip up the coaching manual and do it all his own way. His coaches told him to forget about it, to stick with the old way of doing things, not to rock the boat.
He ignored them. He was a high-jumper, and he instinctively wanted to go over the bar head first, back down - not, as everyone else did, leg first, face down. At the 1968 Mexico Olympics, despite everyone telling him he was mad, he went ahead with his revolutionary technique. And how did it work out? He won a gold medal and set a new world record. He was called Dick Fosbury and he'd just invented the Fosbury Flop.
Sport is about problem-solving. A challenge is set: kick the ball into the net; hit the ball over the boundary; jump over the bar. From then on, solutions evolve, sometimes deliberately, sometimes by accident. And the best way to discover new, better methods is to allow people to experiment through trial and error. Don't see what everyone else is doing and copy it. Find a better way.
The left-field question is the one to ask. Why shouldn't I jump over the high-jump bar head first? Why shouldn't I aim my sweep shot towards off side where there aren't any fielders (the reverse sweep, the switch hit)?
Sport moves forward when it is irreverent, resistant to authority. The greatest cricketer of all time, Don Bradman, used a technique that no one has dared to try out a second time. His bat swing started way out to the side, rather than as a straight pendulum line from behind him.
Let me repeat. The method that made Bradman one and a half times better than the second-best player was consigned to the rubbish bin of sporting ideas. Bradman was prepared to look stupid by risking a unique rather than textbook technique. Others have been unwilling or unable to follow.
Bradman, however, benefited from one huge slice of luck. He escaped the greatest risk that can befall any genius: formal education. He learnt to bat on his own, using the empirical method, without a coaching manual. As a child he would repeatedly hit a golf ball against the curved brick base of his family's water tank.
Here is a startling thought. How many Bradmans were persuaded to try the usual technique? How many Fosburys were talked out of taking a chance?
In the course of trying to be different and better, you have to bear the risk of being different and worse.

Tuesday, 22 April 2014

Understanding Risk - Risk explained to a sixteen-year-old



By Girish Menon

Risk is the consequence one has to suffer when the outcome of an event is not what one expected or invested in.

For example, as a GCSE student you have invested in getting the grades required by the sixth-form college that you wish to go to.

The GCSE exam therefore is the event.

From an individual's point of view this event has only two possible outcomes, viz. you get the grades or you don't.

Your investment is time, money and effort in order to get the desired outcome.

The risk is what you will have lost if, despite all your investment, you do not get the desired grades and hence cannot do what you had wanted to do.

From a mathematical point of view, since there are only two possible outcomes, one might naively say that the probability of either outcome is 0.5. But having two outcomes does not make them equally likely: the probability of each depends on many factors.

Your investments - the time spent studying, taking tuition, buying books... - are meant to lower the probability of failure to as low a figure as possible.

Can you lower the probability of failure to 0? Yes, by invoking the ceteris paribus assumption. If all 'other factors' that affect a student's ability to take an exam are constant, then a student who has studied all the topics and solved past papers will not fail.

Otherwise, some or all of the 'other factors' may conspire to bring about a result that the student does not desire. It is impossible to list all the 'other factors', and hence one is unable to control them. So the exam performance of even a hitherto good student remains uncertain.

If the above example, with only two possible outcomes, shows the uncertainty and unpredictability in the exam results of a diligent student, then one shudders to think about events where all the possible outcomes cannot even be identified.
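A few lines of Python make the point that two outcomes need not mean a 50-50 split. The pass probabilities below are purely illustrative assumptions:

```python
import random

random.seed(0)

def exam_outcome(p_pass):
    """One two-outcome event: 'pass' or 'fail', with probability p_pass of passing."""
    return "pass" if random.random() < p_pass else "fail"

# Two outcomes do not mean a 50-50 split: a well-prepared student might
# face p_pass = 0.9, an unprepared one p_pass = 0.3 (illustrative figures).
trials = 100_000
for p_pass in (0.9, 0.3):
    passes = sum(exam_outcome(p_pass) == "pass" for _ in range(trials))
    print(p_pass, passes / trials)
```

The simulated pass rates track the underlying probabilities, not the count of outcomes.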

Let's move on to the English Premier League. Here, each team plays 38 matches and each match can have only three outcomes. When one considers picking a winner of the league one could look at the teams, the managers, etc. But 'other factors' such as injury to key players, the referee... may scupper the best-laid plans.
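A quick sketch shows how fast the outcome space grows: a single team's season already has an astronomical number of possible result sequences.

```python
outcomes_per_match = 3   # win, draw or loss
matches = 38             # one Premier League season for a single team
sequences = outcomes_per_match ** matches
print(f"{sequences:,}")  # over a quintillion distinct season result-sequences
```

And that is before considering how the 20 teams' results interact, or any of the 'other factors' above.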

When one looks at investing in the shares of a company, one may study its books of accounts. Even assuming that these books are accurate, the information may be inadequate, because it is information from the past: the firm that made a huge killing last season may now be facing turbulent conditions of which you, an outside investor, are unaware. The 'other factors' that may impinge on a firm's performance include the behaviour of the staff inside the firm, the behaviour of other firms, government policies and even global events.

Yet, as a risk underwriter one has to take into account all of these factors, quantify each factor based on its importance and likelihood of happening, and then estimate the risk of failure. The key thing to remember is that the quantitative value you have given each factor is at best a rough estimate and could be wrong. That is why every risk underwriter follows Keynes' dictum, 'When the facts change, I change my mind.' George Soros, the celebrated investor, is rumoured to have said no to investment decisions he had approved only a few hours earlier.

Even though Keynes and Soros changed their minds on receipt of new information, I am willing to bet that their investment records show many wrong decisions.

So if the risk in investment decisions cannot be accurately predicted, imagine the dilemma a politician faces when he decides to take his nation to war.


Hence the best way sportsmen, businessmen and politicians overcome the uncertainty of decision-making is by posturing: pretending that you are the best and that everything is within your control. They hope that this will scare away the challengers and doubters, and that victory becomes a self-fulfilling prophecy. Alas! It does not work every time either.

(The author is a lecturer in economics.)

Friday, 26 July 2013

The DRS problem: it's not the humans, stupid


Kartikeya Date 

The controversial Trott decision: what many observers don't get is that it wasn't actually the third umpire who made the final call  © PA Photos

The DRS is a system in which umpiring decisions can be reviewed by players. Events on the field can also be reviewed by umpires in some circumstances before a decision is made. A widely held view about recent problems with the system is that while the DRS is fine, the way it is used by players, and on occasion by umpires, has caused difficulties.
I hold the view that the problem, if there is one, is with the system, not with the way it is used. The way the system is defined strictly determines the way it is used.
The DRS system I refer to is described in detail by the ICC in its Playing Handbook (pdf). It is worth clearing up a few misconceptions at the outset.
The TV umpire does not overturn a decision under the DRS. The TV umpire is explicitly prohibited from discussing whether or not a particular appeal should result in an out or a not out. Further, there is no standard in the DRS requiring "conclusive evidence to the contrary" to overturn a decision, as many commentators are fond of telling us.
The rules make only three points. First, the TV umpire must limit himself to the facts. Second, if some of the evidence requested by the umpire on the field does not permit a conclusion with "a high degree of confidence", the TV umpire should convey to the umpire on the field that a conclusive answer is not possible (the conclusion in this case is not the decision itself but about individual points of fact potentially influencing it). Finally, if some information is not available to the TV umpire, he is required to report this to the on-field umpire. He is also required to provide all other evidence requested by the on-field umpire. If we go by the ICC's DRS rules, at no point in the review process is the TV umpire required to provide a definitive conclusion by putting together all the evidence.
The Guardian reported that the ICC did admit to a protocol error in the way the umpires addressed Australia's review in Jonathan Trott's first-ball lbw dismissal in the second innings at Trent Bridge. The ICC has declined to say what the protocol error was, citing a long-standing policy of not revealing communication between umpires. A number of observers think that the absence of one Hot Spot camera angle should have automatically meant that the outcome of the review should have been inconclusive, allowing Dar's original not-out decision to stand. I think this is a misreading of the ICC's DRS rules.
Let's reconstruct the case of Trott. Umpire Erasmus in the TV umpire's box would not be asked "Is Trott LBW?", or even "Did Trott hit the ball with the bat?" Going by the ICC's rules, he would be asked a different series of questions. Does Hot Spot show a touch? No. Does the replay show a touch? Inconclusive. No clear evidence of a deviation. (Some people have argued that there was evidence of deviation on the replay. I disagree. As did Michael Atherton on live commentary.) Does the square-of-the-wicket Hot Spot show a touch? This angle is unavailable. Can you hear any relevant sound on the stump microphone? Inconclusive. Did the ball pitch in line? Yes. Did it hit the pads in line? Yes. Does the ball-track predict that it would have hit the stumps? Yes.
According to the rules, Erasmus would be prevented from providing probabilities or maybes. It would have to be yes, no, or can't say. After getting all these factual responses from Erasmus, Dar would have to make up his mind. Did what he heard from Erasmus merit reversal? As we know, he decided that it did. The protocol error could have been that Erasmus neglected to mention that one of the Hot Spot angles was unavailable. It could also have been that Dar weighed all the facts Erasmus provided to him incorrectly and reached the wrong conclusion, though it is difficult to construe this last possibility as a protocol error, since the protocol explicitly requires the on-field umpire to exercise judgement, which is what Dar did. "The on-field umpire must then make his decision based on those factual questions that were answered by the third umpire, any other factual information offered by the third umpire and his recollection and opinion of the original incident" (See 3.3[k] of Appendix 2 of the Standard Test Match Playing Conditions, ICC Playing Handbook 2012-13).
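The question-and-answer protocol described above might be sketched roughly as follows. The function names and the recorded answers are my illustration of the rules as reconstructed here, not the ICC's actual wording or software:

```python
def tv_umpire_answers():
    # Factual answers only - "yes", "no" or "can't say" - as reconstructed
    # for the Trott review (illustrative, not an official record).
    return {
        "hotspot_shows_touch": "no",
        "replay_shows_touch": "can't say",
        "square_hotspot_available": "no",   # that camera angle was unavailable
        "stump_mic_sound": "can't say",
        "pitched_in_line": "yes",
        "hit_pads_in_line": "yes",
        "ball_track_hits_stumps": "yes",
    }

def on_field_umpire_decides(facts):
    # The judgement call rests with the on-field umpire, who weighs the facts:
    # no positive evidence of bat, and all lbw criteria answered "yes".
    no_bat = (facts["hotspot_shows_touch"] == "no"
              and facts["replay_shows_touch"] != "yes")
    lbw_criteria = (facts["pitched_in_line"] == "yes"
                    and facts["hit_pads_in_line"] == "yes"
                    and facts["ball_track_hits_stumps"] == "yes")
    return "out" if no_bat and lbw_criteria else "not out"

print(on_field_umpire_decides(tv_umpire_answers()))  # "out", as Dar ruled
```

The point of the sketch is that the final line is a weighing of facts, not a verdict handed down by the TV umpire.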
This is the central faultline in the understanding of the DRS. To some technophiles, it promises an end to interpretation; that, with the DRS, there is to be no more "in the opinion of the umpire". Technology will show everything clearly - make every decision self-evident.
Not so. Under the DRS, a judgement has to be made about whether or not evidence is conclusive. A judgement also has to be made about whether all the evidence (often conflicting, due to the limitations of the technologies involved), taken together, merits a reversal. There have been instances where outside edges have been ruled to have occurred, though there was no heat signature on the bat.
The ICC has consistently insisted that the idea is not to render umpires obsolete. It is right, but in a convoluted way. What the DRS does is allow umpires a limited, strictly defined second look at an event. But it does so on the players' terms. Umpires are currently not allowed to review a decision after it has been made on the field. The "umpire review" element of the DRS takes place before the decision is made on the field in the first instance. Simon Taufel, who has wide experience of both DRS and non-DRS international matches, has questioned whether this is reasonable.
So far, the DRS has been badly burnt in the ongoing Ashes, and has received criticism from some unexpected quarters. Add to this a recent report that a few boards other than India's also oppose it. I suspect that the DRS will not survive in its present form for long.
The ICC is experimenting with real-time replays, which it says will allow TV umpires to initiate reviews. The ICC has long claimed that this is currently not done because it will waste time. The ICC's statistics suggest that in an average DRS Test match, 49 umpiring decisions are made (a decision is said to be made when an appeal from the fielding side is answered). Let's say an average Test lasts 12 sessions. This suggests that on average about four appeals are made per session of Test cricket when the DRS is employed. These numbers don't suggest that allowing umpires to initiate reviews will result in too much extra wasted time, do they? It should be kept in mind, though, that the ICC assesses time wasted relative to the progress of the game, and not simply as a measure in seconds or minutes.
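The arithmetic behind that per-session figure is simple enough to check:

```python
decisions_per_test = 49  # ICC's cited average of appealed decisions per DRS Test
sessions_per_test = 12   # the assumption above for an average Test
print(round(decisions_per_test / sessions_per_test, 1))  # about 4.1 per session
```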
The most damaging consequence of the DRS is off the field. It has now become a point of debate among professional observers of cricket about whether dismissals are determined by the umpire. The idea that the umpire is an expert whose role it is to exercise judgement, and whose judgement is to be respected, is now only superficially true. Time and again, eminently reasonable lbw decisions are reversed for fractions, and as a result are considered clear mistakes. Cricket has lost the ability to appreciate the close decision, the marginal event. It has lost the essential sporting capacity to concede that an event on the field is so close that perhaps a decision in favour of the opposition is reasonable.

Thursday, 30 May 2013

The mathematics of spot-fixing

by Dilip D'Souza
Spot-fixing: suddenly on a whole lot of minds. Three young cricketers accused of doing it for no real reason I can fathom except greed. After all, they were already earning money legitimately far in excess of the great majority of their countrymen.
Still, I’m not here to pass judgement on these men. They are innocent until we find otherwise, and that finding will eventually come from a court. And anyway, who knows what motivates young men with lots of money? No, I’m here to discuss what makes spot-fixing possible; especially, some of the mathematics behind it all.
But let’s start with this: what makes a bet possible? Of course, I suspect it is almost human nature to want to gamble. But that desire is founded on probabilities. You consider an upcoming event, you estimate the probability of it turning out a certain way, and you choose to place a bet (or not) based on that estimate. There are fellows called bookies who will take your bet. Based on their own estimate of what’s going to happen, they will give you what’s called “odds” on the event.
For example: Imagine two cricket captains about to toss a coin. Both of them, and all of us, know the probability of it landing heads is 1/2. If you find a bookie willing to take a bet on this, it’s likely he’ll give you odds of 1:1; meaning, for every rupee you bet, you’ll get a rupee back if the coin does in fact land heads. A pretty stupid bet to make, you’ll agree. Because if you keep betting, you’ll lose your rupee half the time—when the coin lands tails. And when it lands heads, you simply get your rupee back.
But consider tossing a dice instead. The probability of a “1” is 1/6, and that opens up more apparently interesting betting possibilities. A bookie will likely offer odds of 5:1 on a “1”; that is, for every rupee you bet, you’ll get back five if the dice shows “1”. (If it shows anything else, you lose your rupee.) Sounds exciting, this chance to quintuple (wow!) your money? Would you take these odds and place a bet like this?
Yet here’s the thing, and this is why I used the word “apparently” above. Please don’t stop breathing at the mention of quintupling your money. For the mathematics says this is actually just as stupid a bet to make as with the coin. Again, if you keep betting, you’ll lose your rupee five out of every six times. (Put it another way: five of every six bettors who place such a bet will lose their money.) Only once—that sixth time—will you get your five-rupee windfall.
The reason bookies might offer such odds—1:1 for the coin, 5:1 for the dice—is that they know their probabilities as well as you do, and naturally they don’t want to lose money. In fact, they will likely tweak the odds they offer just enough so they actually make money. That is, after all, why they do what they do.
So if you find a bookie offering quite different odds than you expect, it's likely he knows something you don't. Consider how that might pan out. Let's say the coin the captains use is actually a fake—it has tails on both sides. But let's say only our devious bookie knows this. He says to you the avid bettor: "Ten times your money back if it comes up heads!" You think: "Wow! There's an attractive proposition!" and you gamble Rs.1,000, for you've estimated that there's a 50-50 chance you're going to waltz home with Rs.10,000.
Then you lose, as—face it—you were always likely to do. Bookie laughs all the way to the bank with your Rs.1,000.
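In numbers, the double-tailed coin looks like this - the gap between the bet as the bettor prices it and as the bookie knows it to be:

```python
odds = 10          # "ten times your money back"
stake = 1000       # the Rs.1,000 wagered above
assumed_p = 0.5    # what the bettor believes about a fair coin
true_p = 0.0       # a double-tailed coin can never land heads

ev_assumed = assumed_p * odds * stake - (1 - assumed_p) * stake
ev_true = true_p * odds * stake - (1 - true_p) * stake
print(ev_assumed, ev_true)  # 4500.0 -1000.0
```

The bettor thinks he holds a bet worth Rs.4,500 on average; in reality it is a guaranteed loss of the stake.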
All of which is essentially how spot-fixing must work.
So now imagine you are a fervent cricket-watcher. (Which I’m willing to bet you are, unless you’re Lady Gaga.) From years following the game, you know that bowler J bowls a no-ball about once in every six-ball over. Along comes bookie W to whisper in your ear: “Psst! Hundred times your money back if J bowls exactly one no-ball in his first over in the Siliguri Master Chefs game!” Your eyes widen and you fork out the Rs.10,000 you didn’t win when he offered you the coin bet, starry visions of a million-rupee payoff whirling through your head. Hundreds of other cricket fanatics like you do the same. (Rather silly cricket fanatics, but never mind.)
What you don’t know, of course, is that bookie W has instructed bowler J to bowl not just one, but two no-balls in that first over. For doing so, J will get a slice of all the money W has collected in bets.
So J bowls his two no-balls at the Master Chefs. You lose. Bookie W and bowler J laugh all the way to the bank. Simple.
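The betting arithmetic above can be made concrete. Here is a minimal sketch of a bettor's expected profit per bet; the function name and the stake figures are mine, but the odds and probabilities are the ones from the examples (a fair coin at 1:1, a fair die at 5:1, and the two-tailed coin offered at 10:1).

```python
def expected_value(p_win, payout_ratio, stake=1.0):
    """Bettor's expected profit: a win pays payout_ratio * stake, a loss forfeits the stake."""
    return p_win * payout_ratio * stake - (1 - p_win) * stake

# Fair coin at 1:1 and fair die at 5:1 are both break-even in the long run.
coin = expected_value(1 / 2, 1)   # 0.0
die = expected_value(1 / 6, 5)    # ~0.0

# The two-tailed coin at 10:1: the bettor thinks p = 0.5, but p is really 0.
naive = expected_value(0.5, 10, 1000)   # looks like +4500 per bet
actual = expected_value(0.0, 10, 1000)  # really -1000: you always lose your stake
```

A real bookie, of course, shades these odds a little in his own favour even without a rigged coin, which is why he stays in business.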

Wednesday, 27 March 2013

Cricket, Physics and the Laws of Probability



In the recently concluded Test match between New Zealand and England, an event occurred which, in this writer's opinion, once again calls into question the predictability of an lbw decision as a method of dismissing a batsman, and especially the DRS system, which is touted as scientific fact. On the last ball of the 99th over in the England second innings the ball, to quote Andy Zaltzman in Cricinfo:

The ball ricocheted from Prior's flailing bat/arms/head, and plonked downwards, in accordance with the traditions of gravity, onto the timbers. It did not brush the stumps. It did not snick the stumps. It did not gently fondle the stumps. It hit the stumps. The bails, perhaps patriotically mindful of their origins in early cricket in England all those years ago, defied all the conventional principles of science by not falling off.

If the stumps and bails had behaved as cricketing precedent and Isaac Newton would have expected them to behave, England would have been seven wickets down with 43 overs left.

If the ball, having hit the stumps, fails to dislodge the bails, doesn't that introduce even more uncertainty into a DRS-based lbw decision, which its supporters claim to be irrefutable evidence? This incident suggests that in an lbw appeal the DRS should predict not only whether the ball, if not illegally impeded by the batsman, would have gone on to hit the stumps, but also whether it would have dislodged the bails.

Supporters of the DRS rely on the infallibility of scientific laws to promote their support for technology. Then, like true scientists, they should admit the weakness of their science whenever an anomaly appears. Assuming for a moment that these scientific laws are infallible, how do they explain the reprieve that Prior obtained? And shouldn't the DRS have been used to declare Prior out, since the ball had actually hit the stumps?

Hence I would like to make a suggestion which may unite the supporters and opponents of the DRS. I suggest that the lbw as a method of dismissing a batsman be struck from the laws of cricket. Instead, a run penalty should be imposed on the batsman every time the ball comes in contact with an 'illegal' part of his/her body. The DRS could be used to adjudicate on this decision. The penalty could start at ten runs and increase every time the batsman uses such illegitimate methods to stay at the crease.

I look forward to a debate.

Related article

Abolish the LBW - it has no place in the modern world

Friday, 10 August 2012

Predictions are hard, especially about the future


Last updated: August 10th, 2012

[Image: asteroid hitting Earth] Tomorrow's weather: changeable, with a 66 per cent chance of extinction events
Yesterday I read a startling-ish statistic. A Twitter account called UberFacts, which has around two and a half million followers, solemnly informed us that there is a 95 per cent chance that humans will be extinct in the next 9,000 years. Now, it's from Twitter, so it's probably nonsense. But it got me thinking. What does it even mean?
Obviously, it means that we have a one in 20 chance of surviving to the 2,280th Olympiad, held on RoboColony 46 in the balmy Europan summer of 11012AD. But how can they possibly know that? Have they perhaps got access to other universes and a time machine, and gone forward to a thousand 11012ADs in a thousand alternate realities, and noted with sadness that only 50 such timelines contained humans?
One imagines not, or someone would have said. What they're doing is offering a prediction: if we were to run the universe 20 times, we'd probably survive once. So how might they arrive at that figure? More generally, what does it mean when sports commentators say "Sunderland have a 65 per cent chance of beating Swansea", or financial journalists say "There's an 80 per cent chance that Greece will leave the euro by the start of 2013"?
I don't have any idea how UberFacts arrived at their 95 per cent figure, because they didn't give a source. Someone else suggested it came from the Stern Review into the economic effect of climate change: I had a look around, and Stern in fact assumed a 10 per cent chance of human extinction in the next century. If we extrapolate that to a 9,000-year timescale, that's 90 centuries: 0.9 (the likelihood of not going extinct per century) to the power 90 = 0.00008, or a mere 0.008 per cent chance of survival. UberFacts were being extremely optimistic.
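That extrapolation is just repeated multiplication, and it is easy to check. A minimal sketch, assuming (as the Stern figure implies) a constant 10 per cent chance of extinction per century, independently compounded over 90 centuries:

```python
p_survive_century = 0.9   # Stern's assumed per-century survival probability
centuries = 90            # 9,000 years

# Probability of surviving every one of the 90 centuries in a row.
p_survive = p_survive_century ** centuries
print(p_survive)  # about 0.00008, i.e. roughly a 0.008 per cent chance of survival
```

So on Stern's own numbers the chance of extinction over 9,000 years is about 99.99 per cent, which makes UberFacts' 95 per cent look, as the article says, extremely optimistic.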
But we've just pushed our question back a stage. We know how we got to our 95 per cent figure (sort of). But how did we get that 0.9 in the first place?
Presumably we assess the ways we could get killed, or kill ourselves. In the journal Risk Analysis, Jason Matheny put forward a few possibilities, including nuclear war, asteroid strikes, rogue microbes, climate change, or a physics experiment that goes wrong, "creating a 'true vacuum' or strangelets that destroy the planet".
Some of them are semi-predictable. It's not impossible to put a figure on the possibility of a fatal impact with a stellar object. If you know how many large objects are wandering around the relevant bits of solar system, you could put an estimate on the likelihood of one hitting us: a Nasa scientist put it at roughly one impact per 100 million years. You can build a predictive physical model, with known uncertainties, and come up with a figure of probability which is not meaningless. Climate models are an attempt to do something similar, but the sheer number of variables involved means that even the IPCC are unwilling to go past statements such as it is "likely" that, for instance, sea levels will rise, or "very likely" that temperatures will continue to go up: the odds of "total extinction" are not given. And as for the odds of nuclear war or accidentally creating a black hole, there's no model that can even pretend to be helpful.
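Turning a long-run rate like "roughly one impact per 100 million years" into a per-century probability is itself a modelling step. A common (and here assumed) choice is to treat impacts as a Poisson process, in which case:

```python
import math

rate_per_year = 1 / 100_000_000  # the NASA figure quoted: one big impact per 100 million years
years = 100                      # one century

# Poisson model: P(at least one impact in t years) = 1 - exp(-rate * t)
p_impact_century = 1 - math.exp(-rate_per_year * years)
print(p_impact_century)  # about one in a million per century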
That 0.9-chance-of-survival-per-century is not a mathematically arrived-at probability, but a guess (and, as a commenter has pointed out, rather a high one, since we've survived 500 centuries or so without trouble so far). You can call it something more flattering – a working assumption; an estimate – but it's a guess. And, obviously, the same applies to all the others: financial journalists aren't laboriously working through all the possible universes in which Greece does and doesn't leave the euro; sports commentators haven't created mathematical models of the Sunderland and Swansea players and run them through a simulation, with carefully defined error bars. They're expressing a guess, based on their own knowledge, and giving it a percentage figure to put a label on how confident they feel.
Is that the death knell for all percentage-expressed figures? No: there is a way of finding out whether a pundit's prediction is meaningful. You can look at an expert's predictions over time, and see whether her "70 per cent likelies" come out 70 per cent of the time, and whether her "definites" come up every time. Luckily, someone's done this: Philip Tetlock, a researcher at the University of California's business school. He has dedicated 25 years to doing precisely that, examining 284 expert commentators in dozens of fields, assessing their predictions over decades. You can read about Tetlock's work in Dan Gardner's fantastic book Future Babble: Why Expert Predictions Fail and Why We Believe Them Anyway, but here's the top line: the experts he looked at would, on average, have been beaten by (in Tetlock's words) "a dart-throwing chimpanzee" – i.e. they were worse than random guesses.
What he also found, however, was that not all experts were the same. The ones who did worse than the imaginary chimp tended to be the Big Idea thinkers: the ones who have a clear theory about how the world works, and apply it to all situations. Those thinkers tended, paradoxically, to be the most confident. The ones who did (slightly) better than average were the ones who had no clear template, no grand theoretical vision; who accepted the world as complex and uncertain and doubted the ability of anyone, including themselves, to predict it. It's a strange thing to learn: the people who are most certain of the rightness of their predictions are very likely to be wrong; the people who are most likely to be right are the ones who will tell you they probably aren't. This applied equally whether someone was Right-wing or Left-wing, a journalist or an academic, a doomsayer or an optimist. When it comes to predicting the future, the best lack all conviction, while the worst are full of passionate intensity.
So what does this tell us about UberFacts's supremely confident but infuriatingly unsourced "95 per cent" claim? Essentially: it's nonsense. It might be possible to make a reasonable guess at the chance of extinction per century, if it's done cautiously. But extending it to 9,000 years is simply taking the (considerable) likelihood that it's wrong and raising it to the power 90. It is a guess, and a meaningless one: in 11012AD there will either be humans, or there won't, and our spacefaring descendants won't know whether they've been lucky or not any more than we do.
But there is a wider lesson to learn than that you probably shouldn't trust huge sweeping predictions on Twitter. It's that you shouldn't trust sweeping predictions at all. Anyone who says that the euro is definitely going to collapse, or that climate change is definitely going to cause wars, or that humanity is 95 per cent doomed, is no doubt utterly sure of themselves, but is also, very probably, guessing.

Wednesday, 18 January 2012

Ian Stewart's top 10 popular mathematics books

Ian Stewart is an Emeritus Professor of Mathematics at Warwick University and a Fellow of the Royal Society. He has written over 80 books, mainly popular mathematics, and has won three gold medals for his work on the public understanding of science. In collaboration with Terry Pratchett and Jack Cohen he wrote the Science of Discworld series. His new book, 17 Equations That Changed the World, is published by Profile.
"'Popular mathematics' may sound like a contradiction in terms. That's what makes the genre so important: we have to change that perception. Mathematics is the Cinderella science: undervalued, underestimated, and misunderstood. Yet it has been one of the main driving forces behind human society for at least three millennia, it powers all of today's technology, and it underpins almost every aspect of our daily lives.
"It's not really surprising that few outside the subject appreciate it, though. School mathematics is so focused on getting the right answer and passing the exam that there is seldom an opportunity to find out what it's all for. The hard core of real mathematics is extremely difficult, and it takes six or seven years to train a research mathematician after they leave school. Popular mathematics provides an entry route for non-specialists. It allows them to appreciate where mathematics came from, who created it, what it's good for, and where it's going, without getting tangled up in the technicalities. It's like listening to music instead of composing it.
"There are many ways to make real mathematics accessible. Its history reveals the subject as a human activity and gives a feel for the broad flow of ideas over the centuries. Biographies of great mathematicians tell us what it's like to work at the frontiers of human knowledge. The great problems, the ones that hit the news media when they are finally solved after centuries of effort, are always fascinating. So are the unsolved ones and the latest hot research areas. The myriad applications of mathematics, from medicine to the iPad, are an almost inexhaustible source of inspiration."

1. The Man Who Knew Infinity by Robert Kanigel


The self-taught Indian genius Srinivasa Ramanujan had a flair for strange and beautiful formulas, so unusual that mathematicians are still coming to grips with their true meaning. He was born into a poor Brahmin family in 1887 and was pursuing original research in his teens. In 1912, he was brought to work at Cambridge. He died of malnutrition and other unknown causes in 1920, leaving a rich legacy that is still not fully understood. There has never been another mathematical life story like it: absolutely riveting.

2. Gödel, Escher, Bach by Douglas Hofstadter


One of the great cult books, a very original take on the logical paradoxes associated with self-reference, such as "this statement is false". Hofstadter combines the mathematical logic of Kurt Gödel, who proved that some questions in arithmetic can never be answered, with the etchings of Maurits Escher and the music of Bach. Frequent dramatic dialogues between Lewis Carroll's characters Achilles and the Tortoise motivate key topics in a highly original manner, along with their friend Crab who invents the tortoise-chomping record player. DNA and computers get extensive treatment too.

3. The Colossal Book of Mathematics by Martin Gardner


In his long-running Mathematical Games column in Scientific American, Gardner – a journalist with no mathematical training – created the field of recreational mathematics. On the surface his columns were about puzzles and games, but they all concealed mathematical principles, some simple, some surprisingly deep. He combined a playful and clear approach to his subject with a well-developed taste for what was mathematically significant. The book consists of numerous selections from his columns, classified according to the mathematical area involved. Learn how to make a hexaflexagon and why playing Brussels sprouts is a waste of time.

4. Euclid in the Rainforest by Joseph Mazur


A thoroughly readable account of the meaning of truth in mathematics, presented through a series of quirky adventures in the Greek Islands, the jungles around the Orinoco River, and elsewhere. Examines tricky concepts like infinity, topology, and probability through tall tales and anecdotes. Three different kinds of truth are examined: formal classical logic, the role of the infinite, and inference by plausible reasoning. The story of the student who believed nothing except his calculator is an object lesson for everyone who thinks mathematics is just 'sums'.

5. Four Colours Suffice by Robin Wilson


In 1852 Francis Guthrie, a young South African mathematician, was attempting to colour the counties in a map of England. Guthrie discovered that he needed only four different colours to ensure that any two adjacent counties had different colours. After some experimentation he convinced himself that the same goes for any map whatsoever. This is the remarkable story of how mathematicians eventually proved he was right, but only with the aid of computers, bringing into question the meaning of "proof". It contains enough detail to be satisfying, but remains accessible and informative throughout.

6. What is Mathematics Really? by Reuben Hersh


The classic text What is Mathematics? by Richard Courant and Herbert Robbins focused on the subject's nuts and bolts. It answered its title question by example. Hersh takes a more philosophical view, based on his experience as a professional mathematician. The common working philosophy of most mathematicians is a kind of vague Platonism: mathematical concepts have some sort of independent existence in some ideal world. Although this is what it feels like to insiders, Hersh argues that mathematics is a collective human construct – like money or the Supreme Court. However, it is a construct constrained by its own internal logic; it's not arbitrary. You choose the concepts that interest you, but you don't get to choose how they behave.

7. Magical Mathematics by Persi Diaconis and Ron Graham


Both authors are top-rank mathematicians with years of stage performances behind them, and their speciality is mathematical magic. They show how mathematics relates to juggling and reveal the secrets behind some amazing card tricks. Here's one. The magician mails a pack of cards to anyone, asking them to shuffle it and choose a card. Then he shuffles the cards again, and mails half of them to the magician—not saying whether the chosen card is included. By return mail, the magician names the selected card. No trickery: it all depends on the mathematics of shuffles.

8. Games of Life by Karl Sigmund


Biologists' understanding of many vital features of the living world, such as sex and survival, depends on the theory of evolution. One of the basic theoretical tools here is the mathematics of game theory, in which several players compete by choosing from a list of possible strategies. The children's game of rock-paper-scissors is a good example. The book illuminates such questions as how genes spread through a population and the evolution of cooperation, by finding the best strategies for games such as cat and mouse, the battle of the sexes, and the prisoner's dilemma. On the borderline between popular science and an academic text, but eminently readable without specialist knowledge.

9. Mathenauts: Tales of Mathematical Wonder edited by Rudy Rucker


A collection of 23 science fiction short stories, each of which centres on mathematics. Two are by Martin Gardner, and many of the great writers of SF are represented: Isaac Asimov, Gregory Benford, Larry Niven, Frederik Pohl. The high point is Norman Kagan's utterly hilarious "The Mathenauts", in which only mathematicians can travel through space, because space is mathematical – and, conversely, anything mathematical can be reality. An isomorphomechanism is essential equipment. Between them, these tales cover most of the undergraduate mathematics syllabus, though not in examinable form.

10. The Mathematical Principles of Natural Philosophy by Isaac Newton


There ought to be a great classic in this top 10, and there is none greater. I've put it last because it's not popularisation in the strict sense. However, it slips in because it communicated to the world one of the very greatest ideas of all time: Nature has laws, and they can be expressed in the language of mathematics. Using nothing more complicated than Euclid's geometry, Newton developed his laws of motion and gravity, applying them to the motion of the planets and strange wobbles in the position of the Moon. He famously said that he "stood on the shoulders of giants", and so he did, but this book set the scientific world alight. As John Maynard Keynes wrote, Newton was a transitional figure of immense stature: "the last of the magicians … the last wonderchild to whom the Magi could do sincere and appropriate homage." No mathematical book has had more impact.

Monday, 2 June 2008

Nassim Nicholas Taleb: the prophet of boom and doom

 


A noisy cafe in Newport Beach, California. Nassim Nicholas Taleb is eating three successive salads, carefully picking out anything with a high carbohydrate content.

He is telling me how to live. "The only way you can say 'F*** you' to fate is by saying it's not going to affect how I live. So if somebody puts you to death, make sure you shave."

After lunch he takes me to Circuit City to buy two Olympus voice recorders, one for me and one for him. The one for him is to record his lectures – he charges about $60,000 for speaking engagements, so the $100 recorder is probably worth it. The one for me is because the day before he had drowned my Olympus with Earl Grey tea and, as he keeps saying, "I owe you." It didn't matter because I always use two recorders and, anyway, I had bought a replacement the next morning.

But it's important and it's not, strictly speaking, a cost to him. Every year he puts a few thousand dollars aside for contingencies – parking tickets, tea spills – and at the end of the year he gives what's left to charity. The money is gone from day one, so unexpected losses cause no pain. Now I have three Olympus recorders.

He spilt the tea – bear with me; this is important – while grabbing at his BlackBerry. He was agitated, reading every incoming e-mail, because the Indian consulate in New York had held on to his passport and he needed it to fly to Bermuda. People were being mobilised in New York and, for some reason, France, to get the passport.

The important thing is this: the lost passport and the spilt tea were black swans, bad birds that are always lurking, just out of sight, to catch you unawares and wreck your plans. Sometimes, however, they are good birds. The recorders cost $20 less than the marked price owing to a labelling screw-up at Circuit City. Stuff happens. The world is random, intrinsically unknowable. "You will never," he says, "be able to control randomness."

To explain: black swans were discovered in Australia. Before that, any reasonable person could assume the all-swans-are-white theory was unassailable. But the sight of just one black swan detonated that theory. Every theory we have about the human world and about the future is vulnerable to the black swan, the unexpected event. We sail in fragile vessels across a raging sea of uncertainty.

"The world we live in is vastly different from the world we think we live in."

Last May, Taleb published The Black Swan: The Impact of the Highly Improbable. It said, among many other things, that most economists, and almost all bankers, are subhuman and very, very dangerous. They live in a fantasy world in which the future can be controlled by sophisticated mathematical models and elaborate risk-management systems. Bankers and economists scorned and raged at Taleb. He didn't understand, they said. A few months later, the full global implications of the sub-prime-driven credit crunch became clear. The world banking system still teeters on the edge of meltdown. Taleb had been vindicated. "It was my greatest vindication. But to me that wasn't a black swan; it was a white swan. I knew it would happen and I said so. It was a black swan to Ben Bernanke [the chairman of the Federal Reserve]. I wouldn't use him to drive my car. These guys are dangerous. They're not qualified in their own field."

In December he lectured bankers at Société Générale, France's second biggest bank. He told them they were sitting on a mountain of risks – a menagerie of black swans. They didn't believe him. Six weeks later the rogue trader and black swan Jérôme Kerviel landed them with $7.2 billion of losses.

As a result, Taleb is now the hottest thinker in the world. He has a $4m advance on his next book. He gives about 30 presentations a year to bankers, economists, traders, even to Nasa, the US Fire Administration and the Department of Homeland Security. But he doesn't tell them what to do – he doesn't know. He just tells them how the world is. "I'm not a guru. I'm just describing a problem and saying, 'You deal with it.'"

Getting to know Taleb is a highly immersive experience. Everything matters. "Why are you not dressed Californian?" he asks at our first meeting. Everything in Newport Beach is very Californian. I'm wearing a jacket: it's cold. He's wearing shorts and a polo shirt. Clothes matter; they send signals. He warns against trusting anybody who wears a tie – "You have to ask, 'Why is he wearing a tie?'"

He has rules. In California he hires bikes, not cars. He doesn't usually carry his BlackBerry because he hates distraction and he really hates phone charges. But he does carry an Apple laptop everywhere and constantly uses it to illustrate complex points and seek out references. He says he answers every e-mail. He is sent thousands. He reads for 60 hours a week, but almost never a newspaper, and he never watches television.

"If something is going on, I hear about it. I like to talk to people, I socialise. Television is a waste of time. Human contact is what matters."

But the biggest rule of all is his eccentric and punishing diet and exercise programme. He's been on it for three months and he's lost 20lb. He's following the thinking of Arthur De Vany, an economist – of the acceptable type – turned fitness guru. The theory is that we eat and exercise according to our evolved natures. Early man did not eat carbs, so they're out. He did not exercise regularly and he did not suffer long-term stress by having an annoying boss. Exercise must be irregular and ferocious – Taleb often does four hours in the gym or 360 press-ups and then nothing for 10 days. Jogging is useless; sprinting is good. He likes to knacker himself completely before a long flight. Stress should also be irregular and ferocious – early men did not have bad bosses, but they did occasionally run into lions.

He's always hungry. At both lunches he orders three salads, which he makes me share. Our conversation swings from high philosophy and low economics back to dietary matters like mangoes – bad – and apples – good as long as they are of an old variety. New ones are bred for sugar content. His regime works. He looks great – springy and fit. He shows me an old identity card. He is fat and middle-aged in the photo. He looks 10 years younger than that. "Look at me! That photo was taken seven years ago. No carbs!"

This is risk management – facing up to those aspects of randomness about which something can be done. Some years ago he narrowly survived throat cancer. The change in his voice was at first misdiagnosed as damaged vocal cords from his time on the trading floor. It can recur. Also he has a high familial risk of diabetes. He is convinced the diet of civilisation – full of carbs and sugar – is the problem. The grand doctors who once announced that complex carbohydrates are good for you are, to him, criminals responsible for thousands of deaths.

So, you are wondering, who is this guy? He was born in 1960 in Lebanon, though he casts doubt on both these "facts". The year is "close enough" – he doesn't like to give out his birth date because of identity theft and he doesn't believe in national character. He has, however, a regional identity; he calls himself a Levantine, a member of the indecipherably complex eastern Mediterranean civilisation. "My body and soul are Mediterranean."

Both maternal and paternal antecedents are grand, privileged and politically prominent. They are also Christian – Greek Orthodox. Startlingly, this great sceptic, this non-guru who believes in nothing, is still a practising Christian. He regards with some contempt the militant atheism movement led by Richard Dawkins.

"Scientists don't know what they are talking about when they talk about religion. Religion has nothing to do with belief, and I don't believe it has any negative impact on people's lives outside of intolerance. Why do I go to church? It's like asking, why did you marry that woman? You make up reasons, but it's probably just smell. I love the smell of candles. It's an aesthetic thing."

Take away religion, he says, and people start believing in nationalism, which has killed far more people. Religion is also a good way of handling uncertainty. It lowers blood pressure. He's convinced that religious people take fewer financial risks.

He was educated at a French school. Three traditions formed him: Greek Orthodox, French Catholic and Arab. They also taught him to disbelieve conventional wisdom. Each tradition had a different history of the crusades, utterly different. This led him to disbelieve historians almost as much as he does bankers.

But, crucially, he also learnt from a very early age that grown-ups have a dodgy grasp of probability. It was in the midst of the Lebanese civil war and, hiding from the guns and bombs, he heard adults repeatedly say the war would soon be over. It lasted 15 years. He became obsessed with probability and, after a degree in management from the Wharton business school at the University of Pennsylvania, he focused on probability for his PhD at the University of Paris.

For the non-mathematician, probability is an indecipherably complex field. But Taleb makes it easy by proving all the mathematics wrong. Let me introduce you to Brooklyn-born Fat Tony and academically inclined Dr John, two of Taleb's creations. You toss a coin 40 times and it comes up heads every time. What is the chance of it coming up heads the 41st time? Dr John gives the answer drummed into the heads of every statistics student: 50/50. Fat Tony shakes his head and says the chances are no more than 1%. "You are either full of crap," he says, "or a pure sucker to buy that 50% business. The coin gotta be loaded."

The chances of a fair coin coming up heads 41 times in a row are so small as to be effectively impossible in this universe. It is far, far more likely that somebody is cheating. Fat Tony wins. Dr John is the sucker. And the one thing that drives Taleb more than anything else is the determination not to be a sucker. Dr John is the economist or banker who thinks he can manage risk through mathematics. Fat Tony relies only on what happens in the real world.
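Fat Tony's instinct has a simple arithmetic behind it. A minimal sketch: the run of heads under a fair coin, plus a Bayesian gloss that is my own illustration (the one-in-a-million prior on a loaded coin is an assumed figure, not Taleb's):

```python
from fractions import Fraction

# Probability a fair coin lands heads 40 times running.
p_fair_run = Fraction(1, 2) ** 40
print(float(p_fair_run))  # under one in a trillion

# Even a tiny prior suspicion that the coin is two-headed swamps that.
prior_loaded = Fraction(1, 1_000_000)  # illustrative prior
likelihood_loaded = Fraction(1)        # a two-headed coin always shows heads
posterior_loaded = (prior_loaded * likelihood_loaded) / (
    prior_loaded * likelihood_loaded + (1 - prior_loaded) * p_fair_run
)
print(float(posterior_loaded))  # overwhelmingly likely the coin is loaded
```

However sceptical you are to start with, 40 heads in a row makes "the coin gotta be loaded" the only sensible conclusion, which is Fat Tony's point.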

In 1985, Taleb discovered how he could play Fat Tony in the markets. France, Germany, Japan, Britain and America signed an agreement to push down the value of the dollar. Taleb was working as an options trader at a French bank. He held options that had cost him almost nothing and that bet on the dollar's decline. Suddenly they were worth a fortune. He became obsessed with buying "out of the money" options. He had realised that when markets rise they tend to rise by small amounts, but when they fall – usually hit by a black swan – they fall a long way.

The big payoff came on October 19, 1987 – Black Monday. It was the biggest market drop in modern history. "That had vastly more influence on my thought than any other event in history."

It was a huge black swan – nobody had expected it, not even Taleb. But the point was, he was ready. He was sitting on a pile of out-of-the-money eurodollar options. So, while others were considering suicide, Taleb was sitting on profits of $35m to $40m. He had what he calls his "f***-off money", money that would allow him to walk away from any job and support him in his long-term desire to be a writer and philosopher.

He stayed on Wall Street until he got bored and moved to Chicago to become a trader in the pit, the open-outcry market run by the world's most sceptical people, all Fat Tonys. This he understood.

His first book, Dynamic Hedging: Managing Vanilla and Exotic Options, came out in 1997. He was moving away from being a pure trader, or "quant" – a quantitative analyst who applies sophisticated maths to investments – to being the philosopher he wanted to be. He was using the vast data pool provided by the markets and combining it with a sophisticated grasp of epistemology, the study of how and what we know, to form a synthesis unique in the modern world.

In the midst of this came his purest vindication prior to sub-prime. Long-Term Capital Management was a hedge fund set up in 1994 by, among others, Myron Scholes and Robert C Merton, joint winners of the 1997 Nobel prize in economics. It had the grandest of all possible credentials and used the most sophisticated academic theories of portfolio management. It went bust in 1998 and, because it had positions worth $1.25 trillion outstanding, it almost took the financial system down with it. Modern portfolio theory had not accounted for the black swan, the Russian financial crisis of that year. Taleb regards the Nobel prize in economics as a disgrace, a laughable endorsement of the worst kind of Dr John economics. Fat Tony should get the Nobel, but he's too smart. "People say to me, 'If economists are so incompetent, why do people listen to them?' I say, 'They don't listen, they're just teaching birds how to fly.' "

Taleb created his own hedge fund, Empirica, designed to help other hedge funds hedge their risks by using a refined form of his options wins – running small losses in quiet times and winning big in turbulent markets. It did okay but, after a good first year, performed poorly when the market went through a quiet spell. He's still involved in the markets, but mainly as a hobby – "like chess".
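The mechanics the article describes – a steady bleed in calm markets punctuated by rare, outsized payoffs – can be sketched in a few lines. All the numbers below (premium, crash odds, crash payoff) are illustrative assumptions, not Empirica's actual book:

```python
import random

def tail_hedge_pnl(months, premium=1.0, crash_payoff=150.0, crash_prob=0.01):
    """P&L of a book that buys far out-of-the-money options every month."""
    pnl = 0.0
    for _ in range(months):
        pnl -= premium                  # small, steady loss while markets are calm
        if random.random() < crash_prob:
            pnl += crash_payoff         # occasional outsized payoff in a crash
    return pnl

random.seed(42)
decades = [tail_hedge_pnl(120) for _ in range(10_000)]
losing = sum(p < 0 for p in decades) / len(decades)
print(f"share of simulated decades that lose money: {losing:.0%}")
print(f"average decade P&L: {sum(decades) / len(decades):+.1f}")
```

The point of the sketch is the shape, not the numbers: most individual runs drip money away, yet the strategy as a whole can still come out ahead because the rare wins are so large – which is exactly why it "performed poorly when the market went through a quiet spell".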

Finally, with two books – Fooled by Randomness: The Hidden Role of Chance in the Markets and in Life, and The Black Swan – and a stream of academic papers, he turned himself into one of the giants of modern thought. They're still trying to tear him down, of course; last year The American Statistician journal devoted a whole issue to attacking The Black Swan. But I wouldn't bother. A bad and rather ignorant review in The New York Times resulted in such a savage rebuttal from Taleb on his website, www.fooledbyrandomness.com, that reviewers across the US pulled out in fear of his wrath. He knows his stuff and he keeps being right.

And what he knows does not sound good. The sub-prime crisis is not over and could get worse. Even if the US economy survives this one, it will remain a mountain of risk and delusion. "America is the greatest financial risk you can think of."

Its primary problem is that both banks and government are staffed by academic economists running their deluded models. Britain and Europe have better prospects because our economists tend to be more pragmatic, adapting to conditions rather than following models. But still we are dependent on American folly.

The central point is that we have created a world we don't understand. There's a place he calls Mediocristan. This was where early humans lived. Most events happened within a narrow range of probabilities – within the bell-curve distribution still taught to statistics students. But we don't live there any more. We live in Extremistan, where black swans proliferate, winners tend to take all and the rest get nothing – there's Bill Gates, Steve Jobs and a lot of software writers living in a garage, there's Domingo and a thousand opera singers working in Starbucks. Our systems are complex but over-efficient. They have no redundancy, so a black swan strikes everybody at once. The banking system is the worst of all.
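The Mediocristan/Extremistan split can be made concrete with a small simulation. The choice of distributions and parameters below is an illustrative assumption – a Gaussian stands in for a thin-tailed quantity like height, a Pareto for a fat-tailed one like wealth:

```python
import random

random.seed(0)
N = 100_000

# Mediocristan: a thin-tailed (Gaussian) quantity, e.g. human height in cm.
heights = [random.gauss(170, 10) for _ in range(N)]

# Extremistan: a fat-tailed (Pareto) quantity, e.g. wealth or book sales.
wealth = [random.paretovariate(1.1) for _ in range(N)]

def max_share(xs):
    """Fraction of the total contributed by the single largest observation."""
    return max(xs) / sum(xs)

print(f"tallest person's share of all height:  {max_share(heights):.5%}")
print(f"richest person's share of all wealth:  {max_share(wealth):.1%}")
```

In Mediocristan no single observation matters – the tallest person is a rounding error in the total. In Extremistan one observation can dominate the whole sample, which is why winners take all and averages mislead.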

"Complex systems don't allow for slack and everybody protects that system. The banking system doesn't have that slack. In a normal ecology, banks go bankrupt every day. But in a complex system there is a tendency to cluster around powerful units. Every bank becomes the same bank so they can all go bust together."

He points out, chillingly, that banks make money from two sources. They take interest on our current accounts and charge us for services. This is easy, safe money. But they also take risks, big risks, with the whole panoply of loans, mortgages, derivatives and any other weird scam they can dream up. "Banks have never made a penny out of this, not a penny. They do well for a while and then lose it all in a big crash."

On top of that, Taleb has shown that increased economic concentration has raised our vulnerability to natural disasters. The Kobe earthquake of 1995 cost a lot more than the Tokyo earthquake of 1923. And there are countless other ways in which we have built a world ruled by black swans – some good but mostly bad. So what do we do as individuals and the world? In the case of the world, Taleb doesn't know. He doesn't make predictions, he insults people paid to do so by telling them to get another job. All forecasts about the oil price, for example, are always wrong, though people keep doing it. But he knows how the world will end.

"Governments and policy makers don't understand the world in which we live, so if somebody is going to destroy the world, it is the Bank of England saving Northern Rock. The biggest danger to human society comes from civil servants in an environment like this. In their attempt to control the ecology, they don't understand that the link between action and consequences can be more vicious. Civil servants say they need to make forecasts, but it's totally irresponsible to make people rely on you without telling them you're incompetent."

Bear Stearns – the US Northern Rock – was another vindication for Taleb. He's always said that whatever deal you do, you always end up dealing with J P Morgan. It was JPM that picked up Bear at a bargain-basement price. Banks should be more like New York restaurants. They come and go but the restaurant business as a whole survives and thrives and the food gets better. Banks fail but bankers still get millions in bonuses for applying their useless models. Restaurants tinker, they work by trial and error and watch real results in the real world. Taleb believes in tinkering – it was to be the title of his next book. Trial and error will save us from ourselves because they capture benign black swans. Look at the three big inventions of our time: lasers, computers and the internet. They were all produced by tinkering and none of them ended up doing what their inventors intended them to do. All were black swans. The big hope for the world is that, as we tinker, we have a capacity for choosing the best outcomes.

"We have the ability to identify our mistakes eventually better than average; that's what saves us." We choose the iPod over the Walkman. Medicine improved exponentially when the tinkering barber surgeons took over from the high theorists. They just went with what worked, irrespective of why it worked. Our sense of the good tinker is not infallible, but it might be just enough to turn away from the apocalypse that now threatens Extremistan.

He also wants to see diplomats dying of cirrhosis of the liver. It means they're talking and drinking and not going to war. Parties are among the great good things in Taleb's world.

And you and me? Well, the good investment strategy is to put 90% of your money in the safest possible government securities and the remaining 10% in a large number of high-risk ventures. This insulates you from bad black swans and exposes you to the possibility of good ones. Your smallest investment could go "convex" – explode – and make you rich. High-tech companies are the best. The downside risk is low if you get in at the start and the upside very high. Banks are the worst – all the risk is downside. Don't be tempted to play the stock market – "If people knew the risks they'd never invest."
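The 90/10 barbell can be made concrete with toy numbers. The capital, the safe rate, and the venture outcomes below are illustrative assumptions, not investment advice:

```python
def barbell_outcome(capital, safe_rate, venture_multipliers):
    """Portfolio value after one period: 90% in a near-riskless asset,
    10% split equally across long-shot ventures. A multiplier of 0 is a
    total loss; 50 is a 50x 'convex' win."""
    safe = 0.9 * capital * (1 + safe_rate)
    slice_size = 0.1 * capital / len(venture_multipliers)
    risky = sum(slice_size * m for m in venture_multipliers)
    return safe + risky

# Worst case: all ten ventures go to zero -- the loss is capped near 10%.
print(f"{barbell_outcome(100_000, 0.03, [0.0] * 10):,.0f}")          # 92,700

# One black swan: nine failures and a single 50x winner dominate.
print(f"{barbell_outcome(100_000, 0.03, [0.0] * 9 + [50.0]):,.0f}")  # 142,700
```

This is the asymmetry the strategy is built on: the bad black swans can only reach the risky 10%, while a single good one can more than repay every failure.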

There's much more to Taleb's view of the world than that. He is reluctant to talk about matters of human nature, ethics or any of the traditional concerns of philosophy because he says he hasn't read enough. But, when pressed, he comes alive.

"You have to worry about things you can do something about. I worry about people not being there and I want to make them aware." We should be mistrustful of knowledge. It is bad for us. Give a bookie 10 pieces of information about a race and he'll pick his horses. Give him 50 and his picks will be no better, but he will, fatally, be more confident.

We should be ecologically conservative – global warming may or may not be happening but why pollute the planet? – and probabilistically conservative. The latter, however, has its limits. Nobody, not even Taleb, can live the sceptical life all the time – "It's an art, it's hard work." So he doesn't worry about crossing the road and doesn't lock his front door – "I can't start getting paranoid about that stuff." His wife locks it, however.

He believes in aristocratic – though not, he insists, elitist – values: elegance of manner and mind, grace under pressure, which is why you must shave before being executed. He believes in the Mediterranean way of talking and listening. One piece of advice he gives everybody is: go to lots of parties and listen, you might learn something by exposing yourself to black swans.

I ask him what he thinks are the primary human virtues, and eventually he comes up with magnanimity – punish your enemies but don't bear grudges; compassion – fairness always trumps efficiency; courage – very few people have this; and tenacity – tinker until it works for you.

"Let's be human the way we are human. Homo sum – I am a man. Don't accept any Olympian view of man and you will do better in society."

Above all, accept randomness. Accept that the world is opaque, majestically unknown and unknowable. From its depths emerge the black swans that can destroy us or make us free. Right now they're killing us, so remember to shave. But we can tinker our way out of it. It's what we do best. Listen to Taleb, an ancient figure, one of the great Mediterranean minds, when he says: "You find peace by coming to terms with what you don't know." Oh, and watch those carbs.

Taleb's top life tips

1 Scepticism is effortful and costly. It is better to be sceptical about matters of large consequences, and be imperfect, foolish and human in the small and the aesthetic.

2 Go to parties. You can't even start to know what you may find on the envelope of serendipity. If you suffer from agoraphobia, send colleagues.

3 It's not a good idea to take a forecast from someone wearing a tie. If possible, tease people who take themselves and their knowledge too seriously.

4 Wear your best for your execution and stand dignified. Your last recourse against randomness is how you act — if you can't control outcomes, you can control the elegance of your behaviour. You will always have the last word.

5 Don't disturb complicated systems that have been around for a very long time. We don't understand their logic. Don't pollute the planet. Leave it the way we found it, regardless of scientific 'evidence'.

6 Learn to fail with pride — and do so fast and cleanly. Maximise trial and error — by mastering the error part.

7 Avoid losers. If you hear someone use the words 'impossible', 'never', 'too difficult' too often, drop him or her from your social network. Never take 'no' for an answer (conversely, take most 'yeses' as 'most probably').

8 Don't read newspapers for the news (just for the gossip and, of course, profiles of authors). The best filter to know if the news matters is if you hear it in cafes, restaurants... or (again) parties.

9 Hard work will get you a professorship or a BMW. You need both work and luck for a Booker, a Nobel or a private jet.

10 Answer e-mails from junior people before more senior ones. Junior people have further to go and tend to remember who slighted them.
