

Saturday, 13 January 2018

The lesson for diagnosing a bubble

Tim Harford in The Financial Times



Here are three noteworthy pronouncements about bubbles. 

“Prices have reached what looks like a permanently high plateau.” That was Professor Irving Fisher in 1929, prominently reported barely a week before the most brutal stock market crash of the 20th century. He was a rich man, and the greatest economist of the age. The great crash destroyed both his finances and his reputation. 

“Those who sound the alarm of an approaching . . . crisis have somewhat exaggerated the danger.” That was a renowned commentator who shall remain nameless for now. 

“We are currently showing signs of entering the blow-off or melt-up phase of this very long bull market.” That was investor Jeremy Grantham on January 3 this year. The normally bearish Mr Grantham mused that while shares seem expensive, historical precedents make it plausible that the S&P 500 will soar from present levels of around 2,700 to more than 3,500 before the crash occurs. 

Mr Grantham’s speculation is striking because he has tended to be a savvy bubble watcher in the past. But as any toddler can attest, it is not an easy thing to catch one before it bursts. 

There are two obvious ways to diagnose a bubble. One is to look at the fundamentals: if the price of an asset is unmoored from the cash flow it is likely to generate, that is a warning sign. (It is anyone’s guess what this implies for bitcoin, an asset that has no cash flow at all.) 

The other approach is to look around: are people giddy with excitement? Can the media talk of little else? Are taxi drivers offering stock tips? 

At the moment, however, these two approaches tell different stories about US stocks. They are expensive by most reasonable measures. But there are few other signs of speculative mania. The price rise has been steady, broad-based and was hardly the leading news of 2017. Given how expensive bonds are, it is hardly a surprise that stocks also seem pricey. No wonder investors and commentators are unsure what to say or do. 

It all seems so much easier with hindsight: looking back, we can all enjoy a laugh at the Extraordinary Popular Delusions and the Madness of Crowds, to borrow the title of Charles Mackay’s famous 1841 book, which chuckles at the South Sea bubble and tulip mania. 

Yet even with hindsight things are not always clear. For example, I first became aware of the incipient dotcom bubble in the late 1990s, when a senior colleague told me that the upstart online bookseller Amazon.com was valued at more than every other bookseller on the planet. A clearer instance of mania could scarcely be imagined. 

But Amazon is worth much more today than at the height of the bubble, and comparing it with any number of booksellers now seems quaint. The dotcom bubble was mad and my colleague correctly diagnosed the lunacy, but he should still have bought and held Amazon stock. 

Tales of the great tulip mania in 17th-century Holland seem clearer — most notoriously, the Semper Augustus bulb that sold for the price of an Amsterdam mansion. 

“The population, even to its lowest dregs, embarked in the tulip trade,” sneered Mackay more than 200 years later. 

But the tale grows murkier still. The economist Peter Garber, author of Famous First Bubbles, points out that a rare tulip bulb could serve as the breeding stock for generations of valuable flowers; as its descendants became numerous, one would expect the price of individual bulbs to fall. 

Some of the most spectacular prices seem to have been empty tavern wagers by almost-penniless braggarts, ignored by serious traders but much noticed by moralists. The idea that Holland was economically convulsed is hard to support: the historian Anne Goldgar, author of Tulipmania, has been unable to find anyone who actually went bankrupt as a result. 

It is easy to laugh at the follies of the past, especially if they have been exaggerated for the purposes of sermonising or for comic effect. Charles Mackay copied and exaggerated the juiciest reports he could find in order to get his point across. Then there is the matter of his own record as a financial guru. That earlier comment, quoted here in full, “those who sound the alarm of an approaching railway crisis have somewhat exaggerated the danger”, came from Mackay himself, writing in the Glasgow Argus in 1845 in full-throated support of the idea that the railway investment boom of the time would return a healthy profit to investors. It was, instead, a financial disaster. In the words of mathematician and bubble scholar Andrew Odlyzko, it was “by many measures the greatest technology mania in history, and its collapse was one of the greatest financial crashes”. 

Oddly, Mackay barely mentions the railway mania in subsequent editions of his book — nor his own role as cheerleader. This is a lesson to us all. It’s very easy to scoff at past bubbles; it is not so easy to know how to react when one may — or may not — be surrounded by one.

Sunday, 28 May 2017

When algorithms are racist

Ian Tucker in The Guardian





Joy Buolamwini is a graduate researcher at the MIT Media Lab and founder of the Algorithmic Justice League – an organisation that aims to challenge the biases in decision-making software. She grew up in Mississippi, gained a Rhodes scholarship, and she is also a Fulbright fellow, an Astronaut scholar and a Google Anita Borg scholar. Earlier this year she won a $50,000 scholarship funded by the makers of the film Hidden Figures for her work fighting coded discrimination.


A lot of your work concerns facial recognition technology. How did you become interested in that area?

When I was a computer science undergraduate I was working on social robotics – the robots use computer vision to detect the humans they socialise with. I discovered I had a hard time being detected by the robot compared to lighter-skinned people. At the time I thought this was a one-off thing and that people would fix this.

Later I was in Hong Kong for an entrepreneur event where I tried out another social robot and ran into similar problems. I asked about the code that they used and it turned out we’d used the same open-source code for face detection – this is where I started to get a sense that unconscious bias might feed into the technology that we create. But again I assumed people would fix this.

So I was very surprised to come to the Media Lab about half a decade later as a graduate student, and run into the same problem. I found wearing a white mask worked better than using my actual face. This is when I thought, you’ve known about this for some time, maybe it’s time to speak up.


How does this problem come about?


Within the facial recognition community you have benchmark data sets which are meant to show the performance of various algorithms so you can compare them. There is an assumption that if you do well on the benchmarks then you’re doing well overall. But we haven’t questioned the representativeness of the benchmarks, so if we do well on that benchmark we give ourselves a false notion of progress.

When we look at it now it seems very obvious, but with work in a research lab, I understand you do the “down the hall test” – you’re putting this together quickly, you have a deadline, I can see why these skews have come about. Collecting data, particularly diverse data, is not an easy thing.

Outside of the lab, isn’t it difficult to tell that you’re discriminated against by an algorithm?

Absolutely, you don’t even know it’s an option. We’re trying to identify bias, to point out cases where bias can occur so people can know what to look out for, but also develop tools where the creators of systems can check for a bias in their design.

Instead of getting a system that works well for 98% of people in this data set, we want to know how well it works for different demographic groups. Let’s say you’re using systems that have been trained on lighter faces but the people most impacted by the use of this system have darker faces: is it fair to use that system on that specific population? 

Georgetown Law recently found that one in two adults in the US has their face in the facial recognition network. That network can be searched using algorithms that haven’t been audited for accuracy. I view this as another red flag for why it matters that we highlight bias and provide tools to identify and mitigate it.
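Her point about the 98% figure is, in effect, an argument for disaggregated evaluation: report performance per demographic group rather than as a single headline number. The short Python sketch below illustrates the idea; the group labels, the toy results and the function name are illustrative assumptions, not taken from her tooling or from any real benchmark.

from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (predicted_label, true_label, group) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for predicted, true, group in records:
        total[group] += 1
        if predicted == true:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy example: a face detector that looks fine overall but fails one group badly.
results = [
    ("face", "face", "lighter-skinned"), ("face", "face", "lighter-skinned"),
    ("face", "face", "lighter-skinned"), ("face", "face", "lighter-skinned"),
    ("face", "face", "darker-skinned"), ("no face", "face", "darker-skinned"),
]
overall = sum(p == t for p, t, _ in results) / len(results)
print(f"overall accuracy: {overall:.2f}")      # 0.83 - looks respectable
for group, acc in accuracy_by_group(results).items():
    print(f"{group}: {acc:.2f}")               # 1.00 vs 0.50 - the gap the single number hides

The same disaggregation applies to any metric, not just accuracy: false-match and false-non-match rates can be broken out by group in exactly the same way.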


Besides facial recognition what areas have an algorithm problem?


The rise of automation and the increased reliance on algorithms for high-stakes decisions such as whether someone gets insurance or not, their likelihood of defaulting on a loan or their risk of recidivism means this is something that needs to be addressed. Even admissions decisions are increasingly automated – what school our children go to and what opportunities they have. We don’t have to bring the structural inequalities of the past into the future we create, but that’s only going to happen if we are intentional.


If these systems are based on old data isn’t the danger that they simply preserve the status quo?

Absolutely. A study on Google found that ads for executive-level positions were more likely to be shown to men than women – if you’re trying to determine who the ideal candidate is and all you have is historical data to go on, you’re going to present an ideal candidate based on the values of the past. Our past dwells within our algorithms. We know our past is unequal, but to create a more equal future we have to look at the characteristics we are optimising for. Who is represented? Who isn’t represented?

Isn’t there a counter-argument to transparency and openness for algorithms? One, that they are commercially sensitive and two, that once in the open they can be manipulated or gamed by hackers?

I definitely understand companies want to keep their algorithms proprietary because that gives them a competitive advantage, and depending on the types of decisions that are being made and the country they are operating in, that can be protected.

When you’re dealing with deep neural networks that are not necessarily transparent in the first place, another way of being accountable is being transparent about the outcomes and about the biases the system has been tested for. Others have been working on black box testing for automated decision-making systems. You can keep your secret sauce secret, but we need to know, given these inputs, whether there is any bias across gender or ethnicity in the decisions being made.
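One common way to make the black box testing she mentions concrete is to compare favourable-outcome rates across groups without looking inside the model at all. The sketch below is a generic illustration of that idea under stated assumptions: the function names are made up, the system under audit is assumed to return a simple yes/no decision, and the 0.8 “four-fifths” threshold is a widely cited rule of thumb rather than anything specified in the interview.

def positive_rate(examples, group, predict):
    """Share of examples from `group` receiving the favourable outcome.
    `predict` is the opaque system under audit; assumed to return True/False."""
    subset = [x for x in examples if x["group"] == group]
    return sum(1 for x in subset if predict(x)) / len(subset)

def disparate_impact_ratio(examples, group_a, group_b, predict):
    """Ratio of favourable-outcome rates between two groups.
    Values far below 1.0 flag a disparity worth investigating."""
    return positive_rate(examples, group_a, predict) / positive_rate(examples, group_b, predict)

# Hypothetical usage against an opaque loan-approval model:
# ratio = disparate_impact_ratio(audit_set, "group_a", "group_b", model.predict)
# if ratio < 0.8:   # illustrative "four-fifths rule" threshold
#     print("Outcome disparity detected - investigate before deployment.")

The auditor never needs the model’s weights or training data, only the ability to submit inputs and observe decisions, which is what makes this compatible with keeping the “secret sauce” secret.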


Thinking about yourself – growing up in Mississippi, a Rhodes Scholar, a Fulbright Fellow and now at MIT – do you wonder that if those admissions decisions had been taken by algorithms you might not have ended up where you are?

If we’re thinking about likely probabilities in the tech world, black women are in the 1%. But when I look at the opportunities I have had, I am a particular type of person who would do well. I come from a household with two college-educated parents – my grandfather was a professor in a school of pharmacy in Ghana – so when you look at other people who have had the opportunity to become a Rhodes Scholar or do a Fulbright, I very much fit those patterns. Yes, I’ve worked hard and I’ve had to overcome many obstacles, but at the same time I’ve been positioned to do well by other metrics. So it depends on what you choose to focus on – looking from an identity perspective, it’s a very different story.

In the introduction to Hidden Figures the author Margot Lee Shetterly talks about how growing up near Nasa’s Langley Research Center in the 1960s led her to believe that it was standard for African Americans to be engineers, mathematicians and scientists…

That it becomes your norm. The movie reminded me of how important representation is. We have a very narrow vision of what technology can enable right now because we have very low participation. I’m excited to see what people create when it’s no longer just the domain of the tech elite; what happens when we open this up is what I want to be part of enabling.

Wednesday, 25 April 2012

Predicting how a player is going to perform has always been a tricky business

Who'd have thunk it?


Ed Smith
April 25, 2012


Did you pick them first time? Did you recognise how good they were at first glance? Or did you conveniently revise your opinion much later, when the results started to come in? 

I've been asking myself that question as I've followed the career of Vernon Philander. He now has 51 wickets in just seven Tests. Only the Australian seamer CBT Turner, back in 1888, reached 50 wickets faster than South Africa's new bowling sensation. I don't mean any disrespect to the legends of the past, but I think it's safe to say that Test cricket has moved on a bit since the days of Turner. So Philander has had statistically the best start to a Test bowling career in modern history.

Who saw that coming? I can claim only half-prescience, and I sadly lacked the courage to go on the record. I first encountered Philander when I was captain of Middlesex in 2008 and he joined the club as our overseas pro. I didn't know much about him beyond what I'd been told - "Allrounder, hard-hitting batter, maybe a bit more of a bowler." Armed with no more information than that, I found myself batting in the nets against our new signing just a couple of minutes after I'd met him.

After the usual pleasantries, it was down to the serious business of Philander bowling at me on a green net surface with a new ball in his hand. So what did I think? Honestly? I thought: "Hmm, I thought they said he was a 'useful allrounder'? Looks more like a genuine opening bowler to me. But I'd better keep it to myself - maybe I've just lost it a bit?"

Philander was just as impressive in matches as he was in the nets. He quickly went from bowling first-change to opening the bowling, then to being our strike bowler. Was he just having a great run of form or was he always this good? Looking back on it, I wish I'd said to everyone - "Forget the fact he can also bat, this bloke is a serious bowler."

When we form judgments of players, we tend to be conditioned by the labels that are already attached to them - "bowling allrounder", "wicketkeeper-batsman", "promising youngster". Once a player has been put in the wrong box, our opinions tend to be conditioned by what everyone else has said. We are clouded by the conventional wisdom that surrounds us.

Look at Andrew Flintoff. It took years for everyone to realise that he was one of the best fast bowlers in the world in the mid-2000s. That was partly because we were distracted by his swashbuckling batting. We were so busy judging him as an allrounder that we failed to notice that he was holding his own against the best in the world, purely as a bowler.

When I played against Matt Prior in his early days at Sussex I thought he was among their best batsmen. The fact that he also kept wicket led him to be underrated as a pure batsman. He could completely change a game in one session and was often the player I was most happy to see dismissed.

The dressing room is often too slow to acknowledge that a young player is already a serious performer. It cuts against the overstated notion of "He's still got a lot to learn." I have a strange sense of satisfaction at having helped propel the then little-known fast bowler Graham Onions into the England team. Other players weren't convinced he was the genuine article. But he knocked me over so often in 2006 that I had no choice but to become his greatest advocate. I haven't changed my mind: when he is fit, he is one of the best bowlers around.

I played against Tim Bresnan in one of his first matches for Yorkshire. He thudded a short ball into my chest in his first over. "Can't believe that hurt," one of my team-mates scoffed, "it was only bowled by that debutant bloke." True enough. But every top player has to start out as a debutant.

The gravest errors of judgement, of course, make for the really good stories. When Aravinda de Silva played for Kent in 1995, he brought along a young Sri Lankan to have a bowl in the nets at Canterbury. What did the Kent players think of the young lad, Aravinda wondered? The general view was that he was promising but not worth a contract.

It was Muttiah Muralitharan.

Sometimes, of course, everyone fails to predict the trajectory of a career. Earlier this month, Alan Richardson was named one of the Wisden Cricketers of the Year. That is exalted company to keep: Kumar Sangakkara and Alastair Cook were among the other winners.

Richardson is a 36-year-old county professional who has played for Derbyshire, Warwickshire, Middlesex and now Worcestershire. For much of his career, Richardson has had to fight for every game he has played. He started out as a trialist, travelling around the country looking for 2nd team opportunities. It wasn't until he turned 30 that he became an automatic selection in first-class cricket.

Richardson was a captain's dream at Middlesex: honest, loyal, honourable, hard-working and warm-hearted. By their early 30s, most seamers are in decline and have to suffer the indignity of watching batsmen they once bullied smash them around the ground. Not Richardson. Aged 34, he taught himself the away-swinger - typical of his relentless hunger for self-improvement. In 2011, Richardson clocked up more first-class wickets than anyone.

About to turn 37, he says his chances of playing for England have gone. I hope he's wrong. No one could more richly deserve the right to play for his country. Watching Richardson pull on an England cap would be one of the finest sights in cricket - the perfect example of character rewarded. And it would be further proof that some cricketers will always be quiet achievers, inching towards excellence without vanity or fanfare. They deserve the limelight more than anyone.

Former England, Kent and Middlesex batsman Ed Smith's new book, Luck - What It Means and Why It Matters, is out now.