
Showing posts with label algorithm.

Friday 4 January 2013

How algorithms secretly shape the way we behave


Algorithms, the key ingredients of all significant computer programs, have probably influenced your Christmas shopping and may one day determine how you vote
Program or be programmed? Schoolchildren learn to code. Photograph: Alamy
 
Keynes's observation (in his General Theory) that "practical men who believe themselves to be quite exempt from any intellectual influences, are usually the slaves of some defunct economist" needs updating. Replace "economist" with "algorithm". And delete "defunct", because the algorithms that now shape much of our behaviour are anything but defunct. They have probably already influenced your Christmas shopping, for example. They have certainly determined how your pension fund is doing, and whether your application for a mortgage has been successful. And one day they may effectively determine how you vote.

On the face of it, algorithms – "step-by-step procedures for calculations" – seem unlikely candidates for the role of tyrant. Their power comes from the fact that they are the key ingredients of all significant computer programs and the logic embedded in them determines what those programs do. In that sense algorithms are the secret sauce of a computerised world.
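To make that definition concrete, here is perhaps the oldest algorithm of all, Euclid's step-by-step recipe for finding the greatest common divisor of two numbers, written out as a minimal Python sketch:

```python
# Euclid's algorithm: a "step-by-step procedure for calculations".
# Repeat one simple rule until there is nothing left to do.
def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # prints 21
```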

And they are secret. Every so often, the veil is lifted when there's a scandal. Last August, for example, a "rogue algorithm" in the computers of a New York stockbroking firm, Knight Capital, embarked on 45 minutes of automated trading that eventually lost its owners $440m before it was stopped.

But, mostly, algorithms do their work quietly in the background. I've just logged on to Amazon to check out a new book on the subject – Automate This: How Algorithms Came to Rule Our World by Christopher Steiner. At the foot of the page Amazon tells me that two other books are "frequently bought together" with Steiner's volume: Nate Silver's The Signal and the Noise and Nassim Nicholas Taleb's Antifragile. This conjunction of interests is the product of an algorithm: no human effort was involved in deciding that someone who is interested in Steiner's book might also be interested in the writings of Silver and Taleb.
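Amazon's actual recommender is proprietary and far richer, but the core of a "frequently bought together" feature can be sketched in a few lines: count how often pairs of titles turn up in the same basket and surface the strongest pairings. A hypothetical sketch, with invented purchase data:

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase "baskets"; each set is one customer's order.
baskets = [
    {"Automate This", "The Signal and the Noise"},
    {"Automate This", "The Signal and the Noise", "Antifragile"},
    {"Automate This", "Antifragile"},
    {"The Signal and the Noise", "Antifragile"},
]

# Count how often each pair of titles appears in the same basket.
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

def frequently_bought_together(title, top_n=2):
    """Return the titles most often bought alongside `title`."""
    scores = Counter()
    for (a, b), count in pair_counts.items():
        if title == a:
            scores[b] += count
        elif title == b:
            scores[a] += count
    return [other for other, _ in scores.most_common(top_n)]

print(frequently_bought_together("Automate This"))
# the two other titles, ranked by co-purchase count in this toy data
```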

But book recommendations are relatively small beer – though I suspect they will have influenced a lot of online shopping at this time of year, as people desperately seek ideas for presents. The most powerful algorithm in the world is PageRank – the one that Google uses to determine the rankings of results from web searches – for the simple reason that, if your site doesn't appear in the first page of results, then effectively it doesn't exist. Not surprisingly, there is a perpetual arms race (euphemistically called search engine optimisation) between Google and people attempting to game PageRank. Periodically, Google tweaks the algorithm and unleashes a wave of nasty surprises across the web as people find that their hitherto modestly successful online niche businesses have suddenly – and unaccountably – disappeared.
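Google's current ranking formula is a closely guarded secret, but the original PageRank idea that Larry Page and Sergey Brin published is simple enough to sketch: a page matters if pages that matter link to it, a score you can compute by repeatedly passing importance around the link graph. A toy power-iteration sketch over an invented three-page web:

```python
# Original PageRank idea as power iteration. Each page shares its score
# evenly among the pages it links to, with a small "damping" chance of
# jumping to a random page. The link graph below is invented.
links = {
    "home":  ["about", "blog"],
    "about": ["home"],
    "blog":  ["home", "about"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outgoing in links.items():
            share = rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")  # "home" ends up with the highest score
```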

PageRank thus gives Google awesome power. And, ever since Lord Acton's time, we have known what power does to people – and institutions. So the power of PageRank poses serious regulatory issues for governments. On the one hand, the algorithm is a closely guarded commercial secret – for obvious reasons: if it weren't, then the search engine optimisers would have a field day and all search results would be suspect. On the other hand, because it's secret, we can't be sure that Google isn't skewing results to favour its own commercial interests, as some people allege.

Besides, there's more to power than commercial clout. Many years ago, the sociologist Steven Lukes pointed out that power comes in three varieties: the ability to stop people doing what they want to do; the ability to compel them to do things that they don't want to do; and the ability to shape the way they think. This last is the power that mass media have, which is why the Leveson inquiry was so important.

But, in a way, algorithms also have that power. Take, for example, the one that drives Google News. This was recently subjected to an illuminating analysis by Nick Diakopoulos from the Nieman Journalism Lab. Google claims that its selection of noteworthy news stories is "generated entirely by computer algorithms without human editors. No humans were harmed or even used in the creation of this page."

The implication is that the selection process is somehow more "objective" than a human-mediated one. Diakopoulos takes this cosy assumption apart by examining the way the algorithm works. There's nothing sinister about it, but it highlights the importance of understanding how software works. The choice that faces citizens in a networked world is thus: program or be programmed.

Tuesday 17 July 2012

Degree classifications are extremely crude - and pretty useless



When they graduate, students should simply be given a transcript of their marks as a record of their study, says Jonathan Wolff
Universities themselves do not find the classifications useful. Any student applying for further study will be asked for a transcript of all their marks, in addition to their degree result. Photograph: tomas del amo /Alamy
In Geneva a few weeks ago, as the American and European participants were discussing which insect repellent to use on their post-conference hikes, I had to leave in order to attend yet another meeting of an exam board. This year, I've been chair of several of our boards, "faculty observer" on others, and external examiner elsewhere, and so my desk has been littered with exam scripts and spreadsheets. My head is full of rules for dealing with classification and borderline cases. Degree schemes are like snowflakes: no two are alike.
North Americans rarely understand the expression "exam board" unless they have worked or studied in the UK. Of course, they grade their papers, often with substantial help from their teaching assistants. But once the marks are settled, that is it as far as the department is concerned. Marks go off to the university administration, and in due course find their way on to student transcripts.
Here, by contrast, at least two academics assess or moderate each paper. The mark then exists in a form of limbo until ratified by the exam board, the external examiner and the university examinations section. In some cases, a single essay will be read by three different people, and the mark adjusted twice, although this is rare. Marking in the UK is a process of handicraft, not mass production.
And what do we do with these finely tuned judgments? We put them into a computer that weights them for year of study, ignores some of the bad ones, and produces a number through some form of averaging process. That number will assign the candidate either to a clear degree class, or to a twilight borderline zone. If borderline, we then use another set of rules, apparently too complex for any computer, taking account of such things as "exit velocity", "spread of marks" and any extenuating circumstances, in turn graded A, B, C, and X. In such discussions a score of academics can spend a couple of happy hours for each degree programme trying to detect whiffs of high-class performance. Inevitably, and tragically, some students will be consigned to a lower classification by a hair's breadth.
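The mechanical part of that process, the bit the computer does do, is not hard to sketch. The following is a hypothetical illustration: the year weights, the "ignore the worst two marks" rule and the class boundaries are all invented, and real degree schemes differ on every one of them.

```python
# Hypothetical classification arithmetic: weight marks by year of study,
# ignore a couple of the worst ones, average, and map the result to a class.
# Weights, drop rule and boundaries are invented for illustration only.
def classify(marks_by_year, year_weights=None, drop_worst=2):
    year_weights = year_weights or {2: 1.0, 3: 2.0}  # final year counts double
    weighted = [(mark, year_weights.get(year, 0.0))
                for year, marks in marks_by_year.items()
                for mark in marks]
    weighted.sort()                   # worst marks first
    kept = weighted[drop_worst:]      # ignore some of the bad ones
    average = sum(m * w for m, w in kept) / sum(w for _, w in kept)
    if average >= 70:
        return average, "First"
    if average >= 60:
        return average, "Upper second (2:1)"
    if average >= 50:
        return average, "Lower second (2:2)"
    if average >= 40:
        return average, "Third"
    return average, "Fail"

print(classify({2: [62, 58, 71, 55], 3: [68, 74, 60, 66]}))
# (66.9, 'Upper second (2:1)') with this invented profile
```

Near a boundary, of course, a real board reaches for the further rules the article describes; the sketch only covers what the spreadsheet does before the arguing starts.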
And after all of this, what do we end up with? Given that many students now regard a 2:2 as hugely disappointing, the great majority find a way to do what they need to do to achieve at least an upper second. Some, with talent and hard work, will do even better and will be awarded a first. Those who in the old days would have performed weakly are likely to have failed at an earlier stage, and so just won't be there in the graduating class. I haven't seen a third in years: averaging between a 2:2 and a fail is a real challenge. Hence, after all this work, we assign perhaps 20% of students to the first-class category and most of the rest to the upper second-class group, with a sprinkling of lower seconds.
In other words, the job of an exam board is to spend a huge amount of effort taking a rich profile of information – how students have done over a wide range of assessments – and turning it into an extremely crude classification. And it is a classification that we find useless for our own purposes. Any student who applies for further study will be asked for a transcript of all their marks, in addition to their degree result. Universities apparently don't think the degree classification conveys very much useful information, so why should anyone else?
I'm coming to the conclusion that we should simply issue students with transcripts to record their study, and leave it at that. There are proposals to replace degree classifications with grade point averages, as in the US. That's a move in the right direction, but why have a summary measure at all? School achievement isn't summarised into a single number, so why should it be any different at university? If a student on a German and geography degree did brilliantly in German and miserably in geography, what purpose is served by reducing it all to a single score? And so my plea: No more classifications. No more algorithms. No more borderlines. And, most heartfelt of all, no more exam boards.
• Jonathan Wolff is professor of philosophy at University College London