
Tuesday 22 December 2020

The Jan Vertonghen case shows concussion is all part of the sporting capitalism system

The defender, like many others, played through headaches and dizziness because his career depended on it, writes Jonathan Liew in The Guardian


Tottenham’s Jan Vertonghen sustained a concussion in the 2019 Champions League semi-final against Ajax and for most of the following season had dizziness and headaches. Photograph: Matthew Childs/Action Images via Reuters


It was around the end of last year that people began to notice Jan Vertonghen was looking decidedly off the pace at Tottenham. He was slow off the mark, slow to the ball, slow to react. Occasionally entire passages of play seemed to pass him by. And so, naturally, as an underperforming player in a popular ball game, it felt only right that he should be subjected to the same pitch of ridicule and abuse as anyone else in his position.

I went back through social media during some of his poorer games last season and pulled out a few of the more representative comments from Spurs fans and others. “Legs gone.” “Sad, but hasn’t got a clue what day it is.” “Get this clown out of my club.” “Finished.” “Past it.” “Utter disgrace.” “Sell.” “Dead wood.” “Stealing a living.” “Happy if I never see him in the shirt again.”






Well, now we know what was really going on. Last week Vertonghen revealed that for most of last season he was enduring the after-effects of a concussion sustained against Ajax the previous April. “I suffered a lot from dizziness and headaches,” said Vertonghen, now at Benfica. “It affected me for eight or nine months. I still had a year left on my contract, and thought I had to play because I had to showcase myself to other clubs.”

On Monday a working group led by the Premier League and featuring the Football Association, the EFL, Professional Footballers’ Association and Women’s Super League sat down to discuss whether there should be restrictions on heading the ball in adult football. It follows a 2019 study by the University of Glasgow that found professional footballers were three times more likely to die of neurodegenerative diseases than the rest of the population.

Meanwhile, the former England hooker Steve Thompson is one of a number of former players launching legal action against World Rugby, the Rugby Football Union and Welsh Rugby Union for an alleged failure to protect them from repeated head traumas.

Thompson is 42 and has been diagnosed with dementia. He no longer remembers winning the World Cup in 2003. “Was it a massive love of my life?” he said of rugby union in an interview with this newspaper two weeks ago. “No, not really. But it was a job.”

A question to consider as you scroll through all this: what does it make you feel? Sadness? Or sadness with a “but”? But: Vertonghen and Thompson knew what they were doing. But: they were handsomely paid for their trouble. But: you can’t ban heading in football, that’s just ridiculous.

But: any of us could get a traumatic brain injury simply by walking down the street and into the path of a falling piano. Life is risky. Sport is dangerous.

There is a broad school of thought here that at its core, the debate over head injuries in elite sport – one that can easily be extended to other areas of player welfare – is simply a matter of personal choice. If athletes are prepared to embark on a career in professional sport, then as long as they do so fully apprised of the risks and in possession of the latest medical science, who are we to impede them?

Occasionally you will even see this idea expressed in terms of liberation, self-actualisation, even gratification: the notion that danger is not only part of the basic thrill of sport, but also the very point. That the essence of sport is bound up in sacrifice. That on some level we are all animalistically addicted to testing ourselves, pushing ourselves, breaking ourselves. Or at the very least, watching with a beer while others do.




If we can no longer pay teenagers ridiculous money to give themselves brain damage for our gratification, then frankly are we even still free as a species? And ultimately, this is a question that cuts to the very core of what sport means, and who it serves. After all, choices are not made in a vacuum: they are influenced, impelled, incentivised.

Vertonghen played on because he felt his livelihood was on the line. Thompson played on because it was his job to do so. No scientific paper or well‑intentioned press release will ever override the profit motive. And so to focus on personal autonomy is to ignore the extent to which athletes, like all labour, are co-opted into an economy that they did not choose and over which they have little to no influence.

This is, of course, how sporting capitalism works: I get entertained, you get paid, and everything else is window dressing. Sporting capitalism simply buys off your fatigue, your mental health issues, your insecurities, your quality of life, your memory loss, your pain. If you tear a ligament, then it’s financially counterproductive for your club to make you play.

But a concussion? Well, we didn’t see anything, and obviously you can’t, so … how about we keep this one to ourselves? Partly this is a critique of a system that essentially regards the athlete as industrial plant: a part, a tool, a resource from which to extract performance value. But partly, too, this is a process in which we all participate. And for those of us who take pleasure from sport, perhaps this is a moment to consider what we owe the people risking their safety for our entertainment. To remember that welfare does not begin and end with a wage.

To bear in mind, above all, that within every superhuman athlete there is a human who bends and breaks like everyone else.

Tuesday 28 July 2020

America's 'untouchables': the silent power of the caste system

We cannot fully understand the current upheavals, or almost any turning point in American history, without accounting for the human pyramid that is encrypted into us all: the caste system. By Isabel Wilkerson in The Guardian 


In the winter of 1959, after leading the Montgomery bus boycott that arose from the arrest of Rosa Parks and before the trials and triumphs to come, Martin Luther King Jr and his wife, Coretta, landed in India, in the city then known as Bombay, to visit the land of Mahatma Gandhi, the father of nonviolent protest. They were covered in garlands upon arrival, and King told reporters: “To other countries, I may go as a tourist, but to India I come as a pilgrim.”

He had long dreamed of going to India, and they stayed an entire month. King wanted to see for himself the place whose fight for freedom from British rule had inspired his fight for justice in America. He wanted to see the so-called “untouchables”, the lowest caste in the ancient Indian caste system, whom he had read about and had sympathy for, but who had still been left behind after India gained its independence the decade before.

He discovered that people in India had been following the trials of his own oppressed people in the US, and knew of the bus boycott he had led. Wherever he went, the people on the streets of Bombay and Delhi crowded around him for an autograph. At one point in their trip, King and his wife journeyed to the southern tip of the country, to the city of Trivandrum in the state of Kerala, and visited with high-school students whose families had been untouchables. The principal made the introduction.

“Young people,” he said, “I would like to present to you a fellow untouchable from the United States of America.”

King was floored. He had not expected that term to be applied to him. He was, in fact, put off by it at first. He had flown in from another continent, and had dined with the prime minister. He did not see the connection, did not see what the Indian caste system had to do directly with him, did not immediately see why the lowest-caste people in India would view him, an American Negro and a distinguished visitor, as low-caste like themselves, see him as one of them. “For a moment,” he wrote, “I was a bit shocked and peeved that I would be referred to as an untouchable.”

Then he began to think about the reality of the lives of the people he was fighting for – 20 million people, consigned to the lowest rank in the US for centuries, “still smothering in an airtight cage of poverty,” quarantined in isolated ghettoes, exiled in their own country.

And he said to himself: “Yes, I am an untouchable, and every negro in the United States of America is an untouchable.” In that moment, he realised that the land of the free had imposed a caste system not unlike the caste system of India, and that he had lived under that system all of his life. It was what lay beneath the forces he was fighting in the US.


Martin Luther King Jr visiting India in 1959. Photograph: Rangaswamy Satakopan/AP

What Martin Luther King Jr recognised about his country that day had begun long before the ancestors of our ancestors had taken their first breaths. More than a century and a half before the American Revolution, a human hierarchy had evolved on the contested soil of what would become the United States – a concept of birthright, the temptation of entitled expansion that would set in motion what has been called the world’s oldest democracy and, with it, a ranking of human value and usage.

It would twist the minds of men, as greed and self-reverence eclipsed human conscience and allowed the conquering men to take land and human bodies that they convinced themselves they had a right to. If they were to convert this wilderness and civilise it to their liking, they decided, they would need to conquer, enslave or remove the people already on it, and transport those they deemed lesser beings in order to tame and work the land to extract the wealth that lay in the rich soil and shorelines.

To justify their plans, they took pre-existing notions of their own centrality, reinforced by their self-interested interpretation of the Bible, and created a hierarchy of who could do what, who could own what, who was on top and who was on the bottom and who was in between. There emerged a ladder of humanity, global in nature, as the upper-rung people would descend from Europe, with rungs inside that designation – the English Protestants at the very top, as their guns and resources would ultimately prevail in the bloody fight for North America. Everyone else would rank in descending order, on the basis of their proximity to those deemed most superior. The ranking would continue downward until one arrived at the very bottom: African captives transported in order to build the New World and to serve the victors for all their days, one generation after the next, for 12 generations.

There developed a caste system, based upon what people looked like – an internalised ranking, unspoken, unnamed and unacknowledged by everyday citizens even as they go about their lives adhering to it and acting upon it subconsciously, to this day. Just as the studs and joists and beams that form the infrastructure of a building are not visible to those who live in it, so it is with caste. Its very invisibility is what gives it power and longevity. And though it may move in and out of consciousness, though it may flare and reassert itself in times of upheaval and recede in times of relative calm, it is an ever-present through-line in the country’s operation.

A caste system is an artificial construction, a fixed and embedded ranking of human value that sets the presumed supremacy of one group against the presumed inferiority of others, on the basis of ancestry and often of immutable traits – traits that would be neutral in the abstract, but are ascribed life-and-death meaning in a hierarchy favouring the dominant caste whose forebears designed it. A caste system uses rigid, often arbitrary boundaries to keep the ranked groupings apart, distinct from one another and in their assigned places.

Throughout human history, three caste systems have stood out. The tragically accelerated, chilling and officially vanquished caste system of Nazi Germany. The lingering, millennia-long caste system of India. And the shape-shifting, unspoken, race-based caste pyramid in the US. Each version relied on stigmatising those deemed inferior in order to justify the dehumanisation necessary to keep the lowest-ranked people at the bottom, and to rationalise the protocols of enforcement. A caste system endures because it is often justified as divine will, originating from a sacred text or the presumed laws of nature, reinforced throughout the culture and passed down through the generations.

As we go about our daily lives, caste is the wordless usher in a darkened theatre, the flashlight cast down the aisles, guiding us to our assigned seats for a performance. The hierarchy of caste is not about feelings or morality. It is about power: which groups have it and which do not. It is about resources: which caste is seen as worthy of them, and which are not; who gets to acquire and control them, and who does not. It is about respect, authority and assumptions of competence: who is accorded these, and who is not.

As a means of assigning value to entire swaths of humankind, caste guides each of us, often beyond the reaches of our awareness. It embeds into our bones an unconscious ranking of human characteristics, and sets forth the rules, expectations and stereotypes that have been used to justify brutalities against entire groups within our species. In the American caste system, the signal of rank is what we call race, the division of humans on the basis of their appearance. In the US, race is the primary tool and the visible decoy – the frontman – for caste.


Racial segregation at a bus station in North Carolina in 1940. Photograph: PhotoQuest/Getty Images

Race does the heavy lifting for a caste system that demands a means of human division. If we have been trained to see humans in the language of race, then caste is the underlying grammar that we encode as children, as when learning our mother tongue. Caste, like grammar, becomes an invisible guide not only to how we speak, but to how we process information – the autonomic calculations that figure into a sentence without our having to think about it. Many of us have never taken a class in grammar, yet we know in our bones that a transitive verb takes an object, that a subject needs a predicate, and we know without thinking the difference between third-person singular and third-person plural. We might mention “race”, referring to people as black or white or Latino or Asian or indigenous, when what lies beneath each label is centuries of history, and the assigning of assumptions and values to physical features in a structure of human hierarchy.

What people look like – or rather, the race they have been assigned, or are perceived to belong to – is the visible cue to their caste. It is the historic flashcard to the public, showing how people are to be treated, where they are expected to live, what kinds of positions they are expected to hold, whether they belong in this section of town or that seat in a boardroom, whether they should be expected to speak with authority on this or that subject, whether they will be administered pain relief in a hospital, whether their neighbourhood is likely to adjoin a toxic waste site or to have contaminated water flowing from their taps, whether they are more or less likely to survive childbirth in the most advanced nation in the world, whether they may be shot by authorities with impunity.

Caste and race are neither synonymous nor mutually exclusive. They can and do coexist in the same culture, and serve to reinforce each other. Caste is the bones, race the skin. Race is what we can see, the physical traits that have been given arbitrary meaning and become shorthand for who a person is. Caste is the powerful infrastructure that holds each group in its place.

Caste is fixed and rigid. Race is fluid and superficial, subject to periodic redefinition to meet the needs of the dominant caste in what is now the US. While the requirements to qualify as white have changed over the centuries, the fact of a dominant caste has remained constant from its inception – whoever fit the definition of white, at whatever point in history, was granted the legal rights and privileges of the dominant caste. Perhaps more critically and tragically, at the other end of the ladder, the subordinated caste, too, has been fixed from the beginning as the psychological floor beneath which all other castes cannot fall.

Caste is not a term often applied to the US. It is considered the language of India or feudal Europe. But some anthropologists and scholars of race in the US have made use of the term for decades. Before the modern era, one of the earliest Americans to take up the idea of caste was the antebellum abolitionist and US senator Charles Sumner, as he fought against segregation in the north. “The separation of children in the Public Schools of Boston, on account of color or race,” he wrote, “is in the nature of Caste, and on this account is a violation of Equality.” He quoted a fellow humanitarian: “Caste makes distinctions among creatures where God has made none.”

We cannot fully understand the current upheavals, or almost any turning point in American history, without accounting for the human pyramid that is encrypted into us all. The caste system, and the attempts to defend, uphold or abolish the hierarchy, underlay the American civil war and the civil rights movement a century later, and pervade the politics of the 21st-century US. Just as DNA is the code of instructions for cell development, caste has been the operating system for economic, political and social interaction in the US since the time of its gestation.

In 1944, the Swedish social economist Gunnar Myrdal and a team of the most talented researchers in the country produced a 2,800-page, two-volume work that is still considered perhaps the most comprehensive study of race in the US. It was titled An American Dilemma. Myrdal’s investigation into race led him to the realisation that the most accurate term to describe the workings of US society was not race, but caste – and that perhaps it was the only term that really addressed what seemed a stubbornly fixed ranking of human value.

The anthropologist Ashley Montagu was among the first to argue that race is a human invention – a social construct, not a biological one – and that in seeking to understand the divisions and disparities in the US, we have typically fallen into the quicksand and mythology of race. “When we speak of ‘the race problem in America’,” he wrote in 1942, “what we really mean is the caste system and the problems which that caste system creates in America.”

There was little confusion among some of the leading white supremacists of the previous century as to the connections between India’s caste system and that of the American south, where the purest legal caste system in the US existed. “A record of the desperate efforts of the conquering upper classes in India to preserve the purity of their blood persists until this very day in their carefully regulated system of castes,” wrote Madison Grant, a popular eugenicist, in his 1916 bestseller, The Passing of the Great Race. “In our Southern States, Jim Crow cars and social discriminations have exactly the same purpose.”

In 1913, Bhimrao Ambedkar, a man born to the bottom of India’s caste system, born an untouchable in the central provinces, arrived in New York City from Bombay. He came to the US to study economics as a graduate student at Columbia, focused on the differences between race, caste and class. Living just blocks from Harlem, he would see first-hand the condition of his counterparts in the US. He completed his thesis just as the film The Birth of a Nation – the incendiary homage to the Confederate south – premiered in New York in 1915. He would study further in London and return to India to become the foremost leader of the untouchables, and a pre-eminent intellectual who would help draft a new Indian constitution. He would work to dispense with the demeaning term “untouchable”. He rejected the term Harijans, which had been applied to them by Gandhi, to their minds patronisingly. He renamed his people Dalits, meaning “broken people” – which, due to the caste system, they were.


A statue of Bhimrao Ambedkar under a flyover in Amritsar, India. Photograph: Narinder Nanu/AFP/Getty Images

It is hard to know what effect his exposure to the American social order had on him personally. But over the years, he paid close attention, as did many Dalits, to the subordinate caste in the US. Indians had long been aware of the plight of enslaved Africans, and of their descendants in the US. Back in the 1870s, after the end of slavery and during the brief window of black advancement known as Reconstruction, an Indian social reformer named Jyotirao Phule found inspiration in the US abolitionists. He expressed hope “that my countrymen may take their example as their guide”.

Many decades later, in the summer of 1946, acting on news that black Americans were petitioning the United Nations for protection as minorities, Ambedkar reached out to the best-known African American intellectual of the day, WEB Du Bois. He told Du Bois that he had been a “student of the Negro problem” from across the oceans, and recognised their common fates.

“There is so much similarity between the position of the Untouchables in India and of the position of the Negroes in America,” Ambedkar wrote to Du Bois, “that the study of the latter is not only natural but necessary.”

Du Bois wrote back to Ambedkar to say that he was, indeed, familiar with him, and that he had “every sympathy with the Untouchables of India”. It had been Du Bois who seemed to have spoken for the marginalised in both countries as he identified the double consciousness of their existence. And it was Du Bois who, decades before, had invoked an Indian concept in channelling the “bitter cry” of his people in the US: “Why did God make me an outcast and a stranger in mine own house?”

I began investigating the American caste system after nearly two decades of examining the history of the Jim Crow south, the legal caste system that grew out of enslavement and lasted into the early 70s, within the lifespans of many present-day Americans. I discovered that I was not writing about geography and relocation, but about the American caste system – an artificial hierarchy in which most everything that you could and could not do was based upon what you looked like, and which manifested itself north and south. I had been writing about a stigmatised people – 6 million of them – who were seeking freedom from the caste system in the south, only to discover that the hierarchy followed them wherever they went, much in the way that the shadow of caste (as I would soon discover) follows Indians in their own global diaspora.

The American caste system began in the years after the arrival of the first Africans to the Colony of Virginia in the summer of 1619, as the colony sought to refine the distinctions of who could be enslaved for life and who could not. Over time, colonial laws granted English and Irish indentured servants greater privileges than the Africans who worked alongside them, and the Europeans were fused into a new identity – that of being categorised as white, the polar opposite of black. The historian Kenneth M Stampp called this assigning of race a “caste system, which divided those whose appearance enabled them to claim pure Caucasian ancestry from those whose appearance indicated that some or all of their forebears were Negroes”. Members of the Caucasian caste, as he called it, “believed in ‘white supremacy’, and maintained a high degree of caste solidarity to secure it”.

While I was in the midst of my research, word of my inquiries spread to some Indian scholars of caste based in the US. They invited me to speak at an inaugural conference on caste and race at the University of Massachusetts in Amherst, the town where WEB Du Bois was born and where his papers are kept.

There, I told the audience that I had written a 600-page book about the Jim Crow era in the American south – the time of naked white supremacy – but that the word “racism” did not appear anywhere in the narrative. I told them that, after spending 15 years studying the topic and hearing the testimony of the survivors of the era, I had realised that the term was insufficient. “Caste” was the more accurate term, and I set out to them the reasons why. They were both stunned and heartened. The plates of Indian food kindly set before me at the reception thereafter sat cold due to the press of questions and the sharing that went on into the night.

At a closing ceremony, the hosts presented to me a bronze-coloured bust of the patron saint of the low-born of India, Bhimrao Ambedkar, the Dalit leader who had written to Du Bois all those decades before.

It felt like an initiation into a caste to which I had somehow always belonged. Over and over, they shared stories of what they had endured, and I responded in personal recognition, as if even to anticipate some particular turn or outcome. To their astonishment, I began to be able to tell who was high-born and who was low-born among the Indian people there, not from what they looked like, as one might in the US, but on the basis of the universal human response to hierarchy – in the case of an upper-caste person, an inescapable certitude in bearing, demeanour, behaviour and a visible expectation of centrality.

On the way home, I was snapped back into my own world when airport security flagged my suitcase for inspection. The TSA worker happened to be an African American who looked to be in his early 20s. He strapped on latex gloves to begin his work. He dug through my suitcase and excavated a small box, unwrapped the folds of paper and held in his palm the bust of Ambedkar that I had been given.

“This is what came up in the X-ray,” he said. It was heavy like a paperweight. He turned it upside down and inspected it from all sides, his gaze lingering at the bottom of it. He seemed concerned that something might be inside.

“I’ll have to swipe it,” he warned me. He came back after some time and declared it OK, and I could continue with it on my journey. He looked at the bespectacled face, with its receding hairline and steadfast expression, and seemed to wonder why I would be carrying what looked like a totem from another culture.

“So who is this?” he asked.

“Oh,” I said, “this is the Martin Luther King of India.”

“Pretty cool,” he said, satisfied now, and seeming a little proud.

He then wrapped Ambedkar back up as if he were King himself, and set him back gently into the suitcase.

Saturday 16 May 2020

Humans are not resources. Coronavirus shows why we must democratise work

Our health and lives cannot be ruled by market forces alone. Now thousands of scholars are calling for a way out of the crisis. Nancy Fraser, Susan Neiman, Chantal Mouffe, Saskia Sassen, Jan-Werner Müller, Dani Rodrik, Thomas Piketty, Gabriel Zucman, Ha-Joon Chang, and many others write in The Guardian


Healthcare workers protest against the handling of the coronavirus crisis in Liège, Belgium, May 2020. Photograph: Yves Herman/Reuters


Working humans are so much more than “resources”. This is one of the central lessons of the current crisis. Caring for the sick; delivering food, medication and other essentials; clearing away our waste; stocking the shelves and running the registers in our grocery stores – the people who have kept life going through the Covid-19 pandemic are living proof that work cannot be reduced to a mere commodity. Human health and the care of the most vulnerable cannot be governed by market forces alone. If we leave these things solely to the market, we run the risk of exacerbating inequalities to the point of forfeiting the very lives of the least advantaged.

How to avoid this unacceptable situation? By involving employees in decisions relating to their lives and futures in the workplace – by democratising firms. By decommodifying work – by collectively guaranteeing useful employment to all. As we face the monstrous risk of pandemic and environmental collapse, making these strategic changes would allow us to ensure the dignity of all citizens while marshalling the collective strength and effort we need to preserve our life together on this planet.

Every morning, men and women, especially members of racialised communities, migrants and informal economy workers, rise to serve those among us who are able to remain under quarantine. They keep watch through the night. The dignity of their jobs needs no other explanation than that eloquently simple term “essential worker”. That term also reveals a key fact that capitalism has always sought to render invisible with another term, “human resource”. Human beings are not one resource among many. Without labour investors, there would be no production, no services, no businesses at all.

Every morning, quarantined men and women rise in their homes to fulfil from afar the missions of the organisations for which they work. They work into the night. To those who believe that employees cannot be trusted to do their jobs without supervision, that workers require surveillance and external discipline, these men and women are proving the contrary. They are demonstrating, day and night, that workers are not one type of stakeholder among many: they hold the keys to their employers’ success. They are the core constituency of the firm, but are, nonetheless, mostly excluded from participating in the government of their workplaces – a right monopolised by capital investors.

To the question of how firms and how society as a whole might recognise the contributions of their employees in times of crisis, democracy is the answer. Certainly, we must close the yawning chasm of income inequality and raise the income floor – but that alone is not enough. After the two world wars, women’s undeniable contribution to society helped win them the right to vote. By the same token, it is time to enfranchise workers.

Representation of labour investors in the workplace has existed in Europe since the close of the second world war, through institutions known as works councils. Yet these representative bodies have a weak voice at best in the government of firms, and are subordinate to the choices of the executive management teams appointed by shareholders. They have been unable to stop or even slow the relentless momentum of self-serving capital accumulation, ever more powerful in its destruction of our environment. These bodies should now be granted similar rights to those exercised by boards. To do so, firm governments (that is, top management) could be required to obtain double majority approval, from chambers representing workers as well as shareholders.

In Germany, the Netherlands and Scandinavia, different forms of codetermination (Mitbestimmung) put in place progressively after the second world war were a crucial step toward giving a voice to workers – but they are still insufficient to create actual citizenship in firms. Even in the United States, where worker organising and union rights have been considerably suppressed, there is now a growing call to give labour investors the right to elect representatives with a supermajority within boards. Issues such as the choice of a CEO, setting major strategies and profit distribution are too important to be left to shareholders alone. A personal investment of labour; that is, of one’s mind and body, one’s health – one’s very life – ought to come with the collective right to validate or veto these decisions.

This crisis also shows that work must not be treated as a commodity, that market mechanisms alone cannot be left in charge of the choices that affect our communities most deeply. For years now, jobs and supplies in the health sector have been subject to the guiding principle of profitability; today, the pandemic is revealing the extent to which this principle has led us astray. Certain strategic and collective needs must simply be made immune to such considerations. The rising body count across the globe is a terrible reminder that some things must never be treated as commodities. Those who continue arguing to the contrary are imperilling us with their dangerous ideology. Profitability is an intolerable yardstick when it comes to our health and our life on this planet.

Decommodifying work means preserving certain sectors from the laws of the so-called free market; it also means ensuring that all people have access to work and the dignity it brings. One way to do this is with the creation of a job guarantee. Article 23 of the Universal Declaration of Human Rights reminds us that everyone has the right to work, to free choice of employment, to just and favourable conditions of work and to protection against unemployment. A job guarantee would not only offer each person access to work that allows them to live with dignity, it would also provide a crucial boost to our collective capability to meet the many pressing social and environmental challenges we currently face. Guaranteed employment would allow governments, working through local communities, to provide dignified work while contributing to the immense effort of fighting environmental collapse. Across the globe, as unemployment skyrockets, job guarantee programmes can play a crucial role in assuring the social, economic and environmental stability of our democratic societies.

The European Union must include such a project in its green deal. A review of the mission of the European Central Bank so that it could finance this programme, which is necessary to our survival, would give it a legitimate place in the life of each and every citizen of the EU. A countercyclical solution to the explosive unemployment on the way, this programme will prove a key contribution to the EU’s prosperity.

We should not react now with the same innocence as in 2008, when we responded to the economic crisis with an unconditional bailout that swelled public debt while demanding nothing in return. If our governments step in to save businesses in the current crisis, then businesses must step in as well, and meet the general basic conditions of democracy. In the name of the democratic societies they serve, and which constitute them, in the name of their responsibility to ensure our survival on this planet, our governments must make their aid to firms conditional on certain changes to their behaviours. In addition to hewing to strict environmental standards, firms must be required to fulfil certain conditions of democratic internal government. A successful transition from environmental destruction to environmental recovery and regeneration will be best led by democratically governed firms, in which the voices of those who invest their labour carry the same weight as those who invest their capital when it comes to strategic decisions.

We have had more than enough time to see what happens when labour, the planet, and capital gains are placed in the balance under the current system: labour and the planet always lose. Thanks to research from the University of Cambridge, we know that “achievable design changes” could reduce global energy consumption by 73%. But those changes are labour-intensive, and require choices that are often costlier over the short term. So long as firms are run in ways that seek to maximise profit for their capital investors alone, and in a world where energy is cheap, why make these changes? Despite the challenges of this transition, certain socially minded or co-operatively run businesses – pursuing hybrid goals that take financial, social and environmental considerations into account, and developing democratic internal governments – have already shown the potential of such positive impact.

Let us fool ourselves no longer: left to their own devices, most capital investors will not care for the dignity of labour investors, nor will they lead the fight against environmental catastrophe. Another option is available. Democratise firms; decommodify work; stop treating human beings as resources so that we can focus together on sustaining life on this planet.

Thursday 19 March 2020

Can computers ever replace the classroom?

With 850 million children worldwide shut out of schools, tech evangelists claim now is the time for AI education. But as the technology’s power grows, so too do the dangers that come with it. By Alex Beard in The Guardian 


For a child prodigy, learning didn’t always come easily to Derek Haoyang Li. When he was three, his father – a famous educator and author – became so frustrated with his progress in Chinese that he vowed never to teach him again. “He kicked me from here to here,” Li told me, moving his arms wide.

Yet when Li began school, aged five, things began to click. Five years later, he was selected as one of only 10 students in his home province of Henan to learn to code. At 16, Li beat 15 million kids to first prize in the Chinese Mathematical Olympiad. Among the offers that came in from the country’s elite institutions, he decided on an experimental fast-track degree at Jiao Tong University in Shanghai. It would enable him to study maths, while also covering computer science, physics and psychology.

In his first year at university, Li was extremely shy. He came up with a personal algorithm for making friends in the canteen, weighing data on group size and conversation topic to optimise the chances of a positive encounter. The method helped him to make friends, so he developed others: how to master English, how to interpret dreams, how to find a girlfriend. While other students spent the long nights studying, Li started to think about how he could apply his algorithmic approach to business. When he graduated at the turn of the millennium, he decided that he would make his fortune in the field he knew best: education.

In person, Li, who is now 42, displays none of the awkwardness of his university days. A successful entrepreneur who helped create a billion-dollar tutoring company, Only Education, he is charismatic, and given to making bombastic statements. “Education is one of the industries that Chinese people can do much better than western people,” he told me when we met last year. The reason, he explained, is that “Chinese people are more sophisticated”, because they are raised in a society in which people rarely say what they mean.

Li is the founder of Squirrel AI, an education company that offers tutoring delivered in part by humans, but mostly by smart machines, which he says will transform education as we know it. All over the world, entrepreneurs are making similarly extravagant claims about the power of online learning – and more and more money is flowing their way. In Silicon Valley, companies like Knewton and Alt School have attempted to personalise learning via tablet computers. In India, Byju’s, a learning app valued at $6 billion, has secured backing from Facebook and the Chinese internet behemoth Tencent, and now sponsors the country’s cricket team. In Europe, the British company Century Tech has signed a deal to roll out an intelligent teaching and learning platform in 700 Belgian schools, and dozens more across the UK. Their promises are being put to the test by the coronavirus pandemic – with 849 million children worldwide, as of March 2020, shut out of school, we’re in the midst of an unprecedented experiment in the effectiveness of online learning.

But it’s in China, where President Xi Jinping has called for the nation to lead the world in AI innovation by 2030, that the fastest progress is being made. In 2018 alone, Li told me, 60 new AI companies entered China’s private education market. Squirrel AI is part of this new generation of education start-ups. The company has already enrolled 2 million student users, opened 2,600 learning centres in 700 cities across China, and raised $150m from investors. The company’s chief AI officer is Tom Mitchell, the former dean of computer science at Carnegie Mellon University, and its payroll also includes a roster of top Chinese talent, including dozens of “super-teachers” – an official designation given to the most expert teachers in the country. In January, during the worst of the outbreak, it partnered with the Shanghai education bureau to provide free products to students throughout the city.

Though the most ambitious features have yet to be built into Squirrel AI’s system, the company already claims to have achieved impressive results. At its HQ in Shanghai, I saw footage of downcast human teachers who had been defeated by computers in televised contests to see who could teach a class of students more maths in a single week. Experiments with test audiences on different types of teaching videos have revealed that students learn more from a video fronted by a good-looking young presenter than from one featuring an older expert teacher.

When we met, Li rhapsodised about a future in which technology will enable children to learn 10 or even 100 times more than they do today. Wild claims like these, typical of the hyperactive education technology sector, tend to prompt two different reactions. The first is: bullshit – teaching and learning is too complex, too human a craft to be taken over by robots. The second reaction is the one I had when I first met Li in London a year ago: oh no, the robot teachers are coming for education as we know it. There is some truth to both reactions, but the real story of AI education, it turns out, is a whole lot more complicated.

At a Squirrel AI learning centre high in an office building in Hangzhou, a city 70 miles west of Shanghai, a cursor jerked tentatively over the words “Modern technology has opened our eyes to many things”. Slouched at a hexagonal table in one of the centre’s dozen or so small classrooms, Huang Zerong, 14, was halfway through a 90-minute English tutoring session. As he worked through activities on his MacBook, a young woman with the kindly manner of an older sister sat next to him, observing his progress. Below, the trees of Xixi National Wetland Park barely stirred in the afternoon heat.

A question popped up on Huang’s screen, on which a virtual dashboard showed his current English level, unit score and learning focus – along with the sleek squirrel icon of Squirrel AI.

“India is famous for ________ industry.”

Huang read through the three possible answers, choosing to ignore “treasure” and “typical” and type “t-e-c-h-n-o-l-o-g-y” into the box.

“T____ is changing fast,” came the next prompt.

Huang looked towards the young woman, then he punched out “e-c-h-n-o-l-o-g-y” from memory. She clapped her hands together. “Good!” she said, as another prompt flashed up.

Huang had begun his English course, which would last for one term, a few months earlier with a diagnostic test. He had logged into the Squirrel AI platform on his laptop and answered a series of questions designed to evaluate his mastery of more than 10,000 “knowledge points” (such as the distinction between “belong to” and “belong in”). Based on his answers, Squirrel AI’s software had generated a precise “learning map” for him, which would determine which texts he would read, which videos he would see, which tests he would take.
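The mechanics of a diagnostic-driven “learning map” can be sketched in a few lines. This is a hypothetical illustration only, not Squirrel AI’s actual algorithm: assume each knowledge point carries an estimated probability of mastery that is nudged up or down by diagnostic answers, and the map is simply the points still below a mastery threshold, weakest first.

```python
# Hypothetical sketch of an adaptive "learning map": estimate mastery of
# each knowledge point from diagnostic answers, then plan study for the
# weakest points first. Illustrative only -- not Squirrel AI's system.

def update_mastery(mastery, point, correct, rate=0.3):
    """Nudge the mastery estimate for one knowledge point toward 1 or 0."""
    target = 1.0 if correct else 0.0
    mastery[point] += rate * (target - mastery[point])
    return mastery

def learning_map(mastery, threshold=0.7):
    """Return the knowledge points still below the mastery threshold,
    weakest first -- these decide which texts, videos and tests come next."""
    weak = [p for p, m in mastery.items() if m < threshold]
    return sorted(weak, key=lambda p: mastery[p])

# Start every point at an uninformed 0.5 and feed in diagnostic answers.
mastery = {"belong to vs belong in": 0.5, "past perfect": 0.5, "articles": 0.5}
update_mastery(mastery, "past perfect", correct=True)
update_mastery(mastery, "belong to vs belong in", correct=False)

print(learning_map(mastery))  # weakest points first, in study order
```

A real system would model thousands of points and their prerequisite links, but the shape of the idea is the same: the map is regenerated each time the mastery estimates change.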

As he worked his way through the course – with the occasional support of the human tutor by his side, or one of the hundreds accessible via video link from Squirrel AI’s headquarters in Shanghai – its contents were automatically updated, as the system perceived that Huang had mastered new knowledge.

Huang said he was less distracted at the learning centre than he was in school, and felt at home with the technology. “It’s fun,” he told me after class, eyes fixed on his lap. “It’s much easier to concentrate on the system because it’s a device.” His scores in English also seemed to be improving, which is why his mother had just paid the centre a further 91,000 RMB (about £11,000) for another year of sessions: two semesters and two holiday courses in each of four subjects, adding up to around 400 hours in total.

“Anyone can learn,” Li explained to me a few days later over dinner in Beijing. You just needed the right environment and the right method, he said.

 
Derek Haoyang Li, the founder of Squirrel AI, at a web summit in Lisbon. Photograph: Cody Glenn/Sportsfile via Getty Images

The idea for Squirrel AI had come to him five years earlier. A decade at his tutoring company, Only Education, had left him frustrated. He had found that if you really wanted to improve a student’s progress, by far the best way was to find them a good teacher. But good teachers were rare, and turnover was high, with the best much in demand. Having to find and train 8,000 new teachers each year was limiting the amount students learned – and the growth of his business.

The answer, Li decided, was adaptive learning, where an intelligent computer-based system adjusts itself automatically to the best method for an individual learner. The idea of adaptive learning was not new, but Li was confident that developments in AI research meant that huge advances were now within reach. Rather than seeking to recreate the general intelligence of a human mind, researchers were getting impressive results by putting AI to work on specialised tasks. AI doctors are now equal to or better than humans at analysing X-rays for certain pathologies, while AI lawyers are carrying out legal research that would once have been done by clerks.

Following such breakthroughs, Li resolved to augment the efforts of his human teachers with a tireless, perfectly replicable virtual teacher. “Imagine a tutor who knows everything,” he told me, “and who knows everything about you.”

In Hangzhou, Huang was struggling with the word “hurry”. On his screen, a video appeared of a neatly groomed young teacher presenting a three-minute masterclass about how to use the word “hurry” and related phrases (“in a hurry” etc). Huang watched along.

Moments like these, where a short teaching input results in a small learning output, are known as “nuggets”. Li’s dream, which is the dream of adaptive education in general, is that AI will one day provide the perfect learning experience by ensuring that each of us get just the right chunk of content, delivered in the right way, at the right moment for our individual needs.

One way in which Squirrel AI improves its results is by constantly hoovering up data about its users. During Huang’s lesson, the system continuously tracked and recorded every one of his key strokes, cursor movements, right or wrong answers, texts read and videos watched. This data was time-stamped, to show where Huang had skipped over or lingered on a particular task. Each “nugget” (the video to watch or text to read) was then recommended to him based on an analysis of his data, accrued over hundreds of hours of work on Squirrel’s platform, and the data of 2 million other students. “Computer tutors can collect more teaching experience than a human would ever be able to collect, even in a hundred years of teaching,” Tom Mitchell, Squirrel AI’s chief AI officer, told me over the phone a few weeks later.

The speed and accuracy of Squirrel AI’s platform will depend, above all, on the number of student users it manages to sign up. More students equals more data. As each student works their way through a set of knowledge points, they leave a rich trail of information behind them. This data is then used to train the algorithms of the “thinking” part of the Squirrel AI system.

This is one reason why Squirrel AI has integrated its online business with bricks-and-mortar learning centres. Most children in China do not have access to laptops and high-speed internet. The learning centres mean the company can reach kids they otherwise would not be able to. One of the reasons Mitchell says he is glad to be working with Squirrel AI is the sheer volume of data that the company is gathering. “We’re going to have millions of natural examples,” he told me with excitement.

The dream of a perfect education delivered by machine is not new. For at least a century, generations of visionaries have predicted that the latest inventions will transform learning. “Motion pictures,” wrote the American inventor Thomas Edison in 1922, “are destined to revolutionise our schools.” The immersive power of movies would supposedly turbo-charge the learning process. Others made similar predictions for radio, television, computers and the internet. But despite small successes – the Open University, TV universities in China in the 1980s, or Khan Academy today, which reaches millions of students with its YouTube lessons – teachers have continued to teach, and learners to learn, in much the same way as before.

There are two reasons why today’s techno-evangelists are confident that AI can succeed where other technologies failed. First, they view AI not as a simple innovation but as a “general purpose technology” – that is, an epochal invention, like the printing press, which will fundamentally change the way we learn. Second, they believe its powers will shed new light on the working of the human brain – how repetitive practice grows expertise, for instance, or how interleaving (leaving gaps between learning different bits of material) can help us achieve mastery. As a result, we will be able to design adaptive algorithms to optimise the learning process.
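Interleaving, mentioned above, is easy to show concretely. A minimal sketch, assuming nothing beyond the standard definition of the technique: blocked practice drills one topic to completion before moving on, while interleaved practice cycles through the topics so that each return forces retrieval from memory.

```python
# A minimal sketch of interleaving: blocked practice (AAABBBCCC) versus
# interleaved practice (ABCABCABC). Hypothetical illustration of the idea
# from learning science, not any product's scheduling algorithm.

def blocked(topics, reps):
    """Drill each topic to completion before moving on."""
    return [t for t in topics for _ in range(reps)]

def interleaved(topics, reps):
    """Cycle through the topics, so each return forces recall."""
    return list(topics) * reps

print(blocked(["A", "B", "C"], 3))      # ['A', 'A', 'A', 'B', 'B', 'B', 'C', 'C', 'C']
print(interleaved(["A", "B", "C"], 3))  # ['A', 'B', 'C', 'A', 'B', 'C', 'A', 'B', 'C']
```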

UCL Institute of Education professor and machine learning expert Rose Luckin believes that one day we might see an AI-enabled “Fitbit for the mind” that would allow us to perceive in real-time what an individual knows, and how fast they are learning. The device would use sensors to gather data that forms a precise and ever-evolving map of a person’s abilities, which could be cross-referenced with insights into their motivational and nutritional state, say. This information would then be relayed to our minds, in real time, via a computer-brain interface. Facebook is already carrying out research in this field. Other firms are trialling eye tracking and helmets that monitor kids’ brainwaves.

The supposed AI education revolution is not here yet, and it is likely that the majority of projects will collapse under the weight of their own hype. The adaptive tutor Knewton was pulled from US schools under pressure from parents concerned about their kids’ privacy, while Silicon Valley’s Alt School, launched to much fanfare in 2015 by a former Google executive, has burned through $174m of funding without landing on a workable business model. But global school closures owing to coronavirus may yet relax public attitudes to online learning – many online education companies are offering their products for free to all children out of school.

Daisy Christodoulou, a London-based education expert, suggests that too much time is spent speculating on what AI might one day do, rather than focusing on what it already can. It’s estimated that there are 900 million young people around the world today who aren’t currently on track to learn what they need to thrive. To help those kids, AI education doesn’t have to be perfect – it just needs to slightly improve on what they currently have.

In their book The Future of the Professions, Richard and Daniel Susskind argue that we tend to conceive of occupations as embodied in a person – a butcher or baker, doctor or teacher. As a result, we think of them as ‘too human’ to be taken over by machines. But to an algorithm, or someone designing one, a profession appears as something else: a long list of individual tasks, many of which may be mechanised. In education, that might be marking or motivating, lecturing or lesson planning. The Susskinds believe that where a machine can do any one of these tasks better and more cheaply than the average human, automation of that bit of the job is inevitable.

The point, in short, is that AI doesn’t have to match the general intelligence of humans to be useful – or indeed powerful. This is both the promise of AI, and the danger it poses. “People’s behaviour is already being manipulated,” Luckin cautioned. Devices that might one day enhance our minds are already proving useful in shaping them.

In May 2018, a group of students at Hangzhou’s Middle School No 11 returned to their classroom to find three cameras newly installed above the blackboard; they would now be under full-time surveillance in their lessons. “Previously when I had classes that I didn’t like very much, I would be lazy and maybe take naps,” a student told the local news, “but I don’t dare be distracted after the cameras were installed.” The head teacher explained that the system could read seven states of emotion on students’ faces: neutral, disgust, surprise, anger, fear, happiness and sadness. If the kids slacked, the teacher was alerted. “It’s like a pair of mystery eyes are constantly watching me,” the student told reporters.

The previous year, China’s state council had launched a plan for the role AI could play in the future of the country. Underpinning it were a set of beliefs: that AI can “harmonise” Chinese society; that for it to do so, the government should store data on every citizen; that companies, not the state, were best positioned to innovate; that no company should refuse access to the government to its data. In education, the paper called for new adaptive online learning systems powered by big data, and “all-encompassing ubiquitous intelligent environments” – or smart schools.

At AIAED, a conference in Beijing hosted by Squirrel AI, which I attended in May 2019, classroom surveillance was one of the most discussed topics – but the speakers tended to be more concerned with the technical question of how to optimise the effectiveness of facial and bodily monitoring technologies in the classroom than with the darker implications of collecting unprecedented amounts of data about children. These ethical questions are becoming increasingly important, with schools from India to the US currently trialling facial monitoring. In the UK, AI is already being used for things like monitoring student wellbeing, automating assessment and even inspecting schools. Ben Williamson of the Centre for Research in Digital Education explains that this risks encoding biases or errors into the system and raises obvious privacy issues. “Now the school and university might be said to be studying their students too,” he told me.

While cameras in the classroom might outrage many parents in the UK or US, Lenora Chu, author of an influential book about the Chinese education system, argues that in China anything that improves a child’s learning tends to be viewed positively by parents. Squirrel AI even offers them the chance to watch footage of their child’s tutoring sessions. “There’s not that idea here that technology is bad,” said Chu, who moved from the US to Shanghai 10 years ago.

Rose Luckin suggested to me that a platform like Squirrel AI’s could one day mean an end to China’s notoriously punishing gaokao college entrance exam, which takes place for two days every June and largely determines a student’s education and employment prospects. If technology tracked a student throughout their school days, logging every keystroke, knowledge point and facial twitch, then the perfect record of their abilities on file could make such testing obsolete. Yet a system like this could also furnish the Chinese state – or a US tech company – with an eternal ledger of every step in a child’s development. It is not hard to imagine the grim uses to which this information could be put – for instance, if your behaviour in school was used to judge, or predict, your trustworthiness as an adult.

 
Students leaving a gaokao college entrance exam in Hangzhou, China. Photograph: Imaginechina/Rex/Shutterstock

On the one hand, said Chu, the CCP wants to use AI to better prepare young people for the future economy, and to close the achievement gap between rural and urban schools. To this end, companies like Squirrel AI receive government support, such as access to prime office space in top business districts. At the same time, the CCP, as the state council put it, sees AI as the “opportunity of the millennium” for “social construction”. That is, social control. The ability of AI to “grasp group cognition and psychological changes in a timely manner” through the surveillance of people’s movements, spending and other behaviours means it can play “an irreplaceable role in effectively maintaining social stability”.

The surveillance state is already penetrating deep into people’s lives. In 2019, there was a significant spike in China in the registration of patents for facial recognition and surveillance technology. All new mobile phones in China must now be registered via a facial scan. At the hotels I stayed in, Chinese citizens handed over their ID cards and checked in using face scanners. On the high-speed train to Beijing, the announcer repeatedly warned travellers to abide by the rules in order to maintain their personal credit. The notorious social credit system, which has been under trial in a handful of Chinese cities ahead of an expected nationwide roll out this year, awards or detracts points from an individual’s trustworthiness score, which affects their ability to travel and borrow money, among other things.

The result, explained Chu, is that all these interventions exert a subtle control over what people think and say. “You sense how the wind is blowing,” she told me. For the 12 million Muslim Uighurs in Xinjiang, however, that control is anything but subtle. Police checkpoints, complete with facial scanners, are ubiquitous. All mobile phones must have the Jingwang (“clean net”) app installed, allowing the government to monitor your movements and browsing. Iris and fingerprint scans are required to access health services. As many as 1.5 million Uighurs, including children, have been interned at some point in a re-education camp in the interests of “harmony”.

As we shape the use of AI in education, it’s likely that AI will shape us too. Jiang Xueqin, an education researcher from Chengdu, is sceptical that it will be as revolutionary as proponents claim. “Parents are paying for a drug,” he told me over the phone. He thought tutoring companies such as New Oriental, TAL and Squirrel AI were simply preying on parents’ anxieties about their kids’ performance in exams, and only succeeding because test preparation was the easiest element of education to automate – a closed system with limited variables that allowed for optimisation. Jiang wasn’t impressed with the progress made, or the way that it engaged every child in a desperate race to conform to the measures of success imposed by the system.

One student I met at the learning centre in Hangzhou, Zhang Hen, seemed to have a deep desire to learn about the world – she told me how she loved Qu Yuan, the romantic poet of the Warring States period, and how she was a fan of Harry Potter – but that wasn’t the reason she was here. Her goal was much simpler: she had come to the centre to boost her test scores. That may seem disappointing to idealists who want education to offer so much more, but Zhang was realistic about the demands of the Chinese education system. She had tough exams that she needed to pass. A scripted system that helped her efficiently master the content of the high school entrance exam was exactly what she wanted.

On stage at AIAED, Tom Mitchell had presented a more ambitious vision for adaptive learning that went far beyond helping students cram for mindless tests. Much of what he was most excited by was possible only in theory, but his enthusiasm was palpable. As appealing as his optimism was, though, I felt unconvinced. It was clear that adaptive technologies might improve certain types of learning, but it was equally obvious that they might narrow the aims of education and provide new tools to restrict our freedom.

Li insists that one day his system will help all young people to flourish creatively. Though he allows that for now an expert human teacher still holds an edge over a robot, he is confident that AI will soon be good enough to evaluate and reply to students’ oral responses. In less than five years, Li imagines training Squirrel AI’s platform with a list of every conceivable question and every possible response, weighting an algorithm to favour those labelled “creative”. “That thing is very easy to do,” he said, “like tagging cats.”

For Li, learning has always been like that – like tagging cats. But there’s a growing consensus that our brains don’t work like computers. Whereas a machine must crunch through millions of images to be able to identify a cat as the collection of “features” that are present only in those images labelled “cat” (two triangular ears, four legs, two eyes, fur, etc), a human child can grasp the concept of “cat” from just a few real life examples, thanks to our innate ability to understand things symbolically. Where machines can’t compute meaning, our minds thrive on it. The adaptive advantage of our brains is that they learn continually through all of our senses by interacting with the environment, our culture and, above all, other people.

Li told me that even if AI fulfilled all of its promise, human teachers would still play a crucial role helping kids learn social skills. At Squirrel AI’s HQ, which occupies three floors of a gleaming tower next door to Microsoft and Mobike in Shanghai, I met some of the company’s young teachers. Each sat at a work console in a vast office space, headphones on, eyes focused on a laptop screen, their desks decorated with plastic pot plants and waving cats. As they monitored the dashboards of up to six students simultaneously, the face of a young learner would appear on the screen, asking for help, either via a chat box or through a video link. The teachers reminded me of workers in the gig economy, the Uber drivers of education. When I logged on to try out a Squirrel English lesson for myself, the experience was good, but my tutor seemed to be teaching to a script.

Squirrel AI’s head of communications, Joleen Liang, showed me photos from a recent trip she had taken to the remote mountains of Henan, to deliver laptops to disadvantaged students. Without access to the adaptive technology, their education would be a little worse. It was a reminder that Squirrel AI’s platform, like those of its competitors worldwide, doesn’t have to be better than the best human teachers – to improve people’s lives, it just needs to be good enough, at the right price, to supplement what we’ve got. The problem is that it is hard to see technology companies stopping there. For better and worse, their ambitions are bigger. “We could make a lot of geniuses,” Li told me.

Sunday 1 September 2019

We know life is a game of chance, so why not draw lots to see who gets the job?

Sonia Sodha in The Guardian

Remove human bias from the interview process and the world might start to become a fairer place 


 
Interviews are an unreliable way of selecting the best person for the job. Photograph: Alamy


The sweaty palms, the swotting, the tricky question that prompts your heart to plummet: job interviews are no one’s idea of a good time. The other side of the equation is hardly fun either: days out of a busy schedule spent interviewing candidates, some of whom you know within a couple of minutes you would never offer a job.

Interviews are time-consuming for all involved. But we persist in doing them because recruitment decisions are some of the most important we take in the workplace and it follows we should invest time and energy into a robust recruitment process, right?

Wrong. It is long established that unstructured interviews are a notoriously unreliable way of selecting the best people for the job. This is perhaps unsurprising, when you consider the limited overlap between the skills needed to ace an interview and perform well day to day in a job or on a university course. And how many of us can honestly say we have been 100% truthful in a job interview?

Experimental studies show how unreliable interviewers are at accurately predicting someone’s capabilities. This is borne out on the rare occasions it gets tested in the real world. In the late 1970s, there was a doctor shortage in Texas and politicians instructed the state medical school to expand its intake after it had already selected 150 applicants by interview. So it took another 50 candidates who had reached the interview stage and been rejected, even though many of the stronger rejected candidates had already been snapped up by other medical schools. Researchers found these 50 students performed just as well as the original crop. Once the candidates got through the on-paper sift, they might as well have been drawn out of a hat.

Not only are interviews a generally bad way to spot talent, they are also remarkably good at smuggling in bias. There are the obvious implicit biases – sexism, racism, ageism, class discrimination – but others also exist. According to psychologist Ron Friedman, we tend to perceive good-looking people to be more competent, tall candidates as having greater leadership potential and deep-voiced candidates as more trustworthy. Interviews also encourage us to pick people who look like us, think similarly to us and with whom we strike up an easy rapport. The myth of the meritocratic interview allows all sorts of prejudice to flourish.

These days, huge effort goes into trying to unpick these biases in interviews. Vast sums are spent on unconscious bias training, but the evidence as to its effectiveness is mixed at best. It turns out training a person’s subconscious to think differently isn’t as easy as a half-day course.


An element of random selection might engender a bit more humility on the part of white, middle-class men

This is why it is no substitute for breaking down the structures that allow these biases to fester. For example, managers might only be allowed to make an appointment once they have a sufficiently diverse shortlist. I’ve long been a believer in quotas for underrepresented groups where improving diversity is happening at a glacial pace, for example, in Oxbridge admissions.

But a recent conversation with a friend who works at Nesta, a charitable foundation, got me thinking about whether we should ditch the pretence that we can accurately predict people’s potential. Her organisation is experimenting with a lottery to award funding to staff for innovative projects. Employees can put forward their own proposal. All of those that meet a minimum set of criteria go into a draw, with a number selected for funding at random.

My initial thought was that this sounded bonkers. But ponder it more and the logic is sound. Not only does it eliminate human bias, it encourages creativity and avoids groupthink, discouraging staff from self-censoring because they think their idea is one management simply wouldn’t go for. It chimes with those who have argued that at least some science funding should be awarded by lottery, because in the contemporary world of peer review and scoring grids, risky ideas with potentially huge pay-offs do not attract sufficient funding.
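The Nesta-style draw described above – a minimum bar, then pure chance – is simple enough to sketch in a few lines of Python. The proposal fields and criteria here are hypothetical, purely for illustration:

```python
import random

def lottery_select(proposals, meets_criteria, n_winners, seed=None):
    """Filter proposals against a minimum bar, then draw winners at random."""
    eligible = [p for p in proposals if meets_criteria(p)]
    rng = random.Random(seed)  # a fixed seed makes the draw reproducible and auditable
    return rng.sample(eligible, min(n_winners, len(eligible)))

# Hypothetical proposals and a hypothetical minimum bar:
# a budget cap and a named project lead.
proposals = [
    {"title": "Adaptive tutoring pilot", "budget": 40_000, "lead": "A"},
    {"title": "Open data platform", "budget": 90_000, "lead": "B"},
    {"title": "Unscoped moonshot", "budget": 250_000, "lead": None},
]
bar = lambda p: p["budget"] <= 100_000 and p["lead"] is not None

winners = lottery_select(proposals, bar, n_winners=1, seed=42)
```

The point of the design is that judgment is confined to the pass/fail bar; once a proposal clears it, no panel member’s taste, rapport or bias can influence the outcome.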

Random selection embodies a very different conception of fairness to meritocracy. But if we accept that what we call meritocracy is predominantly a way for advantage to self-replicate, why not at least experiment with lotteries instead? Big graduate recruiters or Oxbridge courses could set “on paper” entry criteria, select candidates who meet them at random and test whether there are any differences with candidates selected by interview.

I am willing to bet that, as observed in Texas, they would do no worse. And that there would be other benefits: diversity of thought as well as diversity of demography. Quotas are often criticised for their potential to undermine those individuals who benefit from positive discrimination; everyone knows they are there not purely on merit, or so the argument goes. An element of random selection might engender a bit more humility on the part of white, middle-class men; it goes alongside being honest that meritocracy is a convenient mask for privilege.

The reason such experiments remain unlikely is that studies show that even when people are aware of the fallibility of interviews, they sustain incredible self-belief in their ability to buck the trend. Not only that, there are a lot of powerful people with a stake in maintaining the illusion of meritocracy. Oxford and Cambridge want to preserve the misconception that their selection procedures embody the creme de la creme of today selecting the creme de la creme of tomorrow.

But if you find yourself balking at random selection, ask yourself this: have you ever formed a first impression that was wrong? It might go against the grain, but making more liberal use of lotteries might produce not just a fairer but a better and more diverse world.

Sunday 18 February 2018

Robots + Capital - The redundancy of human beings

Tabish Khair in The Hindu



Human beings are being made redundant by something they created. This is not a humanoid, robot, or computer but money as capital


We have all read stories, or seen films, about robots taking over. How, some time in the future, human beings will be marginalised, effectively replaced by machines, real or virtual. Common to these stories is the trope of the world taken over by something constructed of inert material, something mechanical and ‘heartless’. Also common to these stories is the idea that this will happen in the future.

What if I tell you that it has already happened? The future is here!


The culprit that humans created

In fact, the future has been building up for some decades. Roughly from the 1970s onwards, human beings have been increasingly made redundant by something they created, and that was once of use to them. Except that this ‘something’ is not a humanoid, robot, or even a computer; it is money. Or, more precisely, it is money as capital.

It was precipitated in 1973, when floating exchange rates were introduced. As economist Samir Amin notes, this was the logical result of the “concomitance of the U.S. deficit (leading to an excess of dollars available on the market) and the crisis of productive investment” which had produced “a mass of floating capital with no place to go.” With floating exchange rates, this excess of dollars could be plunged into sheer financial speculation across national borders. Financial speculation had always been a part of capitalism, but floating exchange rates dissolved the ties between capital, goods (trade and production) and labour. Financial speculation gradually floated free of human labour and even of money, as a medium of exchange. If I were a theorist of capitalism of the Adam Smith variety, I would say that capitalism, as we knew it (and still, erroneously, imagine it), started dying in 1973.

Amin goes on to stress the consequences of this: The ratio between hedging operations on the one side and production and international trading on the other rose to 28:1 by 2002 — “a disproportion that has been constantly growing for about the last twenty years and which has never been witnessed in the entire history of capitalism.” In other words, while world trade was valued at $2 billion around 2005, international capital movements were estimated at $50 billion.

How can there be capital movements in such excess of trade? Adam Smith would have failed to understand it. Karl Marx, who feared something like this, would have failed to imagine its scale.

This is what has happened: capital, which was always the abstract logic of money, has torn free of money as a medium of exchange. It no longer needs anything to exchange — and, hence, anyone to produce — in order to grow. (I am exaggerating, but only a bit.)

Theorists have argued that money is a social relation and a medium of exchange. That is not true of most capital today, which need not be ploughed back into any kind of production, trade, labour or even services. It can just be moved around as numbers. This is what day traders do. They do not look at company balance sheets or supply-demand statistics; they simply look at numbers on the computer screen.

This is what explains the dichotomy — most obvious in Donald Trump’s U.S., but not absent in places like the U.K., France or India — between the rhetoric of politicians and their actual actions. Politicians might come to power by promising to ‘drain the swamp’, but what they do, once assured of political power, is to partake in the monopoly of finance capital. This abstract capital is the ‘robot’ — call it Robital — that has marginalised human beings.

I am not making a Marxist point about capital under classical capitalism: despite its tendency towards exploitation, this was still largely invested in human labour. This is not the case any longer. Finance capital does not really need humans — apart from the 1% that own most of it, and another 30% or so of necessary service providers, including IT ones, whose numbers should be expected to steadily shrink.

Robotisation has already taken place: it is only its physical enactment (actual robots) that is still building up. Robots, as replacements for human beings, are the consequence of the abstract nature of finance capital. Robotised agriculture and office robots are a consequence of this. If most humans are redundant and most capital is in the hands of a 1% superclass, it is inevitable that this capital will be invested in creating machines that can make the elite even less dependent on other human beings.

The underlying cause

My American friends wonder about the blindness of Republican politicians who refuse to provide medical support to ordinary Americans and even dismantle the few supports that exist. My British friends talk of the slow spread of homelessness in the U.K. My Indian friends worry about matters such as thousands of farmer suicides. The working middle class crumbles in most countries.

Here is the underlying cause of all of this: the redundancy of human beings, because capital can now replicate itself, endlessly, without being forced back into human labour and trade. We are entering an age where visible genocides — as in Syria or Yemen — might be matched by invisible ones, such as the unremarked deaths of the homeless, the deprived and the marginal.

Robital is here.

Sunday 19 February 2017

‘From bad to worse’: Greece hurtles towards a final reckoning

Helena Smith in The Guardian


Dimitris Costopoulos stood, worry beads in hand, under brilliant blue skies in front of the Greek parliament. Wearing freshly pressed trousers, polished shoes and a smart winter jacket – “my Sunday best” – he had risen at 5am to get on the bus that would take him to Athens 200 miles away and to the great sandstone edifice on Syntagma Square. By his own admission, protests were not his thing.

At 71, the farmer rarely ventures from Proastio, his village on the fertile plains of Thessaly. “But everything is going wrong,” he lamented on Tuesday, his voice hoarse after hours of chanting anti-government slogans.


---For Background Knowledge read:

Yanis Varoufakis and the Greek Tragedy


----

“Before there was an order to things, you could build a house, educate your children, spoil your grandchildren. Now the cost of everything has gone up and with taxes you can barely afford to survive. Once I’ve paid for fuel, fertilisers and grains, there is really nothing left.”

Costopoulos is Greece’s Everyman; the human voice in a debt crisis that refuses to go away. Eight years after it first erupted, the drama shows every sign of reigniting, only this time in a new dark age of Trumpian politics, post-Brexit Europe, terror attacks and rise of the populist far right.


“I grow wheat,” said Costopoulos, holding out his wizened hands. “I am not in the building behind me. I don’t make decisions. Honestly, I can’t understand why things are going from bad to worse, why this just can’t be solved.”

As Greece hurtles towards another full-blown confrontation with the creditors keeping it afloat, and as tensions over stalled bailout negotiations mount, it is a question many are asking.

The country’s epic struggle to avert bankruptcy should have been settled when Athens received €110bn in aid – the biggest financial rescue programme in global history – from the EU and International Monetary Fund in May 2010. Instead, three bailouts later, it is still wrangling over the terms of the latest €86bn emergency loan package, with lenders also at loggerheads and diplomats no longer talking of a can, but rather a bomb, being kicked down the road. Default looms if a €7.4bn debt repayment – money owed mostly to the European Central Bank – is not honoured in July.




Farmer Dimitris Costopoulos in front of the Greek parliament in Athens. Photograph: Helena Smith for the Observer

Amid the uncertainty, volatility has returned to the markets. So, too, has fear, with an estimated €2.2bn being withdrawn from banks by panic-stricken depositors since the beginning of the year. With talk of Greece’s exit from the euro being heard again, farmers, trade unions and other sectors enraged by the eviscerating effects of austerity have once more come out in protest.

From his seventh-floor office on Mitropoleos, Makis Balaouras, an MP with the governing Syriza party, has a good view of the goings-on in Syntagma. Demonstrations – what the former trade unionist calls “the movement” – are a fine thing. “I wish people were out there mobilising more,” he sighed. “Protests are in our ideological and political DNA. They are important, they send a message.”

This is the irony of Syriza, the leftwing party catapulted to power on a ticket to “tear up” the hated bailout accords widely blamed for extraordinary levels of Greek unemployment, poverty and emigration. Two years into office it has instead overseen the most punishing austerity measures to date, slashing public-sector salaries and pensions, cutting services, agreeing to the biggest privatisation programme in European history and raising taxes on everything from cars to beer – all of which has been the price of the loans that have kept default at bay and Greece in the euro.

In the maelstrom the economy has improved, with Athens achieving a noticeable primary surplus last year, but the social crisis has intensified.

For men like Balaouras, who suffered appalling torture for his leftwing beliefs at the hands of the 1967-74 colonels’ regime, the policies have been galling. With the IMF and EU arguing over the country’s ability to reach tough fiscal targets when the current bailout expires in August next year, the demand for €3.6bn of more measures has left many in Syriza reeling. Without upfront legislation on the reforms, creditors say, they cannot conclude a compliance review on which the next tranche of bailout aid hangs.

“We had an agreement,” insisted Balaouras, looking despondently down at his desert boots. “We kept to our side of the deal, but the lenders haven’t kept to their side because now they are asking for more. We want the review to end. We want to go forward. This situation is in the interests of no one. But to get there we have to have an honourable compromise. Without that there will be a clash.”

It had been hoped that an agreement would be struck on Monday at what had been billed as a high-stakes meeting of euro area finance ministers. On Friday, EU officials announced that the deadline had been all but missed because there had been little convergence between the two sides.

With the Netherlands holding general elections next month, and France and Germany also heading to the polls in May and September, fears of the dispute becoming increasingly politicised have added to its complexity. Highlighting those concerns, the German chancellor, Angela Merkel, attempted to end the rift that has emerged between eurozone lenders and the IMF over the fund’s insistence that Greece can only begin to recover if its €320bn debt pile is reduced substantially.

In talks with Christine Lagarde, the Washington-based IMF’s managing director, Merkel agreed to discuss the issue during a further meeting between the two women to be held on Wednesday. The IMF has steadfastly refused to sign up to the latest bailout, arguing that Greek debt is not only unmanageable but on a trajectory to become explosive by 2030. Berlin, the biggest contributor of the €250bn Greece has so far received, says it will be unable to disburse further funds without the IMF on board.

The assumption is that the prime minister, Alexis Tsipras, will cave in, just as he did when the country came closest yet to leaving the euro at the height of the crisis in the summer of 2015. But the 41-year-old leader, like Syriza, has been pummelled in the polls. Persuading disaffected backbenchers to support more measures, and then selling them to a populace exhausted by repeated rounds of austerity, will be extremely difficult. Disappointment has increasingly given way to the death of hope – a sentiment reinforced by the realisation that Cyprus and other bailed-out countries, by contrast, are no longer under international supervision.

In his city centre office, the former finance minister Evangelos Venizelos pondered where Greece now found itself. “[We are] at the same point we were several years ago,” he joked. “The only difference is that anti-European sentiment is growing. What was once a very friendly country towards Europe is becoming increasingly less so, and with that comes a lot of danger, a lot of risk.”

When historians look back they, too, may conclude that Greece has expended a great deal of energy not moving forward at all.

The arc of crisis that has swept the country – coursing like a cancer through its body politic, devastating its public health system, shattering lives – has been an exercise in the absurd. The feat of pulling off the greatest fiscal adjustment in modern times has spawned a slump longer and deeper than the Great Depression, with the Greek economy shrinking more than 25% since the crisis began.

Even if the latest impasse is broken and a deal is reached with creditors soon, few believe that in a country of weak governance and institutions it will be easy to enforce. Political turbulence will almost certainly beckon; the prospect of “Grexit” will grow.

“Grexit is the last thing we want, but we may arrive at a point of serious dilemmas,” said Venizelos. “Whatever deal is reached will be very difficult to implement, but that notwithstanding, it is not the memoranda [the bailout accords] that caused the crisis. The crisis was born in Greece long before.”

Like every crisis government before it, Tsipras’s administration is acutely aware that salvation will come only when Greece can return to the markets and raise funds. What happens in the weeks ahead could determine if that is likely to happen at all.

Back in Syntagma, Costopoulos the good-natured farmer ponders what lies ahead. Like every Greek, he stands to be deeply affected. “All I know is that we are all being pushed,” he said, searching for the right words. “Pushed in the direction of somewhere very explosive, somewhere we do not want to be.”

Thursday 5 January 2017

Japanese company replaces office workers with artificial intelligence

Justin McCurry in The Guardian

A future in which human workers are replaced by machines is about to become a reality at an insurance firm in Japan, where more than 30 employees are being laid off and replaced with an artificial intelligence system that can calculate payouts to policyholders.

Fukoku Mutual Life Insurance believes it will increase productivity by 30% and see a return on its investment in less than two years. The firm said it would save about 140m yen (£1m) a year after the 200m yen (£1.4m) AI system is installed this month. Maintaining it will cost about 15m yen (£100,000) a year.
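The payback claim can be checked directly from the reported figures. The net-saving framing below is my own arithmetic, not Fukoku Mutual’s stated method:

```python
# Figures as reported, in millions of yen
install_cost = 200   # one-off cost of the AI system
gross_saving = 140   # annual saving
maintenance = 15     # annual running cost

net_saving = gross_saving - maintenance   # 125m yen per year
payback_years = install_cost / net_saving # 1.6 years, under the two-year claim
```

Even after maintenance is deducted, the system pays for itself in roughly a year and seven months, consistent with the firm’s “less than two years” estimate.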

The move is unlikely to be welcomed, however, by 34 employees who will be made redundant by the end of March.

The system is based on IBM’s Watson Explorer, which, according to the tech firm, possesses “cognitive technology that can think like a human”, enabling it to “analyse and interpret all of your data, including unstructured text, images, audio and video”.

The technology will be able to read tens of thousands of medical certificates and factor in the length of hospital stays, medical histories and any surgical procedures before calculating payouts, according to the Mainichi Shimbun.

While the use of AI will drastically reduce the time needed to calculate Fukoku Mutual’s payouts – which reportedly totalled 132,000 during the current financial year – the sums will not be paid until they have been approved by a member of staff, the newspaper said.

Japan’s shrinking, ageing population, coupled with its prowess in robot technology, makes it a prime testing ground for AI.

According to a 2015 report by the Nomura Research Institute, nearly half of all jobs in Japan could be performed by robots by 2035.

Dai-Ichi Life Insurance has already introduced a Watson-based system to assess payments - although it has not cut staff numbers - and Japan Post Insurance is interested in introducing a similar setup, the Mainichi said.

AI could soon be playing a role in the country’s politics. Next month, the economy, trade and industry ministry will introduce AI on a trial basis to help civil servants draft answers for ministers during cabinet meetings and parliamentary sessions.

The ministry hopes AI will help reduce the punishingly long hours bureaucrats spend preparing written answers for ministers.

If the experiment is a success, it could be adopted by other government agencies, according to the Jiji news agency.

If, for example, a question is asked about energy-saving policies, the AI system will provide civil servants with the relevant data and a list of pertinent debating points based on past answers to similar questions.
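The behaviour described – surfacing past answers relevant to a new question – amounts to a retrieval step. The ministry’s actual system is not described in this detail, so the keyword-overlap sketch below is entirely illustrative:

```python
def retrieve(question, past_answers, top_k=2):
    """Rank archived question/answer pairs by word overlap with a new question."""
    q_words = set(question.lower().split())
    scored = [(len(q_words & set(q.lower().split())), q, a)
              for q, a in past_answers]
    scored.sort(key=lambda t: t[0], reverse=True)
    # Keep only entries that share at least one word with the question
    return [(q, a) for score, q, a in scored[:top_k] if score > 0]

# Hypothetical archive of past parliamentary answers
archive = [
    ("What are the energy-saving policies?", "Subsidies for insulation..."),
    ("How is rice production regulated?", "Quotas set annually..."),
]
hits = retrieve("Which energy-saving measures exist?", archive)
```

A production system would use far more sophisticated matching, but the shape is the same: the machine narrows the archive to pertinent material, and the civil servant drafts the final answer – mirroring the human sign-off Fukoku Mutual retains for payouts.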

The march of Japan’s AI robots hasn’t been entirely glitch-free, however. At the end of last year a team of researchers abandoned an attempt to develop a robot intelligent enough to pass the entrance exam for the prestigious Tokyo University.

“AI is not good at answering the type of questions that require an ability to grasp meanings across a broad spectrum,” Noriko Arai, a professor at the National Institute of Informatics, told Kyodo news agency.