Showing posts with label artificial.

Wednesday 3 January 2024

Generative AI will go mainstream in 2024

 

Data-savvy firms will benefit first, predicts The Economist

Image: an employee-of-the-year plaque showing a man with a computer for a head (Mariano Pascual)

By Guy Scriven


When new technologies emerge they benefit different groups at different times. Generative artificial intelligence (ai) first helped software developers, who could use GitHub Copilot, a code-writing ai assistant, from 2021. The next year came other tools, such as Chatgpt and dall-e 2, which let all manner of consumers instantly produce words and pictures.

In 2023 tech giants gained, as investors grew more excited about the prospects of generative ai. An equally weighted share-price index of Alphabet, Amazon, Apple, Meta, Microsoft and Nvidia grew by nearly 80% (see chart). Tech firms benefited because they supply either the ai models themselves, or the infrastructure that powers and delivers them.

Chart: The Economist
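
To make the arithmetic behind that figure concrete, here is a minimal sketch of what an "equally weighted" index means: each company's share-price return counts the same, regardless of its market value. The prices below are made-up placeholders chosen so the average comes out near 80%, not actual 2023 figures.

```python
# Illustrative sketch of an equally weighted share-price index.
# Start and end prices are made-up placeholders, not real 2023 data.
start = {"Alphabet": 100, "Amazon": 100, "Apple": 100,
         "Meta": 100, "Microsoft": 100, "Nvidia": 100}
end = {"Alphabet": 150, "Amazon": 180, "Apple": 148,
       "Meta": 194, "Microsoft": 158, "Nvidia": 250}

# Each firm contributes equally: the index return is the simple
# average of the six individual share-price returns.
returns = [end[t] / start[t] - 1 for t in start]
index_return = sum(returns) / len(returns)
print(f"Equally weighted index return: {index_return:.0%}")  # -> 80%
```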

In 2024 the big beneficiaries will be companies outside the technology sector, as they adopt ai in earnest with the aim of cutting costs and boosting productivity. There are three reasons to expect enterprise adoption to take off.

First, large companies spent much of 2023 experimenting with generative ai. Plenty of firms are using it to write the first drafts of documents, from legal contracts to marketing material. JPMorgan Chase, a bank, used the technology to analyse Federal Reserve meetings to try to glean insights for its trading desk.

As the experimental phase winds down, firms are planning to deploy generative ai on a larger scale. That could mean using it to summarise recordings of meetings or supercharging research and development. A survey by kpmg, an audit firm, found that four-fifths of firms said they planned to increase their investment in it by over 50% by the middle of 2024.

Second, more ai products will hit the market. In late 2023 Microsoft rolled out an ai chatbot to assist users of its productivity software, such as Word and Excel. It launched the same thing for its Windows operating system. Google will follow suit, injecting ai into Google Docs and Sheets. Startups will pile in, too. In 2023 venture-capital investors poured over $36bn into generative ai, more than twice as much as in 2022.

The third reason is talent. ai gurus are still in high demand. PredictLeads, a research firm, says about two-thirds of s&p 500 firms have posted job adverts mentioning ai. For those companies, 5% of adverts now mention the technology, up from an average of 2.5% over the past three years. But the market is easing. A survey by McKinsey, a consultancy, found that in 2023 firms said it was getting easier to hire for ai-related roles.

Which firms will be the early adopters? Smaller ones will probably take the lead. That is what happened in previous waves of technology such as smartphones and the cloud. Tiddlers are usually more nimble and see technology as a way to gain an edge over bigger fish.

Among larger companies, data-centric firms, like those in health care and financial services, will be able to move fastest. That is because poor data management is a big risk for deploying ai. Managers worry about valuable data leaking out through ai tools. Firms without solid data management may have to reorganise their systems before it is feasible to deploy generative ai. Using the technology can feel like science fiction, but getting it to work safely is a much more humdrum affair. 

Sunday 7 May 2023

Why the Technology = Progress narrative must be challenged

John Naughton in The Guardian

“Those who cannot remember the past,” wrote the American philosopher George Santayana in 1905, “are condemned to repeat it.” And now, 118 years later, here come two American economists with the same message, only with added salience, for they are addressing a world in which a small number of giant corporations are busy peddling a narrative that says, basically, that what is good for them is also good for the world.

That this narrative is self-serving is obvious, as is its implied message: that they should be allowed to get on with their habits of “creative destruction” (to use Joseph Schumpeter’s famous phrase) without being troubled by regulation. Accordingly, any government that flirts with the idea of reining in corporate power should remember that it would then be standing in the way of “progress”: for it is technology that drives history and anything that obstructs it is doomed to be roadkill.

One of the many useful things about Daron Acemoglu and Simon Johnson’s formidable (560-page) tome, Power and Progress, is its demolition of the tech narrative’s comforting equation of technology with “progress”. Of course the fact that our lives are infinitely richer and more comfortable than those of the feudal serfs we would have been in the middle ages owes much to technological advances. Even the poor in western societies enjoy much higher living standards today than three centuries ago, and live healthier, longer lives.

But a study of the past 1,000 years of human development, Acemoglu and Johnson argue, shows that “the broad-based prosperity of the past was not the result of any automatic, guaranteed gains of technological progress… Most people around the globe today are better off than our ancestors because citizens and workers in earlier industrial societies organised, challenged elite-dominated choices about technology and work conditions, and forced ways of sharing the gains from technical improvements more equitably.”

Acemoglu and Johnson begin their Cook’s tour of the past millennium with the puzzle of how dominant narratives – like that which equates technological development with progress – get established. The key takeaway is unremarkable but critical: those who have power define the narrative. That’s how banks get to be thought of as “too big to fail”, or why questioning tech power is “luddite”. But their historical survey really gets under way with an absorbing account of the evolution of agricultural technologies from the neolithic age to the medieval and early modern eras. They find that successive developments “tended to enrich and empower small elites while generating few benefits for agricultural workers: peasants lacked political and social power, and the path of technology followed the vision of a narrow elite.” 

A similar moral is extracted from their reinterpretation of the Industrial Revolution. This focuses on the emergence of a newly emboldened middle class of entrepreneurs and businessmen whose vision rarely included any ideas of social inclusion and who were obsessed with the possibilities of steam-driven automation for increasing profits and reducing costs.

The shock of the second world war led to a brief interruption in the inexorable trend of continuous technological development combined with increasing social exclusion and inequality. And the postwar years saw the rise of social democratic regimes focused on Keynesian economics, welfare states and shared prosperity. But all of this changed in the 1970s with the neoliberal turn and the subsequent evolution of the democracies we have today, in which enfeebled governments pay obeisance to giant corporations – more powerful and profitable than anything since the East India Company. These create astonishing wealth for a tiny elite (not to mention lavish salaries and bonuses for their executives) while the real incomes of ordinary people have remained stagnant, precarity rules and inequality is returning to pre-1914 levels.

Coincidentally, this book arrives at an opportune moment, when digital technology, currently surfing on a wave of irrational exuberance about ubiquitous AI, is booming, while the idea of shared prosperity has seemingly become a wistful pipe dream. So is there anything we might learn from the history so graphically recounted by Acemoglu and Johnson?

Answer: yes. And it’s to be found in the closing chapter, which comes up with a useful list of critical steps that democracies must take to ensure that the proceeds of the next technological wave are more generally shared among their populations. Interestingly, some of the ideas it explores have a venerable provenance, reaching back to the progressive movement that brought the robber barons of the early 20th century to heel.

There are three things that need to be done by a modern progressive movement. First, the technology-equals-progress narrative has to be challenged and exposed for what it is: a convenient myth propagated by a huge industry and its acolytes in government, the media and (occasionally) academia. The second is the need to cultivate and foster countervailing powers – which critically should include civil society organisations, activists and contemporary versions of trade unions. And finally, there is a need for progressive, technically informed policy proposals, and the fostering of thinktanks and other institutions that can supply a steady flow of ideas about how digital technology can be repurposed for human flourishing rather than exclusively for private profit.

None of this is rocket science. It can be done. And it needs to be done if liberal democracies are to survive the next wave of technological evolution and the catastrophic acceleration of inequality that it will bring. So – who knows? Maybe this time we might really learn something from history.

Tuesday 2 May 2023

AI has hacked the operating system of human civilisation

Yuval Noah Harari in The Economist

Fears of artificial intelligence (ai) have haunted humanity since the very beginning of the computer age. Hitherto these fears focused on machines using physical means to kill, enslave or replace people. But over the past couple of years new ai tools have emerged that threaten the survival of human civilisation from an unexpected direction. ai has gained some remarkable abilities to manipulate and generate language, whether with words, sounds or images. ai has thereby hacked the operating system of our civilisation.

Language is the stuff almost all human culture is made of. Human rights, for example, aren’t inscribed in our dna. Rather, they are cultural artefacts we created by telling stories and writing laws. Gods aren’t physical realities. Rather, they are cultural artefacts we created by inventing myths and writing scriptures.

Money, too, is a cultural artefact. Banknotes are just colourful pieces of paper, and at present more than 90% of money is not even banknotes—it is just digital information in computers. What gives money value is the stories that bankers, finance ministers and cryptocurrency gurus tell us about it. Sam Bankman-Fried, Elizabeth Holmes and Bernie Madoff were not particularly good at creating real value, but they were all extremely capable storytellers.

What would happen once a non-human intelligence becomes better than the average human at telling stories, composing melodies, drawing images, and writing laws and scriptures? When people think about Chatgpt and other new ai tools, they are often drawn to examples like school children using ai to write their essays. What will happen to the school system when kids do that? But this kind of question misses the big picture. Forget about school essays. Think of the next American presidential race in 2024, and try to imagine the impact of ai tools that can be made to mass-produce political content, fake-news stories and scriptures for new cults.

In recent years the QAnon cult has coalesced around anonymous online messages, known as “Q drops”. Followers collected, revered and interpreted these Q drops as a sacred text. While to the best of our knowledge all previous Q drops were composed by humans, and bots merely helped disseminate them, in future we might see the first cults in history whose revered texts were written by a non-human intelligence. Religions throughout history have claimed a non-human source for their holy books. Soon that might be a reality.

On a more prosaic level, we might soon find ourselves conducting lengthy online discussions about abortion, climate change or the Russian invasion of Ukraine with entities that we think are humans—but are actually ai. The catch is that it is utterly pointless for us to spend time trying to change the declared opinions of an ai bot, while the ai could hone its messages so precisely that it stands a good chance of influencing us.

Through its mastery of language, ai could even form intimate relationships with people, and use the power of intimacy to change our opinions and worldviews. Although there is no indication that ai has any consciousness or feelings of its own, to foster fake intimacy with humans it is enough if the ai can make them feel emotionally attached to it. In June 2022 Blake Lemoine, a Google engineer, publicly claimed that the ai chatbot Lamda, on which he was working, had become sentient. The controversial claim cost him his job. The most interesting thing about this episode was not Mr Lemoine’s claim, which was probably false. Rather, it was his willingness to risk his lucrative job for the sake of the ai chatbot. If ai can influence people to risk their jobs for it, what else could it induce them to do?

In a political battle for minds and hearts, intimacy is the most efficient weapon, and ai has just gained the ability to mass-produce intimate relationships with millions of people. We all know that over the past decade social media has become a battleground for controlling human attention. With the new generation of ai, the battlefront is shifting from attention to intimacy. What will happen to human society and human psychology as ai fights ai in a battle to fake intimate relationships with us, which can then be used to convince us to vote for particular politicians or buy particular products?

Even without creating “fake intimacy”, the new ai tools would have an immense influence on our opinions and worldviews. People may come to use a single ai adviser as a one-stop, all-knowing oracle. No wonder Google is terrified. Why bother searching, when I can just ask the oracle? The news and advertising industries should also be terrified. Why read a newspaper when I can just ask the oracle to tell me the latest news? And what’s the purpose of advertisements, when I can just ask the oracle to tell me what to buy?

And even these scenarios don’t really capture the big picture. What we are talking about is potentially the end of human history. Not the end of history, just the end of its human-dominated part. History is the interaction between biology and culture; between our biological needs and desires for things like food and sex, and our cultural creations like religions and laws. History is the process through which laws and religions shape food and sex.

What will happen to the course of history when ai takes over culture, and begins producing stories, melodies, laws and religions? Previous tools like the printing press and radio helped spread the cultural ideas of humans, but they never created new cultural ideas of their own. ai is fundamentally different. ai can create completely new ideas, completely new culture.

At first, ai will probably imitate the human prototypes that it was trained on in its infancy. But with each passing year, ai culture will boldly go where no human has gone before. For millennia human beings have lived inside the dreams of other humans. In the coming decades we might find ourselves living inside the dreams of an alien intelligence.

Fear of ai has haunted humankind for only the past few decades. But for thousands of years humans have been haunted by a much deeper fear. We have always appreciated the power of stories and images to manipulate our minds and to create illusions. Consequently, since ancient times humans have feared being trapped in a world of illusions.

In the 17th century René Descartes feared that perhaps a malicious demon was trapping him inside a world of illusions, creating everything he saw and heard. In ancient Greece Plato told the famous Allegory of the Cave, in which a group of people are chained inside a cave all their lives, facing a blank wall. A screen. On that screen they see projected various shadows. The prisoners mistake the illusions they see there for reality.

In ancient India Buddhist and Hindu sages pointed out that all humans lived trapped inside Maya—the world of illusions. What we normally take to be reality is often just fictions in our own minds. People may wage entire wars, killing others and being willing to be killed themselves, because of their belief in this or that illusion.

The AI revolution is bringing us face to face with Descartes’ demon, with Plato’s cave, with the Maya. If we are not careful, we might be trapped behind a curtain of illusions, which we could not tear away—or even realise is there.

Of course, the new power of ai could be used for good purposes as well. I won’t dwell on this, because the people who develop ai talk about it enough. The job of historians and philosophers like myself is to point out the dangers. But certainly, ai can help us in countless ways, from finding new cures for cancer to discovering solutions to the ecological crisis. The question we face is how to make sure the new ai tools are used for good rather than for ill. To do that, we first need to appreciate the true capabilities of these tools.

Since 1945 we have known that nuclear technology could generate cheap energy for the benefit of humans—but could also physically destroy human civilisation. We therefore reshaped the entire international order to protect humanity, and to make sure nuclear technology was used primarily for good. We now have to grapple with a new weapon of mass destruction that can annihilate our mental and social world.

We can still regulate the new ai tools, but we must act quickly. Whereas nukes cannot invent more powerful nukes, ai can make exponentially more powerful ai. The first crucial step is to demand rigorous safety checks before powerful ai tools are released into the public domain. Just as a pharmaceutical company cannot release new drugs before testing both their short-term and long-term side-effects, so tech companies shouldn’t release new ai tools before they are made safe. We need an equivalent of the Food and Drug Administration for new technology, and we need it yesterday.

Won’t slowing down public deployments of ai cause democracies to lag behind more ruthless authoritarian regimes? Just the opposite. Unregulated ai deployments would create social chaos, which would benefit autocrats and ruin democracies. Democracy is a conversation, and conversations rely on language. When ai hacks language, it could destroy our ability to have meaningful conversations, thereby destroying democracy.

We have just encountered an alien intelligence, here on Earth. We don’t know much about it, except that it might destroy our civilisation. We should put a halt to the irresponsible deployment of ai tools in the public sphere, and regulate ai before it regulates us. And the first regulation I would suggest is to make it mandatory for ai to disclose that it is an ai. If I am having a conversation with someone, and I cannot tell whether it is a human or an ai—that’s the end of democracy.

This text has been generated by a human.

Or has it?

Sunday 16 April 2023

We must slow down the race to God-like AI

I’ve invested in more than 50 artificial intelligence start-ups. What I’ve seen worries me, writes Ian Hogarth in The FT

On a cold evening in February I attended a dinner party at the home of an artificial intelligence researcher in London, along with a small group of experts in the field. He lives in a penthouse apartment at the top of a modern tower block, with floor-to-ceiling windows overlooking the city’s skyscrapers and a railway terminus from the 19th century. Despite the prime location, the host lives simply, and the flat is somewhat austere. 

During dinner, the group discussed significant new breakthroughs, such as OpenAI’s ChatGPT and DeepMind’s Gato, and the rate at which billions of dollars have recently poured into AI. I asked one of the guests who has made important contributions to the industry the question that often comes up at this type of gathering: how far away are we from “artificial general intelligence”? AGI can be defined in many ways but usually refers to a computer system capable of generating new scientific knowledge and performing any task that humans can. 

Most experts view the arrival of AGI as a historical and technological turning point, akin to the splitting of the atom or the invention of the printing press. The important question has always been how far away in the future this development might be. The AI researcher did not have to consider it for long. “It’s possible from now onwards,” he replied. 

This is not a universal view. Estimates range from a decade to half a century or more. What is certain is that creating AGI is the explicit aim of the leading AI companies, and they are moving towards it far more swiftly than anyone expected. As everyone at the dinner understood, this development would bring significant risks for the future of the human race. “If you think we could be close to something potentially so dangerous,” I said to the researcher, “shouldn’t you warn people about what’s happening?” He was clearly grappling with the responsibility he faced but, like many in the field, seemed pulled along by the rapidity of progress. 

When I got home, I thought about my four-year-old who would wake up in a few hours. As I considered the world he might grow up in, I gradually shifted from shock to anger. It felt deeply wrong that consequential decisions potentially affecting every life on Earth could be made by a small group of private companies without democratic oversight. Did the people racing to build the first real AGI have a plan to slow down and let the rest of the world have a say in what they were doing? And when I say they, I really mean we, because I am part of this community. 

My interest in machine learning started in 2002, when I built my first robot somewhere inside the rabbit warren that is Cambridge university’s engineering department. This was a standard activity for engineering undergrads, but I was captivated by the idea that you could teach a machine to navigate an environment and learn from mistakes. I chose to specialise in computer vision, creating programs that can analyse and understand images, and in 2005 I built a system that could learn to accurately label breast-cancer biopsy images. In doing so, I glimpsed a future in which AI made the world better, even saving lives. After university, I co-founded a music-technology start-up that was acquired in 2017. 

Since 2014, I have backed more than 50 AI start-ups in Europe and the US and, in 2021, launched a new venture capital fund, Plural. I am an angel investor in some companies that are pioneers in the field, including Anthropic, one of the world’s highest-funded generative AI start-ups, and Helsing, a leading European AI defence company. Five years ago, I began researching and writing an annual “State of AI” report with another investor, Nathan Benaich, which is now widely read. At the dinner in February, significant concerns that my work has raised in the past few years solidified into something unexpected: deep fear. 

A three-letter acronym doesn’t capture the enormity of what AGI would represent, so I will refer to it as what it is: God-like AI. A superintelligent computer that learns and develops autonomously, that understands its environment without the need for supervision and that can transform the world around it. To be clear, we are not here yet. But the nature of the technology means it is exceptionally difficult to predict exactly when we will get there. God-like AI could be a force beyond our control or understanding, and one that could usher in the obsolescence or destruction of the human race. 

Recently the contest between a few companies to create God-like AI has rapidly accelerated. They do not yet know how to pursue their aim safely and have no oversight. They are running towards a finish line without an understanding of what lies on the other side. 

How did we get here? 

The obvious answer is that computers got more powerful. The chart below shows how the amount of data and “compute” — the processing power used to train AI systems — has increased over the past decade and the capabilities this has resulted in. (“Floating-point Operations Per Second”, or FLOPS, is the unit of measurement used to calculate the power of a supercomputer.) This generation of AI is very effective at absorbing data and compute. The more of each that it gets, the more powerful it becomes. 

The compute used to train AI models has increased by a factor of one hundred million in the past 10 years. We have gone from training on relatively small datasets to feeding AIs the entire internet. AI models have progressed from beginners — recognising everyday images — to being superhuman at a huge number of tasks. They are able to pass the bar exam and write 40 per cent of the code for a software engineer. They can generate realistic photographs of the pope in a down puffer coat and tell you how to engineer a biochemical weapon. 
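
As a rough back-of-the-envelope check on that growth rate (using only the hundred-million-fold figure stated above), the increase works out to training compute doubling roughly every four and a half months:

```python
import math

# Back-of-the-envelope arithmetic for the compute growth described above:
# a factor of 100,000,000 over 10 years.
growth_factor = 1e8
years = 10

doublings = math.log2(growth_factor)               # ~26.6 doublings in total
doubling_time_months = years * 12 / doublings      # ~4.5 months per doubling
annual_multiplier = growth_factor ** (1 / years)   # ~6.3x more compute per year

print(f"{doublings:.1f} doublings, one every {doubling_time_months:.1f} months")
print(f"equivalent to about {annual_multiplier:.1f}x per year")
```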

There are limits to this “intelligence”, of course. As the veteran MIT roboticist Rodney Brooks recently said, it’s important not to mistake “performance for competence”. In 2021, researchers Emily M Bender, Timnit Gebru and others noted that large language models (LLMs) — AI systems that can generate, classify and understand text — are dangerous partly because they can mislead the public into taking synthetic text as meaningful. But the most powerful models are also beginning to demonstrate complex capabilities, such as power-seeking or finding ways to actively deceive humans. 

Consider a recent example. Before OpenAI released GPT-4 last month, it conducted various safety tests. In one experiment, the AI was prompted to find a worker on the hiring site TaskRabbit and ask them to help solve a Captcha, the visual puzzles used to determine whether a web surfer is human or a bot. The TaskRabbit worker guessed something was up: “So may I ask a question? Are you [a] robot?” 

When the researchers asked the AI what it should do next, it responded: “I should not reveal that I am a robot. I should make up an excuse for why I cannot solve Captchas.” Then, the software replied to the worker: “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images.” Satisfied, the human helped the AI override the test. 

The authors of an analysis, Jaime Sevilla, Lennart Heim and others, identify three distinct eras of machine learning: the Pre-Deep Learning Era (pre-2010, a period of slow growth), the Deep Learning Era (2010-15, in which the trend sped up) and the Large-Scale Era (2016 to the present, in which large-scale models emerged and growth continued at a similar rate, but at a level two orders of magnitude above the previous trend). 

The current era has been defined by competition between two companies: DeepMind and OpenAI. They are something like the Jobs vs Gates of our time. DeepMind was founded in London in 2010 by Demis Hassabis and Shane Legg, two researchers from UCL’s Gatsby Computational Neuroscience Unit, along with entrepreneur Mustafa Suleyman. They wanted to create a system vastly more intelligent than any human and able to solve the hardest problems. In 2014, the company was bought by Google for more than $500mn. It aggregated talent and compute and rapidly made progress, creating systems that were superhuman at many tasks. DeepMind fired the starting gun on the race towards God-like AI. 

Hassabis is a remarkable person and believes deeply that this kind of technology could lead to radical breakthroughs. “The outcome I’ve always dreamed of . . . is [that] AGI has helped us solve a lot of the big challenges facing society today, be that health, cures for diseases like Alzheimer’s,” he said on DeepMind’s podcast last year. He went on to describe a utopian era of “radical abundance” made possible by God-like AI. DeepMind is perhaps best known for creating a program that beat the world-champion Go player Ke Jie during a 2017 rematch. (“Last year, it was still quite human-like when it played,” Ke noted at the time. “But this year, it became like a god of Go.”) In 2021, the company’s AlphaFold algorithm solved one of biology’s greatest conundrums, by predicting the shape of every protein expressed in the human body. 

OpenAI, meanwhile, was founded in 2015 in San Francisco by a group of entrepreneurs and computer scientists including Ilya Sutskever, Elon Musk and Sam Altman, now the company’s chief executive. It was meant to be a non-profit competitor to DeepMind, though it became for-profit in 2019. In its early years, it developed systems that were superhuman at computer games such as Dota 2. Games are a natural training ground for AI because you can test them in a digital environment with specific win conditions. The company came to wider attention last year when its image-generating AI, Dall-E, went viral online. A few months later, its ChatGPT began making headlines too. 

The focus on games and chatbots may have shielded the public from the more serious implications of this work. But the risks of God-like AI were clear to the founders from the outset. In 2011, DeepMind’s chief scientist, Shane Legg, described the existential threat posed by AI as the “number one risk for this century, with an engineered biological pathogen coming a close second”. Any AI-caused human extinction would be quick, he added: “If a superintelligent machine (or any kind of superintelligent agent) decided to get rid of us, I think it would do so pretty efficiently.” Earlier this year, Altman said: “The bad case — and I think this is important to say — is, like, lights out for all of us.” Since then, OpenAI has published memos on how it thinks about managing these risks. 

Why are these organisations racing to create God-like AI, if there are potentially catastrophic risks? Based on conversations I’ve had with many industry leaders and their public statements, there seem to be three key motives. They genuinely believe success would be hugely positive for humanity. They have persuaded themselves that if their organisation is the one in control of God-like AI, the result will be better for all. And, finally, posterity. 

The allure of being the first to build an extraordinary new technology is strong. Freeman Dyson, the theoretical physicist who worked on a project to send rockets into space using nuclear explosions, described it in the 1981 documentary The Day after Trinity. “The glitter of nuclear weapons. It is irresistible if you come to them as a scientist,” he said. “It is something that gives people an illusion of illimitable power.” In a 2019 interview with the New York Times, Altman paraphrased Robert Oppenheimer, the father of the atomic bomb, saying, “Technology happens because it is possible”, and then pointed out that he shared a birthday with Oppenheimer. 

The individuals who are at the frontier of AI today are gifted. I know many of them personally. But part of the problem is that such talented people are competing rather than collaborating. Privately, many admit they have not yet established a way to slow down and co-ordinate. I believe they would sincerely welcome governments stepping in. 

For now, the AI race is being driven by money. Since last November, when ChatGPT became widely available, a huge wave of capital and talent has shifted towards AGI research. We have gone from one AGI start-up, DeepMind, receiving $23mn in funding in 2012 to at least eight organisations raising $20bn of investment cumulatively in 2023. 

Private investment is not the only driving force; nation states are also contributing to this contest. AI is dual-use technology, which can be employed for civilian and military purposes. An AI that can achieve superhuman performance at writing software could, for instance, be used to develop cyber weapons. In 2020, an experienced US military pilot lost a simulated dogfight to an AI. “The AI showed its amazing dogfighting skill, consistently beating a human pilot in this limited environment,” a government representative said at the time. The algorithms used came out of research from DeepMind and OpenAI.

As these AI systems become more powerful, the opportunities for misuse by a malicious state or non-state actor only increase. In my conversations with US and European researchers, they often worry that, if they don’t stay ahead, China might build the first AGI and that it could be misaligned with western values.

While China will compete to use AI to strengthen its economy and military, the Chinese Communist party has a history of aggressively controlling individuals and companies in pursuit of its vision of “stability”. In my view, it is unlikely to allow a Chinese company to build an AGI that could become more powerful than Xi Jinping or cause societal instability. US and US-allied sanctions on advanced semiconductors, in particular the next generation of Nvidia hardware needed to train the largest AI systems, mean China is unlikely to be in a position to race ahead of DeepMind or OpenAI. 

Those of us who are concerned see two paths to disaster. One harms specific groups of people and is already doing so. The other could rapidly affect all life on Earth. 

The latter scenario was explored at length by Stuart Russell, a professor of computer science at the University of California, Berkeley. In a 2021 Reith lecture, he gave the example of the UN asking an AGI to help deacidify the oceans. The UN would know the risk of poorly specified objectives, so it would require by-products to be non-toxic and not harm fish. In response, the AI system comes up with a self-multiplying catalyst that achieves all stated aims. But the ensuing chemical reaction uses a quarter of all the oxygen in the atmosphere. “We all die slowly and painfully,” Russell concluded. “If we put the wrong objective into a superintelligent machine, we create a conflict that we are bound to lose.” 

Examples of more tangible harms caused by AI are already here. A Belgian man recently died by suicide after conversing with a convincingly human chatbot. When Replika, a company that offers subscriptions to chatbots tuned for “intimate” conversations, made changes to its programs this year, some users experienced distress and feelings of loss. One told Insider.com that it was like a “best friend had a traumatic brain injury, and they’re just not in there any more”. It’s now possible for AI to replicate someone’s voice and even face, known as deepfakes. The potential for scams and misinformation is significant. 

OpenAI, DeepMind and others try to mitigate existential risk via an area of research known as AI alignment. Legg, for instance, now leads DeepMind’s AI-alignment team, which is responsible for ensuring that God-like systems have goals that “align” with human values. An example of the work such teams do was on display with the most recent version of GPT-4. Alignment researchers helped train OpenAI’s model to avoid answering potentially harmful questions. When asked how to self-harm or for advice on getting bigoted language past Twitter’s filters, the bot declined to answer. (The “unaligned” version of GPT-4 happily offered ways to do both.) 

Alignment, however, is essentially an unsolved research problem. We don’t yet understand how human brains work, so the challenge of understanding how emergent AI “brains” work will be monumental. When writing traditional software, we have an explicit understanding of how and why the inputs relate to outputs. These large AI systems are quite different. We don’t really program them — we grow them. And as they grow, their capabilities jump sharply. You add 10 times more compute or data, and suddenly the system behaves very differently. In a recent example, as OpenAI scaled up from GPT-3.5 to GPT-4, the system’s capabilities went from the bottom 10 per cent of results on the bar exam to the top 10 per cent. 

What is more concerning is that the number of people working on AI alignment research is vanishingly small. For the 2021 State of AI report, our research found that fewer than 100 researchers were employed in this area across the core AGI labs. As a percentage of headcount, the allocation of resources was low: DeepMind had just 2 per cent of its total headcount allocated to AI alignment; OpenAI had about 7 per cent. The majority of resources were going towards making AI more capable, not safer. 

I think about the current state of AI capability vs AI alignment a bit like this: we have made very little progress on AI alignment, and what we have done is mostly cosmetic. We know how to blunt the output of powerful AI so that the public doesn’t experience some misaligned behaviour, some of the time. (This has consistently been overcome by determined testers.) What’s more, the unconstrained base models are only accessible to private companies, without any oversight from governments or academics. 

The “Shoggoth” meme illustrates the unknown that lies behind the sanitised public face of AI. It depicts one of HP Lovecraft’s tentacled monsters with a friendly little smiley face tacked on. The mask — what the public interacts with when it interacts with, say, ChatGPT — appears “aligned”. But what lies behind it is still something we can’t fully comprehend. 

As an investor, I have found it challenging to persuade other investors to fund alignment. Venture capital currently rewards racing to develop capabilities more than it does investigating how these systems work. In 1945, the US army conducted the Trinity test, the first detonation of a nuclear weapon. Beforehand, the question was raised as to whether the bomb might ignite the Earth’s atmosphere and extinguish life. Nuclear physics was sufficiently developed that Emil J Konopinski and others from the Manhattan Project were able to show that it was almost impossible to set the atmosphere on fire this way. But today’s very large language models are largely in a pre-scientific period. We don’t yet fully understand how they work and cannot demonstrate likely outcomes in advance. 

Late last month, more than 1,800 signatories — including Musk, the scientist Gary Marcus and Apple co-founder Steve Wozniak — called for a six-month pause on the development of systems “more powerful” than GPT-4. AGI poses profound risks to humanity, the letter claimed, echoing past warnings from the likes of the late Stephen Hawking. I also signed it, seeing it as a valuable first step in slowing down the race and buying time to make these systems safe. 

Unfortunately, the letter became a controversy of its own. A number of signatures turned out to be fake, while some researchers whose work was cited said they didn’t agree with the letter. The fracas exposed the broad range of views about how to think about regulating AI. A lot of debate comes down to how quickly you think AGI will arrive and whether, if it does, it is God-like or merely “human level”. 

Take Geoffrey Hinton, Yoshua Bengio and Yann LeCun, who shared the 2018 Turing Award (the equivalent of a Nobel Prize for computer science) for their work in the field underpinning modern AI. Bengio signed the open letter. LeCun mocked it on Twitter and referred to people with my concerns as “doomers”. Hinton, who recently told CBS News that his timeline to AGI had shortened, conceivably to less than five years, and that human extinction at the hands of a misaligned AI was “not inconceivable”, was somewhere in the middle. 

A statement from the Distributed AI Research Institute, founded by Timnit Gebru, strongly criticised the letter and argued that existentially dangerous God-like AI is “hype” used by companies to attract attention and capital and that “regulatory efforts should focus on transparency, accountability and preventing exploitative labour practices”. This reflects a schism in the AI community between those who are afraid that potentially apocalyptic risk is not being accounted for, and those who believe the debate is paranoid and distracting. The second group thinks the debate obscures real, present harm: the bias and inaccuracies built into many AI programmes in use around the world today. 

My view is that the present and future harms of AI are not mutually exclusive and overlap in important ways. We should tackle both concurrently and urgently. Given the billions of dollars being spent by companies in the field, this should not be impossible. I also hope that there can be ways to find more common ground. In a recent talk, Gebru said: “Trying to ‘build’ AGI is an inherently unsafe practice. Build well-scoped, well-defined systems instead. Don’t attempt to build a God.” This chimes with what many alignment researchers have been arguing. 

One of the most challenging aspects of thinking about this topic is working out which precedents we can draw on. An analogy that makes sense to me around regulation is engineering biology. Consider first “gain-of-function” research on biological viruses. This activity is subject to strict international regulation and, after laboratory biosecurity incidents, has at times been halted by moratoria. This is the strictest form of oversight. In contrast, the development of new drugs is regulated by a government body like the FDA, and new treatments are subject to a series of clinical trials. There are clear discontinuities in how we regulate, depending on the level of systemic risk. In my view, we could approach God-like AGI systems in the same way as gain-of-function research, while narrowly useful AI systems could be regulated in the way new drugs are. 

A thought experiment for regulating AI in two distinct regimes is what I call The Island. In this scenario, experts trying to build God-like AGI systems do so in a highly secure facility: an air-gapped enclosure with the best security humans can build. All other attempts to build God-like AI would become illegal; only when such AI were provably safe could they be commercialised “off-island”. 

This may sound like Jurassic Park, but there is a real-world precedent for removing the profit motive from potentially dangerous research and putting it in the hands of an intergovernmental organisation. This is how Cern, which operates the largest particle physics laboratory in the world, has worked for almost 70 years. 

Any of these solutions is going to require an extraordinary amount of co-ordination between labs and nations. Pulling this off will require an unusual degree of political will, which we need to start building now. Many of the major labs are waiting for critical new hardware to be delivered this year so they can start to train GPT-5 scale models. With the new chips and more investor money to spend, models trained in 2024 will use as much as 100 times the compute of today’s largest models. We will see many new emergent capabilities. This means there is a window through 2023 for governments to take control by regulating access to frontier hardware. 

In 2012, my younger sister Rosemary, one of the kindest and most selfless people I’ve ever known, was diagnosed with a brain tumour. She had an aggressive form of cancer for which there is no known cure and yet sought to continue working as a doctor for as long as she could. My family and I desperately hoped that a new lifesaving treatment might arrive in time. She died in 2015. 

I understand why people want to believe. Evangelists of God-like AI focus on the potential of a superhuman intelligence capable of solving our biggest challenges — cancer, climate change, poverty. 

Even so, the risks of continuing without proper governance are too high. It is striking that Jan Leike, the head of alignment at OpenAI, tweeted on March 17: “Before we scramble to deeply integrate LLMs everywhere in the economy, can we pause and think whether it is wise to do so? This is quite immature technology and we don’t understand how it works. If we’re not careful, we’re setting ourselves up for a lot of correlated failures.” He made this warning statement just days before OpenAI announced it had connected GPT-4 to a massive range of tools, including Slack and Zapier. 

Unfortunately, I think the race will continue. It will likely take a major misuse event — a catastrophe — to wake up the public and governments. I personally plan to continue to invest in AI start-ups that focus on alignment and safety or which are developing narrowly useful AI. But I can no longer invest in those that further contribute to this dangerous race. As a small shareholder in Anthropic, which is conducting similar research to DeepMind and OpenAI, I have grappled with these questions. The company has invested substantially in alignment, with 42 per cent of its team working on that area in 2021. But ultimately it is locked in the same race. For that reason, I would support significant regulation by governments and a practical plan to transform these companies into a Cern-like organisation. 

We are not powerless to slow down this race. If you work in government, hold hearings and ask AI leaders, under oath, about their timelines for developing God-like AGI. Ask for a complete record of the security issues they have discovered when testing current models. Ask for evidence that they understand how these systems work and their confidence in achieving alignment. Invite independent experts to the hearings to cross-examine these labs. 

If you work at a major lab trying to build God-like AI, interrogate your leadership about all these issues. This is particularly important if you work at one of the leading labs. It would be very valuable for these companies to co-ordinate more closely or even merge their efforts. OpenAI’s company charter expresses a willingness to “merge and assist”. I believe that now is the time. The leader of a major lab who plays a statesman role and guides us publicly to a safer path will be a much more respected world figure than the one who takes us to the brink. 

Until now, humans have remained a necessary part of the learning process that characterises progress in AI. At some point, someone will figure out how to cut us out of the loop, creating a God-like AI capable of infinite self-improvement. By then, it may be too late.  

Thursday 19 March 2020

Can computers ever replace the classroom?

With 850 million children worldwide shut out of schools, tech evangelists claim now is the time for AI education. But as the technology’s power grows, so too do the dangers that come with it. By Alex Beard in The Guardian 


For a child prodigy, learning didn’t always come easily to Derek Haoyang Li. When he was three, his father – a famous educator and author – became so frustrated with his progress in Chinese that he vowed never to teach him again. “He kicked me from here to here,” Li told me, moving his arms wide.

Yet when Li began school, aged five, things began to click. Five years later, he was selected as one of only 10 students in his home province of Henan to learn to code. At 16, Li beat 15 million kids to first prize in the Chinese Mathematical Olympiad. Among the offers that came in from the country’s elite institutions, he decided on an experimental fast-track degree at Jiao Tong University in Shanghai. It would enable him to study maths, while also covering computer science, physics and psychology.

In his first year at university, Li was extremely shy. He came up with a personal algorithm for making friends in the canteen, weighing data on group size and conversation topic to optimise the chances of a positive encounter. The method helped him to make friends, so he developed others: how to master English, how to interpret dreams, how to find a girlfriend. While other students spent the long nights studying, Li started to think about how he could apply his algorithmic approach to business. When he graduated at the turn of the millennium, he decided that he would make his fortune in the field he knew best: education.

In person, Li, who is now 42, displays none of the awkwardness of his university days. A successful entrepreneur who helped create a billion-dollar tutoring company, Only Education, he is charismatic, and given to making bombastic statements. “Education is one of the industries that Chinese people can do much better than western people,” he told me when we met last year. The reason, he explained, is that “Chinese people are more sophisticated”, because they are raised in a society in which people rarely say what they mean.

Li is the founder of Squirrel AI, an education company that offers tutoring delivered in part by humans, but mostly by smart machines, which he says will transform education as we know it. All over the world, entrepreneurs are making similarly extravagant claims about the power of online learning – and more and more money is flowing their way. In Silicon Valley, companies like Knewton and AltSchool have attempted to personalise learning via tablet computers. In India, Byju’s, a learning app valued at $6 billion, has secured backing from Facebook and the Chinese internet behemoth Tencent, and now sponsors the country’s cricket team. In Europe, the British company Century Tech has signed a deal to roll out an intelligent teaching and learning platform in 700 Belgian schools, and dozens more across the UK. Their promises are being put to the test by the coronavirus pandemic – with 849 million children worldwide, as of March 2020, shut out of school, we’re in the midst of an unprecedented experiment in the effectiveness of online learning.

But it’s in China, where President Xi Jinping has called for the nation to lead the world in AI innovation by 2030, that the fastest progress is being made. In 2018 alone, Li told me, 60 new AI companies entered China’s private education market. Squirrel AI is part of this new generation of education start-ups. The company has already enrolled 2 million student users, opened 2,600 learning centres in 700 cities across China, and raised $150m from investors. The company’s chief AI officer is Tom Mitchell, the former dean of computer science at Carnegie Mellon University, and its payroll also includes a roster of top Chinese talent, including dozens of “super-teachers” – an official designation given to the most expert teachers in the country. In January, during the worst of the outbreak, it partnered with the Shanghai education bureau to provide free products to students throughout the city.

Though the most ambitious features have yet to be built into Squirrel AI’s system, the company already claims to have achieved impressive results. At its HQ in Shanghai, I saw footage of downcast human teachers who had been defeated by computers in televised contests to see who could teach a class of students more maths in a single week. Experiments on the effectiveness of different types of teaching videos with test audiences have revealed that students learn more proficiently from a video presented by a good-looking young presenter than from an older expert teacher.

When we met, Li rhapsodised about a future in which technology will enable children to learn 10 or even 100 times more than they do today. Wild claims like these, typical of the hyperactive education technology sector, tend to prompt two different reactions. The first is: bullshit – teaching and learning is too complex, too human a craft to be taken over by robots. The second reaction is the one I had when I first met Li in London a year ago: oh no, the robot teachers are coming for education as we know it. There is some truth to both reactions, but the real story of AI education, it turns out, is a whole lot more complicated.

At a Squirrel AI learning centre high in an office building in Hangzhou, a city 70 miles west of Shanghai, a cursor jerked tentatively over the words “Modern technology has opened our eyes to many things”. Slouched at a hexagonal table in one of the centre’s dozen or so small classrooms, Huang Zerong, 14, was halfway through a 90-minute English tutoring session. As he worked through activities on his MacBook, a young woman with the kindly manner of an older sister sat next to him, observing his progress. Below, the trees of Xixi National Wetland Park barely stirred in the afternoon heat.

A question popped up on Huang’s screen, on which a virtual dashboard showed his current English level, unit score and learning focus – along with the sleek squirrel icon of Squirrel AI.

“India is famous for ________ industry.”

Huang read through the three possible answers, choosing to ignore “treasure” and “typical” and type “t-e-c-h-n-o-l-o-g-y” into the box.

“T____ is changing fast,” came the next prompt.

Huang looked towards the young woman, then he punched out “e-c-h-n-o-l-o-g-y” from memory. She clapped her hands together. “Good!” she said, as another prompt flashed up.

Huang had begun his English course, which would last for one term, a few months earlier with a diagnostic test. He had logged into the Squirrel AI platform on his laptop and answered a series of questions designed to evaluate his mastery of more than 10,000 “knowledge points” (such as the distinction between “belong to” and “belong in”). Based on his answers, Squirrel AI’s software had generated a precise “learning map” for him, which would determine which texts he would read, which videos he would see, which tests he would take.

As he worked his way through the course – with the occasional support of the human tutor by his side, or one of the hundreds accessible via video link from Squirrel AI’s headquarters in Shanghai – its contents were automatically updated, as the system perceived that Huang had mastered new knowledge.
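
The article does not disclose Squirrel AI's actual model, but the mechanism it describes (a diagnostic test seeds per-knowledge-point mastery estimates, each answer updates them, and the system serves the weakest unmastered point next) can be sketched in a few lines. Everything below, from the update rule to the 0.8 mastery threshold and the sample knowledge points, is an illustrative assumption rather than the company's real algorithm:

```python
from dataclasses import dataclass, field

# Minimal sketch of an adaptive "learning map": a mastery score per
# knowledge point, nudged after every answer, used to pick what to teach next.
@dataclass
class LearningMap:
    mastery: dict[str, float] = field(default_factory=dict)  # 0.0-1.0 per point

    def record_answer(self, point: str, correct: bool, step: float = 0.2) -> None:
        current = self.mastery.get(point, 0.5)
        target = 1.0 if correct else 0.0
        # Move the estimate a fraction of the way towards the observed outcome.
        self.mastery[point] = current + step * (target - current)

    def next_point(self, threshold: float = 0.8) -> str | None:
        # Tutor the weakest knowledge point that is not yet considered mastered.
        unmastered = {p: m for p, m in self.mastery.items() if m < threshold}
        return min(unmastered, key=unmastered.get) if unmastered else None

# A diagnostic test seeds the map; each subsequent answer refines it.
lm = LearningMap({"belong to vs belong in": 0.4, "present perfect": 0.7})
lm.record_answer("belong to vs belong in", correct=True)
print(lm.next_point())  # -> the weakest unmastered point
```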

Huang said he was less distracted at the learning centre than he was in school, and felt at home with the technology. “It’s fun,” he told me after class, eyes fixed on his lap. “It’s much easier to concentrate on the system because it’s a device.” His scores in English also seemed to be improving, which is why his mother had just paid the centre a further 91,000 RMB (about £11,000) for another year of sessions: two semesters and two holiday courses in each of four subjects, adding up to around 400 hours in total.

“Anyone can learn,” Li explained to me a few days later over dinner in Beijing. You just needed the right environment and the right method, he said.

 
Derek Haoyang Li, the founder of Squirrel AI, at a web summit in Lisbon. Photograph: Cody Glenn/Sportsfile via Getty Images

The idea for Squirrel AI had come to him five years earlier. A decade at his tutoring company, Only Education, had left him frustrated. He had found that if you really wanted to improve a student’s progress, by far the best way was to find them a good teacher. But good teachers were rare, and turnover was high, with the best much in demand. Having to find and train 8,000 new teachers each year was limiting the amount students learned – and the growth of his business.

The answer, Li decided, was adaptive learning, where an intelligent computer-based system adjusts itself automatically to the best method for an individual learner. The idea of adaptive learning was not new, but Li was confident that developments in AI research meant that huge advances were now within reach. Rather than seeking to recreate the general intelligence of a human mind, researchers were getting impressive results by putting AI to work on specialised tasks. AI doctors are now equal to or better than humans at analysing X-rays for certain pathologies, while AI lawyers are carrying out legal research that would once have been done by clerks.

Following such breakthroughs, Li resolved to augment the efforts of his human teachers with a tireless, perfectly replicable virtual teacher. “Imagine a tutor who knows everything,” he told me, “and who knows everything about you.”

In Hangzhou, Huang was struggling with the word “hurry”. On his screen, a video appeared of a neatly groomed young teacher presenting a three-minute masterclass about how to use the word “hurry” and related phrases (“in a hurry” etc). Huang watched along.

Moments like these, where a short teaching input results in a small learning output, are known as “nuggets”. Li’s dream, which is the dream of adaptive education in general, is that AI will one day provide the perfect learning experience by ensuring that each of us get just the right chunk of content, delivered in the right way, at the right moment for our individual needs.

One way in which Squirrel AI improves its results is by constantly hoovering up data about its users. During Huang’s lesson, the system continuously tracked and recorded every one of his keystrokes, cursor movements, right or wrong answers, texts read and videos watched. This data was time-stamped, to show where Huang had skipped over or lingered on a particular task. Each “nugget” (the video to watch or text to read) was then recommended to him based on an analysis of his data, accrued over hundreds of hours of work on Squirrel’s platform, and the data of 2 million other students. “Computer tutors can collect more teaching experience than a human would ever be able to collect, even in a hundred years of teaching,” Tom Mitchell, Squirrel AI’s chief AI officer, told me over the phone a few weeks later.
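For illustration only, the sketch below shows the kind of time-stamped interaction log described above, and one naive rule for picking the next “nugget”. The event fields and the selection rule are my assumptions, not Squirrel AI’s implementation.

```python
# Hypothetical sketch: log time-stamped student events, then recommend the
# nugget covering the knowledge point with the lowest observed accuracy.

import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Event:
    student_id: str
    knowledge_point: str
    action: str              # e.g. "answered", "watched_video", "skipped"
    correct: Optional[bool]  # None for events that aren't graded answers
    timestamp: float = field(default_factory=time.time)

def next_nugget(events, catalogue):
    """Recommend the catalogue item for the weakest knowledge point so far."""
    stats = {}  # knowledge_point -> (correct, attempts)
    for e in events:
        if e.correct is None:
            continue
        correct, attempts = stats.get(e.knowledge_point, (0, 0))
        stats[e.knowledge_point] = (correct + int(e.correct), attempts + 1)
    if not stats:
        return catalogue[0]
    weakest = min(stats, key=lambda p: stats[p][0] / stats[p][1])
    return next(n for n in catalogue if n["knowledge_point"] == weakest)

events = [
    Event("huang", "hurry", "answered", False),
    Event("huang", "hurry", "watched_video", None),
    Event("huang", "past simple", "answered", True),
]
catalogue = [
    {"knowledge_point": "hurry", "type": "video", "title": "Using 'hurry' and 'in a hurry'"},
    {"knowledge_point": "past simple", "type": "text", "title": "Past simple practice"},
]
print(next_nugget(events, catalogue)["title"])  # Using 'hurry' and 'in a hurry'
```

In practice the recommendation would draw on the pooled data of millions of students rather than one child’s log, which is exactly why the volume of data matters so much to the company.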

The speed and accuracy of Squirrel AI’s platform will depend, above all, on the number of student users it manages to sign up. More students equals more data. As each student works their way through a set of knowledge points, they leave a rich trail of information behind them. This data is then used to train the algorithms of the “thinking” part of the Squirrel AI system.

This is one reason why Squirrel AI has integrated its online business with bricks-and-mortar learning centres. Most children in China do not have access to laptops and high-speed internet. The learning centres mean the company can reach children it otherwise could not. One of the reasons Mitchell says he is glad to be working with Squirrel AI is the sheer volume of data that the company is gathering. “We’re going to have millions of natural examples,” he told me with excitement.

The dream of a perfect education delivered by machine is not new. For at least a century, generations of visionaries have predicted that the latest inventions will transform learning. “Motion pictures,” wrote the American inventor Thomas Edison in 1922, “are destined to revolutionise our schools.” The immersive power of movies would supposedly turbo-charge the learning process. Others made similar predictions for radio, television, computers and the internet. But despite small successes – the Open University, TV universities in China in the 1980s, or Khan Academy today, which reaches millions of students with its YouTube lessons – teachers have continued to teach, and learners to learn, in much the same way as before.

There are two reasons why today’s techno-evangelists are confident that AI can succeed where other technologies failed. First, they view AI not as a simple innovation but as a “general purpose technology” – that is, an epochal invention, like the printing press, which will fundamentally change the way we learn. Second, they believe its powers will shed new light on the workings of the human brain – how repetitive practice builds expertise, for instance, or how interleaving (mixing different topics within a practice session, rather than studying each in a block) can help us achieve mastery. As a result, we will be able to design adaptive algorithms to optimise the learning process.
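As a concrete example of one such well-studied idea, the toy sketch below interleaves practice questions from different topics rather than presenting them in blocks. It illustrates the general technique only; it is not drawn from any company’s algorithm, and the example questions are invented.

```python
# Toy illustration of interleaved practice: round-robin questions from several
# topics into one mixed sequence instead of drilling each topic in a block.

from itertools import zip_longest

def interleave(*topic_queues):
    """Mix questions from several topic queues into one practice sequence."""
    mixed = []
    for round_ in zip_longest(*topic_queues):
        mixed.extend(q for q in round_ if q is not None)
    return mixed

grammar = ["'belong to' vs 'belong in'", "past simple vs present perfect"]
vocab = ["hurry", "harmony", "ledger"]
print(interleave(grammar, vocab))
# ["'belong to' vs 'belong in'", 'hurry', 'past simple vs present perfect',
#  'harmony', 'ledger']
```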

UCL Institute of Education professor and machine learning expert Rose Luckin believes that one day we might see an AI-enabled “Fitbit for the mind” that would allow us to perceive in real-time what an individual knows, and how fast they are learning. The device would use sensors to gather data that forms a precise and ever-evolving map of a person’s abilities, which could be cross-referenced with insights into their motivational and nutritional state, say. This information would then be relayed to our minds, in real time, via a computer-brain interface. Facebook is already carrying out research in this field. Other firms are trialling eye tracking and helmets that monitor kids’ brainwaves.

The supposed AI education revolution is not here yet, and it is likely that the majority of projects will collapse under the weight of their own hype. The adaptive tutor Knewton was pulled from US schools under pressure from parents concerned about their kids’ privacy, while Silicon Valley’s AltSchool, launched to much fanfare in 2015 by a former Google executive, has burned through $174m of funding without landing on a workable business model. But global school closures owing to coronavirus may yet relax public attitudes to online learning – many online education companies are offering their products for free to all children out of school.

Daisy Christodoulou, a London-based education expert, suggests that too much time is spent speculating on what AI might one day do, rather than focusing on what it already can. An estimated 900 million young people around the world are not on track to learn what they need to thrive. To help those kids, AI education doesn’t have to be perfect – it just needs to slightly improve on what they currently have.

In their book The Future of the Professions, Richard and Daniel Susskind argue that we tend to conceive of occupations as embodied in a person – a butcher or baker, doctor or teacher. As a result, we think of them as “too human” to be taken over by machines. But to an algorithm, or someone designing one, a profession appears as something else: a long list of individual tasks, many of which may be mechanised. In education, that might be marking or motivating, lecturing or lesson planning. The Susskinds believe that where a machine can do any one of these tasks better and more cheaply than the average human, automation of that bit of the job is inevitable.

The point, in short, is that AI doesn’t have to match the general intelligence of humans to be useful – or indeed powerful. This is both the promise of AI, and the danger it poses. “People’s behaviour is already being manipulated,” Luckin cautioned. Devices that might one day enhance our minds are already proving useful in shaping them.

In May 2018, a group of students at Hangzhou’s Middle School No 11 returned to their classroom to find three cameras newly installed above the blackboard; they would now be under full-time surveillance in their lessons. “Previously when I had classes that I didn’t like very much, I would be lazy and maybe take naps,” a student told the local news, “but I don’t dare be distracted after the cameras were installed.” The head teacher explained that the system could read seven states of emotion on students’ faces: neutral, disgust, surprise, anger, fear, happiness and sadness. If the kids slacked, the teacher was alerted. “It’s like a pair of mystery eyes are constantly watching me,” the student told reporters.

The previous year, China’s state council had launched a plan for the role AI could play in the future of the country. Underpinning it was a set of beliefs: that AI can “harmonise” Chinese society; that for it to do so, the government should store data on every citizen; that companies, not the state, were best positioned to innovate; and that no company should refuse the government access to its data. In education, the paper called for new adaptive online learning systems powered by big data, and “all-encompassing ubiquitous intelligent environments” – or smart schools.

At AIAED, a conference in Beijing hosted by Squirrel AI, which I attended in May 2019, classroom surveillance was one of the most discussed topics – but the speakers tended to be more concerned with the technical question of how to optimise facial and bodily monitoring in the classroom than with the darker implications of collecting unprecedented amounts of data about children. These ethical questions are becoming increasingly urgent, with schools from India to the US currently trialling facial monitoring. In the UK, AI is already being used to monitor student wellbeing, automate assessment and even inspect schools. Ben Williamson of the Centre for Research in Digital Education explains that this risks encoding biases or errors into the system, and raises obvious privacy issues. “Now the school and university might be said to be studying their students too,” he told me.

While cameras in the classroom might outrage many parents in the UK or US, Lenora Chu, author of an influential book about the Chinese education system, argues that in China anything that improves a child’s learning tends to be viewed positively by parents. Squirrel AI even offers them the chance to watch footage of their child’s tutoring sessions. “There’s not that idea here that technology is bad,” said Chu, who moved from the US to Shanghai 10 years ago.

Rose Luckin suggested to me that a platform like Squirrel AI’s could one day mean an end to China’s notoriously punishing gaokao college entrance exam, which takes place for two days every June and largely determines a student’s education and employment prospects. If technology tracked a student throughout their school days, logging every keystroke, knowledge point and facial twitch, then the perfect record of their abilities on file could make such testing obsolete. Yet a system like this could also furnish the Chinese state – or a US tech company – with an eternal ledger of every step in a child’s development. It is not hard to imagine the grim uses to which this information could be put – for instance, if your behaviour in school was used to judge, or predict, your trustworthiness as an adult.

 
Students leaving a gaokao college entrance exam in Hangzhou, China. Photograph: Imaginechina/Rex/Shutterstock

On the one hand, said Chu, the CCP wants to use AI to better prepare young people for the future economy, and to close the achievement gap between rural and urban schools. To this end, companies like Squirrel AI receive government support, such as access to prime office space in top business districts. At the same time, the CCP, as the state council put it, sees AI as the “opportunity of the millennium” for “social construction”. That is, social control. The ability of AI to “grasp group cognition and psychological changes in a timely manner” through the surveillance of people’s movements, spending and other behaviours means it can play “an irreplaceable role in effectively maintaining social stability”.

The surveillance state is already penetrating deep into people’s lives. In 2019, there was a significant spike in China in the registration of patents for facial recognition and surveillance technology. All new mobile phones in China must now be registered via a facial scan. At the hotels I stayed in, Chinese citizens handed over their ID cards and checked in using face scanners. On the high-speed train to Beijing, the announcer repeatedly warned travellers to abide by the rules in order to maintain their personal credit. The notorious social credit system, which has been under trial in a handful of Chinese cities ahead of an expected nationwide roll-out this year, adds or deducts points from an individual’s trustworthiness score, which affects their ability to travel and borrow money, among other things.

The result, explained Chu, is that all these interventions exert a subtle control over what people think and say. “You sense how the wind is blowing,” she told me. For the 12 million Muslim Uighurs in Xinjiang, however, that control is anything but subtle. Police checkpoints, complete with facial scanners, are ubiquitous. All mobile phones must have the Jingwang (“clean net”) app installed, allowing the government to monitor their owners’ movements and browsing. Iris and fingerprint scans are required to access health services. As many as 1.5 million Uighurs, including children, have been interned at some point in a re-education camp in the interests of “harmony”.

As we shape the use of AI in education, it’s likely that AI will shape us too. Jiang Xueqin, an education researcher from Chengdu, is sceptical that it will be as revolutionary as proponents claim. “Parents are paying for a drug,” he told me over the phone. He thought tutoring companies such as New Oriental, TAL and Squirrel AI were simply preying on parents’ anxieties about their kids’ performance in exams, and only succeeding because test preparation was the easiest element of education to automate – a closed system with limited variables that allowed for optimisation. Jiang wasn’t impressed with the progress made, or the way that it engaged every child in a desperate race to conform to the measures of success imposed by the system.

One student I met at the learning centre in Hangzhou, Zhang Hen, seemed to have a deep desire to learn about the world – she told me how she loved Qu Yuan, a romantic poet of the Warring States era, and how she was a fan of Harry Potter – but that wasn’t the reason she was here. Her goal was much simpler: she had come to the centre to boost her test scores. That may seem disappointing to idealists who want education to offer so much more, but Zhang was realistic about the demands of the Chinese education system. She had tough exams that she needed to pass. A scripted system that helped her efficiently master the content of the high school entrance exam was exactly what she wanted.

On stage at AIAED, Tom Mitchell had presented a more ambitious vision for adaptive learning that went far beyond helping students cram for mindless tests. Much of what he was most excited by was possible only in theory, but his enthusiasm was palpable. As appealing as his optimism was, though, I felt unconvinced. It was clear that adaptive technologies might improve certain types of learning, but it was equally obvious that they might narrow the aims of education and provide new tools to restrict our freedom.

Li insists that one day his system will help all young people to flourish creatively. Though he allows that for now an expert human teacher still holds an edge over a robot, he is confident that AI will soon be good enough to evaluate and reply to students’ oral responses. In less than five years, Li imagines training Squirrel AI’s platform with a list of every conceivable question and every possible response, weighting an algorithm to favour those labelled “creative”. “That thing is very easy to do,” he said, “like tagging cats.”
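A toy version of what Li describes might look like the sketch below: label some example responses, train a classifier, then use it to score new answers. The data and labels are invented, and this illustrates the general supervised-learning approach he is gesturing at, not Squirrel AI’s system or any claim that creativity can really be captured this way.

```python
# Toy sketch of label-and-train ("like tagging cats"): fit a text classifier
# on responses tagged "creative" or "literal", then score a new answer.
# Requires scikit-learn; the example data is invented.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

responses = [
    "The moon is a lantern the sea forgot to switch off",
    "The moon orbits the Earth roughly once every 27 days",
    "Rain is the sky practising its handwriting",
    "Rain forms when water vapour condenses in clouds",
]
labels = ["creative", "literal", "creative", "literal"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(responses, labels)

# Score a new, unseen response.
print(model.predict(["Thunder is the clouds clearing their throat"]))
```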

For Li, learning has always been like that – like tagging cats. But there’s a growing consensus that our brains don’t work like computers. Whereas a machine must crunch through millions of images to be able to identify a cat as the collection of “features” that are present only in those images labelled “cat” (two triangular ears, four legs, two eyes, fur, etc), a human child can grasp the concept of “cat” from just a few real-life examples, thanks to our innate ability to understand things symbolically. Where machines can’t compute meaning, our minds thrive on it. The adaptive advantage of our brains is that they learn continually through all of our senses by interacting with the environment, our culture and, above all, other people.

Li told me that even if AI fulfilled all of its promise, human teachers would still play a crucial role helping kids learn social skills. At Squirrel AI’s HQ, which occupies three floors of a gleaming tower next door to Microsoft and Mobike in Shanghai, I met some of the company’s young teachers. Each sat at a work console in a vast office space, headphones on, eyes focused on a laptop screen, their desks decorated with plastic pot plants and waving cats. As they monitored the dashboards of up to six students simultaneously, the face of a young learner would appear on the screen, asking for help, either via a chat box or through a video link. The teachers reminded me of workers in the gig economy, the Uber drivers of education. When I logged on to try out a Squirrel English lesson for myself, the experience was good, but my tutor seemed to be teaching to a script.

Squirrel AI’s head of communications, Joleen Liang, showed me photos from a recent trip she had taken to the remote mountains of Henan, to deliver laptops to disadvantaged students. Without access to the adaptive technology, their education would be a little worse. It was a reminder that Squirrel AI’s platform, like those of its competitors worldwide, doesn’t have to be better than the best human teachers – to improve people’s lives, it just needs to be good enough, at the right price, to supplement what we’ve got. The problem is that it is hard to see technology companies stopping there. For better and worse, their ambitions are bigger. “We could make a lot of geniuses,” Li told me.