Showing posts with label technology. Show all posts

Saturday, 4 May 2024

How Disinformation Works

From The Economist

Did you know that the wildfires which ravaged Hawaii last summer were started by a secret “weather weapon” being tested by America’s armed forces, and that American NGOs were spreading dengue fever in Africa? That Olena Zelenska, Ukraine’s first lady, went on a $1.1m shopping spree on Manhattan’s Fifth Avenue? Or that Narendra Modi, India’s prime minister, has been endorsed in a new song by Mahendra Kapoor, an Indian singer who died in 2008?

These stories are, of course, all bogus. They are examples of disinformation: falsehoods that are intended to deceive. Such tall tales are being spread around the world by increasingly sophisticated campaigns. Whizzy artificial-intelligence (AI) tools and intricate networks of social-media accounts are being used to make and share eerily convincing photos, video and audio, confusing fact with fiction. In a year when half the world is holding elections, this is fuelling fears that technology will make disinformation impossible to fight, fatally undermining democracy. How worried should you be?

Disinformation has existed for as long as there have been two sides to an argument. Rameses II did not win the battle of Kadesh in 1274 BC. It was, at best, a draw; but you would never guess that from the monuments the pharaoh built in honour of his triumph. Julius Caesar’s account of the Gallic wars is as much political propaganda as historical narrative. The age of print was no better. During the English civil war of the 1640s, press controls collapsed, prompting much concern about “scurrilous and fictitious pamphlets”.

The internet has made the problem much worse. False information can be distributed at low cost on social media; AI also makes it cheap to produce. Much about disinformation is murky. But in a special Science & technology section, we trace the complex ways in which it is seeded and spread via networks of social-media accounts and websites. Russia’s campaign against Ms Zelenska, for instance, began as a video on YouTube, before passing through African fake-news websites and being boosted by other sites and social-media accounts. The result is a deceptive veneer of plausibility.

Spreader accounts build a following by posting about football or the British royal family, gaining trust before mixing in disinformation. Much of the research on disinformation tends to focus on a specific topic on a particular platform in a single language. But it turns out that most campaigns work in similar ways. The techniques used by Chinese disinformation operations to bad-mouth South Korean firms in the Middle East, for instance, look remarkably like those used in Russian-led efforts to spread untruths around Europe.

The goal of many operations is not necessarily to make you support one political party over another. Sometimes the aim is simply to pollute the public sphere, or sow distrust in media, governments, and the very idea that truth is knowable. Hence the Chinese fables about weather weapons in Hawaii, or Russia’s bid to conceal its role in shooting down a Malaysian airliner by promoting several competing narratives.

All this prompts concerns that technology, by making disinformation unbeatable, will threaten democracy itself. But there are ways to minimise and manage the problem.

Encouragingly, technology is as much a force for good as it is for evil. Although AI makes the production of disinformation much cheaper, it can also help with tracking and detection. Even as campaigns become more sophisticated, with each spreader account varying its language just enough to be plausible, AI models can detect narratives that seem similar. Other tools can spot dodgy videos by identifying faked audio, or by looking for signs of real heartbeats, as revealed by subtle variations in the skin colour of people’s foreheads.

Better co-ordination can help, too. In some ways the situation is analogous to climate science in the 1980s, when meteorologists, oceanographers and earth scientists could tell something was happening, but could each see only part of the picture. Only when they were brought together did the full extent of climate change become clear. Similarly, academic researchers, NGOs, tech firms, media outlets and government agencies cannot tackle the problem of disinformation on their own. With co-ordination, they can share information and spot patterns, enabling tech firms to label, muzzle or remove deceptive content. For instance, Facebook’s parent, Meta, shut down a disinformation operation in Ukraine in late 2023 after receiving a tip-off from Google.

But deeper understanding also requires better access to data. In today’s world of algorithmic feeds, only tech companies can tell who is reading what. Under American law these firms are not obliged to share data with researchers. But Europe’s new Digital Services Act mandates data-sharing, and could be a template for other countries. Companies worried about sharing secret information could let researchers send in programs to be run, rather than sending out data for analysis.

Such co-ordination will be easier to pull off in some places than others. Taiwan, for instance, is considered the gold standard for dealing with disinformation campaigns. It helps that the country is small, trust in the government is high and the threat from a hostile foreign power is clear. Other countries have fewer resources and weaker trust in institutions. In America, alas, polarised politics means that co-ordinated attempts to combat disinformation have been depicted as evidence of a vast left-wing conspiracy to silence right-wing voices online.
One person’s fact...

The dangers of disinformation need to be taken seriously and studied closely. But bear in mind that they are still uncertain. So far there is little evidence that disinformation alone can sway the outcome of an election. For centuries there have been people who have peddled false information, and people who have wanted to believe them. Yet societies have usually found ways to cope. Disinformation may be taking on a new, more sophisticated shape today. But it has not yet revealed itself as an unprecedented and unassailable threat.

Thursday, 20 July 2023

A Level Economics 41: Monopoly

Monopoly is a market structure characterized by a single seller or producer dominating the entire market for a particular product or service. There are different types of monopolies based on their sources and characteristics. Let's define and explain each type of monopoly along with their underpinning assumptions:

  1. Natural Monopoly:

    • Definition: A natural monopoly occurs when a single firm can efficiently supply the entire market at the lowest cost due to significant economies of scale. In other words, it is more cost-effective to have one firm producing the good or service rather than multiple competing firms.
    • Underpinning Assumptions: The key assumption in a natural monopoly is that there are substantial economies of scale relative to the size of the market. This means that as the firm produces more output, the average cost of production decreases significantly. Additionally, barriers to entry, such as high fixed costs and technical expertise, prevent other firms from entering the market and competing with the incumbent firm.

  2. Legal Monopoly:

    • Definition: A legal monopoly is a monopoly created or sanctioned by the government through laws or regulations. The government grants exclusive rights to a single firm to produce and sell a particular product or service, often due to reasons of public interest or national security.
    • Underpinning Assumptions: The underpinning assumption in a legal monopoly is that the government believes that a single firm can better serve the public interest and provide essential goods or services efficiently. Legal monopolies often exist in industries like utilities (e.g., water, electricity) and postal services.

  3. Technological Monopoly:

    • Definition: A technological monopoly arises when a firm possesses exclusive rights to a unique technology or patented invention, allowing it to be the sole producer of a product or service based on that technology.
    • Underpinning Assumptions: The key assumption in a technological monopoly is that the firm has developed a novel and protected technology that provides a significant competitive advantage. The exclusivity provided by patents prevents other firms from replicating the technology and competing in the market.

  4. Geographic Monopoly:

    • Definition: A geographic monopoly occurs when a single firm has control over the supply of a product or service in a specific geographical area or region.
    • Underpinning Assumptions: The underpinning assumption in a geographic monopoly is that there are barriers to entry specific to that particular location. These barriers could be geographical, legal, or due to high transportation costs, making it difficult for other firms to enter and compete in that specific market.

  5. Government Monopoly:

    • Definition: A government monopoly exists when a government agency or entity has exclusive control over the production and distribution of a particular good or service.
    • Underpinning Assumptions: The key assumption in a government monopoly is that the government is the most suitable entity to provide the good or service in question. This could be due to the necessity of ensuring uniformity, safety, or public welfare.

Underpinning assumptions in all types of monopoly include the presence of barriers to entry, which prevent or discourage other firms from entering the market and competing with the dominant firm. These barriers may include economies of scale, patents, control over essential resources, legal protection, or government grants. Monopolies often raise concerns about the potential for higher prices, reduced consumer choice, and reduced incentives for innovation. As a result, regulators and policymakers often monitor and intervene in monopolistic markets to promote competition and protect consumer welfare.
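The economies-of-scale condition behind a natural monopoly can be made concrete with a little arithmetic. The sketch below (not from the original post; the cost figures are hypothetical) shows how a large fixed cost spread over more output drives average cost down, so one big supplier can always undercut several small ones:

```python
# Hypothetical figures illustrating a natural monopoly: a large fixed cost
# (e.g. laying a water network) plus a small constant marginal cost means
# average cost per unit keeps falling as output grows.

FIXED_COST = 1_000_000   # assumed one-off infrastructure cost
MARGINAL_COST = 2.0      # assumed cost of serving one extra customer

def average_cost(quantity: int) -> float:
    """Average total cost per unit at a given output level."""
    return FIXED_COST / quantity + MARGINAL_COST

for q in (10_000, 100_000, 1_000_000):
    print(f"output {q:>9,}: average cost = {average_cost(q):.2f}")
```

A single firm serving the whole market of 1,000,000 customers has an average cost of 3.00, while ten firms serving 100,000 each would face 12.00 apiece, which is why splitting the market among competitors raises costs rather than lowering them.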

Saturday, 15 July 2023

A Level Economics 16: The Supply Curve

 Why do supply curves normally slope upward from left to right?


Supply curves typically slope upward from left to right due to the law of supply, which states that producers are willing to supply more of a good at higher prices and less at lower prices. Several factors contribute to this upward-sloping pattern:

  1. Production Costs: As the price of a good increases, producers have a greater incentive to supply more of it because higher prices often result in higher profits. However, producing additional units may require additional resources and incur higher production costs. For instance, suppliers may need to invest in additional labor, raw materials, or machinery, which can increase their costs. To cover these increased costs and earn higher profits, producers are willing to supply more at higher prices.

  2. Opportunity Costs: Opportunity cost refers to the value of the next best alternative forgone when making a choice. When the price of a good rises, suppliers face an opportunity cost of producing alternative goods they could have produced instead. As a result, suppliers allocate more resources and production efforts to the higher-priced good, which leads to an increase in supply.

  3. Increasing Marginal Costs: The concept of increasing marginal costs also contributes to the upward slope of the supply curve. As production increases, producers may encounter diminishing returns or face constraints that make it increasingly expensive to produce additional units. This results in higher marginal costs of production, which necessitates higher prices to justify supplying additional units of the good.

  4. Technological Constraints: Technological limitations can also influence the upward slope of the supply curve. Suppliers may face constraints in terms of production capacity, available technology, or access to resources. As the quantity supplied increases, producers may need to invest in more advanced technology or incur additional costs to expand production capacity, which can lead to higher prices.

  5. Supplier Behavior: Suppliers' expectations and behavior can influence the upward slope of the supply curve. If producers anticipate that prices will rise in the future, they may reduce current supply to take advantage of the expected higher prices. Conversely, if producers anticipate falling prices, they may increase current supply to avoid potential losses. Such behavior aligns with the upward-sloping supply curve.

Overall, the upward slope of the supply curve reflects the positive relationship between price and quantity supplied. Higher prices incentivize producers to allocate more resources, incur higher production costs, and overcome technological constraints to supply larger quantities of a good. This relationship captures the fundamental dynamics of supply in response to price changes.
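The link between rising marginal costs (point 3 above) and the upward slope can be sketched numerically. In this illustration (assumed numbers, not from the post), a profit-seeking producer supplies units only while the price covers the marginal cost of the next unit, so quantity supplied rises with price:

```python
# A minimal sketch of the law of supply under increasing marginal costs.
# The cost function is hypothetical; any increasing function gives the
# same qualitative result: higher price -> larger quantity supplied.

def marginal_cost(unit: int) -> float:
    """Assumed rising marginal cost: each successive unit costs more."""
    return 1.0 + 0.5 * unit

def quantity_supplied(price: float, max_units: int = 100) -> int:
    """Supply units while the price at least covers the next unit's cost."""
    q = 0
    while q < max_units and marginal_cost(q + 1) <= price:
        q += 1
    return q

for price in (2.0, 4.0, 6.0):
    print(f"price {price:.2f} -> quantity supplied: {quantity_supplied(price)}")
```

Tracing the loop: at a price of 2.0 only the first two units (marginal costs 1.5 and 2.0) are worth producing; at 4.0 the first six are; at 6.0 the first ten. Plotting these price-quantity pairs gives exactly the upward-sloping curve described above.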

Tuesday, 4 July 2023

Half Marks for Indian Education

 From The Economist

When Narendra Modi, India’s prime minister, visited the White House last week, he did so as the leader of one of the world’s fastest-growing big economies. India is expanding at an annual rate of 6% and its GDP ranks fifth in the global pecking order. Its tech industry is flourishing and green firms are laying solar panels like carpets. Many multinationals are drawn there: this week Goldman Sachs held a board meeting in India.

As the rich world and China grow older, India’s huge youth bulge—some 500m of its people are under 20—should be an additional propellant. Yet as we report, although India’s brainy elite hoovers up qualifications, education for most Indians is still a bust. Unskilled, jobless youngsters risk bringing India’s economic development to a premature stop.

India has made some strides in improving the provision of services to poor people. Government digital schemes have simplified access to banking and the distribution of welfare payments. Regarding education, there has been a splurge on infrastructure. A decade ago only a third of government schools had handwashing facilities and only about half had electricity; now around 90% have both. Since 2014 India has opened nearly 400 universities. Enrolment in higher education has risen by a fifth.

Yet improving school buildings and expanding places only gets you so far. India is still doing a terrible job of making sure that the youngsters who throng its classrooms pick up essential skills. Before the pandemic less than half of India’s ten-year-olds could read a simple story, even though most of them had spent years sitting obediently behind school desks (the share in America was 96%). School closures that lasted more than two years have since made this worse.

There are lots of explanations. Jam-packed curriculums afford too little time for basic lessons in maths and literacy. Children who fail to grasp these never learn much else. Teachers are poorly trained and badly supervised: one big survey of rural schools found a quarter of staff were absent. Officials sometimes hand teachers unrelated duties, from administering elections to policing social-distancing rules during the pandemic.

Such problems have led many families to send their children to private schools instead. These educate about 50% of all India’s children. They are impressively frugal, but do not often produce better results. Recently, there have been hopes that the country’s technology industry might revolutionise education. Yet relying on it alone is risky. In recent weeks India’s biggest ed-tech firm, Byju’s, which says it educates over 150m people worldwide and was once worth $22bn, has seen its valuation slashed because of financial troubles.

All this makes fixing government schools even more urgent. India should spend more on education. Last year the outlays were just 2.9% of GDP, low by international standards. But it also needs to reform how the system works by taking inspiration from models elsewhere in developing Asia.

As we report, in international tests pupils in Vietnam have been trouncing youngsters from much richer countries for a decade. Vietnam’s children spend less time in lessons than Indian ones, even when you count homework and other cramming. They also put up with larger classes. The difference is that Vietnam’s teachers are better prepared, more experienced and more likely to be held accountable if their pupils flunk.

With the right leadership, India could follow. It should start by collecting better information about how much pupils are actually learning. That would require politicians to stop disputing data that do not show their policies in a good light. And the ruling Bharatiya Janata Party should also stop trying to strip textbooks of ideas such as evolution, or of history that irks Hindu nativists. That is a poisonous distraction from the real problems. India is busy constructing roads, tech campuses, airports and factories. It needs to build up its human capital, too.

Sunday, 18 June 2023

Economics Essay 86: Technology and Monopoly Power

Discuss whether governments should consider increasing the regulation and taxation of technology firms which have acquired significant global monopoly power.

The question of whether governments should consider increasing the regulation and taxation of technology firms with significant global monopoly power is a complex one. It involves weighing the potential benefits of increased regulation and taxation against the potential drawbacks and unintended consequences.

There are arguments in favor of increasing regulation and taxation for such firms:

  1. Market Power and Anti-Competitive Practices: Technology firms with significant global monopoly power may use their market dominance to stifle competition and engage in anti-competitive practices. They may limit consumer choice, drive out smaller competitors, and impede innovation. Increased regulation can help ensure a level playing field and promote fair competition.

  2. Consumer Protection: Technology firms often collect and handle vast amounts of user data, raising concerns about privacy and data security. Increased regulation can provide stronger safeguards for consumer data and ensure that technology firms adhere to ethical standards in their operations.

  3. Tax Fairness: Some technology firms have been criticized for using complex structures and loopholes to minimize their tax obligations. Increasing taxation on these firms can help address concerns of tax avoidance and ensure a more equitable distribution of tax burdens across industries.

However, there are also arguments against increasing regulation and taxation:

  1. Innovation and Economic Growth: Technology firms are often at the forefront of innovation and contribute significantly to economic growth. Excessive regulation and taxation may stifle innovation by creating barriers to entry and discouraging investment. It is important to strike a balance between regulation and fostering an environment that encourages innovation and entrepreneurial activity.

  2. International Competitiveness: Technology firms with global reach operate in a highly interconnected and competitive global market. Unilateral regulation and taxation measures by a single country may lead to unintended consequences such as reduced competitiveness and disincentives for firms to operate in that country. International coordination and cooperation are crucial to address global issues related to technology firms.

  3. Potential for Regulatory Capture: Increased regulation may inadvertently lead to regulatory capture, where firms with significant resources influence the regulatory process to their advantage. This can undermine the intended purpose of regulation and perpetuate the dominance of large technology firms.

In conclusion, the issue of increasing regulation and taxation of technology firms with global monopoly power requires careful consideration. While there are valid concerns regarding market power, consumer protection, and tax fairness, it is essential to strike a balance that promotes competition, innovation, and economic growth. International cooperation and a comprehensive approach are necessary to address the challenges posed by these firms effectively.

Economics Essay 74: Technology and Perfect Competition

Evaluate the view that technological change tends to bring industries closer to the market structure of perfect competition.

The view that technological change tends to bring industries closer to the market structure of perfect competition is subject to evaluation. While technological advancements can introduce elements of competition and improve market efficiency, the extent to which they lead to perfect competition depends on various factors.

  1. Reduction of barriers to entry: Technological innovations can lower barriers to entry, making it easier for new firms to enter the market. For example, the internet and e-commerce platforms have facilitated the entry of small businesses and entrepreneurs into various industries. This increased competition can move industries towards a more competitive landscape.

  2. Increased information transparency: Technological advancements have improved information flows, allowing consumers to access and compare product information, prices, and reviews. This transparency enables consumers to make informed choices and encourages competition based on quality and price. It also enables new entrants to gain visibility and compete with established players. Thus, technology can enhance market transparency and promote more competitive outcomes.

  3. Disruption and market dynamics: Technological change can disrupt existing industries and business models, leading to increased competition. Disruptive innovations can challenge dominant firms and break down market power, promoting more competitive behavior. Examples include the rise of ride-sharing platforms challenging traditional taxi services or online streaming services disrupting traditional media.

However, there are also factors that may limit the convergence towards perfect competition:

  1. Network effects and economies of scale: Some industries exhibit network effects, where the value of a product or service increases as more people use it. This can create barriers to entry and give an advantage to established firms, hindering the move towards perfect competition. Similarly, industries with significant economies of scale may have cost advantages that make it difficult for new entrants to compete effectively.

  2. Intellectual property rights and patents: Technological advancements often involve intellectual property rights and patents. These legal protections can create barriers to entry and restrict competition, as firms can hold exclusive rights to certain technologies or innovations. This can limit the extent to which technology-driven industries move towards perfect competition.

  3. Market concentration and consolidation: In some cases, technological change has resulted in the concentration of market power in the hands of a few dominant firms. For example, in the tech industry, giants like Google, Facebook, and Amazon have acquired significant market share and established strong network effects. This concentration of power can undermine the competitive dynamics and hinder the achievement of perfect competition.

In conclusion, while technological change can introduce elements of competition and enhance market dynamics, its impact on moving industries closer to perfect competition is mixed. Reduction of barriers to entry and increased information transparency can promote competition, but network effects, economies of scale, intellectual property rights, and market concentration can act as counterforces. The extent to which technological change brings industries closer to perfect competition depends on the interplay of these factors and the specific characteristics of each industry.

Sunday, 7 May 2023

Why the Technology = Progress narrative must be challenged

John Naughton in The Guardian

“Those who cannot remember the past,” wrote the American philosopher George Santayana in 1905, “are condemned to repeat it.” And now, 118 years later, here come two American economists with the same message, only with added salience, for they are addressing a world in which a small number of giant corporations are busy peddling a narrative that says, basically, that what is good for them is also good for the world.

That this narrative is self-serving is obvious, as is its implied message: that they should be allowed to get on with their habits of “creative destruction” (to use Joseph Schumpeter’s famous phrase) without being troubled by regulation. Accordingly, any government that flirts with the idea of reining in corporate power should remember that it would then be standing in the way of “progress”: for it is technology that drives history and anything that obstructs it is doomed to be roadkill.

One of the many useful things about this formidable (560-page) tome is its demolition of the tech narrative’s comforting equation of technology with “progress”. Of course the fact that our lives are infinitely richer and more comfortable than those of the feudal serfs we would have been in the middle ages owes much to technological advances. Even the poor in western societies enjoy much higher living standards today than three centuries ago, and live healthier, longer lives.

But a study of the past 1,000 years of human development, Acemoglu and Johnson argue, shows that “the broad-based prosperity of the past was not the result of any automatic, guaranteed gains of technological progress… Most people around the globe today are better off than our ancestors because citizens and workers in earlier industrial societies organised, challenged elite-dominated choices about technology and work conditions, and forced ways of sharing the gains from technical improvements more equitably.”

Acemoglu and Johnson begin their Cook’s tour of the past millennium with the puzzle of how dominant narratives – like that which equates technological development with progress – get established. The key takeaway is unremarkable but critical: those who have power define the narrative. That’s how banks get to be thought of as “too big to fail”, or why questioning tech power is “luddite”. But their historical survey really gets under way with an absorbing account of the evolution of agricultural technologies from the neolithic age to the medieval and early modern eras. They find that successive developments “tended to enrich and empower small elites while generating few benefits for agricultural workers: peasants lacked political and social power, and the path of technology followed the vision of a narrow elite.” 

A similar moral is extracted from their reinterpretation of the Industrial Revolution. This focuses on the emergence of a newly emboldened middle class of entrepreneurs and businessmen whose vision rarely included any ideas of social inclusion and who were obsessed with the possibilities of steam-driven automation for increasing profits and reducing costs.

The shock of the second world war led to a brief interruption in the inexorable trend of continuous technological development combined with increasing social exclusion and inequality. And the postwar years saw the rise of social democratic regimes focused on Keynesian economics, welfare states and shared prosperity. But all of this changed in the 1970s with the neoliberal turn and the subsequent evolution of the democracies we have today, in which enfeebled governments pay obeisance to giant corporations – more powerful and profitable than anything since the East India Company. These create astonishing wealth for a tiny elite (not to mention lavish salaries and bonuses for their executives) while the real incomes of ordinary people have remained stagnant, precarity rules and inequality has returned to pre-1914 levels.

Coincidentally, this book arrives at an opportune moment, when digital technology, currently surfing on a wave of irrational exuberance about ubiquitous AI, is booming, while the idea of shared prosperity has seemingly become a wistful pipe dream. So is there anything we might learn from the history so graphically recounted by Acemoglu and Johnson?

Answer: yes. And it’s to be found in the closing chapter, which comes up with a useful list of critical steps that democracies must take to ensure that the proceeds of the next technological wave are more generally shared among their populations. Interestingly, some of the ideas it explores have a venerable provenance, reaching back to the progressive movement that brought the robber barons of the early 20th century to heel.

There are three things that need to be done by a modern progressive movement. First, the technology-equals-progress narrative has to be challenged and exposed for what it is: a convenient myth propagated by a huge industry and its acolytes in government, the media and (occasionally) academia. The second is the need to cultivate and foster countervailing powers – which critically should include civil society organisations, activists and contemporary versions of trade unions. And finally, there is a need for progressive, technically informed policy proposals, and the fostering of thinktanks and other institutions that can supply a steady flow of ideas about how digital technology can be repurposed for human flourishing rather than exclusively for private profit.

None of this is rocket science. It can be done. And it needs to be done if liberal democracies are to survive the next wave of technological evolution and the catastrophic acceleration of inequality that it will bring. So – who knows? Maybe this time we might really learn something from history.

Tuesday, 2 May 2023

AI has hacked the operating system of human civilisation

Yuval Noah Harari in The Economist

Fears of artificial intelligence (AI) have haunted humanity since the very beginning of the computer age. Hitherto these fears focused on machines using physical means to kill, enslave or replace people. But over the past couple of years new AI tools have emerged that threaten the survival of human civilisation from an unexpected direction. AI has gained some remarkable abilities to manipulate and generate language, whether with words, sounds or images. AI has thereby hacked the operating system of our civilisation.

Language is the stuff almost all human culture is made of. Human rights, for example, aren’t inscribed in our DNA. Rather, they are cultural artefacts we created by telling stories and writing laws. Gods aren’t physical realities. Rather, they are cultural artefacts we created by inventing myths and writing scriptures.

Money, too, is a cultural artefact. Banknotes are just colourful pieces of paper, and at present more than 90% of money is not even banknotes—it is just digital information in computers. What gives money value is the stories that bankers, finance ministers and cryptocurrency gurus tell us about it. Sam Bankman-Fried, Elizabeth Holmes and Bernie Madoff were not particularly good at creating real value, but they were all extremely capable storytellers.

What would happen once a non-human intelligence becomes better than the average human at telling stories, composing melodies, drawing images, and writing laws and scriptures? When people think about ChatGPT and other new AI tools, they are often drawn to examples like school children using AI to write their essays. What will happen to the school system when kids do that? But this kind of question misses the big picture. Forget about school essays. Think of the next American presidential race in 2024, and try to imagine the impact of AI tools that can be made to mass-produce political content, fake-news stories and scriptures for new cults.

In recent years the QAnon cult has coalesced around anonymous online messages, known as “Q drops”. Followers collected, revered and interpreted these Q drops as a sacred text. While to the best of our knowledge all previous Q drops were composed by humans, and bots merely helped disseminate them, in future we might see the first cults in history whose revered texts were written by a non-human intelligence. Religions throughout history have claimed a non-human source for their holy books. Soon that might be a reality.

On a more prosaic level, we might soon find ourselves conducting lengthy online discussions about abortion, climate change or the Russian invasion of Ukraine with entities that we think are humans—but are actually AI. The catch is that it is utterly pointless for us to spend time trying to change the declared opinions of an AI bot, while the AI could hone its messages so precisely that it stands a good chance of influencing us.

Through its mastery of language, AI could even form intimate relationships with people, and use the power of intimacy to change our opinions and worldviews. Although there is no indication that AI has any consciousness or feelings of its own, to foster fake intimacy with humans it is enough if the AI can make them feel emotionally attached to it. In June 2022 Blake Lemoine, a Google engineer, publicly claimed that the AI chatbot LaMDA, on which he was working, had become sentient. The controversial claim cost him his job. The most interesting thing about this episode was not Mr Lemoine’s claim, which was probably false. Rather, it was his willingness to risk his lucrative job for the sake of the AI chatbot. If AI can influence people to risk their jobs for it, what else could it induce them to do?

In a political battle for minds and hearts, intimacy is the most efficient weapon, and AI has just gained the ability to mass-produce intimate relationships with millions of people. We all know that over the past decade social media has become a battleground for controlling human attention. With the new generation of AI, the battlefront is shifting from attention to intimacy. What will happen to human society and human psychology as AI fights AI in a battle to fake intimate relationships with us, which can then be used to convince us to vote for particular politicians or buy particular products?

Even without creating “fake intimacy”, the new AI tools would have an immense influence on our opinions and worldviews. People may come to use a single AI adviser as a one-stop, all-knowing oracle. No wonder Google is terrified. Why bother searching, when I can just ask the oracle? The news and advertising industries should also be terrified. Why read a newspaper when I can just ask the oracle to tell me the latest news? And what’s the purpose of advertisements, when I can just ask the oracle to tell me what to buy?

And even these scenarios don’t really capture the big picture. What we are talking about is potentially the end of human history. Not the end of history, just the end of its human-dominated part. History is the interaction between biology and culture; between our biological needs and desires for things like food and sex, and our cultural creations like religions and laws. History is the process through which laws and religions shape food and sex.

What will happen to the course of history when AI takes over culture, and begins producing stories, melodies, laws and religions? Previous tools like the printing press and radio helped spread the cultural ideas of humans, but they never created new cultural ideas of their own. AI is fundamentally different. AI can create completely new ideas, completely new culture.

At first, AI will probably imitate the human prototypes that it was trained on in its infancy. But with each passing year, AI culture will boldly go where no human has gone before. For millennia human beings have lived inside the dreams of other humans. In the coming decades we might find ourselves living inside the dreams of an alien intelligence.

Fear of AI has haunted humankind for only the past few decades. But for thousands of years humans have been haunted by a much deeper fear. We have always appreciated the power of stories and images to manipulate our minds and to create illusions. Consequently, since ancient times humans have feared being trapped in a world of illusions.

In the 17th century René Descartes feared that perhaps a malicious demon was trapping him inside a world of illusions, creating everything he saw and heard. In ancient Greece Plato told the famous Allegory of the Cave, in which a group of people are chained inside a cave all their lives, facing a blank wall that serves as a screen. On that screen they see various shadows projected. The prisoners mistake these illusions for reality.

In ancient India Buddhist and Hindu sages pointed out that all humans lived trapped inside Maya—the world of illusions. What we normally take to be reality is often just a fiction in our own minds. People may wage entire wars, killing others and being willing to be killed themselves, because of their belief in this or that illusion.

The AI revolution is bringing us face to face with Descartes’ demon, with Plato’s cave, with the Maya. If we are not careful, we might be trapped behind a curtain of illusions, which we could not tear away—or even realise is there.

Of course, the new power of AI could be used for good purposes as well. I won’t dwell on this, because the people who develop AI talk about it enough. The job of historians and philosophers like myself is to point out the dangers. But certainly, AI can help us in countless ways, from finding new cures for cancer to discovering solutions to the ecological crisis. The question we face is how to make sure the new AI tools are used for good rather than for ill. To do that, we first need to appreciate the true capabilities of these tools.

Since 1945 we have known that nuclear technology could generate cheap energy for the benefit of humans—but could also physically destroy human civilisation. We therefore reshaped the entire international order to protect humanity, and to make sure nuclear technology was used primarily for good. We now have to grapple with a new weapon of mass destruction that can annihilate our mental and social world.

We can still regulate the new AI tools, but we must act quickly. Whereas nukes cannot invent more powerful nukes, AI can make exponentially more powerful AI. The first crucial step is to demand rigorous safety checks before powerful AI tools are released into the public domain. Just as a pharmaceutical company cannot release new drugs before testing both their short-term and long-term side-effects, so tech companies shouldn’t release new AI tools before they are made safe. We need an equivalent of the Food and Drug Administration for new technology, and we need it yesterday.

Won’t slowing down public deployments of AI cause democracies to lag behind more ruthless authoritarian regimes? Just the opposite. Unregulated AI deployments would create social chaos, which would benefit autocrats and ruin democracies. Democracy is a conversation, and conversations rely on language. When AI hacks language, it could destroy our ability to have meaningful conversations, thereby destroying democracy.

We have just encountered an alien intelligence, here on Earth. We don’t know much about it, except that it might destroy our civilisation. We should put a halt to the irresponsible deployment of AI tools in the public sphere, and regulate AI before it regulates us. And the first regulation I would suggest is to make it mandatory for AI to disclose that it is an AI. If I am having a conversation with someone, and I cannot tell whether it is a human or an AI—that’s the end of democracy.

This text has been generated by a human.

Or has it?