John Naughton in The Guardian
“Those who cannot remember the past,” wrote the American philosopher George Santayana in 1905, “are condemned to repeat it.” And now, 118 years later, here come two American economists with the same message, only with added salience, for they are addressing a world in which a small number of giant corporations are busy peddling a narrative that says, basically, that what is good for them is also good for the world.
That this narrative is self-serving is obvious, as is its implied message: that they should be allowed to get on with their habits of “creative destruction” (to use Joseph Schumpeter’s famous phrase) without being troubled by regulation. Accordingly, any government that flirts with the idea of reining in corporate power should remember that it would then be standing in the way of “progress”: for it is technology that drives history and anything that obstructs it is doomed to be roadkill.
One of the many useful things about this formidable (560-page) tome is its demolition of the tech narrative’s comforting equation of technology with “progress”. Of course the fact that our lives are infinitely richer and more comfortable than those of the feudal serfs we would have been in the middle ages owes much to technological advances. Even the poor in western societies enjoy much higher living standards today than three centuries ago, and live healthier, longer lives.
But a study of the past 1,000 years of human development, Acemoglu and Johnson argue, shows that “the broad-based prosperity of the past was not the result of any automatic, guaranteed gains of technological progress… Most people around the globe today are better off than our ancestors because citizens and workers in earlier industrial societies organised, challenged elite-dominated choices about technology and work conditions, and forced ways of sharing the gains from technical improvements more equitably.”
Acemoglu and Johnson begin their Cook’s tour of the past millennium with the puzzle of how dominant narratives – like that which equates technological development with progress – get established. The key takeaway is unremarkable but critical: those who have power define the narrative. That’s how banks get to be thought of as “too big to fail”, or why questioning tech power is “luddite”. But their historical survey really gets under way with an absorbing account of the evolution of agricultural technologies from the neolithic age to the medieval and early modern eras. They find that successive developments “tended to enrich and empower small elites while generating few benefits for agricultural workers: peasants lacked political and social power, and the path of technology followed the vision of a narrow elite.”
A similar moral is extracted from their reinterpretation of the Industrial Revolution. This focuses on the emergence of a newly emboldened middle class of entrepreneurs and businessmen whose vision rarely included any ideas of social inclusion and who were obsessed with the possibilities of steam-driven automation for increasing profits and reducing costs.
The shock of the second world war led to a brief interruption in the inexorable trend of continuous technological development combined with increasing social exclusion and inequality. And the postwar years saw the rise of social democratic regimes focused on Keynesian economics, welfare states and shared prosperity. But all of this changed in the 1970s with the neoliberal turn and the subsequent evolution of the democracies we have today, in which enfeebled governments pay obeisance to giant corporations – more powerful and profitable than anything since the East India Company. These create astonishing wealth for a tiny elite (not to mention lavish salaries and bonuses for their executives) while the real incomes of ordinary people have stagnated, precarity rules and inequality has returned to pre-1914 levels.
Coincidentally, this book arrives at an opportune moment, when digital technology, currently surfing on a wave of irrational exuberance about ubiquitous AI, is booming, while the idea of shared prosperity has seemingly become a wistful pipe dream. So is there anything we might learn from the history so graphically recounted by Acemoglu and Johnson?
Answer: yes. And it’s to be found in the closing chapter, which comes up with a useful list of critical steps that democracies must take to ensure that the proceeds of the next technological wave are more generally shared among their populations. Interestingly, some of the ideas it explores have a venerable provenance, reaching back to the progressive movement that brought the robber barons of the early 20th century to heel.
There are three things that need to be done by a modern progressive movement. First, the technology-equals-progress narrative has to be challenged and exposed for what it is: a convenient myth propagated by a huge industry and its acolytes in government, the media and (occasionally) academia. The second is the need to cultivate and foster countervailing powers – which critically should include civil society organisations, activists and contemporary versions of trade unions. And finally, there is a need for progressive, technically informed policy proposals, and the fostering of thinktanks and other institutions that can supply a steady flow of ideas about how digital technology can be repurposed for human flourishing rather than exclusively for private profit.
None of this is rocket science. It can be done. And it needs to be done if liberal democracies are to survive the next wave of technological evolution and the catastrophic acceleration of inequality that it will bring. So – who knows? Maybe this time we might really learn something from history.
Tuesday, 2 May 2023
Political lobbyists are pretending to be NGOs & fooling tax dept.
Jaitirth Rao in The Print
There has been quite a bit of noise about the current dispensation being against what is referred to as “civil society”. One expects this kind of diatribe from illiberal Lefties. But such is the stranglehold of these ideas and ideologies that this slanted view has now started gaining wider traction. The principal objection seems to be that the Foreign Contribution Regulation Act 2010 is being weaponised against some NGOs. This and related issues are worth examining in some detail.
When the Congress-led UPA 2 introduced draconian provisions in the FCRA law in 2010, I had gone on record opposing it. My article on that issue is available in the public domain. I mention this because I want it to be clear that I am not the usual adversary — the “fascist” supporter of the FCRA.
The FCRA is supposed to regulate foreign contributions. It has a provision that if foreign funds are received by an NGO, then the latter is required to use them for its own charitable purposes. The funds are not to be diverted to other NGOs or charity organisations. Based on the advice of some dubious and clever chartered accountants, some NGOs, instead of making contributions to other non-profits — which they are now prohibited from doing — have come up with an “innovative” solution. They are “paying” other NGOs for “services”. These services are usually in the grey and ambiguous domain of “consultancy”. Now, clearly, the NGOs are trying to “indirectly” achieve what the law prohibits them from doing “directly”.
None of these NGOs are babes in the woods. They are acquainted with common law cases. There are hundreds of cases in the US, a country close to the purse strings of these NGOs, saying that it is impermissible to do indirectly what is not permitted directly. How can it be that if the Indian State invokes a common law principle so clearly enunciated in the US, it suddenly becomes a fascist enemy of decent NGOs? As it turns out, virtually all the regulatory action against foreign-funded NGOs has been for this reason.
Don’t tread where MNCs failed
As someone who has dealt with tax authorities in nine different countries over the last 49 years, let me assure the clever chartered accountants advising these NGOs that corporations and banks have been experimenting with these devices and playing with these loopholes for decades and have rarely, if ever, succeeded. The amateurish attempts by these NGOs to fool the tax department are going to get them nowhere. Where large multinational corporations (MNCs) have failed, NGOs should not tread.
Several ill-advised NGOs have gone one step further. They have tried to pretend that contributions received from their foreign donors have not been donations but payments for the elusive consultancy services rendered by their Indian arms to their foreign paymasters. Such obviously foolish attempts are bound to get them into trouble. There is no point in complaining after the fact.
Foreign-funded NGOs are welcome in our country if they wish to perform “charitable” acts like helping the visually challenged, the terminally ill, or the differently abled. As a country, we have been reasonably kind in supporting causes like leprosy alleviation or livelihood creation, even if the ultimate aim behind these good deeds has been religious proselytisation. In this regard, we have gone against the dictums of MK Gandhi who vociferously opposed “do-good” missionaries. But when foreign-funded NGOs start getting involved in political lobbying in India, we have a problem.
Some of us are old enough to remember that the Central Intelligence Agency (CIA) subsidiary, the NGO known as the Congress for Cultural Freedom, funded Indian magazines like Quest in the ’50s and ’60s. Some of us have also read the testimony of the KGB archivist Vasili Mitrokhin, which records that the Soviets regularly made sure that more copies of Russian translations of Hindi poets were printed and “sold” than their Hindi originals. This too happened in the ’50s, ’60s, and ’70s. Again, some of us remember that the head of the Ford Foundation in Delhi could get on to Jawaharlal Nehru’s calendar easily and that some of our tragicomic policy initiatives came from this august institution. Foreign-funded NGOs trying to tell us what taxation policies we should follow are really pushing their luck. And that is exactly what several of them have done before and are doing right now. Fortunately, one of them is now under a regulatory scanner. The Indian State, as is usually the case, has been dilatory. But better late than never.
The anti-State menace
Foreign-funded NGOs and foreign media have been against the Indian State and any strong dispensation for more than 70 years now. They prefer pusillanimous clientelist governments in India. They pilloried Panditji for his soft stance with the Soviets during the 1956 Hungarian revolution. They are now upset that we are not as anti-Russia as they would like. They have also made a devil’s bargain with blatantly Islamist organisations such as the US-based Council on American-Islamic Relations (CAIR).
This is why they prefer to refer to Indian Muslim gangsters as politicians. They talk of trigger-happy police officers in India. There are, of course, no such officers in the US. They prefer to characterise the Citizenship (Amendment) Act 2019 as obnoxious and anti-Muslim. I beg to differ. The Act is in favour of persecuted religious minorities in India’s neighbouring countries. These NGOs and the media do not bleed for Sikh shopkeepers, Hindu girls, and Parsis in our neighbourhood. They support the quixotic “farmers’” agitation in India when everybody knows that it was a “middle-man” affair. And they are silent about Canada’s blatant persecution of its truckers.
Let us now revert to our own domestic uncivil society. Under the previous dispensation, a bunch of impractical Lefties got together. They had never run factories or created jobs but managed to ingratiate themselves with the powers that were and became members of the pompous National Advisory Council (NAC). Their “advice” usually resulted in the active sabotage of the intelligent policies that Manmohan Singh was trying to implement. One feels sorry for Singh, who had to constantly look over his shoulder to avoid being bitten by this overweening Dracula. The combined NGO menace got so bad that the hapless former PM, in an interview with the journal Science, blamed American NGOs for sabotaging the India-US nuclear deal, which had the support of the elected governments of both countries.
The simple fact is that the so-called civil society NGOs, who had support from the NAC and who could defy Singh quite easily, are now defanged and stand without protection. All that they can do is write strong pieces in the English press in India and appeal to their patrons in foreign papers to give them some oxygen. There is an old English saying: “They say, let them say…”
Call them by their right name
It is interesting to note that for the illiberal Left, references to “civil society” almost invariably mean references to NGOs, many with explicit political agendas. Are Sangeetha Sabhas, Bhajan Mandalis, regional associations (like Kannada Sangha in Mumbai, Maratha Mandali in Chennai, Odiya Sahitya Sabha in Bengaluru, Durga Puja Association in Pune), and traditional charities (like the Red Cross, Saint Judes, National Association for the Blind) not part of civil society? If any of them run afoul of tax authorities, will there be any media coverage? The French traveller Alexis de Tocqueville makes reference to voluntary organisations as being central to the American democratic experience. To this day, more than three-quarters of the fire brigades in American small towns and suburbs are manned by volunteers. Churches and synagogues organise charitable activities. Rotary, Lions, and Giants clubs are part of civil society, as, oddly enough, is the Masonic Lodge.
All of these institutions derived their funding from members of their immediate physical communities. This is the civil society that de Tocqueville praised. He would be shocked if told that quasi-political lobbying groups who obtain money from foreign countries in order to influence American politics were to be referred to as members of the voluntary, citizen-supported civil society, which he held up as an exemplar of grassroots democracy.
We need to get our vocabulary right and refer to political lobbyists by their correct name. Our ancients told us that getting the right “nama-rupa” or “word and form” will automatically make our arguments solid. When we revert to that tradition, it will be clear that genuine members of civil society are not complaining. Political lobbyists are indulging in grievance-mongering, which I hope and pray we quietly ignore.
AI has hacked the operating system of human civilisation
Yuval Noah Harari in The Economist
Fears of artificial intelligence (AI) have haunted humanity since the very beginning of the computer age. Hitherto these fears focused on machines using physical means to kill, enslave or replace people. But over the past couple of years new AI tools have emerged that threaten the survival of human civilisation from an unexpected direction. AI has gained some remarkable abilities to manipulate and generate language, whether with words, sounds or images. AI has thereby hacked the operating system of our civilisation.
Language is the stuff almost all human culture is made of. Human rights, for example, aren’t inscribed in our DNA. Rather, they are cultural artefacts we created by telling stories and writing laws. Gods aren’t physical realities. Rather, they are cultural artefacts we created by inventing myths and writing scriptures.
Money, too, is a cultural artefact. Banknotes are just colourful pieces of paper, and at present more than 90% of money is not even banknotes—it is just digital information in computers. What gives money value is the stories that bankers, finance ministers and cryptocurrency gurus tell us about it. Sam Bankman-Fried, Elizabeth Holmes and Bernie Madoff were not particularly good at creating real value, but they were all extremely capable storytellers.
What would happen once a non-human intelligence becomes better than the average human at telling stories, composing melodies, drawing images, and writing laws and scriptures? When people think about ChatGPT and other new AI tools, they are often drawn to examples like school children using AI to write their essays. What will happen to the school system when kids do that? But this kind of question misses the big picture. Forget about school essays. Think of the next American presidential race in 2024, and try to imagine the impact of AI tools that can be made to mass-produce political content, fake-news stories and scriptures for new cults.
In recent years the QAnon cult has coalesced around anonymous online messages, known as “Q drops”. Followers collected, revered and interpreted these Q drops as a sacred text. While to the best of our knowledge all previous Q drops were composed by humans, and bots merely helped disseminate them, in future we might see the first cults in history whose revered texts were written by a non-human intelligence. Religions throughout history have claimed a non-human source for their holy books. Soon that might be a reality.
On a more prosaic level, we might soon find ourselves conducting lengthy online discussions about abortion, climate change or the Russian invasion of Ukraine with entities that we think are humans—but are actually AI. The catch is that it is utterly pointless for us to spend time trying to change the declared opinions of an AI bot, while the AI could hone its messages so precisely that it stands a good chance of influencing us.
Through its mastery of language, AI could even form intimate relationships with people, and use the power of intimacy to change our opinions and worldviews. Although there is no indication that AI has any consciousness or feelings of its own, to foster fake intimacy with humans it is enough if the AI can make them feel emotionally attached to it. In June 2022 Blake Lemoine, a Google engineer, publicly claimed that the AI chatbot LaMDA, on which he was working, had become sentient. The controversial claim cost him his job. The most interesting thing about this episode was not Mr Lemoine’s claim, which was probably false. Rather, it was his willingness to risk his lucrative job for the sake of the AI chatbot. If AI can influence people to risk their jobs for it, what else could it induce them to do?
In a political battle for minds and hearts, intimacy is the most efficient weapon, and AI has just gained the ability to mass-produce intimate relationships with millions of people. We all know that over the past decade social media has become a battleground for controlling human attention. With the new generation of AI, the battlefront is shifting from attention to intimacy. What will happen to human society and human psychology as AI fights AI in a battle to fake intimate relationships with us, which can then be used to convince us to vote for particular politicians or buy particular products?
Even without creating “fake intimacy”, the new AI tools would have an immense influence on our opinions and worldviews. People may come to use a single AI adviser as a one-stop, all-knowing oracle. No wonder Google is terrified. Why bother searching, when I can just ask the oracle? The news and advertising industries should also be terrified. Why read a newspaper when I can just ask the oracle to tell me the latest news? And what’s the purpose of advertisements, when I can just ask the oracle to tell me what to buy?
And even these scenarios don’t really capture the big picture. What we are talking about is potentially the end of human history. Not the end of history, just the end of its human-dominated part. History is the interaction between biology and culture; between our biological needs and desires for things like food and sex, and our cultural creations like religions and laws. History is the process through which laws and religions shape food and sex.
What will happen to the course of history when AI takes over culture, and begins producing stories, melodies, laws and religions? Previous tools like the printing press and radio helped spread the cultural ideas of humans, but they never created new cultural ideas of their own. AI is fundamentally different. AI can create completely new ideas, completely new culture.
At first, AI will probably imitate the human prototypes that it was trained on in its infancy. But with each passing year, AI culture will boldly go where no human has gone before. For millennia human beings have lived inside the dreams of other humans. In the coming decades we might find ourselves living inside the dreams of an alien intelligence.
Fear of AI has haunted humankind for only the past few decades. But for thousands of years humans have been haunted by a much deeper fear. We have always appreciated the power of stories and images to manipulate our minds and to create illusions. Consequently, since ancient times humans have feared being trapped in a world of illusions.
In the 17th century René Descartes feared that perhaps a malicious demon was trapping him inside a world of illusions, creating everything he saw and heard. In ancient Greece Plato told the famous Allegory of the Cave, in which a group of people are chained inside a cave all their lives, facing a blank wall. A screen. On that screen they see projected various shadows. The prisoners mistake the illusions they see there for reality.
In ancient India Buddhist and Hindu sages pointed out that all humans lived trapped inside Maya—the world of illusions. What we normally take to be reality is often just fictions in our own minds. People may wage entire wars, killing others and willing to be killed themselves, because of their belief in this or that illusion.
The AI revolution is bringing us face to face with Descartes’ demon, with Plato’s cave, with Maya. If we are not careful, we might be trapped behind a curtain of illusions, which we could not tear away—or even realise is there.
Of course, the new power of AI could be used for good purposes as well. I won’t dwell on this, because the people who develop AI talk about it enough. The job of historians and philosophers like myself is to point out the dangers. But certainly, AI can help us in countless ways, from finding new cures for cancer to discovering solutions to the ecological crisis. The question we face is how to make sure the new AI tools are used for good rather than for ill. To do that, we first need to appreciate the true capabilities of these tools.
Since 1945 we have known that nuclear technology could generate cheap energy for the benefit of humans—but could also physically destroy human civilisation. We therefore reshaped the entire international order to protect humanity, and to make sure nuclear technology was used primarily for good. We now have to grapple with a new weapon of mass destruction that can annihilate our mental and social world.
We can still regulate the new AI tools, but we must act quickly. Whereas nukes cannot invent more powerful nukes, AI can make exponentially more powerful AI. The first crucial step is to demand rigorous safety checks before powerful AI tools are released into the public domain. Just as a pharmaceutical company cannot release new drugs before testing both their short-term and long-term side-effects, so tech companies shouldn’t release new AI tools before they are made safe. We need an equivalent of the Food and Drug Administration for new technology, and we need it yesterday.
Won’t slowing down public deployments of AI cause democracies to lag behind more ruthless authoritarian regimes? Just the opposite. Unregulated AI deployments would create social chaos, which would benefit autocrats and ruin democracies. Democracy is a conversation, and conversations rely on language. When AI hacks language, it could destroy our ability to have meaningful conversations, thereby destroying democracy.
We have just encountered an alien intelligence, here on Earth. We don’t know much about it, except that it might destroy our civilisation. We should put a halt to the irresponsible deployment of AI tools in the public sphere, and regulate AI before it regulates us. And the first regulation I would suggest is to make it mandatory for AI to disclose that it is an AI. If I am having a conversation with someone, and I cannot tell whether it is a human or an AI—that’s the end of democracy.
This text has been generated by a human.
Or has it?
Thursday, 27 April 2023