AI poses an exceptional threat to human identity
Makarand R Paranjape | 09 Jun, 2023
(Illustration: Saurabh Singh)
WHAT IF I TOLD you, at the very outset, that this column was not written by me but generated by an Artificial Intelligence (AI) programme, more popularly known as a chatbot? Would that make a difference to its content, or, more importantly, to the impression it makes on you? Do not fear. I can assure you that it is not AI but just plain old “I” who has written this column. But you may ask, “How do I know?” That is precisely the point. Luckily, at least right now, just as there are chatbots to generate text, there are also programmes to verify whether a given text is human or AI. You can check for yourself whether this column is machine-generated or written by a human.
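(A technical aside for the curious reader: the short sketch below illustrates one heuristic some such verifiers rely on, namely how “predictable” a passage is, its perplexity, under a small open language model. This is only a minimal sketch in Python, assuming the freely available GPT-2 model and the transformers library; it is not how any particular detector actually works, and such heuristics remain far from reliable.)

    # Minimal sketch: score a passage's perplexity under GPT-2.
    # Lower perplexity = more predictable text, which some detectors
    # treat as weak evidence of machine authorship. Illustrative only.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            # The model's loss is the mean negative log-likelihood per
            # token; exponentiating it gives the perplexity.
            out = model(enc.input_ids, labels=enc.input_ids)
        return float(torch.exp(out.loss))

    print(perplexity("What if I told you this column was not written by me?"))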
But the difference between the two, as Large Language Models (LLMs) become more and more powerful, is bound to diminish, to the point that it might become impossible to tell which is which, and who is who. Will that erase the difference between humans and machines altogether? The devastating ending of the original science fiction classic Blade Runner (1982) brings back to mind the unbearable sadness of realising that the one you love is not a human but a replicant.
The irony is doubly sharp for the protagonist, Rick Deckard, a “blade runner” played by Harrison Ford, tasked with hunting down replicants and destroying them, precisely because they pose a threat not so much to humanity as to the very meaning of being human. But Rachael, his lady love, is so convincing that she eludes all his tests till the very end. The replicant he loves and must lose is a special type, implanted with human memories, capable even of becoming pregnant. What, then, is the difference between her and a real woman?
This question has already come to the fore with Roy, one of the most ferocious replicants that Deckard chases, who is actually in a position to kill him. Even as Deckard hangs from the proverbial cliff, in this case a skyscraper, easy to slay after their gladiatorial struggle between man and machine, Roy pulls him up to save him. Why? Not only because, as he says in the now cult-classic 42-word exit monologue, his time is up:
“I’ve seen things you people wouldn’t believe… Attack ships on fire off the shoulder of Orion… I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain… Time to die.”
No, it is not only because Roy’s memories will be lost, like “tears in rain.” After all, our memories will also be lost. We will die too. It is because, paradoxically, Roy, in saving Deckard, the man who has been trying to kill him and has already killed his girlfriend Pris, proves more humane than Deckard himself. This reversal, what Aristotle termed peripeteia, exemplifies the paradox and enigma of AI. Will it be more human than us? Will it, eventually, replace us?
Since the beginning of this year, Open has been doing a number of stories on Artificial Intelligence. For instance, ‘Artificial Intelligence at Your Command’ (shorturl.at/lwCO8). But in the last few months, the level of questions, concerns, and alarm has changed remarkably. A spate of articles, talks, and podcasts, along with a remarkable open letter issued by the Future of Life Institute on March 22, 2023, signed by over 30,000 netizens and calling on AI labs to ‘Pause Giant AI Experiments’, forces us to confront a Terminator-type question: does AI challenge, even foreclose, the very future of humanity itself?
An alarming salvo in this direction was fired on May Day 2023 by Geoffrey Hinton, often called the “Godfather” of AI, who had just quit tech giant Google after nearly a decade so that he could speak out freely against the dangers of AI, especially the speed at which it is progressing. His warning is not that AI has human-like abilities that can compete with, or even overtake, us. Rather, it is that AI embodies an intelligence unlike ours, a new form of intelligence that poses exceptional threats to what it means to be human.
The open letter from the Future of Life Institute went further:
“AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control.”
What makes the letter special is that its first signatories include Yoshua Bengio, founder and scientific director at Mila; Stuart Russell, professor of computer science and director of the Center for Intelligent Systems at Berkeley, and co-author of the standard textbook Artificial Intelligence: A Modern Approach; Bart Selman, professor of computer science at Cornell and past president of AAAI; Elon Musk, CEO of SpaceX, Tesla and Twitter; Steve Wozniak, co-founder of Apple; Yuval Noah Harari, author and professor at the Hebrew University of Jerusalem; Emad Mostaque, CEO of Stability AI; John J Hopfield, professor emeritus at Princeton University and inventor of associative neural networks; Connor Leahy, CEO of Conjecture; and Jaan Tallinn, co-founder of Skype, among several other movers and shakers.
Why would those who profit most from AI want to slow it down, to the point of pressing the pause button on its progress? Their worries include AI posing “a major threat to humanity,” “AI outsmarting humans,” and “taking control of civilisation.” They claim we should ideally strive “to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”
Do thinking machines pose a threat to humanity? With the new AI technology, language turns into code, and code into language. Will such “ubiquitous pseudo cognition… be a turning point in history?” This is the question posed by The Economist on April 22, 2023. And the simple answer? “A world which contained entities which think better and act quicker than humans and their institutions, and which had interests that were not aligned with those of humankind, would be a dangerous place.” Large language models, which are both “foundational” and “generative,” have propelled anxieties about the existential risk AI poses to humanity. Will bots finally replace us, as several dystopian science fiction texts have forewarned?
But there is a fundamental problem here: to regulate something, you first have to define it. Neither experts nor governments are clear about what exactly constitutes AI. Computing capacity, even when it speeds up enormously, is essentially binary, while intelligence is not. Regulatory design and implementation therefore remain quite challenging when it comes to AI. A spectrum of state and regulatory responses has emerged, from the largely hands-off US approach to the “light touch” of the UK, to greater regulation in the EU, to overregulation and state control in China. Even the open letter calling for a pause does not care to define AI; it takes for granted that all of us know what is meant by it and dwells instead on outcomes and consequences.
In the next part of this column, I will list some of the more immediate threats that AI poses to us. But right now, what we are forced to confront is what it means to have an alien intelligence in our midst. Can it supplant us altogether, rendering us obsolete? After all, we have done precisely that to the competing species closest to us, including our cousins, the Neanderthals.
To attempt an answer, let me return to the question I raised at the beginning. What if this column were written by a machine and not a human? The truth is that very soon it will be impossible to tell the difference. Deep cognition, which is an exceptional human trait, and pseudo cognition, which LLMs are capable of engendering even as I write this, will become indistinguishable.

No one will be able to tell them apart, let alone tear them apart.