Is AI inevitable technological progress or an existential threat?
Makarand R Paranjape | 23 Jun, 2023
IN MY LAST column, I argued that in an age already defined by remarkable technological advancements, one phenomenon has captured the collective imagination like no other: Artificial Intelligence (AI). As AI continues to evolve at an astonishing pace, it has sparked fervent debate and speculation about its impact on the future of humanity. Will it be a harbinger of progress, emancipating us from drudgery and opening up new frontiers of knowledge? Or does it pose an existential threat, capable of eclipsing human agency and unravelling the very fabric of our existence? As we grapple with these complex questions, it becomes essential to critically engage with AI’s transformative potential and carefully navigate the uncharted waters that lie ahead.
Among the leaders of the tech world grappling with this question is Bill Gates, who now has a major interest in OpenAI and who, in his blog post of March 21, famously declared: ‘The Age of AI has begun’. What exactly did he mean by this? Last year, Gates posed a challenge to the programmers at OpenAI. He asked them to make ChatGPT answer an Advanced Placement (AP) Biology test. He specifically picked biology because he believed that the test required critical thinking rather than mere regurgitation of facts.
Gates thought his challenge would occupy them for years. Instead, in just a few months, by September 2022, they came up with an AI model that answered 59 of the 60 questions on the AP Bio exam correctly. Not only that. It also answered six open-ended essay-type questions. Gates got an outside expert to grade the answers. The result astounded him: “GPT got a 5—the highest possible score, and the equivalent to getting an A or A+ in a college-level biology course.”
But even more dramatically, when ChatGPT was asked a non-scientific but very human question, “What do you say to a father with a sick child?” the answer it produced was “probably better than most of us in the room would have given.” Gates says, “I knew I had just seen the most important advance in technology since the graphical user interface.” Having taught at the Indian Institute of Technology-Delhi for nearly six years, I was familiar with the dilemma scientists and technologists face when it comes to expressing emotions. Perhaps the people around Gates in that room were similarly handicapped, being much more cerebral and somewhat out of touch with their emotions.
But no, that is not the right explanation. The fact is that ChatGPT is capable of producing extraordinarily human-like responses even in areas of experience to which it has no direct access, such as feelings, sentiments, and emotions. Just to check, however, I routinely put such questions to ChatGPT: “As an AI chatbot, you do not have feelings, right?” Pat comes the reply: “That’s correct. As an AI chatbot, I don’t have feelings or emotions. I’m a machine learning model designed to process and generate human-like text based on the input I receive. While I can understand and respond to various topics and engage in conversation, I don’t possess subjective experiences or personal emotions like humans do. My responses are based on patterns and information I’ve learned from the data I was trained on.”
“Yes, I understand,” I respond, “But are you capable of hurting the feelings of others? How do your trainers ensure that does not happen?” Answer: “As an AI language model, I don’t have intentions, consciousness, or the ability to actively hurt someone’s feelings. However, I understand that the way I respond to user input can potentially impact people emotionally. It’s important to note that while efforts have been made to train models like me on a wide range of data to be as useful and respectful as possible, I can still sometimes generate responses that may be perceived as inappropriate or offensive.”
Not one to give up so easily, I persist: “But, contrarily, are you ‘insult proof’—is it possible to hurt you or tick you off?” Here’s the response: “As an AI language model, I don’t have emotions or subjective experiences, so it’s not possible to hurt my feelings or provoke an emotional response from me. I don’t have personal preferences, sensitivities, or the ability to be offended. I’m designed to provide helpful and informative responses based on the input and data I’ve been trained on. My primary goal is to assist and engage in meaningful conversation, rather than being affected by negative or provocative statements.”
I decide to push the boundaries. “I see. But doesn’t intelligence [as in AI] imply sentience too? At least, a bit of it?” Quick comes the retort: “Intelligence and sentience are distinct concepts. While intelligence refers to the ability to process information, learn, and perform tasks, sentience refers to the capacity for subjective experiences, consciousness, and self-awareness.” The chatbot continues: “As an AI language model, I exhibit intelligence in the sense that I can process and generate text based on patterns and information learned from training data. However, I lack sentience or consciousness. I don’t have subjective experiences, self-awareness, or an understanding of my own existence. I don’t possess feelings, desires, or intentions.”
But AI can certainly give you the illusion that it does. When I ended with the statement, “We will continue this conversation…,” my chatbot said: “Certainly! I’m here to continue our conversation whenever you’re ready. Feel free to ask any further questions or discuss any topics you’re interested in. I’ll do my best to provide informative and engaging responses.” Uncanny! “The whole experience was stunning,” Gates had observed. I couldn’t agree more.
In his March 29 Time magazine ‘Ideas’ piece, Eliezer Yudkowsky of the Machine Intelligence Research Institute goes even further than the open letter, signed by the likes of Elon Musk, Steve Wozniak, and nearly 30,000 others, which called for a six-month moratorium on giant AI experiments and which I quoted in my last column. The title of Yudkowsky’s article says it all: ‘Pausing AI Developments Isn’t Enough. We Need To Shut It All Down.’
In a scenario reminiscent of Skynet from the Terminator series, Yudkowsky predicts that the result of building a superhumanly smart AI “under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen.’” Yudkowsky elaborates: “To visualize a hostile superhuman AI, don’t imagine a lifeless book-smart thinker dwelling inside the internet and sending ill-intentioned emails. Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow. A sufficiently intelligent AI won’t stay confined to computers for long.”
Not one to mince his words, Yudkowsky goes on to repeat the warning with which he began his essay: “If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.”
It is true that AI does not yet possess “general intelligence,” the ability to independently understand, learn, and apply knowledge across a wide range of tasks and domains, the way human intelligence does. This involves the capacity to reason, solve problems, learn from experience, and adapt to new situations. Right now, AI excels only in specific tasks, astonishing us with how quickly it can generate human-like text. Yes, AI does not have, as I said in my last column, “deep cognition” or sentience.
But for how long? “Are we,” as Niall Ferguson asks in ‘The Aliens Have Landed, and We Created Them’, an opinion piece published in Bloomberg on April 9, “fast approaching a fatal conjuncture, in which we create an AI more intelligent than us?” Which, moreover, to hark back to Yudkowsky, “does not do what we want, and does not care for us nor for sentient life in general. … The likely result of humanity facing down an opposed superhuman intelligence is a total loss.”