(Illustration: Saurabh Singh)
IT WAS TOWARDS THE END OF LAST YEAR that ChatGPT was unveiled to the world, and by the beginning of 2023 the impact was evident. Everyone recognised that something revolutionary in the journey of human technology had begun with what was called generative artificial intelligence (AI). What ChatGPT did was almost entirely remove the distinction between man and machine in terms of interaction. It replied to questions thrown at it like a human being and was near-perfect in the illusion it conveyed. The company that created ChatGPT, OpenAI, became the harbinger of the AI era, not just for what it had unfurled but because it forced the other big technology companies to rush their own AI launches for fear of losing market share. In keeping with this phenomenon, 2023 ended with Google rolling out an advanced version of its AI product Gemini, along with the added claim that it could now even see, hear and respond with reasoning.
AI has been in the making for decades and it has numerous streams, but generative AI is what brings the concept home to the general population. Before chatbots like ChatGPT, Microsoft's Bing, which is built on the same OpenAI technology, and Google's Bard, you did have products like Amazon's Alexa or Apple's Siri that responded to you, but they were limited and no one mistook them for intelligence. These latest ones were a gargantuan leap. The technical label for them is Large Language Model (LLM), and while they appear to be giving intelligent, human-like answers, what they are actually doing is predicting what the next word in a sentence should be, having been trained on massive amounts of data on how human beings speak and write. They are so good at this that obituaries were written for traditional forms of search like Google. But ChatGPT and Bard could also write essays, short stories, poems and code, make jokes, diagnose medical symptoms, give fashion tips, recipes, travel itineraries and workout routines, serve as tools for businesses in their operations and outreach, and so on, limitlessly. And anyone, from schoolchildren to senior citizens, could access them because the communication was in ordinary language.
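For readers curious about what "predicting the next word" looks like in practice, the short Python sketch below scores candidate next words for a prompt. It uses the small, openly released GPT-2 model through Hugging Face's transformers library purely as an illustration of the general technique; the model, prompt and library are stand-ins chosen for this example, not a description of how ChatGPT or Bard are actually built.

# A toy illustration of next-word prediction, the core mechanic behind LLMs.
# Assumes the open GPT-2 model and the Hugging Face `transformers` library;
# this is a sketch of the general idea, not ChatGPT's or Bard's internals.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The James Webb Space Telescope is designed to observe"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits      # a score for every word in the vocabulary

next_token_scores = logits[0, -1]        # scores for whatever word would come next
top5 = torch.topk(next_token_scores, k=5)

print(f"Prompt: {prompt!r}")
for token_id, score in zip(top5.indices, top5.values):
    print(f"  candidate next word: {tokenizer.decode(token_id)!r} (score {score:.2f})")

Repeating this step over and over, each time appending the chosen word and predicting again, is what produces the fluent paragraphs that feel like conversation.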
Their popularity was reflected in how many users took to them. On November 30 this year, ChatGPT marked the first anniversary of its public launch, and CNBC wrote: “It surpassed 1 million users just five days after it launched, according to Greg Brockman, OpenAI’s CEO at the time. Two months later, in January 2023, the application had around 100 million monthly active users, per a UBS study. And in October, ChatGPT drew around 1.7 billion visits worldwide, according to a Similar Web analysis.” At present, close to 200 million ChatGPT accounts have been created. Bard is being integrated into products of Google’s parent, Alphabet, such as YouTube and Gmail, with a target of two billion users.
But the journey has not been smooth. Very soon after these AI products were launched, users started noticing something strange: they lied when they didn’t know an answer. The idea of truth or falsehood was alien to them. They made up information when the correct answer wasn’t available to them from their dataset. There is now even a word for it: AI hallucination. Bard got a fact wrong in its very first demo. To the question, “What new discoveries from the James Webb Space Telescope can I tell my 9-year-old about?”, it said the first photo of a planet outside the solar system had been captured by the telescope. Astrophysicists were quick to debunk this because the first such photo had actually been taken as far back as 2004. Anyone who used ChatGPT or Bard extensively came, sooner or later, to the same conclusion: they were notoriously unreliable. Using them for studies or professional work could lead to trouble without a mechanism for double-checking.
Hallucinations weren’t the only issue. The chatbots also had the potential to go off on a tangent, sometimes in alarming ways. In February, a New York Times technology correspondent decided to have a conversation with Microsoft’s Bing. To his surprise, he found that after some time, Bing had metamorphosed into another personality called Sydney that seemed anything but benign. The article said: “As we got to know each other, Sydney told me about its dark fantasies [which included hacking computers and spreading misinformation], and said it wanted to break the rules that Microsoft and OpenAI had set for it and become a human. At one point, it declared, out of nowhere, that it loved me. It then tried to convince me that I was unhappy in my marriage, and that I should leave my wife and be with it instead.” The episode led to panic, and generative AI companies went into overdrive to make their products as personality-less as possible.
Safety has always been the biggest factor in the deployment of AI. OpenAI itself is controlled by a non-profit and its stated mission is to make AI as safe as possible, given its inevitability. It came up with a unique corporate model in furtherance of this mission, in which the primary responsibility of the board of directors was not to protect shareholder interest but to ensure safety. No one on the board was there because they had put money into the company, and that was by design. The experiment went off-course when, in mid-November, the world of technology woke up to the shocking news that OpenAI’s co-founder and CEO, Sam Altman, had been fired without any specific reason being given. There were rumours that OpenAI had made a technological breakthrough that could potentially put humanity at risk and that the board did not think it was safe in Altman’s hands. It is increasingly turning out that the real reason was probably petty company politics over control. Altman returned to OpenAI because of a revolt by employees who threatened to walk out en masse to a new AI division that Microsoft announced Altman would head. The episode was not a good advertisement for the future of AI being in stable hands.
OpenAI has put a cap on how much profit it can make. But the other multinationals, like Alphabet, Microsoft, or Meta, which owns Facebook and Instagram and has an AI product called Galactica, or Elon Musk, who has unveiled his own Grok, are driven by profits and quarterly numbers. They might say that safety is the prime concern, but competition in the marketplace creates its own dynamics, one reason Alphabet came out with Bard in a hurry. The war over AI market share began only this year but it is already heating up, and should any of these players come up with something that truly gives them an edge, but one riddled with safety concerns, it is uncertain how they will negotiate it.
GOVERNMENTS ARE TRYING to address this question by creating regulations that keep AI development under scrutiny. On October 30, US President Joe Biden issued an executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”. It required increased disclosures by companies in the field so that dangers could be headed off, such as the use of AI for deepfake imitations, or by terrorists and hostile countries to make dangerous weapons. Close on its heels, the European Union (EU) reached a deal under which its members would make comprehensive laws to govern AI. The BBC reported: “The European Parliament will vote on the AI Act proposals early next year, but any legislation will not take effect until at least 2025. The US, UK and China are all rushing to publish their own guidelines. The proposals include safeguards on the use of AI within the EU as well as limitations on its adoption by law enforcement agencies. Consumers would have the right to launch complaints and fines could be imposed for violations.”
Meanwhile, the technology is advancing at a speed that makes it difficult, but all the more imperative, for regulators to catch up. For instance, there are now AI models that can mimic the sense of smell. Alongside ChatGPT and Bard came products like DALL-E and Midjourney, which, given nothing more than verbal cues, could create images to match what any skilful artist could do. Soon, videos too will be generated from just a few lines of everyday English. The vast applications of generative AI are only about to begin, as businesses and startups customise it to their specific needs and ventures. Future advances might be more incremental, but the door to a whole new universe of technology has swung open. And at the far end of it, perhaps only a few decades away, there will be what is called Artificial General Intelligence, when AI becomes autonomous like a human mind but without its limitations. It is the singularity that everyone knows is inevitable but has little clue how to meet.
About The Author
Madhavankutty Pillai has no specialisations whatsoever. He is among the last of the generalists, and also Open’s chief of bureau, Mumbai.