National Intelligence
Sharad Sharma | 25 Oct, 2024
(Illustration: Saurabh Singh)
A FOOD DELIVERY STARTUP WAS RECENTLY in the news for slashing its customer service workforce by 60 per cent thanks to AI-driven automation, with chatbots resolving 90 per cent of customer issues. Against that backdrop, the notion that AI might be a net positive for jobs remains speculative.
When cloud software became available, it suddenly allowed smaller companies to consume software. Before that, if you wanted an email server, you needed to hire an IT person to run it, so only big companies could have even the basic capability of email. Then came the cloud, and everybody could access good email. The number of companies that could consume software expanded tremendously. Although the cloud reduced the price of on-premise enterprise software, it created another market, SaaS (Software-as-a-Service). So, overall, things were positive. Now, is that true of AI? Everyone using any software will end up using AI-enabled software, so by definition you will see a reduction in price all over again. It is unclear whether the many new startups will benefit from this in the long run, because anyone who would use AI is already using cloud software and will expect their existing providers to offer AI capabilities.
Wealth comes from solving problems, and these problems fall into three categories when it comes to AI—safety, diffusion, and strategic autonomy. There is money to be made in all three.
Diffusion is about how AI finds its way into various sectors and applications. Strategic autonomy is about national security and other critical sectors. The conventional wisdom is that new startups will make money in diffusion, but that runs counter to what we just discussed. Let us look at safety, for instance. Today, India is the cradle of humanity, producing 23 million babies a year, as many as the next five countries, including China, put together. The largest number of 15-year-olds is in India. AI can uplift these children, but they are also vulnerable to its distractions, whether pornography, gambling, or gaming. Is it not possible that you could have a lost generation? Jonathan Haidt claims that it has already happened in the West. If you are to prevent children from becoming vulnerable, you have to take some agency away from them when they are online. To whom do you give that agency? If you are in China, you give it to the state. In the West, you give it to teachers. If you are in India, you need to give it to the parents. This is a different way to solve the problem of safety in India. Such a system will have to be built locally, which will be a source of value creation and has the potential to create wealth.
AI will create value in all three areas, and diffusion, the subject of all conversation these days, is the weakest for new startups. AI will create only a few new players, because existing players can provide AI-enabled software to their existing customers. To compete, you have to break away from relying on existing foundation models and build your own, which requires a lot of money. Unlike SaaS, this is not a garage startup opportunity.
The other piece of the puzzle is data. When OpenAI came out with GPT-3, it was pathbreaking because they had taken public data but thrown more computing at it than anyone else had the means to do at the time. But they knew when launching it that this was not sustainable: anyone could copy it. So, to create GPT-4, they knew they would have to differentiate by feeding it proprietary datasets that competitors did not have. They did that, and that is what makes GPT-4 better. The question is: how does one unlock training datasets in India? In the US, the legal system forces dataset users to be careful with privacy, with large penalties imposed through bilateral contracts. In India, a different approach is needed. This is the Data Empowerment and Protection Architecture (DEPA), part of India Stack. DEPA Training is India's way of unlocking its continental datasets.
There are companies like Sarvam AI that are trying to build a model that will be distinctly superior for Indian applications, whether in language translation, the judiciary, or health. It is still early days. There are two types of models: reasoning models and large language models (LLMs). The latter's output will be more creative, but, in a domain like chess, it could also include crazy, illegal moves. So you submit that output to a second model, which tells you whether each move is legal. Two models work in tandem: the first is an LLM, and the second is a pruning model. It is easy to prune for chess or protein folding, where you know the rules. But can you do this for other content, where there is typically an Overton window? Take gay marriage, for instance: it would have been outside the Overton window in the past, but it is very much within the window today. What falls inside the Overton window changes over time in human culture, and an AI system asked to make a decision tends to rely on the zeitgeist captured in its training data. AI cannot do a perfect job of this yet. Counterintuitively, AI systems slow down the evolution of culture. They take the past and use it to weed out what would have been unacceptable, but this is a problem because the window keeps shifting: what was not acceptable yesterday is acceptable today. Ultimately, all AI solutions will require both generative and reasoning components. We know now that generative AI systems are not good at reasoning. They are not the panacea they were expected to be a year ago.
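The two-model pattern described above, a creative generator whose output is filtered by a rule-checking pruner, can be sketched in a few lines. This is a purely illustrative toy: the "LLM" here is a random move proposer and the pruner is a hardcoded rulebook, not any real AI system.

```python
import random

def generate_candidates(n=5, seed=0):
    """Stand-in for an LLM: proposes chess moves creatively,
    without checking the rules, so some may be illegal."""
    rng = random.Random(seed)
    pool = ["e2e4", "g1f3", "e1e8", "d2d4", "a1a9", "b1c3"]
    return [rng.choice(pool) for _ in range(n)]

# Tiny hardcoded "rulebook" of legal opening moves (illustrative only).
LEGAL_OPENING_MOVES = {"e2e4", "d2d4", "g1f3", "b1c3"}

def prune(candidates):
    """Stand-in for the reasoning/pruning model: keeps only
    candidates that conform to the known rules."""
    return [m for m in candidates if m in LEGAL_OPENING_MOVES]

# Generator proposes freely; pruner enforces legality.
legal = prune(generate_candidates())
print(legal)
```

Pruning works here because chess has explicit rules; as the passage notes, no such fixed rulebook exists for culturally contested content, which is why the same pattern breaks down there.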
If I want dietary advice, I can lean on generative AI (GenAI), but if I want surgical advice, relying on it could get me killed. We hope India will become good at developing advanced reasoning models, such as neurosymbolic models, which can be used along with GenAI to come up with more meaningful solutions. Hopefully, India will be at the cutting edge of AI, but not in the way people imagine today. It will not be in the form of garage startups but of new AI models and solutions for Indian problems of child safety and strategic autonomy.
(As told to V Shoba)