Generative AI looks set to face multiple ‘Sputnik’ moments, and the disruption caused by DeepSeek may not be the last of them. DeepSeek could shake up the AI industry not only because it surpassed OpenAI’s ChatGPT to become the most downloaded free app on Apple’s App Store, but also because the Chinese chatbot is open source, meaning anyone can copy, download and build on it. With DeepSeek winning on price and access, news has now arrived that could leave OpenAI chief Sam Altman and other heavyweights sleepless. Backed by some 20-odd researchers, including former employees of OpenAI, Meta and Mistral, a former OpenAI hotshot has launched a startup that aims to develop AI systems aligned with human values.
Mira Murati, former chief technology officer of OpenAI, has launched Thinking Machines Lab. She is not the first at OpenAI or other AI giants to clash with the philosophies of the field’s leviathans. Many AI enthusiasts contend that these big entities have lost a lot of money and are not sustainable in the long run. AI as developed by Big Tech is seen as environmentally unfriendly and as having biases built into its systems. Several of Altman’s colleagues had clashed with him over the direction of his business and his philosophy, and reports suggested Murati had raised questions about Altman’s priorities. OpenAI co-founder Ilya Sutskever has also launched a startup with former OpenAI researcher Daniel Levy. Analysts say a change is sweeping the tech world, with startups ready to compete with entrenched players in the race to build more efficient and humane AI technologies.
A Collaborative Model
Thinking Machines Lab builds multimodal systems to work collaboratively with people. “We are scientists, engineers, and builders who’ve created some of the most widely used AI products, including ChatGPT and Character.ai, open-weights models like Mistral, as well as popular open-source projects like PyTorch, OpenAI Gym, Fairseq, and Segment Anything,” the website says.
Safety First
The website says its approach embraces co-design of research and products, enabling learning from real-world deployment and rapid iteration. According to the company, “This work requires three core foundations: state-of-the-art model intelligence, high-quality infrastructure, and advanced multimodal capabilities.” The company says it will contribute to AI safety by preventing misuse of its models while maximising users’ freedom.
Lost In Translation
According to the company, while AI capabilities have advanced dramatically, key gaps remain. The scientific community’s understanding of frontier AI systems lags behind their rapidly advancing capabilities, it argues. “Knowledge of how these systems are trained is concentrated within the top research labs, limiting both the public discourse on AI and people’s ability to use AI effectively,” it says.