Geoffrey Hinton won the 2024 Physics Nobel along with John Hopfield (Photo: Alamy)
A THOUSAND YEARS FROM NOW, when human beings look back at these decades, will it be like the moment when the wheel was invented? Or when some caveman first chanced upon making fire? The Nobel Prizes awarded this year for both Chemistry and Physics suggest that it is possible. The winning work in both categories was related to artificial intelligence (AI). The Chemistry prize had to do with the landmark AlphaFold project, in which AI could predict the structure of proteins, shaving off what would earlier have taken extraordinary amounts of time, resources and effort. That in turn opened up uses for these predicted proteins, like designing molecules for new medicines. It was, however, what the Physics Nobel was awarded for that underpinned the work of AlphaFold. When those in the future look back to the beginning of AI, the decade they will zero in on is the 1980s, because that was when Geoffrey Hinton, one of the two Physics Nobel winners, laid its foundations.
Hinton is called the Godfather of AI. The Nobel committee’s announcement of why he deserved the award referenced a method he discovered called the Boltzmann machine. It read: “This can learn to recognise characteristic elements in a given type of data. Hinton used tools from statistical physics, the science of systems built from many similar components. The machine is trained by feeding it examples that are very likely to arise when the machine is run. The Boltzmann machine can be used to classify images or create new examples of the type of pattern on which it was trained. Hinton has built upon this work, helping initiate the current explosive development of machine learning.”
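The statistical physics the committee refers to can be made concrete. A Boltzmann machine assigns every joint state of its binary units an energy, and the probability of a state falls off exponentially with that energy, exactly as in Boltzmann’s physics; states whose statistics match the training data end up with low energy and so become the most probable. The sketch below is a minimal illustration of that energy-to-probability idea only, not Hinton’s actual learning procedure; the network size, the weights and every name in it are made up for this example.

```python
# Minimal sketch: the energy function and Boltzmann distribution
# behind a Boltzmann machine. Illustrative only -- the weights here
# are random, not learned from data as in a trained machine.
import math, random, itertools

random.seed(1)
n = 4  # four binary units, small enough to enumerate every state

# Symmetric weights between units; in a trained machine these would
# encode the statistics of the training examples.
w = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        w[i][j] = w[j][i] = random.uniform(-1, 1)

def energy(state):
    # E(s) = -sum over pairs of w_ij * s_i * s_j
    return -sum(w[i][j] * state[i] * state[j]
                for i in range(n) for j in range(i + 1, n))

# Boltzmann distribution: P(s) is proportional to exp(-E(s)).
states = list(itertools.product([0, 1], repeat=n))
z = sum(math.exp(-energy(s)) for s in states)   # partition function
probs = {s: math.exp(-energy(s)) / z for s in states}

best = max(probs, key=probs.get)
print(best, probs[best])  # the lowest-energy state is the most likely
```

Because probability decreases monotonically with energy, the most probable state printed at the end is always the lowest-energy one; training a real Boltzmann machine amounts to adjusting the weights so that the patterns in the data become those low-energy states.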
AI is underpinned by the idea of neural networks. This is software trying to behave like the brain. Work on it had been ongoing since the 1950s, but it never seemed to deliver on its promise. It could not mimic the efficiency of the brain in even rudimentary ways. By the 1970s, the field had lost its appeal among researchers and was thought of as something of a dead end. But then Hinton got interested, arriving at it from the inverse direction. His first degree, a BA from Cambridge, was actually in experimental psychology. He wanted to understand the mind, not create a new one. It wouldn’t be until eight years later that he got a PhD in AI. He ventured into it because he thought that the optimum way to study the brain was to try and replicate it. For almost his entire career, his journey into AI had more troughs than crests.
Hinton’s contribution was in how to approach the problem through something we now know as deep learning. Earlier, a neural network had only a single layer through which data was processed. Hinton’s approach was to stack a series of layers. The network would be trained on some objective, say, identifying a picture. As its answers were compared against the right ones, the error would be passed back through the layers, and the weights of connections that contributed to correct interpretations would be strengthened while those behind wrong ones were weakened. Thus the program would eventually converge on the correct answer. It was being taught right from wrong using training data. After that, when introduced to a real-world problem, it would get the right answer because, by then, it had already learnt the way to do it. This algorithm is termed backpropagation. Without Hinton’s contribution to the algorithm, AI would have taken much more time to fructify. The entire generative AI revolution, starting with ChatGPT being made public, is based on it. Hinton, in fact, worked with Google to create its first chatbot, Bard, the main competitor to ChatGPT.
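That back-and-forth correction of weights can be seen in miniature. The sketch below trains a tiny two-layer network by passing the output error backwards and nudging each weight, which is the essence of backpropagation; the task (the XOR function), the network size, the learning rate and all the names are illustrative choices for this example, not anything from Hinton’s papers.

```python
# Minimal sketch of backpropagation: a 2-input, 2-hidden-unit,
# 1-output network learns XOR by propagating its error backwards
# and adjusting each weight a little after every example.
import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Randomly initialised weights and biases for both layers.
w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b_h = [0.0, 0.0]
w_o = [random.uniform(-1, 1) for _ in range(2)]
b_o = 0.0

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
lr = 0.5  # learning rate: how big each corrective nudge is

def forward(x):
    # Forward pass: data flows through the hidden layer to the output.
    h = [sigmoid(sum(w_h[j][i] * x[i] for i in range(2)) + b_h[j])
         for j in range(2)]
    o = sigmoid(sum(w_o[j] * h[j] for j in range(2)) + b_o)
    return h, o

def total_loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in data)

loss_before = total_loss()
for epoch in range(5000):
    for x, y in data:
        h, o = forward(x)
        # Backward pass: the output error flows back to every weight.
        d_o = (o - y) * o * (1 - o)                 # gradient at the output
        for j in range(2):
            d_h = d_o * w_o[j] * h[j] * (1 - h[j])  # gradient at hidden unit j
            w_o[j] -= lr * d_o * h[j]               # strengthen/weaken output weight
            for i in range(2):
                w_h[j][i] -= lr * d_h * x[i]        # and the hidden weights behind it
            b_h[j] -= lr * d_h
        b_o -= lr * d_o

loss_after = total_loss()
print(loss_before, "->", loss_after)  # the error shrinks as weights are corrected
```

The point of the sketch is the backward pass: the same error signal, scaled by each connection’s contribution, decides whether that connection is strengthened or weakened, so the total error falls as training proceeds.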
Even though he had revolutionised the process and kept refining it, it still wasn’t enough, and he remained in the backwaters of the scientific universe. This was because, for it to be useful as an everyday tool, enormous amounts of data would need to exist for these models to be trained on. As he said in a Bloomberg documentary six years ago, “It didn’t work quite well enough because we didn’t have enough data, we didn’t have enough compute power, and people in AI and computer science had decided neural networks were wishful thinking. So it was a big disappointment.” The documentary went on to state that “he would show up at academic conferences and be banished to the back rooms. He was treated as really like a pariah.” But then came the internet, and suddenly vast troves of data were being accumulated that could now be used for training. Results suddenly appeared magical, big corporations got interested, and it all culminated in generative AI.
How much AI figures in the consciousness of humanity is apparent in this award, because Hinton is not even a physicist. He is a computer scientist, and the committee made a wide interpretation to fit him into the Physics prize. After he got the award, The New York Times interviewed him, and one of his answers was that the award would hopefully now force people to take him more seriously. He was not alluding to work he had already done but to what he has been warning the world about. For many years now, Hinton has been saying that AI poses an imminent threat to humanity, one that we are ignoring in our fascination for its utility. He began to do this after he left Google last year. Unlike many who think of AI as just another tool for humans, Hinton thinks these systems are actually learning to think, and are doing some form of it already. People mistake them for programs with inputs and outputs and set objectives. AI, in his reckoning, is learning-driven. So, mostly no one has a clue what is happening within the program as it keeps training itself over larger and larger amounts of data. They may also be much better at learning than humans. In an interview with MIT Technology Review, he said, “Our brains have 100 trillion connections. Large language models have up to half a trillion, a trillion at most. Yet GPT-4 knows hundreds of times more than any one person does. So maybe it’s actually got a much better learning algorithm than us.” They are also better at communicating, and he has no doubts about the nature of the beast. The article said, “Hinton now thinks there are two types of intelligence in the world: animal brains and neural networks.” “It’s a completely different form of intelligence,” he says. “A new and better form of intelligence.”
It is a paradox that the man instrumental in creating it, now feted with the most reputed prize in the world, is an evangelist against its unbridled development. But with the genie out of the bottle, it is not quite clear what can be done. Even Hinton does not really know. All anyone is doing is hoping he is not right.
About The Author
Madhavankutty Pillai has no specialisations whatsoever. He is among the last of the generalists. And also Open chief of bureau, Mumbai