US PRESIDENT Joe Biden, exercising an authority last invoked during Covid, recently laid out a policy to guard against the dangers arising from Artificial Intelligence (AI). Better late than never, you might say: some of the major players in the field itself, like Elon Musk, who runs Tesla, SpaceX and X (formerly Twitter), have warned that AI will eventually clash with humans. Biden issued the order with a personal anecdote about having seen his own deepfake, a digital impersonation created with AI.
The Wall Street Journal, detailing what the executive order meant, said: “In addition to the requirement to notify the government about powerful AI systems under development, the order would also require companies to share safety testing results.” Plus, it would “take steps to begin establishing new standards for AI safety and security, protect against fake AI-generated content, shield Americans’ privacy and civil rights and help workers whose jobs are threatened by AI.” In London, meanwhile, 28 countries held a summit that resolved to mitigate AI risks.
And yet all of it, while necessary and inevitable, may not really be a solution. If even a few big stakeholders stay out of the endeavour, the Pandora’s box remains open. Biden can get the big tech companies of the US and its allies to toe the same line, but there are others, like China, which, though part of the London summit, has zero interest in regulating AI at Biden’s bidding. Or Russia, in a state of quasi-war against the Western democracies since the Ukraine invasion, which would do the reverse: develop AI systems designed against the interests of its enemies. Then there is the dark net, the online underworld that goes by no rules. For every deepfake that is watermarked, there will be enterprising geniuses without moral compasses building better tools to bypass the safeguards. It is just as with computer viruses, which everyone agrees do no good but which remain a surprisingly resilient presence, because some use them for malicious amusement, others for profit, and yet others for war.
Technology is not a genie that ever gets bottled back. Take the nuclear bomb. In the beginning, just one country, the US, had it. Now even a tiny mad nation like North Korea has got its hands on one. It is only a matter of time before a terrorist group does too. With AI, it will not even be that difficult: nuclear bombs require rare raw materials that are extremely hard to gather, but with something digital, all that is needed is ingenuity and access by stealth. AI will remain a clear and present danger, and eventually human beings will have to use AI to protect themselves against AI, because artificial intelligence will keep getting more intelligent while ours has a limit.