From political campaigns, advertisements and scams to creators for hire, the age of AI forgeries is already here
Lhendup G Bhutia | 01 Dec, 2023
(Illustration: Saurabh Singh)
A FEW MONTHS AGO, Divyendra Singh Jadoun, a 30-year-old from Ajmer, received an unusual request. “It was a political guy from another state,” he says. “And he wanted to know if I could create a deepfake sex video.” This was by itself not uncommon. Jadoun, who goes by the name ‘The Indian Deepfaker’ on the internet and creates deepfake content for clients, gets a lot of requests of this nature from political entities. What was different this time was that a sex video featuring the caller was about to be leaked, and he was looking for someone to create a deepfake version with another individual’s face morphed over his own. “He had come to know that this video was going to be released online by a rival, so he wanted a version with another person’s face, so he could claim that the original featuring him was fake,” Jadoun says, with a laugh. “I get a lot of requests. But that was new even for me.”
The phenomenon of deepfakes—using Artificial Intelligence (AI) tools to create multimedia content wherein people’s faces are swapped, voices and other characteristics duplicated, and sometimes entirely new characters with no presence in the real world created—came into the public consciousness in India recently when a video in which the face of the actress Rashmika Mandanna had been morphed onto the body of a British influencer named Zara Patel went viral. But it is a phenomenon that has been growing rapidly and is already fairly entrenched on Indian online channels. There is a multitude of AI tools, available to everyday internet users, with which one can create deepfakes for little or no money. There are agencies and individuals like Jadoun who can create them for a fee, for clients as diverse as ad filmmakers and political parties. And the internet today is awash with deepfakes: from viral memes and harmless videos featuring public personalities, such as one showing Narendra Modi participating in a garba dance, and a growing use of the technology in advertising, to more harmful uses like deepfakes of celebrities promoting betting apps, or pornography. Some gaming companies, for instance, use unauthorised deepfakes of celebrities like Shah Rukh Khan and Sadhguru to promote their apps on social media. A few public personalities seem to be waking up to the potential of this phenomenon to harm their persona and business. Actor Anil Kapoor went to court recently and won an order to stop unauthorised AI uses of his likeness. He told a media outlet, “I think [the decision] is very progressive and great not only for me but for other actors also… Because of the way AI technology is evolving every day.”
Deepfakes have, of course, been around for some time. But a good one was still difficult to pull off: creating a realistic deepfake required elaborate software and some expertise. This has completely changed in the last few years. The tools to create them are now available to everyone, and the deepfakes are getting better by the day.
Jadoun first began to dabble in deepfakes in 2020, creating amusing videos in which he morphed the faces of characters from popular American shows, like Walter White from Breaking Bad and Daenerys Targaryen from Game of Thrones, onto Hindi parodies of those shows. A more recent deepfake features Barbie and Ken from the Barbie film, except that their faces have been altered to depict Hrithik Roshan and Kangana Ranaut, two celebrities rumoured to have been in a relationship that ended acrimoniously. Many of these videos are hilarious, and they unsurprisingly went viral.
What is interesting is that Jadoun has no background in technology. A former leader of a political party’s student union in Ajmer, he learnt about this new technology when he came across the music video of ‘Action’, the 2020 song by the band Black Eyed Peas, in which the faces of its members had been morphed onto characters from several popular Indian action films. “I just got really interested in how they [Black Eyed Peas] had done it and I began to look up online, read up research papers, and learnt how to use this technology,” he says.
The ability to create a deepfake so convincing that it becomes difficult to discern the real from the forged certainly has value, and unsurprisingly Jadoun was soon getting requests to create deepfakes for more specific purposes. These clients ranged from advertising firms and film producers, who wanted him to ‘de-age’ popular actors by morphing their faces onto those of child actors playing younger versions of them, or to clone an actor’s voice and have it speak in another language, to political entities and their PR agencies.
Jadoun says he refuses all forms of unethical work. Many such requests come from political entities in India, but occasionally they also come from abroad, he says, pointing to a recent case where someone from an African country wanted him to create a deepfake video featuring a political rival.
He does, however, create deepfakes for PR agencies representing political parties, he says, when they are put to benign ends. “This could be a deepfake video to create a personalised greeting for a party worker,” he explains. One example he cites, planned ahead of next year’s General Election, involves cloning a prominent leader’s voice and tweaking his lip movement, so that a real video addressed to party workers carries a short fabricated opening in which the leader utters a party worker’s name and greets him. “We can create many such videos greeting a party worker, from some hundreds, thousands, or more,” he says, pointing out that he was in advanced discussions for creating these videos before the recent controversy over deepfakes led to a temporary pause. He also ensures, he says, that his deepfakes for political campaigns are addressed to party workers, not voters.
But wouldn’t it be unethical to create fabricated videos, even when they are meant for party workers, where a worker might be led to believe a prominent leader has wished him personally? “Usually, the party worker will know this portion has been fabricated,” Jadoun says. “But he might like to keep such a video, maybe show it to his friends and family. Many of these people work tirelessly for their parties, and something of this nature serves as an encouragement for them.”
Individuals like Jadoun also perform other kinds of political work. For instance, they take a real video of a speech made by a leader in one language, clone that leader’s voice, and have the same video play with the cloned voice delivering the same speech in a different language. This was witnessed as early as 2020, just ahead of the Delhi Assembly elections, when it was reported that a firm had used AI tools to create a deepfake video of actor and politician Manoj Tiwari addressing voters. The original video had Tiwari speaking in Hindi; in the deepfake version he was speaking Haryanvi.
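The building blocks behind such a video are easier to see as a pipeline. The sketch below shows only the shape of that workflow; every function name here is a hypothetical placeholder, not any real library’s API, standing in for actual speech-recognition, translation, voice-cloning and lip-sync models.

```python
# Shape of the dub-and-resync workflow described above. All four
# model calls are hypothetical placeholders, not a real library's API;
# a working system would slot in actual ASR, translation,
# voice-cloning TTS and lip-sync models at each step.

def transcribe(audio_path: str) -> str:
    """Speech recognition: original speech -> text."""
    raise NotImplementedError("plug in an ASR model here")

def translate(text: str, target_language: str) -> str:
    """Machine translation of the speech into the target language."""
    raise NotImplementedError("plug in a translation model here")

def clone_and_speak(text: str, voice_sample_path: str, language: str) -> str:
    """Voice-cloning TTS: speak `text` in the voice heard in the sample;
    returns a path to the synthesised audio."""
    raise NotImplementedError("plug in a voice-cloning TTS model here")

def lip_sync(video_path: str, new_audio_path: str) -> str:
    """Re-time the speaker's mouth movements to match the new audio;
    returns a path to the re-rendered video."""
    raise NotImplementedError("plug in a lip-sync model here")

def dub_video(video_path: str, audio_path: str, target_language: str) -> str:
    """Chain the stages: a Hindi speech becomes, say, a Haryanvi one."""
    text = transcribe(audio_path)
    translated = translate(text, target_language)
    new_audio = clone_and_speak(translated, audio_path, target_language)
    return lip_sync(video_path, new_audio)
```

The personalised greeting videos described earlier reuse the last two stages: synthesise one short, name-bearing line in the cloned voice, lip-sync it, and splice it onto the front of the real footage.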
DEBARATI HALDER, PROFESSOR of law at Parul Institute of Law in Vadodara and honorary managing director of the Centre for Cyber Victim Counselling in Bengaluru, who researches online abuse in India and counsels its victims, points out that much of the conversation around deepfakes in India currently centres on its impact on public figures. “But that is a very narrow way of looking at it. Deepfakes will also impact ordinary individuals like you and me,” she says, while pointing out how one has already begun to see photographs of ordinary individuals being morphed and shared online. The other impact, she says, could be a proliferation of scams.
Earlier this year, on July 9, a series of WhatsApp messages popped up on PS Radhakrishnan’s phone. Radhakrishnan, a retired Coal India employee now based in Kozhikode, did not recognise the number. But he could tell, from the display image associated with the number and the messages being shared, that this was his old colleague Venu Kumar. “They began chatting,” says a police officer from Kozhikode’s cyber crime department who tracked the case. “They asked after their family members, and he [Kumar] even shared photographs of himself and his family.”
What Radhakrishnan did not know was that the person he was chatting with was not Kumar but someone posing as him. The police believe this individual had probably sourced photos of Kumar and his family members from social media. When the scammer eventually asked for money, using the ruse of an immediate surgery that a family member needed, Radhakrishnan told him he was uncomfortable sending money that way. “The complainant was worried it could be a scam, and he frankly told the other fellow about his fears,” the police official explains.
Immediately, a video call came from the same number. The call lasted a few seconds. The phone’s camera was zoomed in a bit too closely to the speaker’s face and a bright light seemed to be present on the sides of the frame. But the individual, Radhakrishnan told the police, looked remarkably like his old colleague.
The police believe this could be India’s first reported case of a deepfake scam. The scammers, the cops think, probably created a deepfake video of the colleague and played it on a different device, such as a phone or a tablet, when they called Radhakrishnan, which would explain why the complainant found the caller’s surroundings unusually bright.
Convinced that this was his former colleague, Radhakrishnan first sent ₹40,000, and it was only when the individual asked for more money that Radhakrishnan made a few calls and realised he had been duped. Four months later, in November, the police traced the scammers to Gujarat. One individual was arrested, and another is absconding. “We are in the process of retrieving the money,” the policeman says. “We see so many types of cyber crimes, and we think we know everything. But none of us have seen anything like this before,” he adds.
Many believe scams using voice-based deepfakes, where banks and financial institutions or individuals like Radhakrishnan are targeted, will become more common as AI tools grow more advanced. Scams of such nature are still rare, but according to reports, there has been a spurt in their numbers in countries like the US.
Challenges of this nature are leading many internet firms to come up with initiatives to authenticate media and to train moderation technology to recognise the inconsistencies that mark deepfakes. But they are in a constant struggle to outpace deepfake creators, who keep discovering new ways to fix defects or remove watermarks. An added challenge in countries like India, points out Jyoti Joshi, founder of an Indian AI startup called Kroop AI, is that many of these detection systems have been developed in the West and trained on datasets featuring mostly Caucasian faces and voices, so they may not be very good at picking up deepfakes featuring South Asians.
Joshi, who pursued AI as a subject abroad and even worked on an AI-based system that detected depression in patients by analysing face and voice samples, returned to India and established Kroop AI two years ago. One of the platforms she has developed is a deepfake detection system that she says works with Indian faces and voices. “Deepfakes, also known as unethically generated synthetic data, are created with AI methods such as generative adversarial networks. These methods leave artefacts in the face and/or voice. These artefacts sometimes can be observed with the naked eye too in the form of off-lip movement, uneven eye blinking and different eye colour. Kroop AI’s proprietary technology analyses multiple markers in the voice and face of a person to identify if there is any modification or artefact. The product’s USP is its state-of-the-art performance with Indian and South Asian faces. Given the diversity in the country, this is an extremely important aspect for a commercial solution,” she says. “We have observed in the past that detectors created or offered by companies outside India did not work well with Indian faces, which is traditionally due to the bias in the data.”
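Some of the naked-eye artefacts Joshi mentions can be approximated in code. What follows is a toy illustration of one such check, a rough blink-frequency estimate built on stock OpenCV cascades; it is emphatically not Kroop AI’s proprietary detector, which fuses many more markers across face and voice, and the video file name is a placeholder.

```python
# Toy blink-frequency check using stock OpenCV Haar cascades.
# Early, crude deepfakes often blinked too rarely (or not at all),
# making eye behaviour a simple first-pass artefact to inspect.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def closed_eye_fraction(video_path: str) -> float:
    """Fraction of face-bearing frames in which no open eye is detected,
    a rough proxy for how often the subject blinks."""
    cap = cv2.VideoCapture(video_path)
    face_frames = closed_frames = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.1, 5)
        for (x, y, w, h) in faces[:1]:  # analyse one face per frame
            face_frames += 1
            eye_region = gray[y:y + h // 2, x:x + w]  # eyes sit in the upper half
            eyes = eye_cascade.detectMultiScale(eye_region, 1.1, 5)
            if len(eyes) == 0:  # the cascade only finds open eyes
                closed_frames += 1
    cap.release()
    return closed_frames / max(face_frames, 1)

# People blink roughly every two to ten seconds; a clip whose
# closed-eye fraction sits at or near zero merits closer scrutiny.
print(closed_eye_fraction("suspect_clip.mp4"))  # placeholder file name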
A particular concern is the role deepfakes will play in disinformation campaigns run by nation-states. There have been several instances in recent times where deepfakes have fooled many: from a fake image of an explosion near the Pentagon earlier this year, which was broadcast even by some Indian TV news channels, to two AI-generated news anchors, purportedly of a news channel called Wolf News, promoting Chinese interests and anti-US propaganda online. There have even been reports of the alleged use of deepfakes, by both Israel and Hamas, in the ongoing war in Gaza.
Many are particularly concerned about how China, which is investing heavily in AI, will use this new technology to serve its interests. Observers have noticed a surge in anti-India propaganda online ever since the standoff in Ladakh began. A US-based data analytics firm called New Kite Data Labs published a report last year about a Beijing-based private AI firm, Speech Ocean, which was found collecting voice samples from India, particularly from the border regions of Jammu & Kashmir and Punjab. Speech Ocean, the report says, has close ties with the Chinese army. “We obtained data logs of voice file transfers from India to Speech Ocean servers in China. Speech Ocean works through local human resources firms in India to hire locals to record pre-scripted words, phrases, or conversations. These conversations are recorded on the Speech Ocean app downloaded to the user’s phone with voice files sent to China,” the authors wrote. The report does not say what this form of data harvesting could be used for, but it is possible that large samples of Indian voices could be used to train AI systems to create deepfakes featuring Indian voices.
An avenue where the explosion of deepfakes is most noticeable is advertising. Ad filmmakers have begun to tap this emerging technology to tell a variety of stories, whether to de-age Sachin Tendulkar and have an adolescent version of him promote an insurance brand, make Salman Khan meet a younger version of himself for a soda brand, or have Hrithik Roshan, in a Zomato ad last year, mention particular outlets and their most popular dishes depending on where the viewer is based. Amusingly, the Zomato ad shown in some locations in Ujjain mentioned a restaurant called Mahakal and its thali, which led some people to mistake the restaurant for the Mahakaleshwar temple and call for the platform’s boycott for disrespecting religious sentiments.
Jadoun also does a lot of deepfake work for advertisers. One such ad, for a popular tea brand, had Rashmika Mandanna de-aged to play a younger version of herself. Another that Jadoun is particularly proud of, made earlier this year for a mattress brand, de-aged the actor Ayushmann Khurrana to play a child version of himself. “It was much more complicated than other similar ads. The de-aged Tendulkar ad, for example, had him speaking directly to the camera, and you can find many photographs of Tendulkar as a child which you can use to feed your AI training model [to create a deepfaked child version of the character]. Traditionally, we feed it [the model] thousands of images,” says Jadoun. “In our case, we had only two not-so-clear photographs of Khurrana as a child, and our version wasn’t going to be speaking directly to the camera but jumping around. It was a big problem.”
So how did he resolve the issue? He had to create a new algorithm, he says. “And we fed the model with these two images again and again and again, until we got it right.”
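Feeding a model the same two images “again and again” typically means heavy data augmentation, in which thousands of randomly perturbed variants are generated so that a tiny dataset behaves like a large one. Here is a minimal sketch of that general idea using the Pillow imaging library; the file names are placeholders, and this is not Jadoun’s actual algorithm, which he has not made public.

```python
# Minimal data-augmentation sketch: spin thousands of randomly
# perturbed training variants out of just two source photographs.
import os
import random
from PIL import Image, ImageEnhance, ImageOps

def augment(img: Image.Image) -> Image.Image:
    """Return one randomly perturbed copy of a source photo."""
    out = img.rotate(random.uniform(-15, 15))               # small random tilt
    if random.random() < 0.5:
        out = ImageOps.mirror(out)                          # horizontal flip
    out = ImageEnhance.Brightness(out).enhance(random.uniform(0.7, 1.3))
    out = ImageEnhance.Contrast(out).enhance(random.uniform(0.8, 1.2))
    w, h = out.size                                         # random crop, then
    dx, dy = int(w * 0.1), int(h * 0.1)                     # resize back, to
    box = (random.randint(0, dx), random.randint(0, dy),    # jitter the framing
           w - random.randint(0, dx), h - random.randint(0, dy))
    return out.crop(box).resize((w, h))

# Placeholder file names standing in for the two childhood photographs.
sources = [Image.open(p).convert("RGB") for p in ("child_1.jpg", "child_2.jpg")]
os.makedirs("augmented", exist_ok=True)
for i in range(5000):  # thousands of variants from just two originals
    augment(random.choice(sources)).save(f"augmented/{i:05d}.jpg")
```

Even with augmentation, training on so few originals risks the model simply memorising them, which is presumably why getting the result right took repeated attempts.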