
The Union Government has notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, mandating clear labelling of AI-generated and synthetic content across digital platforms.
According to a notification issued by the Ministry of Electronics and Information Technology (MeitY), intermediaries offering tools that enable the creation or sharing of “synthetic content” must ensure such material carries a prominent and visible label. Where technically possible, platforms are also required to embed permanent metadata or provenance identifiers to help trace the origin of such content.
The amendments formally define “synthetically generated information” as audio, visual or audio-visual content that is artificially created, modified, or altered using computer resources in a way that makes it appear realistic or indistinguishable from real people or events. Routine editing, accessibility enhancements, and good-faith formatting are, however, excluded from this definition.
“Synthetically generated information means audio, visual or audio-visual information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information appears to be real, authentic or true and depicts or portrays any individual or event in a manner that is, or is likely to be perceived as indistinguishable from a natural person or real-world event,” the notification said.
The revised rules aim to counter the rising threat of deepfakes, misinformation, and digital impersonation, while maintaining a balance between technological innovation and user safety. Non-compliance may attract penalties under the Information Technology Act, 2000, and other applicable criminal laws.
06 Feb 2026 - Vol 04 | Issue 57
Under the new framework, intermediaries, especially major social media platforms, will face enhanced due-diligence obligations. These include deploying automated systems to prevent the creation and circulation of unlawful synthetic content such as child sexual abuse material, misleading impersonations, and false electronic records.
The notification specifies that prohibited content includes synthetically generated material that “contains child sexual exploitative and abuse material, non-consensual intimate imagery content, or is obscene, pornographic, paedophilic, invasive of another person’s privacy, including bodily privacy, vulgar, indecent or sexually explicit.”
Platforms will also be required to obtain declarations from users on whether uploaded content is AI-generated and to verify such disclosures through appropriate mechanisms.
Compliance timelines have been significantly tightened. Intermediaries must respond within three hours to lawful takedown orders in select cases, while grievance redressal and response timelines have also been reduced.
The amended rules will come into effect from February 20, 2026, marking a major regulatory push to improve transparency and accountability in India’s digital ecosystem.
(With inputs from ANI)