India’s deepfake push is becoming a real test of how AI gets governed
India’s deepfake response has moved beyond headline promises and into a layered techno-legal framework built around platform duties, labelling, provenance, takedowns, and wider AI governance. The big test now is not whether rules exist, but whether enforcement becomes fast, visible, and strong enough for ordinary people to trust.
When Ashwini Vaishnaw said at the NDTV World Summit on October 18, 2025, that India would have deepfake regulations “very soon,” the remark sounded like a promise about the near future. What makes it more important now is that enough time has passed to see what that promise actually turned into. The minister said India would approach the issue in two ways, technical and legal, and argued that AI could not be governed by law alone. That framing matters because it helps explain what followed: India did not unveil one grand stand-alone deepfake law and call the job done. Instead, it kept moving through advisories, targeted platform obligations, AI governance guidance, and formal amendments to the IT Rules. What this really means is that the country’s deepfake response is best understood as a system under construction rather than a single legislative event.
How India got here
India’s deepfake response did not begin at that summit. The groundwork was already being laid through platform advisories and broader digital regulation. In November 2023, the government told social media intermediaries to identify misinformation and deepfakes and remove such content within 36 hours of it being reported. In December 2023, MeitY followed with another advisory focused on compliance with the existing IT Rules and on clearly communicating prohibited content to users. By April and August 2025, official statements were describing deepfakes as threats to dignity, privacy, reputation, and platform accountability, while also pointing to CERT-In advisories and wider cyber protection measures. The problem is that advisories by themselves can only take a system so far. They signal intent and put pressure on platforms, but they do not always give people a clean answer to the question that matters most: what exactly must platforms now do, and how quickly must they do it?
What changed in February 2026
The biggest shift came with the updated Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, reflected in the official version updated as of February 10, 2026. Those rules moved the discussion beyond general warnings and into specific duties around what they call “synthetically generated information” (SGI). They say intermediaries that enable this kind of content must use reasonable and appropriate technical measures to stop unlawful synthetic content, including material involving non-consensual intimate imagery, false electronic records, certain dangerous content, or synthetic portrayals that deceptively misrepresent a natural person’s identity, voice, conduct, statement, or a real-world event. For other synthetic content not banned outright, the rules require prominent labelling, audio disclosure where relevant, and technical provenance measures such as permanent metadata or similar mechanisms, to the extent technically feasible. They also say intermediaries must not enable the suppression or removal of that label or metadata. This is where things change: deepfakes stopped being treated only as a moderation headache and started being treated as a provenance, traceability, and compliance problem too.
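To make the provenance requirement concrete, here is a minimal sketch of what binding a synthetic file to tamper-evident metadata could look like. Everything in it is an assumption for illustration: the field names, the sidecar-manifest design, and the HMAC signing key are invented, and a real deployment would more likely adopt an industry standard such as C2PA content credentials rather than roll its own scheme.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical platform signing key; a real system would use managed keys (assumption).
SIGNING_KEY = b"platform-secret-key"

def build_provenance_manifest(media_path: str, generator: str) -> dict:
    """Create a signed manifest binding a synthetic-media file to its origin.

    A sketch of 'permanent metadata or similar mechanisms': the manifest
    records the file hash, the synthetic flag, and a timestamp, then signs
    the record so later tampering with any field is detectable.
    """
    with open(media_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()

    record = {
        "content_sha256": digest,
        "synthetically_generated": True,  # the SGI flag the rules require platforms to surface
        "generator": generator,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_manifest(media_path: str, record: dict) -> bool:
    """Check both the signature and that the file still matches its hash."""
    claimed_sig = record.get("signature", "")
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(claimed_sig, expected):
        return False
    with open(media_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest() == record["content_sha256"]
```

The point of the sketch is the property, not the code: once the label and the content hash are signed together, stripping or editing the label becomes detectable, which is the behaviour the rules say intermediaries must not enable.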
Why the new rules matter more than the headline
The headline version of this story is that India will regulate deepfakes. The more useful version is that India has started assigning operational responsibility to the platforms and services that enable them. The 2026 rules say significant social media intermediaries that let users display, upload, or publish information must ask users to declare whether content is synthetically generated, must deploy technical measures to verify that declaration, and must ensure clearly prominent labelling when the content is confirmed as synthetic. The explanatory note published during the amendment process said the purpose was to create a clear legal basis for labelling, traceability, and accountability for synthetic content, while the later FAQ made clear that the framework was aimed mainly at synthetic audio, visual, and audiovisual content. What this really means is that India is trying to move past the old model where everyone waits until a fake goes viral and then argues about who should have acted faster. The new direction is more proactive. It pushes platforms to identify, mark, and manage synthetic media before confusion becomes damage.
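As a rough sketch of that declare-verify-label flow in platform code, under the assumption that a platform pairs user declarations with some detection model (all names below are hypothetical, and the classifier is a stub standing in for whatever model a platform actually runs):

```python
from dataclasses import dataclass

@dataclass
class Upload:
    content_id: str
    user_declared_synthetic: bool

def detector_score(content_id: str) -> float:
    """Stub for a synthetic-media classifier; a real platform would call
    its detection model here (assumption for illustration)."""
    return 0.0

def process_upload(upload: Upload, threshold: float = 0.8) -> dict:
    """Sketch of the declare -> verify -> label flow the 2026 rules describe.

    The user's declaration is the starting point, a technical check is run
    against it, and content confirmed as synthetic gets a prominent label
    before publication.
    """
    detected = detector_score(upload.content_id) >= threshold

    # Either signal is enough to label; a mismatch between declaration and
    # detection is flagged for human review rather than silently trusted.
    is_synthetic = upload.user_declared_synthetic or detected
    return {
        "content_id": upload.content_id,
        "label": "synthetically generated" if is_synthetic else None,
        "needs_review": detected != upload.user_declared_synthetic,
    }
```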
Where victims may feel the difference first
For ordinary people, the most meaningful part of regulation is rarely the legal theory. It is the response time. Here, the updated rules are more concrete than many people realise. The grievance framework says complaints should be acknowledged within 24 hours and resolved within seven days, while certain removal requests tied to unlawful information must be acted on as quickly as possible and resolved within 36 hours. There is also a faster path for sensitive harms. The rules say intermediaries must act within two hours on complaints relating to content that prima facie exposes a person’s private area, depicts nudity or sexual conduct, or involves impersonation in electronic form, including artificially morphed images of that individual. There is also an appeal route to the Grievance Appellate Committee if a user is not satisfied or the grievance is not resolved in time. That matters because deepfake harm is often measured in hours, not months. A reputation can be damaged before the formal legal system even starts moving. A two-hour or 36-hour response window does not solve everything, but it does show that India’s framework is trying to treat online harm as urgent rather than abstract.
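Because the timelines are the most concrete part of the framework, a small worked example helps. The sketch below computes the deadlines described above for a given complaint; the category names and routing are illustrative assumptions, not the rules’ own taxonomy.

```python
from datetime import datetime, timedelta, timezone

# Response windows as described in the updated IT Rules (see above).
# The category keys themselves are illustrative (assumption).
RESOLUTION_WINDOWS = {
    "sensitive_harm": timedelta(hours=2),     # e.g. morphed imagery, impersonation
    "unlawful_removal": timedelta(hours=36),  # qualifying removal requests
    "general": timedelta(days=7),             # ordinary grievances
}
ACKNOWLEDGEMENT_WINDOW = timedelta(hours=24)

def grievance_deadlines(category: str, filed_at: datetime) -> dict:
    """Return acknowledgement and resolution deadlines for a complaint.

    Acknowledgement is capped at the resolution deadline, since it makes no
    sense to acknowledge a complaint after it must already be resolved.
    """
    resolution = RESOLUTION_WINDOWS[category]
    return {
        "acknowledge_by": filed_at + min(ACKNOWLEDGEMENT_WINDOW, resolution),
        "resolve_by": filed_at + resolution,
    }

# Example: a complaint about a morphed image filed now must be acted on within two hours.
print(grievance_deadlines("sensitive_harm", datetime.now(timezone.utc)))
```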
Why India keeps calling this a techno-legal approach
The minister’s line at the summit about technical and legal solutions was not just a talking point. It has been repeated in official materials since then. A July 2025 government note said India was funding R&D at IITs for deepfake detection, privacy enhancement, and cybersecurity, and described the country’s AI strategy as a balanced techno-legal approach. The same policy direction appears in the India AI Governance Guidelines and in later official material on the IndiaAI Mission, which places safe and trusted AI alongside compute access, foundation models, and dataset governance. Government releases say the “Safe & Trusted AI” pillar includes work on deepfake detection, risk-assessment protocols, privacy-preserving architectures, and algorithm auditing tools, and that multiple projects have already been selected under that pillar. This is important because law on its own is usually reactive. Deepfake technology is not. It moves through models, tools, cheap interfaces, and endless new use cases. What this really means is that India is trying to keep one foot in enforcement and the other in capability building, so it is not forced to choose between innovation and protection every time a new synthetic-media crisis appears.
Why this still is not the final answer
Even so, it would be a mistake to pretend the work is finished. The FAQ released after the amendments makes a useful limitation very clear: the new synthetic-content framework is aimed at audio, visual, and audiovisual content such as deepfakes, while text-only AI outputs are not classified as SGI under that specific framework, even though they can still be unlawful under general rules. That distinction matters because AI-enabled deception does not always arrive as a fake video or cloned voice. It can arrive as fake news copy, fabricated customer support chats, scam messages, forged context around real images, or hybrid content that mixes text and synthetic media. The broader AI Governance Guidelines also admit that current voluntary frameworks lack legal enforceability in some areas, that responsibility across developers, deployers, and end users still needs clearer attribution, and that users often lack effective grievance systems. So while India has clearly moved beyond loose promises, it has not reached a neat endpoint. The framework is stronger than it was, but it is still a framework, not a final settled architecture.
Why enforcement will decide whether people trust it
This is the part that will matter most over the next year. Rules on paper can look tough and still fail in practice if platforms under-label, over-delay, or quietly treat compliance as a public relations exercise. The updated IT Rules require monthly compliance reporting from significant social media intermediaries, including complaints received and action taken, and they place named accountability roles inside those companies such as chief compliance officers, nodal contacts, and resident grievance officers. The AI Governance Guidelines also lean toward stronger audits, transparency, self-certifications, and proportionate liability depending on the role an actor plays in the AI value chain. That is a sensible direction, but it only becomes real when users, regulators, and courts can see whether harmful content was labelled, how quickly it was handled, whether repeat offenders were curbed, and whether platforms built working internal systems instead of polished policy pages. The problem is that public trust will not come from hearing that India has a plan. It will come from seeing that fake content is marked, harmful content is pulled down quickly, and victims are no longer left chasing five different agencies with screenshots in hand.
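To give a sense of what visible compliance could look like in data terms, here is a minimal sketch of the kind of monthly aggregation a significant social media intermediary might produce. The record fields and schema are assumptions; the rules specify what must be reported, not how it is structured.

```python
from collections import Counter

def monthly_compliance_report(complaints: list[dict]) -> dict:
    """Aggregate a month of grievance records into the headline figures the
    IT Rules require in periodic compliance reports: complaints received and
    action taken. Field names are illustrative assumptions.
    """
    actions = Counter(c["action_taken"] for c in complaints)
    resolved_in_time = sum(1 for c in complaints if c["resolved_within_sla"])
    return {
        "complaints_received": len(complaints),
        "actions": dict(actions),
        "sla_compliance_rate": resolved_in_time / len(complaints) if complaints else 1.0,
    }

# Sample records, invented purely for illustration.
sample = [
    {"action_taken": "removed", "resolved_within_sla": True},
    {"action_taken": "labelled", "resolved_within_sla": False},
]
print(monthly_compliance_report(sample))
```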
What this says about India’s wider AI strategy
There is a bigger story hiding underneath the deepfake debate. India appears to be using deepfakes as one of the first real tests of its wider approach to AI governance. The AI Governance Guidelines say the country wants a principle-based framework built around trust, people first, innovation over restraint, accountability, understandability, and resilience. At the same time, the IndiaAI Mission is backing sovereign model development, shared compute, safer datasets, and responsible AI research. That tells us the deepfake issue is not being treated as a side problem anymore. It is becoming one of the practical places where the government can show whether its larger AI philosophy holds together. Can India encourage builders while still protecting citizens? Can it ask platforms to move faster without freezing innovation? Can it develop technical tools for detection and provenance without pretending detection alone will save the day? These are not small questions. They are the real test of whether the country can build public trust in AI while also trying to scale its domestic AI ambitions.
What changes next
The next phase will probably look less like one dramatic law and more like tightening layers. More enforcement guidance is likely. Platform compliance will come under more scrutiny. Detection tools and provenance systems will keep improving. The AI Safety Institute proposed in the governance framework may become more important if India wants credible technical evaluation capacity rather than relying only on platform self-reporting. And public expectations will change too. Once the government has said synthetic content should be labelled, traceable where feasible, and removable within defined timelines, users will expect those promises to be visible in everyday platform behaviour. That is the standard India has now set for itself. The headline from the summit was that deepfake regulation was coming very soon. The reality in 2026 is more interesting than that. It came in pieces, through rules, duties, tools, and governance language. What this really means is that India has moved from promise to framework. The hard part now is making that framework feel real to the people most likely to be harmed by deepfakes in the first place.