Central & South Asia

India’s Deepfake Dilemma: The World’s Biggest Democracy Tests the World’s Newest Technology

India is pioneering a fourth model of AI governance, one grounded in democratic trust rather than state control or laissez-faire self-regulation. India’s early legislation on synthetic media signals its intent to shape national policy and global norms around truth and technology. The world’s largest democracy may become the first to regulate the world’s newest threat to democracy itself: AI-generated deepfakes.

[Image: AI-generated face morphing into the Indian flag.]

November 05, 2025 06:07 EDT

If the 20th century was about who controlled oil, the 21st will be about who controls truth. India, the world’s largest democracy, has just entered this race.

On October 22, India’s Ministry of Electronics and Information Technology (MeitY) released draft amendments to the Information Technology Rules (2021) that propose regulating synthetic media, including deepfakes and AI-generated content. The draft, open for public consultation until November 6, introduces a legal definition of “synthetically generated information” and mandates clear labeling of any content created or modified by algorithms.

If adopted, it would make India one of the first major democracies to legislate the blurred boundary between fact and fabrication. The proposal, according to media reports, would require platforms that enable or host synthetic content to display disclaimers covering at least 10% of an image or the first 10% of an audio clip. Large platforms, i.e., those with over five million users, would need to deploy automated detection tools and collect user declarations identifying AI-generated media.
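The draft's numeric thresholds are simple arithmetic. As a minimal illustrative sketch (not an official implementation, and assuming the 10% figures reported above), a platform could compute its minimum disclosure obligations like this:

```python
# Illustrative sketch of the draft rule's reported visibility thresholds:
# a disclaimer covering at least 10% of an image's area, or spanning the
# first 10% of an audio clip's duration. Function names are hypothetical.

def min_label_area(width_px: int, height_px: int, coverage: float = 0.10) -> int:
    """Minimum disclaimer area in pixels for an image under a 10% coverage rule."""
    return int(width_px * height_px * coverage)

def audio_label_window(duration_s: float, fraction: float = 0.10) -> float:
    """Length in seconds of the opening segment that must carry the disclosure."""
    return duration_s * fraction

if __name__ == "__main__":
    print(min_label_area(1920, 1080))   # 207360 pixels for a 1080p frame
    print(audio_label_window(120.0))    # first 12.0 seconds of a 2-minute clip
```

How such thresholds would be measured in practice, and whether they apply per frame of video, is exactly the kind of detail the public consultation is meant to settle.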

Those who comply retain safe-harbor protection under India’s IT law; those who don’t could lose immunity for user content. The government’s intent is clear: stem AI-driven misinformation, impersonation, and national security risks before they destabilize institutions or elections. Yet this ambition exposes a fundamental tension: how can a democracy encourage innovation while protecting reality itself?

A fourth path emerges

The world’s three main AI governance models have already diverged. The EU’s AI Act is rights-driven, emphasizing privacy and watermarking. The United States relies on self-regulation and voluntary industry pledges. China enforces state control through sweeping “deep synthesis” rules.

India is charting a fourth path: governance built on trust. By regulating synthetic media before it triggers a national crisis, New Delhi is attempting something rare: preemptive, proportionate regulation at scale in a democracy.

With over 900 million internet users and some of the world’s fastest-growing AI startups, India’s regulatory design will inevitably shape how emerging markets approach digital truth. In this sense, the draft is less about compliance and more about geopolitical signaling. It tells Washington, Brussels and Beijing alike that the Global South will not remain a passive consumer of tech rules set elsewhere.

From data sovereignty to truth sovereignty

India’s digital policy evolution — from data localization to AI regulation — reveals a larger pattern: the assertion of digital sovereignty. What began as a debate over where data should reside has become a question of who decides what is real.

In practice, “truth sovereignty” means protecting the informational integrity of a billion citizens in an open, multilingual and highly polarized media ecosystem.

It’s also a matter of soft power. If India can demonstrate that democracies can regulate AI media without resorting to censorship, it could export a new “Bangalore Consensus”: an innovation-friendly, rights-respecting and transparency-rooted approach.

The global stakes

AI-generated misinformation is already a transnational problem. In the United States, a deepfake robocall used AI voice clones to suppress voters. In Southeast Asia, manipulated videos have triggered market shocks. In an era when influence travels at the speed of an upload, governance must catch up with generative technology.

Against this backdrop, India’s experiment is a test case for the world: can regulation steer the digital future without strangling it? Failure would reinforce the view that only authoritarian systems can effectively police AI. Success would show that open societies can adapt fast enough to remain resilient. Either way, what India builds or breaks will resonate far beyond its borders.

The new arms race: trust

As the US and China compete over chips, India is competing over credibility. India’s true export won’t be semiconductors; it will be standards: frameworks for watermarking, provenance and responsible AI disclosure.

This is where India’s deepfake regulation transforms from policy to diplomacy. A coalition of democracies around shared principles of digital integrity — an Indo-Pacific Charter on AI Authenticity — could be as influential as the Paris Agreement was for climate change.

Because in this century, trust is the new strategic resource.

If India gets it right

If done right, these regulations could do for information integrity what Aadhaar did for digital identity: provide the infrastructure for authenticity at scale. If done wrong, they could entangle innovators in red tape and push creativity underground. Either way, the rest of the world should pay attention.

India is not just regulating technology. It is redesigning the contract between democracy and truth. And if it succeeds, the next export from the world’s largest democracy won’t be software or services; it will be trust.

[Kaitlyn Diana edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.
