FO° Talks: Deepfakes and Democracy: Why the Next Election Could Be Decided by AI

In this episode of FO° Talks, Catherine Lapey and Manish Maheshwari discuss how deepfakes threaten democracy by undermining shared reality, not just by spreading individual lies. The danger lies in hyperrealism combined with an economic imbalance: near-zero-cost mass production set against slow, costly verification. Maheshwari proposes provenance systems, modeled on Aadhaar or HTTPS-style signals, to restore trust without centralizing “truth.”


Fair Observer author Catherine Lapey speaks with Manish Maheshwari, former head of Twitter India, an AI entrepreneur and a Mason Fellow at Harvard Kennedy School focused on AI governance and digital public goods. Their core worry is not simply that synthetic media can trick people into believing a lie, but that it can corrode the conditions that make democratic judgment possible: a shared sense of what counts as real.

From fake news to fake reality

Maheshwari defines a deepfake in plain terms as synthetic content that is presented as authentic. Synthetic media is not inherently malicious. It can enable creative uses and lower the cost of production for advertising and small businesses. The red line is deception, when creators pass fabricated material off as real in order to misinform, damage reputations or distort public choices.

For him, the most destabilizing effect is psychological and social. When realistic fakes become common, the public can start doubting genuine evidence as well. Maheshwari puts it starkly: “Democracies don’t fail when lies spread… They fail when citizens stop believing that truth is even possible.” In that scenario, politics becomes a contest of narratives with no agreed reference points, and “shared reality” starts to fracture.

Why deepfakes scale differently

Lapey presses him on what makes deepfakes distinct from older forms of propaganda, defamation or “cheap fakes” such as misleading edits. Maheshwari highlights two shifts.

The first is realism. Today’s AI-generated video can be convincing even to technically literate viewers, narrowing the gap between what looks true and what is true. The second is scale and economics. The production side is becoming nearly frictionless, while the verification side remains expensive and slow. As Maheshwari frames it, “The cost of production is almost zero. The cost of finding out and correcting is significant.” That imbalance favors bad actors, especially when they can flood platforms faster than journalists, fact-checkers or authorities can respond.

Their discussion returns to elections. A realistic-looking clip of a leader saying something inflammatory can shape opinions immediately, and later debunking often cannot unwind the initial impact, particularly when the content is timed to land right before a vote.

Harassment, violence and real-world harms

Lapey and Maheshwari broaden the lens to social harm. Maheshwari says synthetic media is increasingly used to “troll and abuse,” including character assassination and bullying. He also draws on his experience at Twitter India to underline that misinformation’s impact is not hypothetical. Even before today’s deepfakes, misleading videos circulated out of context could inflame tensions and contribute to mob violence.

Deepfakes remove even the minimal constraint of needing a real clip to distort. Now, fabricated “evidence” can be generated from scratch, packaged to provoke outrage and distributed quickly and widely.

India’s draft rules and three governance models

The conversation then turns to regulation. Maheshwari compares three broad approaches to AI governance.

In his telling, the European Union tends toward a rights-first, compliance-heavy model; the United States leans more market-led and voluntary; and China relies on state-controlled, coercive mechanisms. India, he argues, is experimenting with something different: a trust-based framework aimed at clarifying authenticity rather than restricting innovation.

He summarizes India’s draft approach as disclosure and platform responsibility, not outright censorship. The proposals he discusses include visible disclaimers for synthetic content, automated detection requirements for large platforms above a user threshold, and creator declarations to identify AI-generated media. He links this to a broader idea he calls “truth sovereignty,” or a country’s capacity to set workable standards for authenticity in its own democratic environment.

Verification infrastructure and maintaining usability

Maheshwari’s most concrete proposal is to build a verification infrastructure for media, analogous to India’s Aadhaar system, the biometric identity framework used at a massive scale. The system authenticates identity through biometrics such as iris scans and fingerprints, reducing friction in access to services and enabling trust at scale.

He imagines a similar logic for content: provenance frameworks that embed invisible but verifiable cryptographic signatures in media, allowing platforms and investigators to confirm a file’s origin and detect tampering without changing what the content looks like. His analogies are designed to make this intuitive. A passport chip is invisible to the traveler but readable by authorities to confirm the document has not been altered. Hypertext Transfer Protocol Secure, or HTTPS, works because browsers verify certificates in the background and surface a simple signal, like a padlock icon, to the user.
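To make the provenance idea concrete, here is a minimal, illustrative sketch of a detached signing-and-verification flow. It is not any real standard (production systems such as C2PA use public-key signatures and embedded manifests); it uses Python’s standard-library HMAC as a stand-in for a signature, and the key and label names are hypothetical.

```python
# Illustrative provenance sketch, NOT a real standard such as C2PA.
# A production system would use public-key signatures; HMAC with a
# shared key stands in here so the example runs with the stdlib only.
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-secret-key"  # hypothetical signing key


def sign_media(media_bytes: bytes, creator: str) -> dict:
    """Produce a detached provenance record for a media file."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    payload = json.dumps({"creator": creator, "sha256": digest}, sort_keys=True)
    tag = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": tag}


def verify_media(media_bytes: bytes, record: dict) -> str:
    """Return a simple user-facing label, analogous to a browser padlock."""
    expected = hmac.new(SIGNING_KEY, record["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, record["signature"]):
        return "unknown source"  # record forged or corrupted
    claimed = json.loads(record["payload"])["sha256"]
    if hashlib.sha256(media_bytes).hexdigest() != claimed:
        return "tampered"  # media edited after signing
    return "verified origin"


video = b"...raw video bytes..."
rec = sign_media(video, "Example Newsroom")
print(verify_media(video, rec))            # verified origin
print(verify_media(video + b"edit", rec))  # tampered
```

The point of the sketch is the division of labor Maheshwari describes: the cryptography stays invisible, and the user only sees one of a few plain labels.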

Lapey’s challenge is the human layer. Even if courts, platforms and technical experts can verify provenance, will ordinary users trust the signal? Maheshwari concedes this gap is real. The system has to reduce cognitive load, not add to it. People do not need to understand cryptography, only the meaning of the label: verified origin, AI-generated or unknown source. Building literacy, testing what disclosures actually work and aligning platforms with standards are, in his view, where “the rubber will hit the road.”

Closing stakes

Lapey and Maheshwari end where they began: Deepfakes threaten democracies less by spreading a particular lie than by making truth feel unreachable. Transparency and provenance can help societies sort good actors from bad ones without creating a centralized ministry of truth, but only if governance, platforms and public understanding evolve quickly enough to preserve a shared reality.

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article/video are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.


