FO° Talks: Deepfakes and Democracy: Why the Next Election Could Be Decided by AI

In this episode of FO° Talks, Catherine Lapey and Manish Maheshwari discuss how deepfakes threaten democracy by undermining shared reality, not just by spreading individual lies. The danger lies in hyperrealism combined with an economic imbalance: near-zero-cost mass production set against slow, costly verification. Maheshwari proposes provenance systems, modeled on Aadhaar or HTTPS-style trust signals, to restore confidence without centralizing “truth.”

Fair Observer author Catherine Lapey speaks with Manish Maheshwari, former head of Twitter India, an AI entrepreneur and a Mason Fellow at Harvard Kennedy School focused on AI governance and digital public goods. Their core worry is not simply that synthetic media can trick people into believing a lie, but that it can corrode the conditions that make democratic judgment possible: a shared sense of what counts as real.

From fake news to fake reality

Maheshwari defines a deepfake in plain terms as synthetic content that is presented as authentic. Synthetic media is not inherently malicious. It can enable creative uses and lower the cost of production for advertising and small businesses. The red line is deception, when creators pass fabricated material off as real in order to misinform, damage reputations or distort public choices.

For him, the most destabilizing effect is psychological and social. When realistic fakes become common, the public can start doubting genuine evidence as well. Maheshwari puts it starkly: “Democracies don’t fail when lies spread… They fail when citizens stop believing that truth is even possible.” In that scenario, politics becomes a contest of narratives with no agreed reference points, and “shared reality” starts to fracture.

Why deepfakes scale differently

Lapey presses him on what makes deepfakes distinct from older forms of propaganda, defamation or “cheap fakes” such as misleading edits. Maheshwari highlights two shifts.

The first is realism. Today’s AI-generated video can be convincing even to technically literate viewers, narrowing the gap between what looks true and what is true. The second is scale and economics. The production side is becoming nearly frictionless, while the verification side remains expensive and slow. As Maheshwari frames it, “The cost of production is almost zero. The cost of finding out and correcting is significant.” That imbalance favors bad actors, especially when they can flood platforms faster than journalists, fact-checkers or authorities can respond.

Their discussion returns to elections. A realistic-looking clip of a leader saying something inflammatory can shape opinions immediately, and later debunking often cannot unwind the initial impact, particularly when the content is timed to land right before a vote.

Harassment, violence and real-world harm

Lapey and Maheshwari broaden the lens to social harm. Maheshwari says synthetic media is increasingly used to “troll and abuse,” including character assassination and bullying. He also draws on his experience at Twitter India to underline that misinformation’s impact is not hypothetical. Even before today’s deepfakes, misleading videos circulated out of context could inflame tensions and contribute to mob violence.

Deepfakes remove even the minimal constraint of needing a real clip to distort. Now, fabricated “evidence” can be generated from scratch, packaged to provoke outrage and distributed quickly and widely.

India’s draft rules and three governance models

The conversation then turns to regulation. Maheshwari compares three broad approaches to AI governance.

In his telling, the European Union tends toward a rights-first, compliance-heavy model. The United States leans more market-led and voluntary, and China relies on state-controlled, coercive mechanisms. India, he argues, is experimenting with something different: a trust-based framework aimed at clarifying authenticity rather than restricting innovation.

He summarizes India’s draft approach as disclosure and platform responsibility, not outright censorship. The proposals he discusses include visible disclaimers for synthetic content, automated detection requirements for large platforms above a user threshold, and creator declarations to identify AI-generated media. He links this to a broader idea he calls “truth sovereignty,” or a country’s capacity to set workable standards for authenticity in its own democratic environment.

Verification infrastructure and maintaining usability

Maheshwari’s most concrete proposal is to build a verification infrastructure for media, analogous to India’s Aadhaar system, the national digital identity framework used at massive scale. Aadhaar authenticates identity through biometrics such as iris scans and fingerprints, reducing friction in access to services and enabling trust at scale.

He imagines a similar logic for content: provenance frameworks that embed invisible but verifiable cryptographic signatures in media, allowing platforms and investigators to confirm their origin and detect tampering without changing what the content looks like. His analogies are designed to make this intuitive. A passport chip is invisible to the traveler but readable by authorities to confirm the document has not been altered. Hypertext Transfer Protocol Secure, or HTTPS, works because browsers verify certificates in the background and surface a simple signal, like a padlock icon, to the user.
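The tamper-detection idea behind such provenance frameworks can be sketched in a few lines. This toy example uses a symmetric HMAC from Python’s standard library as a stand-in for the asymmetric signatures a real provenance standard (such as C2PA-style content credentials) would use; the key and media bytes here are purely illustrative.

```python
import hashlib
import hmac

# Hypothetical publisher key. A real system would use an asymmetric key
# pair so that anyone can verify without being able to forge signatures;
# an HMAC keeps this sketch self-contained with the standard library.
PUBLISHER_KEY = b"demo-secret-key"

def sign_media(media_bytes: bytes) -> str:
    """Produce a provenance tag bound to the exact media bytes."""
    return hmac.new(PUBLISHER_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign_media(media_bytes), tag)

original = b"frame data of an authentic video clip"
tag = sign_media(original)

print(verify_media(original, tag))           # untouched media verifies: True
print(verify_media(original + b"x", tag))    # any tampering breaks it: False
```

The user-facing layer Maheshwari describes would sit on top of a check like `verify_media`, surfacing only a simple label, much as a browser reduces certificate verification to a padlock icon.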

Lapey’s challenge is the human layer. Even if courts, platforms and technical experts can verify provenance, will ordinary users trust the signal? Maheshwari concedes this gap is real. The system has to reduce cognitive load, not add to it. People do not need to understand cryptography, only the meaning of the label: verified origin, AI-generated or unknown source. Building literacy, testing what disclosures actually work and aligning platforms with standards are, in his view, where “the rubber will hit the road.”

Closing stakes

Lapey and Maheshwari end where they began: Deepfakes threaten democracies less by spreading a particular lie than by making truth feel unreachable. Transparency and provenance can help societies sort good actors from bad ones without creating a centralized ministry of truth, but only if governance, platforms and public understanding evolve quickly enough to preserve a shared reality.

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article/video are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.


Fair Observer, 461 Harbor Blvd, Belmont, CA 94002, USA