FO° Talks: Deepfakes and Democracy: Why the Next Election Could Be Decided by AI

In this episode of FO° Talks, Catherine Lapey and Manish Maheshwari discuss how deepfakes threaten democracy by undermining shared reality, not just by spreading individual lies. The danger comes from hyperrealism and near-zero-cost mass production versus costly verification. Maheshwari proposes provenance systems like Aadhaar or HTTPS-style signals to restore trust without centralizing “truth.”


Fair Observer author Catherine Lapey speaks with Manish Maheshwari, former head of Twitter India, an AI entrepreneur and a Mason Fellow at Harvard Kennedy School focused on AI governance and digital public goods. Their core worry is not simply that synthetic media can trick people into believing a lie, but that it can corrode the conditions that make democratic judgment possible: a shared sense of what counts as real.

From fake news to fake reality

Maheshwari defines a deepfake in plain terms as synthetic content that is presented as authentic. Synthetic media is not inherently malicious. It can enable creative uses and lower the cost of production for advertising and small businesses. The red line is deception, when creators pass fabricated material off as real in order to misinform, damage reputations or distort public choices.

For him, the most destabilizing effect is psychological and social. When realistic fakes become common, the public can start doubting genuine evidence as well. Maheshwari puts it starkly: “Democracies don’t fail when lies spread… They fail when citizens stop believing that truth is even possible.” In that scenario, politics becomes a contest of narratives with no agreed reference points, and “shared reality” starts to fracture.

Why deepfakes scale differently

Lapey presses him on what makes deepfakes distinct from older forms of propaganda, defamation or “cheap fakes” such as misleading edits. Maheshwari highlights two shifts.

The first is realism. Today’s AI-generated video can be convincing even to technically literate viewers, narrowing the gap between what looks true and what is true. The second is scale and economics. The production side is becoming nearly frictionless, while the verification side remains expensive and slow. As Maheshwari frames it, “The cost of production is almost zero. The cost of finding out and correcting is significant.” That imbalance favors bad actors, especially when they can flood platforms faster than journalists, fact-checkers or authorities can respond.

Their discussion returns to elections. A realistic-looking clip of a leader saying something inflammatory can shape opinions immediately, and later debunking often cannot unwind the initial impact, particularly when the content is timed to land right before a vote.

Harassment, violence and real-world downstream harms

Lapey and Maheshwari broaden the lens to social harm. Maheshwari says synthetic media is increasingly used to “troll and abuse,” including character assassination and bullying. He also draws on his experience at Twitter India to underline that misinformation’s impact is not hypothetical. Even before today’s deepfakes, misleading videos circulated out of context could inflame tensions and contribute to mob violence.

Deepfakes remove even the minimal constraint of needing a real clip to distort. Now, fabricated “evidence” can be generated from scratch, packaged to provoke outrage and distributed quickly and widely.

India’s draft rules and three governance models

The conversation then turns to regulation. Maheshwari compares three broad approaches to AI governance.

In his telling, the European Union tends toward a rights-first, compliance-heavy model. The United States leans more market-led and voluntary, and China relies on state-controlled, coercive mechanisms. India, he argues, is experimenting with something different: a trust-based framework aimed at clarifying authenticity rather than restricting innovation.

He summarizes India’s draft approach as disclosure and platform responsibility, not outright censorship. The proposals he discusses include visible disclaimers for synthetic content, automated detection requirements for large platforms above a user threshold, and creator declarations to identify AI-generated media. He links this to a broader idea he calls “truth sovereignty,” or a country’s capacity to set workable standards for authenticity in its own democratic environment.

Verification infrastructure and maintaining usability

Maheshwari’s most concrete proposal is to build a verification infrastructure for media, analogous to India’s Aadhaar system, the biometric identity framework used at a massive scale. The system authenticates identity through biometrics such as iris scans and fingerprints, reducing friction in access to services and enabling trust at scale.

He imagines a similar logic for content: provenance frameworks that embed invisible but verifiable cryptographic signatures in media, allowing platforms and investigators to confirm a file’s origin and detect tampering without changing what the content looks like. His analogies are designed to make this intuitive. A passport chip is invisible to the traveler but readable by authorities to confirm the document has not been altered. Hypertext Transfer Protocol Secure, or HTTPS, works because browsers verify certificates in the background and surface a simple signal, like a padlock icon, to the user.
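The sign-and-verify logic behind such provenance schemes can be sketched in a few lines. The sketch below uses Python’s standard-library HMAC purely as a stand-in; real provenance standards, such as the C2PA specification Maheshwari’s proposal resembles, rely on public-key signatures and certificate chains rather than a shared secret, and all names here are illustrative.

```python
import hashlib
import hmac

# Hypothetical signing key held by a publisher. Real systems would use an
# asymmetric keypair so that anyone can verify without being able to sign.
PUBLISHER_KEY = b"example-newsroom-signing-key"

def sign_media(media_bytes: bytes) -> bytes:
    """Produce a tamper-evident signature over the raw media bytes."""
    return hmac.new(PUBLISHER_KEY, media_bytes, hashlib.sha256).digest()

def verify_media(media_bytes: bytes, signature: bytes) -> bool:
    """Check that the media still matches the signature it shipped with."""
    expected = hmac.new(PUBLISHER_KEY, media_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

original = b"...raw video bytes..."
sig = sign_media(original)

print(verify_media(original, sig))         # True: origin confirmed
print(verify_media(original + b"x", sig))  # False: tampering detected
```

The point mirrors the passport-chip analogy: the signature changes nothing a viewer sees, but any single-bit alteration to the file makes verification fail.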

Lapey’s challenge is the human layer. Even if courts, platforms and technical experts can verify provenance, will ordinary users trust the signal? Maheshwari concedes this gap is real. The system has to reduce cognitive load, not add to it. People do not need to understand cryptography, only the meaning of the label: verified origin, AI-generated or unknown source. Building literacy, testing what disclosures actually work and aligning platforms with standards are, in his view, where “the rubber will hit the road.”
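The three-label triage Maheshwari describes could be collapsed into a single user-facing signal, much as a browser collapses certificate checks into a padlock. This is a minimal, hypothetical sketch of that mapping; the label names follow the transcript and the inputs are illustrative.

```python
def provenance_label(has_signature: bool, signature_valid: bool,
                     declared_ai: bool) -> str:
    """Collapse verification detail into one of three user-facing labels."""
    if has_signature and signature_valid:
        # Creator declarations distinguish authentic from synthetic media.
        return "AI-generated" if declared_ai else "Verified origin"
    # Missing or failed verification both surface as the same warning.
    return "Unknown source"

print(provenance_label(True, True, False))    # Verified origin
print(provenance_label(True, True, True))     # AI-generated
print(provenance_label(False, False, False))  # Unknown source
```

The design choice is the one Maheshwari emphasizes: the cryptographic detail stays in the background, and the user sees only a label whose meaning they can learn once.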

Closing stakes

Lapey and Maheshwari end where they began: Deepfakes threaten democracies less by spreading a particular lie than by making truth feel unreachable. Transparency and provenance can help societies sort good actors from bad ones without creating a centralized ministry of truth, but only if governance, platforms and public understanding evolve quickly enough to preserve a shared reality.

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article/video are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

January 17, 2026

Fair Observer, 461 Harbor Blvd, Belmont, CA 94002, USA