Fair Observer author Catherine Lapey speaks with Manish Maheshwari, former head of Twitter India, an AI entrepreneur and a Mason Fellow at Harvard Kennedy School focused on AI governance and digital public goods. Their core worry is not simply that synthetic media can trick people into believing a lie, but that it can corrode the conditions that make democratic judgment possible: a shared sense of what counts as real.
From fake news to fake reality
Maheshwari defines a deepfake in plain terms as synthetic content that is presented as authentic. Synthetic media is not inherently malicious. It can enable creative uses and lower the cost of production for advertising and small businesses. The red line is deception, when creators pass fabricated material off as real in order to misinform, damage reputations or distort public choices.
For him, the most destabilizing effect is psychological and social. When realistic fakes become common, the public can start doubting genuine evidence as well. Maheshwari puts it starkly: “Democracies don’t fail when lies spread… They fail when citizens stop believing that truth is even possible.” In that scenario, politics becomes a contest of narratives with no agreed reference points, and “shared reality” starts to fracture.
Why deepfakes scale differently
Lapey presses him on what makes deepfakes distinct from older forms of propaganda, defamation or “cheap fakes” such as misleading edits. Maheshwari highlights two shifts.
The first is realism. Today’s AI-generated video can be convincing even to technically literate viewers, narrowing the gap between what looks true and what is true. The second is scale and economics. The production side is becoming nearly frictionless, while the verification side remains expensive and slow. As Maheshwari frames it, “The cost of production is almost zero. The cost of finding out and correcting is significant.” That imbalance favors bad actors, especially when they can flood platforms faster than journalists, fact-checkers or authorities can respond.
Their discussion returns to elections. A realistic-looking clip of a leader saying something inflammatory can shape opinions immediately, and later debunking often cannot unwind the initial impact, particularly when the content is timed to land right before a vote.
Harassment, violence and real-world harm
Lapey and Maheshwari broaden the lens to social harm. Maheshwari says synthetic media is increasingly used to “troll and abuse,” including character assassination and bullying. He also draws on his experience at Twitter India to underline that misinformation’s impact is not hypothetical. Even before today’s deepfakes, misleading videos circulated out of context could inflame tensions and contribute to mob violence.
Deepfakes remove even the minimal constraint of needing a real clip to distort. Now, fabricated “evidence” can be generated from scratch, packaged to provoke outrage and distributed quickly and widely.
India’s draft rules and three governance models
The conversation then turns to regulation. Maheshwari compares three broad approaches to AI governance.
In his telling, the European Union tends toward a rights-first, compliance-heavy model. The United States leans more market-led and voluntary, and China relies on state-controlled, coercive mechanisms. India, he argues, is experimenting with something different: a trust-based framework aimed at clarifying authenticity rather than restricting innovation.
He summarizes India’s draft approach as disclosure and platform responsibility, not outright censorship. The proposals he discusses include visible disclaimers for synthetic content, automated detection requirements for large platforms above a user threshold, and creator declarations to identify AI-generated media. He links this to a broader idea he calls “truth sovereignty,” or a country’s capacity to set workable standards for authenticity in its own democratic environment.
Verification infrastructure and maintaining usability
Maheshwari’s most concrete proposal is to build a verification infrastructure for media, analogous to India’s Aadhaar system, the biometric identity framework used at massive scale. Aadhaar authenticates identity through biometrics such as iris scans and fingerprints, reducing friction in access to services and enabling trust between strangers and institutions.
He imagines a similar logic for content: provenance frameworks that embed invisible but verifiable cryptographic signatures in media, allowing platforms and investigators to confirm a file’s origin and detect tampering without changing how the content looks. His analogies are designed to make this intuitive. A passport chip is invisible to the traveler but readable by authorities to confirm the document has not been altered. Hypertext Transfer Protocol Secure, or HTTPS, works because browsers verify certificates in the background and surface a simple signal, like a padlock icon, to the user.
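For readers curious how such background verification can work in practice, the short Python sketch below shows the general idea using the widely available “cryptography” library: a publisher signs a media file with a private key, and anyone holding the matching public key can later confirm the bytes are unaltered. This is only an illustration of detached digital signatures in general, not the specific provenance scheme Maheshwari or India’s draft rules envision; the helper names are hypothetical.

    # Minimal illustration: detached signing and verification of media bytes
    # with Ed25519 keys (Python "cryptography" package). Hypothetical helper
    # names; not the provenance standard discussed in the interview.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def sign_media(private_key, media_bytes: bytes) -> bytes:
        # The publisher signs the raw bytes of the clip before distribution.
        return private_key.sign(media_bytes)

    def label_media(public_key, media_bytes: bytes, signature: bytes) -> str:
        # A platform checks the signature in the background and surfaces a
        # simple label to the user, much like a browser's padlock icon.
        try:
            public_key.verify(signature, media_bytes)
            return "verified origin"
        except InvalidSignature:
            return "unknown source"

    publisher_key = Ed25519PrivateKey.generate()
    clip = b"...raw video bytes..."
    signature = sign_media(publisher_key, clip)

    print(label_media(publisher_key.public_key(), clip, signature))              # verified origin
    print(label_media(publisher_key.public_key(), clip + b"edit", signature))    # unknown source

Real provenance frameworks go further, embedding such signatures in the file’s own metadata and chaining keys to trusted issuers so checks can run automatically, which is what makes the passport-chip and HTTPS analogies apt.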
Lapey’s challenge is the human layer. Even if courts, platforms and technical experts can verify provenance, will ordinary users trust the signal? Maheshwari concedes this gap is real. The system has to reduce cognitive load, not add to it. People do not need to understand cryptography, only the meaning of the label: verified origin, AI-generated or unknown source. Building literacy, testing what disclosures actually work and aligning platforms with standards are, in his view, where “the rubber will hit the road.”
Closing stakes
Lapey and Maheshwari end where they began: Deepfakes threaten democracies less by spreading a particular lie than by making truth feel unreachable. Transparency and provenance can help societies sort good actors from bad ones without creating a centralized ministry of truth, but only if governance, platforms and public understanding evolve quickly enough to preserve a shared reality.
[Lee Thompson-Kolar edited this piece.]
The views expressed in this article/video are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.