In early 2019, Foreign Affairs took a detailed look at the social and potentially political implications of the emergence of a phenomenon called “Deepfakes,” the result of recent advances in digital technology. The article describes the kind of manipulation of images and sound that now makes it possible, starting with disparate components of reality, to fabricate convincingly realistic documents that appear to be totally natural. The perfect device for creating and maintaining an increasingly hyperreal world.
The article provides readers with the basic scientific principles behind the technology, which combines artificial intelligence, big data and digital manipulation of image and sound. The authors present deepfake’s practically limitless power of hyperrealistic simulation. Its emergence and future exploitation may require changing the traditional proverb to: “Fool me once, shame on you. Fool me twice…” and we can both be sure it’s deepfake technology that has pulled off the trick.
The article explores in considerable depth the potential for distortion and manipulation. It goes on to speculate at length about the measures that may eventually serve to combat its effects or neutralize its insidious power. The authors reach a largely pessimistic conclusion: There is no easily implementable defense.
Deepfake means that what everyone now recognizes as the contamination of our media — and particularly our social media — by “fake news” has taken one enormous and worrying step further. Unlike fake news, deepfake doesn’t just appear in the form of narrated information, which usually boils down to the biased interpretation of reported facts. Instead, it imposes itself as a substitute reality, a presentation of “alternative facts” far more convincing than the brazen lies of US President Donald Trump and other politicians. It’s more than words. It resembles concrete reality but may be entirely imaginary.
The authors of the article, Robert Chesney and Danielle Citron, worry about its use in politics, a domain in which truth already has some difficulty in making itself known: “Deepfakes have the potential to be especially destructive because they are arriving at a time when it already is becoming harder to separate fact from fiction.”
Here is today’s 3D definition:
The ultimate technological achievement of hyperreality that enshrines both narcissism and its opposite in highly realistic and difficult-to-detect digital manipulations of audio or video
The last 10 words of our definition are borrowed directly from the Foreign Affairs article, which defines deepfake in these terms: “highly realistic and difficult-to-detect digital manipulations of audio or video.” Chesney and Citron describe the phenomenon of deepfakes as a direct result of the encounter of sophisticated digital technology and social media. It promises to amplify the already worrying potential of social media to validate false ideas and encourage the behaviors associated with the dissemination and consumption of fake news.
The article stops short of analyzing the central component of social media culture that makes fake news not only effective but also inevitable: the narcissism that draws users to the media in the first place. Social media works like a two-way mirror. First, it projects each individual’s image outward for maximum display, exposure and hoped-for feedback. At the same time, it serves to fix and confirm the values associated with that artificially constructed self-image.
One of the solutions that Chesney and Citron propose to neutralize the worst political effects of deepfakes is what they call “lifelogging,” which they define as the “practice of recording nearly every aspect of one’s life.” This would enable politicians to create a complete log of all their activities and thus to call out and contradict deepfakes of their actions and words. The authors even imagine commercial services delivering and managing a person’s lifelog. They appropriately dismiss the idea as unfeasible because it will inevitably be perceived as “invasive,” leading to the creation of “a massive peer-to-peer surveillance network.” As if that isn’t what the commercial internet has already become!
This nevertheless illustrates how easy it has become, even in our speculation, to slide into the acceptance of a form of hyperreality that replaces our perception of reality. In a society dominated by hyperreality, the spontaneity of our social existence exists only to be captured and codified in the form of big data. The image produced by the data and the logical (i.e., repetitive) patterns discerned by artificial intelligence end up replacing the dynamics of real social interaction. And we, in turn, end up accepting that as the basis of our social reality.
In other words, and despite the authors’ worries concerning political responsibility and decision-making, because the technology exists, attracts and may thus generate income and profit, deepfakes — some for entertainment, others for manipulative deception — will be a feature of the world we are now being invited to join. In a technology-dominated society, reality and hyperreality are converging. The boundary between them will disappear. Attempting to separate them will contradict our commitment to a free-market economy. To a great extent, reality and hyperreality have already merged. Deepfakes will simply add a new dimension to the game of illusions.
Social networks have existed throughout history. Humans are social animals, just as bees and ants are. But thanks to our imagination and personal ambition, we have achieved levels of creativity and productivity that our insect friends will never attempt.
The social networks we talk about today are, in fact, pseudo-social networks. True social networks form as the result of people sharing experience in the patient development of a common language and culture. Because our electronic media can imitate the functional principles of true social networks and, thanks to their speed, eliminate the requirement of patience, they open the floodgates to the acceptance of a totally illusory hyperreality.
What they produce looks like human culture, but it is clearly something else. In true social networks, people not only share but also build experience. In pseudo-networks, they share what they believe to be information, which is past experience that has already been interpreted and codified. When experience is codified, it contains a purpose built into the strategy of codification that may or may not be evident. That purpose provides the basis for the construction of hyperreality.
From the start (let’s assume it was around 15 years ago), social media quite naturally appealed to the kind of narcissism that is prevalent among socially awkward students in an elite university, the kind of narcissism Mark Zuckerberg might have experienced as a rudderless member of the next generation’s elite at Harvard. The name of the brand he created and which has come to dominate the landscape of social media advertises its narcissistic ambition: Facebook, a book of faces, though the result has none of the coherence and substance of even a shoddily written book.
One’s face is one’s image, which, despite visible faults, everyone instinctively believes conveys to the world outside an idealized image of oneself. Narcissus fell in love with the image of his face and became indifferent to the world around him; not just to other people, but to the world itself, to the point of no longer understanding that the object he fixed his attention on was the fluid medium of water. For Narcissus, the water existed only as a mirror reflecting his face.
Social media’s orientation towards the user’s self-image provides the psychological basis on which our latest and most effective version of hyperreality can be built. Marshall McLuhan taught us that the medium is the message (and the massage). And we mustn’t forget that the word medium and its plural — media — both start with “me.”
Chesney and Citron review and ultimately dismiss a series of proposed methods to detect and counter the effects of deepfakes. They express their deep pessimism when they point out that “even if extremely capable detection algorithms emerge, the speed with which deepfakes can circulate on social media will make debunking them an uphill battle.” They conclude that, “In the meantime, democratic societies will have to learn resilience.” They tell us little about how that learning will take place, though they do forecast a “fight,” without indicating who might be engaged in the fighting and which champions might emerge. They tell us that the solution will require “fighting the descent into a post-truth world, in which citizens retreat to their private information bubbles and regard as fact only that which flatters their own beliefs.”
That is a fitting description of hyperreality. If we really want to combat it, we should strive to discourage rather than reward the motivation that makes the “post-truth world” economically viable for its class of providers, which includes not only the owners of our media, but also the governing agents — politicians and those who finance them — who rely on hyperreality to promote their values and beliefs. But that raises questions most people in the public sphere prefer avoiding.
*[In the age of Oscar Wilde and Mark Twain, another American wit, the journalist Ambrose Bierce, produced a series of satirical definitions of commonly used terms, throwing light on their hidden meanings in real discourse. Bierce eventually collected and published them as a book, The Devil’s Dictionary, in 1911. We have shamelessly appropriated his title in the interest of continuing his wholesome pedagogical effort to enlighten generations of readers of the news.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.