Two major traumatic events marked the year 2022, and both will leave lasting traces in history. The first was the Russian invasion of Ukraine in February. The fighting is still dragging on, with no end in sight. The second appeared near the end of the year, and it promises to affect the economy for even longer than any of our forever wars: the public release of ChatGPT, a week after Black Friday.
The public quickly became fascinated with the tool’s ability to credibly imitate human expression. Almost as quickly, the media began spreading the fear that this new form of intelligence might take over the world. The educational community went into panic. The fear of undetectable cheating by students who might use ChatGPT to write academic essays turned out to be traumatizing for institutions that had invested heavily in anti-plagiarism software. (Though I have personally experimented with and documented one approach to solving this problem, and others are busily working on it, confusion continues to reign in the field of education as a whole. This shouldn’t surprise us, since educational traditions developed during the industrial age have encouraged a form of learning that is more appropriate to machines than to human minds.)
As for the general economy, it took just about six months for the media to discover the core issue: AI’s effect on employment. Last week, The Washington Post published an article titled “ChatGPT took their jobs. Now they walk dogs and fix air conditioners.” The authors, Pranshu Verma and Gerrit De Vynck, hint at the prospect of an apocalyptic outcome of a process that has only just begun. “Some economists,” they write, “predict artificial intelligence technology like ChatGPT could replace hundreds of millions of jobs, in a cataclysmic reorganization of the workforce mirroring the industrial revolution.”
The authors provide the initial evidence of the trend. “For some workers, this impact is already here. Those who write marketing and social media content are in the first wave of people being replaced with tools such as chatbots, which are seemingly able to produce plausible alternatives to their work.”
Today’s Weekly Devil’s Dictionary definition:
Plausible alternatives:

Artifacts similar to junk jewelry that successfully imitate forms of rationality cultivated by humans in former times, now abandoned by a human race that insists on delegating its reasoning capacity to machines capable of generating standardized, marketable simulacra of human insight.
Keen to demonstrate their personal sympathy for the plight of those who have been displaced by AI, the authors cite the view of experts who “say that even advanced AI doesn’t match the writing skills of a human: It lacks personal voice and style, and it often churns out wrong, nonsensical or biased answers.” They appear to admit that in an ideal world, where the quest for quality would systematically prevail over the obsession with quantity, we humans would choose to trust in people rather than machines. Humans – at least potentially – possess a specific quality no algorithm can conceivably have: a moral sense clearly related to their innate understanding of biological reality.
But, of course, we don’t live in that ideal world. In purely monetary terms, quality cannot compete with quantity. And since everything we now consider “good” will inevitably be evaluated in monetary terms, that competitive advantage for quantity over quality is unlikely to change.
In today’s world, goodness itself, when it is recognized, always has a price tag attached to it. Quality appears only in limited quantities, and its rarity makes it expensive. That means it will fail to meet the most widely shared objective in the commercial world: satisfying the demands of a mass market.
Concerning the job market – itself a mass market – the authors note that AI and robots can increasingly do things humans were traditionally paid to do, reminding us of “chatbots that can hold fluid conversations, write songs and produce computer code.” Replacing the humans who do those things represents an obvious case of “the economy, stupid!”
The Post’s article reveals but doesn’t explicitly state the fact that we now find ourselves at a special moment in human history. As a society, we face an existential choice at a time when none of our institutions appears prepared for the challenge. The authors mention AI producing “plausible alternatives” to human production. But we are the ones who must begin envisioning plausible alternatives to a system that is undermining its own basic assumptions about the economy.
The authors’ use of the word “plausible” refers to the criterion put forward by the late mathematician Alan Turing, whose “Turing test” determines whether a machine is capable of fooling us into thinking that it is human. My experience with ChatGPT tells me that only those who want to be fooled will be fooled. It’s true, however, that because most human beings are not very skilled at producing coherent prose, ChatGPT’s performance is superficially up to “some” human standards. But everything it produces is annoyingly predictable.
To its credit, ChatGPT makes far fewer performance errors than the average human. That is what effectively fools us. But its performance is – to coin an oxymoron – deeply superficial. It has polish but no substance. In a world seduced and regulated by hyperreality, polish impresses. Some people today may even see AI’s lack of substance as normal for human behavior. If our criterion for judgment is the lowest common denominator, this makes sense.
There is, however, a difference. Humans, unlike AI, have the capacity to rise above what is considered normal. The reason we can be fooled is that our hyperreal culture has conditioned us to be fooled. We are consumers of foolery. We expect insincerity in a world regulated by the need to sell not just products, but also ideas and even our own personalities. All these things – products, ideas and personalities – have identifiable monetary value, the measure by which we expect everything to be judged.
Humanity should have noticed by now what is truly exceptional in this moment of history. Many of the values that have guided us in the past have brought us to the brink of uncontrollable catastrophe. Technology is wonderful but, at some point, its wonders threaten our existing assets. A competitive economy is marvelous as a stimulator of innovation, but Schumpeter’s theory of “creative destruction” may easily morph into irredeemably destructive destruction. The individualism of the consumer society is exhilarating, promising to satisfy an ever wider range of desires, but it ends up confining people within the isolated cells of their cultivated consumer habits.
For many, disaster is looming, and it has multiple names. It might be the destruction of our ecosystem, nuclear confrontation or the total collapse of an economy based on the utterly insubstantial notion of monetary value. Citing “liberal democracy” with the belief that it can be a source of stability no longer makes sense. Our institutions that leave essential decisions to “market forces” have proved themselves incapable of addressing the most obvious problems or choosing any “plausible alternative” to what we see as risky. This has undermined the majority’s confidence in a rational future. As a society, we increasingly sense that we may have painted ourselves into a corner.
This crisis of values is real. We have made what appears to be a reasoned commitment to a productivist economy that “creates wealth.” But the evidence is clear that this has undermined the environment on which we depend. Yet we have chosen to make ourselves more dependent on productivism than on the physical environment that supports us. We seem incapable of imagining plausible alternatives while trusting AI to keep producing plausible alternatives to our rationality.
AI’s plausible alternative is clearly a chimera. Though totally superficial, it has successfully created the illusion of depth. In the meantime, an increasing number of people who were taught to measure their own value by the job they managed to secure are condemned to watch helplessly as AI steps in to plausibly imitate what they were trained to do.
AI systematically produces plausible alternatives to human effort. It’s time for humans to use their biological brains to imagine the feasible alternatives that may avert catastrophe.
*[In the age of Oscar Wilde and Mark Twain, another great wit, the American journalist Ambrose Bierce, produced a series of satirical definitions of commonly used terms, throwing light on their hidden meanings in real discourse. Bierce eventually collected and published them as a book, The Devil’s Dictionary, in 1911. We have shamelessly appropriated his title in the interest of continuing his wholesome pedagogical effort to enlighten generations of readers of the news. Read more of Fair Observer’s Devil’s Dictionary.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.