Since the release of ChatGPT near the end of 2022, it seems that every pundit in the world has succumbed to participating in a collective sport that consists of highlighting the dangers of AI. There have been literally millions of articles on the topic. If printed as hard copy, they could wallpaper every skyscraper in New York and every glass-and-steel building in Silicon Valley.
People like Elon Musk and the late Stephen Hawking have been practicing AI fear-mongering for years. And there were and are good reasons for doing so. Nevertheless, the motives have never been clear, and the science has remained, at best, murky. Musk himself promoted AI even while complaining that it would end civilization as we know it. Now that it’s here and in everyone’s hands, it makes good sense to cry havoc. AI-bashing and calling for its regulation have now become a convenient way of proving to the world that one is a responsible citizen, concerned about the future of humanity.
The fear – already signalled back in 1968 by Stanley Kubrick in 2001: A Space Odyssey – never caused much more than a vague stir in people’s minds over the intervening half century. At the time, it evoked a distant future which most people found difficult to believe would ever become real. Kubrick’s HAL, not only a speaking but also a strategically thinking computer that governed all activity on an interplanetary spacecraft, was for the public of the time literally science fiction, with an emphasis on fiction. It belonged to the same ultra-hyperreal universe of cities traversed by flying cars, the ultimate goal and supreme fulfilment of the increasingly technology-oriented consumer society. It was a curious cultural phenomenon. People wanted to believe these things would become real, but no one would have bet on it.
Whether it was flying cars or AI, in the real world, despite an ever advancing hyperreality, all that futurism seemed for decades to belong to an unattainable future. Then, at the end of 2022, something radical happened. Suddenly, everyone in the now connected universe could have a hands-on experience of AI and use it for undefined purposes. Immediately, alarm bells began ringing across the media landscape. As a game-changing and earth-shaking event, the release of ChatGPT last November rivalled in importance the Russian invasion of Ukraine. In 2023, no writer or producer of text has been more talked about than ChatGPT.
Now, AI, a technology that only occasionally shimmered on the surface of our collective consciousness, failing to compete for our attention with powerful stories of war, climate change, imperiled supply lines and anti-hegemonic rebellion, has become a blinding light throwing all these issues into its penumbra. After all, wars eventually end, climate change we now realize is beyond anyone’s control, the economy silently adjusts thanks to the play of markets and power relationships regularly shift. But AI is a cancer that never stops growing, not because it reproduces itself but because it gains power, just as humans used to do, by learning.
The world is now aware that when humans learn new things, they tend to apply that learning abusively. But people can be arrested, sanctioned and punished for doing so. When machines programmed to be intelligent learn things that they may use for their own purposes, there is no legal or institutional recourse. Even worse, they may learn things that serve not their own but some particular humans’ purposes. And those humans cannot be held responsible because it is the machine, not them, that is both learning and applying the learning.
In other words, humanity, which has shown itself so inept at ensuring its own education, is faced with a monumental learning problem that it has no means to address.
In this context The Guardian reported on one point of view expressed at the International Conference on Learning Representations (ICLR), which it describes as “the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence called representation learning, but generally referred to as deep learning.”
Today’s Weekly Devil’s Dictionary definition:
Deep learning: Indiscriminate amassing of text from limitless sources for the purpose of making it possible to reproduce the lowest common denominator of contemporary culture.
The Guardian focuses on the fate of Timnit Gebru, who claims she was sacked by Google for sounding an alarm bell on the very real risks related to AI. The newspaper allows Gebru to sum up her thesis in economic terms. Like so many of the problems humanity faces, the “root of all evil,” to quote St Paul, is good old greed.
“In fact, it is a gold rush,” she explains. “And a lot of the people who are making money are not the people actually in the midst of it. But it’s humans who decide whether all this should be done or not.” The Ethiopian computer scientist goes on to offer a more refined sociological analysis of the origin of the greed and the nature of the problem it created. “The clear danger… is that such supposed ‘intelligence’ is based on huge data sets that ‘overrepresent hegemonic viewpoints and encode biases potentially damaging to marginalised populations.’” The Guardian clarifies: “AI threatens to deepen the dominance of a way of thinking that is white, male, comparatively affluent and focused on the US and Europe.”
In other words, the world is facing a problem created by people with a gold rush mentality who are using the consolidated tools and data of their culture to dominate the world and impose a form of civilization in their image. We shouldn’t be talking about the technology and its capacity for evil, nor even about the people who created it, but about those who exploit it with the intention of making humanity dependent on it. “I’m not worried about machines taking over the world,” she insists. “I’m worried about groupthink, insularity and arrogance in the AI community.” She speaks of “a quest to push beyond the tendency of the tech industry and the media to focus attention on worries about AI taking over the planet and wiping out humanity while questions about what the technology does, and who it benefits and damages, remain unheard.”
Gebru focuses on the captains of Silicon Valley industries who are driven by classic economic interest. “There’s a lot of exploitation in the field of AI, and we want to make that visible so that people know what’s wrong. But also, AI is not magic. There are a lot of people involved – humans.” And their motivation is only too banal: finding the best and most efficient way to make money.
According to Insider, Gebru “warned that the current AI ‘gold rush’ means companies won’t ‘self-regulate’ unless they are pressured.” Why should they? In a gold rush, even more than on Wall Street, as Oliver Stone’s hero Gordon Gekko put it: “Greed is good.” So the real question that remains is this: Who is likely to pressure them?
History tells us that even when we know over a span of decades that an impending catastrophe fed by the instinct of greed is likely, human institutions and human governments are powerless to effectively pressure those who take part in the competition to profit “while the sun is shining.” Greed systematically defeats the actions of even the most powerful governments.
In the 1980s, the scientific community started to recognize the potential risks associated with increasing greenhouse gas emissions from human activities, primarily the burning of fossil fuels. In 1988, the Intergovernmental Panel on Climate Change (IPCC) was established by the United Nations and the World Meteorological Organization to assess and synthesize scientific research on climate change.
Since then, numerous scientific studies, reports, and assessments have been conducted to understand the causes, impacts, and potential solutions to the climate crisis. The IPCC has released a series of comprehensive assessment reports, the most recent being the IPCC Sixth Assessment Report, which was published in 2021.
Nothing significant has been done. Governments are blamed. But whose interests are they responding to when they fail to act in any meaningful way? There’s a simple answer: the greedy.
*[In the age of Oscar Wilde and Mark Twain, another American wit, the journalist Ambrose Bierce produced a series of satirical definitions of commonly used terms, throwing light on their hidden meanings in real discourse. Bierce eventually collected and published them as a book, The Devil’s Dictionary, in 1911. We have shamelessly appropriated his title in the interest of continuing his wholesome pedagogical effort to enlighten generations of readers of the news.
Read more of Fair Observer Devil’s Dictionary.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.
For more than 10 years, Fair Observer has been free, fair and independent. No billionaire owns us, no advertisers control us. We are a reader-supported nonprofit. Unlike many other publications, we keep our content free for readers regardless of where they live or whether they can afford to pay. We have no paywalls and no ads.
In the post-truth era of fake news, echo chambers and filter bubbles, we publish a plurality of perspectives from around the world. Anyone can publish with us, but everyone goes through a rigorous editorial process. So, you get fact-checked, well-reasoned content instead of noise.
We publish 2,500+ voices from 90+ countries. We also conduct education and training programs on subjects ranging from digital media and journalism to writing and critical thinking. This doesn’t come cheap. Servers, editors, trainers and web developers cost money.
Please consider supporting us on a regular basis as a recurring donor or a sustaining member.
We rely on your support for our independence, diversity and quality.
Will you support FO’s journalism?