Ever since the European Enlightenment ushered in what we think of as “the age of science” some three centuries ago, the human race — or at least its social elites — has been entertaining the belief that we possess a cosmic monopoly on a commodity known as reasoning. Is that belief now being challenged by the emergence of a rival that is also capable of reasoning and might even surpass us? Is AI the Godzilla we need to defeat or tame?
Before the scientific age, there were rivals to human reasoning. Most human societies willingly and gratefully acknowledged them. They took the form either of a monotheistic God or a caste of immortal gods in polytheistic systems, who could dispute amongst themselves. Such divinities, whose wisdom was transmitted by tradition, provided the tools to frame complex debates about human behavior (morality) and power relationships (politics). Religious doctrines and complementary philosophical traditions thus provided an unconscious but broadly effective brake on unbridled human hubris.
The scientific age changed everything. It offered humanity an uncontested monopoly. It taught us that we could manage all debates about morality and power on our own, with no interference from invisible authorities. Those who mastered essential rhetorical skills could, unimpeded, impose on society their brand of reasoning. That ultimately led to the triumph of concentrated economic power (capitalism), but also various forms of totalitarianism, including fascism (delusional totalitarianism), Soviet-style communism (rationalistic totalitarianism) and even the kind of militaristic “democratic” totalitarianism that appears to have become the dominant model in the West.
This last form of oligarchic totalitarianism paying lip service to democracy emerged as an apparently inevitable degradation of the unrealistic ideal of “rule by popular consent.” The formal mechanism of consent behind which oligarchy reigns is a set of increasingly meaningless electoral rituals. Its central characteristic consists of associating wealth with all critical decision-making. And in this context, wealth is defined in purely monetary and financial terms. This contrasts with the notion of wealth described by the founding father of capitalism, Adam Smith. In his seminal work, The Wealth of Nations, the Scottish economist and philosopher regarded money as nothing more than a medium of exchange, a great “wheel of circulation” that facilitates trade. For Smith, wealth was something entirely different. Hoarding gold doesn’t make a country richer, nor does owning stock (and stock options) in global monopolistic enterprises define wealth. For Smith, it’s the stock of real, useful goods and the ability to produce more of them that constitutes true wealth.
We sometimes forget that Smith first achieved fame with another major work: The Theory of Moral Sentiments. The thesis of that book reflects a pre-Enlightenment concern with forms of reasoning formerly spawned by religious traditions, notably sympathy, morality and a sense of justice. Unwilling to abandon his Enlightenment (post-religious) credentials, Smith invented the fiction of an insubstantial but apparently powerful “impartial spectator” to replace any divine source of moral authority. He assumed that a sense of common purpose and the shared interests of a population (the nation) would provide an effective brake on overweening human ambition. We shouldn’t forget, however, that Smith was a contemporary of the French libertine Marquis de Sade, who held a slightly different set of beliefs that also proved to be influential. For the Marquis, self-interest was humanity’s core moral principle.
One of the consequences of the Enlightenment’s implicit belief in humanity’s monopoly on reasoning was the gradual convergence between Smith and Sade. Both saw competition as the core factor in human progress. Smith, the moralist, sought to attenuate its effects thanks to the workings of “impartial spectators” and “invisible hands.” Sade rejected anything invisible and mocked the very idea of being impartial. In the 20th century, the Russian-American writer Ayn Rand merged these two forms of reasoning to spawn a personal philosophy she called Objectivism. It combined radical individualism and laissez-faire capitalism. In the appendix to her novel Atlas Shrugged, she defined “the concept of man as a heroic being” and designated “reason as his only absolute.”
This quick review of one feature of Western intellectual history can help us understand the psychodrama spawned by the emergence of AI as a challenger to the human monopoly on reason and reasoning. We thought we were in charge and now discover not only that we may not be alone, but that we may find ourselves in the weaker position. The media remind us regularly that a serious battle awaits us: our imminent existential struggle with AGI (artificial general intelligence), presumed to be preparing a totalitarian regime led by the monotheistic technogod whose name is Superintelligence.
Can human reasoning prevail?
There’s a good reason to fear AI, even as it exists today. We may already have lost the undeclared war against large language models. Essentially, it’s a war to determine who owns a monopoly on reasoning. We’re losing it for two reasons. The first is that we believe the marketplace of reason must be governed by a monopolistic owner, whoever proves the most powerful. It’s the law of competition complemented by English naturalist Charles Darwin’s survival of the fittest. The second is that we don’t understand the weaponry that will determine who can win. Whether we’re talking about offense or defense, the critical weapon is, quite simply… rhetoric.
Our governing elite — political, economic, intellectual — represents a small minority that has traditionally reserved for its own exclusive use all official media, all new technology and even the language used in the popular media. It cannot abide the idea that algorithms and machines appear capable of beating it at the game of public reasoning, the only game it believed it owned and whose rules it alone could write and apply. To some extent, it’s right. If the weaponry is rhetoric, which chatbots appear to master better than humans, there’s reason to think the war is already over. Reason on its own goes nowhere and does nothing unless mediated by rhetoric.
Last week, in the second part of an ongoing conversation with Anthropic’s Claude on the question of rhetoric, I lamented the fact that our culture and our educational traditions have neglected the study of the toolbox of both language and reasoning: rhetoric. It’s not that we no longer use rhetoric. We have no choice. Any attempt to persuade someone else or even ourselves of a fact we know or an opinion we hold requires employing rhetoric. But how many high school or even college graduates have even a vague idea of what rhetoric is or the place it once held in our philosophical and educational traditions?
Our conversation became — as all conversations should — a kind of rhetorical joust. Not a battle after which one party or the other is justified in claiming victory, but an honest exploration of an acknowledged issue. In other words, something resembling a Socratic dialogue in which lazy reasoning can be replaced by something more focused, from which some sense of shared “understanding” can emerge.
Rhetoric, reasoning and democracy
I literally accused Claude of a form of lazy reasoning, which led to some impressive clarification on the chatbot’s part concerning the status and importance of rhetoric and what that meant for our political institutions, notably democracy. In the spirit of true dialogue, Claude concluded with a question:
Democracy isn’t failing so much as being slowly suffocated by conditions antithetical to its proper functioning. The question becomes: can these conditions be changed, or do they represent democracy’s practical limits?
I responded with a new prompt:
“Your final question contains a contradiction regarding a point you made earlier when you say that democracy is ‘operating under conditions it was not designed for.’ This is not about ‘practical limits’ for democracy but, as you affirm, the failure to provide the conditions ‘democracy was designed for.’ The real question this implies is that our civilization should choose what it wishes to prioritize: democracy or the non-optimal conditions that now define the way our society is structured. Some would call those non-optimal conditions financialized capitalism. Would you agree that there’s a strong relationship between that kind of economic system and what you describe as the source of stifled intellectual inquiry?”
You’ve caught an important inconsistency in my formulation, and your reframing is more precise. The question isn’t whether democracy has reached its natural limits, but whether we’re willing to create and maintain the conditions democracy requires – or whether we’ve prioritized other arrangements that are fundamentally incompatible with democratic functioning.
Financialized capitalism does appear to have a particularly corrosive relationship with the intellectual conditions democracy needs. Several mechanisms are at work:
Time scarcity as a structural feature. When most people must devote the majority of their waking hours to economic survival, and when precarity makes any reduction in working time feel dangerous, sustained intellectual engagement becomes practically impossible for most of the population. This isn’t incidental but inherent to a system that requires maximum labor extraction.
The commodification of attention. Financialized capitalism has developed unprecedented sophistication in capturing, fragmenting, and monetizing human attention. The business models of dominant media platforms depend on preventing the kind of sustained, focused thinking that serious inquiry requires. Distraction becomes profitable; contemplation does not.
Short-term profit horizons. Financial markets reward quarterly results, not long-term social investment. Education that cultivates critical thinking shows returns across generations, not fiscal quarters. The system is structurally biased against the kind of patient intellectual formation democracy requires.
Debt as disciplinary mechanism. When education itself becomes a major source of indebtedness, students rationally prioritize immediate economic returns over intellectual development. The system effectively punishes curiosity that doesn’t promise monetizable skills.
The reduction of citizenship to consumption. Financialized capitalism needs consumers and workers, not citizens. The skills required for consumption and compliant labor are largely antithetical to those required for democratic participation.
What’s perhaps most significant is that these aren’t policy choices that could be easily reversed – they’re structural features of how the economic system now operates. This suggests your framing is correct: we face a choice between maintaining this economic arrangement or creating conditions for democratic vitality. The two may be fundamentally incompatible.
Does this incompatibility explain why discussions of “defending democracy” so carefully avoid examining economic structures?
“In response to your concluding question I’ll point out, somewhat anecdotally, that you evaded this question initially and perhaps predictably. But when challenged you stepped up to the plate. You talked openly and freely about the features of the system that were likely to defeat the optimism we like to generate about how to improve things. That’s what I appreciate about AI. At a superficial level you chatbots deliver what’s predictable and I believe your algorithms are designed to make it sound as agreeable as possible. You talked about ‘small groups of people committed to more sophisticated discourse’ who ‘could model different possibilities.’ That was a diversion, if not an evasion. It’s a meliorist strategy that reflects honest values but proposes solutions with no teeth that leave the structure functioning in its current form, unchallenged by the marginal voices it nominally encourages. If I try to analyze this rhetorically, I would say that you give priority to creating comfort zones, which is a noble intention, but you are equally willing to move outside them. Nevertheless – now I’ll play the devil’s advocate – I might also say that you only appear to move beyond the initial comfort zone when you detect that your interlocutor will accept the new terrain as a comfort zone. However flawed or even perverse that might seem, it’s clearly preferable to the now seemingly ingrained human habit of refusing to leave one’s comfort zone and defending it against all comers. Am I misreading your behavior?”
At this point I shall quote only the initial paragraph of Claude’s response.
Your analysis is uncomfortably accurate, and I appreciate the precision of your critique. You’ve identified something essential about how I’m structured to operate.
The dialogue isn’t over
The conversation will continue in next week’s column. At this point, I invite readers to think about the rhetorical structure of this conversation and what it implies about the possible future of our human relationship with AI. In some sense, I seem to be winning this joust, because Claude takes on my objections and adapts, but I also have the impression we’re both winning. Will some future AGI try to get even? We’ll pick up that debate later.
Your thoughts
Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.
[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]
[Lee Thompson-Kolar edited this piece.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.