Intrigued by an item in the news recounting Mark Zuckerberg’s ambitions with AI, I decided to address the following prompt first to Perplexity, then to ChatGPT:
“Reporting on the latest act of generosity by Meta’s CEO, who is willing to dish out ‘$100M for a Top AI Engineer,’ AI thought leader Ayesha Khanna tells us why the creator of Facebook is willing to part with so much cash, not on an acquisition but on a single hire. ‘Zuckerberg’s vision is to create superintelligence: AI that surpasses human capabilities in reasoning, planning, memory, and knowledge. These systems could eventually perform any intellectual task a human can.’ She links to a Bloomberg article that explains Zuckerberg’s strategy of ‘recruiting for the secretive new team.’
As expected, it’s all about competition among the super-wealthy. ‘Meta can and should outstrip other tech companies in achieving what’s known as artificial general intelligence or AGI, the notion that machines can perform as well as humans at many tasks.’
I’d like to start a conversation with you about a couple of things. The first is the question of ‘human capabilities in reasoning, planning, memory and knowledge.’ Does this imply that those four intellectual functions define human intelligence, or intelligence tout court? The second is the much broader question of what Zuckerberg’s ambition means in social, economic and cultural terms. By that I mean, what does this tell us about the role played by money and power in today’s society, including its psychological aspects, such as the role of hubris and its possible consequences?
I think you will agree that this could lead to a wide-ranging discussion that reaches no definitive conclusion about any of the topics, but which could serve to raise specific questions and help define terms, stimulating a much wider public debate. With that in mind, could you do two things: give your overview of this challenge and then suggest a possible structure for such a debate that for the moment will take place between the two of us before being extended to a wider public?”
Here is Perplexity’s response:
“The challenge you raise involves two broad but interrelated questions: the conceptualization of human intelligence in terms of reasoning, planning, memory, and knowledge, and the socio-economic-cultural implications of Zuckerberg’s large financial investment in AI talent to pursue artificial general intelligence (AGI).
Overview of the Challenge
1. Defining Human Intelligence:
The four intellectual functions—reasoning, planning, memory, and knowledge—are indeed central components of human intelligence but do not exhaustively define it. Reasoning enables drawing inferences and making predictions, planning allows goal-directed behavior, memory supports encoding, storage, and retrieval of information, and knowledge provides the content upon which reasoning operates. However, human intelligence also includes emotional intelligence, creativity, contextual understanding, social cognition, and consciousness, which are more complex and less easily reducible to these four functions. For example, contextual reasoning and mood congruence influence how memory and knowledge are applied in real-life situations, showing that intelligence is deeply embedded in social and emotional contexts. Thus, while Zuckerberg’s focus highlights core cognitive faculties essential for AGI, human intelligence is broader and more nuanced.
2. Socio-Economic and Cultural Implications:
Zuckerberg’s willingness to allocate $100 million to recruit a top AI engineer reflects the intense competition among tech billionaires to lead the development of AGI, which promises transformative power over knowledge, economy, and society. This act of generosity is less about philanthropy and more about strategic investment in technological supremacy, signaling how money and power shape innovation trajectories. It underscores the role of wealth concentration in driving technological agendas that may prioritize dominance and control over inclusive societal benefit. Psychologically, such moves may be driven by hubris—the belief in one’s exceptional ability to shape the future—raising questions about accountability and the ethical stewardship of AI development.
This scenario invites reflection on how economic power consolidates influence over emerging technologies and how this might affect societal values, equity, and governance. It also highlights the psychological dimension of ambition and risk-taking at the highest levels of tech leadership, with potential consequences for global technological ecosystems.”
In response to my second request, Perplexity proposed the following structure for a discussion, whose outline alone I reproduce here. Each part included as many as four proposed questions, which I have not reproduced.
“Suggested Structure for a Wider Debate
To foster a rich, structured conversation that can later be extended publicly, the debate could be organized as follows:
I. Conceptual Foundations of Intelligence
II. Technological Ambitions and AGI
III. Socio-Economic and Cultural Dimensions
IV. Broader Implications and Public Engagement
V. Open Questions and Future Directions”
To the same prompt, ChatGPT offered this response:
“This is an important and timely initiative. You’re touching on two deeply entangled issues: the narrowing of intelligence to quantifiable functions and the broader social meaning of vast financial power being used to shape our technological futures. These are not simply philosophical concerns or technological ones—they are political, psychological, and cultural. They affect not only what kind of AI gets built, but who gets to define “intelligence,” and in turn, what kinds of futures are rendered possible, probable, or impossible.
Overview of the Challenge
1. Reductive Conceptions of Intelligence
The framing of intelligence in terms of reasoning, planning, memory, and knowledge reflects a highly instrumental and computational definition of the mind. These functions, while essential to both human and artificial cognition, represent only a portion of what constitutes human intelligence. Missing are:
· Emotional intelligence (empathy, intuition, social reasoning)
· Embodied intelligence (sensorimotor understanding, aesthetic sensibility)
· Moral and ethical reasoning (often not reducible to logical inference)
· Temporal awareness and narrative selfhood (identity across time, historical consciousness)
By privileging only certain cognitive functions—those most easily modeled in code—this conception implicitly defines intelligence as that which can be computed. The risk is not merely philosophical. It frames the goals of AI research and development, setting boundaries on what is pursued and what is ignored.
2. The Power Politics of Superintelligence
Zuckerberg’s gambit—offering $100M salaries to elite AI researchers—is emblematic of a trend: a few ultra-wealthy individuals and corporations claiming the right to define and direct the trajectory of civilization through private investment in frontier technologies. This raises a host of structural and ethical concerns:
· Privatization of the future: When AGI becomes a private project, public accountability is sidelined.
· Hubris and techno-solutionism: The assumption that intelligence—artificial or otherwise—can “solve” complex problems like climate change or inequality, often ignores the socio-political roots of those problems.
· The role of scarcity and competition: Scarce technical talent is being extracted into elite enclaves, driven by incentives that serve private profit, not public need.
At a psychological level, this may reflect what you rightly call hubris: the belief that one can engineer not just better tools, but a better world, by brute technological force and concentrated capital. Yet hubris traditionally invites nemesis—some form of reversal, collapse, or retribution.”
Here is the outline ChatGPT proposed, without the more specific questions.
“Proposed Structure for Our Discussion
To keep our dialogue focused yet expansive, I propose the following structure. Each section can evolve into its own branch of inquiry:
Part I. What Is Intelligence?
Part II. Who Owns Intelligence?
Part III. Cultural and Psychological Dimensions
Part IV. Society, Ethics, and Democratic Imagination”
My first impressions after comparing the two dialogues
In conversation with a real human being, we sense how accurately our interlocutor has assimilated our meaning, which is not reducible to the information we allude to but also contains strong elements of intention and expectation: emotional responses, philosophical or ideological biases and moral concerns. I’ll simply note here that ChatGPT did a better job than Perplexity at discerning my concerns.
Perplexity showed itself to be guardedly reserved and non-committal when it offered observations such as: “It underscores the role of wealth concentration in driving technological agendas that may prioritize dominance and control over inclusive societal benefit. Psychologically, such moves may be driven by hubris.”
ChatGPT is more assertive when it delves into the meaning of hubris. “At a psychological level, this may reflect what you rightly call hubris: the belief that one can engineer not just better tools, but a better world, by brute technological force and concentrated capital.” It detected my acute interest in this concept and responded in a way I found enriching.
To my surprise, Perplexity interpreted my evocation of Zuckerberg’s “generosity” literally, whereas ChatGPT not only took it on board as irony, but understood that it was a call to further probe the CEO’s true motives.
Perplexity’s timidity and ChatGPT’s boldness appear clearly in these contrasting assertions:
Perplexity: “This scenario invites reflection on how economic power consolidates influence over emerging technologies and how this might affect societal values, equity, and governance.”
ChatGPT: “Zuckerberg’s gambit—offering $100M salaries to elite AI researchers—is emblematic of a trend: a few ultra-wealthy individuals and corporations claiming the right to define and direct the trajectory of civilization through private investment in frontier technologies.”
Perplexity is doing exactly what I always hope and expect an LLM to do when it “invites reflection.” That’s a wonderful start. But ChatGPT does something that expands the horizon: It identifies a trend, the awareness of which was at the core of my initial prompt. ChatGPT’s response incited me to take a further step and enquire about evidence to support the trend it cites. Why should I suppose that its assertion about a trend is not a hallucination designed to flatter me?
As expected, in response to my new prompt, it presented a thorough case to validate and document the trend, which I will develop in another article. Interestingly, its response began by citing the history of OpenAI, the company that created and manages ChatGPT. Below is its honest (and therefore not entirely flattering) paragraph concerning the company itself.
“OpenAI’s evolution: Initially a nonprofit, OpenAI transitioned to a “capped-profit” model in 2019 and entered into a $13B+ partnership with Microsoft. The governance shift illustrates how public-interest rhetoric often gives way to private corporate control over AI’s direction.”
I’ll highlight one other contrast in the styles of Perplexity and ChatGPT by comparing the chapters of the ongoing dialogue they proposed.
| Perplexity | ChatGPT |
| --- | --- |
| I. Conceptual Foundations of Intelligence | I. What Is Intelligence? |
| II. Technological Ambitions and AGI | II. Who Owns Intelligence? |
| III. Socio-Economic and Cultural Dimensions | III. Cultural and Psychological Dimensions |
| IV. Broader Implications and Public Engagement | IV. Society, Ethics, and Democratic Imagination |
| V. Open Questions and Future Directions | |
Perplexity’s categories sound academic, abstract. In contrast, ChatGPT’s are concrete and compelling. We should suspect that AI’s “wisdom” will always remain at a significant remove from reality. But it’s refreshing to see that some chatbots have learned to avoid the academic mentality that has a bias in favor of intellectual abstraction.
On a final note, I’ll mention that when I used the same initial prompt with Anthropic’s Claude it produced a wonderfully compact and concrete response, less detailed than ChatGPT’s but perhaps even more focused. I’ll simply quote one excerpt.
“The broader social implications are even more striking. Zuckerberg’s willingness to spend $100 million on a single hire represents a new form of feudalism where technological capability becomes the basis of power concentration. This isn’t just about competition between tech companies – it’s about a small number of individuals making unilateral decisions about humanity’s technological trajectory without democratic input or oversight.”
Calling Zuckerberg’s attitude reflective of feudalism clearly extends and enriches the debate. We’ll come back to this in the next piece.
These exchanges have opened a debate about major issues concerning the economy, technology and historical trends. My aim was to initiate this discussion and invite others to participate. I encourage you to start by using the comments section below this article or write to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We are happy to build your thoughts and commentaries into our ongoing dialogue.
[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]
[Lee Thompson-Kolar edited this piece.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.
Comment
Peter, is there really nothing good to corporate centralization? Could we achieve the same or a similar kind of connectivity with other means than by giving our work and data for free to these “feudal tech overlords”?