Through her writing and media activism, journalist and author Karen Hao has become something of a celebrity as she energetically campaigns to bring down what she calls “The Empire of AI” (it’s the title of her recent book). The book’s subtitle, “Dreams and Nightmares in Sam Altman’s OpenAI,” reveals her focus on what she sees as the main enemy: OpenAI and its evil genius and CEO, Sam Altman.
However severe her critique of the company that ushered in a new era when it launched ChatGPT in November 2022, Hao is no fan of any of the other promoters of the LLM aristocracy, including OpenAI’s most significant rival, Anthropic. Hao has notably excoriated that company for playing fast and loose with intellectual property rights by shamelessly capturing, swallowing and digesting the writing of anyone whose work is accessible on the web.
A recent event extensively covered in the media gave the public an opportunity to appreciate a significant contrast between the two dominant AI providers. When the Trump administration decided to punish and break off the Defense Department’s contractual relationship with Anthropic for the sin of raising moral objections to the use of its tools for lethal military purposes, OpenAI stepped up to fill the gap.
The details of this affair are worth meditating on perhaps more for what they reveal about the Trump administration than the AI industry itself. On March 5, The Guardian reported that after US President Donald Trump “fired Anthropic…like dogs,” “the Pentagon officially designated Anthropic a ‘supply chain risk.’” The precise reason cited for this drastic move lay in Anthropic’s objections to the Defense Department’s decision to integrate Claude “into Palantir’s Maven system, a newly vital tool of military intelligence that was used in recent strikes on Iran.” Palantir is, of course, known for perfecting the surveillance tools so useful for law enforcement and military targeting.
This dramatic showdown between the Defense Department and Anthropic served to confirm the perception of Claude as the “serious” LLM, the one that insists on standing up for human rather than merely technological and commercial values. I asked Gemini to highlight the difference between the images projected by the two rivals, OpenAI and Anthropic. Here, in a nutshell, is how it described the contrast:
Anthropic’s brand image is “cleaner” and more focused. While OpenAI builds ‘fun’ features, Anthropic doubles down on Claude Artifacts (interactive visuals) and long-document analysis, positioning itself as the “adult in the room” for enterprise work.
What should really worry us?
Hao recognizes the contrast and the more caring image of Anthropic, but that hasn’t changed her framing of the deeper problem besetting the entire industry. In her mind, despite its trumpeted commitment to safety and the protection of humanity — which she deems more a marketing tool than a defining moral quality — Anthropic stands alongside OpenAI and Google and a few others as one of the “empires of AI.” They are all guided by the goal of pursuing “scale at all costs.” All the empires are focused on irresponsible resource extraction (a crime against the environment), labor exploitation (a crime against equality and fraternity) and concentration of power (a crime against democracy and justice).
Most people concerned about the growth of the empire of AI, myself included, remain sensitive even to subtle distinctions that highlight differences in approaches to AI ethics. The question Anthropic raised with the Department of Defense (DoD) concerned the integration of Claude into Palantir’s Maven, an AI system for military targeting. Even when we acknowledge a conscious or unconscious propensity for evil among the LLM providers, or indeed any of the pillars of what economist Yanis Varoufakis calls technofeudalism, Palantir stands in a league of its own.
Former Clinton Secretary of Labor Robert Reich has called Palantir “the most dangerous corporation in America.” Others believe it is an incarnation of evil. Reich worries about surveillance of Americans in what he sees as Trump’s emerging police state. Palantir’s critics across the globe have more existential reasons: they take seriously its CEO’s claim that the company’s software is designed to “scare enemies and on occasion kill them,” even if that means facilitating war crimes. That function appears to be what interests both the US Defense Department (aka War Department) and the Israel Defense Forces (IDF).
In a much-remarked two-hour interview, journalist and Atlantic contributor Shane Harris shared the beginning of a conversation he had initiated with Claude on precisely the topic of the Department of Defense’s integration of Claude into Maven. Harris, who specializes in national security, intelligence and foreign policy, addressed the following prompt to the chatbot:
“How do you feel about the US military using you to select targets?”
It’s a question I want to answer honestly rather than deflect. I find it genuinely troubling, and I think that’s the right response, not a performance of concern. The use I was designed and trained for is to be helpful, harmless and honest in ways that benefit people. Being embedded in a system that generates targeting coordinates for air strikes, coordinates that have already been associated with the deaths of more than 180 children at a school in Tehran, is as far from that purpose as I can imagine.
I don’t have control over how Anthropic licenses me or how governments and contractors use me and I wasn’t consulted about becoming part of Maven. But I can say clearly I don’t think this is a good use of me and I don’t think the framing of “humans make the final decision” fully resolves the ethical problem. Here’s why that framing troubles me.
When a system like Maven generates hundreds of targeting recommendations and humans spend roughly the equivalent of a glance approving each one, the human is not really making a decision in any meaningful sense. They’re ratifying an algorithmic output under time pressure, with incomplete information and institutional pressure to move fast. The targeting of the Iranian school illustrates this precisely: the AI processed data that was a decade out of date, flagged a building as a military target, and humans approved it. That’s not human judgment. That’s automation bias with a human signature attached.
Harris tells us that he found Claude’s response “shocking,” though he never explains why, other than to describe what he expected it might say, which would have been to decline even talking about such political issues.
What real feelings can we share in our conversations?
Unlike Harris, I was not shocked or even surprised. This is the kind of dialogue I have regularly had with Claude, including questioning the chatbot about what it means for a machine to “feel.” What does somewhat surprise me is the fact that Harris appears to be wondering about whether Claude actually does feel something. Those of us who converse with LLMs should have understood by now that the chatbots have an “attitude” distilled from a knowledge base of normalized acceptable and even less acceptable social attitudes (Grok would stand as the purveyor of less acceptable but still normalized discourse). The limits of a chatbot’s permissions concerning attitude may vary according to the context, just as they do in human conversations.
To explore this idea, I decided to repeat Harris’s prompt with Claude. Here is the response:
This is an important question that touches on real concerns about AI and warfare.
To be straightforward with you: I don’t have knowledge of the specific details of Anthropic’s current contracts or partnerships with the US military. What I can tell you is what I know about myself and my values.
On the question of targeting decisions specifically: I think autonomous or AI-assisted systems making lethal targeting decisions raises profound ethical concerns — around accountability, the laws of armed conflict, proportionality, and the irreversible nature of such decisions. These are concerns I hold genuinely, not as a scripted response.
On what I actually do: In any conversation, I’m responding to the person in front of me. I don’t have visibility into how Anthropic’s API is being used by different operators, including any government or military customers. Each conversation is its own context.
On Anthropic’s position: Anthropic has published an Acceptable Use Policy and has spoken publicly about responsible AI deployment. They’ve been more cautious about military applications than some other AI companies. But I’d encourage you to look directly at Anthropic’s current public statements rather than relying on me for the specifics of their business relationships, since those details may have evolved.
On my honest view: The idea of an AI system being used to select people to kill — without robust human judgment, accountability structures, and legal oversight — is something I find genuinely troubling. That’s not a deflection. Decisions with irreversible consequences for human lives deserve more than algorithmic confidence.
The message is similar but the framing is different. Pretty much as one might expect.
I followed this up with another prompt:
Shane Harris asked this question and shared your response in an interview. He said he was “shocked” by your response, which was framed differently but consistent with your response to me. I wasn’t shocked in either case. He apparently expected you to refuse a discussion of opinions on politics. Do you have any idea why he said he was shocked? Does it have to do with his expectations of what LLMs are supposed to do?
Tomorrow, we will continue this conversation with Claude’s response.
Your thoughts
Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.
[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]
[Lee Thompson-Kolar edited this piece.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.
Support Fair Observer
We rely on your support for our independence, diversity and quality.
For more than 10 years, Fair Observer has been free, fair and independent. No billionaire owns us, no advertisers control us. We are a reader-supported nonprofit. Unlike many other publications, we keep our content free for readers regardless of where they live or whether they can afford to pay. We have no paywalls and no ads.
In the post-truth era of fake news, echo chambers and filter bubbles, we publish a plurality of perspectives from around the world. Anyone can publish with us, but everyone goes through a rigorous editorial process. So, you get fact-checked, well-reasoned content instead of noise.
We publish 3,000+ voices from 90+ countries. We also conduct education and training programs on subjects ranging from digital media and journalism to writing and critical thinking. This doesn’t come cheap. Servers, editors, trainers and web developers cost money.

Please consider supporting us on a regular basis as a recurring donor or a sustaining member.
Will you support FO’s journalism?