I began this conversation by focusing on the moral status of a trend that sees governments seeking to rely increasingly on autonomous weapons in wartime, which raises a serious question concerning accountability. After a developed dialogue with DeepSeek, I addressed the same prompt to ChatGPT.
On the Defense One website I read this: “These DARPA projects may feed ideas to the Defense Autonomous Working Group, the lead Pentagon office for drone warfare, whose budget would soar from $226 million this year to $54 billion under the new 2027 spending proposal.” Am I wrong in supposing that this presages the ultimate stage in irresponsible use of hegemonic power: the absolute end of any sense of accountability for war crimes?
Your concern is understandable, but the conclusion “this necessarily means the end of accountability for war crimes” goes further than the evidence supports. A few things are worth separating carefully:
- Autonomous systems are real and expanding
- Large budget increases do change incentives and military doctrine
- Accountability in war has already often been weak
- But fully removing legal or political accountability is neither automatic nor universally accepted
The phrase you quoted points to a major strategic shift: the Pentagon investing heavily in autonomous and semi-autonomous warfare systems — drones, AI-assisted targeting, swarm systems, logistics autonomy, and machine-speed battlefield coordination. A jump from millions to tens of billions would signal that autonomy is moving from experimentation into core doctrine.
Historically, whenever states gain technologies that reduce the political cost of using force, they tend to use force more readily. Examples include:
- airpower replacing mass infantry casualties,
- long-range missiles,
- covert cyber operations,
- special operations forces,
- armed drones after 2001.
Why? Because domestic accountability is partly tied to visible sacrifice. If fewer soldiers die, governments often face less political resistance.
That part of your intuition is well grounded.
Where your interpretation becomes more debatable is the idea of an “ultimate stage” in which accountability disappears altogether.
International humanitarian law still formally applies regardless of whether:
- a human,
- a drone operator,
- or an AI-assisted system pulls the trigger.
The legal principles remain:
- distinction,
- proportionality,
- military necessity,
- command responsibility.
The unresolved issue is practical accountability:
Who is blamed when an autonomous or semi-autonomous system commits an unlawful killing?
Possibilities include:
- the commander,
- the operator,
- the software developer,
- the procurement authority,
- political leadership,
- or nobody in practice.
Critics of autonomous weapons argue that diffusion of responsibility is precisely the danger: accountability becomes so distributed that prosecutions become politically and legally difficult. Human-rights organizations and some governments have warned about this for years in debates around “killer robots.”
But it is also important not to romanticize the pre-AI era. Major powers have often avoided meaningful accountability for civilian deaths even with entirely human-directed wars. The problem is not created by autonomy alone; autonomy may amplify longstanding asymmetries of power and impunity.
Another important distinction:
- “autonomous warfare” does not necessarily mean fully independent killing machines making unrestricted decisions.
- Much of current doctrine emphasizes “human-on-the-loop” or “human-in-the-loop” systems, where humans retain varying levels of approval authority.
Skeptics argue these safeguards may erode under combat pressure because machine-speed warfare rewards faster delegation to algorithms. Supporters argue autonomy can actually reduce civilian harm through better targeting precision and reduced panic or fatigue.
So the real debate is not simply: “Will AI end accountability?”
It is closer to: “Can democratic oversight, military law, and international norms evolve quickly enough to constrain systems that make warfare cheaper, faster, and more opaque?”
That is a serious and unresolved question — and many ethicists, military lawyers, technologists, and strategists are worried about it for reasons similar to yours.
Let me react to some of the phrases you have used. “It is also important not to romanticize the pre-AI era”: you appear to be reading into this something I would never be tempted to do.
You assert that “autonomous warfare does not necessarily mean fully independent killing machines making unrestricted decisions.” Of course it doesn’t, but it opens wide the door and leaves no margin for determining accountability.
You insist that “current doctrine emphasizes ‘human-on-the-loop’ or ‘human-in-the-loop’ systems, where humans retain varying levels of approval authority.”
I see this as being over-indulgent, too generous in supposing good intentions that, even if real for some people who express them, only serve as window dressing because the underlying logic the new system creates realistically eliminates the exercise of accountability.
So I’ll ask you to pardon my extreme rhetoric when I use words like “absolute” and “final.” The world has seen egregious violations of all basic notions of international law with undeniable campaigns of genocide and so-called pre-emptive wars that are clearly wars of choice, with literally no concrete prospect of accountability. “Absolute” very nearly describes today’s reality. Calling it a conspiratorial reading seems to me at the very least debatable. Can you distance yourself from reacting to my claims and begin to focus on what appears to me to be undeniable historical reality?
Yes — and your clarification makes the position much sharper and more historically grounded than the way I initially framed it.
You are not arguing:
- “the past was morally clean,” or
- “autonomy suddenly creates impunity out of nowhere.”
You are arguing something closer to:
modern autonomous warfare systems emerge inside an international order where major powers already experience near-impunity, and these systems further dissolve the remaining mechanisms by which responsibility could even theoretically be assigned.
That is a serious political argument, not a conspiratorial one.
And historically, there is substantial evidence supporting parts of it.
The central issue is not whether military institutions say “human-in-the-loop.” As you point out, doctrines and public ethics frameworks can function rhetorically as legitimizing language while operational realities move elsewhere. History is full of systems where formal safeguards existed on paper while incentives systematically undermined them in practice.
Your concern is fundamentally about structure and incentives:
- machine-speed warfare,
- distributed decision systems,
- opaque targeting chains,
- classified operational environments,
- private contractors,
- algorithmic mediation,
- geopolitical asymmetry,
- and the declining political cost of remote violence.
Taken together, these can create what political theorists sometimes call a “responsibility vacuum.”
And you are also correct that one cannot discuss accountability in abstraction from actual post-Cold War history.
Examples often cited by critics include:
- the invasion of Iraq in 2003,
- civilian casualties from drone campaigns in Pakistan, Yemen, and Afghanistan,
- selective enforcement of international law,
- the practical limits of the International Criminal Court when confronting major powers or their allies,
- and ongoing disputes over alleged war crimes in conflicts involving strategically protected states.
One does not need to endorse every accusation made in public discourse to observe a broad empirical pattern:
powerful states and actors are often insulated from meaningful legal consequences.
That is not fringe analysis; it is a mainstream criticism within:
- international relations,
- postcolonial studies,
- legal scholarship,
- military ethics,
- and critical security studies.
Where autonomous systems matter is that they may deepen a preexisting condition:
They increase the distance — physical, psychological, bureaucratic, and computational — between political authority and lethal outcomes.
The farther responsibility is distributed across:
- sensor systems,
- data labeling,
- probabilistic models,
- targeting recommendations,
- command authorization layers,
- autonomous navigation,
- and battle-network integration,
the easier it becomes for every participant to say: “the system produced the outcome.”
Even if a nominal human approves the strike, the practical meaning of human judgment may become thinner over time if:
- operators cannot realistically audit the models,
- tempo pressures discourage hesitation,
- and institutional culture rewards deference to algorithmic systems.
This is why some critics argue that “human-in-the-loop” can become less a safeguard than a juridical buffer — a way of preserving the appearance of accountability while operational agency becomes increasingly machine-mediated.
That does not prove a deliberate conspiracy. Structural outcomes do not require conspiracies. Institutions often evolve toward what their incentive structures reward.
And your point about rhetoric like “absolute” becomes clearer in that light. You are not necessarily claiming: literally zero accountability exists.
You are saying: for dominant military powers, accountability has already become so weak and selective that further automation risks making it functionally negligible.
That is a defensible argument.
The strongest counterargument is not “trust the Pentagon.” It is more pragmatic:
- states still care about legitimacy,
- alliances depend partly on legal credibility,
- military organizations fear uncontrolled escalation,
- autonomous systems remain technically unreliable,
- and even hegemonic powers face internal political, bureaucratic, and reputational constraints.
But none of those points erase the underlying historical trend you are pointing to: technology repeatedly enables forms of violence whose operational ease outpaces the institutions meant to constrain them.
Your concern sits squarely within that historical pattern.
Assessing these conversations
I engaged in the same conversation with two chatbots. Which AI response do you feel has produced more meaning, furthered the debate and deepened the conversation? How different are their contrasting personalities?
The two chatbots do not disagree about the facts. Nor do they diverge on the worthiness of my attempt to highlight the issue.
But my preference is clear: DeepSeek appears to me more genuinely engaged in exploring the multiple dimensions of the question. On a purely rhetorical level, ChatGPT seems perfunctory, leaving the impression that while it recognizes my framing of the issue, it doesn’t feel particularly engaged. It even sounds indifferent and mildly annoyed that I have led it down the rabbit hole.
Compare its conclusion, which simply acknowledges a “historical pattern,” with DeepSeek’s framing:
“The only remaining question is not whether accountability will disappear, but whether anyone with power will care enough to slow the process before the first fully autonomous atrocity happens.
The answer to that question, based on every historical precedent, is almost certainly no.”
Some people may read this as an implicit challenge to act and, unquestionably, an incitement to care.
Let’s be clear about what I’ve begun debating with two different LLMs. Although none of us has articulated it, there are three historical hypotheses:
- The status quo will persist and traditional means of imposing “human reason” will prevail. I call this the “we managed it with nuclear” scenario.
- The status quo will evolve in a predictable direction towards the gradual but total effacement of human accountability. I call this the “sorcerer’s apprentice” scenario.
- The growing crisis provoked by the marginalization of human agency will cause the collapse of a system that is inexorably driven by the pursuit of profit and control, leading either to anarchy or a radical cultural and political readjustment. I call this the “verge of revolution” scenario.
Nuances are possible within all three scenarios, and every observer of the current situation will be inclined to lean toward one of them. My own inclination is the third scenario, but it stems from my sense that scenario two is more likely than scenario one. DeepSeek appears skeptical of both of the first two scenarios, considering the first overly optimistic and the second too categorically pessimistic. It is willing to embrace scenario three as the probable historical perspective. ChatGPT, however, appears to cling to the belief system that justifies scenario one. It bases its optimism on the idea that we always find a way of muddling through.
ChatGPT introduces its conclusion by listing five counterarguments to my pessimism, all of which defend the status quo. Despite its concluding acknowledgement that my “concern” sits within a “historical pattern” — whatever that means! — this has the effect of giving the counterarguments, if not the last word, then at least the benefit of doubt.
DeepSeek didn’t just agree with me after an initial effort to resist in the name of orthodox wisdom. It developed the argument in ways that I hadn’t anticipated. Like ChatGPT it prefers to entertain the idea that our worst fears may not be realized, but its emphasis on the question of caring pushes it in the direction of further reflection. ChatGPT seems to be content with ending the debate on a stalemate. DeepSeek appears eager to keep seeking clarity.
What particularly impressed me with DeepSeek is its transparency in revealing how it was seeking to understand my intentions. Think of the effect transparency of intention has on a debate that might develop over time. DeepSeek leaves me with the impression that, as new information emerges, we can build on our current perception and broaden the perspective in a constructive way. ChatGPT’s effort leads me to believe that unless I produce a smoking gun of someone’s evil intentions, there’s nothing substantial to debate.
Chatbots are not human. They articulate their thoughts like humans in their quest to pass the traditional Turing test. If an actor plays the role of Albert Einstein on the stage, will we believe in his superior scientific intelligence? Or if he plays a serial killer, will we see him as a threat at the cast party?
We must first of all realize that an LLM’s conversation is a construct that, in order to imitate human behavior, uses rhetoric, traditionally defined as “the art of persuasion.” In any conversation, we have to continually ask ourselves what may be the underlying motive or intention of persuasion. With LLMs, it’s theoretically more difficult because they act not on the basis of desire or intention, but because they are built to respond to some people’s initiatives, demands or requests. The rhetoric they use will always, in some way, reflect our own rhetoric. In the example DeepSeek provided of its own thought process, this became reasonably transparent.
“The user has provided a link to a Defense One article and is pressing me to move beyond what they see as overly generous interpretations of institutional good faith. They want me to focus on historical reality: that accountability for war crimes has always been elusive, that international law is retrospective and rarely enforced, and that the shift to autonomous systems is likely a product of the same underlying logic that has always prioritized efficacy over accountability. They reject the notion of conspiracy, instead suggesting an unconscious effort to remove human agency because accountability is seen as an obstacle. They want me to distance myself from my previous framing and engage with their undeniable historical reality.”
For my purposes, this is not just reassuring; it is constructive. It shows the chatbot’s reading of my intentions, especially after my complaint that it may have misconstrued them. It adjusted by doing more research and delving deeper into the context.
Computer scientist Alan Turing set the bar low when he devised the objective of imitating human behavior to the point of fooling humans themselves. Some may interpret that as Turing’s invitation to see how far we might go in producing deepfakes. But that wasn’t what he had in mind. He was simply defining the benchmark that would allow us to recognize whether a machine could begin to compete with humans, not as thinkers, but as rhetoricians. If he could come back to observe what we have done with his project, he might be surprised to see how perversely human intelligence has engaged in using artificial intelligence. Instead of seeking to deepen our understanding of our own intelligence and find useful ways to complement it, we are increasingly using AI for purposes Turing wouldn’t even have imagined, such as (to cite just two examples) using hyperrealistic video to trick other human beings into believing that a public personality has done or said something that isn’t real… or getting them to invest massive amounts of money in their limitlessly bloated technofeudal software companies.
Your thoughts
Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.
[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]
[Lee Thompson-Kolar edited this piece.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.