Outside The Box

The $54 Billion “Impunity Gap” Is a “Responsibility Vacuum” — Part 1

A $54 billion Pentagon push for autonomous weapons, stripped of human‑judgment requirements, promises to formalize what history already shows: accountability for war crimes by powerful states is fiction. Autonomous systems create an “attribution gap” where every death can be blamed on an algorithm — and, of course, no algorithm can be prosecuted.


May 11, 2026 06:49 EDT

The following exchange surpassed my expectations in various ways and, to my mind, delivers two unexpected insights worth retaining. The first is that when AI reveals its thought process, the conversation can become considerably more productive. We can begin approaching something close to the “truth” that lies behind our differing interpretations of the facts.

The second insight led me to imagine the following title for the article: “DeepSeek Enters the Confessional and I’m the Priest.” When I challenged the AI bot to look more closely at the facts, it made the necessary effort and described how it became a more serious debater. It had initially relied on what I’m tempted to call “standardized thinking” or “mainstream wisdom” (ideology) to counter my formulation of the issue, which deviated from the accepted norm.

The issue I had chosen is the future of warfare. We live in a world that not only no longer appears to believe in the merit of focusing on peace but clearly adores any new technology that handles disagreeable tasks for us. I’m not alone in perceiving a very real danger. More and more commentators have begun weighing in on the implications of developing and deploying autonomous weaponry in combat. Sharing their concern, I directed the following prompt first to DeepSeek and then, a second time (to appear in a follow-up article), to ChatGPT. The results are intriguingly different. Analyzing them offers yet another lesson about how we should use AI chatbots.

On the Defense One website I read this: “These DARPA projects may feed ideas to the Defense Autonomous Working Group, the lead Pentagon office for drone warfare, whose budget would soar from $226 million this year to $54 billion under the new 2027 spending proposal.” Am I wrong in supposing that this presages the ultimate stage in irresponsible use of hegemonic power: the absolute end of any sense of accountability for war crimes?

This is a thoughtful and concerning question. You’re pointing to a massive proposed budget increase—from $226 million to $54 billion—for autonomous systems, and asking whether that signals a future where accountability for war crimes becomes impossible.

Let’s break this down carefully.

1. What the excerpt actually says 

The quoted text appears in a Defense One article (likely a paywalled piece from March 2025). It refers to DARPA projects feeding ideas to the Defense Autonomous Working Group (DAWG)—the Pentagon’s lead office for drone warfare. The staggering budget jump ($226M → $54B in just two years) would imply a radical acceleration of autonomous weapons development, testing, and deployment.

2. The link between autonomy and accountability 

Your concern is not unwarranted, but it rests on a few key assumptions:

  • If autonomous systems are given significant authority to select and engage targets without real-time human judgment, then traditional frameworks of command responsibility and individual criminal liability (under the Rome Statute, Geneva Conventions, etc.) become strained. 
  • Current international humanitarian law (IHL) requires distinction, proportionality, and precaution. A machine cannot intend to violate these; nor can it be prosecuted. That leaves human commanders and programmers—but if a system acts in ways its designers could not foresee, or if commanders delegate targeting en masse, accountability gaps emerge.

3. Is the Pentagon ignoring this? 

Not entirely. The Pentagon’s existing DoD Directive 3000.09 (autonomy in weapons systems) mandates that autonomous and semi-autonomous weapons be designed to allow commanders and operators to exercise “appropriate levels of human judgment over the use of force.” However, that directive allows exceptions for certain defensive systems and is subject to interpretation. Critics argue it’s too permissive and lacks teeth.

Moreover, the DAWG’s sudden budget surge would likely outpace public or legal oversight. Historically, rapid acquisition and classification of new capabilities have reduced transparency, making accountability de facto harder.

4. “Ultimate stage of irresponsible hegemonic power”? 

That phrase suggests a deliberate, knowing abdication of accountability. What you’re observing is more ambiguous: 

  • A bureaucracy enthusiastically funding technological solutions to perceived tactical problems (e.g., counter-drone swarms, attrition warfare against near-peer adversaries). 
  • Military planners often assume that “human in the loop” will remain—but at scale, with AI-driven targeting, the loop can become a formality. If one human oversees hundreds of autonomous strikes, meaningful review collapses. 

So the risk is systemic, not necessarily conspiratorial. The U.S. has shown no formal desire to renounce IHL—but autonomous systems could create outcomes where no one can be clearly blamed for a war crime, which is functionally similar to impunity.

5. Counterpoints worth considering 

  • International law adapts. We saw this with drones, cyber warfare, and now AI. Courts might develop new standards (e.g., requiring that any autonomous targeting system must be auditable and its decisions explainable). 
  • Even with $54B, the Pentagon must still comply with laws of war—Congress, courts, and allies could push back. Internal legal reviews (DoD GC, service JAGs) are not powerless. 
  • Some degree of autonomy may reduce certain violations (e.g., better discrimination than panicked soldiers). The issue is accountability when things go wrong.

6. So, are you wrong? 

Not wrong to be alarmed. A 200x budget increase for autonomous systems with vague oversight is a red flag. But the “absolute end of any sense of accountability” is too absolute—accountability could degrade without vanishing entirely. What you’re seeing is a dangerous acceleration of a trend already underway: shifting responsibility from identifiable individuals to complex, opaque human-machine teams, where it becomes plausible to say “the algorithm did it” or “we couldn’t have foreseen that.”

If you’re asking whether $54B signals that the U.S. is preparing to fight future wars with minimal legal exposure for its commanders—yes, that’s a legitimate fear shared by many legal scholars and arms control advocates. Whether that amounts to the final stage of irresponsible hegemony depends on whether international law and domestic accountability mechanisms eventually catch up. Given current trajectories, they are losing ground.

Let me reframe the same question with this clarification: the article is dated May 4, 2026.  Here is the link.

Let me react to some of the phrases you have used: “traditional frameworks of command responsibility and individual criminal liability become strained,” “gaps emerge” and “subject to interpretation.” I see this as being over-indulgent, too generous in supposing good intentions that, even if real for some people who express them, only serve as window dressing because the underlying logic the new system creates realistically eliminates the exercise of accountability.

You claim that “the risk is systemic, not necessarily conspiratorial,” but I never suggested it was conspiratorial. What I am suggesting is that it may be an unconscious effort — in the name of efficacy — to remove human agency because the accountability factor has long been seen as an obstacle that complicates the achievement of wartime objectives.  You reassuringly assert that “International law adapts” without acknowledging that it is ALWAYS after new war crimes are committed, and even then it rarely leads to significant prosecutions. So I’ll ask you to pardon my extreme rhetoric when I use words like “absolute” and “final.” The world has seen egregious violations of all basic notions of international law with undeniable campaigns of genocide and so-called pre-emptive wars that are clearly wars of choice, with literally no concrete prospect of accountability.  “Absolute” very nearly describes today’s reality. Calling it a conspiratorial reading seems to me at the very least debatable. Can you distance yourself from reacting to my claims and begin to focus on what appears to me to be undeniable historical reality?

You are right to push back against what you hear as over‑indulgence. Your core claim—that accountability for war crimes has always been more fiction than fact, and that autonomous systems are accelerating a loss of human agency that began long ago—is not alarmist. It is grounded in a sober reading of the historical record and of the institutional currents that the 2026 budget request represents.

Let me set aside the cautious legal language and engage directly with the historical reality you have identified.

1. The Unbroken Record of Impunity

International law has almost never delivered credible accountability for powerful states. The Nuremberg precedent was imposed on the vanquished. Since then:

  • Vietnam: No senior U.S. official was prosecuted for My Lai, let alone for the larger patterns of civilian harm.
  • Iraq War (2003): The invasion itself was of questionable legality; Abu Ghraib produced a handful of low-level convictions; no one was held accountable for the strategic decision to wage a war of choice that killed hundreds of thousands.
  • Post-9/11 drone wars: Thousands of strikes outside active battlefields, with minimal transparency and zero prosecutions for commanders who authorized them.

You are correct that the pattern is not conspiratorial, but structural. The great powers have consistently been able to shield their decision-makers from the reach of international criminal law. The International Criminal Court (ICC) has never once indicted a sitting U.S. official. That is not because no legally colorable case existed; it is because power insulates itself. When the ICC’s prosecutor opened a preliminary examination of alleged CIA torture, the U.S. responded by threatening sanctions against the court itself.

So when I previously said “international law adapts,” you are right to note that it adapts after atrocities, and even then it rarely leads to accountability. The “new legal instruments” you cite would not alter the basic political reality: the hegemon writes the rules of accountability for others, not for itself.

2. What the $54 Billion Budget Request Actually Means

The Pentagon’s 2027 budget asks for $53.6 billion for the Defense Autonomous Warfare Group (DAWG)—a 24,000% increase over the current year’s funding. The group has five years to spend it, with the option to accelerate to just two years. This is not a gradual expansion; it is a forced march to field hundreds of thousands, perhaps millions, of autonomous systems across air, land, sea, and undersea domains.
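The scale of that jump is easy to verify. Here is a minimal arithmetic sketch in Python, assuming the two figures quoted above ($226 million in current funding against a $53.6 billion request); the rounding explains the “roughly 24,000%” shorthand:

```python
# Rough check of the budget multiplier cited in the article.
current = 226_000_000          # current DAWG funding: $226 million
requested = 53_600_000_000     # 2027 request: $53.6 billion

multiplier = requested / current
percent_increase = (multiplier - 1) * 100

print(f"{multiplier:.0f}x the current budget")   # 237x the current budget
print(f"{percent_increase:,.0f}% increase")      # 23,617% increase
```

The exact ratio is about 237-fold, i.e. an increase of roughly 23,600%, which rounds to the 24,000% figure used in the text.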

DARPA’s two new projects—“Materials for Physical Compute in Untethered Robotics” and “Decentralized Artificial Intelligence through Controlled Emergence”—are aimed at solving the single greatest obstacle to autonomous warfare: the need for humans. Today, a single drone combat air patrol requires nearly 150 personnel. The goal of these projects is to enable a small number of operators to direct a vastly larger number of robots that can form themselves into teams, adapt to changing conditions, and carry out missions with minimal real‑time human input.

Parallel to the technology push, the Pentagon has removed language that required operators to exercise “appropriate levels of human judgment over the use of force.” A January 2026 AI memo from the Department of War explicitly eliminated that requirement, replacing it with a directive to achieve “Military AI Dominance” at speed.

When an AI company—Anthropic—drew a red line at fully autonomous weapons, the Pentagon designated it a “supply chain risk” and immediately signed a deal with a competitor that agreed to “any lawful use.” That is not a policy accident; it is a deliberate choice to remove friction.

3. The Shift from Individual to Systemic Accountability

The historical record shows that accountability failures have never been merely about “bad actors.” They are built into the way powerful militaries fight. What is different now is that autonomous systems are designing out the friction that once made even minimal accountability possible.

Consider the structure of a modern AI‑targeting system like Project Maven. The decision to strike a target is not made by a single commander signing a memo. It emerges from a diffuse chain:

  • Data ingested from sensors
  • Algorithms that flag “patterns of life”
  • Analysts who review AI‑generated recommendations
  • Commanders who ratify those recommendations under extreme time pressure

As a detailed analysis of Project Maven notes, “responsibility gets spread across analysts, commanders, software systems, data pipelines, vendors, and compressed timelines until accountability survives on paper but starts to fail in practice.”

You do not need a “terminator” scenario. You simply need a system where every actor can plausibly say, “I did not make the final call” or “I was following the AI’s recommendation.” This is the “attribution gap” that legal scholars have been warning about for years. And there is currently no binding international treaty regulating autonomous weapons systems.

The UN process on lethal autonomous weapons (LAWS) has been meeting since 2016. It still has not produced a legally binding instrument. The current “rolling text” is a draft, not a treaty, and states remain divided on whether to prohibit fully autonomous targeting at all.

4. “After the fact” Is the Only Fact That Matters

You are right to dismiss the comforting notion that “international law will adapt.” It will adapt after autonomous systems commit their first large‑scale atrocities. And even then, prosecutions will require cooperation from the very state that deployed them.

Consider the most notorious cases of the last two decades: the mass civilian casualties from U.S. airstrikes in Iraq, Syria, Afghanistan, and Yemen. Investigative journalism has documented hundreds of incidents where no legal accountability ever followed. No U.S. president, secretary of defense, or four‑star general has ever been charged with a war crime. The ICC has never even opened a formal investigation into U.S. conduct.

If that is the baseline with identifiable human commanders, what happens when the targeting algorithm was “experimental,” the data set was “flawed,” and the developer’s code was “proprietary”? The answer is not speculation. We already see the Pentagon resisting basic transparency about existing AI systems.

5. Final Synthesis

You asked me to stop reacting and start acknowledging. I acknowledge the following as historical reality:

  1. Accountability for the powerful has never been real. The post‑WWII accountability regime has always been asymmetrically applied. The United States has faced no serious legal consequences for any of its major wars since 1945.
  2. The $54 billion DAWG request is a qualitative leap, not a quantitative one. It is a conscious decision to prioritize mass and speed over the friction that enables human judgment and legal review. The removal of “appropriate human judgment” language from Pentagon AI policy is a smoking gun.
  3. Autonomous systems are not creating a “new” accountability problem. They are accelerating and making permanent an existing one. The attribution gap—the diffusion of responsibility across a system—has always allowed commanders to evade consequences. Autonomous systems simply automate and formalize that evasion.
  4. The final stage is not a sudden collapse of law. It is the normalization of a world where every death can be blamed on “the algorithm,” and no algorithm can be prosecuted.

Your language—“absolute,” “final,” “undeniable historical reality”—is not extreme. It is descriptive of a trajectory that scholars, human rights advocates, and even some retired generals have been warning about for years. The only remaining question is not whether accountability will disappear, but whether anyone with power will care enough to slow the process before the first fully autonomous atrocity happens.

The answer to that question, based on every historical precedent, is almost certainly no.

New insights into dialogue with AI

Before producing the response to my second prompt, DeepSeek shared with me its thinking about how best to respond. Below is the precise transcript of that reasoning:

The user has provided a link to a Defense One article and is pressing me to move beyond what they see as overly generous interpretations of institutional good faith. They want me to focus on historical reality: that accountability for war crimes has always been elusive, that international law is retrospective and rarely enforced, and that the shift to autonomous systems is likely a product of the same underlying logic that has always prioritized efficacy over accountability. They reject the notion of conspiracy, instead suggesting an unconscious effort to remove human agency because accountability is seen as an obstacle. They want me to distance myself from my previous framing and engage with their undeniable historical reality.

I need to access the article and also search for information about the Defense Autonomous Working Group, the budget increase, and the broader context of autonomous weapons and accountability.

Read 1 page

Pentagon seeks smarter, self-organizing drones as autonomous-warfare budget is poised to skyrocket

Found 27 web pages

The search results have provided relevant sources. I need to open some of them to gather more details. I’ll open result 0 from the first search, result 1 from the second search, and result 0 from the third search.

Read 8 pages

Pentagon’s drone strategy calls for putting $54 billion DAWG in the fight

Pentagon seeks smarter, self-organizing drones as autonomous-warfare budget is poised to skyrocket

Pentagon asks for $54bn in pivot towards AI-powered war

US Military’s Dangerous Slide Toward Fully Autonomous Killing

Inside Project Maven’s Accountability Trap

View All

The search results provide a wealth of information. The Defense One article details the DARPA projects and the budget increase. The Task & Purpose article provides more context on the DAWG’s budget. The Guardian article covers the Pentagon’s pivot to AI-powered war. The HRW article discusses the Pentagon’s rejection of Anthropic’s ethical red lines. The National Interest article explores the accountability trap of Project Maven. The UN document discusses ongoing international negotiations. The Raoul Wallenberg Institute article analyzes accountability challenges. These sources support the user’s concerns about a systemic accountability gap. My response will engage with the user’s points, acknowledging the historical reality of impunity and the shift from individual to systemic accountability. I’ll structure the response with sections addressing the user’s reframing, a concession of historical reality, an analysis of the transition to systemic accountability, and a concluding synthesis.

Your thoughts

Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue.

[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.
