FAIR OBSERVER DEVIL'S DICTIONARY

AI Slaughterbots and the Pentagon’s “Best Practice”

The Department of Defense has announced an innovative initiative to build a new generation of artificially intelligent weaponry. Challenged by China’s ambitions in technology, the Pentagon’s commitment to “best practice” is destined to ensure a bright future for more efficient and intelligently managed means of human slaughter.

October 04, 2023 02:00 EDT

The Hill last week featured an article by Brad Dress titled “Why the Pentagon’s ‘killer robots’ are spurring major concerns.” It looks at the issues raised by a new generation of AI-powered weapons, already on its way, variously described as “slaughterbots” or “killer robots.”

At the Defense News Conference in August, US Deputy Secretary of Defense Kathleen Hicks explained the urgency of the initiative. Beijing “has spent the last 20 years building a modern military carefully crafted to blunt the operational advantages we’ve enjoyed for decades.” Interestingly, it isn’t about the fear that China will surpass the US, but that it might “blunt” its traditional sharpness.

Hicks’s blunt remarks referred to an ambitious program called the “Replicator initiative.” She reassuringly explains that the innovative program will require no new appropriations. This of course means, among other things, that it will be shielded from congressional oversight.

The Pentagon describes Replicator as “designed to produce swarms of AI-powered drones and flying or swimming craft to attack targets.” These autonomous weapons will be “connected through a computerized mainframe to synchronize and command units.” Aware that this is all about competition, Hicks described it as “a ‘game-changing’ initiative that will counter China’s growing ambitions and larger fleet of military resources.”

Hicks desperately wants to leave the impression that, for all its murderous autonomy, Replicator is not the apocalyptic beast some people imagine. We must understand that the whole program is in good hands and will succeed thanks to “American ingenuity,” an “advantage that they,” the Chinese, “can never blunt, steal, or copy.”

The author of The Hill’s article, Brad Dress, dares to demur. He reminds us that not everyone is so confident about where this may lead, citing Anna Hehir at the Future of Life Institute who says that this is “a Pandora’s box that we’re starting to see open, and it will be very hard to go back.” We learn that “human rights groups are uneasy about Washington’s ethical guidelines on AI-powered systems and whether they will offer any protection against an array of humanitarian concerns.”

Today’s Weekly Devil’s Dictionary definition:

Humanitarian concerns:

Ephemeral doubts that occasionally arise in the minds of a minority of conscientious people skilled in detecting moral risk that, in any environment sufficiently focused on security, will inevitably be dismissed as irrelevant because of an overriding economic logic that is guided by the pressing need to preserve a competitive advantage.

Contextual note

Hicks insists it’s not just about killing; it’s also about improving the Pentagon’s culture. She’s an adept of modern management theory. “This is about driving culture change just as much as technology change — and about replicating best practice just as much as products, so we can gain military advantage faster.”

A sharp observer whose human intelligence has not been blunted by Hicks’s taste for business management jargon might find it useful to understand what “military advantage” actually means and what the implications may be of making it happen “faster.” Hicks, however, wants her audience to have confidence in the Department of Defense (DoD). She insists “the initiative will remain within ethical guidelines for fully autonomous systems.”

Dress isn’t taken in by her rhetoric. He begins by focusing on the guidelines themselves. They stipulate that there should be an “‘appropriate level of human judgment’ before an AI weapon system can use force.” But then Dress cites the analysis of the Congressional Research Service. It notes that “the phrase was a ‘flexible term’ that does not apply in every situation, and that another phrase, ‘human judgment over the use of force,’ does not mean direct human control but refers to broad decisions about deployment.” Could the Pentagon, for the first time in its history, be distorting the meaning of its own words?

More worryingly, Dress also informs us that there exists “a waiver throughout the policy that appears to allow for the bypassing of the requirement for senior-level review of AI weapons before deployment.” This “flexibility” of interpretation may help to explain why Hicks has proudly announced that there will be no need for new allocations from Congress to carry out this initiative.

Although this weaponry is not quite on the nuclear scale, the conditions described eerily resemble the plot of Stanley Kubrick’s Dr. Strangelove. General Jack D. Ripper’s psychic algorithm, which revolved around “purity of essence,” was at least discoverable, albeit belatedly. AI’s algorithms are notoriously opaque.

Aware of the risk, DoD spokesperson Eric Pahon reassuringly proclaimed that there is nothing to fear. The US is the “world leader in ethical AI standards.” We can be assured that Washington will do everything in its power to make sure that no one will be in a position to blunt that advantage.

Historical note

Though the DoD still hasn’t passed an audit, even after the discovery back in 2019 of a $21 trillion shortfall over two decades of accounting, it has now apparently adopted the language, if not the skills, of modern management. Hicks cites concepts such as “culture change” and “best practice” at the same time that she bandies about jargon such as “exerting leadership focus” and “maturing solutions.”

In 1935, the decorated war hero Major General Smedley Butler wrote a book called War Is a Racket. Things have clearly evolved. Butler saw resemblances between the military’s culture of war in the first half of the 20th century and the methods of Al Capone; Hicks has chosen more respectable models for today’s military culture, like McKinsey or Harvard Business School. War is now a business — a complex business — and indeed the work of a complex itself, as President Eisenhower warned.

But Hicks may be one step behind in her modernity. The idea of “best practice” she cites became a trendy mantra of modern management around the time when Total Quality was all the rage in the 1980s. In 2005, MIT’s Sloan Management Review published an article with the title “Beyond Best Practice,” pointing out that “high-performing companies do more: They also embrace unique ‘signature processes’ that reflect their values.”

The notion of best practice turned out to be a late 20th-century version of Frederick Winslow Taylor’s notion, earlier in the century, of “Scientific Management.” It produced two generations of “efficiency experts.” Hicks, for all her proclaimed modernity, appears to identify with that culture. The DoD clearly hasn’t reached full modernity and probably will never do so, at least not until the day it manages to pass an audit.

Dress’s fairly lengthy article is worth reading to the end. It challenges the unaccountable Replicator initiative on moral grounds. Morality cannot be reduced to guidelines or even laws, however elaborate. It must be an effort to produce and manage the values that lie at the core of a notion such as signature processes. Those who practice the process leave their signature. They sign their work. It becomes a commitment, a shared commitment.

The questions Dress examines right up to the end of the article concern shared values and not just the stale management techniques Hicks appears to relish. The DoD doesn’t seem to feel any need to go “beyond best practice.” The term “signature” in signature processes conveys the idea of human accountability. The idea that such processes reflect values conveys a sense of collective responsibility.

AI, for all its capacity to imitate human thoughts and actions, cannot have a signature because it has neither a hand to produce the signature nor an organic identity to associate with it. AI is what it knows, not what it is in its environment and even less who it is within its environment. It will always be a prisoner of the data it has been fed, driven by the algorithms it is supplied with or that it may generate on its own through machine learning.

The very idea of elaborating best practices for a war machine should give us all pause. The DoD is itself a war machine that produces war machines. Its “swarms of AI-powered drones” eerily evoke Goethe’s Sorcerer’s Apprentice, whose concern for efficiency led him to produce an unwanted catastrophe.

*[In the age of Oscar Wilde and Mark Twain, another American wit, the journalist Ambrose Bierce, produced a series of satirical definitions of commonly used terms, throwing light on their hidden meanings in real discourse. Bierce eventually collected and published them as a book, The Devil’s Dictionary, in 1911. We have shamelessly appropriated his title in the interest of continuing his wholesome pedagogical effort to enlighten generations of readers of the news. Read more of Fair Observer Devil’s Dictionary.]

[Thomas Isackson edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.
