Science & Technology

We Need to Reduce Killer Military AI Bias Immediately

Artificial intelligence-driven military decision-making systems use algorithms to analyze data and assist military leaders in making decisions. Flawed data curation and human use can embed and amplify bias in these systems, endangering potentially innocent people the AI targets. We must correct these biases throughout the systems’ life cycles.
September 24, 2024 05:02 EDT

Observers are increasingly sounding the alarm about artificial intelligence-driven military decision-making systems (AIMDS). AIMDS are instruments that employ AI methods to evaluate data, offer practical suggestions and help decision-makers resolve semi-structured and unstructured military tasks. The increased use of these systems raises questions about the possibility of algorithmic bias — the application of an algorithm that exacerbates pre-existing disparities in socioeconomic status, ethnic background, race, religion, gender, sexual orientation or disability.

In 2023, the Summit on Responsible Artificial Intelligence in the Military Domain highlighted the need for military personnel to consider potential biases in data. While this is an excellent place to start, bias is a much broader phenomenon than just biased data.

Prejudice is a political and empirical phenomenon that affects some groups of people more negatively than others. As such, it significantly influences decision-making processes that integrate AI technology. This is why a merely technical understanding of bias understates its significance.

International humanitarian law expressly prohibits adverse distinction — military practices that discriminate based on color, religion, race, sex, language, national or social origin, political opinion or status, wealth, birth or any other similar criteria, as well as degrading practices such as apartheid. Yet these very distinctions often define algorithmic biases. Because AI systems interpret the data they are given, they remain embedded in social structures and in society at large.

Understanding the extent of this problem helps one consider how algorithmic bias manifests across a system’s lifespan, from pre-development to repurposing and retirement. Our examination of bias focuses on four phases of the AIMDS life cycle: data set curation, design and development, usage and post-use review. We begin by outlining the fundamental instances of bias at each of these four phases before analyzing the issues that stem from bias, specifically regarding AIMDS and military use-of-force judgments. Since most current use cases occur at the tactical and operational levels, we take examples from decision-making processes involving the use of force.

Bias-induced databases

Data bias is arguably the best-documented form, with numerous studies recognizing both explicit and implicit versions. Pre-existing bias is ingrained in data sets as well as in social structures, behaviors and attitudes. Before providing training data to an algorithm, developers set statistical assumptions, such as the assumption that a particular category or identity group within a population is more likely to pose a threat, which may be morally, ethically or legally objectionable. Yet raw data can only become relevant information through this very process of data set curation.

There is a lack of transparency regarding these data sets and the assumptions they convey, particularly in the military sphere. Bias is introduced by over- or under-representing specific data points, which can be challenging to detect and mitigate. For example, facial recognition systems are known to misidentify darker-skinned individuals more frequently than lighter-skinned ones due to various types of sampling bias.
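One way such sampling-related disparities surface is in per-group error rates measured during evaluation. The short sketch below shows the basic arithmetic of such an audit; the group labels and the handful of results are invented purely for illustration and do not come from any real system.

```python
# Hypothetical audit of misidentification rates by group. The records are
# invented; a real evaluation would use a large labeled benchmark.
from collections import defaultdict

# (group, was_misidentified) pairs from an imagined evaluation run.
results = [
    ("lighter_skin", False), ("lighter_skin", False), ("lighter_skin", True),
    ("lighter_skin", False), ("darker_skin", True), ("darker_skin", False),
    ("darker_skin", True), ("darker_skin", True),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, misidentified in results:
    totals[group] += 1
    errors[group] += int(misidentified)

# If one group's rate is consistently higher, the training sample likely
# under-represents that group or labels it less accurately.
for group in totals:
    print(f"{group}: misidentification rate = {errors[group] / totals[group]:.0%}")
```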

Furthermore, developers can introduce bias into a system during data selection, gathering and preparation. This includes pre-processing, the step of preparing a data collection for training by eliminating supposedly irrelevant data points; pre-processing therefore runs the risk of adding bias to the data. An algorithm is generally only as good as the data it consumes, and improper data collection, storage and use can produce unfair results.
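As a simplified illustration of how a routine cleaning step can quietly skew whom the data represents, consider the following sketch. The field names, records and filtering rule are hypothetical stand-ins for far larger and messier military data sets.

```python
# A hypothetical pre-processing step that drops "irrelevant" records and,
# as a side effect, under-samples one group. All fields are invented.
from collections import Counter

raw_records = [
    {"group": "A", "sensor_quality": "high", "label": "no_threat"},
    {"group": "A", "sensor_quality": "high", "label": "threat"},
    {"group": "B", "sensor_quality": "low",  "label": "no_threat"},
    {"group": "B", "sensor_quality": "low",  "label": "no_threat"},
    {"group": "B", "sensor_quality": "high", "label": "threat"},
]

# Cleaning rule: discard low-quality sensor readings as noise. Because
# low-quality sensors happen to cover group B more often, the rule
# silently shrinks group B's share of the training data.
cleaned = [r for r in raw_records if r["sensor_quality"] == "high"]

print("Before cleaning:", Counter(r["group"] for r in raw_records))
print("After cleaning: ", Counter(r["group"] for r in cleaned))
```

Note that the cleaning rule never mentions group membership; the skew arrives indirectly, which is precisely what makes it hard to spot.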

The creation of targeted kill lists with AIMDS is particularly troubling, since the procedure depends on data inputs consistent with prevailing societal prejudices. The data carries labels, such as traits believed to indicate terrorism suspects, and such traits most likely encode both unconscious and explicit pre-existing bias, including racial and identity stereotypes. The development of an AIMDS, for instance, might be predicated on the biased premise that any pious Muslim is radical, given that prevailing conceptions of counterterrorism are inextricably linked to racial and ethnic profiling.

Bias-induced models

Decisions and procedures made throughout the design and development phase can intensify data bias. At this stage of the life cycle, pre-existing biases combine with technical bias originating from technical limitations or design choices. This bias arises both in the internal, frequently opaque processes of neural network systems and in human data processing.

The iterative process of data annotation, labeling, classification and output evaluation throughout the training phase is a helpful illustration of human-steered processes. Human cognitive biases, many of them unconscious, surface in these tasks. More fundamentally, bias may also arise from forcing human and societal categories into forms amenable to computer processing. In this sense, AI algorithms may themselves promote prejudice. For instance, categories may be programmed too coarsely; coupled with significant variation in the data set, this can prevent the AI model from identifying pertinent trends.

Moreover, the indeterminate nature of neural network processing may introduce additional biases, thereby exacerbating pre-existing biases in the data sets. An AI algorithm may display reduced identification rates for classes of data points that occur less frequently in the data collection, as in the case of class disparity bias (CDB). This well-known bias can be actively mitigated by adding synthetic data to the data set.
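To make class disparity bias concrete, here is a minimal sketch of an imbalanced training set and one simple mitigation: resampling the rare class before training. The class names and counts are assumptions for illustration, and random duplication stands in for the genuinely synthetic data generation (such as interpolation-based methods) that production pipelines would more likely use.

```python
# Illustration of class disparity bias and a naive mitigation: resampling
# the rare class so the model sees it as often as the common one.
import random
from collections import Counter

random.seed(0)

# Hypothetical training set: the rare class has too few examples for a
# model to learn reliable patterns about it.
dataset = [("common_event", i) for i in range(95)] + [("rare_event", i) for i in range(5)]
print("Original class counts:", Counter(label for label, _ in dataset))

# Random oversampling of the minority class. Real pipelines often add
# genuinely synthetic points instead of exact duplicates.
minority = [row for row in dataset if row[0] == "rare_event"]
augmented = dataset + [random.choice(minority) for _ in range(95 - len(minority))]
print("Balanced class counts:", Counter(label for label, _ in augmented))
```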

Over-programming and CDB are two forms of bias particularly pertinent to AIMDS. Situations that demand military decisions are ambiguous and marked by turmoil. In these cases, an AIMDS risks applying the wrong categories to a scenario or having too few points of comparison to form meaningful categories. One specific issue that has been identified is the shortage of suitable training data, in both quality and quantity, for many military decision-making scenarios.

Developers must assess the cultural, religious, racial and identity biases that affect the decisions they and the system make. AIMDS are designed specifically to recognize particular groups of people. Notably, when the US Project Maven was developed to support data-labeling operations for the D-ISIS (Defeat-ISIS) campaign, its creators had specific identities or groups of people in mind. Many doubt that such a system can reliably identify the correct targets. It is essential to consider how many kinds of bias may influence the development and design of these systems, especially when human targets are involved.

Bias-induced application

At the point of use, emergent bias combines with the pre-existing, technically ingrained bias in an AIMDS. Emergent bias stems from the ways specific users engage with AI decision support systems (DSS) in specific use cases. Deploying AIMDS in a use-of-force environment necessitates value-based sensemaking among military strategic, operational and tactical decision-makers — all of whom may imbue the system’s outputs with their own value judgments.

Automation bias is a well-known type of bias that develops during this usage phase. It describes human users’ uncritical faith in the results generated by an AI DSS. This faith can reinforce algorithmic bias by lending credibility to judgments that might otherwise have been questioned if made by people alone, since a computer is assumed to be more dependable and trustworthy. Furthermore, bias in an AIMDS can be self-reinforcing, creating a cycle in which the system generates more bias the longer it goes uncorrected. For example, if a system often flags individuals of a specific gender and physical appearance as potential threats, it may perpetuate its bias by treating everyone in a neighborhood who fits those traits as a threat actor.

The system then perpetuates its prejudice rather than correcting it, particularly when decision-makers fail to recognize the bias promptly. In military use-of-force decision-making, AIMDS may be designed to expand the pool of possible targets, a function analogous to algorithms built for commercial settings. Such systems may begin by identifying a small number of potential threat actors and then grow that number by associating and linking ever more individuals. AIMDS may thus continue to learn and receive training from human users even while in use.

This process can initiate the learning of new biases and the reinforcement of pre-existing ones. When people engage with the final product, analyze the data and provide feedback to the system, bias can potentially re-enter it.
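The self-reinforcing dynamic described above can be shown with a toy simulation. Every number in it, including the group names, the reviewers’ acceptance rate, the feedback strength and the initial skew, is an invented assumption; the point is only that uncorrected feedback steadily raises the flag rate for the group the data was already biased against.

```python
# Toy simulation of a bias feedback loop: flags that reviewers accept
# uncritically (automation bias) are fed back as "confirmed" training
# signal, nudging the next round's flag rate upward for the same group.
flag_rate = {"group_A": 0.10, "group_B": 0.10}   # both groups start equal
data_skew = {"group_A": 0.00, "group_B": 0.05}   # small bias inherited from the data
human_acceptance = 0.95                          # reviewers rarely override the system
feedback_strength = 0.5                          # how strongly feedback shifts the model

for round_number in range(1, 6):
    for group in flag_rate:
        flagged = flag_rate[group] + data_skew[group]           # model output this round
        confirmed = flagged * human_acceptance                  # accepted flags become "ground truth"
        flag_rate[group] += feedback_strength * (confirmed - flag_rate[group])
    print(round_number, {g: round(r, 3) for g, r in flag_rate.items()})
```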

Essential questions must be asked: Who is engaged in this process? How is it monitored, and by whom? Continuous learning algorithms are appealing for military decision-making because of their flexibility, but they are also unpredictable.

The best way forward

One aspect of reviewing AIMDS after usage is examining whether specific systems functioned as the developers intended during the design phase; another is considering potential future enhancements. Rather than a discrete stage in the life cycle, this review should be a continuous activity carried out before and after each use case, especially when continuous learning systems are employed.

Theoretically, this phase could be critical for detecting and correcting biased decision-making. However, if we do not act on this immediately, the biased results that an AIMDS produces throughout its lifetime will be used to support further decision-making processes. Notably, new studies have found indications that humans can inherit the systems’ prejudice: people may reproduce bias learned from an AIMDS even when they are no longer interacting with it.

AIMDS run the risk of propagating the effects of algorithmic bias into military use-of-force decision-making procedures. While emergent bias enters the system at the point of application, pre-existing and technical kinds of bias enter it from the beginning and have ongoing influence.

We still have much to do to raise public awareness of these AIMDS flaws, their potentially catastrophic consequences and strategies for mitigating them. Such strategies may include introducing bias-reduction techniques during development and standardizing how the systems are reviewed after use.

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

