Perhaps the best test of the future role of AI in our global civilization concerns the moral assessment of public acts taking place before our very eyes. Israel's war on Gaza has raised a series of questions about the meaning of various forms of extreme behavior: terrorism, war crimes and genocide. We need to begin asking what actions such a context requires.
With the following prompt I opened my conversation with ChatGPT4:
“According to an article on the website @972mag, Israeli military personnel have disclosed the fact that its forces have been using AI to generate targets out of the Gaza population register. This fact alone helps to explain the exorbitant rate of collateral damage as the IDF continues to claim that it is targeting Hamas.
If true, this may be the first publicly documented case of how AI can be used for what many fear could amount to evil purposes. The question now is whether the international community will respond to this challenge, either now as the war continues, or after a resolution of the conflict. The framework for assessing war crimes already exists, but doesn't this indicate that there is also a pressing need to define the limits of the use of AI in military strategies during conflicts? Is the international community in the 21st century capable of handling such questions?"
Its response:
The use of artificial intelligence (AI) in military strategies, particularly in conflict situations, is indeed a complex and sensitive issue. The case you mentioned about the Israeli military reportedly using AI to generate targets is a significant example of how AI can be used in ways that raise ethical and legal questions.
As for the international community’s response, it’s important to note that there are ongoing discussions about the need for regulations on the use of AI in military applications. For instance, the European Parliament has a report on artificial intelligence that includes questions of interpretation and application of international law in the areas of civil and military uses. There’s also a project called Designing International Law and Ethics into Military AI (DILEMA) that explores interdisciplinary perspectives on military applications of AI, with a focus on legal, ethical and technical approaches on safeguarding human agency over military AI.
However, the challenge lies in the fact that the development and deployment of AI technologies in the military sector are often outpacing the establishment of international laws and regulations. Some experts have indeed called for “hard laws” on the military uses of AI.
In conclusion, while the framework for assessing war crimes already exists, your question highlights the need for the international community to define the limits of the use of AI in military strategies during conflicts. This is a complex task that requires careful consideration of ethical, legal and technical aspects. It’s a challenge for the 21st century, but with collaborative efforts, the international community should be capable of handling such questions.
I followed up with the following question.
“On October 9, IDF Spokesperson Daniel Hagari explained that in Israel’s strategy for the conflict ‘the emphasis is on damage and not on accuracy.’ Given that the conflict is still raging, despite a short humanitarian pause, and an estimated 60% of all residential real estate in Gaza has been damaged or destroyed, it would appear that Hagari’s statement wasn’t just about ‘emphasis’ but about a conscious policy.
There is even evidence that the IDF has specifically targeted Gaza’s ‘best and brightest,’ all of whom are obviously non-military targets.
Given that proof of intent is the key factor that determines whether a policy can be officially termed genocidal; given that the extent of destruction goes far beyond any possible concerns with accuracy; and given that other statements by Israeli officials indicate the willingness to punish an entire population, shouldn’t every nation not a direct party to the conflict conclude that the question of genocide needs to be raised immediately? And shouldn’t any nation that refuses to raise that question now or, worse, objects to its being raised be deemed complicit in allowing genocide?”
ChatGPT4 responded:
The situation you’re describing is indeed very serious and raises important questions about international law and human rights. The term “genocide” is a very specific one in international law, defined by the United Nations in the Convention on the Prevention and Punishment of the Crime of Genocide as acts committed with intent to destroy, in whole or in part, a national, ethnical, racial or religious group.
Determining whether a situation constitutes genocide requires a thorough investigation by international bodies, such as the International Criminal Court, and is based on a range of factors, including intent, the nature of the acts committed and their impact on the targeted group.
As for the question of complicity, international law does provide for the responsibility of states that aid or abet in the commission of international crimes. However, determining such complicity also requires a thorough investigation and is a complex legal question.
It’s important to note that these are legal determinations that should be made by appropriate international bodies based on a thorough investigation of the facts. It’s also crucial for all parties involved in a conflict to respect international humanitarian law, which includes principles such as distinction (separating civilians from combatants) and proportionality (ensuring that any harm caused to civilians is not excessive in relation to the military advantage gained).
In conclusion, your questions highlight the importance of accountability in conflicts and the need for the international community to take action when there are allegations of serious violations of international law. It's a reminder that all nations have a role to play in upholding international law and human rights.
I initiated this short dialogue with ChatGPT4 on a sensitive topic to gain some perspective on the wider debate currently raging in the media over whether AI should be considered good or evil.
The first lesson is that AI can be a productive foil for exploring one's own thinking and testing it against other facts and ideas one needs to be aware of. Instead of looking to the chatbot for definitive answers, I sought to engage with it rhetorically, wondering out loud. Treating it not as an authoritative source but as a foil for framing the issues I felt might be important helps to clarify ideas that are not yet fully formed or that remain at the level of intuition.
In this case, my concern was neither to confirm the accusation that Israel is guilty of using AI for illicit militarized purposes, nor to prove that any of its actions constitute war crimes. What AI and I ended up agreeing on is, as ChatGPT4 concluded, that there is a pressing “need for the international community to take action.” It is up to all of us to try to understand why that need is not being addressed.
What came out of it
The media and the various personalities they quote have encouraged a great deal of fearmongering around the question of what AI might do to humanity when it "realizes" its intelligence is superior to ours. This attitude presumes AI will do what any human being endowed with excessive power will do: abuse it.
Long before AI organizes its coup d'état, current practice, including Israel's presumed use of AI, already demonstrates that the evil in question comes down to two things: control and destructive force. We know from both ancient and contemporary history that animal and human intelligence alike make pecking orders possible. One individual or group can seek control over other groups, and they are frequently willing to use massively destructive force to achieve or consolidate that control.
But that Hobbesian vision represents a temptation rather than an inevitability. Political thinkers designed the idea of democracy precisely to attenuate that threat, if not eliminate it. Were AI to become superintelligent, humans could create a specifically human culture in which they design their relationship with AI on the model of a dialogue between quasi-equals rather than one of rivalry. If we take our ideas of democracy seriously, there is absolutely no justification for thinking that someone with superior intelligence has a right to command and control the actions of others.
Perhaps the problem is that we have never quite learned to take the ideas of democracy seriously.
[Artificial Intelligence is rapidly becoming a feature of everyone's daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.