This modern era has given birth to many groundbreaking advancements that aid humans in their day-to-day lives. One of these is artificial intelligence [“AI”], an increasingly used and developed technology whose origins date to the 1950s. The term “intelligence” itself means “the ability to acquire and apply different skills and knowledge to solve a given problem” and the use of “general mental capacity to solve, reason, and learn various situations”[1].
Meanwhile, “AI” can simply be defined as “the theory and development of computer systems to carry out tasks normally requiring human intelligence”[2]. According to Jeremy Achin, CEO of the company DataRobot, AI systems function through different means, including machine learning, deep learning, and rules.[3]
The foundation of AI’s development was laid in 1950 by the mathematician Alan Turing, who posed the question of whether, and how, machines could think like humans. Turing published the paper “Computing Machinery and Intelligence” and established the “Turing Test”, designed to determine whether a computer is “intelligent” by testing whether it can respond to input given by a human and produce output that human users cannot identify as coming from a machine.[4]
Now, around 70 years later, AI can do far more, from recognizing a person’s voice and responding to commands to driving cars without human intervention.[5] Because AI can gather information and produce detailed results free from biases such as emotion, people have become more and more reliant on it.
Warfare is another area where the use of AI is growing. Last year, the U.S. Department of Defense proposed dedicating $927 million of its $718 billion budget to AI and machine learning for the military.[6] China has also acknowledged this development, stating that the world’s major developed countries “are taking the development of AI as a major strategy to enhance national competitiveness and protect national security”[7].
Even a small nation such as Singapore is currently working to incorporate AI into its defense structure to “maximize combat effectiveness with minimum manpower.”[8] Development is accelerating because of AI’s promise to make military work safer: it can spot and detect objects that are out of human view, help personnel make decisions through analysis, increase targeting accuracy, and more.
While capable of many functions, AI also has its limitations. First, AI learns and acts from the data it is given; if that data is flawed or lacking, the results will suffer, because the AI cannot act outside of the data it receives. Furthermore, AI systems are specialized and limited in function: they may come close to perfecting the task they are designed for, but cannot do anything outside of it.
Other problems include the opacity and unpredictability of the way they process input and create output, and, of course, their inability to empathize and assess case-by-case situations, which would require them to look beyond the limited data they are given. For instance, an AI would not be able to determine what counts as a “proportional” attack, or what constitutes “sufficient precaution”, because the actions needed to fulfill these principles depend on what is happening in the field.
A shocking case demonstrating AI’s flaws was a 2017 study by a group of MIT students involving an image-recognition AI from Google: when shown a 3D-printed turtle, the machine identified it as a rifle.[9] Ironically, while AI can spot or detect things that humans cannot easily perceive, it can also be far more easily deceived.
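To make concrete how such deception works, the sketch below illustrates the basic idea behind “adversarial examples”, the class of attack behind results like the turtle-as-rifle misclassification: an image is nudged pixel by pixel in whatever direction most increases the classifier’s error, so that it still looks unchanged to a human but is misread by the model. This is a minimal, hypothetical illustration using the common “fast gradient sign” approach in Python with PyTorch; the model, labels, and perturbation size are assumptions, and the actual MIT study used a more elaborate method tailored to physical 3D objects.

```python
# Minimal sketch of an adversarial-example attack (fast gradient sign method).
# Hypothetical classifier and inputs; illustrative only, not the MIT study's code.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return a copy of `image` nudged slightly in the direction that
    increases the classifier's loss. The change is imperceptible to a
    human but can flip the model's prediction (e.g. turtle -> rifle)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)   # how wrong is the model now?
    loss.backward()                                     # gradient of the loss w.r.t. pixels
    adversarial = image + epsilon * image.grad.sign()   # step each pixel against the model
    return adversarial.clamp(0.0, 1.0).detach()         # keep pixel values in a valid range
```

The point of the sketch is that the attacker never needs to change anything a human would notice; a tiny, targeted perturbation of the input is enough to produce a confidently wrong output.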
Imagine the deadly results if a weapon’s AI mistakenly identified something that is not supposed to be attacked as a military objective, as in the 3D-printed turtle experiment. Because of the potential damage and harm to human life, many experts, including Elon Musk and the three co-founders of the AI company Google DeepMind, have pledged not to develop AI weapons that could create devastating results.
They, along with thousands of other AI experts, oppose autonomous weapons that rely on AI without sufficient human control, and they constantly warn the public of the repercussions, whether moral or pragmatic, of deploying such weapons.[10]
Legally, the existing regime of international humanitarian law [“IHL”] already provides, to some extent, a foundation for the use of AI in warfare. For instance, Article 36 of Additional Protocol I to the Geneva Conventions [“AP I”] obliges parties to determine whether the employment of a new weapon or method of warfare would be prohibited by IHL or any other relevant rule of international law.[11]
Even where a State is not a party to AP I, it is widely considered customary for States to abide by the principle behind this rule. Still, there are no specific internationally agreed rules or conventions on AI in weapons yet. Disagreement or misunderstanding about what can be used on an AI battlefield, and how much human intervention must be applied, could prove fatal.
This is dangerous not only because of AI’s unpredictability, limitations, and possible errors, or because one party could unfairly overpower another with its developments and cause superfluous damage, but also because the lack of legal certainty might free people from their responsibility to control these AI weapons.
Never mind the contents of such a law when States cannot even uniformly agree on whether a new law is needed at all. According to some, such as Austria, Brazil, and Chile, a new law for this new technology would be necessary, but other parties contend that current IHL and its principles are enough.[12]
We live in a time when the use of AI in combat is already a reality, yet the current IHL regime has not caught up with these developments, leaving us with gray areas and questions that remain unresolved.
Thus, it is necessary for all parties, not just technologists like Elon Musk but also government officials, to gather and discuss the issue to ensure that these technologies are used responsibly and do not endanger civilians. There must be a universal understanding and agreement on the type and degree of human control needed when using AI-enabled weapons, and on which types of weapons can be used in accordance with IHL.
As Netta Goussac of the International Committee of the Red Cross puts it, “technologies like AI may be ‘determining how wars can be fought’ but it is international humanitarian law that restricts how wars can be fought.”[13]
References
[1] Shabbir, Jahanzaib, and Tarique Anwer. “Artificial Intelligence and Its Role in Near Future.” Journal of LaTeX Class Files, vol. 14, 8 Aug. 2015, p. 1.
[2] Lexico, Oxford Dictionaries, www.lexico.com/definition/artificial_intelligence.
[3] “Artificial Intelligence: What It Is and Why It Matters.” SAS, www.sas.com/en_us/insights/analytics/what-is-artificial-intelligence.html.
[4] Hern, Alex. “What Is the Turing Test? And Are We All Doomed Now?” The Guardian, Guardian News and Media, 9 June 2014, www.theguardian.com/technology/2014/jun/09/what-is-the-alan-turing-test.
[5] “Artificial Intelligence: What It Is and Why It Matters.” SAS, www.sas.com/en_us/insights/analytics/what-is-artificial-intelligence.html.
[6] Knight, Will. “Military Artificial Intelligence Can Be Easily and Dangerously Fooled.” MIT Technology Review, MIT Technology Review, 5 Nov. 2019, www.technologyreview.com/s/614497/military-artificial-intelligence-can-be-easily-and-dangerously-fooled/.
[7] Ibid.
[8] Picard, Michael. “Weaponized AI in Southeast Asia: In Sight Yet out of Mind.” The Diplomat, The Diplomat, 6 July 2019, thediplomat.com/2019/07/weaponized-ai-in-southeast-asia-in-sight-yet-out-of-mind/.
[9] Vincent, James. “Google’s AI Thinks This Turtle Looks like a Gun, Which Is a Problem.” The Verge, The Verge, 2 Nov. 2017, www.theverge.com/2017/11/2/16597276/google-ai-image-attacks-adversarial-turtle-rifle-3d-printed.
[10] Vincent, James. “Elon Musk, DeepMind Founders, and Others Sign Pledge to Not Develop Lethal AI Weapon Systems.” The Verge, The Verge, 18 July 2018, www.theverge.com/2018/7/18/17582570/ai-weapons-pledge-elon-musk-deepmind-founders-future-of-life-institute.
[11] Article 36 of Additional Protocol I of 1977 to the Geneva Conventions of 1949.
[12] Lewis, Dustin A. “Legal Reviews of Weapons, Means and Methods of Warfare Involving Artificial Intelligence: 16 Elements to Consider.” Humanitarian Law & Policy Blog, ICRC, 21 Mar. 2019, blogs.icrc.org/law-and-policy/2019/03/21/legal-reviews-weapons-means-methods-warfare-artificial-intelligence-16-elements-consider/.
[13] Goussac, Netta. “Safety Net or Tangled Web: Legal Reviews of AI in Weapons and War-Fighting.” Humanitarian Law & Policy Blog, ICRC, 18 Apr. 2019, blogs.icrc.org/law-and-policy/2019/04/18/safety-net-tangled-web-legal-reviews-ai-weapons-war-fighting/.