
The Legal Framework Governing Artificial Intelligence in Warfare: Challenges and Opportunities

Authored By: Ayesha Minhas

University of Greenwich

Abstract

The rapid advancement of artificial intelligence (AI) has reshaped numerous sectors, including modern warfare. The integration of Autonomous Weapon Systems (AWS) into military operations introduces profound ethical and legal dilemmas, particularly regarding compliance with International Humanitarian Law (IHL). While AI-driven weaponry promises enhanced precision and reduced risks to human soldiers, it also raises concerns about accountability, human oversight, and the ethical implications of delegating life-or-death decisions to machines. Because AWS warfare is a relatively modern phenomenon, real-world examples of its use in conflict remain limited, and transparency from governments and military authorities is minimal. This lack of accessible data complicates efforts to establish clear legal and ethical guidelines. This article explores the intersection of AWS and IHL, analysing the extent to which existing legal frameworks are equipped to regulate autonomous warfare. It evaluates contrasting perspectives on the necessity of real-time human decision-making versus the feasibility of programmed ethical constraints. Ultimately, it argues for the development of robust international legal mechanisms to ensure responsible AI integration in military contexts, preventing the unchecked proliferation of potentially catastrophic autonomous technologies.

Introduction

The integration of artificial intelligence (AI) into warfare represents a turning point in military strategy and global security. While AI-powered Autonomous Weapon Systems (AWS) promise operational efficiency and enhanced battlefield precision, they also challenge the ethical and legal foundations that govern armed conflict. The ability of machines to make independent combat decisions raises fundamental concerns about human control, accountability, and compliance with International Humanitarian Law (IHL), which was originally designed for conventional warfare.

As autonomous warfare technology advances, debates over the necessity of human involvement in decision-making grow more complex. Some advocate for mandatory real-time human oversight to ensure adherence to IHL, while others argue that semi-autonomous systems with built-in ethical constraints could provide sufficient safeguards. A critical challenge in this debate, however, is the lack of real-world data on AWS use in conflict. The secrecy surrounding their deployment, particularly in the Russia-Ukraine and Israel-Palestine conflicts, limits transparency and complicates the establishment of regulatory frameworks. This article explores the legal and ethical dimensions of AI-driven warfare, highlighting the urgent need to adapt existing laws or develop new international regulations to mitigate potential risks. The future of warfare, and of humanity, depends on the ability to navigate the uncharted territory that AI presents.

Examination of Existing International Humanitarian Law (IHL) and its Application to Autonomous Weapons Systems (AWS)

International Humanitarian Law (IHL) is built on fundamental principles designed to mitigate the effects of armed conflict. The three key principles outlined in Additional Protocol I to the Geneva Conventions[1] are distinction, proportionality, and precautions. These principles guide the conduct of hostilities to protect civilians and ensure military necessity.

The principle of distinction[2] requires parties to a conflict to differentiate between combatants and non-combatants, targeting only military objectives. The principle of proportionality[3] prohibits attacks in which the anticipated harm to civilians and civilian objects outweighs the expected military advantage. Finally, the principle of precautions[4] mandates that all parties take feasible steps to minimise incidental harm to civilians and to avoid unnecessary suffering. The deployment of Autonomous Weapons Systems (AWS) challenges all three principles. AWS, which operate with varying degrees of human oversight, may struggle to make nuanced assessments in complex combat scenarios. Distinction becomes particularly difficult in environments where civilians and combatants are intermingled, potentially leading to wrongful targeting. Similarly, AWS may be unable to reliably assess proportionality, given their inability to weigh contextual, moral and ethical considerations effectively. Lastly, the implementation of adequate precautions is hindered by the unpredictability of AWS behaviour, particularly when AI models act beyond their programmed constraints.[5]

Challenges of Applying IHL to AWS

One of the most significant challenges in applying IHL to AWS is ensuring that these systems can accurately distinguish between civilians and combatants. While humans rely on field experience, intuition and real-time situational awareness to make targeting decisions, AWS depend on pre-programmed algorithms, sensor data, and pattern recognition. These technologies struggle in dynamic environments, such as urban warfare, where combatants may not wear uniforms and civilians may be engaged in activities that resemble hostile intent[6]. Algorithmic bias and errors pose further risks.

If an AWS is trained on biased or incomplete data, it may incorrectly identify threats, leading to wrongful engagements and violations of IHL. This can be seen in Israel’s reported use of AI-driven military programmes (operation names: “Lavender”, “Where’s Daddy”, and “the Gospel”[7]), which process vast amounts of data, from surveillance footage and communication intercepts to other intelligence, to identify potential targets. According to The Guardian[8], these AI-driven systems have facilitated the authorisation of indiscriminate attacks, leading to the killing of over one hundred Palestinian civilians. Testimonies from within the Israel Defence Forces (IDF)[9] further expose the recklessness of AI usage in warfare: an anonymous intelligence officer admitted that estimates of civilian casualties were “imprecise” and that the AI systems had pre-approved strikes with an accepted collateral damage threshold of up to twenty uninvolved civilians per attack. Despite this threshold, the actual civilian death toll exceeded one hundred, more than five times the initial authorisation. This indicates a severe miscalculation, or a disregard for human life. The testimony underscores the fundamental failure of AI-driven military tools to adhere to the principle of distinction, demonstrating that such systems not only enable but systematically normalise disproportionate harm and indiscriminate killings, raising serious concerns about their legality, accountability, and ethical implications.
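To make the mechanics of this kind of threshold-based authorisation concrete, the sketch below models a purely hypothetical collateral damage gate. Every name and number in it is an assumption made for illustration, loosely echoing the reported threshold of twenty; it is not a description of the reported Israeli systems or any real targeting system. It simply shows how a fixed numeric threshold, fed by an imprecise casualty estimate, can mechanically “approve” strikes whose true toll far exceeds what was authorised.

```python
# Hypothetical sketch only -- NOT a description of any real targeting system.
# It illustrates how a fixed collateral damage threshold, combined with an
# imprecise casualty estimate, can mechanically authorise unlawful strikes.
from dataclasses import dataclass


@dataclass
class StrikeRequest:
    target_id: str
    estimated_civilian_casualties: int  # output of an error-prone model


# Assumed for illustration, echoing the reported figure of twenty.
COLLATERAL_DAMAGE_THRESHOLD = 20


def pre_approve(strike: StrikeRequest) -> bool:
    """A naive numeric gate: approves any strike whose *estimated* harm
    falls at or under the threshold, however wrong the estimate is."""
    return strike.estimated_civilian_casualties <= COLLATERAL_DAMAGE_THRESHOLD


# The model estimates 18 casualties, so the gate approves. If the true toll
# turns out to be over 100, the rule has still "worked" exactly as
# programmed: the failure is invisible to the system that applied it.
strike = StrikeRequest(target_id="illustrative", estimated_civilian_casualties=18)
print(pre_approve(strike))  # True
```

The point of the sketch is that such a rule cannot audit its own inputs: a five-fold underestimate produces exactly the same “approved” output as an accurate one, which is how imprecise estimates translate directly into violations of distinction.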

The principle of proportionality requires military forces to weigh the anticipated military advantage against the potential civilian harm. The International Criminal Tribunal for the former Yugoslavia (ICTY), in its judgment in Prosecutor v Kupreškić et al [2000][10], defined proportionality violations and reaffirmed that excessive force against civilians constitutes a war crime. AWS, however, face challenges in quantifying and predicting collateral damage. Human operators consider contextual factors, such as emotional responses, unforeseen consequences, and evolving battlefield conditions, all of which AWS lack the cognitive ability to assess. Additionally, AWS may struggle to recognise indirect consequences, such as psychological trauma or infrastructure damage, making it difficult to ensure compliance with proportionality requirements. Reports indicate that Russia’s AI-assisted drones, such as the Lancet loitering munition and the Orlan-10 reconnaissance drone, have struck civilian infrastructure in Ukraine, raising concerns over targeting accuracy. According to the Armed Conflict Location & Event Data Project (ACLED)[11], nearly one in five drone strikes in Ukraine’s Nikopol district resulted in civilian casualties, suggesting potential failures in AI-driven target recognition or deliberate strikes on non-military targets. These incidents highlight the risks of autonomous warfare, where flawed AI algorithms or misidentification can have devastating consequences for civilians. Purves, Jenkins and Strawser argue that “even a sophisticated robot” is not “capable of replicating human judgement”[12], no matter how advanced AWS programming may become. The risk of excessive civilian harm is further compounded if AWS are deployed without rigorous oversight, as miscalculations in proportionality assessments could lead to catastrophic consequences.
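The gap between what an algorithm can compute and what the legal test demands can be made explicit. The sketch below is an assumption-laden illustration, not any deployed system: it reduces proportionality to the only form a machine can actually evaluate, a comparison of two numbers, and its comments list the considerations that have no numeric representation at all.

```python
# Hypothetical sketch: a "proportionality check" reduced to the only form an
# algorithm can actually compute -- arithmetic over pre-quantified inputs.
# Nothing here reflects a real system; both parameters are assumptions.

def naive_proportionality_check(military_advantage: float,
                                expected_civilian_harm: float) -> bool:
    """Approves an attack when quantified advantage exceeds quantified harm.

    IHL's proportionality test is evaluative, not arithmetic: forcing it
    into this shape already misstates the legal standard.
    """
    return military_advantage > expected_civilian_harm


# Factors this function cannot see, because they have no numeric encoding:
#   - indirect effects such as psychological trauma and infrastructure loss
#   - battlefield conditions that evolve between planning and impact
#   - the weight a human commander gives to doubt and to moral restraint
print(naive_proportionality_check(military_advantage=5.0,
                                  expected_civilian_harm=4.9))  # True
```

This is precisely Purves, Jenkins and Strawser’s point: the failure is not that such a comparison is computed badly, but that the judgement was never a comparison of scalars in the first place.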

To comply with IHL’s principle of precaution, warring parties must take all feasible precautions to minimise harm to civilians. AWS introduce uncertainties regarding reliability, testing, and safeguards. The Convention on Certain Conventional Weapons (CCW) and its Group of Governmental Experts (GGE) have repeatedly debated AWS and the need for regulations to ensure compliance[13]. Adequate precautions require extensive testing to ensure that AWS operate as intended, yet rapid advancements in AI and machine learning introduce risks of unpredicted behaviour. AWS that rely on adaptive algorithms could evolve beyond their initial programming, making their actions difficult to foresee and control. Maintaining human oversight also remains a challenge: if AWS were to operate with full autonomy, their decisions would not be subject to real-time human intervention, raising ethical and legal concerns about accountability and compliance with IHL.

A critical issue in the legality of AWS is the concept of “meaningful human control”, the extent to which human operators remain involved in the decision-making process. The requirement for human oversight varies across the spectrum of autonomy, ranging from remotely operated drones (with full human control) to fully autonomous systems (with no human intervention). U.S. military policy states that AWS should be designed to allow for human intervention in targeting decisions[14], which acknowledges the risks of losing human oversight. The challenge lies in defining an acceptable threshold for human involvement, balancing operational effectiveness with legal and ethical responsibilities.

There are contrasting perspectives on meaningful human control. Some critics, such as ethics and philosophy professor Robert Sparrow[15], argue that real-time human decision-making should be mandatory for all AWS engagements, ensuring compliance with IHL. Others, such as roboethicist Ronald Arkin[16], contend that semi-autonomous AWS with programmed ethical constraints may offer sufficient safeguards, reducing human cognitive burdens in high-intensity conflicts. However, the absence of a universally accepted definition of “meaningful human control” in the context of AWS complicates regulatory efforts. Establishing clear legal frameworks and technical standards will be essential to ensure that AWS operations remain within the bounds of IHL while upholding humanitarian values.
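The disagreement between these two positions can be stated precisely enough to sketch in code. The enumeration below is illustrative only, echoing the informal “in/on/out of the loop” shorthand rather than any treaty definition, and both gate functions are assumptions that deliberately caricature the Sparrow and Arkin positions for contrast.

```python
# Illustrative sketch of the autonomy spectrum discussed above. The three
# levels echo the informal "in/on/out of the loop" shorthand; neither the
# levels nor the gate rules are drawn from treaty text or any real system.
from enum import Enum


class Autonomy(Enum):
    HUMAN_IN_THE_LOOP = 1      # remotely operated: a human makes each decision
    HUMAN_ON_THE_LOOP = 2      # semi-autonomous: a human can veto in real time
    HUMAN_OUT_OF_THE_LOOP = 3  # fully autonomous: no human intervention


def sparrow_gate(level: Autonomy, human_approved: bool) -> bool:
    """Caricature of the mandatory-oversight view: no engagement proceeds
    without real-time human approval, whatever the autonomy level."""
    return human_approved


def arkin_gate(level: Autonomy, human_approved: bool,
               passes_ethical_constraints: bool) -> bool:
    """Caricature of the programmed-constraints view: once a human is merely
    on the loop, satisfying built-in constraints can stand in for approval."""
    if level is Autonomy.HUMAN_OUT_OF_THE_LOOP:
        return False  # even this permissive rule keeps a human somewhere
    return human_approved or passes_ethical_constraints


# The regulatory problem in miniature: the same engagement is permitted under
# one gate and refused under the other, so a treaty must choose the rule.
print(sparrow_gate(Autonomy.HUMAN_ON_THE_LOOP, human_approved=False))  # False
print(arkin_gate(Autonomy.HUMAN_ON_THE_LOOP, human_approved=False,
                 passes_ethical_constraints=True))                     # True
```

Until the law fixes which of these gates counts as “meaningful human control”, both designs can plausibly claim compliance, which is the definitional gap the paragraph above describes.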

The principles of distinction, proportionality, and precautions, all fundamental to IHL, are difficult to implement when dealing with autonomous systems that lack human judgement and contextual understanding. Ensuring compliance with IHL necessitates robust testing, safeguards, and the establishment of clear guidelines on meaningful human control. As AWS continues to evolve, international legal frameworks must adapt to address emerging ethical, legal, and security concerns, ensuring that autonomous warfare remains accountable to humanitarian principles.

Ethical and Legal Implications of AI in Warfare

Accountability and Responsibility

One of the primary legal challenges posed by AWS is determining accountability for violations of IHL. Traditional warfare assigns responsibility to human decision-makers, but the existence of AWS introduces a “responsibility gap”: a situation in which it is unclear who should be held accountable for unlawful acts committed by an autonomous system. Article 28 of the Rome Statute[17], which codifies the doctrine of command responsibility, holds military commanders responsible for war crimes committed by their (implicitly human) subordinates. However, if an AWS were to act in an unpredictable manner, the law is vague on whether commanders or programmers would be held accountable. The difficulty of attributing responsibility spans multiple actors, including manufacturers and policymakers as well as commanders and programmers. Each of these groups plays a role in designing, deploying, and executing AWS operations, yet no clear legal framework exists to delineate their liability in the event of a violation of IHL.

A significant barrier to accountability is that AI itself lacks legal personhood[18], meaning it cannot be held liable under current legal systems. Unlike a human soldier, who can be prosecuted for war crimes, an AI system is merely a tool, even when it functions autonomously. This lack of direct responsibility raises concerns about justice and enforcement, as victims of unlawful AWS attacks may struggle to seek redress. The diffusion of responsibility across multiple parties further complicates matters, as it becomes challenging to pinpoint where negligence or intent occurred. Policymakers must establish new legal norms to assign clear accountability and ensure that AWS-related violations of IHL do not go unpunished.

Ethical Concerns

Delegating life-or-death decisions to machines also introduces profound ethical dilemmas. Michael Walzer’s just war theory[19], which draws upon the principles of Kantian ethics[20], questions whether moral responsibility can be delegated to machines, emphasising the ethical premise that decisions involving life and death should not be automated. AWS remove human moral reasoning from the battlefield, potentially leading to the dehumanisation of warfare and the erosion of moral responsibility. Human soldiers exercise judgement based on situational awareness, emotional intelligence, and ethical considerations, all of which AWS lack. The absence of human emotions, such as compassion and restraint, may increase the likelihood of excessive and indiscriminate violence. Additionally, AWS lack the capacity for moral reflection, meaning they may execute orders without questioning their ethical implications. MIT’s 2016 Moral Machine experiment[21] highlighted the difficulty of programming morality into machines, as ethical dilemmas require human intuition that AI lacks.

Another concern is the risk of accidental escalation. Autonomous systems operate on pre-programmed algorithms, which may misinterpret threats and trigger unintended conflicts. If AWS engage in hostile actions without human intervention, misunderstandings between states or non-state actors could rapidly escalate into full-scale conflicts. Such risks highlight the need for strict limitations on AWS autonomy to prevent unintended warfare.

As AI systems become increasingly autonomous, there is growing concern about the loss of human control. AWS may behave unpredictably due to algorithmic bias, software errors, or even hacking. Cyberattacks on AWS could allow adversaries to take control of these advanced systems, potentially leading to devastating unintended consequences. Malfunctions in AWS decision-making processes could likewise result in unlawful attacks and civilian casualties, raising serious concerns about reliability and oversight. An example of this vulnerability is the Ukrainian forces’ successful use of electronic warfare to jam and misdirect Russian drones, including the Orlan-10 reconnaissance drone and the Lancet loitering munition[22]. By deploying advanced jamming systems, Ukraine has been able to disrupt the communication links between Russian drones and their operators, causing them to crash or miss their intended targets. Reports indicate that some Russian drones have even been forced to return to base or land prematurely due to signal interference. This highlights the vulnerabilities of AI-assisted drone warfare when faced with effective countermeasures.
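The failure modes just described, crashing, returning to base, or landing prematurely, correspond to a simple control policy that remotely operated drones generally implement in some form: what to do when the operator link drops. The sketch below is a generic illustration of such a link-loss failsafe; all names, timings and behaviours are assumptions for the example, not the control logic of any specific system, Russian or otherwise.

```python
# Generic illustration of a drone link-loss failsafe; every behaviour and
# timing here is assumed for the example, not taken from any real system.
from enum import Enum


class LinkState(Enum):
    CONNECTED = "connected"
    JAMMED = "jammed"        # operator link disrupted by electronic warfare


class Action(Enum):
    CONTINUE_MISSION = "continue mission"
    RETURN_TO_BASE = "return to base"
    LAND_IMMEDIATELY = "land immediately"


def on_link_update(link: LinkState, seconds_since_contact: float) -> Action:
    """A toy failsafe: short outages trigger return-to-base, long ones a
    forced landing. Jamming exploits exactly this decision point."""
    if link is LinkState.CONNECTED:
        return Action.CONTINUE_MISSION
    if seconds_since_contact < 30.0:
        return Action.RETURN_TO_BASE
    return Action.LAND_IMMEDIATELY


# Under sustained jamming the drone never re-acquires its operator, so this
# policy degrades the mission rather than letting the aircraft act blindly.
print(on_link_update(LinkState.JAMMED, seconds_since_contact=45.0))
```

The design tension is visible even in this toy: the more autonomy a system is given to “ride through” jamming, the less a severed link constrains it, which is exactly why countermeasure resistance and meaningful human control pull in opposite directions.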

Some experts warn that highly autonomous AWS could pose an existential threat if they escape human control mechanisms entirely. The ICRC[23] has warned that if AWS evolve to operate independently of human commands, their behaviour could become erratic or self-sustaining, leading to uncontrollable warfare. Given these risks, strict safeguards must be imposed, including failsafe mechanisms, international oversight, and clear operational limitations, to ensure AWS remain under meaningful human control.

Potential for New Legal Frameworks

The rapid advancement of AWS technology highlights significant gaps in existing IHL, which was not designed to address fully autonomous systems. Current legal frameworks, including the Geneva Conventions and their Additional Protocols, assume human decision-making at all stages of warfare. AWS challenge this assumption by introducing decision-making processes that operate without direct human input. As a result, existing laws may be inadequate to address issues of accountability, proportionality, and meaningful human control.

A new treaty or protocol specifically tailored to AWS is necessary to bridge these gaps. This legal instrument should establish clear rules on the permissible levels of autonomy in AWS, ensuring that human oversight remains central to decision-making. It should also define accountability mechanisms to close the responsibility gap, ensuring that violations of IHL by AWS can be effectively prosecuted. Without clear regulations, there is a risk of unchecked proliferation of AWS, leading to increased global instability and the erosion of humanitarian protections.

A robust legal framework for AWS should include several key provisions:

  1. A Ban on Fully Autonomous Weapons Systems: Complete autonomy in AWS should be prohibited, particularly for systems capable of making life-or-death decisions without human intervention. This aligns with ethical considerations and ensures compliance with fundamental IHL principles.
  2. Requirements for Meaningful Human Control: The new framework should establish a clear threshold for human oversight in AWS operations. It must specify the degree of human involvement necessary to ensure compliance with legal and ethical standards.
  3. Standards for Testing, Verification, and Transparency: AWS should undergo rigorous testing and certification before deployment. Developers must provide transparency regarding AWS decision-making processes and demonstrate that systems operate predictably and within legal constraints.
  4. Provisions for Accountability and Responsibility: The framework should specify who is legally responsible for AWS actions, ensuring that programmers, manufacturers, and military commanders can be held accountable for violations of IHL.
  5. International Cooperation and Consensus-Building: AWS regulation requires global cooperation to be effective. A new treaty should encourage states to collaborate on technology-sharing agreements, oversight mechanisms, and diplomatic efforts to prevent an AI arms race.

In addition to binding legal instruments, soft law mechanisms, such as guidelines, industry standards, and codes of conduct, can also play a crucial role in shaping AWS development and use. The OECD AI Principles[24] emphasise the importance of transparency, accountability and human oversight in AI development, and could be adapted for military AI purposes. These non-binding instruments provide flexibility and allow for adaptation to emerging technological challenges. One approach to consider is industry self-regulation, where technology companies voluntarily commit to ethical AI development principles. Governments and international organisations can also promote best practices, such as requiring AWS developers to conduct human rights impact assessments and publish transparency reports. Public awareness and multi-stakeholder dialogue are essential in fostering responsible AI governance. Civil society organisations, academic institutions, and legal experts must be involved in discussions to ensure that AWS regulations reflect diverse perspectives and uphold humanitarian principles.

Conclusion

The rise of AWS presents both opportunities and profound challenges for modern warfare. While these systems offer potential military advantages, their use raises significant legal, ethical, and humanitarian concerns. Existing IHL frameworks are clearly insufficient to address the complexities of AWS, making the development of a new treaty or protocol an urgent necessity. Establishing clear legal boundaries for AWS, ensuring meaningful human control, and enforcing accountability measures are essential to safeguard humanitarian protections. The risks associated with unchecked AWS deployment, including the potential for unintended escalation, loss of human oversight, and violations of IHL, all demand immediate global action. A human-centred approach to AI in warfare, grounded in legal and ethical principles, is crucial to ensuring that technological advancements do not come at the expense of fundamental human rights. Only through international cooperation and a sincere commitment to responsible AI governance can the future of armed conflict be responsibly shaped and the integrity of international humanitarian law preserved.


Secondary Resources:

Books & Ethical Discussions:

    • Arkin R, Governing Lethal Behavior in Autonomous Robots (Chapman & Hall 2009)
    • Forrest BK, ‘The Ethics and Challenges of Legal Personhood for AI’ Yale Law Journal Forum (22 April 2024)
    • Kant I, Groundwork of the Metaphysics of Morals (Mary Gregor trans, Cambridge University Press 1998)
    • MIT Media Lab, ‘Moral Machine’ (2016)
    • Purves D, Jenkins R and Strawser BJ, ‘Autonomous Machines, Moral Judgment, and Acting for the Right Reasons’ (2015) 18 Ethical Theory and Moral Practice 851
    • Sparrow R, ‘Killer Robots’ (2007) 24 Journal of Applied Philosophy 62
    • Walzer M, Just and Unjust Wars: A Moral Argument with Historical Illustrations (4th edn, Basic Books 2006)

References:

[1] Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I) (adopted 8 June 1977, entered into force 7 December 1978) 1125 UNTS 3

[2] Protocol I, arts 48 and 51(2)

[3] Protocol I, art 51(5)(b)

[4] Protocol I, art 57

[5] ICRC, ‘ICRC Position on Autonomous Weapon Systems’ (ICRC) accessed 10 March 2025

[6] ICRC, ‘ICRC Position on Autonomous Weapon Systems’ (ICRC) accessed 10 March 2025

[7] Magid Y and Adler S, ‘‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza’ +972 Magazine (3 April 2024) accessed 11 March 2025

[8] McKernan B, ‘‘The machine did it coldly’: Israel used AI to identify 37,000 Hamas targets’ The Guardian (3 April 2024) accessed 11 March 2025

[9] McKernan B, ‘‘The machine did it coldly’: Israel used AI to identify 37,000 Hamas targets’ The Guardian (3 April 2024) accessed 11 March 2025

[10] Prosecutor v Kupreškić et al. (Trial Judgment) IT-95-16-T (ICTY, 14 January 2000)

[11] Polishchuk O and Gurcov N, ‘Bombing into submission: Russian targeting of civilians and infrastructure in Ukraine’ (ACLED, 21 February 2025) accessed 12 March 2025

[12] Purves D, Jenkins R and Strawser BJ, ‘Autonomous Machines, Moral Judgment, and Acting for the Right Reasons’ (2015) 18 Ethical Theory and Moral Practice 851

[13] CCW Group of Governmental Experts on Lethal Autonomous Weapons Systems (GGE on LAWS), ‘Report of the 2019 Session of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems’ (CCW/GGE.LAWS/2019/3, 2019) accessed 12 March 2025

[14] U.S. Department of Defense, DoD Directive 3000.09: Autonomy in Weapon Systems (21 November 2012) accessed 12 March 2025

[15] Sparrow R, ‘Killer Robots’ (2007) 24 Journal of Applied Philosophy 62

[16] Arkin R, Governing Lethal Behavior in Autonomous Robots (Chapman & Hall 2009)

[17] Rome Statute of the International Criminal Court (adopted 17 July 1998, entered into force 1 July 2002) 2187 UNTS 3, art 28

[18] Forrest BK, ‘The Ethics and Challenges of Legal Personhood for AI’ Yale Law Journal Forum (22 April 2024) accessed 12 March 2025

[19] Walzer M, Just and Unjust Wars: A Moral Argument with Historical Illustrations (4th edn, Basic Books 2006) 100

[20] Kant I, Groundwork of the Metaphysics of Morals (Mary Gregor trans, Cambridge University Press 1998)

[21] MIT Media Lab, ‘Moral Machine’ (2016) accessed 12 March 2025

[22] Axe D, ‘Russia Deployed Radio Jammers To Ground Ukraine’s Drones. Just One Problem: The Jammers Don’t Work’ (31 January 2024) accessed 13 March 2025

[23] ICRC, ‘Working Paper on Autonomous Weapon Systems’ ICRC (2021)

[24] OECD, Recommendation of the Council on Artificial Intelligence (OECD/LEGAL/0449, 2019) accessed 12 March 2025
