Are Machines Lawful Combatants? Examining Autonomous Weapons in Light of IHL

Authored By: Mishaal Amjad

Pakistan College of Law

Abstract

The rise of autonomous weapons systems (AWS) presents one of the most significant legal and ethical challenges in modern warfare. These technologies, capable of selecting and engaging targets without direct human intervention, push the boundaries of International Humanitarian Law (IHL). This article examines whether such systems can be classified as lawful combatants, focusing on their compliance with IHL principles of distinction, proportionality, and accountability. It explores the lack of a clear international legal framework, surveys divergent state practices, and analyses current debates within the United Nations and civil society. Ultimately, the paper argues that full autonomy in the use of force undermines core IHL values and risks creating a world where accountability is diffused and human dignity is compromised. It concludes with proposed reforms, including a preemptive ban, regulatory frameworks, and ethical guardrails to ensure human control is never fully surrendered. Through this analysis, the article contributes to the urgent international discourse on how and whether machines should be permitted to make decisions of life and death in armed conflict.

Introduction

In recent years, the emergence of artificial intelligence in military technology has sparked profound legal, ethical, and humanitarian concerns. Among the most controversial developments are Lethal Autonomous Weapon Systems (LAWS), machines capable of selecting and engaging targets without direct human intervention. There is currently no commonly agreed definition of LAWS [1]. Yet autonomous weapons, once a theme of science fiction, are rapidly becoming a reality, raising pressing questions under International Humanitarian Law (IHL), the legal framework governing the conduct of armed conflict.

The increasing reliance on autonomy in warfare challenges foundational IHL principles, including distinction, proportionality, and precaution. These rules, set out in the Geneva Conventions and Additional Protocol I, presuppose human judgment, moral reasoning, and accountability, qualities that machines do not possess. While some argue that advanced algorithms may enhance precision and reduce human error, critics warn of a legal vacuum in which no actor bears responsibility for unlawful harm caused by machines. The debate over whether machines can ever be considered “lawful combatants” under IHL is no longer theoretical. It is a pressing issue for legal scholars, states, and international bodies alike.

This article critically examines the legality of autonomous weapons under IHL. It begins by defining LAWS and outlining their current and potential military applications. It then analyses the relevant legal framework and applies core IHL principles to assess whether autonomous weapons can operate within lawful bounds. Finally, it explores the international discourse surrounding the regulation or prohibition of such systems and evaluates whether existing legal instruments are sufficient or whether a new treaty is necessary. In doing so, it seeks to answer a crucial question: Can machines comply with the laws of war, or must humanity retain meaningful control over life-and-death decisions in armed conflict?

Defining Autonomous Weapons

Autonomous weapons, formally known as Lethal Autonomous Weapon Systems (LAWS), refer to machines that, once activated, can identify, select, and engage targets without further human intervention. While there is no universally accepted legal definition, the most referenced description comes from the United Nations Group of Governmental Experts (GGE) under the Convention on Certain Conventional Weapons (CCW), which describes LAWS as “weapons systems that can select and engage targets without human intervention.”[2]

Importantly, LAWS exist on a spectrum of autonomy. At one end sit automated weapons such as heat-seeking missiles, which follow fixed instructions and make no independent choices. At the other end, fully autonomous weapons use AI to select targets on their own and may even adapt their behaviour as they operate. In between lie semi-autonomous weapons, which still require human supervision but can carry out certain tasks independently. Examples of systems approaching autonomy include Israel’s Harpy drone [3], which can loiter in an area and autonomously attack radar emitters; South Korea’s SGR-A1 sentry gun [4], deployed in the demilitarized zone and capable of identifying human movement and, depending on configuration, engaging targets; and Russia’s Marker robot [5], reportedly capable of autonomous targeting and tracking.

While these systems may not yet meet the threshold of full autonomy, the technological trend is clear: states are investing in increasingly independent systems. Notably, the U.S. Department of Defense’s Directive 3000.09 [6] allows for the development of LAWS under stringent oversight, indicating a willingness to integrate such systems into modern arsenals.

The growing reliance on AI in warfare forces the legal community to confront a key dilemma: Can a machine be entrusted with decisions that IHL has historically reserved for human judgment? Before answering that, the core legal principles of IHL must be applied to LAWS, a task taken up in the next section.

Relevant Legal Framework under International Humanitarian Law (IHL)

International Humanitarian Law (IHL), also known as the law of armed conflict, governs the conduct of hostilities during armed conflict. Its primary aim is to protect civilians and other non-combatants, and to limit the means and methods of warfare. The legality of autonomous weapons must therefore be assessed within this framework, particularly under the Geneva Conventions (1949) and their Additional Protocols, as well as under customary international law.

One of the cornerstones of IHL is the principle of distinction, under Article 48 of Additional Protocol I, which obliges parties to distinguish between combatants and civilians, and between military objectives and civilian objects [7]. Attacks may only be directed at lawful military targets. For autonomous weapons, this raises a critical question: Can machines reliably distinguish between lawful and unlawful targets in complex, real-world combat environments?

Furthermore, the principle of proportionality, enshrined in Article 51(5)(b) of Additional Protocol I, reflects customary international law [8]. It prohibits attacks in which the expected harm to civilians would be excessive in relation to the concrete and direct military advantage anticipated [9]. This principle requires contextual human judgment, which is difficult to replicate in code or algorithms, especially where nuanced moral and legal assessments are needed.

Under Article 57 of Additional Protocol I, attackers must take all feasible precautions to avoid or minimize civilian harm [10]. This includes verifying targets, assessing potential collateral damage, and cancelling attacks if conditions change. Whether a machine can dynamically and lawfully adjust its actions based on shifting battlefield realities remains deeply contested.

The Martens Clause, appearing in the preamble of the 1899 Hague Convention II and later treaties, acts as an ethical safety net. It provides that, even in cases not explicitly covered by law, civilians and combatants remain under the protection of the principles of humanity and the dictates of public conscience [11]. This clause becomes especially relevant in regulating emerging technologies like LAWS, which may fall outside existing treaty law but still raise humanitarian concerns.

Even where treaty law is silent, customary international law, based on widespread and consistent state practice coupled with opinio juris, may apply. State positions on LAWS are rapidly evolving, but there is currently no unified customary rule either authorizing or banning their use, leaving a grey area that fuels international debate.

Application of IHL Principles to Autonomous Weapons

Having outlined the core principles of International Humanitarian Law (IHL), it is essential to assess whether autonomous weapons can lawfully operate within this framework. The complexity of IHL lies not only in its rules but in their application, requiring human reasoning, interpretation, and accountability. Whether machines can comply with these legal standards is both practically and philosophically contentious.

The principle of distinction requires that attacks be limited to combatants and military objectives, excluding civilians and civilian objects [12]. While human soldiers rely on a range of visual, contextual, and emotional cues to make such judgments, autonomous systems depend solely on algorithmic data processing. Even with advanced sensors and AI, current LAWS lack true situational awareness. Consider scenarios in which a civilian picks up a weapon or a wounded combatant surrenders: would a machine interpret these actions lawfully? The inability to understand intention, context, or nuance poses a real threat to compliance with IHL. For example, a loitering munition might misidentify a camera tripod as a missile launcher, leading to unlawful harm. Human soldiers, with training and discretion, are more likely to distinguish between the two.

Moreover, the principle of proportionality requires a balancing test [13]: whether anticipated civilian harm is excessive in relation to the concrete and direct military advantage. This is not a simple numerical calculation. It often demands moral judgment and an understanding of evolving battlefield conditions. Autonomous weapons currently operate on the basis of pre-programmed thresholds or risk matrices, but they cannot weigh legal or ethical values the way a human commander can. A machine may detect ten civilians and one hostile tank, but it cannot determine whether destroying the tank justifies the risk to human life, especially if the information is incomplete or ambiguous.

IHL requires attackers to take all feasible precautions to avoid or minimize civilian harm. This involves dynamic decision-making: altering or aborting an attack if new risks arise. Autonomous weapons, once activated, may lack the ability to reassess, particularly if communication with human operators is severed or jammed. Furthermore, “learning” AI systems might behave unpredictably in real-world combat. Even their creators may not fully understand how a system will act once deployed, raising concerns about meaningful human control and the unpredictability of battlefield outcomes.

Perhaps the most pressing legal question is: Who is liable when a LAWS violates IHL? The core of international law is accountability, but autonomous systems disperse responsibility among programmers, commanders, manufacturers, and even states. If a machine commits what would amount to a war crime, can it be punished? No. And if its actions cannot be legally attributed to a responsible actor, then a fundamental pillar of IHL, individual criminal responsibility, is undermined. This dilemma has led many legal scholars and NGOs to argue that fully autonomous weapons should be banned, not merely regulated.

Finally, even in the absence of specific treaty rules, the Martens Clause ensures that emerging weapons are subject to the dictates of public conscience and the principles of humanity. If public outcry and ethical standards reject machines making life-and-death decisions, such systems may be considered unlawful or illegitimate under customary international law, even without an explicit treaty ban.

Currently, the rapid development of autonomous weapons has triggered extensive international debate, yet no consensus has emerged on their legality or regulation. Positions diverge sharply: some states, experts, and organisations seek to ban these weapons before they become widespread; others favour binding rules to control their use; and still others resist any limits, viewing LAWS as strategically or technologically valuable.

United Nations Efforts (CCW Framework)

Since 2013, discussions on Lethal Autonomous Weapon Systems (LAWS) have been ongoing under the Convention on Certain Conventional Weapons (CCW). A Group of Governmental Experts (GGE) was established to study the legal, ethical, and military implications of LAWS. While the GGE has acknowledged the need for “meaningful human control”, it has not adopted any binding agreement [14] [15] [16]. In 2019 and again in 2021, proposals were made to begin negotiating a new legally binding instrument, but these were blocked by a small number of powerful states, including the United States, Russia, and Israel, who argue that existing IHL is sufficient [17] [18] [19]. The lack of consensus has effectively paralysed the UN process, reflecting a deeper geopolitical divide between states seeking to limit or ban LAWS and those heavily investing in their development.

Divergent State Positions

  • Pro-regulation states such as Germany, France, and Japan support developing international norms but stop short of endorsing a ban [20].
  • Ban-seeking states, including Austria, Brazil, Pakistan, and Mexico, argue that LAWS inherently violate IHL and should be preemptively outlawed, much as blinding laser weapons were banned in 1995 under Protocol IV of the CCW [21] [22] [23] [24].
  • Major military powers such as the United States, China, and Russia emphasize “responsible use” and national level oversight, rejecting binding international restrictions [25].

This divergence creates a fractured global landscape with no unified legal or political approach.

Civil Society Advocacy: The Campaign to Stop Killer Robots

Civil society has played a critical role in raising awareness. The Campaign to Stop Killer Robots [26], a coalition of over 180 NGOs [27], calls for a legally binding treaty to ban fully autonomous weapons [28]. It invokes both IHL principles and ethical considerations, arguing that delegating kill decisions to machines violates human dignity and removes accountability from warfare [29].

This campaign draws parallels to the successful advocacy efforts that led to the bans on landmines (Ottawa Treaty) and cluster munitions (Oslo Convention). It has pressured states, produced impactful reports, and helped shift public opinion [30].

Some regions and individual states are moving ahead despite global gridlock:

  • The European Parliament has passed resolutions urging the prohibition of LAWS without meaningful human control.
  • In 2018, UN Secretary-General António Guterres called for a ban on machines with the power and discretion to take lives without human involvement, describing them as “politically unacceptable and morally repugnant” [31] [32].
  • At the national level, Canada, the Netherlands, and Sweden have invested in studying the ethical implications and potential policy frameworks for responsible use, though these remain non-binding [33] [34] [35].

Despite years of debate, there is no specific treaty or customary law that governs the design, development, and deployment of LAWS [36]. This leaves a dangerous regulatory vacuum, particularly as dual-use AI technologies developed in civilian sectors can easily be repurposed for military ends.

In the absence of international consensus, the burden falls on states to adopt national level policies or moratoria, but this risks a fragmented and uneven approach. Without collective restraint, an autonomous arms race may ensue, undermining global stability and humanitarian protections.

Reform Proposals & Future Pathways

As autonomous weapons move from concept to battlefield reality, the need for legal clarity and proactive regulation becomes urgent. The current international legal framework lacks specific provisions tailored to the unique challenges of LAWS. To prevent a dangerous erosion of humanitarian norms, scholars, civil society groups, and policymakers have proposed a range of reforms, from regulatory models to outright bans. This section explores those proposals and outlines possible paths forward.

One of the most widely supported proposals is a legally binding international treaty that prohibits the development, deployment, and use of fully autonomous weapons. Modelled on existing disarmament treaties like the Ottawa Convention (1997) on landmines [37], and the Convention on Cluster Munitions (2008) [38], such a treaty could build on humanitarian concerns and ethical imperatives.

A key proposal gaining support from advocacy groups and many Global South countries is the creation of a new international treaty that clearly regulates or prohibits autonomous weapons systems. Core features of this approach would include a ban on weapons that operate without meaningful human control, a prohibition on delegating life-and-death decisions to machines, and strong accountability mechanisms for any violations. Organisations such as the Campaign to Stop Killer Robots strongly support this path, seeing it as essential to preserving human responsibility in armed conflict. However, efforts to advance treaty negotiations under the Convention on Certain Conventional Weapons (CCW) have faced strong political resistance from major military powers, making progress slow and uncertain.

An alternative approach to addressing LAWS focuses on strict regulation rather than outright prohibition. This model emphasises the importance of maintaining meaningful human control, ensuring compliance with IHL, and promoting operational transparency. Key elements of this regulatory approach include requiring human-in-the-loop or human-on-the-loop systems, establishing rigorous testing and certification procedures, and developing national legal frameworks that align with IHL standards. Technologically advanced states such as France, Germany, and Japan support this model, recognising the potential military advantages of artificial intelligence while acknowledging the need for strong legal safeguards. However, critics argue that regulation alone may be insufficient, warning that such frameworks could become weak, inconsistently applied, or difficult to enforce in high-pressure combat environments.

Another proposed approach is to interpret and extend existing principles of IHL, such as distinction, proportionality, and precaution, so they clearly apply to autonomous weapons systems, even without the adoption of new treaties. This “interpretive approach” involves issuing official guidance on how IHL should govern the use of LAWS, potentially through bodies like the International Committee of the Red Cross (ICRC) or the United Nations. It also encourages national military manuals to include specific rules for the use of AI in combat, and promotes peer review and compliance monitoring among states. While this method avoids the political difficulties of negotiating new treaties, critics argue that it may not be enough to close accountability gaps or address the legal and moral uncertainties associated with autonomous weapons.

To strengthen accountability under IHL, reform proposals suggest expanding legal responsibility to all actors involved in the deployment of LAWS. This includes holding commanders, programmers, and manufacturers liable when negligent or reckless design or use causes unlawful harm. Additionally, it has been proposed that doctrines of command responsibility should evolve to cover the oversight of autonomous systems. International investigatory bodies or fact-finding commissions could also be established to assess violations related to LAWS and ensure proper accountability. These reforms aim to make clear that even if a weapon operates autonomously, humans must still be held responsible for its actions.

Beyond legal measures, many experts advocate the development of ethical frameworks to guide the responsible use of AI in warfare. These frameworks would require transparency in decision-making algorithms, ensuring that weapons are explainable and auditable, and would embed principles of human dignity and moral responsibility into system design. Although some countries and academic institutions have already introduced ethical codes for military AI, most of these remain voluntary and lack binding enforcement mechanisms.

Conclusion

Autonomous weapons systems are more than just new technology; they raise serious legal and moral issues under IHL. As machines begin making life-and-death decisions, key IHL principles such as distinction, proportionality, and accountability face major challenges. This article asked whether autonomous weapons can be lawful combatants. While some believe they might comply with IHL rules, there are deep concerns. Machines struggle to understand human situations, and it is ethically troubling for them to make kill decisions without real human control. Although IHL is strong in theory, it lacks clear rules for lethal autonomous weapon systems (LAWS). After years of UN debates and advocacy, there is still no binding treaty or global regulation. This gap increases the risk of an arms race and removes human responsibility from warfare. Going forward, states must cooperate to prohibit or strictly control autonomous weapons. Whether through a new treaty, updated interpretations of IHL, or national laws, one rule must remain: humans must stay in charge of decisions to kill. Without action, we risk replacing legal and moral responsibility in war with algorithms and uncertainty.

References

[1] United Nations Office for Disarmament Affairs (UNODA), ‘Lethal Autonomous Weapon Systems (LAWS)’ (UNODA, 2023) accessed 10 June 2025.

[2] UNODA, Revised Rolling Text: Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems (8 November 2024) accessed 8 June 2025.

[3] Israel Aerospace Industries, ‘IAI’s Family of Loitering Attack Systems’ (IAI, 2023) accessed 10 June 2025.

[4] Daniel Hoadley, ‘The South Korean Sentry—A “Killer Robot” to Prevent War’ (Lawfare, 3 May 2021) accessed 10 June 2025.

[5] ‘Marker Anti-Tank Robotic Unmanned Ground Vehicle, Russia’ (Jane’s Defence, 2024) accessed 10 June 2025.

[6] US Department of Defense, ‘DoD Announces Update to DoD Directive 3000.09, “Autonomy in Weapon Systems”’ accessed 10 June 2025.

[7] Jean-François Quéguiner, ‘Precautions under the Law Governing the Conduct of Hostilities’ (2006) 88(864) International Review of the Red Cross 793.

[8] ICRC, Study on Customary International Humanitarian Law (2005) Rule 14 accessed 10 June 2025.

[9] ‘Overview of the Legal Framework Governing National Information Bureaux’ (April 2022) accessed 10 June 2025.

[10] Additional Protocol I to the Geneva Conventions (1977) art 57 accessed 10 June 2025.

[11] ‘The Martens Clause and the Laws of Armed Conflict’ (International Review of the Red Cross) accessed 10 June 2025.

[12] ICRC, Customary IHL, ‘Rule 1. The Principle of Distinction between Civilians and Combatants’ accessed 10 June 2025.

[13] ICRC, Customary IHL, ‘Rule 14. Proportionality in Attack’ accessed 10 June 2025.

[14] ‘Autonomous Weapons That Kill Must Be Banned, Insists UN Chief’ (UN News) https://news.un.org/en/story/2021/11/1106212 accessed 10 June 2025.

[15] ‘Convention on Certain Conventional Weapons’ (Wikipedia) https://en.wikipedia.org/wiki/Convention_on_Certain_Conventional_Weapons accessed 10 June 2025.

[16] Human Rights Watch, ‘Stopping Killer Robots: Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control’ (HRW) accessed 10 June 2025.

[17] Human Rights Watch, ‘Poll Shows Strong Opposition to “Killer Robots”’ accessed 10 June 2025.

[18] ‘Geopolitics and the Regulation of Autonomous Weapons Systems’ (Arms Control Association) accessed 10 June 2025.

[19] ‘Lethal Autonomous Weapon’ (Wikipedia) https://en.wikipedia.org/wiki/Lethal_autonomous_weapon accessed 10 June 2025.

[20] ‘Geopolitics and the Regulation of Autonomous Weapons Systems’ (Arms Control Association) accessed 10 June 2025.

[21] Stop Killer Robots, ‘UN Head Calls for a Ban’ accessed 10 June 2025.

[22] Human Rights Watch, ‘Poll Shows Strong Opposition to “Killer Robots”’ accessed 10 June 2025.

[23] Stop Killer Robots, ‘Protect Civilians: Stop Killer Robots’ accessed 10 June 2025.

[24] Human Rights Watch, ‘Heed the Call: A Moral and Legal Imperative to Ban Killer Robots’ accessed 10 June 2025.

[25] Ray Acheson, ‘It’s Time to Exercise Human Control over the CCW’ (CCW Report, vol 7 no 2, 27 March 2019) accessed 10 June 2025.

[26] Human Rights Watch, ‘Stopping Killer Robots: Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control’ (HRW) accessed 10 June 2025.

[27] Stop Killer Robots, ‘UN Head Calls for a Ban’ accessed 10 June 2025.

[28] Human Rights Watch, ‘Poll Shows Strong Opposition to “Killer Robots”’ accessed 10 June 2025.

[29] Human Rights Watch, ‘Heed the Call: A Moral and Legal Imperative to Ban Killer Robots’ accessed 10 June 2025.

[30] ‘Campaign to Stop Killer Robots’ (Wikipedia) https://en.wikipedia.org/wiki/Campaign_to_Stop_Killer_Robots accessed 10 June 2025.

[31] UN News, ‘Autonomous Weapons That Kill Must Be Banned, Insists UN Chief’ (UN News, 22 March 2023) accessed 10 June 2025.

[32] Human Rights Watch, ‘Stopping Killer Robots: Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control’ (HRW, August 2020) accessed 10 June 2025.

[33] Human Rights Watch, ‘Stopping Killer Robots: Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control’ (HRW, August 2020) accessed 10 June 2025.

[34] Stop Killer Robots, ‘Protect Civilians: Stop Killer Robots’ (Stop Killer Robots, 2021) accessed 10 June 2025.

[35] Stop Killer Robots, ‘UN Head Calls for a Ban’ (Stop Killer Robots, 22 March 2023) accessed 10 June 2025.

[36] Vincent Boulanin, ‘Geopolitics and the Regulation of Autonomous Weapons Systems’ (Arms Control Association, June 2021) accessed 10 June 2025.

[37] United Nations Office for Disarmament Affairs, ‘Anti-Personnel Landmines Convention’ (UNODA) accessed 10 June 2025.

[38] ‘Convention on Cluster Munitions’ (Wikipedia, last updated 9 June 2025) accessed 10 June 2025.
