Authored By: Rongala Jahnavi
Gitam Deemed to Be University
Abstract
International Humanitarian Law (IHL) faces unprecedented challenges with the integration of artificial intelligence (AI) in armed conflict. This article examines the complex relationship between AI technologies and IHL principles, focusing on autonomous weapons systems (AWS), cyber warfare, surveillance operations, and precision targeting. While AI proponents argue that these technologies enhance accuracy and reduce collateral damage, significant challenges emerge: loss of human control over life-and-death decisions, algorithmic bias in target selection, and fundamental attribution problems in determining responsibility for violations. These challenges threaten IHL’s core principles—distinction, proportionality, and humanity.
This article argues that AWS, representing a potential “third revolution in military affairs,” risk violating the principle of distinction through bias-driven misidentification of targets. Similarly, AI-enabled cyber operations raise profound questions about proportionality assessment and attribution of responsibility. Through comparative analysis of AI applications in military operations and examination of recent conflicts, this article demonstrates the urgent need to reevaluate existing legal frameworks. The analysis employs qualitative research methodology, examining primary sources (international treaties and conventions) and secondary sources (scholarly analysis and military doctrines). Ultimately, this article contributes to ongoing discourse on developing ethical and legal frameworks to govern AI in warfare, ensuring IHL compliance while acknowledging legitimate national security concerns and state sovereignty.
Introduction
In October 2024, as the Royal Swedish Academy announced Nobel Prize winners, internet users wryly suggested that ChatGPT deserved the literature prize “for its intricate tapestry of prose which showcases the redundancy of sentience in art.” This jest followed genuine AI-related Nobel awards in chemistry and physics, underscoring AI’s remarkable ascent from experimental technology to transformative force across disciplines—including warfare.
Artificial intelligence represents one of the fastest-growing scientific and technological domains. Though the concept originated with computing advances in the 1950s and 1960s, AI’s definition and applications have evolved dramatically. Alan Turing’s 1950 “imitation game”—now known as the Turing Test—posed the foundational question: “Can a machine think?” Turing predicted that by 2000, computers would be able to fool an average interrogator at least 30% of the time after five minutes of questioning. Contemporary AI has far exceeded these projections. Modern AI systems perform complex tasks including facial recognition, autonomous vehicle navigation, text and image generation, voice synthesis, music composition, and increasingly sophisticated simulation of human cognitive abilities such as long-term planning and creativity.
Technological advancement has consistently transformed warfare’s character. The integration of AI and autonomous weapons systems into modern military operations raises profound legal, strategic, and ethical questions, particularly regarding International Humanitarian Law (IHL). Armed drones and autonomous weapon systems have become increasingly prevalent in contemporary conflict. AI’s inherent limitations—algorithmic bias, reduced human control, and accountability challenges—threaten IHL’s fundamental principles of distinction, proportionality, and humanity. As warfare undergoes what some characterize as a revolution comparable to mechanization’s impact in the twentieth century, legal frameworks require urgent reevaluation to address the unique challenges posed by AI-enabled combat systems.
This article examines how AI integration in armed conflict challenges existing IHL frameworks. Section II traces IHL’s evolution and foundational principles. Section III analyzes challenges posed by autonomous weapons systems. Section IV examines AI in cyber warfare. Section V addresses AI’s role in surveillance and precision targeting. Section VI explores national security implications. The conclusion argues for development of specialized legal frameworks to ensure AI-enabled weapons comply with IHL principles while respecting state sovereignty and security needs.
Evolution of International Humanitarian Law
The Latin maxim inter arma silent leges—“in times of war, the laws are silent”—is often attributed to Cicero. Yet despite their historical malleability, rules governing the conduct of warfare have existed since humans first engaged in organized conflict. The concept that warfare should be subject to legal regulation is ancient, though enforcement mechanisms remained weak for millennia.
Historically, belligerents governed warfare through private agreements and treaties. The emergence of modern legal frameworks accelerated in the nineteenth century through the International Committee of the Red Cross (ICRC) and Henry Dunant’s humanitarian initiatives. Contemporary IHL finds primary codification in the four Geneva Conventions of 1949 and the Additional Protocols of 1977. Beyond these treaties, customary international law has developed to regulate armed conflict—legal principles arising from consistent state practice that achieves recognition as legally binding even absent formal treaty ratification.
IHL constitutes a body of law designed to minimize suffering during armed conflict. Its principal objectives are limiting warfare’s scope and protecting individuals who do not or no longer participate in hostilities (wounded soldiers, prisoners of war, civilians). IHL must balance this humanitarian imperative against military necessity—the need to preserve armed forces’ capacity to conduct legitimate military operations.
Foundational IHL Principles
Four core principles govern IHL’s application:
Distinction: Parties to conflict must distinguish between combatants and civilians, and between military objectives and civilian objects. Only lawful military targets may be attacked. This principle requires belligerents to direct operations exclusively against military objectives.
Proportionality: Anticipated incidental civilian harm and collateral damage must not be excessive relative to the concrete and direct military advantage anticipated. Even when attacking legitimate military objectives, disproportionate civilian harm is prohibited.
Precaution: Parties must take all feasible precautions to minimize harm to civilians and civilian objects. This includes giving effective advance warning of attacks when circumstances permit, choosing means and methods of warfare that minimize civilian harm, and canceling or suspending attacks when civilian harm becomes disproportionate.
Humanity: Suffering must be minimized to the extent consistent with military necessity. Even enemy combatants retain fundamental human dignity and protections once rendered hors de combat (out of combat).
The emergence of AI and AWS parallels earlier technological transformations. Just as mechanization revolutionized twentieth-century warfare, AI threatens to fundamentally alter twenty-first-century conflict. While AI applications in surveillance and logistics generate less controversy, armed autonomous systems raise profound ethical and legal concerns about whether machines can comply with IHL’s inherently human-centered principles.
Challenges of Autonomous Weapon Systems
Defining Autonomous Weapons
Autonomous weapon systems are robotic weapons that, once activated, can select and engage targets without human intervention. These systems integrate sensors for situational awareness, computers for information processing, and effectors (weapons) for target engagement. The Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapon Systems clarifies that AWS “are not one or two types of weapons” but rather “a weapon system that integrates autonomy into its essential operations, particularly target selection and engagement.”
The Human Judgment Imperative
Two historical examples illustrate why human judgment remains indispensable in life-and-death decisions:
In 1983, at the height of the Cold War, Soviet officer Stanislav Petrov judged an automated early-warning alert of an incoming U.S. missile strike to be a false alarm and declined to pass it up the launch chain, averting potential catastrophe. A machine operating purely on algorithmic logic might not have exercised such restraint.
In Afghanistan, a U.S. sniper team encountered a young girl who was, in fact, a Taliban scout. Paul Scharre recounts that while killing her might have been legally permissible, the team recognized ethical reasons for restraint. A robot would lack the capacity to understand why a legally permissible action might nonetheless be morally wrong.
These incidents underscore AWS limitations and highlight indispensable human capacities for contextual understanding and moral reasoning.
Market Growth and Military Investment
Despite these concerns, military robotics are transforming warfare as profoundly as they transform civilian sectors. Global military robotics expenditure reached $13.4 billion in 2022 and is projected to approach $30 billion by 2032. The U.S. Air Force’s Unmanned Aircraft Systems Flight Plan (2009–2047) predicts that competition in unmanned aerial vehicles will increasingly resemble automated stock trading—high-speed algorithmic competition leaving minimal time for human deliberation.
IHL Principle Challenges
AWS expansion creates acute challenges for core IHL principles, particularly distinction, proportionality, and precaution.
Distinction challenges: The principle requires distinguishing between lawful military targets and protected persons or objects. Scharre’s example of the child scout illustrates the problem: while the girl might technically qualify as a legitimate target under IHL’s formal rules, human moral judgment recognized reasons for restraint. AWS operating on mechanical classification criteria cannot replicate nuanced human assessment that considers context, proportionality at the individual level, and ethical considerations beyond formal legal categories.
AWS trained on biased datasets may systematically misidentify civilians as combatants based on race, ethnicity, gender, age, or other impermissible criteria, as the sketch below illustrates. These systems lack the capacity for the contextual judgment that humans apply when formal rules alone would produce unjust outcomes.
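To see how dataset bias translates into systematic misidentification, consider the following deliberately simplified Python sketch. The data, group labels, and decision rule are all invented for illustration; no fielded system is this crude, but statistical classifiers trained on skewed samples inherit the same structural flaw.

```python
# Toy illustration (synthetic data, invented numbers): how a skewed
# training set yields systematic misidentification.

training_data = [
    # (group, is_combatant) -- group A is oversampled as combatants
    *[("A", True)] * 90, *[("A", False)] * 10,
    *[("B", True)] * 10, *[("B", False)] * 90,
]

def learned_rate(group: str) -> float:
    """Fraction of training examples from this group labeled 'combatant'."""
    labels = [is_comb for g, is_comb in training_data if g == group]
    return sum(labels) / len(labels)

def classify(group: str) -> bool:
    # Decision rule: flag anyone from a group whose learned base rate
    # exceeds 0.5. Group membership, not individual conduct, decides.
    return learned_rate(group) > 0.5

print(learned_rate("A"), classify("A"))  # 0.9 True: every civilian in A is misflagged
print(learned_rate("B"), classify("B"))  # 0.1 False
```

The error here is driven entirely by group membership rather than individual conduct—precisely the kind of status-based targeting the principle of distinction prohibits.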
Proportionality challenges: Assessing whether anticipated civilian harm is “excessive” relative to military advantage requires complex value judgments. Humans can weigh incommensurable values—military advantage against human life and suffering. AWS algorithms cannot meaningfully make such assessments. Programming a proportionality algorithm requires quantifying inherently qualitative judgments about human dignity, suffering’s severity, and military advantage’s significance.
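The following deliberately naive sketch makes this difficulty concrete. Every field, weight, and threshold in it is an invented placeholder—IHL supplies no such numbers, and that absence is precisely the problem: any executable proportionality test must smuggle in an arbitrary exchange rate between harm and advantage.

```python
# A deliberately naive sketch of an "algorithmic proportionality test".
# All constants are invented placeholders with no basis in IHL.

from dataclasses import dataclass

@dataclass
class StrikeAssessment:
    expected_civilian_casualties: int
    expected_civilian_injuries: int
    infrastructure_damage_score: float   # 0.0-1.0, arbitrary scale
    military_advantage_score: float      # 0.0-1.0, arbitrary scale

def is_proportionate(a: StrikeAssessment) -> bool:
    # To compare harm with advantage at all, the algorithm must collapse
    # incommensurable values (life, suffering, tactical gain) onto one
    # numeric scale. The weights 1.0, 0.4, and 0.2 have no legal or moral
    # basis; they are engineering choices masquerading as law.
    harm = (1.0 * a.expected_civilian_casualties
            + 0.4 * a.expected_civilian_injuries
            + 0.2 * a.infrastructure_damage_score)
    # "Excessive" becomes an arbitrary threshold constant -- the
    # qualitative judgment IHL actually demands has been silently replaced.
    return harm <= 10.0 * a.military_advantage_score

# The verdict flips on the invented weights, not on legal reasoning:
print(is_proportionate(StrikeAssessment(2, 5, 0.3, 0.5)))  # True
print(is_proportionate(StrikeAssessment(8, 5, 0.3, 0.5)))  # False
```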
Precaution challenges: The principle requires taking all “feasible” precautions to minimize civilian harm, including giving warnings when circumstances permit and canceling attacks when civilian harm becomes disproportionate. AWS may lack capacity to recognize when circumstances have changed in ways requiring attack cancellation or modification. Real-time adaptation to unexpected civilian presence requires judgment that current AI systems cannot reliably exercise.
Recent Conflict Applications
The war in Ukraine is among the first conflicts to employ autonomous targeting systems extensively. Analysis of these real-world scenarios suggests that limited autonomous targeting may be feasible in isolated, predictable environments, but human supervision remains essential. Parties must maintain reliable monitoring and override capabilities if autonomous attack systems are to be employed safely. Early evidence suggests that autonomous systems without robust human oversight create unacceptable risks of IHL violations.
AI in Cyber Warfare
Defining Cyber Warfare
Cyber warfare encompasses military operations that primarily employ computer networks and systems to target adversaries’ digital infrastructure. While conceptually related to AWS, cyber warfare operates in a distinct domain. AWS are physical systems capable of kinetic effects in the material world. Cyber warfare operations affect virtual environments by disrupting or manipulating networks, though these disruptions can produce physical consequences (shutting down electrical grids, disrupting hospital systems, etc.).
Both AWS and cyber warfare create attribution difficulties due to reliance on complex, opaque software systems. This opacity complicates accountability under IHL when violations occur.
AI’s Role in Cyber Operations
AI applications in cyber warfare are not universal—cyber operations can be conducted manually, and cyber attacks predated modern AI. However, contemporary cyber operations increasingly rely on AI for situational awareness (real-time assessment of contested cyberspace), rapid execution, and adaptive responses.
Estonia 2007: One prominent early cyber attack occurred in Estonia in April 2007. Sustained distributed denial of service (DDoS) attacks paralyzed the banking system, government institutions, and much of the media. While this attack predated sophisticated AI integration, it demonstrated cyber operations’ potential strategic impact.
Evolution of cyber doctrine: The U.S. Department of Defense now recognizes cyberspace as a distinct operational domain, open to both offensive and defensive military operations. AI transforms cyber warfare by enabling automated attack capabilities and adaptive defenses. AI systems can evaluate defenders’ actions in real time and generate dynamic responses. These systems train on vast databases of global cyber activity, making AI-driven attacks more agile and difficult to counter.
IHL Application to Cyber Warfare
Cyber warfare creates novel challenges for IHL’s distinction and proportionality principles:
Distinction in cyberspace: Military and civilian networks are often interconnected or indistinguishable. A cyber attack on military communications infrastructure might propagate to civilian networks controlling critical infrastructure like hospitals or water treatment facilities. AI systems conducting cyber operations may lack capacity to adequately distinguish between legitimate military targets and protected civilian infrastructure.
Proportionality in cyberspace: Assessing proportionality requires predicting an attack’s consequences. Cyber attacks’ effects can cascade unpredictably through interconnected systems. AI systems executing cyber operations may trigger consequences far exceeding intended military advantage, violating proportionality in ways that become apparent only after the attack concludes.
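A toy simulation illustrates why these cascades defeat advance proportionality assessment. The dependency graph below is entirely hypothetical (all node names and links are invented), but it captures the structural problem: an attack meant to disable one military node propagates through links the attacker may not know exist.

```python
# Illustrative sketch (hypothetical network, not a model of any real
# system): cyber effects propagate through unknown dependencies.

from collections import deque

# Directed edges: "if X fails, Y fails too." The attacker intends to
# disable only 'mil_comms' and may not know the downstream links exist.
dependencies = {
    "mil_comms":    ["regional_isp"],
    "regional_isp": ["hospital_net", "water_scada"],
    "hospital_net": [],
    "water_scada":  [],
}
civilian = {"regional_isp", "hospital_net", "water_scada"}

def cascade(target: str) -> set[str]:
    """Breadth-first propagation of failures from the initial target."""
    affected, queue = {target}, deque([target])
    while queue:
        node = queue.popleft()
        for dep in dependencies.get(node, []):
            if dep not in affected:
                affected.add(dep)
                queue.append(dep)
    return affected

hit = cascade("mil_comms")
print("Intended target:  mil_comms")
print("Civilian systems also affected:", sorted(hit & civilian))
# -> ['hospital_net', 'regional_isp', 'water_scada']
```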
Attribution challenges: Identifying cyber attackers is notoriously difficult. AI-enabled attacks can employ sophisticated obfuscation techniques, making attribution nearly impossible. When violations occur, holding perpetrators accountable under IHL becomes extraordinarily challenging.
Social Media and Information Operations
AI combined with social media platforms creates powerful tools for influencing civilian populations for military purposes. These information operations blur lines between legitimate intelligence operations and prohibited attacks on civilian morale. Given the vast data volumes available for training and exploitation, AI capabilities confer significant advantages in both network penetration and defense.
AI in Surveillance and Precision Targeting
AI as Competitive Advantage
The 2021 U.S. National Security Commission on AI report observes: “The ability of a machine to observe, assess, and act more rapidly and accurately than a person constitutes a competitive advantage in any field—civilian or military.” AI technology confers substantial power on states and organizations deploying it effectively.
Surveillance Applications
Modern warfare has been transformed by AI, particularly in surveillance and precision targeting. AI-enhanced weapon systems employ algorithms to rapidly and accurately detect, track, and engage targets. These systems—incorporating facial and image recognition—are essential for gathering intelligence and conducting real-time battlefield surveillance.
However, these capabilities raise concerns about IHL compliance. The trend toward smaller, portable devices with advanced sensors and target recognition enables both military and non-state actors to exploit AI technology. Offensive Lethal Autonomous Robots (OLARs) lack human discretion and judgment, making it difficult for them to apply IHL principles that demand contextual assessment and proportionality judgments.
Precision Targeting Benefits and Risks
AI potentially minimizes collateral damage through precise engagement of military targets, combining machine-learning algorithms with swarms of robotic platforms. Recent combat scenarios demonstrate AI-enabled systems’ speed and effectiveness. Russia’s rapid reconnaissance-to-attack response during the 2014 Ukraine crisis illustrated AI’s capacity to compress decision-making timelines dramatically.
AI can enhance human decision-making by improving targeting precision, thereby protecting civilians and reducing casualties. However, realizing these benefits depends on successfully integrating AI systems that genuinely adhere to IHL norms.
Military Applications Worldwide
China: China employs AI to modernize command and control, improving battlefield decision-making speed and precision. The People’s Liberation Army represents an early example of AI-led warfare doctrine, using AI for data fusion and predictive planning. AI-enabled defenses are also reshaping traditional tactical doctrine, which increasingly assumes operations in AI-saturated environments.
While sophisticated missile systems employing deep reinforcement learning algorithms are reported to achieve high accuracy, concerns persist about diminished human control and system reliability. AI systems trained on historical data may replicate past biases or make catastrophic errors when encountering situations dissimilar from their training scenarios.
United States: The U.S. Navy’s LOCUST (Low-Cost UAV Swarming Technology) project and the Defense Advanced Research Projects Agency’s unmanned vessel programs demonstrate distributed sensing and continuous tracking applications. The “Loyal Wingman” program illustrates AI-assisted rapid targeting’s tactical advantages, with autonomous aircraft supporting human-piloted fighters.
Russia: Russia’s quick-reaction model in Ukraine demonstrates AI-assisted targeting benefits while highlighting supervision needs. Russian forces have employed AI for rapid target identification and engagement, compressing traditional kill chains but also demonstrating risks when systems lack adequate human oversight.
These implementations underscore that while AI promises improved military capabilities in surveillance and precision, ethical employment requires rigorous oversight. IHL’s requirement for human judgment in targeting decisions must be balanced against AI’s enhanced precision capabilities. The critical question is not whether to employ AI but how to ensure its use complies with IHL’s fundamental principles.
National Security and AI
Cyber Sovereignty Debates
AI and AWS integration into national security frameworks has intensified debates over cyber sovereignty, ethical implications, and state sovereignty protection in increasingly digital conflict environments.
In 2018, UK Attorney General Jeremy Wright questioned whether international law specifically prohibits unauthorized cyber activities that abuse territorial sovereignty. The United Kingdom maintains that no special regulation governs cyber sovereignty. Rather, the U.N. Charter prohibits cyber activities only if they constitute unlawful intervention or use of force against another state.
Conversely, France, the Netherlands, Austria, and most NATO members (except the United Kingdom) contend that unauthorized cyber operations can violate state sovereignty independent of whether they constitute force or intervention. Debate continues regarding whether sovereignty violations require functional loss or physical harm, or whether sovereignty can be violated through purely digital intrusions.
Under this view, active and offensive cyber measures can violate sovereignty, while passive defensive measures remain generally permissible. AI algorithms have become central to military systems and national security because they integrate across applications and exploit the “Internet of Things”—the proliferation of connected devices that can serve intelligence and military functions.
Dual-Use Nature of AI
AI’s dual-use character—applicable to both military and civilian purposes—creates regulatory challenges. Because AI is frequently embedded within other systems, it often operates invisibly in everyday devices while performing sensitive functions. This opacity creates verification and compliance challenges for arms control regimes.
Geopolitical Competition
Global powers deploy AI-driven weapons and cyber capabilities to demonstrate technological superiority and national power:
United States: The U.S. emphasizes transition to constant digital battlefield readiness by defining AWS capabilities and prioritizing AI for defending against persistent cyber threats. AWS serve both operational and symbolic functions, signaling technological superiority and deterrence capacity.
China: China positions itself as an ascending global power through AWS development, symbolizing growing AI superiority. Sophisticated systems including AI-guided missile technology demonstrate both military capability and technological prowess. China employs AWS as “geopolitical signifiers” representing its vision of global order.
Russia: Russia has employed AI in cyber operations to influence global political processes through social media manipulation, disinformation campaigns, and targeted interference in democratic processes.
Non-State Actors and State Sovereignty
Some analysts argue that AI military systems development has paradoxically threatened state sovereignty by providing non-state actors access to power and influence previously available only to states. Simultaneously, AI integration has enhanced states’ surveillance capabilities, strengthening national security by improving citizen protection.
National security increasingly depends on these rapidly evolving technologies as nations compete for supremacy through robust AI system development. The proliferation of AI capabilities to non-state actors, however, may ultimately undermine the state-centric international order that IHL presupposes.
Conclusion
AI’s increasing integration in armed conflict is radically transforming modern warfare. While offering operational advantages in accuracy and speed, AI presents extensive ethical and legal challenges to IHL. AWS and AI-enabled weapons are not explicitly prohibited by existing IHL, but their employment must comply with fundamental principles of distinction, proportionality, and humanity.
The Geneva Conventions’ existing legal framework prohibits indiscriminate or disproportionate attacks and mandates distinction between combatants and civilians. Additionally, Article 36 of Additional Protocol I requires states to determine whether new weapons, means, or methods of warfare comply with IHL. However, current methodologies prove insufficient for examining AI’s autonomous decision-making capabilities and algorithmic opacity.
Legislative and Regulatory Needs
Specialized laws addressing AI and AWS risks must be established. Regulatory responses might include:
- New treaty instruments: A dedicated convention on autonomous weapons could provide comprehensive regulation specifically tailored to AI challenges.
- CCW Protocol amendments: The existing UN Convention on Certain Conventional Weapons could be amended or supplemented with new protocols addressing autonomous systems.
- Geneva Conventions protocol: A new protocol under the Geneva Conventions could extend IHL principles to autonomous weapons while maintaining consistency with existing frameworks.
- ICJ advisory opinion: The International Court of Justice might render advisory opinions on state responsibility for AI warfare, clarifying legal obligations.
Accountability Frameworks
Current attribution and accountability systems prove inadequate for AI-infused conflict. New accountability mechanisms must be developed, potentially including:
- Specialized tribunals for AI-related war crimes
- Enhanced state responsibility frameworks addressing autonomous systems
- Mandatory meaningful human control requirements for all autonomous weapons systems
Meaningful human control measures are essential because autonomous targeting carries risks of unintended escalation, misidentification, and systematic IHL violations. The prospect of autonomous systems making life-and-death decisions without human judgment fundamentally challenges IHL’s human-dignity foundations.
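What such a control requirement might mean at the software level can be sketched, in heavily simplified form, as a gate through which every machine-generated target nomination must pass. All names, fields, and thresholds below are hypothetical design choices, not an implementation of any existing standard.

```python
# A minimal sketch of a "meaningful human control" gate (hypothetical
# interface): the autonomous pipeline may nominate targets, but no
# engagement proceeds without an explicit, revocable human decision.

from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    APPROVE = "approve"
    REJECT = "reject"
    ABORT = "abort"    # cancellation after approval, per the precaution rule

@dataclass
class TargetNomination:
    track_id: str
    classification: str           # e.g. "military_vehicle"
    classifier_confidence: float  # 0.0-1.0
    civilian_presence_flag: bool  # any indication of civilians nearby

def engagement_gate(nomination: TargetNomination,
                    human_decision: Decision) -> bool:
    # Hard floors the operator cannot override: low confidence or
    # possible civilian presence means the system may never fire.
    if nomination.classifier_confidence < 0.95:
        return False
    if nomination.civilian_presence_flag:
        return False
    # The default is *not* to fire: absent an explicit APPROVE, the
    # nomination lapses. The machine proposes; a human disposes.
    return human_decision is Decision.APPROVE

nom = TargetNomination("T-017", "military_vehicle", 0.97, False)
print(engagement_gate(nom, Decision.APPROVE))  # True
print(engagement_gate(nom, Decision.REJECT))   # False
```

The design choice worth noting is the asymmetric default: autonomy can only nominate, never authorize, and some prohibitions bind even against an approving operator.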
International Cooperation Imperative
Both international stability and state sovereignty face threats from unchecked AI-driven weapons technology deployment. These systems provide unprecedented capabilities in cyber warfare, asymmetric warfare, and automated retaliation systems. The international community must collaborate to create legally binding standards ensuring that AI systems employed in warfare adhere to IHL principles and state accountability frameworks.
The fundamental challenge is not whether to prohibit AI in warfare entirely—such prohibition may prove neither feasible nor desirable given legitimate security needs. Rather, the challenge is ensuring that AI integration occurs within robust legal and ethical frameworks that preserve IHL’s core principles while acknowledging technological realities. As warfare’s character transforms through AI, IHL itself must evolve—not by abandoning foundational principles, but by developing mechanisms to ensure those principles retain meaningful application in the age of autonomous weapons.
References
Primary Sources
- Geneva Convention for the Amelioration of the Condition of the Wounded and Sick in Armed Forces in the Field (First Geneva Convention), 12 August 1949, 75 UNTS 31
- Geneva Convention Relative to the Protection of Civilian Persons in Time of War (Fourth Geneva Convention), 12 August 1949, 75 UNTS 287
- Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I), 8 June 1977, 1125 UNTS 3
- United Nations Charter, 26 June 1945, 1 UNTS XVI
Secondary Sources
Books and Reports
- Akerson, D. (2013). The illegality of offensive lethal autonomy. In D. Saxon (Ed.), International Humanitarian Law and the Changing Technology of War (pp. 65-98). Martinus Nijhoff Publishers.
- Amoroso, D., Sauer, F., Sharkey, N., Suchman, L., & Tamburrini, G. (2018). Autonomy in Weapon Systems: The Military Application of Artificial Intelligence as a Litmus Test for Germany’s New Foreign and Security Policy. Heinrich Böll Foundation.
- Scharre, P. (2018). Army of None: Autonomous Weapons and the Future of War. W. W. Norton & Company.
Journal Articles
- Johnson, J. (2020). Artificial intelligence, drone swarming and escalation risks in future warfare. The RUSI Journal, 165(2), 26-36. https://doi.org/10.1080/03071847.2020.1752026
- Kallberg, J., & Cook, T. S. (2017). The unfitness of traditional military thinking in cyber. IEEE Access, 5, 8126-8130. https://doi.org/10.1109/ACCESS.2017.2693260
- Kilovaty, I. (2018). Doxfare: Politically motivated leaks and the future of the norm on non-intervention in the era of weaponized information. Harvard National Security Journal, 9, 146-179. https://ssrn.com/abstract=2945128
News and Web Sources
- ABP News Bureau. (2024, October 10). Nobel Prize in Literature: Here’s what internet thinks who should be winner in 2024 – Answer will leave you in splits. ABP Live. https://news.abplive.com/trending/nobel-prize-literature-2024-internet-predictions-winner-opinions-1723397
- Global Market Insights. (2023). Military Robotics Market Report.





