Authored By: Ritu Sharma
Geeta Institute of Law, Samalkha
Abstract
International humanitarian law (IHL) faces unprecedented challenges when artificial intelligence (AI) is integrated into armed conflict. This article examines the intricate relationship between AI and IHL, focusing on autonomous weapons systems (AWS), precise targeting, cyber warfare, and surveillance operations. While AI promises enhanced accuracy and reduced collateral harm, significant challenges persist: loss of human control over decision-making, algorithmic bias, and attribution difficulties. These problems threaten the fundamental principles of IHL—distinction, proportionality, and humanity.
This article demonstrates that AWS, representing a “third revolution in military affairs,” risk misidentifying targets due to embedded programming biases, thereby violating the principle of distinction. Similarly, the application of AI to cyber operations raises questions of attack proportionality and attribution. Through case studies and comparative analysis of AI applications in military operations, this research emphasizes the urgent need to reevaluate current legal frameworks to address the unique challenges posed by emerging technologies. Using a qualitative methodology examining primary and secondary sources, this paper contributes to ongoing discussions about the ethical and legal frameworks needed to govern AI in combat while ensuring IHL compliance and respecting state sovereignty and national security.
Introduction
In October 2024, when the Royal Swedish Academy announced the Nobel Prize winners, internet users joked that ChatGPT, a generative chatbot, should win the literature prize “for its intricate tapestry of prose which showcases the redundancy of sentience in art.” This followed news that contributions related to artificial intelligence (AI) had won Nobel prizes in chemistry and physics. AI has emerged as one of the newest and fastest-growing scientific and technological domains.
The concept of artificial intelligence is not new; it emerged alongside computing advances in the 1950s and 1960s. However, AI’s definition and applications have evolved dramatically since then. In 1950, Alan Turing, a computing pioneer, devised an imitation game known as the “Turing Test,” asking, “Can a machine think?” The test assessed whether a machine possessed sufficient intelligence to pass as human. Turing predicted that by the year 2000, computers could be programmed well enough that an average interrogator would misidentify them at least 30% of the time after five minutes of questioning.
AI represents the automation of activities associated with human thinking: decision-making, problem-solving, and learning. Its capabilities have expanded far beyond those initial projections. Perhaps the first chatbot claimed to have passed the Turing Test was created in 2001 by programmers in Saint Petersburg. Today, AI systems perform diverse tasks: identifying faces and objects, navigating traffic autonomously, generating text and images, creating artificial voices, and composing music. AI’s performance in capacities once considered distinctly human, such as long-term planning, creativity, and the modeling of complex concepts, continues to improve. AI has evolved from an experimental technology into a powerful instrument across multiple fields, extending well beyond the Turing Test.
Significant changes in warfare have occurred alongside technological advancements. From the ancient Greek war gods Ares and Enyo (or their Roman counterparts Mars and Bellona) to the present day, warfare has undergone profound transformations. The English word “war” derives from the Germanic werran, which evolved into werre and finally warre, displacing the Latin bellum. Cave art from 10,000 years ago demonstrates that humans have engaged in group combat since the Stone Age. Nearly 5,000 years ago, the city of Uruk in Mesopotamia reportedly mounted offensive military campaigns and established defensive military systems. The ancient Indian epic Mahabharata, composed approximately 2,000 years ago, depicted conflicts throughout the subcontinent.
Earlier eras produced characteristic forms of combat: the ordered tactics of the classical West, the close-combat methods of ancient civilizations, sophisticated Islamic military techniques, and European chivalry. The advent of mechanized and total warfare in the industrial age then sparked the destructive world wars of the twentieth century. The First World War, nearly two millennia after the Mahabharata, marked a clear and horrifying departure from earlier conflicts: alongside mobile infantry weapons such as light mortars, Lewis guns, and rifle grenades came more effective combined-arms tactics integrating artillery, tanks, aircraft, and infantry. Since then, military tactics and warfare technologies have evolved ever more rapidly.
In the twenty-first century, politics and information technology play dramatically expanded roles in combat. Consequently, drone warfare, cyber warfare, and remote targeting have become commonplace, and armed drones and other unmanned systems that replace human combatants increasingly dominate modern warfare. The integration of AI into combat raises legal, strategic, and ethical concerns under International Humanitarian Law (IHL). AI’s inherent limitations, including algorithmic bias, diminished human control, and accountability gaps, particularly threaten IHL’s fundamental principles of distinction, proportionality, and humanity. Legal frameworks therefore require reevaluation to address the ethical, strategic, and humanitarian concerns surrounding AI and autonomous weapon systems.
Evolution of International Humanitarian Law
Cicero is frequently credited with the Latin maxim inter arma enim silent leges, roughly translated as “in times of war, the laws are silent.” However, since humans first engaged in warfare, some form of rules of war has existed, despite their flexibility and malleability. The concept that war should be governed by regulations is not new. Historically, belligerents regulated warfare conduct through private agreements and treaties.
Henry Dunant’s initiative and the founding of the International Committee of the Red Cross (ICRC) in the nineteenth century paved the way for contemporary legal frameworks, including IHL. Today, the four 1949 Geneva Conventions and the 1977 Additional Protocols serve as the primary codifications of this area of public international law. Additionally, customary law has developed to regulate armed conflicts over recent decades. Customary law refers to legal principles that evolve through consistent state practice and are widely regarded as obligatory.
IHL comprises a body of law designed to reduce suffering caused by armed conflicts. Its primary goals include limiting warfare and protecting individuals who do not or no longer participate in hostilities. IHL must balance this objective with protecting the military’s ability to conduct armed operations.
Similar to how mechanization transformed twentieth-century warfare, AI and AWS are predicted to revolutionize modern combat. While AI’s applications in surveillance and logistics are widely accepted, armed AI raises ethical and legal concerns. Currently, humans retain targeting decision authority even when militaries deploy automated weaponry. However, as some nations approach complete autonomy, AI systems will make crucial military decisions without human intervention, representing the extreme end of the legal spectrum in the international law of armed conflict.
Challenges of Autonomous Weapon Systems
AWS present serious operational, ethical, and legal challenges, particularly regarding their ability to comply with IHL norms. An AWS is a robotic weapon system that, once activated, operates without further human assistance. These systems independently select and engage targets, equipped with sensors, computers, and effectors that provide situational awareness, information processing, and decision-making capabilities.
According to the Group of Governmental Experts on Emerging Technologies in the Area of Lethal AWS, “AWS are not one or two types of weapons.” Rather, they constitute a capability category—weapon systems integrating autonomy into essential operations, particularly target engagement and selection.
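To make this capability category concrete, the following minimal Python sketch (every name, value, and threshold here is hypothetical and purely illustrative, not drawn from any fielded system) shows where autonomy enters the targeting cycle and where a human-control gate can sit. The legal debate is, in essence, about whether the human_approves step may lawfully be removed.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # classifier output, e.g. "combatant" or "civilian"
    confidence: float  # model confidence in [0, 1]

def sense() -> list[Detection]:
    """Stand-in for the sensor and classifier pipeline (hypothetical data)."""
    return [Detection("combatant", 0.97), Detection("civilian", 0.88)]

def human_approves(det: Detection) -> bool:
    """Human-in-the-loop gate: a person reviews each proposed engagement."""
    answer = input(f"Engage {det.label} (confidence {det.confidence:.2f})? [y/N] ")
    return answer.strip().lower() == "y"

def engagement_loop(confidence_floor: float = 0.95) -> None:
    for det in sense():
        # Distinction: never propose anything not classified as a lawful
        # target, and abstain when the model itself is uncertain.
        if det.label != "combatant" or det.confidence < confidence_floor:
            continue
        # Meaningful human control: the system only recommends; a human
        # makes the final engagement decision. Removing this gate is what
        # turns an automated system into a fully autonomous one.
        if human_approves(det):
            print("Engagement authorized by human operator.")
        else:
            print("Engagement vetoed.")

engagement_loop()
```

The design choice at issue is the placement of the human: “in the loop” (approving each engagement, as here), “on the loop” (supervising with veto power), or out of the loop entirely.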
Two scenarios illustrate AWS limitations. First, during the Cold War, disaster was averted when a Soviet officer, Stanislav Petrov, judged a computer-generated nuclear launch warning to be a false alarm and declined to escalate; a machine might not have made that choice. Second, Paul Scharre’s sniper team encountered a young girl in Afghanistan who was scouting for the Taliban. He concluded that even when killing is legally permissible, a robot would not understand that it might remain ethically wrong. These situations demonstrate AWS shortcomings and the indispensable human capacity to understand context and morality.
Despite these challenges, contemporary robots are revolutionizing warfare just as they are transforming other sectors, from cleaning services to self-driving vehicles. Numerous nations devote substantial defense spending to military robotics. According to Global Market Insights (2023), global military robotics spending reached $13.4 billion in 2022 and is projected to reach $30 billion by 2032. The U.S. Air Force’s Unmanned Aircraft Systems Flight Plan (2009–2047) suggests that future competition for greater unmanned aircraft speed and automation will resemble automated stock trading.
AWS expansion challenges core IHL principles, most significantly distinction, proportionality, and precautions. The principle of distinction requires that only lawful targets be attacked, differentiating between civilian and military objectives. Yet in Scharre’s example, the young Taliban scout was technically a legitimate target; AWS face real-world situations in which mechanical classification falls short of the contextual, human judgment that distinction presupposes.
Similarly, the proportionality principle stipulates that collateral or incidental harm to civilians must not be excessive in relation to the anticipated military advantage. Finally, the precautions principle requires taking all feasible measures to protect civilians. As AWS develop, applying these IHL principles has become increasingly critical to striking a fair balance between humanitarian needs and military necessity.
The war in Ukraine represents one of the earliest military conflicts employing lethal autonomous systems. Analysis of these real-world scenarios suggests that limited autonomous targeting may be feasible in isolated, predictable environments, but human oversight remains essential. Parties must guarantee reliable monitoring and override capabilities to make autonomous attack systems safer.
AI in Cyber Warfare
Cyber warfare describes military operations that primarily use computer networks and systems to target adversaries. Even before AWS were fielded, AI had been employed as a cyber weapon for some time. While the application of AI in cyber warfare is related to the employment of AWS in contemporary conflict, the two operate in different domains: AWS are physical systems capable of autonomous real-world operations, whereas cyber warfare affects digital environments by altering or disrupting networks. The technologies are related in that both rely on intricate, opaque software, which complicates attack attribution and accountability under IHL.
Like AWS, cyber warfare creates new challenges for IHL’s distinction and proportionality principles. AI is not always employed in cyber warfare; cyber attacks occurred long before modern AI emerged, and cyber operations can be conducted manually like conventional battlefield operations. However, contemporary cyber strategies increasingly depend on AI for predetermined automated responses, real-time situational awareness of contested cyberspace, and sheer computational execution speed.
One prominent early cyber attack occurred in Estonia in April 2007, when prolonged distributed denial-of-service (DDoS) attacks paralyzed the banking system, numerous government institutions, and much of the media. Given the advancement of cyber technology and its combat applications, the U.S. Department of Defense now views cyberspace as a new theater of conflict open to both offensive and defensive military operations.
AI is transforming cyber warfare by providing automated attack capabilities and responsive defenses. Automated cyber defenses manage threats through continuous, adaptive processes, while AI systems evaluate defenders’ actions in real-time and generate dynamic responses. Large databases from global cyber activity fuel this adaptability, making AI-driven cyber attacks more agile and difficult to counter. Social media and AI have evolved into powerful tools for subtly influencing civilians for military advantage. AI provides significant advantages in network security and penetration due to abundant accessible data, making it essential for cyber operations superiority.
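The adaptive quality described above can be illustrated with a toy anomaly detector in Python (all figures, thresholds, and the traffic model are invented for illustration): the defense continuously re-estimates what “normal” traffic looks like and reacts automatically when observations deviate from it, which is the basic pattern behind automated mitigation of floods like the Estonian DDoS attacks.

```python
import random

def adaptive_ddos_monitor(window: int = 50, sensitivity: float = 3.0) -> None:
    """Toy anomaly detector: tracks a running mean and deviation of request
    rates and flags spikes exceeding `sensitivity` deviations. Purely
    illustrative; real defenses use far richer features and models."""
    mean, dev = 100.0, 10.0       # seeded baseline (hypothetical req/s)
    alpha = 2.0 / (window + 1)    # exponential smoothing factor
    for step in range(200):
        # Simulated traffic: normal load, with a DDoS-like burst late on.
        rate = random.gauss(100, 10) + (900 if step > 150 else 0)
        if abs(rate - mean) > sensitivity * dev:
            print(f"step {step}: anomaly at {rate:.0f} req/s -> rate-limit source")
            continue  # mitigate; keep the baseline unpolluted by attack traffic
        # Continuous adaptation: the baseline tracks legitimate traffic drift.
        mean = (1 - alpha) * mean + alpha * rate
        dev = (1 - alpha) * dev + alpha * abs(rate - mean)

adaptive_ddos_monitor()
```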
The Role of AI in Surveillance and Precise Targeting
According to the 2021 U.S. National Security Commission on AI report, “The ability of a machine to observe, evaluate, and act more rapidly and correctly than a person constitutes a competitive advantage in any field—civilian or military.” AI technology provides businesses and nations employing it with substantial power.
AI has transformed modern warfare, particularly in precision targeting and surveillance. AI-enhanced weapon systems use algorithms to quickly and accurately detect, track, and engage targets. These systems, including facial and image recognition technologies, are essential for intelligence gathering and real-time battlefield surveillance. However, concerns arise regarding compliance with IHL’s proportionality and distinction principles. Offensive Lethal Autonomous Robots (OLARs) lack human discretion and judgment, making IHL application problematic. The trend toward smaller, portable devices with advanced sensors and target recognition enables both military and non-state actors to exploit AI technology without ethical constraints.
AI may minimize collateral damage by precisely engaging military targets through machine-learning techniques combined with swarms of robotic equipment. Recent combat scenarios, such as Russia’s immediate reconnaissance-to-attack response during the 2014 Ukraine crisis, demonstrate the speed and effectiveness of AI-enabled systems. Additionally, AI can enhance human decision-making by improving targeting precision, thereby protecting civilians and reducing casualties. However, realizing these benefits depends on successfully integrating AI systems that comply with IHL norms.
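One way such IHL-compliant integration is often imagined is as decision support that surfaces proportionality inputs to a human commander rather than deciding itself. The sketch below is hypothetical throughout: the scales, thresholds, and target names are invented, and IHL proportionality is a contextual legal judgment that cannot actually be reduced to arithmetic.

```python
from dataclasses import dataclass

@dataclass
class StrikeOption:
    name: str
    expected_civilian_harm: float  # analyst estimate, 0-10 scale (hypothetical)
    military_advantage: float      # analyst estimate, 0-10 scale (hypothetical)
    estimate_confidence: float     # reliability of the estimates, 0-1

def triage(options: list[StrikeOption]) -> None:
    """Decision-support triage that flags options for mandatory human and
    legal review. IHL proportionality is a legal judgment, not a ratio;
    this sketch only shows how software can surface the relevant inputs."""
    for opt in options:
        if opt.estimate_confidence < 0.8:
            verdict = "insufficient data -> route to human intelligence review"
        elif opt.expected_civilian_harm > opt.military_advantage:
            verdict = "likely excessive harm -> do not recommend"
        else:
            verdict = "forward to commander and legal adviser"
        print(f"{opt.name}: {verdict}")

triage([
    StrikeOption("vehicle depot", 1.0, 6.0, 0.9),
    StrikeOption("urban command post", 7.0, 5.0, 0.9),
    StrikeOption("relay station", 0.5, 2.0, 0.4),
])
```

The key design choice is that low-confidence estimates route to further human intelligence work rather than toward any engagement decision.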
China is modernizing command and control using AI to improve battlefield decision-making speed and precision. The People’s Liberation Army employs AI for data fusion and predictive planning, representing an early example of AI-led warfare. AI-enabled defenses reframe traditional tactical doctrines, prioritizing advanced defense strategies in AI-rich environments. Although sophisticated missile systems with deep reinforcement learning algorithms enable near-pixel-perfect accuracy, concerns about diminished human control and reliability persist. Because AI relies on pre-existing data, catastrophic errors remain possible.
Military AI applications demonstrate transformative combat potential. For example, the U.S. Navy’s LOCUST project and China’s “intelligentized” cruise missiles show that the advance of autonomous high-precision weaponry is inevitable. The U.S. Marine “warbot companies” and the U.S. Defense Advanced Research Projects Agency’s unmanned vessel program further demonstrate AI’s use in distributed sensing and continuous tracking. Meanwhile, the U.S. “Loyal Wingman” program and Russia’s quick-reaction model in Ukraine illustrate the tactical advantages of AI-assisted rapid targeting. However, these implementations underscore the necessity of oversight for AI-driven decision-making.
Taken together, these applications show AI’s promise for military surveillance and accuracy, but ethical use requires rigorous oversight. AI’s capacity for precise targeting must be balanced against IHL’s requirement for human judgment.
National Security and AI
AI and AWS incorporation into national security frameworks has intensified discussions over cyber sovereignty, ethical implications, and maintaining state sovereignty in increasingly digital conflicts. In 2018, United Kingdom Attorney General Jeremy Wright questioned whether international law specifically prohibits unauthorized cyber activities that abuse territorial sovereignty. The United Kingdom maintains that no special regulation pertaining to cyber sovereignty exists. Rather, the U.N. Charter prohibits cyber activities only if they constitute unlawful intervention or force against another state.
Conversely, nations including France, the Netherlands, and Austria, along with NATO members other than the United Kingdom, argue that unauthorized cyber activities can violate national sovereignty. The cyber sovereignty debate continues, with some contending that functional loss or physical harm constitutes a violation. In general, while passive protection is permissible, active and offensive cyber measures can violate another state’s sovereignty.
AI algorithms are essential to military systems for national security because they integrate across applications and enhance the “Internet of Things.” AI is dual-use, applying to both military and civilian contexts. And because AI is an enabling technology that embeds invisibly in other systems, it is often not obvious in everyday products.
Global powers including the United States, China, and Russia deploy cyber security innovations and AI-driven weapons to demonstrate dominance in geopolitical confrontations. The United States emphasizes the transition to a constant, digital battlefield, defining AWS as automated systems and prioritizing AI in defense against routine cyber attacks. Conversely, China positions itself as an assertive global power by using AWS as symbols of its growing AI prowess, having developed sophisticated systems including AI-guided missile technology. Both countries employ AWS as “geopolitical signifiers” conveying military might and patriotism and representing their visions of world order. Meanwhile, Russia has employed AI in cyber operations to influence global political events through strategies including social media manipulation.
Notably, some argue that AI development in military systems has threatened state sovereignty by providing non-state actors with previously unattainable influence and power. Simultaneously, AI incorporation has improved state surveillance capabilities to enhance citizen protection, thereby strengthening national security. As these nations seek to establish supremacy through robust AI system development, national security has become reliant on these rapidly evolving technologies.
Conclusion
Modern warfare is undergoing radical transformation due to increasing AI application in combat. While AI offers operational advantages such as accuracy and speed, it presents extensive ethical and legal challenges for IHL. Although AWS and AI-powered weapons are not expressly prohibited, their use must adhere to fundamental IHL principles of proportionality and distinction.
The current legal framework established by the Geneva Conventions prohibits indiscriminate or disproportionate attacks and demands distinction between combatants and civilians. Additionally, it mandates that states evaluate each new weapon for IHL compliance. However, current methods inadequately examine AI’s autonomous decision-making. Laws specifically tailored to AI and AWS risks must be established. Regulatory responses might include creating a new treaty, modifying the UN Convention on Certain Conventional Weapons (CCW), or developing a new protocol under the Geneva Conventions. The International Court of Justice may also render an advisory opinion regarding state accountability for AI warfare.
Accountability and attribution represent legal conundrums in AI-infused combat for which current systems are insufficient. New accountability mechanisms must be established, such as AI war crimes tribunals or state responsibility frameworks. Meaningful human control measures are necessary because autonomous targeting carries risks of unintentional escalation, misidentification, and systematic IHL violations.
Unchecked application of AI-driven weapons technology threatens both international stability and state sovereignty, providing unprecedented advantages in asymmetric warfare, automated retaliation systems, and cyber warfare. To ensure that combat AI systems comply with IHL principles and state accountability, the international community must collaborate to create legally binding standards.
References
Akerson, D. (2013). The illegality of offensive lethal autonomy. In D. Saxon (Ed.), International Humanitarian Law and the Changing Technology of War (pp. 65–98). Martinus Nijhoff Publishers.
Amoroso, D., Sauer, F., Sharkey, N., Suchman, L., & Tamburrini, G. (2018). Autonomy in weapon systems: The military application of artificial intelligence as a litmus test for Germany’s new foreign and security policy. Heinrich Böll Foundation.
Johnson, J. (2020). Artificial intelligence, drone swarming and escalation risks in future warfare. The RUSI Journal, 165(2), 26–36. https://doi.org/10.1080/03071847.2020.1752026
Kallberg, J., & Cook, T. S. (2017). The unfitness of traditional military thinking in cyber. IEEE Access, 5, 8126–8130. https://doi.org/10.1109/ACCESS.2017.2693260
Kilovaty, I. (2018). Doxfare: Politically motivated leaks and the future of the norm on non-intervention in the era of weaponized information. Harvard National Security Journal, 9, 146–179. https://ssrn.com/abstract=2945128