Introduction
The number of unmanned systems in NATO nations’ military inventories has grown rapidly and continues to increase across all domains. Unmanned Aircraft Systems (UAS) currently represent the largest share of these systems. At the same time, the level of automation built into unmanned systems has not only increased significantly, but has also reached a level of sophistication at which they appear capable of performing many tasks ‘autonomously’, without the need for direct human supervision. Although it is commonly understood within NATO that autonomous capabilities should not be integrated into lethal weapon systems, there are systems already in service which can be considered to have almost reached that limit, e.g., highly automated cannon-based air defence systems such as Skyshield1 or Close-In Weapon Systems (CIWS) such as Phalanx.2 These systems are capable of firing at incoming targets automatically, within seconds of detecting them, provided this mode of operation has been activated.
Under the umbrella of the ‘Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects,’ the United Nations (UN) conducted informal expert meetings on the topic of Lethal Autonomous Weapon Systems (LAWS) in 2014, 2015 and 2016.3 Following a Multinational Capability Development Campaign (MCDC) on the ‘Role of Autonomous Systems in Gaining Operational Access’,4 Allied Command Transformation (ACT) is currently working on a ‘Counter Unmanned Autonomous Systems’ concept for the Alliance.5 However, neither international law nor NATO doctrine currently addresses the potential legal and ethical issues which may arise from the use of highly automated weapon systems.
Aim and Methodology
The aim of this document is to outline potential legal and ethical implications of introducing highly automated unmanned systems to the national inventories of NATO’s members and partners.
The study provides a brief overview of the current state of technology in the field of system automation and looks at possible future developments. As there is no NATO definition of an autonomous weapon yet,6 the study also proposes a set of levels, or tiers, of automation/autonomy which may serve as a common baseline within NATO for defining what autonomy actually is, where it begins and how it is distinguished from automation.
After introducing the basic principles of International Humanitarian Law (IHL), often also referred to as the Law of Armed Conflict (LOAC), the study outlines the legal requirements a highly automated unmanned system must meet if NATO nations seek to introduce this kind of technology while complying with IHL. Moreover, it discusses the potential consequences and responsibilities that arise if automated functions violate international law or cause unintended harm.
Finally, the study briefly discusses the ethical implications of using highly automated systems in military operations and assesses what may or may not be acceptable within NATO.