Geneva Academy
9 May 2017
Our new publication Defending the Boundary analyses the constraints and requirements on the use of autonomous weapon systems (AWS), also called ‘killer robots’, under international humanitarian law (IHL) and international human rights law (IHRL).
An accompanying Research Brief provides policy makers and advocacy groups with a summary of key findings.
Drawing on case law dealing with other weapon technologies and autonomous systems, the publication asks where and when AWS may be used, and what the procedural legal requirements are for the planning, conduct and aftermath of AWS use.
The use of a ‘sentry-AWS’ to control a boundary, secure a perimeter or deny access to an area, for example along an international border, forms the backdrop to the legal discussion.
‘This publication is particularly timely, as the UN Convention on Certain Conventional Weapons (CCW) has mandated a Group of Governmental Experts to consider the issue. Views diverge on the circumstances in which it would be lawful to use an AWS, and on whether additional law is required to ensure respect for the norms that safeguard humanity’, underlines Maya Brehm, the author of the report.
‘At the national level, several states are also looking at the legality of AWS, including Switzerland, where two parliamentary motions call for an international prohibition of fully autonomous weapons’, she adds.
There is, as yet, no agreed definition of an AWS, but the basic idea is that, once activated, such a weapon system would, with the help of sensors and computationally intensive algorithms, detect, select and attack targets without further human intervention.
The use of such a weapon system can be expected to change how human beings exercise control over the use of force and its consequences. Human beings may no longer be able to predict who or what, specifically, is made the target of attack, or explain why a particular target was chosen by an AWS. That raises serious ethical, humanitarian, legal and security concerns.
According to leading researchers in the field of artificial intelligence and robotics, the deployment of AWS will be practically, if not legally, feasible within years.
The focus of scholarly inquiry into the legality of AWS has mostly been on compliance with IHL rules on the conduct of hostilities. Comparatively little attention has been given to the impact of AWS on human rights protection. This study aims to fill that gap by analysing the constraints and requirements that IHRL places on the use of force by means of an AWS, both in times of peace and during armed conflict, in relation to the conduct of hostilities and for law enforcement purposes.
‘IHL would never be the sole, and often not the primary, legal frame of reference to assess the legality of AWS’, stresses Maya Brehm.
As IHL permits the ‘categorical targeting’ of persons based on their status or imputed membership in a group (e.g. combatants), there is scope for the lawful use of an AWS for the conduct of hostilities.
However, to ensure that targeting rules can be applied in a manner that effectively protects the victims of war, human agents must appropriately bound every attack in spatio-temporal terms and retain sufficient control over an AWS to recognize changing circumstances and adjust operations in a timely manner. ‘This calls for active and constant control over every individual attack’, underlines Maya Brehm.
To preserve life, the use of potentially lethal force for law enforcement purposes can only be justified if it is absolutely necessary and strictly proportionate in the specific circumstances. Lethal force cannot be applied automatically or categorically. ‘Human beings must therefore be actively and even personally involved in every instance of force application. AWS are difficult to reconcile with IHRL due to the need to individuate the use of force’, says Maya Brehm.
Furthermore, the algorithm-based targeting of security measures can be de-humanizing, objectifying and discriminatory. ‘Human involvement in such processes serves as an essential procedural safeguard to uphold human dignity and human rights’, according to Brehm.
To allow an independent assessment of the lawfulness of the use of force, ‘human agents must remain involved in targeting processes in a manner that enables them to explain the reasoning underlying algorithmic decisions in particular circumstances’, recalls Maya Brehm.
While legal norms already regulate and limit the use of AWS, controversies and uncertainties about the applicability and meaning of existing norms diminish their capacity to serve as a guidepost. In addition, accommodating new practices within the existing legal framework carries the risk that existing rules are formally preserved but filled with a radically different meaning.
‘An explicit legal requirement to exercise meaningful human control in the use of weapons can help ensure compliance with the norms that safeguard humanity’, stresses Maya Brehm. ‘Action is urgently needed, and the CCW Group of Governmental Experts is well placed to formulate such a requirement.’