Would the use of weapon systems that can detect, select and fire at targets without human intervention comply with international legal standards for the protection of the human person, and if so, under what circumstances? This is one of the key questions in the current debate about autonomous weapon systems (AWS), often called 'killer robots', which by some definitions do not yet exist.
This project examined the legal requirements with which the use of AWS would need to comply in a number of scenarios envisaged by proponents of increasing autonomy in weapon systems.
It looked beyond compliance with the international humanitarian law (IHL) rules on targeting to examine other rules of IHL and international human rights law, including standards on the use of force for law enforcement purposes.
Drawing on case law dealing with other weapon technologies and autonomous systems, it asked in particular: At whom or what may force be directed? Where and when may AWS be used? And what procedural legal requirements govern the planning, conduct and aftermath of AWS use?