13 May 2024
Our latest report, ‘Artificial Intelligence And Related Technologies In Military Decision-Making On The Use Of Force In Armed Conflicts: Current Developments And Potential Implications’, highlights and examines some of the themes that arose during two expert workshops on the role of AI-based decision support systems (AI DSS) in decision-making on the use of force in armed conflicts. Drafted by Anna Rosalie Greipl, Geneva Academy Researcher, with contributions from Neil Davison and Georgia Hinds, International Committee of the Red Cross (ICRC), the report aims to provide a preliminary understanding of the challenges and risks related to such use of AI DSS, and to explore what measures may need to be implemented in their design and use in order to mitigate risks to those affected by armed conflict.
Anna Rosalie Greipl explained, ‘The introduction of AI to DSS for military decision-making on the use of force in armed conflicts adds a new dimension to the existing challenges posed by non-AI-based DSS, and raises legal and conceptual issues distinct from those raised by Lethal Autonomous Weapon Systems. These systems carry the potential to reduce the human judgment involved in military decision-making on the use of force in armed conflicts in a manner that raises serious humanitarian, legal, and ethical questions.’
Georgia Hinds went on to underscore the critical role of human judgment in addressing these concerns. ‘It’s crucial to preserve human judgment when it comes to decisions on the use of force in armed conflicts. And this will have implications for the way these systems are designed and used, particularly given what we already know about how humans interact with machines, and the limitations of these systems themselves.’
The report shows that certain technical limitations of AI DSS may be insurmountable, and that many existing challenges of human–AI interaction are likely to persist. It may therefore be necessary to adopt additional measures and constraints on the use of AI DSS in military decision-making on the use of force, both to reduce risks for people affected by armed conflicts and to facilitate compliance with international humanitarian law (IHL). To that end, further research and dialogue will be needed to better understand the precise measures and constraints that may be required in the design and use of AI DSS. Further analysis will also be needed to identify which applications of AI DSS in this context have the greatest impact on decisions on the use of force.
This report is part of the joint initiative ‘Digitalization of Conflict: Humanitarian Impact and Legal Protection’ between the ICRC and the Swiss Chair of International Humanitarian Law at the Geneva Academy of International Humanitarian Law and Human Rights. Other themes addressed under the project include the societal risks and humanitarian impact of cyber operations and, more recently, the legal implications of the rising civilian involvement in cyber operations.