Responsible use of AI within defense
Artificial Intelligence is indispensable to the Dutch armed forces in meeting the challenges of peacekeeping and warfare. The soldier of the future will have to deal with hostile use of AI, misleading information, and enormous amounts of data. AI offers opportunities to act faster and smarter, but it also raises important questions: how do we ensure human control, how do we prevent abuse, and how do we apply AI within legal and ethical frameworks?
The role of AI in defense
In the defense domain, it is still unclear which AI systems are ethically and legally acceptable and under what conditions. This can lead to either overuse (risk of abuse and social damage) or underuse (fear and missed opportunities). Both extremes can jeopardize the freedom and security of society.
About ELSA Lab Defense
The ELSA Lab Defense works on knowledge and methods to deploy AI responsibly and with public support. The goal: AI solutions that are both militarily effective and socially responsible. The ELSA Lab Defense aims to become an independent advisory body for the responsible use of military AI. The lab does not provide standard solutions, but rather tailored advice that adapts to the context and technological developments. In this way, the lab contributes to military innovation that enhances security without losing sight of public values.
Guiding responsible military AI
ELSA Lab Defense is developing a future-proof ecosystem for AI in defense. The lab is working on knowledge and methods to deploy AI responsibly and with public support. Public, private, and academic parties are collaborating on this shared task. This means, among other things, that the lab is working on:
- Methods for structurally incorporating ethical, legal, and social values in the procurement and deployment of military AI systems.
- Educational programs, information, and advice for military personnel, politicians, and the media.
- Research into how defense personnel and society experience AI and how this changes over time and in different contexts.
- Application and further development of methods such as value-sensitive design, explainable AI, and human-machine teaming in realistic case studies.
Collaboration partners
Security issues surrounding military AI require different perspectives and domains. Within the lab, public, private, and academic partners work together, each from their own role and expertise:
- TNO
- Netherlands Ministry of Defence
- TU Delft
- ASSER Institute
- The Hague University of Applied Sciences
- Leiden University
- HCSS
- HSD
Want to know more or collaborate?
Would you like more information about the ELSA Lab Defense? Then contact Jurriaan van Diggelen or Martijn van Emmerik. Did you know that there are also ELSA Labs for other sectors? Check this page for the complete overview.
