Summary
The adoption of Artificial Intelligence (AI) in military practice has been famously portrayed as the third revolution in military affairs, after gunpowder and nuclear weapons. While lethal autonomous weapons might be an emerging reality, AI is already present on the battlefield in the form of, inter alia, decision support systems and risk-assessing predictive algorithms. Just like any other means and methods of warfare, AI-based technology might (and will) lead to human rights and international humanitarian law violations. There is no consensus on the horizon regarding how, to whom and based on what principles international responsibility for such violations could be assigned, let alone implemented. This is particularly conspicuous in the European Union, where the political focus has largely been on the possible applications of AI in business and public services, while the examination of military applications is lagging behind; in fact, the core policy document on the matter, the 2020 European Commission White Paper on AI, explicitly excludes “the development and use of AI for military purposes” from its scope.
The 'Implementing International Responsibility for AI in Military Practice' (I2 RAMP) Action fills this void by bridging theory and practice to comprehensively analyze the ethical and legal challenges that military AI raises, who (individuals, States or both) can be held internationally responsible, and how this responsibility can be implemented, with a view to producing both academic and viable operational outputs. This will be achieved by adopting an interdisciplinary approach combining, on the one hand, the Researcher’s academic knowledge of international law and hands-on experience as a military legal adviser, and on the other, the expertise in ethics and computer science of researchers working on the Designing International Law and Ethics into Military AI (DILEMA) Project at the Asser Institute.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101031698
Start date: 01-09-2021
End date: 31-08-2024
Total budget - Public funding: 263 358,72 Euro - 263 358,00 Euro
Cordis data
Status: SIGNED
Call topic: MSCA-IF-2020
Update Date: 28-04-2024