AI in IHL: Legal and Ethical Implications of Emerging Disinformation and Decision-Making Technologies
Artificial Intelligence (AI) has profound implications for the roles of humans, technology, and human-machine interaction in armed conflict. In particular, it is driving the development of autonomous weapon systems, new forms of information warfare, and changes to military decision-making processes. This panel explores new developments in the use of AI in armed conflict, focusing on disinformation technologies and decision-making processes, as well as the legal obligations and ethical considerations that should govern AI's development and use. Recently, the National Security Commission on Artificial Intelligence published its final report, concluding that the "United States must act now to field AI systems and invest substantially more resources in AI innovation to protect its security, promote its prosperity, and safeguard the future of democracy." Panel members will address both the technological innovations and the human-machine vulnerabilities that are shaping the future of AI in the context of IHL. They will also shed light on the role of governments, international organizations, and the security industry in safeguarding these technological developments and mitigating the human vulnerabilities they introduce.
- Lieutenant Colonel Christopher Coleman, U.S. Army Intelligence and Security Command
- Shiri Krebs, Deakin University, Australian Cyber Security Cooperative Research Centre
- Nicol Turner Lee, Brookings Institution
- Mary Ann McGrail, Law Office of M.A. McGrail, Moderator
- Robert McLaughlin, Australian National University
- Matt Turek, Defense Advanced Research Projects Agency (DARPA)
This session is organized by ASIL's Law of Armed Conflict Interest Group.