The regulation of AI in military applications is crucial, but blanket prohibitions are not the answer. Instead, a consensus among all nations on best practices for ethical and legal compliance is essential. Rules of engagement (ROE) can serve as a regulatory framework for military applications of AI because they allow armed forces to translate political and legal considerations into specific operational guidelines. These guidelines can define parameters for human-machine teaming and human control over AI systems, including monitoring requirements, control measures, geographical zones of operation, and task authorisation.
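To make the idea of ROE-derived parameters concrete, the sketch below shows one hypothetical way such guidelines could be encoded in software. All names and fields (`RoeProfile`, `task_permitted`, the specific parameters) are illustrative assumptions, not an existing standard; a real system would need far richer constraints.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RoeProfile:
    """Hypothetical encoding of ROE parameters for an AI-enabled system."""
    human_approval_required: bool   # a human must authorise each engagement
    monitoring_interval_s: int      # max seconds between operator check-ins
    allowed_zone: tuple             # (lat_min, lat_max, lon_min, lon_max) bounding box
    authorised_tasks: frozenset     # tasks the system may perform

def task_permitted(profile: RoeProfile, task: str, lat: float, lon: float) -> bool:
    """Permit a task only if it is authorised AND inside the geographical zone."""
    lat_min, lat_max, lon_min, lon_max = profile.allowed_zone
    in_zone = lat_min <= lat <= lat_max and lon_min <= lon <= lon_max
    return task in profile.authorised_tasks and in_zone

# Example profile: surveillance only, within a small bounding box.
profile = RoeProfile(
    human_approval_required=True,
    monitoring_interval_s=30,
    allowed_zone=(10.0, 11.0, 40.0, 41.0),
    authorised_tasks=frozenset({"surveillance"}),
)
```

The point of such an encoding is that the political and legal choices (which tasks, which zones, how much human oversight) live in reviewable data rather than being buried in model behaviour.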
AI's integration into military command and control systems presents challenges, including risks to civilians, the dehumanisation of life-and-death decisions, and biased decision-making. It also complicates the allocation of responsibility for AI-driven actions. Regulatory frameworks must therefore be holistic, specific, concrete, and flexible, reflecting considerations beyond purely military or legal ones. International cooperation is difficult, but necessary. An adaptive AI governance framework is essential because AI evolves far faster than traditional legislative procedures.
Effective regulation must be grounded in the technical behaviour of AI models, with AI researchers involved throughout the regulatory lifecycle. This includes addressing the distinct risks posed by AI-powered lethal autonomous weapon systems (AI-LAWS), such as unanticipated escalation and the erosion of human oversight. Clear, behaviour-based definitions of AI-LAWS are needed as a foundation for technically grounded regulation, because existing frameworks do not distinguish them from conventional LAWS.