Today we are publishing a new EFF white paper, The Cautious Path to Strategic Advantage: How Militaries Should Plan for AI. This paper analyzes the risks and implications of military AI projects in the wake of Google's decision to discontinue AI assistance to the U.S. military's drone program and adopt AI ethics principles that preclude many forms of military work.

The key audiences for this paper are military planners and defense contractors, who may find the objections to military uses of AI from Google's employees and others in Silicon Valley hard to understand. Hoping to bridge that gap, we urge these audiences to consider several guiding questions. What are the major technical and strategic risks of applying current machine learning methods in weapons systems or military command and control? How should states and militaries respond to those risks? What kinds of AI are safe for military use, and what kinds aren't?

We are at a critical juncture. Machine learning technologies have received incredible hype, and indeed they have made exciting progress on some fronts, but they remain brittle, subject to novel failure modes, and vulnerable to diverse forms of adversarial attack and manipulation. They also lack the basic forms of common sense and judgment on which humans usually rely.[1]

Militaries must make sure they don't buy into the machine learning hype while missing the warning label. There is much to be done with machine learning, but there are plenty of reasons to keep it away from target selection, fire control, and most command, control, and intelligence (C2I) roles for the near future, and perhaps beyond.

The U.S. Department of Defense and its counterparts have an opportunity to show leadership and move AI technologies in a direction that improves our odds of security, peace, and stability in the long run—or they could quickly push us in the opposite direction. We hope this white paper will help them chart the former course.

Part I identifies how military use of AI could create unexpected dangers, laying out four major risks:

  • Machine learning systems can be easily fooled or subverted: neural networks are vulnerable to a range of novel attacks, including adversarial examples, model stealing, and data poisoning (see the short sketch after this list). Until these attacks are better understood and defended against, militaries should avoid ML applications that are exposed to direct, or anticipatable indirect, input from their adversaries.
  • The current balance of power in cybersecurity significantly favors attackers over defenders. Until that changes, AI applications will necessarily be running on insecure platforms, and this is a grave concern for C2I as well as for autonomous and partially autonomous weapons.
  • Many of the most dramatic and hyped recent AI accomplishments have come from the field of reinforcement learning (RL), but current state-of-the-art RL systems are particularly unpredictable, hard to control, and unsuited to complex real-world deployment.
  • The greatest risk posed by military applications of AI, increasingly autonomous weapons, and algorithmic C2I is that the interactions between the deployed systems will be extremely complex, impossible to model, and subject to catastrophic forms of failure that are hard to mitigate. This is true of the systems a single military deploys over time and, even more importantly, of interactions between the systems of opposing nations. As a result, there is a serious risk of accidental conflict, or accidental escalation of conflict, if ML or algorithmic automation is used in these kinds of military applications.
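
To make the first of these risks concrete, the snippet below is a minimal sketch of one well-known attack on neural networks, the fast gradient sign method for generating adversarial examples. It is our own illustration rather than anything from the white paper, and the `model`, inputs `x`, labels `y`, and `epsilon` budget are assumed placeholders for an ordinary PyTorch image classifier.

```python
import torch
import torch.nn.functional as F

def fgsm_adversarial_example(model, x, y, epsilon=0.03):
    """Sketch of the fast gradient sign method (FGSM).

    Perturbs inputs `x` within a small per-pixel budget `epsilon` so that a
    differentiable classifier `model` is likely to misclassify them, even
    though the change is imperceptible to a human observer.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)  # loss against the true labels
    loss.backward()                          # gradient of the loss w.r.t. the input
    # Step each pixel in the direction that increases the loss, bounded by
    # epsilon, and clamp back to the valid [0, 1] image range.
    return torch.clamp(x_adv + epsilon * x_adv.grad.sign(), 0.0, 1.0).detach()
```

The point of the sketch is how cheap the attack is: a perturbation of a few percent per pixel, invisible to a human operator, can flip a classifier's output, which is why the paper urges keeping such systems away from inputs an adversary can influence.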

Part II proposes an agenda for mitigating these risks:

  • Support and establish international institutions and agreements for managing AI, and AI-related risks, in military contexts.
  • Focus on machine learning applications that lie outside of the "kill chain," including logistics, system diagnostics and repair, and defensive cybersecurity.
  • Focus R&D effort on increasing the predictability, robustness, and safety of ML systems.
  • Share predictability and safety research with the wider academic and civilian research community.
  • Focus on defensive cybersecurity (including fixing vulnerabilities in widespread platforms and civilian infrastructure) as a major strategic objective, since the security of hardware and software platforms is a precondition for many military uses of AI. The national security community has a key role to play in changing the balance between cyber offense and defense.
  • Engage in military-to-military dialogue, and pursue memoranda of understanding and other instruments, agreements, or treaties to prevent the risks of accidental conflict, and accidental escalation, that increasing automation of weapons systems and C2I would inherently create.

Finally, Part III poses strategic questions for the future, intended to help the defense community contribute to building safe and controllable AI systems rather than vulnerable systems and processes that we may regret in decades to come.

Read the full white paper as a PDF or on the Web.