End-to-End Game-Focused Learning of Adversary Behavior in Security Games

Authors

  • Andrew Perrault, Harvard University
  • Bryan Wilder, Harvard University
  • Eric Ewing, University of Southern California
  • Aditya Mate, Harvard University
  • Bistra Dilkina, University of Southern California
  • Milind Tambe, Harvard University

DOI:

https://doi.org/10.1609/aaai.v34i02.5494

Abstract

Stackelberg security games are a critical tool for maximizing the utility of limited defense resources to protect important targets from an intelligent adversary. Motivated by green security, where the defender may only observe an adversary's response to defense on a limited set of targets, we study the problem of learning a defense that generalizes well to a new set of targets with novel feature values and combinations. Traditionally, this problem has been addressed via a two-stage approach where an adversary model is trained to maximize predictive accuracy without considering the defender's optimization problem. We develop an end-to-end game-focused approach, where the adversary model is trained to maximize a surrogate for the defender's expected utility. We show, both in theory and in experiments, that our game-focused approach achieves higher defender expected utility than the two-stage alternative when data is limited.
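The abstract contrasts two training objectives for the adversary model: a standard predictive (two-stage) loss and a game-focused loss that optimizes a surrogate for the defender's expected utility. The sketch below is a simplified, hypothetical illustration of that contrast only; the linear attractiveness model, the softmax "soft" defender response, and the synthetic data are assumptions made for brevity and are not the surrogate, architecture, or code used in the paper.

```python
# Hypothetical sketch (not the authors' code): contrast a two-stage predictive
# loss with a game-focused loss that backpropagates a defender-utility
# surrogate through a soft (softmax) defender response.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_targets, n_features = 10, 4

# Synthetic data standing in for observed attacks on training targets.
features = torch.randn(n_targets, n_features)
true_weights = torch.tensor([1.0, -0.5, 0.3, 0.8])
attack_probs = torch.softmax(features @ true_weights, dim=0)  # adversary's attack distribution
defender_value = torch.rand(n_targets)                        # value of covering each target

def defender_utility(pred_attractiveness, empirical_attack):
    # Soft defender response: coverage is a softmax over predicted adversary
    # attractiveness, a differentiable stand-in for the defender's optimization.
    coverage = torch.softmax(pred_attractiveness, dim=0)
    # Defender collects value[i] when target i is both attacked and covered.
    return (empirical_attack * coverage * defender_value).sum()

def train(loss_fn, steps=200, lr=0.05):
    model = nn.Linear(n_features, 1)  # predicts attractiveness from target features
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = model(features).squeeze(-1)
        loss_fn(logits).backward()
        opt.step()
    return model

# Two-stage: fit the adversary model for predictive accuracy only (cross-entropy).
two_stage = train(lambda logits: -(attack_probs * torch.log_softmax(logits, dim=0)).sum())

# Game-focused: train the same model to maximize the defender-utility surrogate.
game_focused = train(lambda logits: -defender_utility(logits, attack_probs))

for name, model in [("two-stage", two_stage), ("game-focused", game_focused)]:
    with torch.no_grad():
        u = defender_utility(model(features).squeeze(-1), attack_probs)
    print(f"{name} defender utility: {u.item():.3f}")
```

In this toy setting the game-focused model is rewarded only for the defense decisions its predictions induce, whereas the two-stage model is rewarded for matching attack frequencies; the paper's point is that the former can yield higher defender utility when training data is scarce.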

Published

2020-04-03

How to Cite

Perrault, A., Wilder, B., Ewing, E., Mate, A., Dilkina, B., & Tambe, M. (2020). End-to-End Game-Focused Learning of Adversary Behavior in Security Games. Proceedings of the AAAI Conference on Artificial Intelligence, 34(02), 1378-1386. https://doi.org/10.1609/aaai.v34i02.5494

Issue

Vol. 34 No. 02 (2020)

Section

AAAI Technical Track: Computational Sustainability