
Making the Most of Our Regrets: Regret-Based Solutions to Handle Payoff Uncertainty and Elicitation in Green Security Games

Conference paper in Decision and Game Theory for Security (GameSec 2015)

Abstract

Recent research on Green Security Games (GSG), i.e., security games for the protection of wildlife, forests, and fisheries, relies on the promise of an abundance of available data in these domains to learn adversary behavioral models and determine game payoffs. This research suggests that adversary behavior models (capturing bounded rationality) can be learned from real-world data on where adversaries have attacked, and that game payoffs can be determined precisely from data on animal densities. However, previous work has, as yet, failed to demonstrate the usefulness of these behavioral models in capturing adversary behaviors based on real-world data in GSGs. Previous work has also been unable to address situations where available data is insufficient to accurately estimate behavioral models or to obtain the required precision in the payoff values.
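
For readers unfamiliar with these behavioral models: the bounded-rationality models referenced above are typically drawn from the quantal response family [15, 18, 28]. The following is an illustrative sketch, not this paper's exact parameterization; here \(U^a_i(x)\) denotes the attacker's expected utility for attacking target \(i\) under defender coverage \(x\), and \(\lambda \ge 0\) governs the degree of rationality:

% Quantal response (QR) attack distribution -- illustrative form.
% As \lambda grows, the distribution concentrates on best responses;
% \lambda = 0 yields uniformly random attacks.
\[
  q_i(x) = \frac{e^{\lambda\, U^a_i(x)}}{\sum_{j=1}^{T} e^{\lambda\, U^a_j(x)}}
\]

Learning such a model from attack data then amounts to fitting \(\lambda\) (and, in richer variants such as SUQR [18], feature weights) by maximum likelihood over observed attack locations.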

In addressing these limitations, as our first contribution, this paper, for the first time, provides validation of the aforementioned adversary behavioral models based on real-world data from a wildlife park in Uganda. Our second contribution addresses situations where real-world data is not precise enough to determine exact payoffs in GSGs, by providing the first algorithm to handle payoff uncertainty in the presence of adversary behavioral models. This algorithm is based on the notion of minimax regret. Furthermore, in scenarios where the data is not even sufficient to learn adversary behaviors, our third contribution is to provide a novel algorithm to address payoff uncertainty assuming a perfectly rational attacker (instead of relying on a behavioral model); this algorithm allows for a significant scale-up for large security games. Finally, to reduce the problems caused by the paucity of data, given mobile sensors such as Unmanned Aerial Vehicles (UAVs), we introduce new payoff elicitation strategies to strategically reduce uncertainty.
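
To make the minimax regret criterion concrete, here is a schematic formulation in the spirit of [2, 17]; the notation is illustrative rather than this paper's exact mathematical program. Let \(\mathbf{X}\) be the defender's strategy space (see the notes below), let \(\mathcal{I}\) be the set of payoff instantiations consistent with the available (e.g., interval) data, and let \(U^d(x, I)\) be the defender's expected utility for strategy \(x\) when the true payoffs are \(I\) and the attacker responds according to the assumed model:

% Max regret of x: the worst-case utility loss, over payoff
% instantiations, relative to the best strategy for that instantiation.
\[
  \mathrm{MR}(x) = \max_{I \in \mathcal{I}} \; \max_{x' \in \mathbf{X}} \Big[ U^d(x', I) - U^d(x, I) \Big]
\]
% Minimax regret: play the strategy whose worst-case regret is smallest.
\[
  \mathrm{MMR} = \min_{x \in \mathbf{X}} \mathrm{MR}(x)
\]

The payoff elicitation strategies mentioned above then follow the standard regret-based elicitation pattern [2, 3]: sensing effort (e.g., UAV flights) is directed at the uncertain payoffs whose resolution most reduces the minimax regret.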


Notes

  1. The true mixed strategy would be a probability assignment to each pure strategy, where a pure strategy is an assignment of R resources to T targets. However, that is equivalent to the set \(\mathbf {X}\) described here, which is a more compact representation [12] (see the sketch following these notes).

  2. This is preliminary work on modeling poachers' behaviors; building more complex behavioral models is an interesting direction for future work.

  3. Models involving cognitive hierarchies [26] are not applicable in Stackelberg games, given that the attacker plays knowing the defender's actual strategy.

  4. Online Appendix: https://www.dropbox.com/s/620aqtinqsul8ys/Appendix.pdf?dl=0.

  5. A similar idea was introduced in [2], although in a very different domain without UAV paths.
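
As a sketch of the compact representation mentioned in note 1 (assuming, as in the setting of [12], R homogeneous resources, T targets, and no scheduling constraints): rather than mixing over the exponentially many pure strategies, the defender optimizes directly over marginal coverage vectors:

% Compact strategy space: x_i is the marginal probability that
% target i is covered; total coverage cannot exceed the R resources.
\[
  \mathbf{X} = \Big\{ x \in [0,1]^{T} : \sum_{i=1}^{T} x_i \le R \Big\}
\]

Korzhyk et al. [12] show that any such marginal vector can be realized as a mixture over pure strategies, which is why optimizing over \(\mathbf{X}\) is equivalent to optimizing over true mixed strategies.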

References

  1. Basilico, N., Gatti, N., Amigoni, F.: Leader-follower strategies for robotic patrolling in environments with arbitrary topologies. In: AAMAS (2009)

  2. Boutilier, C., Patrascu, R., Poupart, P., Schuurmans, D.: Constraint-based optimization and utility elicitation using the minimax decision criterion. Artif. Intell. 170, 686–713 (2006)

  3. Braziunas, D., Boutilier, C.: Assessing regret-based preference elicitation with the UTPref recommendation system. In: EC (2010)

  4. Brown, M., Haskell, W.B., Tambe, M.: Addressing scalability and robustness in security games with multiple boundedly rational adversaries. In: GameSec (2014)

  5. Brunswik, E.: The Conceptual Framework of Psychology. University of Chicago Press, Chicago (1952)

  6. De Farias, D.P., Van Roy, B.: On constraint sampling in the linear programming approach to approximate dynamic programming. Math. Oper. Res. 29, 462–478 (2004)

  7. Fang, F., Stone, P., Tambe, M.: When security games go green: designing defender strategies to prevent poaching and illegal fishing. In: IJCAI (2015)

  8. French, S.: Decision Theory: An Introduction to the Mathematics of Rationality. Halsted Press, New York (1986)

  9. Haskell, W.B., Kar, D., Fang, F., Tambe, M., Cheung, S., Denicola, L.E.: Robust protection of fisheries with COMPASS. In: IAAI (2014)

  10. Kiekintveld, C., Islam, T., Kreinovich, V.: Security games with interval uncertainty. In: AAMAS (2013)

  11. Kiekintveld, C., Jain, M., Tsai, J., Pita, J., Ordóñez, F., Tambe, M.: Computing optimal randomized resource allocations for massive security games. In: AAMAS (2009)

  12. Korzhyk, D., Conitzer, V., Parr, R.: Complexity of computing optimal Stackelberg strategies in security resource allocation games. In: AAAI (2010)

  13. Letchford, J., Vorobeychik, Y.: Computing randomized security strategies in networked domains. In: AARM (2011)

  14. McFadden, D.: Conditional logit analysis of qualitative choice behavior. Technical report (1972)

  15. McKelvey, R., Palfrey, T.: Quantal response equilibria for normal form games. Games Econ. Behav. 10(1), 6–38 (1995)

  16. Montesh, M.: Rhino poaching: a new form of organised crime. Technical report, University of South Africa (2013)

  17. Nguyen, T.H., Yadav, A., An, B., Tambe, M., Boutilier, C.: Regret-based optimization and preference elicitation for Stackelberg security games with uncertainty. In: AAAI (2014)

  18. Nguyen, T.H., Yang, R., Azaria, A., Kraus, S., Tambe, M.: Analyzing the effectiveness of adversary modeling in security games. In: AAAI (2013)

  19. Nudelman, E., Wortman, J., Shoham, Y., Leyton-Brown, K.: Run the GAMUT: a comprehensive approach to evaluating game-theoretic algorithms. In: AAMAS (2004)

  20. Pita, J., Jain, M., Ordóñez, F., Tambe, M., Kraus, S., Magori-Cohen, R.: Effective solutions for real-world Stackelberg games: when agents must deal with human uncertainties. In: AAMAS (2009)

  21. Qian, Y., Haskell, W.B., Jiang, A.X., Tambe, M.: Online planning for optimal protector strategies in resource conservation games. In: AAMAS (2014)

  22. Global Tiger Initiative Secretariat: Global tiger recovery program implementation plan: 2013–14. Report, The World Bank, Washington, DC (2013)

  23. Shieh, E., An, B., Yang, R., Tambe, M., Baldwin, C., DiRenzo, J., Maule, B., Meyer, G.: PROTECT: a deployed game theoretic system to protect the ports of the United States. In: AAMAS (2012)

  24. Tambe, M.: Security and Game Theory: Algorithms, Deployed Systems, Lessons Learned. Cambridge University Press (2011)

  25. Wilcox, R.: Applying Contemporary Statistical Techniques. Academic Press, New York (2002)

  26. Wright, J.R., Leyton-Brown, K.: Level-0 meta-models for predicting human behavior in games. In: EC, pp. 857–874 (2014)

  27. Yang, R., Ford, B., Tambe, M., Lemieux, A.: Adaptive resource allocation for wildlife protection against illegal poachers. In: AAMAS (2014)

  28. Yang, R., Ordóñez, F., Tambe, M.: Computing optimal strategy against quantal response in security games. In: AAMAS (2012)

  29. Yin, Z., Jiang, A.X., Tambe, M., Kiekintveld, C., Leyton-Brown, K., Sandholm, T., Sullivan, J.P.: TRUSTS: scheduling randomized patrols for fare inspection in transit systems using game theory. AI Mag. 33, 59 (2012)

  30. Yin, Z., Korzhyk, D., Kiekintveld, C., Conitzer, V., Tambe, M.: Stackelberg vs. Nash in security games: interchangeability, equivalence, and uniqueness. In: AAMAS (2010)


Acknowledgements

This research was supported by MURI Grant W911NF-11-1-0332 and by CREATE under grant number 2010-ST-061-RE0001. We wish to acknowledge the contribution of all the rangers and wardens in Queen Elizabeth National Park to the collection of law enforcement monitoring data in MIST, and to thank the Uganda Wildlife Authority, the Wildlife Conservation Society, the MacArthur Foundation, the US State Department, and USAID for financially supporting this data collection.

Author information

Correspondence to Thanh H. Nguyen.


Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Nguyen, T.H. et al. (2015). Making the Most of Our Regrets: Regret-Based Solutions to Handle Payoff Uncertainty and Elicitation in Green Security Games. In: Khouzani, M., Panaousis, E., Theodorakopoulos, G. (eds) Decision and Game Theory for Security. GameSec 2015. Lecture Notes in Computer Science, vol 9406. Springer, Cham. https://doi.org/10.1007/978-3-319-25594-1_10


  • DOI: https://doi.org/10.1007/978-3-319-25594-1_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-25593-4

  • Online ISBN: 978-3-319-25594-1

