
Learning machiavellian strategies for manipulation in Stackelberg security games

Annals of Mathematics and Artificial Intelligence

Abstract

This paper proposes a new approach to repeated Stackelberg security games (SSGs) based on manipulation. Manipulation is a strategy interpreted through the Machiavellianism social behavior theory, which consists of three main concepts: view, tactics, and immorality. The world is conceptualized as manipulators and manipulated (view). Players employ Machiavelli's tactics and Machiavellian intelligence to manipulate attacker/defender situations. Immorality plays a fundamental role in these games: defenders need not adhere to conventional morality in order to achieve their goals. We consider a security game model in which manipulating defenders and manipulated attackers engage cooperatively in a Nash game while simultaneously being restricted by a Stackelberg game. The resulting game is a non-cooperative bargaining game, with cooperation represented by the Nash bargaining solution. We propose an analytical formula for solving the manipulation game, which arises as the maximum of the quotient of two Nash products. The roles of the players in the Stackelberg security game are determined by their weights in the Nash bargaining approach. We consider only a subgame perfect equilibrium in which the solution of the manipulation game is a Strong Stackelberg Equilibrium (SSE). We employ a reinforcement learning (RL) approach for the implementation of the immorality. A numerical example, developing a strategic schedule for the efficient use of patrolling resources in a smart city, is handled using a class of homogeneous, ergodic, controllable, and finite Markov chains, showing the usefulness of the method for security resource allocation.
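The abstract characterizes the manipulation solution as the maximizer of a quotient of two Nash products. A minimal numerical sketch of that idea is shown below; the payoff functions, disagreement points, bargaining weights, and the choice of a symmetric reference product in the denominator are all illustrative assumptions for this sketch, not the paper's actual model:

```python
import numpy as np

# Hypothetical payoffs over a single defender strategy variable x in [0, 1].
def defender_utility(x):
    return 4.0 * x * (1.0 - x) + 1.0  # concave, assumed for illustration

def attacker_utility(x):
    return 2.0 - x  # decreasing in defender effort, assumed

d_def, d_att = 0.5, 0.4   # disagreement (status-quo) payoffs, assumed
alpha, beta = 0.6, 0.4    # bargaining weights encoding the players' roles, assumed

def nash_product(x, w1, w2):
    # Weighted Nash product of the gains over the disagreement point.
    gain_def = max(defender_utility(x) - d_def, 0.0)
    gain_att = max(attacker_utility(x) - d_att, 0.0)
    return gain_def ** w1 * gain_att ** w2

# Grid search over a discretized strategy set: maximize the quotient of the
# manipulation Nash product over a symmetric reference Nash product.
xs = np.linspace(0.0, 1.0, 1001)
ratio = np.array([nash_product(x, alpha, beta) /
                  max(nash_product(x, 0.5, 0.5), 1e-12) for x in xs])
x_star = float(xs[np.argmax(ratio)])
print(x_star)
```

With these assumed utilities the quotient reduces to a monotone transform of the ratio of the two players' gains, so the maximizer tilts the outcome toward the player with the larger bargaining weight, here the manipulating defender.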



Acknowledgements

The author wishes to thank the Reviewers for their useful and appropriate comments which improved the paper.

Author information

Corresponding author

Correspondence to Julio B. Clempner.

Ethics declarations

Competing interests

The authors declare that they have no conflict of interest.

Additional information

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Clempner, J.B. Learning machiavellian strategies for manipulation in Stackelberg security games. Ann Math Artif Intell 90, 373–395 (2022). https://doi.org/10.1007/s10472-022-09788-0
