DOI: 10.1145/3394486.3403089
Research Article

Malicious Attacks against Deep Reinforcement Learning Interpretations

Published: 20 August 2020

ABSTRACT

The past few years have witnessed the rapid development of deep reinforcement learning (DRL), which combines deep learning and reinforcement learning (RL). However, the adoption of deep neural networks makes the decision-making process of DRL opaque and lacking in transparency. Motivated by this, various interpretation methods for DRL have been proposed. These interpretation methods, however, implicitly assume that they are performed in a reliable and secure environment. In practice, sequential agent-environment interactions expose DRL algorithms and their corresponding downstream interpretations to additional adversarial risks. Despite the prevalence of malicious attacks, no existing work has studied the possibility and feasibility of malicious attacks against DRL interpretations. To bridge this gap, in this paper we investigate the vulnerability of DRL interpretation methods. Specifically, we introduce the first study of adversarial attacks against DRL interpretations and propose an optimization framework from which the optimal adversarial attack strategy can be derived. In addition, we study the vulnerability of DRL interpretation methods to model poisoning attacks and present an algorithmic framework to rigorously formulate the proposed poisoning attack. Finally, we conduct both theoretical analysis and extensive experiments to validate the effectiveness of the proposed malicious attacks against DRL interpretations.
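The paper's optimization framework and poisoning algorithm are not reproduced in this abstract, so the sketch below is only an assumption-laden illustration of the first kind of attack it describes: a small, gradient-derived perturbation of an observation that distorts a simple gradient-saliency interpretation of a hypothetical Q-network while softly preserving the agent's greedy action. Every name, loss term, and hyperparameter in the snippet (TinyQNet, eps, action_weight, and so on) is hypothetical and not taken from the paper.

# Illustrative sketch only (not the paper's framework): perturb an observation so that
# a gradient-saliency interpretation of a hypothetical Q-network is distorted, while
# softly encouraging the greedy action to stay the same. All names and hyperparameters
# are assumptions made for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyQNet(nn.Module):
    # A stand-in Q-network. Tanh (rather than ReLU) keeps the saliency map
    # differentiable w.r.t. the input, which the attack below relies on.
    def __init__(self, obs_dim=16, n_actions=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 32), nn.Tanh(),
                                 nn.Linear(32, n_actions))

    def forward(self, obs):
        return self.net(obs)


def saliency(q_net, obs, create_graph=False):
    # Gradient of the maximal Q-value w.r.t. the observation: a basic
    # saliency-map interpretation of the agent's decision.
    if not obs.requires_grad:
        obs = obs.clone().requires_grad_(True)
    q_max = q_net(obs).max(dim=-1).values.sum()
    (grad,) = torch.autograd.grad(q_max, obs, create_graph=create_graph)
    return grad.abs()


def attack_interpretation(q_net, obs, eps=0.05, steps=30, step_size=0.01,
                          action_weight=1.0):
    # PGD-style search for a small (L_inf-bounded) perturbation that pushes the
    # saliency map away from the clean one while penalizing action changes.
    clean_sal = saliency(q_net, obs).detach()
    clean_action = q_net(obs).argmax(dim=-1).detach()
    delta = torch.zeros_like(obs, requires_grad=True)
    for _ in range(steps):
        adv_obs = obs + delta
        adv_sal = saliency(q_net, adv_obs, create_graph=True)
        # Loss: maximize saliency distortion (negative MSE) and softly keep the
        # original greedy action (cross-entropy on Q-values as a rough surrogate).
        loss = -F.mse_loss(adv_sal, clean_sal) \
               + action_weight * F.cross_entropy(q_net(adv_obs), clean_action)
        (grad,) = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta -= step_size * grad.sign()   # descend on the combined loss
            delta.clamp_(-eps, eps)            # respect the perturbation budget
    return (obs + delta).detach()


if __name__ == "__main__":
    torch.manual_seed(0)
    q_net = TinyQNet()
    obs = torch.randn(1, 16)
    adv_obs = attack_interpretation(q_net, obs)
    same_action = q_net(adv_obs).argmax(-1).item() == q_net(obs).argmax(-1).item()
    sal_shift = (saliency(q_net, adv_obs) - saliency(q_net, obs)).norm().item()
    print(f"greedy action preserved: {same_action}, saliency shift (L2): {sal_shift:.4f}")

By contrast, the model poisoning attack mentioned in the abstract would act on the agent's parameters or training process rather than on individual observations at test time.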


Published in
KDD '20: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining
August 2020, 3664 pages
ISBN: 9781450379984
DOI: 10.1145/3394486

Copyright © 2020 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance Rates
Overall acceptance rate: 1,133 of 8,635 submissions, 13%
