DOI: 10.1145/3341216.3342210

Cracking Open the Black Box: What Observations Can Tell Us About Reinforcement Learning Agents

Published: 14 August 2019

Abstract

Machine learning (ML) solutions to challenging networking problems, while promising, are hard to interpret; the uncertainty about how they would behave in untested scenarios has hindered adoption. Using a case study of an ML-based video rate adaptation model, we show that carefully applying interpretability tools and systematically exploring the model inputs can identify unwanted or anomalous behaviors of the model, hinting at a potential path towards increasing trust in ML-based solutions.
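As a rough illustration of what "systematically exploring the model inputs" can look like, the minimal sketch below (not the authors' code; the stand-in policy, input ranges, and bitrate ladder are assumptions, while the paper's actual case study targets a trained Pensieve-style rate-adaptation agent) sweeps one input of a rate-adaptation policy while holding the other fixed and flags buffer levels where the chosen bitrate jumps by more than one quality level:

    # Hypothetical stand-in for a trained rate-adaptation policy; the real
    # agent under study is a neural network, but the probing procedure is the
    # same: fix all inputs but one, sweep that one, and inspect the sequence
    # of chosen actions.
    import numpy as np

    BITRATES_KBPS = [300, 750, 1200, 1850, 2850, 4300]  # illustrative ladder, not from the paper

    def policy(throughput_mbps: float, buffer_s: float) -> int:
        """Return the index of the chosen bitrate (stand-in logic)."""
        score = 1.0 * throughput_mbps + 0.15 * buffer_s
        return int(np.clip(round(score), 0, len(BITRATES_KBPS) - 1))

    def sweep_buffer(throughput_mbps: float, buffer_grid: np.ndarray) -> list:
        """Chosen action at each buffer level, with throughput held fixed."""
        return [policy(throughput_mbps, b) for b in buffer_grid]

    if __name__ == "__main__":
        buffers = np.linspace(0.0, 30.0, 61)          # 0-30 s of buffered video
        for tput in (1.0, 3.0, 6.0):                  # fixed throughput levels (Mbps)
            actions = sweep_buffer(tput, buffers)
            # Flag buffer levels where the decision jumps by more than one level:
            jumps = [round(float(buffers[i]), 1) for i in range(1, len(actions))
                     if abs(actions[i] - actions[i - 1]) > 1]
            print(f"throughput={tput} Mbps: indices {sorted(set(actions))}, "
                  f"jumps >1 level at buffer(s) {jumps}")

Sweeps of this kind, repeated over a grid of fixed inputs, are one way to surface decision regions worth a closer look with post-hoc explanation tools such as LIME or SHAP.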

Supplementary Material

MP4 File (p29-dethise.mp4)



Published In

NetAI'19: Proceedings of the 2019 Workshop on Network Meets AI & ML
August 2019
96 pages
ISBN:9781450368728
DOI:10.1145/3341216
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.


Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. explainable machine learning
  2. feature analysis
  3. neural adaptive video streaming
  4. post-hoc explanations

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

SIGCOMM '19: ACM SIGCOMM 2019 Conference
August 23, 2019
Beijing, China

Acceptance Rates

NetAI'19 Paper Acceptance Rate: 13 of 38 submissions (34%)
Overall Acceptance Rate: 13 of 38 submissions (34%)


Cited By

  • XAI for Interpretable Multimodal Architectures with Contextual Input in Mobile Network Traffic Classification. 2024 IFIP Networking Conference (IFIP Networking), 757-762, 2024. https://doi.org/10.23919/IFIPNetworking62109.2024.10619769
  • Redefining Counterfactual Explanations for Reinforcement Learning: Overview, Challenges and Opportunities. ACM Computing Surveys 56(9), 1-33, 2024. https://doi.org/10.1145/3648472
  • Explainable Deep-Learning Approaches for Packet-Level Traffic Prediction of Collaboration and Communication Mobile Apps. IEEE Open Journal of the Communications Society 5, 1299-1324, 2024. https://doi.org/10.1109/OJCOMS.2024.3366849
  • Inferring Visibility of Internet Traffic Matrices Using eXplainable AI. NOMS 2024 - IEEE Network Operations and Management Symposium, 1-6, 2024. https://doi.org/10.1109/NOMS59830.2024.10575173
  • Decision on Control Path: Rule-Based Policy Conversion. In Latency Optimization in Interactive Multimedia Streaming, 43-60, 2024. https://doi.org/10.1007/978-981-97-6729-8_4
  • Cooperative-Competitive Decision-Making in Resource Management: A Reinforcement Learning Perspective. In Intelligent Data Engineering and Automated Learning – IDEAL 2024, 375-386, 2024. https://doi.org/10.1007/978-3-031-77731-8_34
  • Influence of video content type on the usefulness of reinforcement learning algorithms in DASH systems. Journal of Computer Sciences Institute 27, 162-170, 2023. https://doi.org/10.35784/jcsi.3579
  • EXPLORA: AI/ML EXPLainability for the Open RAN. Proceedings of the ACM on Networking 1(CoNEXT3), 1-26, 2023. https://doi.org/10.1145/3629141
  • Improving Performance, Reliability, and Feasibility in Multimodal Multitask Traffic Classification with XAI. IEEE Transactions on Network and Service Management 20(2), 1267-1289, 2023. https://doi.org/10.1109/TNSM.2023.3246794
  • Interpretable Modeling of Deep Reinforcement Learning Driven Scheduling. 2023 31st International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS), 1-8, 2023. https://doi.org/10.1109/MASCOTS59514.2023.10387651
