ABSTRACT
Deep learning has been successfully exploited to address different multimedia problems in recent years. Academic researchers are now shifting their attention from identifying what problems deep learning CAN address to exploring what problems it CANNOT address. This tutorial starts with a summary of six 'CANNOT' problems deep learning fails to solve at the current stage, i.e., low stability, debugging difficulty, poor parameter transparency, poor incrementality, poor reasoning ability, and machine bias. These problems share a common origin: the lack of deep learning interpretation. This tutorial attempts to map the six 'CANNOT' problems onto three levels of deep learning interpretation: (1) Locating - accurately and efficiently locating which features contribute most to the output. (2) Understanding - bidirectional semantic access between human knowledge and the deep learning algorithm. (3) Expandability - effectively storing, accumulating, and reusing the models learned by deep learning. Existing studies falling into these three levels will be reviewed in detail, and a discussion of interesting future directions will be provided at the end.
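The 'Locating' level asks which input features most affect a model's output. One common baseline for this is gradient-times-input saliency, which scores each feature by the product of the output's gradient with respect to that feature and the feature's value. A minimal sketch on a single logistic unit follows; the model, weights, and inputs are illustrative assumptions, not drawn from the tutorial itself:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_x_input(w, b, x):
    """Attribute a logistic unit's output to each input feature
    using the gradient-times-input saliency heuristic."""
    y = sigmoid(np.dot(w, x) + b)
    grad = y * (1.0 - y) * w   # dy/dx for a logistic unit
    return grad * x            # per-feature contribution scores

# Toy example: feature 0 drives the output, feature 2 is ignored (weight 0)
w = np.array([2.0, -1.0, 0.0])
b = 0.0
x = np.array([1.0, 1.0, 5.0])
scores = gradient_x_input(w, b, x)
```

Here `scores[0]` dominates and `scores[2]` is exactly zero, matching the intuition that a zero-weight feature cannot contribute to the output; for deep networks the same idea is applied by backpropagating the output gradient to the input layer.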
Index Terms
- Deep Learning Interpretation