DOI: 10.1145/3240508.3241472
tutorial

Deep Learning Interpretation

Published: 15 October 2018

ABSTRACT

Deep learning has been successfully exploited to address a range of multimedia problems in recent years. Academic researchers are now shifting their attention from identifying what problems deep learning CAN address to exploring what problems deep learning CAN NOT address. This tutorial begins by summarizing six 'CAN NOT' problems that deep learning fails to solve at the current stage: low stability, debugging difficulty, poor parameter transparency, poor incrementality, poor reasoning ability, and machine bias. These problems share a common origin: the lack of deep learning interpretation. The tutorial then maps the six 'CAN NOT' problems to three levels of deep learning interpretation: (1) Locating - accurately and efficiently identifying which features contribute most to the output. (2) Understanding - bidirectional semantic access between human knowledge and the deep learning algorithm. (3) Expandability - storing, accumulating, and reusing the models learned by deep learning. Existing studies at each of these three levels are reviewed in detail, and the tutorial concludes with a discussion of interesting future directions.


Published in

MM '18: Proceedings of the 26th ACM International Conference on Multimedia
October 2018, 2167 pages
ISBN: 9781450356657
DOI: 10.1145/3240508
Copyright © 2018 Owner/Author

      Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery, New York, NY, United States

Qualifiers: tutorial

      Acceptance Rates

MM '18 paper acceptance rate: 209 of 757 submissions (28%). Overall acceptance rate: 995 of 4,171 submissions (24%).

