DOI: 10.1145/3447527.3474873
Extended Abstract

A Mobile Tool that Helps Nonexperts Make Sense of Pretrained CNN by Interacting with Their Daily Surroundings

Published: 27 September 2021

ABSTRACT

Current research on explainable AI (XAI) is aimed primarily at expert users (data scientists or AI developers). However, there is a growing emphasis on making AI understandable to non-experts who are expected to use AI techniques but have limited knowledge of them. We propose a mobile application that helps non-experts understand convolutional neural networks (CNNs) interactively: it lets users take pictures of surrounding objects and have a pre-trained CNN recognize them. We use a recent XAI technique, Class Activation Mapping, to visualize the model's decision (the image regions that contribute most to a given prediction). This playful learning tool was deployed in college courses, where it helped design students gain a vivid understanding of the capabilities and limitations of pre-trained CNNs in the real world. We thereby contribute an online tool that serves two purposes: first, it helps non-experts interactively learn how a pre-trained CNN works; second, researchers can use it to probe and characterize non-experts' sensemaking processes, yielding insights for explainable AI design beyond expert users.
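The visualization technique named in the abstract, Class Activation Mapping (CAM, Zhou et al. 2016), has a simple core: for a CNN that ends in global average pooling followed by a linear classifier, the heat map for a class is a weighted sum of the last convolutional layer's feature maps, using that class's classifier weights. The following NumPy sketch illustrates only this computation; the function name and toy inputs are illustrative and not taken from the paper's implementation:

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Compute a CAM heat map.

    feature_maps: (K, H, W) activations of the last conv layer.
    class_weights: (K,) linear-classifier weights for one class.
    Returns an (H, W) map normalized to [0, 1].
    """
    # CAM_c(x, y) = sum_k w_k^c * f_k(x, y): contract over the channel axis.
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # -> (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy example: two 2x2 feature maps; the second map dominates this class.
f = np.array([[[1.0, 0.0], [0.0, 0.0]],
              [[0.0, 0.0], [0.0, 2.0]]])
w = np.array([0.5, 1.0])
heat = class_activation_map(f, w)  # [[0.25, 0.0], [0.0, 1.0]]
```

In the app itself, the heat map would be upsampled to the input image's size and overlaid on the user's photo, highlighting the area the pre-trained model relied on.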

Published in:
MobileHCI '21: Adjunct Publication of the 23rd International Conference on Mobile Human-Computer Interaction
September 2021, 150 pages
ISBN: 9781450383295
DOI: 10.1145/3447527

      Copyright © 2021 Owner/Author

      Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher: Association for Computing Machinery, New York, NY, United States


Qualifiers: extended abstract, research, refereed limited

Overall Acceptance Rate: 202 of 906 submissions, 22%
