ABSTRACT
Current research on explainable AI (XAI) is primarily aimed at expert users (data scientists or AI developers). However, there is an increasing emphasis on making AI understandable to non-experts, who are expected to use AI techniques but have limited knowledge about AI. We propose a mobile application that helps non-experts understand convolutional neural networks (CNNs) interactively: it allows users to take pictures of surrounding objects and uses a pre-trained CNN to recognize them. We use a state-of-the-art XAI technique, the Class Activation Map (CAM), to visualize the model's decision (the image regions that contribute most to a specific prediction). This playful learning tool was deployed in college courses and helped design students gain a vivid understanding of the capabilities and limitations of pre-trained CNNs in the real world. We thereby contribute an online tool that serves two purposes: first, it helps non-experts interactively learn how a pre-trained CNN works; second, it lets researchers probe and characterize non-experts' sensemaking processes, which can yield insights for explainable AI design beyond expert users.
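To make the CAM idea concrete, the sketch below shows how such a heat map can be computed for a pre-trained image classifier. This is an illustrative Python/Keras example under stated assumptions, not the authors' implementation: MobileNetV2, the layer names `out_relu` and `predictions`, the 224x224 input size, and the helper `class_activation_map` are all choices made for this sketch.

```python
import numpy as np
import tensorflow as tf

# Illustrative stand-in for the app's pre-trained classifier (assumption:
# Keras MobileNetV2 with ImageNet weights; the paper does not prescribe this).
model = tf.keras.applications.MobileNetV2(weights="imagenet")

# Layer names are assumptions for this particular architecture; inspect
# model.summary() when using a different backbone.
LAST_CONV = "out_relu"       # final conv feature map before global average pooling
CLASSIFIER = "predictions"   # final dense (softmax) layer

def class_activation_map(image):
    """Return a coarse CAM heat map and the predicted class index.

    `image`: RGB array of shape (224, 224, 3) with values in [0, 255].
    """
    x = tf.keras.applications.mobilenet_v2.preprocess_input(
        image[np.newaxis].astype("float32"))

    # One forward pass exposing both the conv feature map and the class scores.
    feat_model = tf.keras.Model(
        model.inputs, [model.get_layer(LAST_CONV).output, model.output])
    feats, preds = feat_model(x)          # feats: (1, 7, 7, 1280)
    cls = int(tf.argmax(preds[0]))        # most probable class

    # CAM (Zhou et al. 2016): weight each feature-map channel by the
    # classifier weight connecting it to the predicted class, then sum.
    w = model.get_layer(CLASSIFIER).get_weights()[0][:, cls]   # (1280,)
    cam = tf.nn.relu(tf.reduce_sum(feats[0] * w, axis=-1))     # (7, 7)
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy(), cls
```

In practice, the coarse map is upsampled to the input resolution and overlaid on the photo as a heat map, which is how the tool described above highlights the image area that led to a given prediction.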