DOI: 10.1145/3472301.3484360
research-article

Analysis of the User Experience with a Multiperspective Tool for Explainable Machine Learning in Light of Interactive Principles

Published: 18 October 2021

Abstract

Machine Learning (ML) models are now widely used, often as "magical black boxes", across many domains and for diverse goals, yet how they generate their results is still not fully understood, even by knowledgeable users. If users cannot interpret or trust a model's predictions, they will not use it. Furthermore, the human role is often not (properly) considered in the development of ML systems. In this article, we discuss the user-centered development of Interactive Machine Learning systems. We ground our discussion on Explain-ML, a multiperspective, human-centered machine learning tool that assists users in building, refining, and interpreting ML models. To do so, we analyzed the results of the evaluation of Explain-ML with potential users in light of the principles for Interactive ML systems design. Our results contribute to the understanding and consolidation of these principles. Moreover, the experience gained from the discussion of Explain-ML, its user-centered development, and its evaluation with users, based on these principles, is relevant to the research and development of Interactive ML tools that involve ML explainability.

        Published In

        IHC '21: Proceedings of the XX Brazilian Symposium on Human Factors in Computing Systems
        October 2021
        523 pages
        ISBN:9781450386173
        DOI:10.1145/3472301

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Publication History

        Published: 18 October 2021

        Author Tags

        1. Human Centered Machine Learning
        2. Human-Computer Interaction
        3. Interpretability
        4. Machine Learning Models

        Qualifiers

        • Research-article
        • Research
        • Refereed limited

        Conference

        IHC '21

        Acceptance Rates

        IHC '21 Paper Acceptance Rate 29 of 77 submissions, 38%;
        Overall Acceptance Rate 331 of 973 submissions, 34%
