Abstract
Research in the area of explainable AI (XAI) has made some progress. Research papers [9, 17] report that explainability cannot be built into a technology without understanding the needs, goals, and tasks of the target user group. However, little research has provided evidence that explanations should be specific to a user role. The research reported in this paper intends to provide data points showing that explanations need to be user-role specific. It addresses two research questions. RQ1: Is a one-explanation-fits-all approach acceptable? RQ2: What is the relationship between the explanation qualities? To better understand explainability, the paper assumes three explanation qualities: trustworthiness (contributing to acceptability), understandability (contributing to effectiveness), and actionability (contributing to efficiency). The paper hypothesizes that trustworthiness is a prerequisite for understandability, which in turn is a prerequisite for actionability. A user-centered design approach was applied to elicit explanation needs and to validate them with representatives of the target user group, energy engineers: professionals who maintain buildings and their building services, providing a comfortable environment for occupants while optimizing cost and other goals. The research found that, even for a single user group (energy engineers), different explanations are needed for different user steps; the one-explanation-fits-all hypothesis had to be rejected. Based on the results, the hypothesized relationship between trustworthiness, understandability, and actionability also had to be rejected. A new hypothesis is formulated: understandability (contributing to effectiveness) and actionability (contributing to efficiency) are prerequisites for trustworthiness (contributing to acceptability).
References
Assaf, R., Schumann, A.: Explainable deep neural networks for multivariate time series predictions. In: Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pp. 6488–6490. International Joint Conferences on Artificial Intelligence Organization (2019). https://doi.org/10.24963/ijcai.2019/932
Ben David, D., Resheff, Y.S., Tron, T.: Explainable AI and adoption of financial algorithmic advisors: an experimental study, pp. 390–400. Association for Computing Machinery, New York, NY, USA (2021). https://doi.org/10.1145/3461702.3462565
Carletti, M., Masiero, C., Beghi, A., Susto, G.A.: Explainable machine learning in industry 4.0: evaluating feature importance in anomaly detection to enable root cause analysis. In: 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), pp. 21–26 (2019). https://doi.org/10.1109/SMC.2019.8913901
Creswell, J.W., Creswell, J.D.: Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, 5th edn. SAGE Publications, Los Angeles, CA, USA (2018)
Degen, H.: Respect the User's Time: Experience Architecture and Design for Efficiency, 1st edn. Helmut Degen, Plainsboro, NJ, USA (2022). https://www.designforefficiency.com
Degen, H., Budnik, C.J., Chitre, K., Lintereur, A.: How to explain it to facility managers? A qualitative, industrial user research study for explainability. In: Stephanidis, C., et al. (eds.) HCII 2021. LNCS, vol. 13095, pp. 401–422. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-90963-5_31
Ehsan, U., Tambwekar, P., Chan, L., Harrison, B., Riedl, M.: Automated rationale generation: a technique for explainable AI and its effects on human perceptions. arXiv (2019). https://arxiv.org/abs/1901.03729
Granollers, T., Lorés, J.: Incorporation of users in the evaluation of usability by cognitive walkthrough. In: Navarro-Prieto, R., Vidal, J.L. (eds.) HCI Related Papers of Interacción 2004, pp. 243–255. Springer, Dordrecht (2006). https://doi.org/10.1007/1-4020-4205-1_20
Gunning, D., Vorm, E., Wang, J.Y., Turek, M.: DARPA’s explainable AI (XAI) program: a retrospective. Appl. AI Lett. 2(4), e61 (2021)
Hong, C.W., Lee, C., Lee, K., Ko, M.S., Kim, D.E., Hur, K.: Remaining useful life prognosis for turbofan engine using explainable deep neural networks with dimensionality reduction. Sensors 20(22) (2020). https://doi.org/10.3390/s20226626, https://www.mdpi.com/1424-8220/20/22/6626
Islam, M.R., Ahmed, M.U., Barua, S., Begum, S.: A systematic review of explainable artificial intelligence in terms of different application domains and tasks. Appl. Sci. 12(3) (2022). https://doi.org/10.3390/app12031353, https://www.mdpi.com/2076-3417/12/3/1353
ISO 9241–110:2020(E): Ergonomics of human-system interaction - Part 110: Dialogue principles. Standard, International Organization for Standardization, Geneva, CH (2020). https://www.iso.org/obp/ui/#iso:std:iso:9241:-110:ed2:v1:en
ISO 9241–210:2019(E): Ergonomics of human-system interaction - Part 210: Human-centred design for interactive systems. Standard, International Organization for Standardization, Geneva, CH (2019). https://www.iso.org/standard/77520.html
Itani, S., Lecron, F., Fortemps, P.: A one-class classification decision tree based on kernel density estimation. Appl. Soft Comput. 91, 106250 (2020)
Larasati, R., De Liddo, A., Motta, E.: The effect of explanation styles on user’s trust. In: 2020 Workshop on Explainable Smart Systems for Algorithmic Transparency in Emerging Technologies (2020). https://oro.open.ac.uk/70421/
Loyola-González, O., et al.: An explainable artificial intelligence model for clustering numerical databases. IEEE Access 8, 52370–52384 (2020). https://doi.org/10.1109/ACCESS.2020.2980581
Miller, T.: Explanation in artificial intelligence: insights from the social sciences. Artif. Intell. 267, 1–38 (2019)
Nielsen, J.: Usability Engineering. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA (1994)
Nor, A.K.M., Pedapati, S.R., Muhammad, M.: Application of explainable AI (XAI) for anomaly detection and prognostic of gas turbines with uncertainty quantification. Preprints (2021). https://www.preprints.org/manuscript/202109.0034/v1
Norman, D.A., Draper, S.W.: User Centered System Design: New Perspectives on Human-Computer Interaction. Taylor & Francis, Hillsdale, NJ, USA (1986)
Nourani, M., King, J.T., Ragan, E.D.: The role of domain expertise in user trust and the impact of first impressions with intelligent systems. ArXiv abs/2008.09100 (2020). https://www.semanticscholar.org/paper/The-Role-of-Domain-Expertise-in-User-Trust-and-the-Nourani-King/23c9685bbecaa187ea4d0d1f8aed8ca46f9bb996
Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for AI-based clinical decision support systems. In: CHI Conference on Human Factors in Computing Systems. CHI ’22, Association for Computing Machinery, New York, NY, USA (2022). https://doi.org/10.1145/3491102.3502104
Serradilla, O., Zugasti, E., Cernuda, C., Aranburu, A., de Okariz, J.R., Zurutuza, U.: Interpreting remaining useful life estimations combining explainable artificial intelligence and domain knowledge in industrial machinery. In: 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), pp. 1–8 (2020). https://doi.org/10.1109/FUZZ48607.2020.9177537
Shalaeva, V., Alkhoury, S., Marinescu, J., Amblard, C., Bisson, G.: Multi-operator decision trees for explainable time-series classification. In: Medina, J., et al. (eds.) IPMU 2018. CCIS, vol. 853, pp. 86–99. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-91473-2_8
Sun, K.H., Huh, H., Tama, B.A., Lee, S.Y., Jung, J.H., Lee, S.: Vision-based fault diagnostics using explainable deep learning with class activation maps. IEEE Access 8, 129169–129179 (2020). https://doi.org/10.1109/ACCESS.2020.3009852
van der Waa, J., Nieuwburg, E., Cremers, A., Neerincx, M.: Evaluating XAI: a comparison of rule-based and example-based explanations. Artif. Intell. 291, 103404 (2021)
Vilone, G., Longo, L.: Explainable artificial intelligence: a systematic review. arXiv (2020). https://arxiv.org/abs/2006.00093
Vilone, G., Longo, L.: Classification of explainable artificial intelligence methods through their output formats. Mach. Learn. Knowl. Extr. 3(3), 615–661 (2021)
ten Zeldam, S., de Jong, A., Loendersloot, R., Tinga, T.: Automated failure diagnosis in aviation maintenance using explainable artificial intelligence (XAI). In: PHM Society European Conference, vol. 4, no. 1 (2018). https://papers.phmsociety.org/index.php/phme/article/view/432
Acknowledgment
The authors want to thank the participants for their time and insights.
Copyright information
© 2022 Springer Nature Switzerland AG
Cite this paper
Degen, H., Budnik, C., Conte, G., Lintereur, A., Weber, S. (2022). How to explain it to energy engineers? In: Chen, J.Y.C., Fragomeni, G., Degen, H., Ntoa, S. (eds.) HCI International 2022 – Late Breaking Papers: Interacting with eXtended Reality and Artificial Intelligence. HCII 2022. Lecture Notes in Computer Science, vol. 13518. Springer, Cham. https://doi.org/10.1007/978-3-031-21707-4_20
DOI: https://doi.org/10.1007/978-3-031-21707-4_20
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-21706-7
Online ISBN: 978-3-031-21707-4