How to Explain It to Energy Engineers?

A Qualitative User Study About Trustworthiness, Understandability, and Actionability

  • Conference paper
  • In: HCI International 2022 – Late Breaking Papers: Interacting with eXtended Reality and Artificial Intelligence (HCII 2022)

Abstract

Research in the area of explainable AI (XAI) has made some progress. Research papers [9, 17] report that explainability cannot be built into technology without understanding the needs, goals, and tasks of the target user group, yet little research has been done to provide evidence that explanations should be user role specific. The research results reported in this paper are intended to provide data points showing that explanations need to be user role specific. The research addresses two research questions: RQ1: Is a one-explanation-fits-all approach acceptable? RQ2: Which relationship exists between trustworthiness, understandability, and actionability? To better understand explainability, the paper assumes three explanation qualities: trustworthiness (contributing to acceptability), understandability (contributing to effectiveness), and actionability (contributing to efficiency). The paper hypothesizes that trustworthiness is a prerequisite for understandability, which in turn is a prerequisite for actionability. A user-centered design approach was applied to elicit explanation needs and validate them with representatives of the target user group of energy engineers: professionals who maintain buildings and their building services, providing a comfortable environment for occupants while optimizing cost and other goals. The research found that, even for one user group (energy engineers), different explanations are needed for different user steps. The one-explanation-fits-all hypothesis had to be rejected. Based on the results, the hypothesized relationship between trustworthiness, understandability, and actionability also had to be rejected. A new hypothesis is formulated: understandability (contributing to effectiveness) and actionability (contributing to efficiency) are prerequisites for trustworthiness (contributing to acceptability).

References

  1. Assaf, R., Schumann, A.: Explainable deep neural networks for multivariate time series predictions. In: Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pp. 6488–6490. International Joint Conferences on Artificial Intelligence Organization (2019). https://doi.org/10.24963/ijcai.2019/932

  2. Ben David, D., Resheff, Y.S., Tron, T.: Explainable AI and adoption of financial algorithmic advisors: an experimental study, pp. 390–400. Association for Computing Machinery, New York, NY, USA (2021). https://doi.org/10.1145/3461702.3462565

  3. Carletti, M., Masiero, C., Beghi, A., Susto, G.A.: Explainable machine learning in industry 4.0: evaluating feature importance in anomaly detection to enable root cause analysis. In: 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), pp. 21–26 (2019). https://doi.org/10.1109/SMC.2019.8913901

  4. Creswell, J.W., Creswell, J.D.: Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, 5th edn. SAGE Publications, Los Angeles, CA, USA (2018)

  5. Degen, H.: Respect the User’s Time: Experience Architecture and Design for Efficiency, 1st edn. Helmut Degen, Plainsboro, NJ, USA (June 2022). https://www.designforefficiency.com

  6. Degen, H., Budnik, C.J., Chitre, K., Lintereur, A.: How to explain it to facility managers? A qualitative, industrial user research study for explainability. In: Stephanidis, C., et al. (eds.) HCII 2021. LNCS, vol. 13095, pp. 401–422. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-90963-5_31

  7. Ehsan, U., Tambwekar, P., Chan, L., Harrison, B., Riedl, M.: Automated rationale generation: a technique for explainable AI and its effects on human perceptions. arXiv (2019). https://arxiv.org/abs/1901.03729

  8. Granollers, T., Lorés, J.: Incorporation of users in the evaluation of usability by cognitive walkthrough. In: Navarro-Prieto, R., Vidal, J.L. (eds.) HCI Related Papers of Interacción 2004, pp. 243–255. Springer Netherlands, Dordrecht (2006). https://doi.org/10.1007/1-4020-4205-1_20

  9. Gunning, D., Vorm, E., Wang, J.Y., Turek, M.: DARPA’s explainable AI (XAI) program: a retrospective. Appl. AI Lett. 2(4), e61 (2021)

  10. Hong, C.W., Lee, C., Lee, K., Ko, M.S., Kim, D.E., Hur, K.: Remaining useful life prognosis for turbofan engine using explainable deep neural networks with dimensionality reduction. Sensors 20(22) (2020). https://doi.org/10.3390/s20226626, https://www.mdpi.com/1424-8220/20/22/6626

  11. Islam, M.R., Ahmed, M.U., Barua, S., Begum, S.: A systematic review of explainable artificial intelligence in terms of different application domains and tasks. Appl. Sci. 12(3) (2022). https://doi.org/10.3390/app12031353, https://www.mdpi.com/2076-3417/12/3/1353

  12. ISO 9241–110:2020(E): Ergonomics of human-system interaction - Part 110: Dialogue principles. Standard, International Organization for Standardization, Geneva, CH (2020). https://www.iso.org/obp/ui/#iso:std:iso:9241:-110:ed2:v1:en

  13. ISO 9241–210:2019(E): Ergonomics of human-system interaction - Part 210: Human-centred design for interactive systems. Standard, International Organization for Standardization, Geneva, CH (2019). https://www.iso.org/standard/77520.html

  14. Itani, S., Lecron, F., Fortemps, P.: A one-class classification decision tree based on kernel density estimation. Appl. Soft Comput. 91, 106250 (2020)

  15. Larasati, R., De Liddo, A., Motta, E.: The effect of explanation styles on user’s trust. In: 2020 Workshop on Explainable Smart Systems for Algorithmic Transparency in Emerging Technologies (2020). https://oro.open.ac.uk/70421/

  16. Loyola-González, O., et al.: An explainable artificial intelligence model for clustering numerical databases. IEEE Access 8, 52370–52384 (2020). https://doi.org/10.1109/ACCESS.2020.2980581

  17. Miller, T.: Explanation in artificial intelligence: insights from the social sciences. Artif. Intell. 267, 1–38 (2019)

  18. Nielsen, J.: Usability Engineering. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA (1994)

  19. Nor, A.K.M., Pedapati, S.R., Muhammad, M.: Application of explainable AI (XAI) for anomaly detection and prognostic of gas turbines with uncertainty quantification. Preprints (2021). https://www.preprints.org/manuscript/202109.0034/v1

  20. Norman, D.A., Draper, S.W.: User Centered System Design: New Perspectives on Human-Computer Interaction. Taylor & Francis, Hillsdale, NJ, USA (1986)

  21. Nourani, M., King, J.T., Ragan, E.D.: The role of domain expertise in user trust and the impact of first impressions with intelligent systems. ArXiv abs/2008.09100 (2020). https://www.semanticscholar.org/paper/The-Role-of-Domain-Expertise-in-User-Trust-and-the-Nourani-King/23c9685bbecaa187ea4d0d1f8aed8ca46f9bb996

  22. Panigutti, C., Beretta, A., Giannotti, F., Pedreschi, D.: Understanding the impact of explanations on advice-taking: a user study for AI-based clinical decision support systems. In: CHI Conference on Human Factors in Computing Systems. CHI ’22, Association for Computing Machinery, New York, NY, USA (2022). https://doi.org/10.1145/3491102.3502104

  23. Serradilla, O., Zugasti, E., Cernuda, C., Aranburu, A., de Okariz, J.R., Zurutuza, U.: Interpreting remaining useful life estimations combining explainable artificial intelligence and domain knowledge in industrial machinery. In: 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), pp. 1–8 (2020). https://doi.org/10.1109/FUZZ48607.2020.9177537

  24. Shalaeva, V., Alkhoury, S., Marinescu, J., Amblard, C., Bisson, G.: Multi-operator decision trees for explainable time-series classification. In: Medina, J., et al. (eds.) IPMU 2018. CCIS, vol. 853, pp. 86–99. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-91473-2_8

  25. Sun, K.H., Huh, H., Tama, B.A., Lee, S.Y., Jung, J.H., Lee, S.: Vision-based fault diagnostics using explainable deep learning with class activation maps. IEEE Access 8, 129169–129179 (2020). https://doi.org/10.1109/ACCESS.2020.3009852

  26. van der Waa, J., Nieuwburg, E., Cremers, A., Neerincx, M.: Evaluating XAI: a comparison of rule-based and example-based explanations. Artif. Intell. 291, 103404 (2021)

  27. Vilone, G., Longo, L.: Explainable artificial intelligence: a systematic review. arXiv (2020). https://arxiv.org/abs/2006.00093

  28. Vilone, G., Longo, L.: Classification of explainable artificial intelligence methods through their output formats. Mach. Learn. Knowl. Extr. 3(3), 615–661 (2021)

  29. ten Zeldam, S., de Jong, A., Loendersloot, R., Tinga, T.: Automated failure diagnosis in aviation maintenance using explainable artificial intelligence (XAI). In: PHM Society European Conference, vol. 4, no. 1 (2018). https://papers.phmsociety.org/index.php/phme/article/view/432

Acknowledgment

The authors want to thank the participants for their time and insights.

Author information

Corresponding author

Correspondence to Helmut Degen.

7 Appendix

Fig. 8. Selected design elements for view 1

Fig. 9. Selected design elements for view 2

Copyright information

© 2022 Springer Nature Switzerland AG

Cite this paper

Degen, H., Budnik, C., Conte, G., Lintereur, A., Weber, S. (2022). How to Explain It to Energy Engineers?. In: Chen, J.Y.C., Fragomeni, G., Degen, H., Ntoa, S. (eds) HCI International 2022 – Late Breaking Papers: Interacting with eXtended Reality and Artificial Intelligence. HCII 2022. Lecture Notes in Computer Science, vol 13518. Springer, Cham. https://doi.org/10.1007/978-3-031-21707-4_20

  • DOI: https://doi.org/10.1007/978-3-031-21707-4_20

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-21706-7

  • Online ISBN: 978-3-031-21707-4

  • eBook Packages: Computer Science, Computer Science (R0)
