Abstract
The interpretation and explanation of machine learning models are important tasks for ensuring transparency and credibility. Various methods have been developed for this purpose, including SHAP, LIME, and Explanatory Dashboards. Applying these explainability methods aims not only at interpreting the machine learning model itself but also at building transparency into the implementation of a system that uses artificial intelligence algorithms. Building on part of an initial manufacturing project, we implemented techniques to improve the understanding of the model's time predictions through explainability methods. This strengthens trust among the entities involved and contributes to the overall effectiveness of the system in various application contexts. Our objective is to apply these strategies to other projects within Industry 4.0 and manufacturing, increasing the adoption of new, innovative, and understandable intelligent systems.
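The paper itself does not include code; the following is a minimal sketch of the kind of post-hoc explainability step the abstract describes, assuming a scikit-learn MLPRegressor as the regression network and the shap library's model-agnostic KernelExplainer. The dataset, feature counts, and sample sizes are illustrative placeholders, not details taken from the paper.

```python
# Sketch: explaining a neural-network regressor's predictions with SHAP.
# The synthetic data and model configuration below are hypothetical.
import shap
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for tabular manufacturing data.
X, y = make_regression(n_samples=500, n_features=6, noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train the regression ANN whose time predictions we want to explain.
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# Model-agnostic KernelExplainer: estimates, from a small background sample,
# how much each feature shifts an individual prediction (its SHAP value).
background = shap.sample(X_train, 50, random_state=0)
explainer = shap.KernelExplainer(model.predict, background)
shap_values = explainer.shap_values(X_test[:20])

# Global view: which features drive the predicted values overall.
shap.summary_plot(shap_values, X_test[:20], show=False)
```

LIME or an explainer-dashboard front end could be layered on the same trained model in an analogous way; the sketch only illustrates the general pattern of wrapping a black-box regressor with a post-hoc explainer.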
Acknowledgements
This work has been supported by FCT - Fundação para a Ciência e Tecnologia within the R&D Units Project Scope: UIDB/00319/2020.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Ribeiro, J., Santos, R., Analide, C., Silva, F. (2024). Exploring Transparency in Decisions of Artificial Neural Networks for Regression. In: Rocha, Á., Adeli, H., Dzemyda, G., Moreira, F., Poniszewska-Marańda, A. (eds) Good Practices and New Perspectives in Information Systems and Technologies. WorldCIST 2024. Lecture Notes in Networks and Systems, vol 987. Springer, Cham. https://doi.org/10.1007/978-3-031-60221-4_34
DOI: https://doi.org/10.1007/978-3-031-60221-4_34
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-60220-7
Online ISBN: 978-3-031-60221-4
eBook Packages: Intelligent Technologies and Robotics, Intelligent Technologies and Robotics (R0)