Exploring Transparency in Decisions of Artificial Neural Networks for Regression

  • Conference paper
Good Practices and New Perspectives in Information Systems and Technologies (WorldCIST 2024)

Abstract

The interpretation and explanation of machine learning models are important tasks for ensuring transparency and credibility. Various methods have been developed to achieve this goal, including SHAP, LIME, and explanatory dashboards. The application of these explainability methods aims not only at interpreting the machine learning model but also at incorporating transparency into the implementation of a system that uses artificial intelligence algorithms. Building on an initial manufacturing project, we implemented techniques to enhance the understanding of the model’s time predictions through explainability methods. This improves trust among the involved entities and contributes to the overall effectiveness of the system in various application contexts. Our objective is to apply these strategies to other projects within Industry 4.0 and manufacturing to increase the adoption of new, innovative, and understandable intelligent systems.
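The abstract mentions perturbation-based explainability methods such as SHAP and LIME for understanding a regression model's time predictions. As a minimal, stdlib-only illustration of the shared underlying idea (attributing a prediction's sensitivity to individual input features by perturbing them), the sketch below computes a simplified permutation-style importance score. The model, feature names, coefficients, and data here are all hypothetical, invented for this example; the paper's actual models and explainability pipeline are not shown on this page.

```python
# Toy regression "model" standing in for a trained neural network:
# predicted task time from two hypothetical features (names invented here).
def predict(duration_est, num_operations):
    return 2.0 * duration_est + 0.5 * num_operations

# Small hypothetical dataset: (duration_est, num_operations) rows.
rows = [(1.0, 4.0), (2.0, 2.0), (3.0, 6.0), (4.0, 1.0)]

def permutation_importance(model, rows, feature_idx):
    """Mean absolute change in the model's prediction when one feature's
    values are cyclically shifted across rows -- a deterministic stand-in
    for the random shuffling used in classic permutation importance."""
    vals = [r[feature_idx] for r in rows]
    shifted = vals[1:] + vals[:1]
    deltas = []
    for row, new_val in zip(rows, shifted):
        perturbed = list(row)
        perturbed[feature_idx] = new_val
        deltas.append(abs(model(*perturbed) - model(*row)))
    return sum(deltas) / len(deltas)

imp_duration = permutation_importance(predict, rows, 0)  # 3.0
imp_ops = permutation_importance(predict, rows, 1)       # 1.75
print(imp_duration, imp_ops)
```

A higher score means the prediction is more sensitive to that feature, which is the kind of signal SHAP and LIME surface in a more principled, per-prediction way.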



Acknowledgements

This work has been supported by FCT - Fundação para a Ciência e Tecnologia within the RD Units Project Scope: UIDB/00319/2020.

Author information

Correspondence to José Ribeiro.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Ribeiro, J., Santos, R., Analide, C., Silva, F. (2024). Exploring Transparency in Decisions of Artificial Neural Networks for Regression. In: Rocha, Á., Adeli, H., Dzemyda, G., Moreira, F., Poniszewska-Marańda, A. (eds) Good Practices and New Perspectives in Information Systems and Technologies. WorldCIST 2024. Lecture Notes in Networks and Systems, vol 987. Springer, Cham. https://doi.org/10.1007/978-3-031-60221-4_34
