
Generalized Entropy Loss Function in Neural Network: Variable’s Importance and Sensitivity Analysis

Conference paper
Proceedings of the 21st EANN (Engineering Applications of Neural Networks) 2020 Conference (EANN 2020)

Abstract

Artificial neural networks are powerful tools for data analysis, and they are particularly well suited to modelling relationships between variables in order to predict an outcome. A large number of error functions have been proposed in the literature to improve the predictive power of neural networks, yet only a few works employ Tsallis statistics, even though the approach has been applied successfully in other machine learning techniques. This paper examines various characteristics of the \(q\)-generalized loss function based on Tsallis entropy as an alternative loss measure in neural networks. To this end, we explore several methods for interpreting supervised neural network models: visualizing the model with a neural network interpretation diagram, assessing variable importance by disaggregating the model’s weights (Olden’s and Garson’s algorithms), and performing a sensitivity analysis of model responses to input variables (Lek’s profile).
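For orientation, the loss under study can be sketched from the Tsallis formalism cited below [5, 7]; this is a reconstruction from those sources, not a formula quoted from the paper itself. Tsallis entropy generalizes Shannon entropy via a parameter \(q\),

\[ S_q = \frac{1}{q-1}\left(1 - \sum_i p_i^{\,q}\right), \qquad \lim_{q \to 1} S_q = -\sum_i p_i \ln p_i, \]

and the corresponding \(q\)-logarithm is \(\ln_q x = \frac{x^{1-q}-1}{1-q}\). One common \(q\)-generalized counterpart of the binary cross-entropy loss simply replaces \(\ln\) with \(\ln_q\):

\[ L_q(y,\hat{y}) = -\sum_i \left[\, y_i \ln_q \hat{y}_i + (1-y_i) \ln_q (1-\hat{y}_i) \,\right], \]

which recovers the standard cross-entropy in the limit \(q \to 1\).

The interpretation workflow named in the abstract maps directly onto the R packages cited in the references: neuralnet [24] for training with a custom error function, and NeuralNetTools [8] for Garson’s and Olden’s algorithms, Lek’s profile, and the interpretation diagram. The following is a minimal sketch under stated assumptions: the toy data, the choice q = 1.5, and the helper name err_q are illustrative and may differ from the paper’s actual setup.

library(neuralnet)       # Fritsch and Guenther [24]
library(NeuralNetTools)  # Beck [8]

set.seed(42)

# Toy binary-classification data (illustrative assumption, not the paper's data)
n <- 500
dat <- data.frame(x1 = rnorm(n), x2 = rnorm(n), x3 = rnorm(n))
dat$y <- as.numeric(plogis(1.5 * dat$x1 - 0.8 * dat$x2 + 0.3 * dat$x3) > runif(n))

# q-generalized cross-entropy with q = 1.5 written as a literal, because
# neuralnet differentiates a custom err.fct symbolically with deriv();
# by convention err.fct takes the prediction x and the target y
err_q <- function(x, y) -(y * (x^(1 - 1.5) - 1) / (1 - 1.5) +
                            (1 - y) * ((1 - x)^(1 - 1.5) - 1) / (1 - 1.5))

nn <- neuralnet(y ~ x1 + x2 + x3, data = dat, hidden = 4,
                err.fct = err_q, linear.output = FALSE)

# The interpretation methods named in the abstract:
plotnet(nn)     # neural network interpretation diagram
garson(nn)      # Garson's algorithm: importance from absolute weight shares
olden(nn)       # Olden's method: connection weights, sign preserved
lekprofile(nn)  # Lek's profile: response curves across each input's range

A note on the two importance measures: Garson’s algorithm collapses the hidden-layer weights into non-negative importance shares, while Olden’s method sums the products of input-hidden and hidden-output weights, so it preserves both the magnitude and the direction of each variable’s contribution.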


References

  1. Zhang, Z., Beck, M.W., Winkler, D.A., Huang, B., Sibanda, W., Goyal, H.: Opening the black box of neural networks: methods for interpreting neural network models in clinical applications. Ann. Transl. Med. 6(11), 216 (2018)

  2. Gajowniczek, K., Orłowski, A., Ząbkowski, T.: Entropy based trees to support decision making for customer churn management. Acta Physica Polonica A 129(5), 971–979 (2016)

  3. Gajowniczek, K., Karpio, K., Łukasiewicz, P., Orłowski, A., Ząbkowski, T.: Q-entropy approach to selecting high income households. Acta Physica Polonica A 127(3a), A38–A44 (2015)

  4. Nafkha, R., Gajowniczek, K., Ząbkowski, T.: Do customers choose proper tariff? Empirical analysis based on Polish data using unsupervised techniques. Energies 11(3), 514 (2018)

  5. Gajowniczek, K., Orłowski, A., Ząbkowski, T.: Simulation study on the application of the generalized entropy concept in artificial neural networks. Entropy 20(4), 249 (2018)

  6. Golik, P., Doetsch, P., Ney, H.: Cross-entropy vs. squared error training: a theoretical and experimental comparison. In: Proceedings of the 14th Annual Conference of the International Speech Communication Association (Interspeech 2013), Lyon, France, pp. 1756–1760 (2013)

  7. Tsallis, C.: Introduction to Nonextensive Statistical Mechanics. Springer, New York (2009)

  8. Beck, M.W.: NeuralNetTools: visualization and analysis tools for neural networks. J. Stat. Softw. 85(11), 1–20 (2018)

  9. Liu, W., Wang, Z., Liu, X., Zeng, N., Liu, Y., Alsaadi, F.E.: A survey of deep neural network architectures and their applications. Neurocomputing 234, 11–26 (2017)

  10. Gajowniczek, K., Ząbkowski, T.: Short term electricity forecasting based on user behavior from individual smart meter data. J. Intell. Fuzzy Syst. 30(1), 223–234 (2016)

  11. Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature 323(6088), 533–536 (1986)

  12. Riedmiller, M.: Rprop – description and implementation details. Technical report, University of Karlsruhe, Germany (1994)

  13. Shannon, C.E.: A mathematical theory of communication. Bell Syst. Tech. J. 27, 623–656 (1948)

  14. Zurada, J.M., Malinowski, A., Cloete, I.: Sensitivity analysis for minimization of input data dimension for feedforward neural network. In: IEEE International Symposium on Circuits and Systems (ISCAS 1994), vol. 6. IEEE Press, London (1994)

  15. Engelbrecht, A.P., Cloete, I., Zurada, J.M.: Determining the significance of input parameters using sensitivity analysis. In: From Natural to Artificial Neural Computation. Springer, Malaga-Torremolinos (1995)

  16. Kim, S.H., Yoon, C., Kim, B.J.: Structural monitoring system based on sensitivity analysis and a neural network. Comput.-Aided Civil Infrastruct. Eng. 15, 309–318 (2000)

  17. Dimopoulos, Y., Bourret, P., Lek, S.: Use of some sensitivity criteria for choosing networks with good generalization ability. Neural Process. Lett. 2, 1–4 (1995)

  18. Garson, G.D.: Interpreting neural network connection weights. Artif. Intell. Expert 6, 46–51 (1991)

  19. Goh, A.T.C.: Back-propagation neural networks for modeling complex systems. Artif. Intell. Eng. 9, 143–151 (1995)

  20. Olden, J.D., Joy, M.K., Death, R.G.: An accurate comparison of methods for quantifying variable importance in artificial neural networks using simulated data. Ecol. Model. 178(3–4), 389–397 (2004)

  21. Lek, S., Delacoste, M., Baran, P., Dimopoulos, I., Lauga, J., Aulagnier, S.: Application of neural networks to modelling nonlinear relationships in ecology. Ecol. Model. 90(1), 39–52 (1996)

  22. Kuhn, M.: Building predictive models in R using the caret package. J. Stat. Softw. 28(5), 1–26 (2008)

  23. The R Development Core Team: R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria (2014)

  24. Fritsch, S., Guenther, F.: neuralnet: training of neural networks. R package version 1.33 (2016). https://CRAN.R-project.org/package=neuralnet. Accessed 10 Jan 2020


Author information

Correspondence to Krzysztof Gajowniczek.


Copyright information

© 2020 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Gajowniczek, K., Ząbkowski, T. (2020). Generalized Entropy Loss Function in Neural Network: Variable’s Importance and Sensitivity Analysis. In: Iliadis, L., Angelov, P., Jayne, C., Pimenidis, E. (eds) Proceedings of the 21st EANN (Engineering Applications of Neural Networks) 2020 Conference. EANN 2020. Proceedings of the International Neural Networks Society, vol 2. Springer, Cham. https://doi.org/10.1007/978-3-030-48791-1_42


  • DOI: https://doi.org/10.1007/978-3-030-48791-1_42

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-48790-4

  • Online ISBN: 978-3-030-48791-1

  • eBook Packages: Computer Science, Computer Science (R0)
