Adapting Loss Functions to Learning Progress Improves Accuracy of Classification in Neural Networks

  • Conference paper
Foundations of Intelligent Systems (ISMIS 2022)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13515)

Abstract

Power error loss (PEL) has recently been suggested as a more efficient generalization of binary or categorical cross entropy (BCE/CCE). However, since PEL requires adapting the exponent q of a power function to the training data and the learning progress, it has been argued that the observed improvements may be due to implicitly optimizing the learning rate. Here we invalidate this argument by optimizing the learning rate in each training step. We find that PEL remains clearly superior to BCE/CCE if q is properly decreased during learning. This proves that the dominant mechanism of PEL is a better adaptation to the output error distribution rather than an implicit manipulation of the learning rate.
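
To illustrate the idea summarized above, the sketch below shows a power-error-style loss with a tunable exponent q, together with a simple schedule that decreases q as training progresses. The exact PEL definition and the q-adaptation rule are those of the paper; the specific form mean(|y - t|**q), the linear schedule, and all function names here are illustrative assumptions, not the author's implementation.

```python
# Illustrative sketch only: a power-error-style loss with a tunable exponent q.
# The precise PEL formula and q-adaptation strategy are defined in the paper;
# mean(|y - t|**q) and the linear decay of q below are simplifying assumptions.
import numpy as np

def power_error_loss(y_pred, y_true, q=2.0):
    """Mean power error |y_pred - y_true|**q; q=2 reduces to mean squared error."""
    return np.mean(np.abs(y_pred - y_true) ** q)

def q_schedule(epoch, n_epochs, q_start=4.0, q_end=1.0):
    """Hypothetical schedule: decrease q linearly with learning progress."""
    frac = epoch / max(n_epochs - 1, 1)
    return q_start + frac * (q_end - q_start)

# Usage: evaluate the loss with an exponent q that shrinks over training.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(8, 3)).astype(float)  # one-hot-like targets
y_pred = rng.random((8, 3))                              # network outputs in (0, 1)
for epoch in range(3):
    q = q_schedule(epoch, n_epochs=3)
    print(f"epoch {epoch}: q={q:.2f}, loss={power_error_loss(y_pred, y_true, q):.4f}")
```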

This work was supported by the Ministerium für Wirtschaft, Arbeit und Tourismus Baden-Württemberg (VwV Invest BW - Innovation) via the project KICAD (FKZ BW1_0092/02) and by the Deutsches Bundesministerium für Verkehr und digitale Infrastruktur (Modernitätsfonds/mFUND) via the project AI4Infra (FKZ 19F2112C). The author acknowledges support from the state of Baden-Württemberg through bwHPC. The author is also grateful to German Nemirovski, Robert Frank, and Lukas Lorek for valuable discussions and help with the computing infrastructure.

Author information

Correspondence to Andreas Knoblauch.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Knoblauch, A. (2022). Adapting Loss Functions to Learning Progress Improves Accuracy of Classification in Neural Networks. In: Ceci, M., Flesca, S., Masciari, E., Manco, G., Raś, Z.W. (eds) Foundations of Intelligent Systems. ISMIS 2022. Lecture Notes in Computer Science, vol 13515. Springer, Cham. https://doi.org/10.1007/978-3-031-16564-1_26

  • DOI: https://doi.org/10.1007/978-3-031-16564-1_26

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-16563-4

  • Online ISBN: 978-3-031-16564-1

  • eBook Packages: Computer Science, Computer Science (R0)
