A Game-Theoretic Analysis of Deep Neural Networks

  • Conference paper
Algorithmic Aspects in Information and Management (AAIM 2021)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 13153)

Abstract

As an important branch of machine learning, deep learning has been used intensively in various fields of data science. Despite its popularity in practice, computing the optimal parameters of a deep neural network remains challenging; the problem has been shown to be NP-hard. In this paper we analyze deep neural networks through the lens of nonatomic congestion games, and we expect that this analysis can inspire the computation of optimal parameters of deep neural networks. We consider a deep neural network with linear activation functions of the form \(x+b\), where the biases b need not be zero. We show, under mild conditions, that learning the weights and the biases is equivalent to computing a social optimum flow of a nonatomic congestion game. When the deep neural network is used for classification, learning is even equivalent to computing an equilibrium flow. These results generalize the recent seminal work of [18], who proved similar results for deep neural networks with linear activation functions and zero biases.
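
The following Python sketch is our own illustration of the two game-theoretic objects mentioned above, not the paper's construction: it first evaluates a tiny network whose activations have the linear form \(x+b\), and then computes the social optimum flow and the Wardrop equilibrium flow of the classical Pigou congestion game. The network weights, the latency functions, and the use of numpy/scipy are assumptions made only for this example.

import numpy as np
from scipy.optimize import minimize_scalar

# Part 1: forward pass with linear activations sigma(z) = z + b (biases need not be zero).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)   # hidden layer
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)   # output layer

def forward(x):
    # Each layer is an affine map followed by the linear activation z -> z + b.
    h = W1 @ x + b1
    return W2 @ h + b2

print("network output:", forward(np.ones(3)))

# Part 2: social optimum vs. Wardrop equilibrium on Pigou's two-link network.
# One unit of nonatomic traffic is split between a link of constant latency 1
# and a link whose latency equals its own load x.

def total_latency(x):
    # Social cost sum_e x_e * l_e(x_e) when a fraction x uses the variable link.
    return (1.0 - x) * 1.0 + x * x

# Social optimum flow: the split minimizing the total latency over [0, 1].
opt = minimize_scalar(total_latency, bounds=(0.0, 1.0), method="bounded")
print("social optimum: x = %.2f, cost = %.3f" % (opt.x, total_latency(opt.x)))  # 0.50, 0.750

# Wardrop equilibrium flow: no infinitesimal user can reduce her latency by
# switching links; since x <= 1 on [0, 1], all traffic uses the variable link.
x_eq = 1.0
print("equilibrium:    x = %.2f, cost = %.3f" % (x_eq, total_latency(x_eq)))    # 1.00, 1.000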

References

  1. Alpaydin, E.: Introduction to Machine Learning (Adaptive Computation and Machine Learning Series). The MIT Press, Cambridge (2008)

  2. Balduzzi, D.: Deep online convex optimization with gated games. arXiv preprint, arXiv:1604.01952 (2016)

  3. Blum, A.L., Rivest, R.L.: Training a 3-node neural network is NP-complete. Neural Netw. 5(1), 117–127 (1992)

  4. Ge, R., Huang, F., Jin, C., Yuan, Y.: Escaping from saddle points - online stochastic gradient for tensor decomposition. In: Proceedings of the Conference on Learning Theory, vol. 40, pp. 797–842 (2015)

  5. Goldberg, D.E., Holland, J.H.: Genetic algorithms and machine learning. Mach. Learn. 3, 95–99 (1988)

  6. Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. The MIT Press, Cambridge (2016)

  7. Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)

  8. Hu, J., Wellman, M.P.: Nash Q-learning for general-sum stochastic games. J. Mach. Learn. Res. 4, 1039–1069 (2003)

  9. Jin, C., Ge, R., Netrapalli, P., Kakade, S.M., Jordan, M.I.: How to escape saddle points efficiently. In: Proceedings of the 34th International Conference on Machine Learning, vol. 70, pp. 1724–1732. PMLR (2017)

  10. Littman, M.L.: Markov games as a framework for multi-agent reinforcement learning. In: Proceedings of the Eleventh International Conference on Machine Learning, pp. 157–163. Morgan Kaufmann (1994)

  11. Littman, M.L., Szepesvári, C.: A generalized reinforcement-learning model: convergence and applications. In: Proceedings of the Thirteenth International Conference on Machine Learning, pp. 310–318. Morgan Kaufmann (1996)

  12. Liu, Y., Pi, D.: A novel kernel SVM algorithm with game theory for network intrusion detection. Trans. Internet Inf. Syst. 11(8), 4043–4060 (2017)

  13. Kollovieh, M., Bani-Harouni, D.: Machine learning. Der Hautarzt 72(8), 719–719 (2021). https://doi.org/10.1007/s00105-021-04834-0

  14. Murty, K.G., Kabadi, S.N.: Some NP-complete problems in quadratic and nonlinear programming. Math. Program. 39(2), 117–129 (1987)

  15. Roughgarden, T.: Routing games. Algorithmic Game Theory 18, 459–484 (2007)

  16. Roughgarden, T., Tardos, É.: How bad is selfish routing? J. ACM 49(2), 236–259 (2002)

  17. Schuurmans, D., Zinkevich, M.: Deep learning games. In: Proceedings of the Annual Conference on Neural Information Processing Systems, vol. 29, pp. 1678–1686 (2016)

  18. Vesseron, N., Redko, I., Laclau, C.: Deep neural networks are congestion games: From loss landscape to Wardrop equilibrium and beyond. In: Proceedings of the 24th International Conference on Artificial Intelligence and Statistics, vol. 130, pp. 1765–1773. PMLR (2021)

  19. Wardrop, J.G.: Road paper. Some theoretical aspects of road traffic research. In: Proceedings of the Institution of Civil Engineers, vol. 1, pp. 325–362 (1952)

  20. Wu, Z., Möhring, R.H., Chen, Y., Xu, D.: Selfishness need not be bad. Oper. Res. 69(2), 410–435 (2021)

Acknowledgement

The first and third authors are supported by the Beijing Natural Science Foundation (Project No. Z200002) and the National Natural Science Foundation of China (No. 12131003). The second author is supported by the National Natural Science Foundation of China (No. 61906062), the Natural Science Foundation Project of Anhui Science and Technology Department (No. 1908085QF262), and the Natural Science Foundation Project of Anhui Education Department (No. KJ2019A0834).

Author information

Corresponding author

Correspondence to Zijun Wu.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Ren, C., Wu, Z., Xu, D., Xu, W. (2021). A Game-Theoretic Analysis of Deep Neural Networks. In: Wu, W., Du, H. (eds) Algorithmic Aspects in Information and Management. AAIM 2021. Lecture Notes in Computer Science, vol 13153. Springer, Cham. https://doi.org/10.1007/978-3-030-93176-6_31

  • DOI: https://doi.org/10.1007/978-3-030-93176-6_31

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-93175-9

  • Online ISBN: 978-3-030-93176-6

  • eBook Packages: Computer Science, Computer Science (R0)
