Abstract
As an important part of machine learning, deep learning has been used intensively in fields across data science. Despite its popularity in practice, computing the optimal parameters of a deep neural network remains challenging and has been shown to be NP-hard. In this paper we analyze deep neural networks through the lens of nonatomic congestion games, with the hope that this can inspire the computation of optimal parameters of deep neural networks. We consider a deep neural network with linear activation functions of the form \(x+b\) for biases b that need not be zero. We show under mild conditions that learning the weights and the biases is equivalent to computing the social optimum flow of a nonatomic congestion game. When the deep neural network is used for classification, learning is even equivalent to computing the equilibrium flow. These results generalize recent seminal work by [18], who showed similar results for deep neural networks with linear activation functions and zero biases.
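The two game-theoretic objects named in the abstract, the social optimum flow and the (Wardrop) equilibrium flow of a nonatomic congestion game, can be illustrated on Pigou's classic two-link example, which is not the paper's model but a standard minimal instance: one unit of traffic splits between a congestible link with cost \(c_1(x)=x\) and a constant-cost link with \(c_2(x)=1\).

```python
# Pigou's two-link network: demand 1, link costs c1(x) = x and c2(x) = 1.
# x denotes the flow routed on the congestible link c1.

def total_cost(x):
    """Social cost: flow-weighted sum of link costs."""
    return x * x + (1 - x) * 1

# Social optimum: minimize total cost. A simple grid search suffices here
# (the analytic optimum is x = 1/2, by setting d/dx [x^2 + (1-x)] = 0).
grid = [i / 10000 for i in range(10001)]
x_opt = min(grid, key=total_cost)

# Wardrop equilibrium: every used path has equal, minimal cost. Since
# c1(x) <= 1 = c2 for all x <= 1, selfish traffic puts all flow on link 1.
x_eq = 1.0

print(x_opt)               # 0.5
print(total_cost(x_opt))   # 0.75  (optimum social cost)
print(total_cost(x_eq))    # 1.0   (equilibrium social cost)
```

The gap between 0.75 and 1.0 is the familiar price of anarchy of 4/3 for linear costs [23]; the paper's reductions target exactly these two flow concepts.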
References
Alpaydin, E.: Introduction to Machine Learning (Adaptive Computation and Machine Learning Series). The MIT Press, Cambridge (2008)
Balduzzi, D.: Deep online convex optimization with gated games. arXiv preprint, arXiv:1604.01952 (2016)
Blum, A.L., Rivest, R.L.: Training a 3-node neural network is NP-complete. Neural Netw. 5(1), 117–127 (1992)
Ge, R., Huang, F., Jin, C., Yuan, Y.: Escaping from saddle points - online stochastic gradient for tensor decomposition. In: Proceedings of the Conference on Learning Theory, vol. 40, pp. 797–842 (2015)
Goldberg, D.E., Holland, J.H.: Genetic algorithms and machine learning. Mach. Learn. 3, 95–99 (1988)
Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. The MIT Press, Cambridge (2016)
Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)
Hu, J., Wellman, M.P.: Nash q-learning for general-sum stochastic games. J. Mach. Learn. Res. 4, 1039–1069 (2003)
Jin, C., Ge, R., Netrapalli, P., Kakade, S.M., Jordan, M.I.: How to escape saddle points efficiently. In: Proceedings of the 34th International Conference on Machine Learning, vol. 70, pp. 1724–1732. PMLR (2017)
Littman, M.L.: Markov games as a framework for multi-agent reinforcement learning. In: Proceedings of the Eleventh International Conference on Machine Learning, pp. 157–163. Morgan Kaufmann (1994)
Littman, M.L., Szepesvári, C.: A generalized reinforcement-learning model: convergence and applications. In: Proceedings of the Thirteenth International Conference on Machine Learning, pp. 310–318. Morgan Kaufmann (1996)
Liu, Y., Pi, D.: A novel kernel SVM algorithm with game theory for network intrusion detection. Trans. Internet Inf. Syst. 11(8), 4043–4060 (2017)
Kollovieh, M., Bani-Harouni, D.: Machine learning. Der Hautarzt 72(8), 719–719 (2021). https://doi.org/10.1007/s00105-021-04834-0
Murty, K.G., Kabadi, S.N.: Some NP-complete problems in quadratic and nonlinear programming. Math. Program. 39(2), 117–129 (1987)
Roughgarden, T.: Routing games. Algorithmic Game Theory 18, 459–484 (2007)
Roughgarden, T., Tardos, É.: How bad is selfish routing? J. ACM 49(2), 236–259 (2002)
Schuurmans, D., Zinkevich, M.: Deep learning games. In: Proceedings of the Annual Conference on Neural Information Processing Systems, vol. 29, pp. 1678–1686 (2016)
Vesseron, N., Redko, I., Laclau, C.: Deep neural networks are congestion games: From loss landscape to Wardrop equilibrium and beyond. In: Proceedings of the 24th International Conference on Artificial Intelligence and Statistics, vol. 130, pp. 1765–1773. PMLR (2021)
Wardrop, J.G.: Road paper. Some theoretical aspects of road traffic research. In: Proceedings of the Institution of Civil Engineers, vol. 1, pp. 325–362 (1952)
Wu, Z., Möhring, R.H., Chen, Y., Xu, D.: Selfishness need not be bad. Oper. Res. 69(2), 410–435 (2021)
Acknowledgement
The first and third authors are supported by Beijing Natural Science Foundation Project No. Z200002 and National Natural Science Foundation of China (No. 12131003); The second author is supported by National Natural Science Foundation of China (No. 61906062), Natural Science Foundation Project of Anhui Science and Technology Department (No. 1908085QF262), and Natural Science Foundation Project of Anhui Education Department (No. KJ2019A0834).
© 2021 Springer Nature Switzerland AG
Cite this paper
Ren, C., Wu, Z., Xu, D., Xu, W. (2021). A Game-Theoretic Analysis of Deep Neural Networks. In: Wu, W., Du, H. (eds) Algorithmic Aspects in Information and Management. AAIM 2021. Lecture Notes in Computer Science(), vol 13153. Springer, Cham. https://doi.org/10.1007/978-3-030-93176-6_31
Print ISBN: 978-3-030-93175-9
Online ISBN: 978-3-030-93176-6