Abstract
Neural networks have achieved great success in many learning tasks, owing to their powerful ability to fit data. Recently, much attention has been paid to the relationship between differential equations and neural networks as a way to understand this success, and some research suggests that depth is crucial to it. However, the understanding of neural networks from the differential-equation perspective remains preliminary. In this work, building on this connection, we extend the depth of neural networks to infinity and remove the existing constraint that the parameters of every layer must be identical, by using a second ordinary differential equation (ODE) to model the evolution of the weights. We prove that this weight ODE can model any continuously evolving weights, and we validate the claim experimentally. We also propose a new training strategy that overcomes the inefficiency of the pure adjoint method. This strategy further clarifies the relationship between a ResNet with finitely many layers and one with infinitely many layers: our experiments indicate that the former serves as a good initialization for the latter. Finally, we give a heuristic explanation of why the new training method outperforms the pure adjoint method. Further experiments show that our neural ODE with evolutionary weights converges faster than one with fixed weights.
The first author is a student.
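To make the abstract's idea concrete, below is a minimal, illustrative sketch of the coupled dynamics it describes: a state h(t) driven by dh/dt = f(h(t), θ(t)) while the weights themselves evolve by a second ODE, dθ/dt = g(θ(t)). The sketch uses a forward-Euler discretization; the function names f and g, the parameter W_g, and the step count are hypothetical stand-ins, not the paper's architecture or training procedure (which integrates the system with an ODE solver and trains via the adjoint method plus the proposed hybrid strategy).

```python
# Illustrative sketch only: a forward-Euler discretization of a neural ODE
# whose weights evolve by a second ODE. The vector fields f and g, the
# parameter W_g, and the step count are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)
hidden_dim, n_steps = 4, 20
dt = 1.0 / n_steps

W_g = 0.1 * rng.standard_normal((hidden_dim, hidden_dim))  # drives weight dynamics

def f(h, theta):
    # State dynamics dh/dt = f(h, theta): the current weights act on the state.
    return np.tanh(theta @ h)

def g(theta):
    # Weight dynamics dtheta/dt = g(theta): weights are no longer fixed
    # across "layers" (time) but follow their own continuous evolution.
    return np.tanh(W_g @ theta)

h = rng.standard_normal(hidden_dim)                          # input feature h(0)
theta = 0.1 * rng.standard_normal((hidden_dim, hidden_dim))  # initial weights theta(0)

# Integrate the coupled system (h(t), theta(t)) from t = 0 to t = 1.
for _ in range(n_steps):
    h, theta = h + dt * f(h, theta), theta + dt * g(theta)

print(h)  # h(1) plays the role of the infinitely deep network's output feature
```

Setting g to zero recovers a neural ODE with fixed weights; a nonzero g restores, in the continuous-depth limit, the layer-dependent weights that a finite ResNet enjoys, which is the constraint the paper removes.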
Acknowledgments
The work of Zhouchen Lin is supported in part by the 973 Program of China under Grant 2015CB352502, in part by the NSF of China under Grants 61625301 and 61731018, and in part by the Beijing Academy of Artificial Intelligence (BAAI) and Microsoft Research Asia.
Cite this paper
He, L., Xie, X., Lin, Z. (2019). Neural Ordinary Differential Equations with Evolutionary Weights. In: Lin, Z., et al. (eds.) Pattern Recognition and Computer Vision. PRCV 2019. Lecture Notes in Computer Science, vol. 11857. Springer, Cham. https://doi.org/10.1007/978-3-030-31654-9_51