
Neural Ordinary Differential Equations with Envolutionary Weights

  • Conference paper
  • Pattern Recognition and Computer Vision (PRCV 2019)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 11857)

Abstract

Neural networks have been very successful in many learning tasks, owing to their powerful ability to fit data. Recently, to understand this success, much attention has been paid to the relationship between differential equations and neural networks, and some research suggests that the depth of neural networks is important to it. However, the understanding of neural networks from the differential-equation perspective is still very preliminary. In this work, also building on this connection, we extend the depth of neural networks to infinity and remove the existing constraint that the parameters of every layer must be the same, by using another ordinary differential equation (ODE) to model the evolution of the weights. We prove that this ODE can model any continuous evolution of the weights, and we validate this with an experiment. Meanwhile, we propose a new training strategy that overcomes the inefficiency of the pure adjoint method. This strategy also allows us to further understand the relationship between a ResNet with finitely many layers and one with infinitely many layers; our experiment indicates that the former can be a good initialization of the latter. Finally, we give a heuristic explanation of why the new training method works better than the pure adjoint method. Further experiments show that our neural ODE with evolutionary weights converges faster than one with fixed weights.

The first author is a student.
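
A natural reading of the abstract is that the hidden state h(t) follows dh/dt = f(h(t), W(t)) while the weights themselves follow a second ODE, dW/dt = g(W(t), t). To make that construction concrete, below is a minimal sketch (not the authors' code) of such a coupled system in PyTorch, integrated with plain forward Euler. The specific dynamics (the tanh layer), the hyper-network g, and the constants HIDDEN and STEPS are illustrative assumptions, and the adjoint-based training discussed in the abstract is not shown.

import torch
import torch.nn as nn

HIDDEN = 64   # state width (illustrative)
STEPS = 20    # Euler steps discretizing depth-time t in [0, 1] (illustrative)

class EvolvingWeightODE(nn.Module):
    # Sketch of coupled ODEs: dh/dt = tanh(W(t) h), dW/dt = g(W(t), t).
    # Both are integrated jointly with forward Euler for clarity; the
    # paper's adjoint-based training is not implemented here.
    def __init__(self):
        super().__init__()
        self.W0 = nn.Parameter(0.01 * torch.randn(HIDDEN, HIDDEN))  # W(0)
        self.g = nn.Sequential(                  # models dW/dt (assumed form)
            nn.Linear(HIDDEN * HIDDEN + 1, 128),
            nn.Tanh(),
            nn.Linear(128, HIDDEN * HIDDEN),
        )

    def forward(self, h):
        W, dt = self.W0, 1.0 / STEPS
        for k in range(STEPS):
            t = torch.tensor([k * dt])
            h = h + dt * torch.tanh(h @ W.T)     # state update: dh/dt = f(h, W(t))
            dW = self.g(torch.cat([W.flatten(), t])).view_as(W)
            W = W + dt * dW                      # weight update: dW/dt = g(W, t)
        return h

model = EvolvingWeightODE()
features = torch.randn(8, HIDDEN)   # a batch of 8 feature vectors
print(model(features).shape)        # torch.Size([8, 64])

In practice, the Euler loop would be replaced by an adaptive solver (for example, odeint from the torchdiffeq library) differentiated with the adjoint method, as in standard neural ODEs; the sketch uses a fixed-step loop only to keep the coupling of the two ODEs explicit.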


Acknowledgments

The work of Zhouchen Lin is supported in part by 973 Program of China under Grant 2015CB352502, in part by NSF of China under Grants 61625301 and 61731018, and in part by Beijing Academy of Artificial Intelligence (BAAI) and Microsoft Research Asia.

Author information

Correspondence to Zhouchen Lin.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 208 KB)

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

He, L., Xie, X., Lin, Z. (2019). Neural Ordinary Differential Equations with Envolutionary Weights. In: Lin, Z., et al. (eds.) Pattern Recognition and Computer Vision. PRCV 2019. Lecture Notes in Computer Science, vol 11857. Springer, Cham. https://doi.org/10.1007/978-3-030-31654-9_51

  • DOI: https://doi.org/10.1007/978-3-030-31654-9_51

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-31653-2

  • Online ISBN: 978-3-030-31654-9

  • eBook Packages: Computer Science (R0)
