A Principal Component Analysis Approach for Embedding Local Symmetries into Deep Learning Algorithms

  • Conference paper

Computer Safety, Reliability, and Security. SAFECOMP 2020 Workshops (SAFECOMP 2020)

Part of the book series: Lecture Notes in Computer Science (LNPSE, volume 12235)


Abstract

Building robust-by-design Machine Learning algorithms is key for critical tasks such as safety or military applications. By leveraging ideas developed in the context of building invariant Support Vector Machines, this paper introduces a convenient methodology for embedding local Lie group symmetries into Deep Learning algorithms by performing a Principal Component Analysis on the corresponding Tangent Covariance Matrix. Projecting the input data onto the principal directions yields a new data representation that singles out the components conveying the semantic information relevant to the considered algorithmic task, while reducing the dimension of the input manifold. Our numerical testing further shows that, although less efficient than Group-Convolutional Neural Networks because it handles only local symmetries, our approach does improve accuracy and robustness without introducing significant computational overhead. Performance improvements of up to 5% were obtained for low-capacity algorithms, making this approach of particular interest for the engineering of safe embedded Artificial Intelligence systems.
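
The pipeline described in the abstract — estimating tangent vectors of the local symmetry, forming the Tangent Covariance Matrix, and projecting inputs onto its principal directions — can be illustrated in Python. The following is a minimal sketch only, assuming in-plane rotation as the local Lie group symmetry and a central finite-difference approximation of the tangent vectors; the function names (tangent_vectors, tangent_pca) and all parameter values are hypothetical illustrations, not taken from the paper.

    import numpy as np
    from scipy.ndimage import rotate

    def tangent_vectors(images, eps_deg=1.0):
        # Central finite-difference approximation of the tangent
        # (Lie-derivative) vector of each image under in-plane rotation:
        # t(x) ≈ (g_{+eps}(x) - g_{-eps}(x)) / (2 * eps).
        plus = np.stack([rotate(im, eps_deg, reshape=False, order=1) for im in images])
        minus = np.stack([rotate(im, -eps_deg, reshape=False, order=1) for im in images])
        return ((plus - minus) / (2.0 * eps_deg)).reshape(len(images), -1)

    def tangent_pca(images, n_components):
        # Tangent Covariance Matrix C = (1/n) * sum_i t_i t_i^T, followed
        # by an eigendecomposition; eigh returns eigenvalues in ascending
        # order, so we re-sort to get the principal directions first.
        T = tangent_vectors(images)
        C = (T.T @ T) / len(T)
        eigvals, eigvecs = np.linalg.eigh(C)
        order = np.argsort(eigvals)[::-1]
        return eigvecs[:, order[:n_components]], eigvals[order[:n_components]]

    # Usage sketch: project the flattened inputs onto the principal
    # directions to obtain the lower-dimensional representation that is
    # then fed to the downstream (e.g. low-capacity) network.
    # X: array of shape (n, 28, 28), e.g. MNIST digits.
    # V, lam = tangent_pca(X, n_components=64)
    # X_proj = X.reshape(len(X), -1) @ V   # shape (n, 64)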

Author information

Correspondence to Pierre-Yves Lagrave.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Lagrave, P.-Y. (2020). A Principal Component Analysis Approach for Embedding Local Symmetries into Deep Learning Algorithms. In: Casimiro, A., Ortmeier, F., Schoitsch, E., Bitsch, F., Ferreira, P. (eds) Computer Safety, Reliability, and Security. SAFECOMP 2020 Workshops. SAFECOMP 2020. Lecture Notes in Computer Science, vol 12235. Springer, Cham. https://doi.org/10.1007/978-3-030-55583-2_22

  • DOI: https://doi.org/10.1007/978-3-030-55583-2_22

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-55582-5

  • Online ISBN: 978-3-030-55583-2

  • eBook Packages: Computer Science, Computer Science (R0)
