
Neural Networks Comprising Sequentially Semiseparable Matrices with One Dimensional State Variable are Universal Approximators

  • Conference paper
Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2023)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 2136)


Abstract

One approach to handling the large resource requirements of modern neural networks is to use structured weight matrices. In this paper, we analyze the approximation capabilities of such neural networks. In particular, we investigate sequentially semiseparable (SSS) matrices with a one-dimensional state variable. This class of matrices is quite limited in its expressiveness, but it admits an efficient matrix-vector multiplication algorithm. Our contribution is to prove that neural networks comprising SSS matrices with a one-dimensional state variable are universal approximators. With our proof, we show that the same approximation capabilities that have been shown for weight matrices of low displacement rank also hold for SSS weight matrices.
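The efficient matrix-vector multiplication mentioned in the abstract can be made concrete. For an SSS matrix with a scalar (one-dimensional) state variable, the product y = Ax reduces to one forward and one backward scalar recursion, costing O(n) operations instead of the O(n²) of a dense multiply. The following Python sketch illustrates this under a common generator convention from the SSS literature (diagonal d; lower generators p, r, q; upper generators u, w, v); the function name and array layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sss_matvec(d, p, q, r, u, v, w, x):
    """O(n) multiplication y = A @ x for an n-by-n sequentially
    semiseparable (SSS) matrix with a one-dimensional (scalar) state.

    Sketch under an assumed (but common) generator convention:
        A[i, j] = p[i] * r[i-1] * ... * r[j+1] * q[j]   for i > j
        A[i, i] = d[i]
        A[i, j] = u[i] * w[i+1] * ... * w[j-1] * v[j]   for i < j
    Because the state is scalar, each recursion step carries a single
    number instead of a small state matrix.
    """
    n = len(x)
    y = np.empty(n)
    s = 0.0  # forward (causal) state: accumulates the lower-triangular part
    for i in range(n):
        y[i] = d[i] * x[i] + p[i] * s
        s = r[i] * s + q[i] * x[i]
    t = 0.0  # backward (anti-causal) state: accumulates the upper-triangular part
    for i in range(n - 1, -1, -1):
        y[i] += u[i] * t
        t = w[i] * t + v[i] * x[i]
    return y
```

For small n, the sketch can be checked by assembling the dense matrix from the same generators and comparing the result against a direct dense multiplication.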



Author information


Corresponding author

Correspondence to Matthias Kissel.


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Kissel, M., Diepold, K. (2025). Neural Networks Comprising Sequentially Semiseparable Matrices with One Dimensional State Variable are Universal Approximators. In: Meo, R., Silvestri, F. (eds) Machine Learning and Principles and Practice of Knowledge Discovery in Databases. ECML PKDD 2023. Communications in Computer and Information Science, vol 2136. Springer, Cham. https://doi.org/10.1007/978-3-031-74640-6_9


  • DOI: https://doi.org/10.1007/978-3-031-74640-6_9


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-74639-0

  • Online ISBN: 978-3-031-74640-6

