Abstract
One approach to reducing the large resource requirements of modern neural networks is to use structured weight matrices. In this paper, we analyze the approximation capabilities of such neural networks. In particular, we investigate sequentially semiseparable (SSS) matrices with a one-dimensional state variable. This class of matrices is quite limited in its expressiveness, but it admits an efficient matrix-vector multiplication algorithm. Our contribution is to prove that neural networks comprising SSS weight matrices with a one-dimensional state variable are universal approximators. Our proof shows that the approximation capabilities previously established for weight matrices of low displacement rank also hold for SSS weight matrices.
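The efficiency claim in the abstract rests on the fact that an SSS matrix is defined by a short list of generators, so a matrix-vector product can be evaluated with one forward and one backward state recursion instead of a dense multiplication. Below is a minimal sketch of that O(n) product for the scalar (one-dimensional state) case; the generator names d, p, r, q, u, w, v follow the common SSS convention from the literature rather than any notation confirmed by this paper.

```python
import numpy as np

def sss_matvec(d, p, r, q, u, w, v, x):
    """O(n) product y = A @ x for an SSS matrix A with scalar state.

    Generator convention (all arrays of length n; every entry is a
    scalar because the state variable is one-dimensional):
        A[i, i] = d[i]
        A[i, j] = p[i] * r[i-1] * ... * r[j+1] * q[j]   for i > j
        A[i, j] = u[i] * w[i+1] * ... * w[j-1] * v[j]   for i < j
    """
    n = len(x)
    y = np.empty(n)

    # Forward sweep: g is the 1-D state summarizing x[0..i-1] as seen
    # through the strictly lower-triangular part of A.
    g = 0.0
    lower = np.empty(n)
    for i in range(n):
        lower[i] = p[i] * g          # contribution of x[0..i-1] to y[i]
        g = r[i] * g + q[i] * x[i]   # propagate the scalar state

    # Backward sweep: h plays the same role for the strictly
    # upper-triangular part of A.
    h = 0.0
    for i in range(n - 1, -1, -1):
        y[i] = lower[i] + d[i] * x[i] + u[i] * h
        h = w[i] * h + v[i] * x[i]

    return y
```

A quick sanity check is to assemble the dense matrix from the same generators and compare `A @ x` with `sss_matvec(...)`. Each sweep touches every input exactly once, so a layer with such a weight matrix costs O(n) multiplications instead of the O(n^2) of a dense layer, which is the efficiency the abstract refers to.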
Cite this paper
Kissel, M., Diepold, K. (2025). Neural Networks Comprising Sequentially Semiseparable Matrices with One Dimensional State Variable are Universal Approximators. In: Meo, R., Silvestri, F. (eds) Machine Learning and Principles and Practice of Knowledge Discovery in Databases. ECML PKDD 2023. Communications in Computer and Information Science, vol 2136. Springer, Cham. https://doi.org/10.1007/978-3-031-74640-6_9