
\(\kappa \)-Circulant Maximum Variance Bases

  • Conference paper
KI 2021: Advances in Artificial Intelligence (KI 2021)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12873)

Abstract

Principal component analysis (PCA), a well-known technique in machine learning and statistics, is typically applied to time-independent data, as it is based on point-wise correlations. Dynamic PCA (DPCA) addresses this limitation by augmenting the data set with lagged versions of itself. In this paper, we show that both PCA and DPCA are special cases of \(\kappa \)-circulant maximum variance bases. We formulate the constrained linear optimization problem of finding such \(\kappa \)-circulant bases and present a closed-form solution that allows further interpretation and a significant speed-up for DPCA. Furthermore, we point out the relation of the proposed bases to the discrete Fourier transform, finite impulse response filters, and spectral density estimation.
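
To make the lag-augmentation step behind DPCA concrete, here is a minimal NumPy sketch (illustrative only, not the authors' code; the signal, the number of lags, and all variable names are assumptions): it stacks lagged copies of a 1-D signal into a data matrix and applies ordinary PCA via an eigendecomposition of the sample covariance.

```python
import numpy as np

# Illustrative DPCA-style augmentation: build a data matrix from lagged
# copies of a 1-D signal, then run ordinary PCA on it.
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 8 * np.pi, 500)) + 0.1 * rng.standard_normal(500)

n_lags = 4
# Each column of X is the signal shifted by one more sample (a lagged copy).
X = np.column_stack([x[k:len(x) - n_lags + k] for k in range(n_lags + 1)])

# Ordinary PCA on the augmented matrix: eigendecomposition of the covariance.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
components = eigvecs[:, ::-1]  # columns sorted by descending variance
```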

This work is supported by a grant from the German Ministry of Education and Research (BMBF; KMU-innovativ: Medizintechnik, 13GW0173E). We would like to express our gratitude to the reviewers for their efforts towards improving this work.


Notes

  1. Matched filters are learned in a supervised setting, while here we restrict ourselves to the unsupervised case. Hence, the “matching” of the filter coefficients follows a variance criterion (similar to PCA).

  2. Principal component analysis is almost equivalent to the Karhunen-Loève transform (KLT) [14]. Further information regarding the relationship between PCA and KLT is given in [10].

  3. The dot product \(\mathbf {u}^T\mathbf {x}\) serves as a measure of similarity.

  4. The discrete circular convolution of two sequences \(\mathbf {x},\mathbf {y}\in \mathbb {R}^{D}\) is written as \(\mathbf {x}\circledast \mathbf {y}\), while the linear convolution is written as \(\mathbf {x}*\mathbf {y}\) (see the first sketch after these notes).

  5. Due to the constraint \(\left\Vert \mathbf {g}\right\Vert _2^2=1\), this is not trivial.

  6. This interpretation is only valid under the assumptions mentioned in Sect. 3.3. Furthermore, the normalization of the autocorrelation (autocovariance) is to be performed as \(\mathbf {r}' = \frac{\mathbf {r}}{r_0}\), with the first component \(r_0\) of \(\mathbf {r}\) being the variance [16] (see the second sketch after these notes).

  7. Let \(\mathbf {Y}=\mathbf {G}_\kappa \mathbf {X}\). Maximizing \(\left\Vert \mathbf {Y}\right\Vert _F^2\) (cf. Eq. 19) means maximizing the trace of the covariance matrix \(\mathbf {S}\propto \mathbf {Y}\mathbf {Y}^T\), which in turn is a measure for the total dispersion [18] (see the third sketch after these notes).
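
To make note 4 concrete, the following minimal sketch (illustrative values, not from the paper) contrasts the circular convolution \(\mathbf {x}\circledast \mathbf {y}\), computed via the DFT, with the linear convolution \(\mathbf {x}*\mathbf {y}\): the circular result equals the linear result with its tail wrapped around.

```python
import numpy as np

# Note 4: circular convolution (x ⊛ y) vs. linear convolution (x * y)
# for two sequences of equal length D = 4. Illustrative values only.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 0.0, -1.0, 0.0])

linear = np.convolve(x, y)  # length 2D - 1 = 7
# Circular convolution via the convolution theorem: DFT, multiply, inverse DFT.
circular = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))  # length D = 4

# The circular result is the linear result with the tail wrapped around.
wrapped = linear[:4].copy()
wrapped[:3] += linear[4:]
assert np.allclose(circular, wrapped)
```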
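The normalization in note 6 can be sketched as follows (an illustrative computation under the usual stationarity assumptions, not the paper's code): the sample autocovariance sequence \(\mathbf {r}\) is divided by its first component \(r_0\), the variance, so that the normalized sequence starts at 1.

```python
import numpy as np

# Note 6: normalize the autocovariance r by its first component r_0
# (the variance), giving the autocorrelation r' with r'[0] == 1.
rng = np.random.default_rng(0)
x = rng.standard_normal(256)
xc = x - x.mean()

D = len(xc)
# Biased sample autocovariance for lags 0 .. D-1.
r = np.array([np.dot(xc[:D - k], xc[k:]) / D for k in range(D)])
r_prime = r / r[0]  # normalized autocorrelation

assert np.isclose(r_prime[0], 1.0)
```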
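Finally, the identity behind note 7 is easy to check numerically. The sketch below (arbitrary random \(\mathbf {G}\) and \(\mathbf {X}\) with illustrative shapes) verifies that the squared Frobenius norm of \(\mathbf {Y}\) equals the trace of \(\mathbf {Y}\mathbf {Y}^T\), the quantity proportional to the total dispersion.

```python
import numpy as np

# Note 7: with Y = G X, ||Y||_F^2 == trace(Y Y^T), which is proportional
# to the trace of the covariance matrix S, a measure of total dispersion.
rng = np.random.default_rng(1)
G = rng.standard_normal((3, 5))    # illustrative basis matrix
X = rng.standard_normal((5, 100))  # illustrative data matrix
Y = G @ X

assert np.isclose(np.linalg.norm(Y, "fro") ** 2, np.trace(Y @ Y.T))
```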

References

  1. Albawi, S., Mohammed, T.A., Al-Zawi, S.: Understanding of a convolutional neural network. In: 2017 International Conference on Engineering and Technology (ICET), pp. 1–6. IEEE (2017)

  2. Bagnall, A., Lines, J., Bostrom, A., Large, J., Keogh, E.: The great time series classification bake off: a review and experimental evaluation of recent algorithmic advances. Data Min. Knowl. Disc. 31(3), 606–660 (2016). https://doi.org/10.1007/s10618-016-0483-9

  3. Bengio, Y., Courville, A., Vincent, P.: Representation learning: a review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 35(8), 1798–1828 (2013)

  4. Bose, A., Saha, K.: Random Circulant Matrices. CRC Press (2018)

  5. Casazza, P.G., Kutyniok, G., Philipp, F.: Introduction to finite frame theory. Finite Frames, pp. 1–53 (2013)

  6. Chatfield, C.: The Analysis of Time Series: An Introduction. Chapman and Hall/CRC (2003)

  7. Ismail Fawaz, H., Forestier, G., Weber, J., Idoumghar, L., Muller, P.-A.: Deep learning for time series classification: a review. Data Min. Knowl. Disc. 33(4), 917–963 (2019). https://doi.org/10.1007/s10618-019-00619-1

  8. Fulcher, B.D.: Feature-based time-series analysis. In: Feature Engineering for Machine Learning and Data Analytics, pp. 87–116. CRC Press (2018)

  9. Garcia-Cardona, C., Wohlberg, B.: Convolutional dictionary learning: a comparative review and new algorithms. IEEE Trans. Comput. Imaging 4(3), 366–381 (2018)

  10. Gerbrands, J.J.: On the relationships between SVD, KLT and PCA. Pattern Recogn. 14(1–6), 375–381 (1981)

  11. Gray, R.M.: Toeplitz and Circulant Matrices: A Review (2006)

  12. Jolliffe, I.T.: Principal components in regression analysis. In: Jolliffe, I.T. (ed.) Principal Component Analysis. SSS, pp. 129–155. Springer, New York (1986). https://doi.org/10.1007/978-1-4757-1904-8_8

  13. Ku, W., Storer, R.H., Georgakis, C.: Disturbance detection and isolation by dynamic principal component analysis. Chemom. Intell. Lab. Syst. 30(1), 179–196 (1995)

  14. Orfanidis, S.: SVD, PCA, KLT, CCA, and all that. Optimum Signal Processing, pp. 332–525 (2007)

  15. Papyan, V., Romano, Y., Elad, M.: Convolutional neural networks analyzed via convolutional sparse coding. J. Mach. Learn. Res. 18(1), 2887–2938 (2017)

  16. Pollock, D.S.G., Green, R.C., Nguyen, T.: Handbook of Time Series Analysis, Signal Processing, and Dynamics. Elsevier (1999)

  17. Rusu, C.: On learning with shift-invariant structures. Digit. Signal Process. 99, 102654 (2020)

  18. Seber, G.A.: Multivariate Observations, vol. 252. Wiley, Hoboken (2009)

  19. Strang, G., Nguyen, T.: Wavelets and Filter Banks. SIAM (1996)

  20. Tošić, I., Frossard, P.: Dictionary learning. IEEE Signal Process. Mag. 28(2), 27–38 (2011)

  21. Unser, M.: On the approximation of the discrete Karhunen-Loeve transform for stationary processes. Signal Process. 7(3), 231–249 (1984)

  22. Vaswani, N., Narayanamurthy, P.: Static and dynamic robust PCA and matrix completion: a review. Proc. IEEE 106(8), 1359–1379 (2018)

  23. Vetterli, M., Kovačević, J., Goyal, V.K.: Foundations of Signal Processing. Cambridge University Press (2014)

  24. Zhao, D., Lin, Z., Tang, X.: Laplacian PCA and its applications. In: 2007 IEEE 11th International Conference on Computer Vision, pp. 1–8. IEEE (2007)

Author information

Corresponding author

Correspondence to Christopher Bonenberger.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Bonenberger, C., Ertel, W., Schneider, M. (2021). \(\kappa \)-Circulant Maximum Variance Bases. In: Edelkamp, S., Möller, R., Rueckert, E. (eds) KI 2021: Advances in Artificial Intelligence. KI 2021. Lecture Notes in Computer Science, vol 12873. Springer, Cham. https://doi.org/10.1007/978-3-030-87626-5_2

  • DOI: https://doi.org/10.1007/978-3-030-87626-5_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-87625-8

  • Online ISBN: 978-3-030-87626-5

  • eBook Packages: Computer Science (R0)
