Abstract
In this paper, we introduce the linear subspace method for second-order tensors. The subspace method based on principal component analysis of vector data is an established method for pattern recognition and classification. A pair of orthonormal vector sets derived by the singular value decomposition of matrices provides a pair of linear spaces that express images. Applying the subspace method to such a pair of linear subspaces yields recognition methodologies for images through tensor analysis.
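As a minimal illustration of the vector subspace method that this paper extends to tensors, the following sketch learns a class subspace from vectorised images via the SVD and scores a query by the norm of its projection. The data here are synthetic; the ambient dimension 64 and subspace dimension 5 are assumptions for the example, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: 20 vectorised "images" of one class that lie in a
# 5-dimensional subspace of R^64 (dimensions are assumptions for the example).
X = rng.standard_normal((64, 5)) @ rng.standard_normal((5, 20))

# The SVD of the data matrix yields an orthonormal basis of the class subspace.
U, _, _ = np.linalg.svd(X, full_matrices=False)
B = U[:, :5]  # leading left singular vectors span the class subspace

def similarity(x, B):
    """Norm of the orthogonal projection of x onto the class subspace."""
    return np.linalg.norm(B.T @ x)

x_in = X[:, 0]  # a training sample, contained in the subspace
assert np.isclose(similarity(x_in, B), np.linalg.norm(x_in))
```

A query far from the subspace receives a small projection norm, which is the usual classification criterion in the subspace method.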
Notes
1. Mathematical tools in the linear subspace method related to this work are summarised in the Appendix.
2. Each orthogonal projection in a linear space defines the corresponding linear subspace. We call this linear subspace the linear subspace defined by the orthogonal projection.
3. If \(\langle \boldsymbol{F},\boldsymbol{G}\rangle =0\), then \(\boldsymbol{F}\) and \(\boldsymbol{G}\) are orthogonal. Since
$$ \langle \boldsymbol{P}_X\boldsymbol{F}(t)\boldsymbol{Q}_Y^*, \boldsymbol{P}_X^{\perp }\boldsymbol{F}(t)(\boldsymbol{Q}_Y^{\perp })^*\rangle = \langle \boldsymbol{F}(t), \boldsymbol{P}_X^*\boldsymbol{P}_X^{\perp }\boldsymbol{F}(t)(\boldsymbol{Q}_Y^{\perp })^*\boldsymbol{Q}_Y \rangle =\langle \boldsymbol{F}(t), \boldsymbol{O}\boldsymbol{F}(t)\boldsymbol{O}\rangle =0, $$
\(\boldsymbol{P}_X\boldsymbol{F}(t)\boldsymbol{Q}_Y^*\) and \(\boldsymbol{P}_X^{\perp }\boldsymbol{F}(t)(\boldsymbol{Q}_Y^{\perp })^*\) are orthogonal, and \(\boldsymbol{P}_X^{\perp }\boldsymbol{F}(t)(\boldsymbol{Q}_Y^{\perp })^*\) is an element of the bilinear space \(\mathbf{\Pi }_X^\perp \times \mathbf{\Pi }_Y^\perp \).
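The orthogonality claimed in this note can be checked numerically. The sketch below builds random projections and a random sample of \(\boldsymbol{F}(t)\); all dimensions are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def proj(M):
    """Orthogonal projection onto the column space of M."""
    Q, _ = np.linalg.qr(M)
    return Q @ Q.T

PX = proj(rng.standard_normal((6, 2)))   # P_X
QY = proj(rng.standard_normal((5, 3)))   # Q_Y
PXp = np.eye(6) - PX                     # P_X^perp
QYp = np.eye(5) - QY                     # Q_Y^perp

F = rng.standard_normal((6, 5))          # a sample F(t)
A = PX @ F @ QY.T                        # P_X F(t) Q_Y^*
B = PXp @ F @ QYp.T                      # P_X^perp F(t) (Q_Y^perp)^*

# The Frobenius inner product <A, B> vanishes because P_X^* P_X^perp = O
# and (Q_Y^perp)^* Q_Y = O.
assert abs(np.sum(A * B)) < 1e-10
```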
Appendix
Let \(\boldsymbol{P}_1\) and \(\boldsymbol{P}_2\) be the orthogonal projections to the linear subspaces \(\mathbf{\Pi }_1\) and \(\mathbf{\Pi }_2\), respectively. Setting \(\sigma _i\) to be the singular values of \(\boldsymbol{P}_1\boldsymbol{P}_2\) and \(\boldsymbol{P}_2\boldsymbol{P}_1\) such that \(0\le \sigma _1<\sigma _2<\cdots <\sigma _k\), the principal angles between \(\mathbf{\Pi }_1\) and \(\mathbf{\Pi }_2\) are \(\cos ^{-1}\sigma _1>\cos ^{-1}\sigma _2>\cdots > \cos ^{-1}\sigma _k\), and the Grassmann distance along the geodesic between \(\mathbf{\Pi }_1\) and \(\mathbf{\Pi }_2\) is
$$ d(\mathbf{\Pi }_1,\mathbf{\Pi }_2)=\sqrt{\sum _{i=1}^{k}\theta _i^2} $$
for \(\theta _i=\cos ^{-1}\sigma _i\). For orthonormal bases \(\{\boldsymbol{u}_{1i}\}_{i=1}^m\) and \(\{\boldsymbol{u}_{2j}\}_{j=1}^n\),
$$ \boldsymbol{P}_1=\sum _{i=1}^{m}\boldsymbol{u}_{1i}\boldsymbol{u}_{1i}^{*},\qquad \boldsymbol{P}_2=\sum _{j=1}^{n}\boldsymbol{u}_{2j}\boldsymbol{u}_{2j}^{*} $$
are the orthogonal projections to the linear subspaces \( \mathbf{\Pi }_1=\mathcal {L}(\{\boldsymbol{u}_{1i}\}_{i=1}^m) \) and \( \mathbf{\Pi }_2=\mathcal {L}(\{\boldsymbol{u}_{2j}\}_{j=1}^n) \), respectively. Since
$$ \boldsymbol{P}_1\boldsymbol{P}_2=\sum _{i=1}^{m}\sum _{j=1}^{n}(\boldsymbol{u}_{1i}^{*}\boldsymbol{u}_{2j})\boldsymbol{u}_{1i}\boldsymbol{u}_{2j}^{*}, $$
that is,
$$ \boldsymbol{P}_1\boldsymbol{P}_2=\sum _{i=1}^{k}\sigma _i\boldsymbol{u}_{1i}\boldsymbol{u}_{2i}^{*} $$
for the principal vectors of the pair of subspaces, the principal angles between \(\mathbf{\Pi }_1\) and \(\mathbf{\Pi }_2\) are \(\angle [\boldsymbol{u}_{1i}, \boldsymbol{u}_{2i}] =\cos ^{-1}\sqrt{|\boldsymbol{u}_{1i}^{*}\boldsymbol{u}_{2i}|^2} \).
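The relation between the singular values of \(\boldsymbol{P}_1\boldsymbol{P}_2\) and those of \(\boldsymbol{U}_1^*\boldsymbol{U}_2\) for orthonormal basis matrices, and the resulting Grassmann distance, can be verified numerically; the subspace dimensions below are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# Orthonormal bases of two 3-dimensional subspaces of R^8.
U1, _ = np.linalg.qr(rng.standard_normal((8, 3)))
U2, _ = np.linalg.qr(rng.standard_normal((8, 3)))

# Cosines of the principal angles: singular values of U1^* U2, which are
# also the nonzero singular values of P1 P2.
sigma = np.linalg.svd(U1.T @ U2, compute_uv=False)
theta = np.arccos(np.clip(sigma, -1.0, 1.0))

# Grassmann distance along the geodesic.
d = np.sqrt(np.sum(theta**2))

P1, P2 = U1 @ U1.T, U2 @ U2.T
sigma_full = np.linalg.svd(P1 @ P2, compute_uv=False)
assert np.allclose(sigma_full[:3], sigma)          # nonzero singular values agree
assert np.allclose(sigma_full[3:], 0.0, atol=1e-10)
```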
Let \(\{\boldsymbol{u}_{ji}\}_{i=1}^{k(j)}\) be orthonormal bases of the linear subspaces \(\mathbf{\Pi }_j\) and \(\boldsymbol{P}_j\) be the orthogonal projection to \(\mathbf{\Pi }_j\) for \(j=1,2,\cdots , m\). If \(\boldsymbol{w}\in \mathbf{\Pi }_j\), then \(\boldsymbol{P}_j\boldsymbol{w}=\boldsymbol{w}\) for \(j=1,2,\cdots ,m\). Therefore, if \(|\boldsymbol{P}_j\boldsymbol{w}-\boldsymbol{w}|^2\ll \epsilon ^2\) for all \(j\), then \(\boldsymbol{w}\) lies in a space close to all of \(\{\mathbf{\Pi }_j\}_{j=1}^m\). These geometric conditions imply that the minimiser of
$$ J(\boldsymbol{w})=\sum _{j=1}^{m}|\boldsymbol{P}_j\boldsymbol{w}-\boldsymbol{w}|^2 $$
is close to all \(\{\mathbf{\Pi }_j\}_{j=1}^m\). The minimisation of \(J(\boldsymbol{w})\) with respect to \(|\boldsymbol{w}|=1\) derives the eigenvalue equation
$$ \boldsymbol{A}\boldsymbol{w}=\lambda \boldsymbol{w}, \qquad \boldsymbol{A}=\sum _{j=1}^{m}\boldsymbol{P}_j. $$
Since
$$ \boldsymbol{A}^{*}=\boldsymbol{A}, \qquad \boldsymbol{w}^{*}\boldsymbol{A}\boldsymbol{w}=\sum _{j=1}^{m}|\boldsymbol{P}_j\boldsymbol{w}|^2\ge 0, $$
the relations
$$ \lambda _i\ge 0, \qquad \boldsymbol{w}_i^{*}\boldsymbol{w}_j=0 \ \ (\lambda _i\ne \lambda _j) $$
are satisfied for the eigenpairs \(\boldsymbol{A}\boldsymbol{w}_i=\lambda _i\boldsymbol{w}_i\). Therefore, the eigenvectors of \(\boldsymbol{A}=\sum _{j=1}^m\boldsymbol{P}_j\) for different eigenvalues are mutually orthogonal and all eigenvalues are nonnegative. Therefore, setting
\(\{\boldsymbol{w}_i\}_{i=1}^{k+1}\) to be the eigenvectors of \(\boldsymbol{A}\) corresponding to the eigenvalues \(\lambda _1\ge \lambda _2\ge \cdots \ge \lambda _{k+1}\) for \(k+1=\min \{ k(1),k(2),\cdots ,k(m)\}\),
$$ \boldsymbol{P}_{\wedge }=\sum _{\lambda _i>m-\epsilon }\boldsymbol{w}_i\boldsymbol{w}_i^{*} $$
is the orthogonal projection to the linear subspace
$$ \mathbf{\Pi }_{\wedge }=\mathcal {L}(\{ \boldsymbol{w}_i \, |\, \lambda _i>m-\epsilon \}) $$
for \(0<\epsilon \ll 1\). The orthogonal projection \(\boldsymbol{P}_{\wedge }\) is the canonical projection to the common linear subspace of \(\{\mathbf{\Pi }_j\}_{j=1}^m\). Therefore, for \(\boldsymbol{x}\in \mathbf{\Pi }_j\), the component \(\boldsymbol{y}=(\boldsymbol{I}-\boldsymbol{P}_{\wedge })\boldsymbol{x}\) is excluded from \(\mathbf{\Pi }_{\wedge }\).
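The extraction of the common subspace from the eigenvectors of \(\boldsymbol{A}=\sum _j\boldsymbol{P}_j\) can be sketched as follows. The construction plants a shared direction \(\boldsymbol{e}_1\) in every subspace; the dimensions and the number of subspaces are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

n, m = 6, 3
e1 = np.eye(n)[:, :1]  # common direction shared by all subspaces

def proj(M):
    """Orthogonal projection onto the column space of M."""
    Q, _ = np.linalg.qr(M)
    return Q @ Q.T

# m two-dimensional subspaces, each spanned by e1 and a random direction.
Ps = [proj(np.hstack([e1, rng.standard_normal((n, 1))])) for _ in range(m)]

A = sum(Ps)                     # A = sum_j P_j, Hermitian and nonnegative
lam, W = np.linalg.eigh(A)      # eigenvalues in ascending order

eps = 1e-6
common = W[:, lam > m - eps]    # eigenvectors with eigenvalue close to m
P_wedge = common @ common.T     # canonical projection P_wedge

# e1 lies in every Pi_j, so the common subspace retains it.
assert np.allclose(P_wedge @ e1, e1)
```

A vector \(\boldsymbol{w}\) lying in every subspace satisfies \(\boldsymbol{A}\boldsymbol{w}=m\boldsymbol{w}\), so thresholding the eigenvalues at \(m-\epsilon \) isolates the common directions.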
Let \(\mathbf{\Pi }\) and \(\mathbf{\Omega }\) be a pair of linear subspaces in \(\mathbf{C}^n\) with the condition \(\mathbf{\Pi }\cap \mathbf{\Omega }=\{\boldsymbol{o}\}\). For \(\forall \boldsymbol{p}\in \mathbf{\Pi }\) such that \(\boldsymbol{p}\ne \boldsymbol{o}\) and \(\forall \boldsymbol{q}\in \mathbf{\Omega }\) such that \(\boldsymbol{q}\ne \boldsymbol{o}\), if \(\boldsymbol{p}^*\boldsymbol{q}=0\), then \(\mathbf{\Pi }\) and \(\mathbf{\Omega }\) are orthogonal complements of each other. Therefore, the relations \(\boldsymbol{p}\notin \mathbf{\Omega }\) and \(\boldsymbol{q}\notin \mathbf{\Pi }\) are satisfied if \(\boldsymbol{p}\in \mathbf{\Pi }\) and \(\boldsymbol{q}\in \mathbf{\Omega }\). We set \(\mathbf{\Omega }^\perp :=\mathbf{\Pi }\) and \(\mathbf{\Pi }^\perp :=\mathbf{\Omega }\). Assuming \(\mathbf{\Pi }\oplus \mathbf{\Omega }=\mathbf{C}^n\), \(\boldsymbol{x}\in \mathbf{C}^n\) is uniquely expressed as \(\boldsymbol{x}=\boldsymbol{p}+\boldsymbol{q}\) for \(\boldsymbol{p}\in \mathbf{\Pi }\) and \(\boldsymbol{q}\in \mathbf{\Omega }\). Therefore, using the orthogonal projections \(\boldsymbol{P}_\mathbf{\Pi }\) and \(\boldsymbol{P}_\mathbf{\Omega }=\boldsymbol{I}-\boldsymbol{P}_\mathbf{\Pi }=\boldsymbol{P}_\mathbf{\Pi }^\perp \) to \(\mathbf{\Pi }\) and \(\mathbf{\Omega }\), respectively, \(\boldsymbol{x}\in \mathbf{C}^n\) is uniquely decomposed into \(\boldsymbol{P}_\mathbf{\Pi }\boldsymbol{x}\) and \(\boldsymbol{P}_\mathbf{\Omega }\boldsymbol{x}\).
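The unique orthogonal decomposition \(\boldsymbol{x}=\boldsymbol{P}_\mathbf{\Pi }\boldsymbol{x}+\boldsymbol{P}_\mathbf{\Omega }\boldsymbol{x}\) can be checked directly; the real case and the dimensions below are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(4)

# Orthonormal basis of a 2-dimensional subspace Pi of R^5.
U, _ = np.linalg.qr(rng.standard_normal((5, 2)))
P = U @ U.T                 # P_Pi
Pp = np.eye(5) - P          # P_Omega = I - P_Pi

x = rng.standard_normal(5)
p, q = P @ x, Pp @ x        # the unique decomposition x = p + q

assert np.allclose(p + q, x)
assert abs(p @ q) < 1e-12   # p and q are orthogonal
```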
© 2021 Springer Nature Switzerland AG
Cite this paper
Mochizuki, E., Sone, H., Itoh, H., Imiya, A. (2021). Subspace Discrimination Method for Images Using Singular Value Decomposition. In: Bebis, G., et al. Advances in Visual Computing. ISVC 2021. Lecture Notes in Computer Science(), vol 13018. Springer, Cham. https://doi.org/10.1007/978-3-030-90436-4_23
Print ISBN: 978-3-030-90435-7
Online ISBN: 978-3-030-90436-4