Classification by nearness in complementary subspaces

  • Short Paper

Pattern Analysis and Applications

Abstract

This study introduces a classifier founded on the k-nearest neighbours in complementary subspaces (NCS). The global space spanned by all training samples can be decomposed, with respect to each class, into the direct sum of two subspaces: the projections of that class's samples into one subspace are nonzero, while their projections into the other are zero. A query sample is projected into the two subspaces of each class. In each subspace, the distance from the projection to the mean of its k-nearest neighbours is computed, and the final classification rule is designed in terms of these two distances. Exploiting the geometric meaning of the Gram determinant and the kernel trick, the classifier is naturally implemented in kernel space. Experimental results on one synthetic, 13 IDA binary-class, and five UCI multi-class data sets show that NCS compares favourably with competing classifiers founded on the k-nearest neighbours or the nearest subspace on almost all data sets. The classifier handles multi-class problems directly, and its performance is promising.
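To make the decision rule concrete, the following NumPy sketch mimics the scheme described above. It is illustrative only, not the paper's implementation: it assumes orthogonal projection onto the span of each class's training samples, takes the complementary component as the residual within the input space, forms the neighbour pools from all projected training samples, and combines the two distances as a simple ratio; the names ncs_score and knn_mean_dist are hypothetical.

import numpy as np

def knn_mean_dist(v, pool, k):
    # Distance from v to the mean of its k nearest neighbours in pool.
    d = np.linalg.norm(pool - v, axis=1)
    nearest = pool[np.argsort(d)[:k]]
    return np.linalg.norm(v - nearest.mean(axis=0))

def ncs_score(x, X_class, X_all, k=3, eps=1e-12):
    # Orthogonal projector onto the span of the class's samples
    # (a hypothetical stand-in for the paper's subspace decomposition).
    B = X_class.T                        # d x m, samples as columns
    P = B @ np.linalg.pinv(B)            # d x d symmetric projector
    x_in, x_out = P @ x, x - P @ x       # components in the two subspaces
    pool_in = X_all @ P                  # P symmetric: projects each row
    pool_out = X_all - pool_in
    d_in = knn_mean_dist(x_in, pool_in, k)
    d_out = knn_mean_dist(x_out, pool_out, k)
    return d_in / (d_out + eps)          # one plausible way to combine them

def ncs_predict(x, classes, k=3):
    # classes: dict mapping label -> (m_c x d) array of training samples
    X_all = np.vstack(list(classes.values()))
    return min(classes, key=lambda c: ncs_score(x, classes[c], X_all, k))

Given classes = {0: X0, 1: X1} with one (m_c x d) sample array per label and a query x, ncs_predict(x, classes) returns the label whose in-subspace distance is smallest relative to its complementary one.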


References

  1. Cover TM, Hart PE (1967) Nearest neighbor pattern classification. IEEE Trans Inf Theory 13:21–27

  2. Veenman CJ, Reinders MJ (2005) The nearest subclass classifier: a compromise between the nearest mean and nearest neighbor classifier. IEEE Trans Pattern Anal Mach Intell 27:1417–1429

  3. Cevikalp H, Larlus D, Douze M, Jurie F (2007) Local subspace classifiers: linear and nonlinear approaches. In: 2007 IEEE workshop on machine learning for signal processing, Thessaloniki, Greece

  4. García-Pedrajas N (2009) Constructing ensembles of classifiers by means of weighted instance selection. IEEE Trans Neural Netw 20:258–277

  5. Li B, Chen YW, Chen YQ (2008) The nearest neighbor algorithm of local probability centers. IEEE Trans Syst Man Cybern B 38:141–154

  6. Jiang L, Cai Z, Wang D, Jiang S (2007) Survey of improving k-nearest-neighbor for classification. In: Fourth international conference on fuzzy systems and knowledge discovery (FSKD 2007), vol 1, pp 679–683

  7. Fayed HA, Atiya AF (2009) A novel template reduction approach for the k-nearest neighbor method. IEEE Trans Neural Netw 20:890–896

  8. Balachander T, Kothari R (1999) Kernel based subspace pattern classification. In: Proceedings of the international joint conference on neural networks, vol 5, pp 3119–3122

  9. Balachander T, Kothari R (1999) Introducing locality and softness in subspace classification. Pattern Anal Appl 2(1):53–58

  10. Balachander T, Kothari R (1999) Oriented soft localized subspace classification. In: Proceedings of the 1999 IEEE international conference on acoustics, speech, and signal processing, vol 2, pp 1017–1020

  11. Nalbantov GI, Groenen PJF, Bioch JC (2007) Nearest convex hull classification. Tech. Rep. EI 2006-50, Econometric Institute

  12. Weinberger KQ, Blitzer J, Saul LK (2006) Distance metric learning for large margin nearest neighbor classification. In: NIPS. MIT Press

  13. Kumar MP, Torr P, Zisserman A (2007) An invariant large margin nearest neighbour classifier. In: IEEE 11th international conference on computer vision (ICCV 2007), pp 1–8

  14. Vincent P, Bengio Y (2001) K-local hyperplane and convex distance nearest neighbor algorithms. In: NIPS

  15. Cevikalp H, Triggs B, Polikar R (2008) Nearest hyperdisk methods for high-dimensional classification. In: Cohen WW, McCallum A, Roweis ST (eds) ICML, ACM international conference proceeding series, vol 307, Helsinki, Finland. ACM, pp 120–127

  16. Samet H (2008) K-nearest neighbor finding using MaxNearestDist. IEEE Trans Pattern Anal Mach Intell 30:243–252

  17. Vapnik VN (1998) Statistical learning theory. Wiley-Interscience, New York

  18. Liu Y, Ge SS, Li C, You Z (2011) k-NS: a classifier by the distance to the nearest subspace. IEEE Trans Neural Netw 22:1017–1020

  19. Liu Y, Cao X, Liu JG (2011) Classification using distances from samples to linear manifolds. Pattern Anal Appl. http://www.springerlink.com/content/k261001001288257/

  20. Barth N (1999) The Gramian and k-volume in n-space: some classical results in linear algebra. J Young Investig 2. Accessed 19 July 2011

  21. Meyer CD (2001) Matrix analysis and applied linear algebra. SIAM, Philadelphia

  22. Horn RA, Johnson CR (1986) Matrix analysis. Cambridge University Press, Cambridge

  23. Laaksonen J (1997) Subspace classifiers in recognition of handwritten digits. PhD thesis, Helsinki University of Technology, Finland

  24. Bertero M (1986) Regularization methods for linear inverse problems. In: Lecture notes in mathematics. Springer, Berlin, pp 52–112

  25. Wieland A (1995) Twin spiral dataset. http://www.cs.cmu.edu/afs/cs/project/ai-repository/ai/areas/neural/bench/cmu/bench.tgz. Accessed 19 July 2011

  26. Asuncion A, Newman D (2007) UCI machine learning repository. Accessed 19 July 2011

  27. Keerthi SS, Lin C-J (2003) Asymptotic behaviors of support vector machines with Gaussian kernel. Neural Comput 15:1667–1689

  28. Rätsch G, Onoda T, Müller K-R (2001) Soft margins for AdaBoost. Mach Learn 43:287–320

  29. Mangasarian OL, Wild EW (2006) Multisurface proximal support vector machine classification via generalized eigenvalues. IEEE Trans Pattern Anal Mach Intell 28:69–74

  30. Mu T, Nandi AK (2009) Multiclass classification based on extended support vector data description. IEEE Trans Syst Man Cybern B 39:1206–1216

  31. Schölkopf B, Smola AJ (2001) Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press, Cambridge

  32. Liu Y, You Z, Cao L (2006) A novel and quick SVM-based multi-class classifier. Pattern Recognit 39:2258–2264


Author information

Corresponding author

Correspondence to Yiguang Liu.

Appendices

Appendix 1

Proof of (7)

From [22], we have the block-determinant identity

$$ \left|\begin{array}{cc} A & B \\ C & D \\ \end{array} \right|=|A||D-CA^{-1}B| $$

whenever $A^{-1}$ exists. Applying this identity to the partition

$$ G_{\kappa}({\mathcal{X}}_{1,m})= \left[ \begin{array}{cc} G_{\kappa}([{\mathcal{X}}_{1,i-1},{\mathcal{X}}_{i+1,m}])& \kappa ([{\mathcal{X}}_{1,i-1},{\mathcal{X}}_{i+1,m}],x_{i}) \\ \kappa(x_{i},[{\mathcal{X}}_{1,i-1},{\mathcal{X}}_{i+1,m}]) & G_{\kappa}(x_{i},x_{i}) \\ \end{array} \right] $$

we have

$$ \begin{aligned} |G_{\kappa}({\mathcal{X}}_{1,m})|=&\,|G_{\kappa}([{\mathcal{X}}_{1,i-1},{\mathcal{X}}_{i+1,m}])|\,[G_{\kappa}(x_{i},x_{i})-\kappa(x_{i},[{\mathcal{X}}_{1,i-1},{\mathcal{X}}_{i+1,m}])\\ &\,G_{\kappa}^{-1}([{\mathcal{X}}_{1,i-1},{\mathcal{X}}_{i+1,m}])\kappa([{\mathcal{X}}_{1,i-1},{\mathcal{X}}_{i+1,m}],x_{i})] \end{aligned} $$
(34)

Substituting (34) into (6) yields (7).
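The recursion (34) is easy to check numerically. Below is a minimal sketch with a linear kernel; the helper gram and the concrete sizes are assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 6))          # four samples in R^6

def gram(A, B):
    # Gram matrix for the linear kernel kappa(a, b) = <a, b>
    return A @ B.T

i = 2                                    # index of the sample split off
rest = np.delete(X, i, axis=0)

G = gram(X, X)                           # G_kappa of all samples
A = gram(rest, rest)                     # Gram matrix without x_i
b = gram(rest, X[i:i+1])                 # kappa([...], x_i), a column
schur = gram(X[i:i+1], X[i:i+1]) - b.T @ np.linalg.inv(A) @ b

# Reordering rows and columns by the same permutation leaves the
# determinant unchanged, so |G| = |A| * (Schur complement), as in (34).
print(np.isclose(np.linalg.det(G), np.linalg.det(A) * schur.item()))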

Appendix 2

Proof of (13)

From (11) and (12), we have

$$ \begin{aligned} &\alpha^{T}\psi^{T}([{\mathcal{X}}_{1,i-1},{\mathcal{X}}_{i+1,m}])\psi([{\mathcal{X}}_{1,i-1},{\mathcal{X}}_{i+1,m}])\alpha\\ &\quad=\alpha^{T}G_{\kappa}([{\mathcal{X}}_{1,i-1},{\mathcal{X}}_{i+1,m}])\alpha\\ &\quad=\|\psi^{(i)}(x_{i})\|^{2}\\ &\quad=\kappa(x_{i},[{\mathcal{X}}_{1,i-1},{\mathcal{X}}_{i+1,m}])G_{\kappa}^{-1}([{\mathcal{X}}_{1,i-1},{\mathcal{X}}_{i+1,m}])\kappa([{\mathcal{X}}_{1,i-1},{\mathcal{X}}_{i+1,m}],x_{i})\\ &\quad=\left[G_{\kappa}^{-1}([{\mathcal{X}}_{1,i-1},{\mathcal{X}}_{i+1,m}])\kappa([{\mathcal{X}}_{1,i-1},{\mathcal{X}}_{i+1,m}],x_{i})\right]^{T}G_{\kappa}([{\mathcal{X}}_{1,i-1},{\mathcal{X}}_{i+1,m}])\\ &\qquad\times G_{\kappa}^{-1}([{\mathcal{X}}_{1,i-1},{\mathcal{X}}_{i+1,m}])\kappa([{\mathcal{X}}_{1,i-1},{\mathcal{X}}_{i+1,m}],x_{i}) \end{aligned} $$
(35)

From (35), we have

$$ \begin{aligned} 0=&\alpha^{T}G_{\kappa}([{\mathcal{X}}_{1,i-1},{\mathcal{X}}_{i+1,m}]) \alpha-\left[G_{\kappa}^{-1}([{\mathcal{X}}_{1,i-1}, {\mathcal{X}}_{i+1,m}])\kappa([{\mathcal{X}}_{1,i-1},{\mathcal{X}}_{i+1,m}], x_{i})\right]^{T}\\ &G_{\kappa}([{\mathcal{X}}_{1,i-1},{\mathcal{X}}_{i+1,m}]) G_{\kappa}^{-1}([{\mathcal{X}}_{1,i-1},{\mathcal{X}}_{i+1,m}])\kappa ([{\mathcal{X}}_{1,i-1},{\mathcal{X}}_{i+1,m}],x_{i}) \end{aligned} $$

i.e.

$$ \begin{aligned} 0&=[\alpha+G_{\kappa}^{-1}([{\mathcal{X}}_{1,i-1},{\mathcal{X}}_{i+1,m}]) \kappa([{\mathcal{X}}_{1,i-1},{\mathcal{X}}_{i+1,m}],x_{i})]^{T}\\ &\quad \times G_{\kappa}([{\mathcal{X}}_{1,i-1},{\mathcal{X}}_{i+1,m}]) [\alpha-G_{\kappa}^{-1}([{\mathcal{X}}_{1,i-1},{\mathcal{X}}_{i+1,m}]) \kappa([{\mathcal{X}}_{1,i-1},{\mathcal{X}}_{i+1,m}],x_{i})] \end{aligned} $$

so α is given by

$$ \alpha=G_{\kappa}^{-1}([{\mathcal{X}}_{1,i-1},{\mathcal{X}}_{i+1,m}]) \kappa([{\mathcal{X}}_{1,i-1},{\mathcal{X}}_{i+1,m}],x_{i}) $$

and (13) is proved.
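As a numerical sanity check of the closed form for α, the sketch below uses a linear kernel; the variable names are hypothetical.

import numpy as np

rng = np.random.default_rng(1)
rest = rng.standard_normal((3, 6))       # training samples other than x_i
x_i = rng.standard_normal(6)

G = rest @ rest.T                        # Gram matrix of the remaining samples
kvec = rest @ x_i                        # kappa([...], x_i)

alpha = np.linalg.solve(G, kvec)         # alpha = G^{-1} kappa, as in (13)

# alpha^T G alpha should equal the squared norm of the projection of
# psi(x_i) onto the span of the remaining samples, kappa^T G^{-1} kappa.
proj = rest.T @ alpha                    # the projection itself
print(np.isclose(alpha @ G @ alpha, kvec @ alpha))
print(np.isclose(proj @ proj, alpha @ G @ alpha))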


About this article

Cite this article

Yang, M., Liu, Y., Zhong, B. et al. Classification by nearness in complementary subspaces. Pattern Anal Applic 16, 609–622 (2013). https://doi.org/10.1007/s10044-012-0308-4
