On Using Prototype Reduction Schemes and Classifier Fusion Strategies to Optimize Kernel-Based Nonlinear Subspace Methods

  • Conference paper
AI 2003: Advances in Artificial Intelligence (AI 2003)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2903)

Abstract

In Kernel-based Nonlinear Subspace (KNS) methods, the length of the projections onto the principal component directions in the feature space is computed using a kernel matrix, K, whose dimension equals the number of sample data points. Clearly, this is problematic, especially for large data sets. To solve this problem, in [9] we proposed a method of reducing the size of the kernel by invoking a Prototype Reduction Scheme (PRS) to reduce the data into a smaller representative subset, rather than defining it in terms of the entire data set. In this paper we propose a new KNS classification method that further enhances the efficiency and accuracy of the results presented in [9]. By sub-dividing the data into smaller subsets, we propose to employ a PRS as a pre-processing module to yield more refined representative prototypes. Thereafter, a Classifier Fusion Strategy (CFS) is invoked as a post-processing module to combine the individual KNS classification results into a consensus decision. Our experimental results demonstrate that the proposed mechanism significantly reduces both the prototype extraction time and the computation time without sacrificing classification accuracy. In particular, the results demonstrate that the computational advantage for large data sets is significant when a parallel programming philosophy is applied.
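To make the pipeline concrete, the following minimal Python sketch (not the authors' implementation) illustrates the idea: one kernel (RBF) subspace is built per class from a reduced prototype set, a query is assigned to the class with the largest feature-space projection length, and the decisions of the per-subset classifiers are fused by majority vote. The k-means prototype extraction is a hypothetical stand-in for a generic PRS, the kernel PCA is uncentered for brevity, and all names and parameter values (KNSClassifier, prs_kmeans, fused_predict, gamma, n_components) are illustrative assumptions rather than those of the paper.

import numpy as np
from sklearn.cluster import KMeans

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF kernel between the rows of A and the rows of B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

class KNSClassifier:
    # One kernel principal subspace per class; a query is assigned to the
    # class onto whose subspace its feature-space projection is longest.
    # (Uncentered kernel PCA; the centering step is omitted for brevity.)
    def __init__(self, n_components=5, gamma=1.0):
        self.n_components, self.gamma = n_components, gamma

    def fit(self, X, y):
        self.models_ = {}
        for c in np.unique(y):
            Xc = X[y == c]
            K = rbf_kernel(Xc, Xc, self.gamma)      # n_c x n_c kernel matrix
            lam, A = np.linalg.eigh(K)              # eigenvalues, ascending
            lam, A = lam[::-1], A[:, ::-1]          # reorder: descending
            m = min(self.n_components, int(np.sum(lam > 1e-10)))
            self.models_[c] = (Xc, A[:, :m] / np.sqrt(lam[:m]))
        return self

    def predict(self, X):
        classes, scores = [], []
        for c, (Xc, A) in self.models_.items():
            P = rbf_kernel(X, Xc, self.gamma) @ A   # projections onto the subspace
            classes.append(c)
            scores.append(np.sum(P**2, axis=1))     # squared projection length
        return np.asarray(classes)[np.argmax(np.stack(scores, 1), axis=1)]

def prs_kmeans(X, y, n_prototypes):
    # Hypothetical stand-in for a PRS: k-means centroids per class
    # serve as the smaller representative prototype set.
    Xp, yp = [], []
    for c in np.unique(y):
        km = KMeans(n_clusters=n_prototypes, n_init=10).fit(X[y == c])
        Xp.append(km.cluster_centers_)
        yp.append(np.full(n_prototypes, c))
    return np.vstack(Xp), np.concatenate(yp)

def fused_predict(classifiers, X):
    # CFS stand-in: majority vote over the subset classifiers' decisions
    # (assumes non-negative integer class labels).
    votes = np.stack([clf.predict(X) for clf in classifiers], axis=1).astype(int)
    return np.array([np.bincount(row).argmax() for row in votes])

# Illustrative use: sub-divide the data, reduce each subset with the PRS,
# train one KNS classifier per subset, and fuse their decisions.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (300, 2)), rng.normal(3, 1, (300, 2))])
y = np.concatenate([np.zeros(300, int), np.ones(300, int)])
classifiers = []
for idx in np.array_split(rng.permutation(len(X)), 3):
    Xp, yp = prs_kmeans(X[idx], y[idx], n_prototypes=20)
    classifiers.append(KNSClassifier(n_components=5, gamma=0.5).fit(Xp, yp))
print("fused accuracy:", (fused_predict(classifiers, X) == y).mean())

Because each subset classifier is trained on its own reduced prototype set, the fit calls in the loop are independent and could be dispatched in parallel, which is the source of the computational advantage reported for large data sets.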

The work of the first author was done while visiting Carleton University, Ottawa, Canada. The first author was partially supported by KOSEF, the Korea Science and Engineering Foundation, and the second author was partially supported by NSERC, the Natural Sciences and Engineering Research Council of Canada.


References

  1. Achlioptas, D., McSherry, F.: Fast computation of low-rank approximations. In: Proceedings of the Thirty-Third Annual ACM Symposium on the Theory of Computing, Hersonissos, Greece, pp. 611–618. ACM Press, New York (2001)

  2. Achlioptas, D., McSherry, F., Schölkopf, B.: Sampling techniques for kernel methods. In: Advances in Neural Information Processing Systems 14, pp. 335–342. MIT Press, Cambridge (2002)

  3. Bezdek, J.C., Kuncheva, L.I.: Nearest prototype classifier designs: An experimental study. International Journal of Intelligent Systems 16(12), 1445–1473 (2001)

  4. Dasarathy, B.V.: Nearest Neighbor (NN) Norms: NN Pattern Classification Techniques. IEEE Computer Society Press, Los Alamitos (1991)

  5. Hart, P.E.: The condensed nearest neighbor rule. IEEE Trans. Inform. Theory IT-14, 515–516 (1968)

  6. Kim, S.-W., Oommen, B.J.: On using prototype reduction schemes and classifier fusion strategies to optimize kernel-based nonlinear subspace methods (unabridged version of this paper)

  7. Kim, S.-W., Oommen, B.J.: Enhancing prototype reduction schemes with LVQ3-type algorithms. Pattern Recognition 36(5), 1083–1093 (2003)

  8. Kim, S.-W., Oommen, B.J.: Recursive prototype reduction schemes applicable for large data sets. In: Caelli, T.M., Amin, A., Duin, R.P.W., Kamel, M.S., de Ridder, D. (eds.) SPR 2002 and SSPR 2002. LNCS, vol. 2396, pp. 528–537. Springer, Heidelberg (2002)

  9. Kim, S.-W., Oommen, B.J.: On using prototype reduction schemes to optimize kernel-based nonlinear subspace methods (submitted for publication). A preliminary version appears in: Proceedings of AI 2002, the 2002 Australian Joint Conference on Artificial Intelligence, Canberra, Australia, December 2002, pp. 155–166 (2002)

  10. Kittler, J., Hatef, M., Duin, R.P.W., Matas, J.: On combining classifiers. IEEE Trans. Pattern Anal. and Machine Intell. PAMI-20(3), 226–239 (1998)

  11. Kuncheva, L.I.: A theoretical study on six classifier fusion strategies. IEEE Trans. Pattern Anal. and Machine Intell. PAMI-24(2), 281–286 (2002)

  12. Maeda, E., Murase, H.: Multi-category classification by kernel based nonlinear subspace method. In: Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 1999). IEEE Press, Los Alamitos (1999)

  13. Oja, E.: Subspace Methods of Pattern Recognition. Research Studies Press, Hertfordshire (1983)

  14. Sakano, H., Mukawa, N., Nakamura, T.: Kernel mutual subspace method and its application for object recognition. IEICE Trans. Information & Systems J84-D-II(8), 1549–1556 (2001) (in Japanese)

  15. Schölkopf, B., Smola, A.J., Müller, K.-R.: Nonlinear component analysis as a kernel eigenvalue problem. Neural Comput. 10, 1299–1319 (1998)

  16. Smola, A.J., Schölkopf, B.: Sparse greedy matrix approximation for machine learning. In: Proceedings of ICML 2000, Bochum, Germany, pp. 911–918. Morgan Kaufmann, San Francisco (2000)

  17. Tipping, M.: Sparse kernel principal component analysis. In: Advances in Neural Information Processing Systems 13, pp. 633–639. MIT Press, Cambridge (2001)

  18. Tsuda, K.: Subspace method in the Hilbert space. IEICE Trans. Information & Systems J82-D-II(4), 592–599 (1999) (in Japanese)

Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kim, S.-W., Oommen, B.J. (2003). On Using Prototype Reduction Schemes and Classifier Fusion Strategies to Optimize Kernel-Based Nonlinear Subspace Methods. In: Gedeon, T.D., Fung, L.C.C. (eds) AI 2003: Advances in Artificial Intelligence. AI 2003. Lecture Notes in Computer Science, vol 2903. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-24581-0_67

  • DOI: https://doi.org/10.1007/978-3-540-24581-0_67

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-20646-0

  • Online ISBN: 978-3-540-24581-0
