A New Optimal Classifier Architecture to Avoid the Dimensionality Curse

  • Conference paper

Pattern Recognition and Image Analysis (IbPRIA 2003)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2652)

Abstract

In this paper, we present the theoretical foundation for optimal classification using class-specific features and provide examples of its use. A new PDF projection theorem makes it possible to project probability density functions from a low-dimensional feature space back to the raw data space. An M-ary classifier is constructed by estimating the PDFs of class-specific features and then transforming each PDF back to the raw data space, where they can be compared fairly. Although statistical sufficiency is not a requirement, the classifier thus constructed becomes equivalent to the optimal Bayes classifier if the features meet sufficiency requirements individually for each class. This classifier is completely modular and avoids the dimensionality curse associated with large, complex problems. By recursive application of the projection theorem, it is possible to analyze complex signal processing chains. The feature and model selection process can be automated by direct comparison of log-likelihood values on the common raw data domain. Pre-tested modules are available for a wide range of features, including linear functions of independent random variables, the cepstrum, and the MEL cepstrum.
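To make the construction concrete, the sketch below shows one way such a class-specific classifier can be assembled: each class j gets its own low-dimensional feature z_j = T_j(x), the feature PDF p(z_j | H_j) is estimated from training data, and the projected raw-data log-likelihood log p(x | H_j) = log p(x | H_0) + log p(z_j | H_j) - log p(z_j | H_0) is compared across classes. The reference hypothesis H_0 (iid standard Gaussian noise), the two scalar features (total energy and normalized sample mean), and the Gaussian fits used here are illustrative assumptions only, not the feature modules or density estimators of the paper.

```python
# Minimal sketch of a class-specific classifier via the PDF projection theorem.
# H0, the features, and the Gaussian feature-density fits are assumptions for
# illustration, not the paper's exact pipeline.
import numpy as np
from scipy import stats

N = 64                      # raw data dimension
rng = np.random.default_rng(0)

# Class-specific feature transforms T_j(x) and their exact PDFs under H0
# (H0: x has N iid standard normal components).
def feat_energy(x):         # scalar feature used by class 0
    return np.sum(x**2)

def logpdf_energy_H0(z):    # under H0, a sum of N squared std normals ~ chi2(N)
    return stats.chi2.logpdf(z, df=N)

def feat_mean(x):           # scalar feature used by class 1
    return np.sum(x) / np.sqrt(N)

def logpdf_mean_H0(z):      # under H0, this statistic ~ N(0, 1)
    return stats.norm.logpdf(z, 0.0, 1.0)

features = [(feat_energy, logpdf_energy_H0),
            (feat_mean,   logpdf_mean_H0)]

# Toy training data: class 0 = scaled noise (energy differs from H0),
# class 1 = noise with a DC offset (mean differs from H0).
train = [rng.normal(0.0, 1.5, size=(500, N)),   # class 0
         rng.normal(0.5, 1.0, size=(500, N))]   # class 1

# Estimate the low-dimensional feature PDF p(z | H_j) per class
# (a simple Gaussian fit here; any density estimator could be substituted).
class_models = []
for j, (T, _) in enumerate(features):
    z = np.array([T(x) for x in train[j]])
    class_models.append((z.mean(), z.std(ddof=1)))

def projected_loglik(x, j):
    """log p(x|H_j) = log p(x|H0) + log p(z_j|H_j) - log p(z_j|H0), z_j = T_j(x)."""
    T, logpdf_H0 = features[j]
    z = T(x)
    mu, sigma = class_models[j]
    # Raw-data log-PDF under H0; common to all classes here, so it cancels in
    # the argmax, but it is what places every class on the same raw-data space.
    log_px_H0 = np.sum(stats.norm.logpdf(x, 0.0, 1.0))
    return log_px_H0 + stats.norm.logpdf(z, mu, sigma) - logpdf_H0(z)

def classify(x):
    return int(np.argmax([projected_loglik(x, j) for j in range(len(features))]))

# Quick check on held-out samples.
test0 = rng.normal(0.0, 1.5, size=(200, N))
test1 = rng.normal(0.5, 1.0, size=(200, N))
acc = (np.mean([classify(x) == 0 for x in test0]) +
       np.mean([classify(x) == 1 for x in test1])) / 2
print(f"held-out accuracy: {acc:.2f}")
```

Because every class likelihood is projected back to the common raw-data domain before comparison, each class is free to use a different feature set, which is what makes the architecture modular.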

Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Baggenstoss, P.M. (2003). A New Optimal Classifier Architecture to Avoid the Dimensionality Curse. In: Perales, F.J., Campilho, A.J.C., de la Blanca, N.P., Sanfeliu, A. (eds) Pattern Recognition and Image Analysis. IbPRIA 2003. Lecture Notes in Computer Science, vol 2652. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-44871-6_9

  • DOI: https://doi.org/10.1007/978-3-540-44871-6_9

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-40217-6

  • Online ISBN: 978-3-540-44871-6

  • eBook Packages: Springer Book Archive
