
On the Relationship Between the Support Vector Machine for Classification and Sparsified Fisher's Linear Discriminant

  • Published in: Neural Processing Letters

Abstract

We show that the orientation and location of the separating hyperplane for 2-class supervised pattern classification obtained by the Support Vector Machine (SVM) proposed by Vapnik and his colleagues are equivalent to those obtained by applying Fisher's Linear Discriminant to the set of support vectors. In other words, the SVM can be seen as a way to ‘sparsify’ Fisher's Linear Discriminant so as to obtain the most generalizing classification from the training set.
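The claimed equivalence admits a quick numerical sanity check. The sketch below is an illustration, not the author's code (variable names and the data set are my own): it trains a near-hard-margin linear SVM on separable data, recomputes the classical Fisher direction S_w^{-1}(m_+ − m_−) using only the support vectors, and compares the two hyperplane orientations. A small ridge term is added because the within-class scatter of the support vectors alone can be singular along the normal direction.

```python
# Empirical sketch of the paper's claim (illustration only): the
# linear-SVM normal should align with the Fisher's Linear Discriminant
# direction computed from the support vectors alone.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two linearly separable Gaussian blobs in the plane.
X = np.vstack([rng.normal([0.0, 0.0], 0.5, (40, 2)),
               rng.normal([4.0, 4.0], 0.5, (40, 2))])
y = np.array([-1] * 40 + [1] * 40)

# A very large C approximates the hard-margin SVM of Vapnik et al.
svm = SVC(kernel="linear", C=1e5).fit(X, y)
w_svm = svm.coef_.ravel()

# Fisher's Linear Discriminant restricted to the support vectors:
# direction = Sw^{-1} (m_+ - m_-), Sw the within-class scatter matrix.
sv, sv_y = svm.support_vectors_, y[svm.support_]
m_pos = sv[sv_y == 1].mean(axis=0)
m_neg = sv[sv_y == -1].mean(axis=0)

def scatter(Z):
    """Scatter matrix of one class's support vectors about their mean."""
    Zc = Z - Z.mean(axis=0)
    return Zc.T @ Zc

Sw = scatter(sv[sv_y == 1]) + scatter(sv[sv_y == -1])
Sw += 1e-8 * np.eye(2)   # ridge: Sw of the support vectors can be singular
w_fld = np.linalg.solve(Sw, m_pos - m_neg)

# Cosine similarity of the two hyperplane normals; the paper's result
# predicts the orientations coincide.
cos = w_svm @ w_fld / (np.linalg.norm(w_svm) * np.linalg.norm(w_fld))
print(f"{len(sv)} support vectors, cosine similarity {cos:.4f}")
```

Note that the exact equivalence holds for the hard-margin solution; with a finite C and the solver's numerical tolerance, the two directions should agree only up to numerical precision.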




Cite this article

Shashua, A. On the Relationship Between the Support Vector Machine for Classification and Sparsified Fisher's Linear Discriminant. Neural Processing Letters 9, 129–139 (1999). https://doi.org/10.1023/A:1018677409366
