
Comparing Support Vector Machines and Feed-forward Neural Networks with Similar Parameters

  • Conference paper
Intelligent Data Engineering and Automated Learning – IDEAL 2006

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 4224)


Abstract

From a computational point of view, the main differences between SVMs and FNNs are (1) how the number of elements of their respective solutions (support vectors for SVMs, hidden units for FNNs) is selected and (2) how the weights (both hidden-layer and output-layer) are found. Sequential FNNs, however, do not exhibit all of these differences with respect to SVMs, since the number of hidden units emerges from the learning process (as for SVMs) rather than being fixed a priori. In addition, there exist sequential FNNs whose hidden-layer weights are always a subset of the data, as is usual for SVMs. An experimental study on several benchmark data sets, comparing several aspects of SVMs and such sequential FNNs, is presented. The experiments were performed under conditions as similar as possible for both models. Accuracies were found to be very similar. Regarding solution size, the sequential FNNs constructed models with fewer hidden units than the SVMs had support vectors. In addition, all the hidden-layer weights in the FNN models were also selected as support vectors by the SVMs. Computational times were lower for SVMs, with no numerical problems observed.
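
As an illustration of the kind of comparison described in the abstract, the sketch below trains a Gaussian-kernel SVM and a small RBF network whose hidden units are greedily picked from the training data, then counts support vectors versus hidden units and checks how many of the chosen centres are also support vectors. This is a minimal sketch using scikit-learn (whose SVC wraps LIBSVM) and a generic greedy least-squares selection; it is not the specific sequential FNN algorithm studied in the paper, and the names and parameter values (greedy_rbf, gamma, the synthetic data set) are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import Ridge
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

# Hypothetical toy data; the paper uses benchmark data sets instead.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
gamma = 0.1  # same Gaussian kernel width for both models

# SVM with a Gaussian kernel: solution size = number of support vectors.
svm = SVC(kernel="rbf", gamma=gamma, C=1.0).fit(X, y)
sv_idx = set(svm.support_)

def greedy_rbf(X, y, gamma, max_units=50, tol=1e-3):
    """Greedy sequential RBF network: add one hidden unit (a training point)
    at a time, refit the output weights by least squares, and stop when the
    squared error no longer improves by at least `tol`."""
    t = 2.0 * y - 1.0                  # targets in {-1, +1}
    K = rbf_kernel(X, X, gamma=gamma)  # candidate hidden-unit activations
    chosen, prev_err = [], np.inf
    for _ in range(max_units):
        best_j, best_err = None, prev_err
        for j in range(len(X)):
            if j in chosen:
                continue
            cols = chosen + [j]
            model = Ridge(alpha=1e-6).fit(K[:, cols], t)
            err = np.mean((model.predict(K[:, cols]) - t) ** 2)
            if err < best_err:
                best_j, best_err = j, err
        if best_j is None or prev_err - best_err < tol:
            break
        chosen.append(best_j)
        prev_err = best_err
    return chosen

hidden = greedy_rbf(X, y, gamma)
print("SVM support vectors:        ", len(sv_idx))
print("Sequential RBF hidden units:", len(hidden))
print("Hidden units that are also SVM support vectors:",
      sum(j in sv_idx for j in hidden))
```

The sketch mirrors point (1) of the abstract: the RBF network's size is decided by a stopping rule on the training error, while the SVM's support-vector count falls out of its optimization problem rather than being fixed in advance.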

Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Romero, E., Toppo, D. (2006). Comparing Support Vector Machines and Feed-forward Neural Networks with Similar Parameters. In: Corchado, E., Yin, H., Botti, V., Fyfe, C. (eds) Intelligent Data Engineering and Automated Learning – IDEAL 2006. IDEAL 2006. Lecture Notes in Computer Science, vol 4224. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11875581_11

  • DOI: https://doi.org/10.1007/11875581_11

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-45485-4

  • Online ISBN: 978-3-540-45487-8

  • eBook Packages: Computer Science, Computer Science (R0)
