
Support Vector Neural Training

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 3697)

Abstract

An SVM-style learning strategy based on progressive reduction of the number of training vectors is applied to MLP training. The threshold for accepting vectors as useful for training is adjusted dynamically during learning, leaving a small number of support vectors near the decision borders and yielding higher accuracy in the final solutions. Two problems on which neural networks have previously failed to give good results illustrate the usefulness of this approach.

An erratum to this chapter can be found at http://dx.doi.org/10.1007/11550907_163 .
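The selection strategy summarized in the abstract can be illustrated with a small sketch. The code below is an assumption-laden reading of the idea, not the paper's algorithm: a single logistic unit stands in for the MLP, each epoch retains only the vectors whose current error exceeds a threshold `eps` (those still near the decision border), and `eps` is raised slowly so the retained set shrinks toward border "support vectors". All names, the toy data, and the threshold schedule are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: two Gaussian blobs, labels in {0, 1}.
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])
Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(3)          # logistic unit standing in for the MLP
eps, lr = 0.0, 0.1       # acceptance threshold and learning rate
active = np.arange(len(X))

for epoch in range(300):
    # Selection step: keep only vectors the model still gets "wrong
    # enough" -- those with error above eps lie near the border.
    out_all = sigmoid(Xb @ w)
    candidates = np.where(np.abs(y - out_all) > eps)[0]
    if len(candidates) > 0:
        active = candidates
    # Gradient step on the retained vectors only.
    out = sigmoid(Xb[active] @ w)
    w -= lr * Xb[active].T @ (out - y[active]) / len(active)
    # Dynamic adjustment: raise eps slowly so easy vectors drop out.
    eps = min(eps + 0.001, 0.3)

print(len(active), "of", len(X), "vectors retained near the border")
```

As training converges, most vectors deep inside each blob fall below the error threshold and are removed, so the gradient is dominated by the border region, which is the effect the abstract attributes to the method.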





Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Duch, W. (2005). Support Vector Neural Training. In: Duch, W., Kacprzyk, J., Oja, E., Zadrożny, S. (eds) Artificial Neural Networks: Formal Models and Their Applications – ICANN 2005. ICANN 2005. Lecture Notes in Computer Science, vol 3697. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11550907_11


  • DOI: https://doi.org/10.1007/11550907_11

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-28755-1

  • Online ISBN: 978-3-540-28756-8

  • eBook Packages: Computer Science (R0)
