Abstract
An SVM-like learning strategy, based on progressive reduction of the number of training vectors, is applied to MLP training. The threshold for accepting vectors as useful for training is dynamically adjusted during learning, leaving a small number of support vectors near the decision borders and increasing the accuracy of the final solutions. Two problems for which neural networks have previously failed to give good results illustrate the usefulness of this approach.
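The selection scheme described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's actual algorithm or experiments: the data, network size, threshold value, and adjustment schedule are all assumptions. An MLP is trained by backpropagation only on those vectors whose output error still exceeds a threshold, and the threshold is tightened as learning progresses so that training concentrates on a shrinking set of vectors near the decision border.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-class data: two Gaussian blobs (not the paper's benchmarks)
X = np.vstack([rng.normal(-1.0, 1.0, (100, 2)), rng.normal(1.0, 1.0, (100, 2))])
y = np.hstack([np.zeros(100), np.ones(100)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Small one-hidden-layer MLP
W1 = rng.normal(0.0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)

lr, eps = 1.0, 0.1  # learning rate and initial acceptance threshold (assumed values)
for epoch in range(500):
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2).ravel()
    err = y - out
    # Keep only vectors whose error still exceeds the threshold; confidently
    # correct vectors drop out, so training concentrates on the remaining
    # "support vectors" near the decision border.
    mask = np.abs(err) > eps
    if not mask.any():
        break
    m = mask.sum()
    hs = h[mask]
    d_out = (err[mask] * out[mask] * (1.0 - out[mask]))[:, None]
    d_hid = (d_out @ W2.T) * hs * (1.0 - hs)
    # Standard backpropagation step on the selected subset only
    W2 += lr * hs.T @ d_out / m; b2 += lr * d_out.mean(0)
    W1 += lr * X[mask].T @ d_hid / m; b1 += lr * d_hid.mean(0)
    # Dynamically tighten the threshold to shrink the active set, keeping it
    # below 0.5 so misclassified vectors are never dropped from training.
    eps = min(0.4, eps + 0.002)

acc = float(((out > 0.5) == y).mean())
```

The paper's actual procedure and threshold schedule are more refined; this sketch shows only the core idea of progressively reducing the active training set to vectors near the decision border.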
An erratum to this chapter can be found at http://dx.doi.org/10.1007/11550907_163 .
© 2005 Springer-Verlag Berlin Heidelberg
Duch, W. (2005). Support Vector Neural Training. In: Duch, W., Kacprzyk, J., Oja, E., Zadrożny, S. (eds) Artificial Neural Networks: Formal Models and Their Applications – ICANN 2005. ICANN 2005. Lecture Notes in Computer Science, vol 3697. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11550907_11
DOI: https://doi.org/10.1007/11550907_11
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-28755-1
Online ISBN: 978-3-540-28756-8
eBook Packages: Computer Science; Computer Science (R0)