Abstract
In general, an MLP is trained on a set containing only positive examples, which may turn the neural network into an overconfident solver of the problem. A simple remedy is to introduce negative examples into the training set; this prepares the network for cases it has not been trained on. Unfortunately, the literature has so far not specified how many negative examples should be used in the training process. Consequently, this article aims to find a general mathematical model for training an MLP with negative examples. To that end, we applied a regression analysis technique to the data obtained from training three neural networks on three datasets: one for letter recognition, one for sonar readings, and one for medical test data used to diagnose diabetes. The model was tested on a new dataset to confirm its validity.
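As a rough illustration of the procedure the abstract describes, the sketch below mixes a chosen percentage of negative (counter-)examples into a training set of positive patterns before the MLP is trained. The helper `mix_negative_examples`, its parameters, and the uniform random data are hypothetical illustrations, not the paper's actual method or datasets; the paper's contribution is precisely the model for choosing the percentage.

```python
import numpy as np

def mix_negative_examples(positives, negatives, percent, seed=0):
    """Build a training set in which `percent`% of the examples are
    negative examples drawn from the pool `negatives`.

    Returns shuffled features X and labels y (1 = positive, 0 = negative).
    """
    rng = np.random.default_rng(seed)
    n_pos = len(positives)
    # Number of negatives needed so they form `percent`% of the final set.
    n_neg = int(round(n_pos * percent / (100.0 - percent)))
    idx = rng.choice(len(negatives), size=n_neg, replace=True)
    X = np.vstack([positives, negatives[idx]])
    y = np.concatenate([np.ones(n_pos), np.zeros(n_neg)])
    order = rng.permutation(len(X))  # shuffle before feeding the MLP
    return X[order], y[order]

# Example: 100 positive patterns, 20% negatives in the resulting set.
pos = np.random.rand(100, 8)   # placeholder positive feature vectors
neg = np.random.rand(500, 8)   # placeholder pool of negative examples
X, y = mix_negative_examples(pos, neg, percent=20)
```

With `percent=20`, the 100 positives are joined by 25 negatives, so negatives make up 25/125 = 20% of the set; the combined set would then be passed to any standard MLP trainer.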
Cernazanu-Glavan, C., Holban, S. (2010). A Model for Determining the Number of Negative Examples used in Training a MLP. In: Sobh, T., Elleithy, K. (eds) Innovations in Computing Sciences and Software Engineering. Springer, Dordrecht. https://doi.org/10.1007/978-90-481-9112-3_92