A Model for Determining the Number of Negative Examples used in Training a MLP

Conference paper in Innovations in Computing Sciences and Software Engineering

Abstract

In general, MLP training uses a training set containing only positive examples, which can turn the neural network into an overconfident solver of the problem. A simple remedy is to introduce negative examples into the training set; this prepares the network for cases it has not been trained on. Unfortunately, the literature has so far not specified how many negative examples should be used in the training process. Consequently, this article aims to find a general mathematical model for training an MLP with negative examples. To that end, we applied regression analysis to the data obtained from training three neural networks on three datasets: one for letter recognition, one for sonar readings, and one for medical test results used to diagnose diabetes. The model was then tested on a new dataset to confirm its validity.
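
As a rough illustration of the idea, the sketch below (not the authors' code) augments a positive-only training set with a varying share of negative examples, trains a small MLP for each candidate share, and fits a simple regression to the resulting accuracies. The synthetic data, the network size, the candidate ratios, and the quadratic fit are all illustrative assumptions; the paper derives its own mathematical model from the three real datasets named above.

```python
# A minimal sketch, not the authors' code: step (1) trains an MLP on a
# positive set mixed with a varying share of negative examples; step (2)
# fits a regression to the resulting accuracies. All data here is synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical positive examples of the target concept (label 1).
X_pos = rng.normal(0.0, 1.0, size=(600, 16))
y_pos = np.ones(len(X_pos), dtype=int)

# Hypothetical pool of negative examples: inputs outside the concept (label 0).
X_neg_pool = rng.uniform(-4.0, 4.0, size=(600, 16))

ratios = [0.05, 0.1, 0.2, 0.3, 0.4, 0.5]   # assumed candidate shares
scores = []
for ratio in ratios:
    n_neg = int(ratio * len(X_pos))
    X = np.vstack([X_pos, X_neg_pool[:n_neg]])
    y = np.concatenate([y_pos, np.zeros(n_neg, dtype=int)])
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.25, random_state=0, stratify=y)
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    clf.fit(X_tr, y_tr)
    scores.append(clf.score(X_te, y_te))

# Step (2): regression of held-out accuracy against the negative-example
# ratio (a quadratic fit here, purely for illustration).
coeffs = np.polyfit(ratios, scores, deg=2)
print("accuracy per ratio:", dict(zip(ratios, np.round(scores, 3))))
print("quadratic fit coefficients:", np.round(coeffs, 3))
```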

Author information

Correspondence to Cosmin Cernazanu-Glavan.

Copyright information

© 2010 Springer Science+Business Media B.V.

About this paper

Cite this paper

Cernazanu-Glavan, C., Holban, S. (2010). A Model for Determining the Number of Negative Examples used in Training a MLP. In: Sobh, T., Elleithy, K. (eds) Innovations in Computing Sciences and Software Engineering. Springer, Dordrecht. https://doi.org/10.1007/978-90-481-9112-3_92

  • DOI: https://doi.org/10.1007/978-90-481-9112-3_92

  • Publisher Name: Springer, Dordrecht

  • Print ISBN: 978-90-481-9111-6

  • Online ISBN: 978-90-481-9112-3

  • eBook Packages: Computer Science, Computer Science (R0)
