Active Learning Using a Constructive Neural Network Algorithm

Chapter in Constructive Neural Networks

Part of the book series: Studies in Computational Intelligence (SCI, volume 258)

Abstract

Constructive neural network algorithms suffer severely from overfitting on noisy datasets because, in general, they learn the set of available examples until zero error is achieved. In this work we introduce a method for detecting and filtering noisy examples using a recently proposed constructive neural network algorithm. The method exploits the fact that noisy examples are generally harder to learn than clean ones, requiring a larger number of synaptic weight modifications. Tests are carried out on both controlled and real benchmark datasets, showing the effectiveness of the approach. Across several classification algorithms, improved generalization ability is observed in most cases when the filtered dataset is used instead of the original one.
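To make the filtering idea concrete, the following is a minimal sketch, assuming a plain online perceptron in place of the chapter's constructive algorithm (which is not reproduced here). The function name filter_noisy_examples, the per-example update counter, and the 90th-percentile cutoff are illustrative assumptions, not the authors' actual procedure: examples that keep forcing weight updates across training epochs are treated as likely noise and removed.

```python
import numpy as np

def filter_noisy_examples(X, y, epochs=50, lr=0.1, percentile=90, seed=0):
    """Flag examples that repeatedly force weight updates during online
    perceptron training; such "hard" examples are likely noisy.

    Illustrative stand-in for the chapter's method: a plain perceptron
    replaces the constructive network, and the percentile cutoff is an
    assumed filtering rule. Labels in y are expected to be +1 / -1.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    Xb = np.hstack([X, np.ones((n, 1))])   # constant column acts as bias
    w = np.zeros(Xb.shape[1])
    updates = np.zeros(n, dtype=int)       # per-example update counter

    for _ in range(epochs):
        for i in rng.permutation(n):       # randomized online presentation
            pred = 1 if Xb[i] @ w >= 0 else -1
            if pred != y[i]:               # misclassified: correct and count
                w += lr * y[i] * Xb[i]
                updates[i] += 1

    cutoff = np.percentile(updates, percentile)
    keep = updates <= cutoff               # drop the hardest examples
    return X[keep], y[keep], updates

# Toy usage: two Gaussian blobs with 10% of the labels flipped.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1.0, 1.0, (100, 2)),
               rng.normal(1.0, 1.0, (100, 2))])
y = np.repeat([-1, 1], 100)
y[rng.choice(200, size=20, replace=False)] *= -1   # inject class noise
X_f, y_f, counts = filter_noisy_examples(X, y)
print(f"kept {len(y_f)} of {len(y)} examples")
```

The intuition the sketch captures is the one stated in the abstract: on a nearly separable problem the perceptron settles quickly on the clean examples, while flipped-label examples continue to trigger corrections epoch after epoch, so their update counts separate visibly from the rest.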




Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Subirats, J.L., Franco, L., Molina, I., Jerez, J.M. (2009). Active Learning Using a Constructive Neural Network Algorithm. In: Franco, L., Elizondo, D.A., Jerez, J.M. (eds) Constructive Neural Networks. Studies in Computational Intelligence, vol 258. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04512-7_10

Download citation

  • DOI: https://doi.org/10.1007/978-3-642-04512-7_10

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-04511-0

  • Online ISBN: 978-3-642-04512-7

  • eBook Packages: Engineering, Engineering (R0)
