Network Complexity Analysis of Multilayer Feedforward Artificial Neural Networks

  • Chapter
Applications of Neural Networks in High Assurance Systems

Part of the book series: Studies in Computational Intelligence ((SCI,volume 268))

Abstract

Artificial neural networks (NNs) have been successfully applied to a wide range of problems in recent years, especially in pattern classification, system identification, and adaptive control. Unlike traditional methods, a neural-network-based approach requires no a priori knowledge of the unknown system's model and offers further significant advantages, such as adaptive learning and nonlinear mapping ability. In general, the complexity of a neural network structure is measured by the number of free parameters in the network; that is, the number of neurons and the number and strengths of the connections between them (weights). Network complexity analysis plays an important role in the design and implementation of artificial neural networks, not only because the size of a network must be determined before it can be employed for any application, but also because this dimensionality can significantly affect the network's learning and generalization ability. This chapter gives a general introduction to neural network complexity analysis; different pruning algorithms for multilayer feedforward neural networks are studied, and computer simulation results are presented.
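The two ideas in the abstract can be made concrete in a short sketch: complexity as the free-parameter count of a feedforward network, and pruning as the removal of low-magnitude weights. This is an illustrative example only, not the specific algorithm studied in the chapter; the 2-4-1 network shape, the `parameter_count` and `magnitude_prune` helpers, and the 50% pruning fraction are all hypothetical choices for demonstration.

```python
import numpy as np

# Hypothetical 2-4-1 feedforward network (bias terms omitted for brevity).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))   # input -> hidden weights
W2 = rng.normal(size=(4, 1))   # hidden -> output weights

def parameter_count(*weights):
    """Network complexity measured as the total number of free parameters."""
    return sum(w.size for w in weights)

def magnitude_prune(w, fraction):
    """Zero out the given fraction of smallest-magnitude weights in w."""
    k = int(w.size * fraction)
    if k == 0:
        return w.copy()
    # Threshold at the k-th smallest absolute weight.
    threshold = np.sort(np.abs(w), axis=None)[k - 1]
    pruned = w.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

print(parameter_count(W1, W2))      # -> 12 free parameters before pruning
W1_pruned = magnitude_prune(W1, 0.5)
print(np.count_nonzero(W1_pruned))  # -> 4 of the 8 hidden-layer weights survive
```

The pruning criteria surveyed in the chapter (sensitivity analysis, relative variance, cross-validation, genetic search) differ in how they rank connections for removal, but they share this structure: score each weight, remove the least relevant ones, and retrain or fine-tune the smaller network.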

An Erratum to this chapter is available at http://dx.doi.org/10.1007/978-3-642-10690-3_12

Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Yu, H. (2010). Network Complexity Analysis of Multilayer Feedforward Artificial Neural Networks. In: Schumann, J., Liu, Y. (eds) Applications of Neural Networks in High Assurance Systems. Studies in Computational Intelligence, vol 268. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-10690-3_3

  • DOI: https://doi.org/10.1007/978-3-642-10690-3_3

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-10689-7

  • Online ISBN: 978-3-642-10690-3

  • eBook Packages: Engineering (R0)
