Abstract
This paper considers the convergence of the chaos injection-based batch backpropagation algorithm for feedforward neural networks. Both weak and strong convergence results are established theoretically.
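To make the setting concrete, the following is a minimal sketch of a chaos injection-based batch backpropagation scheme for a one-hidden-layer sigmoid network. The injection rule here is an assumption for illustration only: a logistic-map sequence, scaled by a decaying coefficient, is added to each batch weight update. The paper's exact injection rule, network architecture, and decay schedule may differ.

```python
import numpy as np

def logistic_map(z, mu=4.0):
    # Logistic map; fully chaotic for mu = 4.
    return mu * z * (1.0 - z)

def train_chaos_bp(X, y, hidden=4, epochs=500, eta=0.5, beta0=0.1, seed=0):
    """Batch BP with a decaying chaotic perturbation added to each
    weight update (hypothetical injection rule, for illustration)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    z = 0.3  # state of the chaotic sequence
    sig = lambda t: 1.0 / (1.0 + np.exp(-t))
    for k in range(epochs):
        H = sig(X @ W1)       # hidden-layer activations
        out = sig(H @ W2)     # network output
        err = out - y
        # Batch gradients of the mean squared error.
        d2 = err * out * (1 - out)
        g2 = H.T @ d2 / len(X)
        d1 = (d2 @ W2.T) * H * (1 - H)
        g1 = X.T @ d1 / len(X)
        # Chaos injection: perturbation decays like beta0 / (k + 1),
        # which is one way to keep the noise summable so the
        # gradient-descent part can dominate asymptotically.
        z = logistic_map(z)
        beta = beta0 / (k + 1)
        W1 -= eta * g1 + beta * (2 * z - 1)
        W2 -= eta * g2 + beta * (2 * z - 1)
    mse = float(np.mean((sig(sig(X @ W1) @ W2) - y) ** 2))
    return W1, W2, mse
```

The decaying injection strength mirrors the intuition behind convergence analyses of gradient methods with errors: the chaotic term helps early exploration, while its vanishing magnitude leaves the deterministic batch-gradient dynamics in control in the limit.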
Copyright information
© 2013 Springer-Verlag Berlin Heidelberg
Cite this paper
Zhang, H., Liu, X., Xu, D. (2013). Convergence of Chaos Injection-Based Batch Backpropagation Algorithm For Feedforward Neural Networks. In: Guo, C., Hou, ZG., Zeng, Z. (eds) Advances in Neural Networks – ISNN 2013. ISNN 2013. Lecture Notes in Computer Science, vol 7951. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-39065-4_8
Print ISBN: 978-3-642-39064-7
Online ISBN: 978-3-642-39065-4
eBook Packages: Computer Science (R0)