Abstract
Between online and batch modes lies the mini-batch approach, which takes a subset of the training samples to update the weights at each iteration. The traditional analysis of mini-batch learning is based on stochastic gradient descent, where the mini-batches are assumed to be drawn at random. In practice, however, the mini-batch process is not random. Over the last decade, many online and batch learning algorithms for fault aware radial basis function (RBF) networks have been proposed. However, few works on mini-batch learning for fault aware RBF networks have been reported. This paper proposes a mini-batch learning algorithm for fault aware RBF networks. Rather than relying on the assumptions of stochastic gradient descent, we consider that the partition into mini-batches is fixed and that the mini-batches are presented in a fixed order. Even under this fixed arrangement, we prove that the trained weight vector converges to the fault aware batch mode solution. In addition, we present a sufficient condition for the convergence.
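To make the fixed-partition setting concrete, the sketch below trains the output weights of an RBF network by mini-batch gradient descent in which the partition is created once and the batches are visited in the same deterministic order every epoch, in contrast to the random sampling assumed in stochastic gradient descent analyses. This is a minimal illustration under our own assumptions, not the paper's algorithm: the ridge-like term `lam` is a hypothetical stand-in for the paper's fault aware regularizer, and all function names are ours.

```python
import numpy as np

def rbf_features(X, centers, width):
    """Gaussian RBF hidden-layer outputs, shape (n_samples, n_centers)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def minibatch_train(X, y, centers, width, lam, lr, n_batches, n_epochs):
    """Mini-batch descent with a FIXED partition presented in a FIXED order.

    Minimizes a regularized least-squares objective
        (1/B) * ||y_b - Phi_b w||^2 + lam * ||w||^2
    per batch, where lam is a placeholder for a fault aware
    regularization constant (the paper's exact objective is not
    reproduced here).
    """
    Phi = rbf_features(X, centers, width)
    w = np.zeros(centers.shape[0])
    # Fixed partition: batches are formed once and never reshuffled,
    # matching the non-stochastic setting analyzed in the paper.
    batches = np.array_split(np.arange(X.shape[0]), n_batches)
    for _ in range(n_epochs):
        for idx in batches:          # fixed, deterministic visiting order
            P, t = Phi[idx], y[idx]
            grad = -2.0 * P.T @ (t - P @ w) / len(idx) + 2.0 * lam * w
            w -= lr * grad
    return w
```

Under the paper's sufficient condition (e.g., a small enough learning rate), the weight vector produced by such a fixed-order scheme converges to the batch mode solution rather than merely oscillating around it.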
Acknowledgments
The work was supported by a research grant from City University of Hong Kong (9610431).
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Cha, E., Leung, C.S., Wong, E. (2020). Convergence of Mini-Batch Learning for Fault Aware RBF Networks. In: Yang, H., Pasupa, K., Leung, A.C.S., Kwok, J.T., Chan, J.H., King, I. (eds) Neural Information Processing. ICONIP 2020. Communications in Computer and Information Science, vol 1333. Springer, Cham. https://doi.org/10.1007/978-3-030-63823-8_62
DOI: https://doi.org/10.1007/978-3-030-63823-8_62
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-63822-1
Online ISBN: 978-3-030-63823-8
eBook Packages: Computer Science, Computer Science (R0)