Abstract
Several kernel-based perceptron learning methods on a budget have been proposed. In the early steps of learning, such methods record a new instance by allocating a new kernel to it. Once the number of kernels reaches an upper bound, however, the learner must forget less useful memory to make room for new, important instances. Choosing which memory to forget is therefore crucial for achieving high generalization capability. In this paper, we propose a new method that selects one of two forgetting strategies depending on the redundancy of the memory in the learning machine. If redundant memory exists, the learner replaces the most redundant memory with the new instance; otherwise, it replaces the least recently and least frequently used memory. Experimental results suggest that the proposed method is superior to existing learning methods on a budget.
This research was supported by the JST Adaptable and Seamless Technology Transfer Program through Target-driven R&D (A-STEP), Exploratory Research AS221Z01499A.
Cite this paper
Kondo, Y., Yamauchi, K. (2014). A Dynamic Pruning Strategy for Incremental Learning on a Budget. In: Loo, C.K., Yap, K.S., Wong, K.W., Teoh, A., Huang, K. (eds) Neural Information Processing. ICONIP 2014. Lecture Notes in Computer Science, vol 8834. Springer, Cham. https://doi.org/10.1007/978-3-319-12637-1_37