Abstract
Recent improvements in microcomputers have made it possible to execute complex intelligent algorithms on embedded systems. However, conventional incremental learning methods typically consume more resources as learning proceeds, so continuing incremental learning on small embedded systems becomes difficult. Moreover, real applications demand short response times. This paper proposes a technique for implementing incremental learning methods on a budget. Such methods normally perform online learning by alternating recognition and learning, so they cannot respond to the next new instance until the previous learning step has finished. Unfortunately, their computational learning complexity is too high to allow a quick response to new inputs. This paper therefore introduces a multithreading technique for such learning schemes: the recognition and learning threads run in parallel, so the system can respond to a new instance even while learning is still in progress. Moreover, this paper shows that such multithreaded learning schemes sometimes need a “sleep period” to complete learning, similar to a biological brain. During the “sleep period,” the learning system blocks all sensory inputs and suspends its outputs.
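The scheme described above — a recognition thread that answers immediately from the current model while a learning thread folds queued samples into it, plus a “sleep period” during which inputs are refused — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation; the class and method names are hypothetical, and a dictionary stands in for the actual kernel-based learner.

```python
import threading
import queue
import time

class IncrementalLearner:
    """Hypothetical on-budget learner: the recognition thread predicts with
    the current model while a background thread learns new samples."""

    def __init__(self):
        self.lock = threading.Lock()       # exclusive control of the shared model
        self.pending = queue.Queue()       # samples awaiting the learning thread
        self.model = {}                    # toy model: input -> label
        self.sleeping = threading.Event()  # "sleep period": sensory inputs blocked

    def recognize(self, x):
        # Recognition thread: respond right away, even mid-learning.
        if self.sleeping.is_set():
            return None                    # no input/output while "asleep"
        with self.lock:
            return self.model.get(x, "unknown")

    def observe(self, x, y):
        # Queue a labeled instance; learning happens asynchronously.
        if not self.sleeping.is_set():
            self.pending.put((x, y))

    def learn_forever(self):
        # Learning thread: consume queued samples in the background.
        while True:
            x, y = self.pending.get()
            if x is None:                  # sentinel: stop learning
                break
            time.sleep(0.01)               # stand-in for a slow learning step
            with self.lock:
                self.model[x] = y
```

A caller would start `learn_forever` in its own thread, keep calling `recognize` for low-latency responses, and set `sleeping` when the system must consolidate without interruption.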
Notes
1. A semaphore is the most commonly used mechanism for performing exclusive control.
2. Although the LGRNN output for the i-th kernel center is recovered through a linear combination of the other kernels, there is no guarantee that the outputs for other inputs remain unchanged.
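Note 1's exclusive control can be illustrated with a binary semaphore serializing access to a shared kernel list between the recognition and learning threads. This is a generic sketch under assumed names (`kernel_centers`, `add_kernel`, `snapshot`), not the paper's code.

```python
import threading

# Binary semaphore (count 1) guarding the shared kernel-center list.
kernel_centers = []
sem = threading.Semaphore(1)

def add_kernel(center):
    # Learning thread: insert a kernel center inside the critical section.
    sem.acquire()
    try:
        kernel_centers.append(center)
    finally:
        sem.release()

def snapshot():
    # Recognition thread: copy the list under the same semaphore,
    # so it never sees a half-updated structure.
    sem.acquire()
    try:
        return list(kernel_centers)
    finally:
        sem.release()
```

With `acquire`/`release` bracketing both readers and writers, concurrent `add_kernel` calls from several threads leave the list consistent.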
© 2016 Springer International Publishing AG
Cite this paper
Nishio, D., Yamauchi, K. (2016). Multithreading Incremental Learning Scheme for Embedded System to Realize a High-Throughput. In: Hirose, A., Ozawa, S., Doya, K., Ikeda, K., Lee, M., Liu, D. (eds) Neural Information Processing. ICONIP 2016. Lecture Notes in Computer Science(), vol 9948. Springer, Cham. https://doi.org/10.1007/978-3-319-46672-9_24
DOI: https://doi.org/10.1007/978-3-319-46672-9_24
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-46671-2
Online ISBN: 978-3-319-46672-9
eBook Packages: Computer Science, Computer Science (R0)