Abstract
Recent developments in artificial intelligence have increased the demand for high-performance computing devices. An edge device, however, is highly restricted not only in its computational power but also in its memory capacity. This study proposes a method that enables both inference and learning on an edge device. The proposed method is a kernel machine that works in such restricted environments by collaborating with the device's secondary storage. Kernel parameters that are not essential for calculating the outputs for upcoming inputs are swapped out to secondary storage to free space in main memory, and essential kernel parameters held in secondary storage are loaded back into main memory when required. With this strategy, the system can realize recognition/regression tasks without reducing its generalization capability.
Part of this research was supported by Chubu University Grant A.
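To make the proposed swap strategy concrete, the following is a minimal sketch of a budgeted kernel regressor that keeps only a fixed number of kernel weights in main memory and swaps the rest to secondary storage. Everything here is an illustrative assumption rather than the authors' implementation: the class name SwapKernelRegressor, the Gaussian kernel, the Nadaraya-Watson output, the per-kernel pickle files, and the LRU-style eviction policy (cf. the LRFU family of policies in Lee et al. 2001) are all hypothetical choices.

```python
import os
import pickle
import numpy as np

class SwapKernelRegressor:
    """Minimal sketch of a budgeted Gaussian-kernel regressor that swaps
    the parameters of rarely activated kernels to secondary storage
    (here: one pickle file per kernel). Hypothetical interface only."""

    def __init__(self, budget=50, sigma=1.0, swap_dir="kernel_swap"):
        self.budget = budget       # max kernel weights kept in main memory
        self.sigma = sigma         # Gaussian kernel width
        self.swap_dir = swap_dir
        os.makedirs(swap_dir, exist_ok=True)
        self.centers = {}          # kid -> center; small index kept in RAM
        self.weights = {}          # kid -> weight; budgeted subset in RAM
        self.last_used = {}        # kid -> step of last activation (LRU score)
        self.step = 0

    def _activation(self, x, kid):
        d2 = np.sum((self.centers[kid] - x) ** 2)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))

    def _evict(self):
        # Swap the least recently activated kernel's weight out to disk.
        victim = min(self.weights, key=lambda k: self.last_used[k])
        with open(os.path.join(self.swap_dir, f"{victim}.pkl"), "wb") as f:
            pickle.dump(self.weights.pop(victim), f)

    def _ensure_loaded(self, kid):
        # Load an essential kernel back from secondary storage when required.
        if kid not in self.weights:
            if len(self.weights) >= self.budget:
                self._evict()
            path = os.path.join(self.swap_dir, f"{kid}.pkl")
            with open(path, "rb") as f:
                self.weights[kid] = pickle.load(f)
            os.remove(path)
        self.last_used[kid] = self.step

    def predict(self, x, k_nearest=5):
        """Nadaraya-Watson output using the k kernels nearest to x."""
        self.step += 1
        if not self.centers:
            return 0.0
        x = np.asarray(x, dtype=float)
        kids = sorted(self.centers, key=lambda k: -self._activation(x, k))
        kids = kids[:k_nearest]
        for kid in kids:               # pull essential kernels into RAM
            self._ensure_loaded(kid)
        act = np.array([self._activation(x, k) for k in kids])
        w = np.array([self.weights[k] for k in kids])
        return float(act @ w / (act.sum() + 1e-12))

    def learn(self, x, y):
        # Incremental learning: allocate a new kernel on the new instance.
        self.step += 1
        if len(self.weights) >= self.budget:
            self._evict()
        kid = len(self.centers)
        self.centers[kid] = np.asarray(x, dtype=float)
        self.weights[kid] = float(y)
        self.last_used[kid] = self.step
```

The design point the sketch illustrates is that only a small index of kernel centers needs to stay resident; the larger per-kernel parameters can live on disk until an upcoming input makes them essential.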
Notes
- 1.
The original LGRNN algorithm has four learning options (Yamauchi 2014), one of which is the projection operation. However, this operation is not useful for swap kernel regression; hence, we use only the replacement and ignore options here.
- 2.
Even in such situations, if the related instance distribution covers a wide enough area, the number of kernels in the same cluster exceeds \(|S_t^1|\); hence, the swap occurs repeatedly (see Algorithm 2).
- 3.
Although the VGG16 model supports full-color input images, we provide re-scaled gray-level images as the input.
- 4.
Note that the recognition result for each new instance was tested before the swap kernel regression learned it incrementally. Thus, if the swap kernel regression fails to recognize a new instance, the cumulative number of mistakes is incremented. A minimal sketch of this test-then-train protocol follows these notes.
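Footnote 4 describes a test-then-train (prequential) evaluation. As a companion to the sketch above, the loop below shows how such a cumulative-mistake count can be computed; the prequential_mistakes function, the decision threshold, and the synthetic stream are illustrative assumptions only.

```python
import numpy as np

def prequential_mistakes(model, stream, threshold=0.5):
    """Test-then-train loop: test each instance first, then learn it."""
    mistakes = 0
    for x, label in stream:
        predicted = 1 if model.predict(x) >= threshold else 0
        if predicted != label:
            mistakes += 1             # count the error BEFORE learning
        model.learn(x, float(label))  # then learn the instance incrementally
    return mistakes

# Toy usage on a synthetic two-cluster stream, reusing the hypothetical
# SwapKernelRegressor sketched earlier.
rng = np.random.default_rng(0)
stream = [(rng.normal(loc=2 * c, scale=0.3, size=2), int(c))
          for c in rng.integers(0, 2, size=200)]
model = SwapKernelRegressor(budget=20, sigma=0.5)
print("cumulative mistakes:", prequential_mistakes(model, stream))
```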
References
Dekel, O., Shalev-Shwartz, S., Singer, Y.: The forgetron: a kernel-based perceptron on a budget. SIAM J. Comput. (SICOMP) 37(5), 1342–1372 (2008). https://doi.org/10.1137/060666998
He, W., Wu, S.: A kernel-based perceptron with dynamic memory. Neural Networks 25, 105–113 (2012). https://doi.org/10.1016/j.neunet.2011.07.008
Kivinen, J., Smola, A.J., Williamson, R.C.: Online learning with kernels. IEEE Trans. Signal Process. 52(8), 2165–2176 (2004). https://doi.org/10.1109/TSP.2004.830991
Lee, D., et al.: LRFU: a spectrum of policies that subsumes the least recently used and least frequently used policies. IEEE Trans. Comput. 50(12), 1352–1361 (2001). https://doi.org/10.1109/TC.2001.970573
Orabona, F., Keshet, J., Caputo, B.: The projectron: a bounded kernel-based perceptron. In: ICML 2008, pp. 720–727 (2008)
Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations (ICLR 2015) (2015). arXiv:1409.1556v6
Weston, J., Chopra, S., Bordes, A.: Memory networks. In: ICLR 2015 (2015)
Yamauchi, K.: An importance weighted projection method for incremental learning under unstationary environments. In: IJCNN2013: The International Joint Conference on Neural Networks 2013, pp. 1–9. The Institute of Electrical and Electronics Engineers, Inc., New York (2013). https://doi.org/10.1109/IJCNN.2013.6706779
Yamauchi, K.: Incremental learning on a budget and its application to quick maximum power point tracking of photovoltaic systems. J. Adv. Comput. Intell. Intell. Inf. 18(4), 682–696 (2014). https://doi.org/10.20965/jaciii.2014.p0682