Abstract
One-class learning is a classical and hard computational intelligence task. The literature offers several effective and powerful solutions, notably in the kernel machines realm: Support Vector Domain Description and the recently proposed Import Vector Domain Description (IVDD), which directly delivers the probability that a sample belongs to the class. Here, we propose and discuss two optimization techniques for IVDD that significantly reduce its memory footprint and consequently allow it to scale to larger datasets than the original formulation. First, we use random features to approximate the Gaussian kernel together with a primal optimization algorithm. Second, we adopt a Nyström-like approximation of the functional together with a fast-converging and accurate self-consistent algorithm. In particular, we replace the a posteriori sparsity of the original IVDD optimization method by selecting landmark samples a priori at random from the dataset. We find this second approximation to be superior: compared to the original IVDD with the RBF kernel, it achieves high accuracy while being much faster and granting huge memory savings.
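As a rough illustration of the two approximation strategies mentioned above, the sketch below (Python/NumPy, with hypothetical function names and an assumed RBF bandwidth parameter gamma) builds random Fourier features for the Gaussian kernel and Nyström-style features from randomly selected landmarks. It does not reproduce the paper's IVDD solvers; it only shows the standard explicit feature maps that such strategies rely on, which can then be fed to a primal (linear) one-class learner.

```python
import numpy as np


def random_fourier_features(X, n_features, gamma, rng):
    """Rahimi-Recht random Fourier features approximating the RBF kernel
    k(x, y) = exp(-gamma * ||x - y||^2), so that k(x, y) ~ z(x) @ z(y)."""
    d = X.shape[1]
    # Frequencies are drawn from the Fourier transform of the RBF kernel.
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)


def nystrom_landmark_features(X, n_landmarks, gamma, rng):
    """Nystrom-style explicit features from randomly chosen landmark points:
    kernel columns against the landmarks, whitened by the inverse square
    root of the landmark-landmark kernel block."""
    idx = rng.choice(X.shape[0], size=n_landmarks, replace=False)
    L = X[idx]

    def rbf(A, B):
        # Squared Euclidean distances without forming an n x m x d tensor.
        sq = (A ** 2).sum(1)[:, None] + (B ** 2).sum(1)[None, :] - 2.0 * A @ B.T
        return np.exp(-gamma * sq)

    K_nm = rbf(X, L)   # n x m kernel block (all points vs. landmarks)
    K_mm = rbf(L, L)   # m x m landmark block
    # Symmetric inverse square root via eigendecomposition (small jitter for stability).
    vals, vecs = np.linalg.eigh(K_mm + 1e-8 * np.eye(n_landmarks))
    K_mm_inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
    return K_nm @ K_mm_inv_sqrt, idx


# Example usage: 1,000 points in 20 dimensions, approximated with 200 random
# features or 200 landmarks; either map keeps memory at O(n * m) instead of O(n^2).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
Z_rff = random_fourier_features(X, n_features=200, gamma=0.1, rng=rng)
Z_nys, landmarks = nystrom_landmark_features(X, n_landmarks=200, gamma=0.1, rng=rng)
```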





Cite this article
Decherchi, S., Cavalli, A. Fast and Memory-Efficient Import Vector Domain Description. Neural Process Lett 52, 511–524 (2020). https://doi.org/10.1007/s11063-020-10243-6