Abstract
This paper proposes a density-based semi-supervised online sequential extreme learning machine (D-SOS-ELM). The proposed method learns from unlabeled samples online, chunk by chunk. Local density and relative distance are used to measure the similarity between patterns, and patterns with high confidence are selected by a ‘follow’ strategy for online learning, which improves learning accuracy. Through continuous pattern selection, the proposed method ultimately achieves effective learning from unlabeled patterns. Furthermore, local density and relative distance capture the relationships between patterns more faithfully than traditional distance-based similarity measures, improving the ability to deal with complex data. An empirical study on several standard benchmark data sets demonstrates that the proposed D-SOS-ELM model outperforms state-of-the-art methods in terms of accuracy.
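To make the online selection idea concrete, the following is a minimal sketch, not the authors' implementation: a plain OS-ELM (Liang et al.) with a sigmoid hidden layer, where a simple margin-based confidence score stands in for the paper's density/distance-based ‘follow’ criterion. All class, function, and parameter names are illustrative assumptions.

```python
# Minimal sketch (assumptions: sigmoid hidden layer, standard OS-ELM recursive
# update; the margin-based confidence filter is a placeholder for the paper's
# density/distance-based 'follow' criterion, not the authors' method).
import numpy as np

class OSELM:
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_in, n_hidden))   # random input weights
        self.b = rng.normal(size=n_hidden)           # random hidden biases
        self.beta = np.zeros((n_hidden, n_out))      # output weights
        self.P = None                                # recursive inverse term

    def _h(self, X):                                 # hidden-layer output
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def init_fit(self, X, T):                        # initial labeled batch
        H = self._h(X)
        self.P = np.linalg.inv(H.T @ H + 1e-6 * np.eye(H.shape[1]))
        self.beta = self.P @ H.T @ T

    def partial_fit(self, X, T):                     # standard OS-ELM update
        H = self._h(X)
        K = np.linalg.inv(np.eye(len(X)) + H @ self.P @ H.T)
        self.P = self.P - self.P @ H.T @ K @ H @ self.P
        self.beta = self.beta + self.P @ H.T @ (T - H @ self.beta)

    def scores(self, X):
        return self._h(X) @ self.beta

def learn_unlabeled_chunk(model, X_chunk, threshold=0.2):
    """Pseudo-label a chunk, keep only high-confidence samples, update online."""
    s = model.scores(X_chunk)
    margin = s.max(axis=1) - np.partition(s, -2, axis=1)[:, -2]
    keep = margin > threshold                        # crude confidence proxy
    if keep.any():
        T = np.eye(s.shape[1])[s[keep].argmax(axis=1)]   # one-hot pseudo-labels
        model.partial_fit(X_chunk[keep], T)
```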

Notes
The cutoff kernel can also be used for density description, but it assigns identical densities to many points, which complicates the subsequent operations. (Cutoff kernel: \(\rho _i=\sum \nolimits _{j\in I_s\backslash \{i\}}\chi (d_{ij}-d_c)\), where \(\chi (x)=1\) if \(x<0\) and \(\chi (x)=0\) otherwise.)
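As an illustration of this point, the following sketch (an assumed implementation, not the authors' code) computes the local density with both the cutoff kernel and a Gaussian kernel over Euclidean distances; the cutoff kernel yields integer counts and therefore many tied densities, whereas the Gaussian kernel gives essentially unique values.

```python
# Local density with the cutoff kernel vs. a Gaussian kernel (illustrative).
import numpy as np
from scipy.spatial.distance import cdist

def local_density(X, d_c, kernel="gaussian"):
    D = cdist(X, X)                   # pairwise distances d_ij
    np.fill_diagonal(D, np.inf)       # exclude j == i from the sum
    if kernel == "cutoff":
        # rho_i = sum_j chi(d_ij - d_c), chi(x) = 1 if x < 0 else 0
        return (D < d_c).sum(axis=1).astype(float)
    # Gaussian kernel: rho_i = sum_j exp(-(d_ij / d_c)^2), rarely tied
    return np.exp(-(D / d_c) ** 2).sum(axis=1)

X = np.random.default_rng(0).normal(size=(200, 2))
print(len(np.unique(local_density(X, 0.5, "cutoff"))))    # few distinct values
print(len(np.unique(local_density(X, 0.5, "gaussian"))))  # ~200 distinct values
```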
As a rule of thumb, one can choose \(d_c\) so that the average number of neighbors is around 1–2% of the total number of points in the data set. If \(d_c\) is too large, every point has a large density and the values lose their discriminative power; if it is too small, a single class may be split into several clusters, and in the extreme case every point forms its own cluster. The 1–2% range is an empirical value from Rodriguez’s work, obtained over several data sets and multiple trials.
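A minimal sketch of this rule of thumb, under the assumption that \(d_c\) is taken as a low quantile of all pairwise distances (so that the average neighborhood contains roughly 1–2% of the points):

```python
# Choose d_c from a low quantile of the pairwise distances (illustrative).
import numpy as np
from scipy.spatial.distance import pdist

def choose_dc(X, neighbor_fraction=0.015):
    dists = np.sort(pdist(X))                          # all pairwise distances
    idx = int(round(neighbor_fraction * (len(dists) - 1)))
    return dists[idx]

X = np.random.default_rng(1).normal(size=(300, 2))
d_c = choose_dc(X)   # each point then has ~1.5% of the data set as neighbors
```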
Rodriguez’s work suggests that, in density clustering, a pattern X should be assigned to the same class as its nearest neighbor among the patterns whose density is larger than that of X.
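A minimal sketch of this assignment rule (variable names are illustrative, not the authors' code): each unlabeled pattern inherits the label of its nearest neighbor among the patterns with strictly larger density.

```python
# Density-based label assignment following the nearest higher-density neighbor.
import numpy as np
from scipy.spatial.distance import cdist

def assign_by_density(X, rho, labels):
    """labels: -1 marks unlabeled patterns; rho: local density of each pattern."""
    D = cdist(X, X)
    out = labels.copy()
    # Visit patterns from high to low density so that higher-density patterns
    # are already resolved when lower-density ones look them up.
    for i in np.argsort(-rho):
        if out[i] != -1:
            continue
        higher = np.where(rho > rho[i])[0]
        if higher.size:                       # nearest higher-density pattern
            out[i] = out[higher[np.argmin(D[i, higher])]]
        # a density peak with no labeled ancestor simply stays unlabeled (-1)
    return out
```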
References
MacKay DJC (1992) A practical Bayesian framework for backpropagation networks. Neural Comput 4(3):448–472
Sarabakha A, Imanberdiyev N, Kayacan E et al (2017) Novel Levenberg–Marquardt based learning algorithm for unmanned aerial vehicles. Inf Sci 416:361–380
Ding S, Xu X, Nie R (2014) Extreme learning machine and its applications. Neural Comput Appl 25(3–4):549–556
Xue J, Zhou SH, Liu Q et al (2018) Financial time series prediction using l2, 1 RF-ELM. Neurocomputing 277(14):176–186
Yang C, Huang K, Cheng H et al (2017) Haptic identification by ELM-controlled uncertain manipulator. IEEE Trans Syst Man Cybern Syst 47(8):2398–2409
Martinez-Garcia JA, Sancho-Gomez JL (2018) Performance analysis of No-Propagation and ELM algorithms in classification. Neural Comput Appl 10–12:1–11
Huang GB, Zhu QY, Siew CK (2006) Extreme learning machine: theory and applications. Neurocomputing 70(1–3):489–501
Huang GB (2014) An insight into extreme learning machines: random neurons, random features and kernels. Cogn Comput 6(3):376–390
Li X, Mao W, Jiang W (2016) Multiple-kernel-learning-based extreme learning machine for classification design. Neural Comput Appl 27(1):175–184
Huang GB, Zhou H, Ding X et al (2012) Extreme learning machine for regression and multiclass classification. IEEE Trans Syst Man Cybern B 42(2):513–529
Zou W, Yao F, Zhang B et al (2017) Improved Meta-ELM with error feedback incremental ELM as hidden nodes. Neural Comput Appl 8:1–8
Liang NY, Huang GB, Saratchandran P et al (2006) A fast and accurate online sequential learning algorithm for feedforward networks. IEEE Trans Neural Netw 17(6):1411–1423
Zhao J, Wang Z, Dong SP (2012) Online sequential extreme learning machine with forgetting mechanism. Neurocomputing 87(15):79–89
Scardapane S, Comminiello D, Scarpiniti M et al (2015) Online sequential extreme learning machine with kernels. IEEE Trans Neural Netw Learn Syst 26(9):2214–2220
Zou QY, Wang XJ, Zhou CJ et al (2018) The memory degradation based online sequential extreme learning machine. Neurocomputing 275:2864–2879
Wang B, Huang S, Qiu J et al (2015) Parallel online sequential extreme learning machine based on MapReduce. Neurocomputing 149(PA):224–232
Huang G, Song S, Gupta JN et al (2014) Semi-supervised and unsupervised extreme learning machines. IEEE Trans Cybern 44(12):2405–2417
Liu S, Feng L, Wang H et al (2016) Extend semi-supervised ELM and a framework. Neural Comput Appl 27(1):205–213
Krishnasamy G, Paramesran R (2016) Hessian semi-supervised extreme learning machine. Neurocomputing 207(C):560–567
Bisio F, Decherchi S, Gastaldo P et al (2016) Inductive bias for semi-supervised extreme learning machine. Neurocomputing 174(PA):154–167
Yi Y, Qiao S, Zhou W et al (2018) Adaptive multiple graph regularized semi-supervised extreme learning machine. Soft Comput 22(6):1–18
Jia X, Wang R, Liu J et al (2016) A semi-supervised online sequential extreme learning machine method. Neurocomputing 174(PA):168–178
Chapelle O, Schölkopf B, Zien A (2006) A discussion of semi-supervised learning and transduction. In: Semi-supervised learning. MIT Press, Cambridge
Kawakita M, Kanamori T (2013) Semi-supervised learning with density-ratio estimation. Mach Learn 91(2):189–209
Soares RGF, Chen H, Yao X (2018) Efficient cluster-based boosting for semisupervised classification. IEEE Trans Neural Netw Learn Syst 2018(99):1–14
Rodriguez A, Laio A (2014) Clustering by fast search and find of density peaks. Science 344(6191):1492–1496
Jain A, Law M (2005) Data clustering: a user’s dilemma. Lect Notes Comput Sci 3776:1–10
Fu L, Medico E (2007) FLAME, a novel fuzzy clustering method for the analysis of DNA microarray data. BMC Bioinform 8(1):3
Pedregosa F, Gramfort A, Michel V et al (2011) Scikit-learn: machine learning in Python. J Mach Learn Res 12:2825–2830
Nigam K, McCallum A, Mitchell TM (2006) Semi-supervised text classification using EM. In: Semi-supervised learning. MIT Press, Cambridge
Lin F, Cohen WW (2010) Semi-supervised classification of network data using very few labels. In: International conference on advances in social networks analysis and mining, pp 192–199
Cheung E, Li Y (2017) Self-training with adaptive regularization for S3VM. In: 2017 international joint conference on neural networks (IJCNN), Anchorage, AK, pp 3633–3640
Acknowledgements
This work is supported in part by the National Natural Science Foundation of PR China (61773219, 61503192), the Natural Science Foundation of Jiangsu Province (BK20161533), and the Qing Lan Project of Jiangsu Province.
Cite this article
Xia, M., Wang, J., Liu, J. et al. Density-based semi-supervised online sequential extreme learning machine. Neural Comput & Applic 32, 7747–7758 (2020). https://doi.org/10.1007/s00521-019-04066-3