Abstract
The extreme learning machine (ELM) has been studied extensively owing to its fast training and good generalization. Unfortunately, existing ELM-based feature representation methods are uncompetitive with state-of-the-art deep neural networks (DNNs) on complex visual recognition tasks. This weakness stems from two critical defects: (1) random feature mappings (RFM) drawn from an ad hoc probability distribution cannot reliably project diverse input data into discriminative feature spaces; (2) in ELM-based hierarchical architectures, features from the previous layer are scattered by the RFM of the current layer, which makes abstracting higher-level features ineffective. To address these issues, we exploit label information to optimize the random mapping in the ELM, using an efficient label alignment metric to learn a conditional random feature mapping (CRFM) in a supervised manner. Moreover, we propose a new CRFM-based single-layer ELM (CELM) and extend CELM to a supervised multi-layer learning architecture (ML-CELM). Extensive experiments on widely used datasets demonstrate that our approach is more effective than the original ELM-based methods and existing DNN feature representation methods, with fast training and testing. The proposed CELM and ML-CELM achieve discriminative and robust feature representations and show superior generalization and speed in various simulations.
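For orientation, the baseline the abstract builds on can be sketched as follows. This is a minimal sketch of the *standard* single-layer ELM (random hidden weights plus a closed-form, ridge-regularized least-squares solve for the output weights), not the paper's supervised CRFM/CELM; all function names, the `tanh` activation, and the toy two-blob data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, Y, n_hidden=64, C=1.0):
    """Train a basic ELM. X: (n, d) inputs, Y: (n, k) one-hot targets.

    Hidden-layer weights are random (the RFM the abstract criticizes);
    only the output weights beta are learned, in closed form.
    """
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)                           # random feature mapping
    # Ridge-regularized solve: beta = (H^T H + I/C)^{-1} H^T Y
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ Y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: two well-separated Gaussian blobs, two classes.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
Y = np.vstack([np.tile([1, 0], (50, 1)), np.tile([0, 1], (50, 1))])
W, b, beta = elm_fit(X, Y)
acc = (elm_predict(X, W, b, beta).argmax(1) == Y.argmax(1)).mean()
```

The closed-form solve is what gives the ELM its training speed; the paper's contribution is to replace the unconditioned random `W` with a label-conditioned mapping while keeping that efficiency.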
Funding
This work was supported by the National Natural Science Foundation of China (NSFC) under Grant 91438203.
Ethics declarations
Conflict of Interest
All authors declare that they have no conflict of interest.
Informed Consent
Informed consent was not required as no human or animals were involved.
Ethical Approval
This article does not contain any studies with human participants or animals performed by any of the authors.
Cite this article
Li, C., Deng, C., Zhou, S. et al. Conditional Random Mapping for Effective ELM Feature Representation. Cogn Comput 10, 827–847 (2018). https://doi.org/10.1007/s12559-018-9557-x