Abstract
Network embedding learns a low-dimensional representation for each node with the goal of capturing and preserving the complex structure of the original network, and has become essential in network analysis. The structure of real-world networks is highly non-linear, yet most existing methods cannot capture it well owing to their shallow models. While a few deep neural networks have been adopted to capture this high non-linearity, their deep architectures make them difficult to optimize in practice. In this paper, we propose a novel deep network embedding method that exploits the fast learning speed of the extreme learning machine (ELM). In particular, we first design a deep ELM-based auto-encoder, and based on it we then propose an extended model that preserves both first-order and second-order proximities through a joint loss function. Extensive experiments on real-world network datasets demonstrate the effectiveness and efficiency of the proposed method compared with state-of-the-art embedding methods on network recovery, multi-class classification and multi-label classification tasks.
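To make the core building block concrete, the following is a minimal sketch of a single ELM auto-encoder layer in NumPy. It is an illustration of the general ELM-AE idea (random, untrained hidden weights plus a closed-form ridge solution for the output weights), not the authors' exact model; the function name, hidden size and regularization constant are illustrative assumptions.

```python
import numpy as np

def elm_autoencoder(X, n_hidden, reg=1e-3, rng=None):
    """One ELM auto-encoder layer: random hidden mapping, closed-form output weights."""
    rng = np.random.default_rng(rng)
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights (never trained)
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations

    # Closed-form ridge regression for the output weights:
    # beta = (H^T H + reg * I)^{-1} H^T X, so that H @ beta reconstructs X
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ X)

    # In ELM-AE, beta captures the learned transformation; projecting the
    # input with beta^T yields the low-dimensional embedding
    return X @ beta.T, beta

# Usage: embed a toy 10-node, 16-feature matrix into 4 dimensions
X = np.random.rand(10, 16)
Z, beta = elm_autoencoder(X, n_hidden=4, reg=1e-2, rng=0)
print(Z.shape)  # (10, 4)
```

Because the output weights are obtained in one linear solve rather than by backpropagation, stacking several such layers is what gives ELM-based deep models their speed advantage over gradient-trained auto-encoders.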
Notes
For weighted networks, \(\mathrm{S}_{i,j}>0\); in this paper we consider only unweighted networks.
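The two proximities mentioned in the abstract can be illustrated directly on such an adjacency matrix \(\mathrm{S}\). In the standard formulation, first-order proximity is the observed edge weight \(\mathrm{S}_{i,j}\) itself, while second-order proximity compares the neighbourhood vectors (rows of \(\mathrm{S}\)) of two nodes. The toy matrix and the cosine measure below are illustrative choices, not taken from the paper.

```python
import numpy as np

# Hypothetical unweighted adjacency matrix S (S_ij in {0, 1}) for 4 nodes
S = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# First-order proximity: the edge weight S_ij itself
print(S[0, 1])  # 1.0 -> nodes 0 and 1 are directly linked

def second_order(S, i, j):
    """Second-order proximity: cosine similarity of the neighbourhood rows."""
    ni, nj = S[i], S[j]
    return ni @ nj / (np.linalg.norm(ni) * np.linalg.norm(nj))

# Nodes 0 and 1 share one common neighbour (node 2) out of two each
print(round(second_order(S, 0, 1), 3))  # 0.5
```

Preserving both signals jointly means the embedding keeps directly linked nodes close (first order) while also pulling together nodes whose neighbourhoods overlap (second order).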
Acknowledgements
This work is supported by the 111 Project (no. B17007).
Cite this article
Chu, Y., Feng, C., Guo, C. et al. Network embedding based on deep extreme learning machine. Int. J. Mach. Learn. & Cyber. 10, 2709–2724 (2019). https://doi.org/10.1007/s13042-018-0895-5