Abstract
Network representation learning is a de facto tool for graph analytics. Most previous approaches factorize the proximity matrix between nodes. However, since the proximity matrix is \(n \times n\) for a graph of \(n\) nodes, factorizing it requires \(O(n^3)\) time and \(O(n^2)\) space. The proposed approach instead computes representations of clusters from the similarities between clusters and then derives node representations from the cluster representations. Since the number of clusters \(l\) satisfies \(l \ll n\), we can efficiently obtain the cluster representations from a small \(l \times l\) similarity matrix. Experiments show that our approach performs network representation learning more efficiently and effectively than existing approaches.
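To make the idea in the abstract concrete, the following is a minimal Python sketch of a cluster-based embedding under stated assumptions: a dense NumPy adjacency matrix, a precomputed hard cluster assignment, and a plain SVD as the factorization. The function name embed_via_clusters, the membership-matrix construction, and the choice of SVD are illustrative assumptions, not the authors' actual algorithm.

import numpy as np

def embed_via_clusters(adj, labels, dim):
    """adj: (n, n) adjacency matrix; labels: cluster id per node (l clusters);
    dim: embedding dimension (must satisfy dim <= l)."""
    n = adj.shape[0]
    l = labels.max() + 1
    # Membership matrix H (n x l): H[i, c] = 1 if node i belongs to cluster c.
    H = np.zeros((n, l))
    H[np.arange(n), labels] = 1.0
    # Small l x l cluster similarity matrix (aggregated edge weights),
    # built in place of the full n x n node proximity matrix.
    S = H.T @ adj @ H
    # Factorize the l x l matrix; this is where the O(l^3) vs O(n^3) saving comes from.
    U, sigma, _ = np.linalg.svd(S)
    cluster_emb = U[:, :dim] * np.sqrt(sigma[:dim])
    # Each node's representation is derived from its cluster's representation.
    return H @ cluster_emb

Because the expensive factorization touches only the \(l \times l\) matrix S, the cost of the SVD drops from \(O(n^3)\) to \(O(l^3)\), and only the final matrix product involves all \(n\) nodes.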
Acknowledgment
This work was supported by JSPS KAKENHI Grant Number 22H03596.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Fujiwara, Y., Ida, Y., Kumagai, A., Nakano, M., Kimura, A., Ueda, N. (2023). Efficient Network Representation Learning via Cluster Similarity. In: Wang, X., et al. Database Systems for Advanced Applications. DASFAA 2023. Lecture Notes in Computer Science, vol 13945. Springer, Cham. https://doi.org/10.1007/978-3-031-30675-4_20
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-30674-7
Online ISBN: 978-3-031-30675-4
eBook Packages: Computer Science (R0)