ABSTRACT
Most existing graph analytics for understanding social behavior learns from static rather than dynamic graphs, relies on hand-crafted network features or recently emerged graph embeddings learned independently of a downstream predictive task, and solves predictive tasks (e.g., link prediction) rather than forecasting tasks directly. To address these limitations, we propose (1) a novel task -- forecasting user interactions over dynamic social graphs -- and (2) a novel deep learning, multi-task, node-aware attention model that focuses on forecasting social interactions, going beyond recently emerged approaches for learning dynamic graph embeddings. Our model relies on graph convolutions and recurrent layers to forecast future social behavior and interaction patterns in dynamic social graphs. We evaluate our model on its ability to forecast the number of retweets and mentions of a specific news source on Twitter (focusing on deceptive and credible news sources), achieving R^2 of 0.79 for retweets and 0.81 for mentions. An additional evaluation covers forecasts of user-repository interactions on GitHub and comments on a specific video on YouTube, with a mean absolute error close to 2% and R^2 exceeding 0.69. Our results demonstrate that learning from connectivity information over time, in combination with node embeddings, yields better forecasting results than incorporating state-of-the-art graph embeddings such as Node2Vec and DeepWalk into our model. Finally, we perform in-depth analyses of the factors that influence model performance across tasks and different graph types, e.g., the influence of training and forecasting windows as well as graph topological properties.
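The architecture described above (graph convolutions over dynamic-graph snapshots feeding a recurrent layer, with a forecast read out at the end) can be sketched in miniature. The following is an illustrative NumPy sketch, not the authors' implementation: the graph-convolution propagation follows Kipf and Welling (2017), a GRU stands in for the recurrent layer, and the weight names, mean-pooling over nodes, and scalar readout are all assumptions made for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def normalized_adjacency(A):
    """Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_layer(A_norm, X, W):
    """One graph-convolution layer: nonlinearity(A_norm X W)."""
    return np.tanh(A_norm @ X @ W)

def gru_step(h, x, p):
    """One GRU update; p holds the gate weight matrices (hypothetical names)."""
    z = sigmoid(x @ p["Wz"] + h @ p["Uz"])              # update gate
    r = sigmoid(x @ p["Wr"] + h @ p["Ur"])              # reset gate
    h_tilde = np.tanh(x @ p["Wh"] + (r * h) @ p["Uh"])  # candidate state
    return (1.0 - z) * h + z * h_tilde

def forecast_interactions(snapshots, features, W_gcn, gru_params, w_out):
    """Encode each snapshot with a GCN, mean-pool node embeddings into a
    graph-level vector, roll a GRU over the snapshot sequence, and read out
    a scalar forecast (e.g., an interaction count for the next window)."""
    h = np.zeros(gru_params["Wz"].shape[1])
    for A, X in zip(snapshots, features):
        Z = gcn_layer(normalized_adjacency(A), X, W_gcn)  # node embeddings
        h = gru_step(h, Z.mean(axis=0), gru_params)       # temporal state
    return float(h @ w_out)

# Toy example: 3 snapshots of a 5-node graph with 4-dim node features.
rng = np.random.default_rng(0)
n, f, d, hid = 5, 4, 8, 6
snaps = []
for _ in range(3):
    A = (rng.random((n, n)) < 0.4).astype(float)
    A = np.triu(A, 1)
    snaps.append(A + A.T)  # symmetric adjacency, no self-loops
feats = [rng.standard_normal((n, f)) for _ in snaps]
W_gcn = rng.standard_normal((f, d)) * 0.1
gru_params = {k: rng.standard_normal(s) * 0.1
              for k, s in [("Wz", (d, hid)), ("Uz", (hid, hid)),
                           ("Wr", (d, hid)), ("Ur", (hid, hid)),
                           ("Wh", (d, hid)), ("Uh", (hid, hid))]}
w_out = rng.standard_normal(hid) * 0.1
y_hat = forecast_interactions(snaps, feats, W_gcn, gru_params, w_out)
print(y_hat)
```

In the paper's setting the weights would be trained end-to-end on the forecasting objective (rather than drawn at random), and the readout would target quantities such as retweet or mention counts per news source.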
- Peter J Brockwell, Richard A Davis, and Matthew V Calder. 2002. Introduction to time series and forecasting. Vol. 2. Springer.
- Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. 2014. Spectral networks and locally connected networks on graphs. In International Conference on Learning Representations (ICLR 2014).
- Dorota Celińska. 2018. Coding Together in a Social Network: Collaboration Among GitHub Users. In Proceedings of the 9th International Conference on Social Media and Society (SMSociety '18). ACM, New York, NY, USA, 31--40. https://doi.org/10.1145/3217804.3217895
- Hsinchun Chen, Xin Li, and Zan Huang. 2005. Link prediction approach to collaborative filtering. In Proceedings of the 5th ACM/IEEE-CS Joint Conference on Digital Libraries (JCDL 2005). IEEE, 141--142.
- Ronan Collobert and Jason Weston. 2008. A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning. In Proceedings of the 25th International Conference on Machine Learning (ICML). 160--167.
- Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. 2016. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems. 3844--3852.
- Li Deng, Dong Yu, et al. 2014. Deep learning: methods and applications. Foundations and Trends® in Signal Processing, Vol. 7, 3--4 (2014), 197--387.
- Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. 2017. Neural message passing for quantum chemistry. arXiv preprint arXiv:1704.01212 (2017).
- Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (Proceedings of Machine Learning Research), Yee Whye Teh and Mike Titterington (Eds.), Vol. 9. 249--256.
- Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep Learning. MIT Press. http://www.deeplearningbook.org
- Georgios Gousios and Diomidis Spinellis. 2012. GHTorrent: GitHub's data from a firehose. In Proceedings of Mining Software Repositories. 12--21.
- Aditya Grover and Jure Leskovec. 2016. Node2Vec: Scalable Feature Learning for Networks. In Proceedings of ACM SIGKDD. 855--864.
- Will Hamilton, Zhitao Ying, and Jure Leskovec. 2017a. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems. 1025--1035.
- William L Hamilton, Rex Ying, and Jure Leskovec. 2017b. Representation learning on graphs: Methods and applications. arXiv preprint arXiv:1709.05584 (2017).
- Isaac Henrion, Johann Brehmer, Joan Bruna, Kyunghyun Cho, Kyle Cranmer, Gilles Louppe, and Gaspar Rochette. 2017. Neural Message Passing for Jet Physics. (2017).
- Diederik P. Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. CoRR, Vol. abs/1412.6980 (2014). http://arxiv.org/abs/1412.6980
- Thomas N. Kipf and Max Welling. 2017. Semi-Supervised Classification with Graph Convolutional Networks. In International Conference on Learning Representations (ICLR).
- Günter Klambauer, Thomas Unterthiner, Andreas Mayr, and Sepp Hochreiter. 2017. Self-Normalizing Neural Networks. CoRR, Vol. abs/1706.02515 (2017). http://arxiv.org/abs/1706.02515
- Yann LeCun, Léon Bottou, Genevieve B. Orr, and Klaus-Robert Müller. 1998. Efficient BackProp. In Neural Networks: Tricks of the Trade. 9--50.
- Antonio Lima, Luca Rossi, and Mirco Musolesi. 2014. Coding together at scale: GitHub as a collaborative social network. In Eighth International AAAI Conference on Weblogs and Social Media.
- Giang Hoang Nguyen, John Boaz Lee, Ryan A Rossi, Nesreen K Ahmed, Eunyee Koh, and Sungchul Kim. 2018. Continuous-time dynamic network embeddings. In 3rd International Workshop on Learning Representations for Big Networks (WWW BigNet).
- Ping-Feng Pai and Chih-Sheng Lin. 2005. A hybrid ARIMA and support vector machines model in stock price forecasting. Omega, Vol. 33, 6 (2005), 497--505.
- Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014. DeepWalk: Online Learning of Social Representations. In Proceedings of ACM SIGKDD. 701--710.
- Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. 2015. LINE: Large-scale Information Network Embedding. In Proceedings of the 24th International Conference on World Wide Web (WWW '15). International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, Switzerland, 1067--1077. https://doi.org/10.1145/2736277.2741093
- Rakshit Trivedi, Hanjun Dai, Yichen Wang, and Le Song. 2017. Know-Evolve: Deep Temporal Reasoning for Dynamic Knowledge Graphs. In Proceedings of the 34th International Conference on Machine Learning (Proceedings of Machine Learning Research), Doina Precup and Yee Whye Teh (Eds.), Vol. 70. 3462--3471.
- Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. 2017. Graph Attention Networks. arXiv preprint arXiv:1710.10903 (2017).
- Svitlana Volkova, Kyle Shaffer, Jin Yea Jang, and Nathan Hodas. 2017. Separating Facts from Fiction: Linguistic Models to Classify Suspicious and Trusted News Posts on Twitter. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). 647--653.
- Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. 2019. How Powerful are Graph Neural Networks? In International Conference on Learning Representations (ICLR).
- Jiaxuan You, Rex Ying, Xiang Ren, William L. Hamilton, and Jure Leskovec. 2018. GraphRNN: Generating Realistic Graphs with Deep Auto-regressive Models. In Proceedings of the 35th International Conference on Machine Learning (ICML). 5694--5703.
- Wenchao Yu, Charu C Aggarwal, and Wei Wang. 2017. Temporally factorized network modeling for evolutionary network analysis. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining. ACM, 455--464.
- Daokun Zhang, Jie Yin, Xingquan Zhu, and Chengqi Zhang. 2018. SINE: Scalable Incomplete Network Embedding. In 2018 IEEE International Conference on Data Mining (ICDM). IEEE, 737--746.
- G Peter Zhang. 2003. Time series forecasting using a hybrid ARIMA and neural network model. Neurocomputing, Vol. 50 (2003), 159--175.
- Le-kui Zhou, Yang Yang, Xiang Ren, Fei Wu, and Yueting Zhuang. 2018. Dynamic Network Embedding by Modeling Triadic Closure Process. In AAAI.
- Tao Zhou, Jie Ren, Matúš Medo, and Yi-Cheng Zhang. 2007. Bipartite network projection and personal recommendation. Physical Review E, Vol. 76, 4 (2007), 046115.
Learning from Dynamic User Interaction Graphs to Forecast Diverse Social Behavior