Abstract
Online discussion platforms require extracting the structure of a discussion in order to support understanding of its flow. Toward this end, this paper proposes an approach that performs node classification, which is the first step in extracting the structure of online discussions. The proposed approach employs a graph attention network (GAT) to learn the discussion structure directly. Specifically, the GAT, a type of graph neural network (GNN), encodes graph structures directly, and its attention-based architecture allows it to handle graphs of varying structure. To evaluate the proposed approach, we conducted a set of experiments on the persuasive essay dataset, styled using the issue-based information system (IBIS). The experimental results show that the proposed approach accurately classifies the nodes in online discussion structures.
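To make the GAT mechanism described above concrete, the following is a minimal NumPy sketch of a single graph attention layer in the style of Veličković et al.: each node projects its features, computes attention logits against its neighbours via a shared attention vector, normalizes them with a softmax over the neighbourhood, and aggregates. All names (`gat_layer`, the dense adjacency matrix `A`) are illustrative assumptions, not the paper's implementation, which uses PyTorch/DGL.

```python
import numpy as np

def gat_layer(H, A, W, a, slope=0.2):
    """One graph attention layer (single head, no output nonlinearity).

    H: (N, F) node features
    A: (N, N) adjacency matrix with self-loops (A[i, i] = 1)
    W: (F, Fp) shared linear projection
    a: (2 * Fp,) attention vector, split into source/neighbour halves
    """
    Z = H @ W                                   # (N, Fp) projected node features
    Fp = Z.shape[1]
    src = Z @ a[:Fp]                            # per-node source contribution
    dst = Z @ a[Fp:]                            # per-node neighbour contribution
    e = src[:, None] + dst[None, :]             # raw logits e_ij = a^T [z_i || z_j]
    e = np.where(e > 0, e, slope * e)           # LeakyReLU
    e = np.where(A > 0, e, -np.inf)             # mask non-edges
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)   # softmax over each neighbourhood
    return alpha @ Z                            # attention-weighted aggregation
```

With the attention vector set to zero the layer degenerates to uniform neighbourhood averaging, which is a useful sanity check; node classification would stack such layers and apply a softmax classifier on the final node embeddings.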
Acknowledgements
This work was supported by JST CREST Grant Number JPMJCR15E1 including AIP challenge program, Japan.
Copyright information
© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Suzuki, S., Ito, T., Moustafa, A., Hadfi, R. (2021). A Node Classification Approach for Dynamically Extracting the Structures of Online Discussions. In: Yada, K., et al. Advances in Artificial Intelligence. JSAI 2020. Advances in Intelligent Systems and Computing, vol 1357. Springer, Cham. https://doi.org/10.1007/978-3-030-73113-7_1
DOI: https://doi.org/10.1007/978-3-030-73113-7_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-73112-0
Online ISBN: 978-3-030-73113-7
eBook Packages: Intelligent Technologies and Robotics (R0)