
A Node Classification Approach for Dynamically Extracting the Structures of Online Discussions

Conference paper
Advances in Artificial Intelligence (JSAI 2020)

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 1357)

Abstract

Online discussion platforms need to extract the structure of a discussion in order to help participants follow its flow. To this end, this paper proposes an approach that performs node classification as a first step towards extracting the structure of online discussions. The proposed approach employs a graph attention network (GAT) to learn the discussion structure directly. Specifically, the GAT, a type of graph neural network (GNN), encodes graph structures directly, and its attention-based architecture allows it to handle graphs of differing structure. To evaluate the proposed approach, we conducted a set of experiments on a persuasive essay dataset styled using the issue-based information system (IBIS). The experimental results show that the proposed approach classifies the nodes of online discussion structures accurately.
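The core of the approach described above is attention-weighted aggregation over a discussion graph: each node (post) updates its representation from its neighbours, with learned attention deciding how much each neighbour contributes. The following is a minimal, illustrative NumPy sketch of one single-head graph attention layer applied to a toy reply graph; the toy graph, feature dimensions, and random weights are assumptions for illustration, not the authors' implementation.

```python
import numpy as np


def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)


def gat_layer(h, adj, W, a):
    """One forward pass of a single-head graph attention layer.

    h:   (N, F)  node features
    adj: (N, N)  adjacency matrix with self-loops
    W:   (F, F') shared linear projection
    a:   (2F',)  attention vector
    """
    z = h @ W                                  # project node features: (N, F')
    f = z.shape[1]
    # e_ij = LeakyReLU(a^T [z_i || z_j]) decomposes into two per-node scores
    s_src = z @ a[:f]                          # contribution of node i
    s_dst = z @ a[f:]                          # contribution of node j
    e = leaky_relu(s_src[:, None] + s_dst[None, :])
    # mask non-edges, then softmax-normalise over each node's neighbourhood
    e = np.where(adj > 0, e, -np.inf)
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)
    return alpha @ z                           # attention-weighted aggregation


# toy discussion: 4 posts, reply edges 0-1, 1-2, 1-3, plus self-loops
rng = np.random.default_rng(0)
N, F_in, F_out = 4, 8, 3
adj = np.eye(N)
for i, j in [(0, 1), (1, 2), (1, 3)]:
    adj[i, j] = adj[j, i] = 1.0

h = rng.standard_normal((N, F_in))
W = rng.standard_normal((F_in, F_out))
a = rng.standard_normal(2 * F_out)
out = gat_layer(h, adj, W, a)
print(out.shape)  # (4, 3)
```

Because the attention weights are computed per edge and normalised per neighbourhood, the same layer applies unchanged to discussion trees of any size or shape, which is what lets the GAT handle graphs of differing structure.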



Acknowledgements

This work was supported by JST CREST Grant Number JPMJCR15E1, including the AIP challenge program, Japan.

Author information

Correspondence to Shota Suzuki.

Copyright information

© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Suzuki, S., Ito, T., Moustafa, A., Hadfi, R. (2021). A Node Classification Approach for Dynamically Extracting the Structures of Online Discussions. In: Yada, K., et al. Advances in Artificial Intelligence. JSAI 2020. Advances in Intelligent Systems and Computing, vol 1357. Springer, Cham. https://doi.org/10.1007/978-3-030-73113-7_1

