Abstract
This paper describes the system we submitted to the argumentative text understanding shared task for AI Debater at NLPCC 2021 (http://www.fudan-disc.com/sharedtask/AIDebater21/tracks.html). The shared task is motivated by the goal of developing an autonomous debating system. We make an initial attempt at Track 3, argument pair extraction from peer review and rebuttal, in which arguments are extracted from peer reviews and their corresponding rebuttals from author responses. Compared to the organizers' multi-task baseline, we introduce two significant changes: (i) we use ERNIE 2.0 token embeddings, which better capture lexical, syntactic, and semantic information in the training data, and (ii) we apply double attention learning to capture long-term dependencies. Our model achieves state-of-the-art results, with a relative improvement of 8.81% in F1 score over the baseline. Our code is publicly available at https://github.com/guneetsk99/ArgumentMining_SharedTask. Our team, ARGUABLY, won one of the third prizes in Track 3 of the shared task.
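For readers unfamiliar with the double-attention mechanism the abstract refers to (A²-Nets, Chen et al.), the following is a minimal NumPy sketch, not the authors' implementation: all shapes and the weight matrices `Wa`, `Wb`, `Wv` are toy assumptions. The block first gathers the whole sequence into a small set of global descriptors, then lets every position attend back to those descriptors, which is what allows long-range dependencies to be captured in two attention steps.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def double_attention(X, Wa, Wb, Wv):
    """One double-attention block over a sequence of token features.

    X: (n, d) token features (e.g., sentence representations).
    Step 1 (gather): pool all n positions into k global descriptors.
    Step 2 (distribute): let each position attend to those descriptors.
    """
    A = X @ Wa                   # feature projections              (n, m)
    B = softmax(X @ Wb, axis=0)  # gather attention over positions  (n, k)
    G = A.T @ B                  # k global descriptors             (m, k)
    V = softmax(X @ Wv, axis=1)  # distribute attention per token   (n, k)
    return V @ G.T               # updated token features           (n, m)

n, d, m, k = 6, 8, 8, 4          # toy sizes, for illustration only
X = rng.standard_normal((n, d))
Z = double_attention(X,
                     rng.standard_normal((d, m)),
                     rng.standard_normal((d, k)),
                     rng.standard_normal((d, k)))
print(Z.shape)  # (6, 8): one updated feature vector per input position
```

Because the sequence is compressed into only k descriptors before redistribution, every output position can draw on information from every input position at a cost linear in sequence length.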
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Kohli, G.S., Kaur, P., Singh, M., Ghosal, T., Rana, P.S. (2021). ARGUABLY @ AI Debater-NLPCC 2021 Task 3: Argument Pair Extraction from Peer Review and Rebuttals. In: Wang, L., Feng, Y., Hong, Y., He, R. (eds.) Natural Language Processing and Chinese Computing. NLPCC 2021. Lecture Notes in Computer Science, vol. 13029. Springer, Cham. https://doi.org/10.1007/978-3-030-88483-3_48
Print ISBN: 978-3-030-88482-6
Online ISBN: 978-3-030-88483-3