Abstract
Paraphrases are phrases or sentences that preserve the same meaning while using different wording. In contrast, when one phrase or sentence gives more detail than another, the pair is not a paraphrase. Following this assumption, this paper examines how the elaboration relation between phrases or sentences helps to improve performance on the paraphrase identification task. We present a sequential transfer learning framework that utilizes contextual features learned from the elaboration relation for paraphrase identification. The method first trains an elaboration relation model until it stabilizes and then adapts it to paraphrase identification. Results on the benchmark Microsoft Research Paraphrase Corpus (MRPC) show that the method attains a 1.7% accuracy improvement over a baseline model.
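The following is a minimal sketch of such a sequential transfer learning setup, assuming the Hugging Face transformers library and PyTorch. The sentence-pair datasets `elab_pairs` (elaboration vs. not) and `mrpc_pairs` (paraphrase vs. not) are hypothetical placeholders, and the fixed epoch counts stand in for the paper's "train until stable" criterion; none of these reflect the authors' exact configuration.

```python
# Sketch: fine-tune BERT on elaboration relation classification first,
# then adapt the same encoder to paraphrase identification (MRPC).
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

def encode(pairs):
    # pairs: list of (sentence1, sentence2, label) tuples (hypothetical data)
    enc = tokenizer([p[0] for p in pairs], [p[1] for p in pairs],
                    padding=True, truncation=True, return_tensors="pt")
    labels = torch.tensor([p[2] for p in pairs])
    return TensorDataset(enc["input_ids"], enc["attention_mask"], labels)

def train(model, dataset, epochs, lr=2e-5):
    loader = DataLoader(dataset, batch_size=16, shuffle=True)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for input_ids, attention_mask, labels in loader:
            loss = model(input_ids=input_ids,
                         attention_mask=attention_mask,
                         labels=labels).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()

# Stage 1: learn the elaboration relation.
train(model, encode(elab_pairs), epochs=3)

# Stage 2: replace the classification head and adapt the encoder
# (now carrying elaboration-aware features) to paraphrase identification.
model.classifier = torch.nn.Linear(model.config.hidden_size, 2)
train(model, encode(mrpc_pairs), epochs=3)
```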
Notes
2. The experiments were conducted on an Nvidia GeForce RTX 2080Ti (12 GB memory).
References
Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., Bowman, S.R.: GLUE: a multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461 (2018)
Arase, Y., Tsujii, J.: Transfer fine-tuning: a BERT case study. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pp. 5393–5404 (2019)
Bhagat, R., Hovy, E.: What is a paraphrase? Comput. Linguist. 39(3), 463–472 (2013)
Zhu, C., Cheng, Y., Gan, Z., Sun, S., Goldstein, T., Liu, J.: FreeLB: enhanced adversarial training for natural language understanding. arXiv preprint arXiv:1909.11764 (2019)
Raffel, C., et al.: Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683 (2020)
Madnani, N., Dorr, B.J.: Generating phrasal and sentential paraphrases: a survey of data-driven methods. Comput. Linguist. 36(3), 341–387 (2010)
Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 4171–4186 (2019)
Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. In: The 3rd International Conference on Learning Representations, pp. 1–15 (2015)
Liang, C., Paritosh, P., Rajendran, V., Forbus, K.D.: Learning paraphrase identification with structural alignment. In: Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, pp. 2859–2865 (2016)
Liu, X., He, P., Chen, W., Gao, J.: Multi-task deep neural networks for natural language understanding. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 4487–4496 (2019)
Liu, Y., et al.: RoBERTa: a robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019)
Clark, K., Luong, M.-T., Le, Q.V., Manning, C.D.: ELECTRA: pre-training text encoders as discriminators rather than generators. In: Proceedings of the 8th International Conference on Learning Representations (2020)
Mann, W.C., Thompson, S.A.: Rhetorical structure theory: toward a functional theory of text organization. Text-Interdisc. J. Study Discourse 8(3), 243–281 (1988)
Peters, M.E., et al.: Deep contextualized word representations. In: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 2227–2237 (2018)
Miltsakaki, E., Prasad, R., Joshi, A., Webber, B.: The Penn Discourse Treebank. In: Proceedings of the Fourth International Conference on Language Resources and Evaluation (2004)
Phang, J., Fevry, T., Bowman, S.R.: Sentence encoders on STILTs: supplementary training on intermediate labeled-data tasks. arXiv preprint arXiv:1811.01088 (2019)
Prasad, R., Dinesh, N., Lee, A., Miltsakaki, E., Robaldo, L.: The Penn Discourse Treebank 2.0. In: Proceedings of the Sixth International Conference on Language Resources and Evaluation, pp. 2961–2968 (2008)
Quirk, C., Brockett, C., Dolan, B.: Monolingual machine translation for paraphrase generation. In: Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pp. 142–149 (2004)
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language models are unsupervised multitask learners. OpenAI Blog 1(8), 9 (2019)
Reimers, N., Gurevych, I.: Sentence-BERT: sentence embeddings using Siamese BERT-Networks. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pp. 3982–3992 (2019)
Sohn, K., et al.: FixMatch: simplifying semi-supervised learning with consistency and confidence. In: Proceedings of the 34th Conference on Neural Information Processing Systems (2020)
Stevenson, M.: Fact distribution in information extraction. In: The International Conference on Language Resources and Evaluation, pp. 183–201 (2007)
Subramanian, S., Trischler, A., Bengio, Y., Pal, C.J.: Learning general purpose distributed sentence representations via large scale multi-task learning. In: Proceedings of the 6th International Conference on Learning Representations (2018)
Sun, Y., et al.: ERNIE 2.0: a continual pre-training framework for language understanding. arXiv preprint arXiv:1907.12412 (2019)
Swampillai, K., Stevenson, M.: Inter-sentential relations in information extraction corpora. In: The International Conference on Language Resources and Evaluation, pp. 17–23 (2010)
Vaswani, A., et al.: Attention is all you need. In: Advances in Neural Information Processing Systems, pp. 5998–6008 (2017)
Wang, W., et al.: StructBERT: incorporating language structures into pre-training for deep language understanding. arXiv preprint arXiv:1908.04577 (2019)
Dolan, W.B., Brockett, C.: Automatically constructing a corpus of sentential paraphrases. In: Proceedings of the Third International Workshop on Paraphrasing, pp. 9–16 (2005)
Wolf, F., Gibson, E., Fisher, A., Knight, M.: A procedure for collecting a database of texts annotated with coherence relations, MIT NE20-448 (2003)
Xie, Q., Dai, Z., Hovy, E., Luong, M.T., Le, Q.V.: Unsupervised data augmentation for consistency training. In: 34th Conference on Neural Information Processing Systems (2020)
Zhang, Y., Yang, Q.: A survey on multi-task learning. arXiv:1707.08114 (2018)
Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., Soricut, R.: ALBERT: a lite BERT for self-supervised learning of language representations. In: Proceedings of the 8th International Conference on Learning Representations (2020)
Acknowledgements
We are grateful to the anonymous reviewers for their comments and suggestions. This work was supported by a JSPS Grant-in-Aid, Grant Number 21K12026, by JKA through its promotion funds from KEIRIN RACE, and by the Artificial Intelligence Research Promotion Foundation.
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Xu, S., Fukumoto, F., Li, J., Suzuki, Y. (2021). Paraphrase Identification with Neural Elaboration Relation Learning. In: Mantoro, T., Lee, M., Ayu, M.A., Wong, K.W., Hidayanto, A.N. (eds) Neural Information Processing. ICONIP 2021. Lecture Notes in Computer Science, vol. 13111. Springer, Cham. https://doi.org/10.1007/978-3-030-92273-3_46
DOI: https://doi.org/10.1007/978-3-030-92273-3_46
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-92272-6
Online ISBN: 978-3-030-92273-3