Abstract
Pre-trained models have proven effective for natural language understanding. For long document understanding, the key challenges are long-range dependency and inference efficiency. Existing approaches, however, (i) usually cannot fully model the contextual structure and global semantics within a long document, and (ii) lack consistent evaluation on common downstream tasks. To address these issues, we propose Recurrent Transformers (RTrans), a novel model for long document understanding that not only learns long contextual structure and relationships but can also be extended to diverse downstream tasks. Specifically, our model introduces a recurrent transformer block to convey token-level contextual information across segments and capture long-range dependencies. A ranking strategy is used to aggregate local and global information for the final prediction. Experiments on diverse tasks that require understanding long documents demonstrate the superior and robust performance of RTrans, and our approach achieves a better balance between effectiveness and efficiency.
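The abstract only outlines the mechanism, so the following is a minimal sketch (PyTorch) of the general idea: a long document is split into segments, each segment is encoded by a transformer block that also attends to a recurrent memory carried over from earlier segments, and local (per-segment) and global scores are combined for the final prediction. The module names (RecurrentTransformerBlock, RTransSketch), dimensions, and the simple averaging used in place of the paper's ranking strategy are illustrative assumptions, not the authors' exact architecture.

# Sketch only: segment-wise encoding with a recurrent memory, then local/global
# aggregation. Names, sizes, and the aggregation rule are assumptions for illustration.
import torch
import torch.nn as nn

class RecurrentTransformerBlock(nn.Module):
    """One transformer encoder layer that also attends to a recurrent memory state."""
    def __init__(self, d_model: int = 256, n_heads: int = 4, mem_len: int = 32):
        super().__init__()
        self.layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)
        self.mem_len = mem_len

    def forward(self, segment: torch.Tensor, memory: torch.Tensor):
        # Prepend the memory tokens so this segment can attend to earlier context.
        x = torch.cat([memory, segment], dim=1)           # (B, mem_len + seg_len, d)
        h = self.layer(x)
        new_memory = h[:, -self.mem_len:, :].detach()     # carry the most recent states forward
        return h[:, memory.size(1):, :], new_memory       # segment states, updated memory

class RTransSketch(nn.Module):
    """Hypothetical end-to-end sketch: segment-wise encoding plus local/global aggregation."""
    def __init__(self, vocab_size: int = 30522, d_model: int = 256,
                 mem_len: int = 32, num_classes: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.block = RecurrentTransformerBlock(d_model, mem_len=mem_len)
        self.classifier = nn.Linear(d_model, num_classes)
        self.mem_len = mem_len
        self.d_model = d_model

    def forward(self, segments):
        batch = segments[0].size(0)
        memory = torch.zeros(batch, self.mem_len, self.d_model)
        local_logits = []
        for seg_ids in segments:                          # iterate over document segments
            seg_states, memory = self.block(self.embed(seg_ids), memory)
            local_logits.append(self.classifier(seg_states.mean(dim=1)))
        # Stand-in for the ranking-based aggregation: average the local scores and
        # add the score of the final (global) memory state.
        global_logits = self.classifier(memory.mean(dim=1))
        return torch.stack(local_logits).mean(dim=0) + global_logits

# Usage: a two-segment "document" of random token ids, 128 tokens per segment.
model = RTransSketch()
doc = [torch.randint(0, 30522, (1, 128)) for _ in range(2)]
print(model(doc).shape)  # torch.Size([1, 2])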
Cite this paper
Hao, C., Zhang, P., Xie, M., Zhao, D.: Recurrent Transformers for Long Document Understanding. In: Liu, F., Duan, N., Xu, Q., Hong, Y. (eds.) Natural Language Processing and Chinese Computing (NLPCC 2023). Lecture Notes in Computer Science, vol. 14302. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-44693-1_5