
Recurrent Transformers for Long Document Understanding

  • Conference paper
  • In: Natural Language Processing and Chinese Computing (NLPCC 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14302)

Abstract

Pre-trained models have proven effective in natural language understanding. For long document understanding, the key challenges are long-range dependencies and inference efficiency. Existing approaches, however, (i) usually cannot fully model the contextual structure and global semantics within a long document, and (ii) lack consistent assessment on common downstream tasks. To address these issues, we propose Recurrent Transformers (RTrans), a novel model for long document understanding that not only learns long contextual structure and relationships but can also be extended to diverse downstream tasks. Specifically, our model introduces a recurrent transformer block that conveys token-level contextual information across segments and captures long-range dependencies. A ranking strategy aggregates the local and global information for the final prediction. Experiments on diverse tasks that require understanding long documents demonstrate the superior and robust performance of RTrans, and our approach achieves a better balance between effectiveness and efficiency.
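
The full method is behind the access wall here, but the mechanism the abstract describes (encode a long document segment by segment, let a recurrent transformer block carry context forward across segments, then aggregate local segment summaries with the global state for the final prediction) can be illustrated with a minimal sketch. The sketch below is an assumption-laden toy in PyTorch, not the authors' implementation: the module and parameter names (RecurrentBlock, LongDocClassifierSketch, hidden_size, seg_len), the GRU-cell memory update, the cross-attention fusion, and the softmax-weighted stand-in for the ranking strategy are all hypothetical; the official code is linked in the Notes below.

```python
# Toy sketch of segment-recurrent long-document encoding (illustrative only;
# all names and design choices here are assumptions, not the RTrans code).
import torch
import torch.nn as nn


class RecurrentBlock(nn.Module):
    """Fuse the current segment's token states with a memory carried over
    from earlier segments, then append an updated summary to that memory."""

    def __init__(self, hidden_size: int, num_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(hidden_size)
        self.gate = nn.GRUCell(hidden_size, hidden_size)  # recurrent update of a pooled summary

    def forward(self, seg_hidden, memory):
        # seg_hidden: (batch, seg_len, hidden)  token states of the current segment
        # memory:     (batch, mem_len, hidden)  states carried from earlier segments
        fused, _ = self.cross_attn(seg_hidden, memory, memory)
        seg_hidden = self.norm(seg_hidden + fused)
        pooled = seg_hidden.mean(dim=1)                   # summary of this segment
        new_mem = self.gate(pooled, memory.mean(dim=1))   # gated recurrent update
        memory = torch.cat([memory, new_mem.unsqueeze(1)], dim=1)
        return seg_hidden, memory


class LongDocClassifierSketch(nn.Module):
    def __init__(self, vocab_size=30522, hidden_size=256, num_labels=2, seg_len=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        enc_layer = nn.TransformerEncoderLayer(hidden_size, nhead=8, batch_first=True)
        self.segment_encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.recurrent = RecurrentBlock(hidden_size)
        self.classifier = nn.Linear(hidden_size, num_labels)
        self.seg_len = seg_len
        self.hidden_size = hidden_size

    def forward(self, input_ids):
        # input_ids: (batch, doc_len); split the long document into fixed-size segments
        batch = input_ids.size(0)
        memory = torch.zeros(batch, 1, self.hidden_size, device=input_ids.device)
        seg_summaries = []
        for seg in input_ids.split(self.seg_len, dim=1):
            seg_hidden = self.segment_encoder(self.embed(seg))
            seg_hidden, memory = self.recurrent(seg_hidden, memory)
            seg_summaries.append(seg_hidden.mean(dim=1))
        # Stand-in for the ranking/aggregation step: score each local segment
        # summary, softmax the scores, and combine the weighted local summaries
        # with the global recurrent memory before classifying.
        local = torch.stack(seg_summaries, dim=1)           # (batch, n_seg, hidden)
        scores = torch.softmax(local.norm(dim=-1), dim=1)   # (batch, n_seg)
        doc_repr = (scores.unsqueeze(-1) * local).sum(dim=1) + memory.mean(dim=1)
        return self.classifier(doc_repr)


# Toy usage: a batch of two 512-token "documents" split into four 128-token segments.
model = LongDocClassifierSketch()
logits = model(torch.randint(0, 30522, (2, 512)))
print(logits.shape)  # torch.Size([2, 2])
```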

Notes

  1. https://github.com/HAOChuzhan/RTrans.

Author information

Correspondence to Peng Zhang.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Hao, C., Zhang, P., Xie, M., Zhao, D. (2023). Recurrent Transformers for Long Document Understanding. In: Liu, F., Duan, N., Xu, Q., Hong, Y. (eds) Natural Language Processing and Chinese Computing. NLPCC 2023. Lecture Notes in Computer Science (LNAI), vol 14302. Springer, Cham. https://doi.org/10.1007/978-3-031-44693-1_5

  • DOI: https://doi.org/10.1007/978-3-031-44693-1_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-44692-4

  • Online ISBN: 978-3-031-44693-1

  • eBook Packages: Computer Science (R0)
