
Informer: An Efficient Transformer Architecture Using Convolutional Layers

  • Conference paper
Agents and Artificial Intelligence (ICAART 2021)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13251)


Abstract

Transformer-based architectures have seen rapidly growing adoption in recent years, reaching state-of-the-art (SOTA) performance on numerous tasks in the field of Natural Language Processing (NLP). However, despite its advantages, the architecture also has drawbacks, most notably the large number of parameters it requires, which can make it expensive to use for research teams with limited resources. New variants have therefore emerged that improve the efficiency of the Transformer architecture by addressing different aspects of it. In this paper we focus on the development of a new architecture that reduces the memory consumption of the Transformer while achieving SOTA results on two different datasets [14] for the Neural Machine Translation (NMT) task.
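To make the memory argument above concrete, the following minimal PyTorch sketch (our own illustration, not the architecture proposed in the paper; the layer sizes and the depthwise/pointwise convolutional design are assumptions) contrasts the quadratic-size attention map produced by standard self-attention with a convolutional mixing block whose activations grow only linearly with sequence length.

```python
import torch
import torch.nn as nn

d_model, seq_len, batch = 512, 1024, 8   # arbitrary illustrative sizes
x = torch.randn(batch, seq_len, d_model)

# Standard scaled dot-product self-attention materialises a (seq_len x seq_len)
# attention map, so activation memory grows quadratically with sequence length.
attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=8, batch_first=True)
out_attn, attn_weights = attn(x, x, x)   # attn_weights: (batch, seq_len, seq_len)

# A 1D convolutional mixing block (depthwise + pointwise, a generic choice, not
# the paper's design) only keeps activations of size (batch, d_model, seq_len),
# i.e. linear in sequence length.
conv_block = nn.Sequential(
    nn.Conv1d(d_model, d_model, kernel_size=3, padding=1, groups=d_model),
    nn.Conv1d(d_model, d_model, kernel_size=1),
    nn.ReLU(),
)
out_conv = conv_block(x.transpose(1, 2)).transpose(1, 2)

print(attn_weights.shape)  # torch.Size([8, 1024, 1024])
print(out_conv.shape)      # torch.Size([8, 1024, 512])
```

For a 1,024-token sequence the attention map alone holds 1,024 × 1,024 entries per sequence, whereas the convolutional block stores 512 × 1,024 activations; this is the kind of memory overhead that efficient Transformer variants aim to reduce.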


References

  1. Ainslie, J., et al.: ETC: encoding long and structured inputs in transformers (2020)

  2. Beltagy, I., Peters, M.E., Cohan, A.: Longformer: the long-document transformer (2020)

  3. Brown, T.B., et al.: Language models are few-shot learners (2020)

  4. Child, R., Gray, S., Radford, A., Sutskever, I.: Generating long sequences with sparse transformers (2019)

  5. Choromanski, K., et al.: Masked language modeling for proteins via linearly scalable long-context transformers (2020)

  6. Dai, Z., Yang, Z., Yang, Y., Carbonell, J., Le, Q.V., Salakhutdinov, R.: Transformer-XL: attentive language models beyond a fixed-length context (2019)

  7. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Long and Short Papers), vol. 1, pp. 4171–4186. Association for Computational Linguistics, Minneapolis, June 2019. https://doi.org/10.18653/v1/N19-1423, https://www.aclweb.org/anthology/N19-1423

  8. Ho, J., Kalchbrenner, N., Weissenborn, D., Salimans, T.: Axial attention in multidimensional transformers (2019)

  9. Katharopoulos, A., Vyas, A., Pappas, N., Fleuret, F.: Transformers are RNNs: fast autoregressive transformers with linear attention (2020)

  10. Kitaev, N., Kaiser, Ł., Levskaya, A.: Reformer: the efficient transformer (2020)

  11. Lee, J., Lee, Y., Kim, J., Kosiorek, A.R., Choi, S., Teh, Y.W.: Set transformer: a framework for attention-based permutation-invariant neural networks (2019)

  12. Lepikhin, D., et al.: GShard: scaling giant models with conditional computation and automatic sharding (2020)

  13. Liu, P.J., et al.: Generating Wikipedia by summarizing long sequences (2018)

  14. Luong, M.T., Manning, C.D.: Stanford neural machine translation systems for spoken language domains. In: Proceedings of the International Workshop on Spoken Language Translation (IWSLT) (2015)

  15. Ojeda, C., Artal, C., Tejera, F.: Informer, an information organization transformer architecture. In: Proceedings of the 13th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART, pp. 381–389. INSTICC, SciTePress (2021). https://doi.org/10.5220/0010372703810389

  16. Parmar, N., et al.: Image transformer (2018)

  17. Rae, J.W., Potapenko, A., Jayakumar, S.M., Hillier, C., Lillicrap, T.P.: Compressive transformers for long-range sequence modelling. In: International Conference on Learning Representations (2020). https://openreview.net/forum?id=SylKikSYDH

  18. Roy, A., Saffar, M., Vaswani, A., Grangier, D.: Efficient content-based sparse attention with routing transformers (2020)

  19. Tay, Y., Bahri, D., Metzler, D., Juan, D.C., Zhao, Z., Zheng, C.: Synthesizer: rethinking self-attention in transformer models (2020)

  20. Tay, Y., Bahri, D., Yang, L., Metzler, D., Juan, D.C.: Sparse Sinkhorn attention (2020)

  21. Tay, Y., Dehghani, M., Bahri, D., Metzler, D.: Efficient transformers: a survey (2020)

  22. Vaswani, A., et al.: Attention is all you need. arXiv (2017)

  23. Wang, D., Gong, C., Liu, Q.: Improving neural language modeling via adversarial training (2019)

  24. Wang, S., Li, B.Z., Khabsa, M., Fang, H., Ma, H.: Linformer: self-attention with linear complexity (2020)

  25. Zaheer, M., et al.: Big Bird: transformers for longer sequences (2020)


Acknowledgements

We are grateful to the University of Las Palmas de Gran Canaria, which partially funded this work through the ULPGC COVID 19-09 project.

Author information


Correspondence to Cristian Estupiñán-Ojeda, Cayetano Guerra-Artal or Mario Hernández-Tejera.



Copyright information

© 2022 Springer Nature Switzerland AG

About this paper


Cite this paper

Estupiñán-Ojeda, C., Guerra-Artal, C., Hernández-Tejera, M. (2022). Informer: An Efficient Transformer Architecture Using Convolutional Layers. In: Rocha, A.P., Steels, L., van den Herik, J. (eds) Agents and Artificial Intelligence. ICAART 2021. Lecture Notes in Computer Science (LNAI), vol. 13251. Springer, Cham. https://doi.org/10.1007/978-3-031-10161-8_11


  • DOI: https://doi.org/10.1007/978-3-031-10161-8_11


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-10160-1

  • Online ISBN: 978-3-031-10161-8

  • eBook Packages: Computer Science; Computer Science (R0)
