Abstract
The use of Transformer-based architectures has expanded in recent years, reaching state-of-the-art (SOTA) performance in numerous tasks in the field of Natural Language Processing (NLP). However, despite the advantages of this architecture, it has drawbacks, such as the large number of parameters it requires, which can make it expensive for research teams with limited resources. New variants have emerged with the purpose of improving the efficiency of the Transformer architecture, each addressing different aspects of it. In this paper we focus on the development of a new architecture that reduces the memory consumption of the Transformer while achieving SOTA results on two different datasets [14] for the Neural Machine Translation (NMT) task.
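To illustrate the general idea of cutting the memory footprint of self-attention (the quadratic attention matrix is the dominant cost), the sketch below shows one common approach: compressing the key/value sequence with a strided 1D convolution before attention, so the score matrix shrinks from L x L to L x (L/stride). This is a minimal, hypothetical sketch in PyTorch inspired by the title's use of convolutional layers; the module name, the stride parameter, and the placement of the convolution are illustrative assumptions, not the exact Informer architecture described in the paper.

```python
import torch
import torch.nn as nn


class ConvDownsampledAttention(nn.Module):
    """Illustrative self-attention whose keys and values are shortened with a
    strided 1D convolution, reducing the attention matrix from L x L to
    L x (L // stride). A generic sketch, not the paper's exact design."""

    def __init__(self, d_model: int, n_heads: int, stride: int = 4):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(d_model, d_model)
        # Strided convolution that compresses the sequence length of K and V.
        self.kv_conv = nn.Conv1d(d_model, d_model, kernel_size=stride, stride=stride)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        b, seq_len, d = x.shape
        q = self.q_proj(x)                                      # (b, L, d)
        kv = self.kv_conv(x.transpose(1, 2)).transpose(1, 2)    # (b, L // stride, d)
        k, v = self.k_proj(kv), self.v_proj(kv)

        def split_heads(t: torch.Tensor) -> torch.Tensor:
            return t.view(b, -1, self.n_heads, self.d_head).transpose(1, 2)

        q, k, v = split_heads(q), split_heads(k), split_heads(v)
        scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5   # (b, h, L, L // stride)
        attn = scores.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, seq_len, d)
        return self.out_proj(out)


if __name__ == "__main__":
    layer = ConvDownsampledAttention(d_model=512, n_heads=8, stride=4)
    x = torch.randn(2, 128, 512)
    print(layer(x).shape)  # torch.Size([2, 128, 512])
```

With a stride of 4 the attention score tensor holds L * L/4 entries per head instead of L * L, which is the kind of memory saving this family of efficient-Transformer variants targets.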
References
Ainslie, J., et al.: ETC: encoding long and structured inputs in transformers (2020)
Beltagy, I., Peters, M.E., Cohan, A.: Longformer: The long-document transformer (2020)
Brown, T.B., et al.: Language models are few-shot learners (2020)
Child, R., Gray, S., Radford, A., Sutskever, I.: Generating long sequences with sparse transformers (2019)
Choromanski, K., et al.: Masked language modeling for proteins via linearly scalable long-context transformers (2020)
Dai, Z., Yang, Z., Yang, Y., Carbonell, J., Le, Q.V., Salakhutdinov, R.: Transformer-XL: attentive language models beyond a fixed-length context (2019)
Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, (Long and Short Papers), vol. 1, pp. 4171–4186. Association for Computational Linguistics, Minneapolis, June 2019. https://doi.org/10.18653/v1/N19-1423, https://www.aclweb.org/anthology/N19-1423
Ho, J., Kalchbrenner, N., Weissenborn, D., Salimans, T.: Axial attention in multidimensional transformers (2019)
Katharopoulos, A., Vyas, A., Pappas, N., Fleuret, F.: Transformers are RNNs: fast autoregressive transformers with linear attention (2020)
Kitaev, N., Kaiser, Ł., Levskaya, A.: Reformer: the efficient transformer (2020)
Lee, J., Lee, Y., Kim, J., Kosiorek, A.R., Choi, S., Teh, Y.W.: Set transformer: a framework for attention-based permutation-invariant neural networks (2019)
Lepikhin, D., et al.: GShard: scaling giant models with conditional computation and automatic sharding (2020)
Liu, P.J., et al.: Generating wikipedia by summarizing long sequences (2018)
Luong, M.T., Manning, C.D.: Stanford neural machine translation systems for spoken language domains. In: Proceedings of the International Workshop on Spoken Language Translation (IWSLT) (2015)
Ojeda, C., Artal, C., Tejera, F.: Informer, an information organization transformer architecture. In: Proceedings of the 13th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART, pp. 381–389. INSTICC, SciTePress (2021). https://doi.org/10.5220/0010372703810389
Parmar, N., et al.: Image transformer (2018)
Rae, J.W., Potapenko, A., Jayakumar, S.M., Hillier, C., Lillicrap, T.P.: Compressive transformers for long-range sequence modelling. In: International Conference on Learning Representations (2020). https://openreview.net/forum?id=SylKikSYDH
Roy, A., Saffar, M., Vaswani, A., Grangier, D.: Efficient content-based sparse attention with routing transformers (2020)
Tay, Y., Bahri, D., Metzler, D., Juan, D.C., Zhao, Z., Zheng, C.: Synthesizer: rethinking self-attention in transformer models (2020)
Tay, Y., Bahri, D., Yang, L., Metzler, D., Juan, D.C.: Sparse sinkhorn attention (2020)
Tay, Y., Dehghani, M., Bahri, D., Metzler, D.: Efficient transformers: a survey (2020)
Vaswani, A., et al.: Attention is all you need. In: Advances in Neural Information Processing Systems, vol. 30 (2017)
Wang, D., Gong, C., Liu, Q.: Improving neural language modeling via adversarial training (2019)
Wang, S., Li, B.Z., Khabsa, M., Fang, H., Ma, H.: Linformer: self-attention with linear complexity (2020)
Zaheer, M., et al.: Big bird: transformers for longer sequences (2020)
Acknowledgements
We are grateful to the University of Las Palmas de Gran Canaria, which has partially funded this work through the ULPGC COVID 19-09 project.
Copyright information
© 2022 Springer Nature Switzerland AG
Cite this paper
Estupiñán-Ojeda, C., Guerra-Artal, C., Hernández-Tejera, M. (2022). Informer: An Efficient Transformer Architecture Using Convolutional Layers. In: Rocha, A.P., Steels, L., van den Herik, J. (eds) Agents and Artificial Intelligence. ICAART 2021. Lecture Notes in Computer Science(), vol 13251. Springer, Cham. https://doi.org/10.1007/978-3-031-10161-8_11
DOI: https://doi.org/10.1007/978-3-031-10161-8_11
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-10160-1
Online ISBN: 978-3-031-10161-8