Visual Transformer-Based Models: A Survey

  • Conference paper
Pattern Recognition and Artificial Intelligence (ICPRAI 2022)

Abstract

Since the Transformer was first proposed by Vaswani et al. [1] in 2017, it has revolutionized natural language processing (NLP) and become the dominant approach in the field, achieving remarkable results. The Transformer was first applied to computer vision in 2020 with the Vision Transformer (ViT) proposed by Dosovitskiy et al., which achieved state-of-the-art performance on image classification tasks at the time. Over the past two years, the proliferation of Transformers in computer vision has demonstrated their effectiveness and delivered breakthroughs across a variety of tasks, including image classification, object detection, segmentation, and low-level image tasks. In this paper, we review Transformer-based models that improve on ViT, as well as Transformer backbones suitable for all kinds of image-level tasks, analyzing their improvement mechanisms, strengths, and weaknesses. We also briefly introduce effective improvements to the self-attention mechanism. Finally, we put forward some prospects for future development based on the above Transformer-based models.
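For readers unfamiliar with the two building blocks this survey revolves around, the sketch below illustrates how ViT [7] embeds an image as a sequence of patch tokens and applies the scaled dot-product self-attention of Vaswani et al. [1]. It is our own minimal illustration, not code from the paper; the PyTorch module, layer sizes, and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MiniViTBlock(nn.Module):
    """Toy ViT-style block: patch embedding + one global self-attention layer."""

    def __init__(self, img_size=224, patch=16, dim=192, heads=3):
        super().__init__()
        # Split the image into non-overlapping 16x16 patches and linearly
        # project each patch to a `dim`-dimensional token (a strided conv
        # is the standard way to implement this).
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        n_tokens = (img_size // patch) ** 2 + 1            # patches + [CLS]
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))    # classification token
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, dim))  # learned positions
        # Multi-head self-attention, softmax(Q K^T / sqrt(d)) V, so every
        # token attends to every other token in the image.
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                                  # x: (B, 3, H, W)
        t = self.embed(x).flatten(2).transpose(1, 2)       # (B, N, dim) patch tokens
        t = torch.cat([self.cls.expand(x.size(0), -1, -1), t], dim=1) + self.pos
        h = self.norm(t)
        out, _ = self.attn(h, h, h)                        # global self-attention
        return t + out                                     # residual connection

tokens = MiniViTBlock()(torch.randn(2, 3, 224, 224))
print(tokens.shape)   # torch.Size([2, 197, 192]) -- 196 patches + 1 [CLS]
```

The quadratic cost of this global attention in the number of tokens is precisely what many of the surveyed backbones (e.g., Swin [25], PVT [23]) try to reduce with windowed or pyramid designs.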

References

  1. Vaswani, A., et al.: Attention is all you need. In: Advances in Neural Information Processing Systems, pp. 5998–6008 (2017)

  2. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding (2018). arXiv preprint arXiv:1810.04805

  3. Brown, T.B., et al.: Language models are few-shot learners (2020). arXiv preprint arXiv:2005.14165

  4. Liu, Y., et al.: RoBERTa: a robustly optimized BERT pretraining approach (2019). arXiv preprint arXiv:1907.11692

  5. Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., Soricut, R.: ALBERT: a lite BERT for self-supervised learning of language representations (2019). arXiv preprint arXiv:1909.11942

  6. Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: XLNet: generalized autoregressive pretraining for language understanding. In: Advances in Neural Information Processing Systems, vol. 32 (2019)

  7. Dosovitskiy, A., et al.: An image is worth 16x16 words: transformers for image recognition at scale (2020). arXiv preprint arXiv:2010.11929

  8. Posner, M.I.: Attention: the mechanisms of consciousness. Proc. Natl. Acad. Sci. 91(16), 7398–7403 (1994)

  9. Mnih, V., Heess, N., Graves, A.: Recurrent models of visual attention. In: Advances in Neural Information Processing Systems, pp. 2204–2212 (2014)

  10. Bahdanau, D., Cho, K., Bengio, Y.: Neural machine translation by jointly learning to align and translate (2014). arXiv preprint arXiv:1409.0473

  11. Zheng, G., Mukherjee, S., Dong, X.L., Li, F.: OpenTag: open attribute value extraction from product profiles. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1049–1058, July 2018

  12. Yang, Z., Yang, D., Dyer, C., He, X., Smola, A., Hovy, E.: Hierarchical attention networks for document classification. In: Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 1480–1489, June 2016

  13. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)

  14. Sun, C., Shrivastava, A., Singh, S., Gupta, A.: Revisiting unreasonable effectiveness of data in deep learning era. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 843–852 (2017)

  15. Touvron, H., Cord, M., Douze, M., Massa, F., Sablayrolles, A., Jégou, H.: Training data-efficient image transformers & distillation through attention. In: International Conference on Machine Learning, pp. 10347–10357. PMLR, July 2021

  16. Wu, H., et al.: CvT: introducing convolutions to vision transformers (2021). arXiv preprint arXiv:2103.15808

  17. Chu, X., Zhang, B., Tian, Z., Wei, X., Xia, H.: Do we really need explicit position encodings for vision transformers? arXiv e-prints, arXiv-2102 (2021)

  18. Zhang, Q., Yang, Y.: ResT: an efficient transformer for visual recognition (2021). arXiv preprint arXiv:2105.13677

  19. Han, K., Xiao, A., Wu, E., Guo, J., Xu, C., Wang, Y.: Transformer in transformer (2021). arXiv preprint arXiv:2103.00112

  20. Chen, C.F., Fan, Q., Panda, R.: CrossViT: cross-attention multi-scale vision transformer for image classification (2021). arXiv preprint arXiv:2103.14899

  21. Li, Y., Zhang, K., Cao, J., Timofte, R., Van Gool, L.: LocalViT: bringing locality to vision transformers (2021). arXiv preprint arXiv:2104.05707

  22. Heo, B., Yun, S., Han, D., Chun, S., Choe, J., Oh, S.J.: Rethinking spatial dimensions of vision transformers (2021). arXiv preprint arXiv:2103.16302

  23. Wang, W., et al.: Pyramid vision transformer: a versatile backbone for dense prediction without convolutions. arXiv e-prints, arXiv-2102 (2021)

  24. Wang, W., et al.: PVTv2: improved baselines with pyramid vision transformer (2021). arXiv preprint arXiv:2106.13797

  25. Liu, Z., et al.: Swin transformer: hierarchical vision transformer using shifted windows (2021). arXiv preprint arXiv:2103.14030

  26. Dong, X., et al.: CSWin transformer: a general vision transformer backbone with cross-shaped windows (2021). arXiv preprint arXiv:2107.00652

  27. Yang, J., et al.: Focal self-attention for local-global interactions in vision transformers (2021). arXiv preprint arXiv:2107.00641

  28. Huang, Z., Ben, Y., Luo, G., Cheng, P., Yu, G., Fu, B.: Shuffle transformer: rethinking spatial shuffle for vision transformer (2021). arXiv preprint arXiv:2106.03650

  29. Chu, X., et al.: Twins: revisiting the design of spatial attention in vision transformers. In: Thirty-Fifth Conference on Neural Information Processing Systems, May 2021

  30. Lin, T.Y., Goyal, P., Girshick, R., He, K., Dollár, P.: Focal loss for dense object detection. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2980–2988 (2017)

  31. He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask R-CNN. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2961–2969 (2017)

  32. Yuan, L., et al.: Tokens-to-token ViT: training vision transformers from scratch on ImageNet (2021). arXiv preprint arXiv:2101.11986

  33. Zhang, X., Zhou, X., Lin, M., Sun, J.: ShuffleNet: an extremely efficient convolutional neural network for mobile devices. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6848–6856 (2018)

Acknowledgments

This work was supported by the Guangdong Province Key Laboratory of Computational Science at Sun Yat-sen University (2020B1212060032) and the National Natural Science Foundation of China (Grant nos. 11971491 and 11471012).

Author information

Corresponding author

Correspondence to Jun Tan.

Copyright information

© 2022 Springer Nature Switzerland AG

About this paper

Cite this paper

Huang, X., Bi, N., Tan, J. (2022). Visual Transformer-Based Models: A Survey. In: El Yacoubi, M., Granger, E., Yuen, P.C., Pal, U., Vincent, N. (eds) Pattern Recognition and Artificial Intelligence. ICPRAI 2022. Lecture Notes in Computer Science, vol 13364. Springer, Cham. https://doi.org/10.1007/978-3-031-09282-4_25

  • DOI: https://doi.org/10.1007/978-3-031-09282-4_25

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-09281-7

  • Online ISBN: 978-3-031-09282-4

  • eBook Packages: Computer Science, Computer Science (R0)
