
Authenticity Identification of Qi Baishi’s Shrimp Painting with Dynamic Token Enhanced Visual Transformer

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13443)

Abstract

Automatic recognition of the authenticity of Chinese ink paintings remains a challenging task, owing to the high similarity between genuine and forged paintings and the sparsity of discriminative information in such works. To address this challenge, we propose the Dynamic Token Enhancement Transformer (DETE) to improve the model’s ability to identify the authenticity of Qi Baishi’s shrimp paintings. The proposed DETE method consists of two key components: a dynamic patch creation (DPC) strategy and a dynamic token enhancement (DTE) module. The DPC strategy creates patches of different sizes according to their contributions, forcing the network to focus on important regions rather than uninformative ones. The DTE module gradually strengthens the association between the class token and the most impactful tokens, thereby improving overall performance. We collected a dataset for authenticity identification of Qi Baishi’s shrimp paintings and validated our method on it; the results show that our method outperforms state-of-the-art methods. We further validated our method on two publicly available painting classification datasets, WikiArt and ArtDL.
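The DTE idea described above — strengthening the class token with the tokens that matter most — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the module name, the top-k selection, and the mean-pooled enhancement are all assumptions made for clarity, using class-token attention weights as the "impact" score.

```python
import torch
import torch.nn as nn


class DynamicTokenEnhancement(nn.Module):
    """Hypothetical sketch of a token-enhancement step: pick the top-k
    patch tokens by their attention weight to the class token, then fold
    a projection of their mean back into the class token."""

    def __init__(self, dim: int, top_k: int = 8):
        super().__init__()
        self.top_k = top_k
        self.proj = nn.Linear(dim, dim)

    def forward(self, tokens: torch.Tensor, attn: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, D), with tokens[:, 0] the class token
        # attn:   (B, N), attention from the class token to every token
        cls_tok, patches = tokens[:, :1], tokens[:, 1:]
        weights = attn[:, 1:]                          # ignore class-to-class term
        idx = weights.topk(self.top_k, dim=1).indices  # (B, k) most impactful tokens
        picked = torch.gather(
            patches, 1, idx.unsqueeze(-1).expand(-1, -1, patches.size(-1))
        )                                              # (B, k, D)
        enhanced = cls_tok + self.proj(picked.mean(dim=1, keepdim=True))
        return torch.cat([enhanced, patches], dim=1)
```

Applied once per block (or every few blocks), a step like this would gradually bias the class token toward the discriminative regions that the DPC strategy carves into finer patches.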



Acknowledgments

This work was supported in part by grants from the National Natural Science Foundation of China (Nos. 61973221 and 62002232), the Natural Science Foundation of Guangdong Province of China (No. 2019A1515011165), and the Shenzhen Research Foundation for Basic Research, China (No. 20200824213635001).

Author information


Corresponding author

Correspondence to Fu Qi.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Chen, W., Huang, X., Liu, X., Wu, H., Qi, F. (2022). Authenticity Identification of Qi Baishi’s Shrimp Painting with Dynamic Token Enhanced Visual Transformer. In: Magnenat-Thalmann, N., et al. Advances in Computer Graphics. CGI 2022. Lecture Notes in Computer Science, vol 13443. Springer, Cham. https://doi.org/10.1007/978-3-031-23473-6_43

  • DOI: https://doi.org/10.1007/978-3-031-23473-6_43

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-23472-9

  • Online ISBN: 978-3-031-23473-6

  • eBook Packages: Computer Science (R0)
