An Edge-Aware Transformer Framework for Image Inpainting Detection

  • Conference paper
  • First Online:
Artificial Intelligence and Security (ICAIS 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13339)

Abstract

Image inpainting methods based on Generative Adversarial Networks (GANs) are powerful tools for producing visually realistic images and are widely used in image processing and computer vision, for example to recover damaged photos. However, image inpainting may also be used maliciously to alter or delete content, e.g., removing key objects to fabricate fake news. Such inpainting-forged images can have serious adverse effects on society. Most existing inpainting forgery detection approaches rely on convolutional neural networks (CNNs), whose limited receptive fields and underuse of the edge information of forged regions prevent them from effectively modeling the global context of forged regions and from preserving their edges. To combat inpainting forgeries, both deep learning (DL) based and traditional ones, we propose in this work an edge-aware Transformer framework for image inpainting detection. To better extract and learn discriminative features, we design a two-stream Transformer that learns the global body features and the fake edge features, respectively. Furthermore, a multi-modality cross-attention module propagates information between the two streams interactively, greatly improving the detection results. Extensive experiments demonstrate the superiority of our scheme over existing ones, and our method exhibits desirable detection generalizability for both DL-based and traditional inpainting.
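The chapter itself is paywalled, but the multi-modality cross-attention module described in the abstract is, at its core, scaled dot-product attention in which one stream's tokens act as queries while the other stream supplies keys and values. The sketch below illustrates that generic operation in plain Python; all names are ours, and it is an illustration of cross attention in general, not the authors' implementation:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_attention(queries, keys, values):
    """Scaled dot-product cross attention.

    `queries` come from one stream (e.g. body features) and attend over
    `keys`/`values` from the other stream (e.g. edge features), so each
    output row is a convex combination of the other stream's values.
    """
    d = len(keys[0])  # key dimensionality, used for score scaling
    out = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Weighted sum of value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```

In a two-stream design such as the one the abstract describes, body-stream tokens would attend over edge-stream tokens (and vice versa) so that the two feature sets exchange information interactively.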



Acknowledgements

This work was supported in part by the Natural Science Foundation of China under Grant 62001304 and Grant 61871273.

Author information

Corresponding author

Correspondence to Yuanman Li.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Hu, L., Li, Y., You, J., Liang, R., Li, X. (2022). An Edge-Aware Transformer Framework for Image Inpainting Detection. In: Sun, X., Zhang, X., Xia, Z., Bertino, E. (eds) Artificial Intelligence and Security. ICAIS 2022. Lecture Notes in Computer Science, vol 13339. Springer, Cham. https://doi.org/10.1007/978-3-031-06788-4_53

  • DOI: https://doi.org/10.1007/978-3-031-06788-4_53

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-06787-7

  • Online ISBN: 978-3-031-06788-4
