Abstract
Image inpainting methods based on Generative Adversarial Networks (GANs) are powerful at producing visually realistic images and are widely used in image processing and computer vision, for example to recover damaged photos. However, inpainting can also be used maliciously to alter or delete content, e.g., removing key objects to fabricate fake news. Such inpainting-forged images can have serious adverse effects on society. Most existing inpainting forgery detection approaches rely on convolutional neural networks (CNNs), whose limited receptive fields prevent them from effectively modeling the global context of forged regions; they also underexploit the edge information of those regions, so forged-region boundaries are poorly preserved. To fight against inpainting forgeries, both deep learning (DL) based and traditional ones, we propose an edge-aware Transformer framework for image inpainting detection. To better extract and learn discriminative features, we design a two-stream Transformer that learns global body features and fake-edge features, respectively. A multi-modality cross-attention module then propagates information between the two streams interactively, which greatly improves the detection results. Extensive experiments demonstrate the superiority of our scheme over existing ones, and our method generalizes well to both DL-based and traditional inpainting.
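The bidirectional exchange performed by a multi-modality cross-attention module can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact design: the token shapes, single-head attention, and residual update are assumptions made for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values, d_k):
    # Tokens of one stream (queries) attend to tokens of the other stream
    # (keys_values) via scaled dot-product attention.
    scores = queries @ keys_values.T / np.sqrt(d_k)
    return softmax(scores) @ keys_values

# Hypothetical token matrices (num_tokens x dim) for the two streams:
# "body" tokens from the global-feature stream, "edge" tokens from the
# fake-edge stream.
rng = np.random.default_rng(0)
body = rng.standard_normal((16, 64))
edge = rng.standard_normal((16, 64))

# Each stream is updated with context gathered from the other stream,
# here via a simple residual connection.
body_updated = body + cross_attention(body, edge, d_k=64)
edge_updated = edge + cross_attention(edge, body, d_k=64)
```

In a full model, each direction would typically use learned query/key/value projections and multiple heads; the point here is only that information flows both ways between the body and edge streams before the final prediction.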
Acknowledgements
This work was supported in part by the Natural Science Foundation of China under Grant 62001304 and Grant 61871273.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Hu, L., Li, Y., You, J., Liang, R., Li, X. (2022). An Edge-Aware Transformer Framework for Image Inpainting Detection. In: Sun, X., Zhang, X., Xia, Z., Bertino, E. (eds) Artificial Intelligence and Security. ICAIS 2022. Lecture Notes in Computer Science, vol 13339. Springer, Cham. https://doi.org/10.1007/978-3-031-06788-4_53
DOI: https://doi.org/10.1007/978-3-031-06788-4_53
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-06787-7
Online ISBN: 978-3-031-06788-4
eBook Packages: Computer Science, Computer Science (R0)