A Dual-channel Augmented Attentive Dense-convolutional Network for power image splicing tamper detection

  • Original Article
  • Published in Neural Computing and Applications

Abstract

Tampered power images pose security risks to the safe operation of power grids, and splicing is the most common form of tampering. Although image tampering detection has received much attention in recent years, relatively little work addresses its practical application to power systems, and detection results remain poor because the subtle edge features of tampered regions are difficult to learn and no power image tampering dataset exists for training. To detect tampered regions effectively, this paper proposes a splicing tampering detection model built on a Dual-channel Augmented Attentive Dense-convolutional Network (DAAD-Net). The model consists of three main parts: backbone network feature extraction, augmented attention feature extraction, and tampered region detection. First, the backbone feature extraction module fuses features of the original tampered image with its residual features and feeds them into the backbone network to produce feature maps. Second, the augmented attention feature extraction module extracts tampered-region features at both higher and lower layers through hierarchical encoding and decoding operations. Finally, the feature maps extracted at each layer are sent to the tampered region detection module, and the losses computed on these maps are combined to optimize the network parameters. In addition, we built a power image tampering dataset containing 552 samples. Experiments show that the proposed method outperforms current state-of-the-art models, improving the evaluation metrics by 1% to 31%, and is robust to noise and JPEG compression attacks.
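
To make the pipeline described above concrete, the following minimal PyTorch sketch illustrates the two ideas the abstract highlights: fusing original-image features with noise-residual features before the backbone, and combining the losses of tampering maps predicted at several feature levels. This is an illustrative sketch, not the authors' implementation; the single SRM-like high-pass kernel, the layer widths, and the names ResidualExtractor, DualChannelStem, and multi_level_loss are assumptions introduced here for clarity.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualExtractor(nn.Module):
    """Fixed high-pass filtering that exposes splicing noise residuals (assumed kernel)."""

    def __init__(self):
        super().__init__()
        # One SRM-like 3x3 high-pass kernel, applied to each RGB channel separately.
        kernel = torch.tensor([[-1.,  2., -1.],
                               [ 2., -4.,  2.],
                               [-1.,  2., -1.]]) / 4.0
        self.register_buffer("kernel", kernel.view(1, 1, 3, 3).repeat(3, 1, 1, 1))

    def forward(self, x):                      # x: (B, 3, H, W)
        return F.conv2d(x, self.kernel, padding=1, groups=3)


class DualChannelStem(nn.Module):
    """Fuse original-image features with residual features before the backbone."""

    def __init__(self, out_channels=64):
        super().__init__()
        self.residual = ResidualExtractor()
        self.rgb_conv = nn.Conv2d(3, out_channels, 3, padding=1)
        self.res_conv = nn.Conv2d(3, out_channels, 3, padding=1)
        self.fuse = nn.Conv2d(2 * out_channels, out_channels, 1)

    def forward(self, x):
        f_rgb = F.relu(self.rgb_conv(x))
        f_res = F.relu(self.res_conv(self.residual(x)))
        return self.fuse(torch.cat([f_rgb, f_res], dim=1))


def multi_level_loss(level_logits, mask):
    """Sum of BCE losses over tampering maps predicted at several feature levels."""
    total = 0.0
    for logits in level_logits:                # each: (B, 1, h_i, w_i)
        target = F.interpolate(mask, size=logits.shape[-2:], mode="nearest")
        total = total + F.binary_cross_entropy_with_logits(logits, target)
    return total


# Example usage with random tensors standing in for a batch of 256x256 images
# and two hypothetical prediction levels.
stem = DualChannelStem()
images = torch.rand(2, 3, 256, 256)
mask = (torch.rand(2, 1, 256, 256) > 0.5).float()
fused = stem(images)                           # (2, 64, 256, 256)
levels = [torch.randn(2, 1, 64, 64), torch.randn(2, 1, 128, 128)]
loss = multi_level_loss(levels, mask)

In the full model, the stem's plain convolutions would be replaced by the dense backbone and the augmented attention encoder-decoder described in the paper; only the dual-channel fusion and multi-level supervision pattern is sketched here.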

Data Availability

The data supporting the findings of this work are available from the corresponding author on reasonable request.

Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 61772327) and the State Grid Gansu Electric Power Company (No. H2019-275).

Author information

Corresponding author

Correspondence to Xiuxia Tian.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Xing, J., Tian, X. & Han, Y. A Dual-channel Augmented Attentive Dense-convolutional Network for power image splicing tamper detection. Neural Comput & Applic 36, 8301–8316 (2024). https://doi.org/10.1007/s00521-024-09511-6
