Image-to-Image Translation Generative Adversarial Networks for Video Source Camera Falsification

Conference paper: Digital Forensics and Cyber Crime (ICDF2C 2022)

Abstract

The emerging use of multimedia devices has led to a surge in criminal cases requiring digital forensics investigations. This necessitates the development of accurate digital forensic techniques that verify not only the integrity of the data but also its origin source. To this end, machine and deep learning techniques are widely employed within forensics tools. Nevertheless, while these techniques have become an efficient tool for forensic investigators, they have also provided attackers with novel methods for data and source falsification. In this paper, we propose a simple and effective anti-forensics attack that uses generative adversarial networks (GANs) to compromise a video's camera source traces. In our approach, we adopt popular image-to-image translation GANs to fool existing algorithms for video source camera identification. Our experimental results demonstrate that the proposed attack successfully compromises existing forensic methods with 100% probability for non-flat videos while producing high-quality content. The results indicate the need for video source camera identification approaches that are resilient to such attacks.
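The attack described above hinges on unpaired image-to-image translation: a generator re-styles frames from the source camera's domain toward a target camera's domain, while a cycle-consistency term keeps the visual content intact. The following is a minimal, illustrative sketch of that cycle-consistency objective only; the generators `g_ab` and `g_ba` here are hypothetical stand-in pixel maps, not the trained networks used in the paper.

```python
# Toy illustration of the cycle-consistency objective behind
# CycleGAN-style unpaired image-to-image translation, applied per frame.
# g_ab / g_ba are hypothetical placeholder mappings, not real generators.

def g_ab(frame):
    # stand-in "generator" A -> B: nudges pixels toward camera B's style
    return [min(255, int(p * 1.1)) for p in frame]

def g_ba(frame):
    # stand-in "generator" B -> A: approximate inverse mapping
    return [int(p / 1.1) for p in frame]

def cycle_consistency_l1(frame):
    # mean per-pixel L1 distance || G_BA(G_AB(x)) - x ||_1;
    # training drives this toward zero so content survives translation
    recon = g_ba(g_ab(frame))
    return sum(abs(r - p) for r, p in zip(recon, frame)) / len(frame)

frame = [10, 128, 200, 55]  # a flattened toy "frame" of pixel values
print(cycle_consistency_l1(frame))
```

In the full attack, such a generator is applied frame by frame so that camera-model traces are replaced while the reconstruction stays visually faithful, which is what lets the falsified video fool source-identification classifiers.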



Acknowledgements

Research was sponsored by the Army Research Office and was accomplished under Grant Number W911NF-21-1-0264. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.

Author information

Correspondence to Maryna Veksler.

Copyright information

© 2023 ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering

About this paper

Cite this paper

Veksler, M., Caspard, C., Akkaya, K. (2023). Image-to-Image Translation Generative Adversarial Networks for Video Source Camera Falsification. In: Goel, S., Gladyshev, P., Nikolay, A., Markowsky, G., Johnson, D. (eds) Digital Forensics and Cyber Crime. ICDF2C 2022. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 508. Springer, Cham. https://doi.org/10.1007/978-3-031-36574-4_1

  • DOI: https://doi.org/10.1007/978-3-031-36574-4_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-36573-7

  • Online ISBN: 978-3-031-36574-4

  • eBook Packages: Computer Science
