DOI: 10.1145/3643491.3660291
Research article

Improving Generalization in Deepfake Detection via Augmentation with Recurrent Adversarial Attacks

Published: 10 June 2024

Abstract

Counteracting deepfakes, and misinformation at large, is of great importance to society, especially at this moment in time. Deepfake detectors evolve at the same pace as deepfake generators, or even more slowly; moreover, they are trained on limited amounts of data and fail to generalize in most situations. The primary challenge in training deepfake detectors lies in the need for a large number of diverse generated samples originating from many distinct models, which is not easily attained. For that reason, this work leverages the one abundant resource at our disposal: real videos. This paper presents a novel training framework for deepfake detectors, aimed at improving generalization by continuously using adversarial attacks to generate, starting from real samples, new deepfakes that the detector may not yet recognize, while keeping the generated samples as realistic as possible. We then train the deepfake detector on the newly generated deepfakes alongside the original images, enhancing its ability to differentiate between them. We show that this training method improves generalization to unseen datasets without using any new data. Moreover, since this unsupervised method uses only real images, it is an easy-to-implement and adaptable way to improve generalization.
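The loop the abstract describes (attack real samples to synthesize pseudo-fakes, then train the detector on both) can be illustrated with a minimal numpy-only sketch. This is not the paper's implementation: the names `ToyDetector` and `fgsm_fake` are hypothetical, a logistic-regression model stands in for a real CNN detector, and the attack is a plain FGSM step (Goodfellow et al.) bounded by `eps` to keep samples close to the originals.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ToyDetector:
    """Logistic-regression stand-in for a deepfake detector (output 1 = fake)."""
    def __init__(self, dim):
        self.w = rng.normal(scale=0.1, size=dim)
        self.b = 0.0

    def predict(self, x):
        return sigmoid(x @ self.w + self.b)

    def grad_wrt_input(self, x, y):
        # dBCE/dx for one sample: (p - y) * w
        return (self.predict(x) - y) * self.w

    def train_step(self, X, Y, lr=0.1):
        # One gradient-descent step on binary cross-entropy over the batch.
        P = sigmoid(X @ self.w + self.b)
        self.w -= lr * (X.T @ (P - Y)) / len(Y)
        self.b -= lr * float(np.mean(P - Y))

def fgsm_fake(detector, x_real, eps=0.05):
    """Perturb a real sample in the direction that increases the detector's
    loss under the label 'real', producing a pseudo-fake. The perturbation
    is bounded by eps so the sample stays close to the original."""
    g = detector.grad_wrt_input(x_real, y=0.0)  # gradient for label "real" (0)
    return x_real + eps * np.sign(g)            # ascend the loss -> toward "fake"

# Recurrent attack/train loop: each iteration forges a fresh pseudo-fake
# from a real sample and trains on the (real, fake) pair.
dim = 16
det = ToyDetector(dim)
for _ in range(200):
    x_real = rng.normal(size=dim)
    x_fake = fgsm_fake(det, x_real)
    X = np.stack([x_real, x_fake])
    Y = np.array([0.0, 1.0])  # real = 0, adversarial pseudo-fake = 1
    det.train_step(X, Y)
```

Because the attack always targets the detector's current weights, the pseudo-fakes evolve as training progresses, which is the intuition behind the recurrent scheme: the detector never runs out of fresh hard negatives even though only real data is consumed.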


Cited By

  • (2024) MAD '24 Workshop: Multimedia AI against Disinformation. In Proceedings of the 2024 International Conference on Multimedia Retrieval. DOI: 10.1145/3652583.3660000, 1339–1341. Online publication date: 30 May 2024.

Published In

MAD '24: Proceedings of the 3rd ACM International Workshop on Multimedia AI against Disinformation
June 2024
107 pages
ISBN:9798400705526
DOI:10.1145/3643491

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. GAN
  2. adversarial attack
  3. autoencoder
  4. data augmentation
  5. deepfake
  6. digital video forensics
  7. face manipulation
  8. generalization

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • AI4Media, A European Excellence Centre for Media, Society and Democracy, H2020 ICT-48-2020

Conference

ICMR '24

