DOI: 10.1145/3474085.3475518

FakeTagger: Robust Safeguards against DeepFake Dissemination via Provenance Tracking

Published: 17 October 2021

Abstract

In recent years, DeepFake has become a common threat to society due to the remarkable progress of generative adversarial networks (GANs) in image synthesis. Unfortunately, existing studies that propose various approaches for fighting DeepFakes and determining whether a facial image is real or fake are still at an early stage. Current DeepFake detection methods clearly struggle to keep pace with the rapid progress of GANs, especially in adversarial scenarios where attackers intentionally evade detection, for example by adding perturbations that fool DNN-based detectors. While passive detection only tells whether an image is fake or real, DeepFake provenance provides clues for tracking the sources of fake images in DeepFake forensics. Tracked fake images can then be blocked immediately by administrators, preventing further spread across social networks.
In this paper, we investigate the potential of image tagging for DeepFake provenance tracking. Specifically, we devise a deep learning-based approach, named FakeTagger, with a simple yet effective encoder and decoder design combined with channel coding, which embeds a message into a facial image so that the message can be recovered with high confidence even after drastic GAN-based DeepFake transformations. The embedded message can encode the identity of the facial image, which further contributes to DeepFake detection and provenance. Experimental results demonstrate that the proposed approach recovers the embedded message with an average accuracy of more than 95% across four common types of DeepFakes. Our findings confirm image tagging as an effective privacy-preserving technique for protecting personal photos from being DeepFaked.
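To make the tag-and-recover workflow concrete, below is a minimal sketch, in PyTorch, of the kind of pipeline the abstract describes: an encoder embeds a channel-coded message into a facial image as a low-amplitude residual, and a decoder reads the coded bits back out. Everything here is an illustrative assumption rather than the paper's actual architecture or training setup: the 32-bit message length, the toy repetition code standing in for proper channel coding, the network shapes, and the random stand-in image. In the paper's setting, the encoder and decoder would be trained jointly, with GAN-based DeepFake transformations (or differentiable proxies) applied between them so that the embedded tag survives manipulation.

```python
# Minimal sketch of message embedding/recovery with a toy channel code.
# NOT the authors' implementation; all sizes and components are illustrative.
import torch
import torch.nn as nn

MSG_BITS = 32          # length of the identity message (assumption)
REPEAT = 3             # simple repetition code as a stand-in for channel coding


def channel_encode(bits: torch.Tensor) -> torch.Tensor:
    """Repeat each bit REPEAT times (a toy channel code)."""
    return bits.repeat_interleave(REPEAT, dim=-1)


def channel_decode(soft_bits: torch.Tensor) -> torch.Tensor:
    """Majority-vote decoding of the repetition code."""
    grouped = soft_bits.view(*soft_bits.shape[:-1], MSG_BITS, REPEAT)
    return (grouped.mean(dim=-1) > 0.5).float()


class Encoder(nn.Module):
    """Embeds a coded message into an image as a low-amplitude residual."""
    def __init__(self, msg_len: int):
        super().__init__()
        self.msg_proj = nn.Linear(msg_len, 64 * 64)
        self.conv = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, image: torch.Tensor, coded_msg: torch.Tensor) -> torch.Tensor:
        b = image.size(0)
        msg_plane = self.msg_proj(coded_msg).view(b, 1, 64, 64)
        residual = self.conv(torch.cat([image, msg_plane], dim=1))
        return (image + 0.05 * residual).clamp(0, 1)  # keep the tag visually subtle


class Decoder(nn.Module):
    """Recovers the coded message from a (possibly manipulated) image."""
    def __init__(self, msg_len: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, msg_len), nn.Sigmoid(),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.conv(image)


if __name__ == "__main__":
    torch.manual_seed(0)
    encoder, decoder = Encoder(MSG_BITS * REPEAT), Decoder(MSG_BITS * REPEAT)

    image = torch.rand(1, 3, 64, 64)                      # stand-in facial image
    message = torch.randint(0, 2, (1, MSG_BITS)).float()  # random identity message
    tagged = encoder(image, channel_encode(message))

    # Untrained networks will not recover the message; joint training with
    # simulated DeepFake distortions between encoder and decoder is what makes
    # the tag recoverable in practice.
    recovered = channel_decode(decoder(tagged))
    print("bit accuracy:", (recovered == message).float().mean().item())
```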



Published In

MM '21: Proceedings of the 29th ACM International Conference on Multimedia
October 2021
5796 pages
ISBN:9781450386517
DOI:10.1145/3474085
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. deepfake forensics
  2. image tagging
  3. provenance tracking

Qualifiers

  • Research-article

Funding Sources

  • the fellowship of China National Postdoctoral Program for Innovative Talents
  • the Fundamental Research Funds for the Central Universities
  • the National Natural Science Foundation of China

Conference

MM '21: ACM Multimedia Conference
October 20 - 24, 2021
Virtual Event, China

Acceptance Rates

Overall Acceptance Rate 2,145 of 8,556 submissions, 25%


