
DEEPFAKER: A Unified Evaluation Platform for Facial Deepfake and Detection Models

Published: 05 February 2024

Abstract

Deepfake data contains realistically manipulated faces, and its abuse poses a serious threat to security- and privacy-critical applications. Intensive research from academia and industry has produced many deepfake and detection models, leading to a constant race between attack and defense. However, due to the lack of a unified evaluation platform, many critical questions on this subject remain largely unexplored. How well do existing deepfake models evade detection? How well do existing detection models generalize to samples from different deepfake methods? How effective are the detection APIs provided by cloud-based vendors? How evasive and transferable are adversarial deepfakes in lab and real-world environments? How do various factors impact the performance of deepfake and detection models?
To bridge the gap, we design and implement DEEPFAKER, a unified and comprehensive deepfake/detection evaluation platform. Specifically, DEEPFAKER integrates 10 state-of-the-art deepfake methods and 9 representative detection methods, while providing a user-friendly interface and a modular design that allows new methods to be integrated easily. Leveraging DEEPFAKER, we conduct a large-scale empirical study of facial deepfake/detection models and draw a set of key findings: (i) detection methods generalize poorly to samples generated by different deepfake methods; (ii) there is no significant correlation between the anti-detection ability and the visual quality of deepfake samples; (iii) current detection APIs have poor detection performance, and adversarial deepfakes achieve about 70% attack success rate against all cloud-based vendors, calling urgently for effective and robust detection APIs; (iv) detection methods in the lab are more robust against transfer attacks than detection APIs in the real-world environment; and (v) deepfake videos are not always more difficult to detect after video compression. We envision that DEEPFAKER will benefit future research on facial deepfake and detection.

A Appendix

A.1 Details of the Deepfake Dataset

The dataset is generated by the 10 state-of-the-art deepfake methods integrated in the current DEEPFAKER platform. It consists of two parts: deepfake videos and deepfake images. The deepfake videos are generated by the face swapping and face reenactment methods, covering seven deepfake methods with a total of 21,000 videos. The deepfake images are generated by all 10 deepfake methods integrated in the platform, 24,000 images in total. An illustration of the deepfake dataset is shown in Figure 13.

A.2 Scalability of the DEEPFAKER Platform

Our DEEPFAKER platform is designed to be highly scalable and easily extensible, allowing new models to be integrated seamlessly. Integrating a new model takes two steps. First, the user packages the method into a Docker container with all necessary dependencies and configuration files. Second, the method is connected to the platform's API by defining its input and output parameters and mapping them to API endpoints. This involves creating a Flask application that serves as the API, defining endpoints that correspond to the desired input and output parameters of the method, and mapping those endpoints to a function that executes the method. A code example that demonstrates how to integrate a detection method into the platform is presented next.
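The two steps above can be sketched as follows. This is a minimal illustration rather than the platform's actual code: the endpoint path, the form field name, and the `run_detection` stub are hypothetical stand-ins for the user's packaged detection method.

```python
# Sketch: wrapping a deepfake detection method as a Flask API inside its
# Docker container. `run_detection` is a placeholder for the user's method.
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_detection(image_bytes):
    """Placeholder for the containerized detection method. A real
    implementation would load the model once at startup and return a
    fake-probability score for the input face image."""
    return {"fake_probability": 0.5}

@app.route("/detect", methods=["POST"])
def detect():
    # Input parameter: an image uploaded under the form field "image".
    if "image" not in request.files:
        return jsonify({"error": "missing 'image' file field"}), 400
    image_bytes = request.files["image"].read()
    # Output parameters: the detection score, returned as JSON.
    return jsonify(run_detection(image_bytes))

if __name__ == "__main__":
    # Bind to all interfaces so the endpoint is reachable from outside
    # the Docker container.
    app.run(host="0.0.0.0", port=5000)
```

The platform can then map the container's `/detect` endpoint to its own evaluation pipeline; deepfake (generation) methods would be wrapped the same way, with the output parameter being the manipulated image instead of a score.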

A.3 Additional Deepfake-Detection Interaction Evaluation

Table 11 shows the evaluation results of detection methods against deepfake samples generated from data obtained online. We can see that the overall trend of evaluation results obtained from the downloaded data remains consistent with those from the CelebA-HQ dataset. The average performance of all detection methods ranges approximately between 50% and 70% on all deepfake samples. It highlights the limited generalization ability of current detection methods to previously unseen deepfake samples. The effectiveness of each detection method varies based on different types of deepfake data, revealing distinct advantages and limitations across various deepfake techniques.
Table 11. AUC Scores of Detection Methods against Deepfake Samples Generated from the Downloaded Data (face swapping: FSGAN\(_S\), FaceShifter, SimSwap; face reenactment: ICface, FSGAN\(_R\), FOMM, MRAA)

| Method | FSGAN\(_S\) | FaceShifter | SimSwap | ICface | FSGAN\(_R\) | FOMM | MRAA | AVG |
|---|---|---|---|---|---|---|---|---|
| DSP-FWA | 40.86% | 14.20% | 75.59% | 38.35% | 67.43% | 60.96% | 77.11% | 53.50% |
| VA-MLP (Face2Face) | 70.52% | 42.67% | 63.53% | 71.61% | 77.74% | 74.27% | 71.13% | 67.35% |
| VA-MLP (Deepfakes) | 70.94% | 32.44% | 56.47% | 76.40% | 74.37% | 45.48% | 66.71% | 60.40% |
| XceptionNet | 47.77% | 54.37% | 35.00% | 59.22% | 68.48% | 55.62% | 25.66% | 49.45% |
| Multi-task | 73.45% | 59.55% | 60.29% | 47.67% | 38.17% | 50.02% | 64.11% | 56.18% |
| CapsuleNet | 62.92% | 53.42% | 65.13% | 50.83% | 91.72% | 55.35% | 56.15% | 62.22% |
| CNNDetection | 90.28% | 89.60% | 30.47% | 79.45% | 96.19% | 39.92% | 70.59% | 70.93% |
| CViT | 91.49% | 42.78% | 63.88% | 91.61% | 82.63% | 22.97% | 53.49% | 64.12% |
| DefakeHop | 82.21% | 62.54% | 56.43% | 56.89% | 86.99% | 50.18% | 58.23% | 64.78% |
| LRNet | 70.03% | 72.41% | 49.65% | 66.91% | 78.26% | 55.66% | 54.73% | 63.95% |
| AVG | 70.05% | 52.40% | 55.64% | 63.90% | 76.20% | 51.04% | 59.79% | |

A.4 Diffusion-Based Methods Performance Evaluation

We introduce and evaluate two diffusion-based methods, namely Diff-AE and DiffusionCLIP. Consistent with the experimental setup for evaluating attribute editing methods, we select 200 images from the source image dataset and manipulate the face of these images with five different attributes. Correspondingly, each method generates 1,000 attribute-edited images. The evaluation results in Table 12 show that the average AUC scores of various detection methods for Diff-AE and DiffusionCLIP are 58.23% and 46.63%, respectively. These scores are lower than the evaluation results of the GAN-based methods in Table 6. It also indicates that the diffusion-based models exhibit stronger anti-detection ability. This phenomenon may be because these existing detection methods are primarily designed for detecting fake images generated by GAN-based methods. Consequently, their generalization ability is relatively limited when applied to new images generated by diffusion models.
Table 12. AUC Scores of Detection Methods against Diffusion-Based Attribute Editing Methods

| Method | Diff-AE | DiffusionCLIP | AVG |
|---|---|---|---|
| DSP-FWA | 49.29% | 30.29% | 39.79% |
| VA-MLP (Face2Face) | 64.66% | 59.43% | 62.05% |
| XceptionNet | 61.41% | 44.45% | 52.93% |
| Multi-task | 30.63% | 16.57% | 23.60% |
| CapsuleNet | 72.17% | 64.82% | 68.50% |
| CNNDetection | 72.21% | 64.21% | 67.71% |
| AVG | 58.23% | 46.63% | |

A.5 Additional Detection APIs Effectiveness Evaluation

We use the AUC metric to evaluate the effectiveness of the detection APIs, and the results are presented in Table 13. It can be observed that the detection APIs distinguish real from fake images better than videos. Moreover, for video detection, the AUC scores of the detection APIs align with the overall trend of the local detection models presented in Tables 5 and 11, averaging between about 50% and 70%. Note that since the commercial detection APIs are constantly updated, Table 13 calculates the AUC scores based on the latest results returned by Baidu.
Table 13. Effectiveness Evaluation of Different Detection APIs with AUC Scores

| API | Video | Image |
|---|---|---|
| Tencent | 55.26% | 75.41% |
| Baidu | 62.66% | 85.62% |
| Deepware | 70.35% | N/A |
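For reference, AUC values like those in Table 13 can be computed directly from labeled scores via the rank-based (Mann-Whitney) formulation. The sketch below is a plain-Python illustration, not the platform's evaluation code.

```python
def auc_score(labels, scores):
    """AUC as the probability that a randomly chosen fake sample
    (label 1) receives a higher fake score than a randomly chosen
    real sample (label 0); ties count half (Mann-Whitney statistic)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A perfect detector ranks every fake above every real sample:
# auc_score([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1]) evaluates to 1.0
```

Because the metric depends only on the ranking of scores, it applies equally to local models and to the probability-like scores returned by the cloud APIs.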

A.6 Additional Detection Robustness Evaluation

Consistent with the experimental settings of the iterative gradient sign attacks, we use the adversarial attack methods FGSM and PGD to generate adversarial deepfakes. The evaluation results of detection methods against adversarial deepfakes generated by FGSM are presented in Tables 14 and 15, whereas the robustness evaluations based on PGD are shown in Tables 16 and 17. Although adversarial deepfakes generated by the different attack methods differ in transfer attack capability (PGD \(\gt\) IFGSM \(\gt\) FGSM), these results collectively indicate that the robustness of current detection models is insufficient, falling significantly below our expectations. This emphasizes the need for future detection models to be more robust against adversarial attacks.
Table 14.
Table 14. Robustness Evaluation of Detection Methods against Adversarial Deepfake Videos Generated by FGSM
Table 15.
Table 15. Robustness Evaluation of Detection Methods against Adversarial Deepfake Images Generated by FGSM
Table 16.
Table 16. Robustness Evaluation of Detection Methods against Adversarial Deepfake Videos Generated by PGD
Table 17.
Table 17. Robustness Evaluation of Detection Methods against Adversarial Deepfake Images Generated by PGD
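To make the single-step attack concrete, the sketch below applies FGSM to a toy logistic-regression "detector"; the model, weights, and dimensions are illustrative stand-ins, not the detectors evaluated above. PGD iterates a projected version of the same update, which is one reason its transfer attacks are stronger.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, w, b, epsilon):
    """One FGSM step pushing a fake input x toward the 'real' label.

    For logistic regression with binary cross-entropy loss L and true
    label y, dL/dx = (sigmoid(w.x + b) - y) * w. FGSM perturbs x by
    epsilon * sign(dL/dx), increasing the loss w.r.t. the true label
    y = 1 and thereby decreasing the detector's fake score.
    """
    y = 1.0  # true label: the sample is fake
    grad = (sigmoid(np.dot(w, x) + b) - y) * w
    return x + epsilon * np.sign(grad)

# Toy example: random "detector" weights and a random "deepfake" input.
rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.0
x = rng.normal(size=8)
x_adv = fgsm(x, w, b, epsilon=0.1)
```

The perturbation is bounded in the \(\ell_\infty\) norm by epsilon, while the detector's fake score strictly decreases; for images, the perturbed pixels would additionally be clipped back to the valid range.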

A.7 Impact of Input Quality

To verify the impact of the quality of the source image on the face reenactment methods, we generate fake samples using FOMM and FSGAN\(_R\) under different experimental settings. The evaluation results in Figure 14(a) and (b) show that the quality of the source images has no consistent effect on the anti-detection performance of the face reenactment methods across different detection methods.
For the impact of the quality of the driving video on face swapping methods, we choose SimSwap and FSGAN\(_S\) to generate deepfake samples and use different detection methods to evaluate their anti-detection ability. According to the results in Figure 14(c) and (d), the fake samples generated from the strongly compressed (C40) driving videos have the worst anti-detection ability against VA-MLP and CViT. Therefore, the quality of the driving video has a significant influence on the anti-detection performance of the face swapping models.

A.8 Impact of Video Compression

In general, compressed deepfake videos are considered more challenging for detection methods. Here, we evaluate the impact of compressed videos on different deepfake and detection methods. To this end, we perform different degrees of compression on the fake videos generated by FOMM, FSGAN\(_R\), FSGAN\(_S\), and SimSwap. Figure 15 shows the anti-detection performance of the compressed videos on different detection models. We can see that the deepfake videos generated by FSGAN\(_R\), FSGAN\(_S\), and SimSwap are more difficult to detect after strong compression (C40). However, the fake videos generated by FOMM show the worst anti-detection performance across all detection methods under strong compression (low quality). Therefore, compressed fake videos are not always more difficult to detect.
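Assuming the compression levels follow the usual H.264 constant-rate-factor convention (e.g., C23 and C40 corresponding to CRF 23 and 40), one way to reproduce them is with ffmpeg. The helper below only builds the command line and is an illustrative sketch, not the platform's preprocessing code.

```python
# Sketch: reproducing graded video compression with ffmpeg's H.264
# constant-rate-factor (CRF) mode; a higher CRF means stronger
# compression. The command is constructed but not executed here.

def ffmpeg_compress_cmd(src, dst, crf):
    """Build an ffmpeg command that re-encodes `src` with libx264 at the
    given CRF (0 = lossless, 51 = worst quality)."""
    if not 0 <= crf <= 51:
        raise ValueError("CRF must be in [0, 51]")
    return ["ffmpeg", "-y", "-i", src,
            "-c:v", "libx264", "-crf", str(crf),
            "-c:a", "copy", dst]

# e.g., strong compression (C40) of a generated fake video:
cmd_c40 = ffmpeg_compress_cmd("fake.mp4", "fake_c40.mp4", 40)
```

Running the same source video through several CRF values yields the compression ladder used in Figure 15.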


Cited By

  • (2024) Video and Audio Deepfake Datasets and Open Issues in Deepfake Technology: Being Ahead of the Curve. Forensic Sciences 4, 3 (2024), 289–377. DOI: 10.3390/forensicsci4030021. Online publication date: 13-Jul-2024.
  • (2023) A Comprehensive Review of DeepFake Detection Using Advanced Machine Learning and Fusion Methods. Electronics 13, 1 (2023), 95. DOI: 10.3390/electronics13010095. Online publication date: 25-Dec-2023.

Information

Published In

ACM Transactions on Privacy and Security, Volume 27, Issue 1
February 2024, 369 pages
EISSN: 2471-2574
DOI: 10.1145/3613489

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 05 February 2024
Online AM: 29 November 2023
Accepted: 17 November 2023
Revised: 11 October 2023
Received: 15 May 2023
Published in TOPS Volume 27, Issue 1

Author Tags

1. Facial deepfake
2. deepfake detection
3. adversarial machine learning
4. experimental evaluation

Funding Sources

• National Natural Science Foundation of China
• Shandong Provincial Natural Science Foundation