Abstract
Advances in generative adversarial networks (GANs) have significantly improved the quality of synthetic facial images, posing threats to many security-sensitive areas. Thus, identifying whether a presented facial image is synthesized is of forensic importance. Our fundamental discovery is the lack of capillaries in the sclera of GAN-generated faces, which is caused by the lack of physical/physiological constraints in the GAN model. Because real human eyes always exhibit capillaries to a greater or lesser extent, one can distinguish real faces from GAN-generated ones by carefully examining the sclera area. Following this idea, we first extract the sclera area from a probe image and then feed it into a residual attention network to distinguish GAN-generated faces from real ones. The proposed method is validated on the Flickr-Faces-HQ and StyleGAN2/StyleGAN3-generated face datasets. Experiments demonstrate that the capillaries in the sclera are a very effective feature for identifying GAN-generated faces. Our code is available at: https://github.com/10961020/Deepfake-detector-based-on-blood-vessels.
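The pipeline summarized above (sclera extraction followed by a classifier) can be sketched as follows. This is a minimal illustration, not the authors' released implementation: it assumes dlib's 68-point facial landmark model for locating the eyes and a simple Otsu threshold for isolating the bright sclera region; the actual method (see the linked repository) may segment the sclera differently and uses a residual attention network as the classifier.

```python
# Hedged sketch: sclera extraction from eye landmarks, assuming dlib's
# 68-point landmark model and an OpenCV/NumPy pipeline. Names and steps
# are illustrative, not the paper's exact implementation.
import cv2
import dlib
import numpy as np

# Indices of the two eye contours in dlib's 68-point landmark model.
EYE_IDX = [list(range(36, 42)), list(range(42, 48))]

detector = dlib.get_frontal_face_detector()
# Assumption: the standard dlib landmark model file has been downloaded
# and sits next to this script.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_sclera_patches(bgr_image):
    """Return one roughly sclera-only patch per detected eye."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    patches = []
    for face in detector(gray, 1):
        shape = predictor(gray, face)
        for idx in EYE_IDX:
            pts = np.array([[shape.part(i).x, shape.part(i).y] for i in idx],
                           dtype=np.int32)
            # Keep only pixels inside the eye contour.
            mask = np.zeros(gray.shape, dtype=np.uint8)
            cv2.fillPoly(mask, [pts], 255)
            x, y, w, h = cv2.boundingRect(pts)
            eye = cv2.bitwise_and(gray, gray, mask=mask)[y:y + h, x:x + w]
            # Otsu thresholding separates the bright sclera from iris/skin.
            _, sclera_mask = cv2.threshold(eye, 0, 255,
                                           cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            crop = bgr_image[y:y + h, x:x + w]
            patches.append(cv2.bitwise_and(crop, crop, mask=sclera_mask))
    return patches

# Each patch would then be resized and fed to a binary real/fake classifier
# (a residual attention network in the paper); any CNN stands in here.
```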
Acknowledgements
This work is supported by the Network Emergency Management Research Special Topic (No. WLYJGL2023ZD003) and the Opening Project of the Guangdong Province Key Laboratory of Information Security Technology (Grant No. 2020B1212060078).
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Zhang, T., Peng, A., Zeng, H. (2024). Ignored Details in Eyes: Exposing GAN-Generated Faces by Sclera. In: Luo, B., Cheng, L., Wu, ZG., Li, H., Li, C. (eds) Neural Information Processing. ICONIP 2023. Lecture Notes in Computer Science, vol 14451. Springer, Singapore. https://doi.org/10.1007/978-981-99-8073-4_43
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-8072-7
Online ISBN: 978-981-99-8073-4
eBook Packages: Computer Science, Computer Science (R0)