EasyDeep: An IoT Friendly Robust Detection Method for GAN Generated Deepfake Images in Social Media

Conference paper
First Online:
Internet of Things. Technology and Applications (IFIPIoT 2021)

Abstract

Advancements in artificial intelligence, and especially in deep learning, have given birth to a new era of multimedia forgery; deepfakes take it to a whole new level. This deep learning based technology creates new images whose features are acquired from a different set of images. The rapid evolution of Generative Adversarial Networks (GANs) provides a readily available route to creating deepfakes: GANs generate highly sophisticated, realistic images through deep learning and implement deepfakes using image-to-image translation. We propose a novel, memory-efficient, lightweight machine learning based deepfake detection method that is successfully deployed on an IoT platform, along with a detection API. To the best of the authors' knowledge, this is the first effort to detect highly sophisticated GAN generated deepfake images at the edge. The novelty of the work lies in achieving considerable accuracy with a short training time and inference at the edge device. The total time for sending an image to the edge device, running detection, and displaying the result through the API is promising. We also discuss ways to improve accuracy and to reduce inference time. A comparative study is made through a three-fold textural analysis: computation of Shannon's entropy, measurement of several of Haralick's texture features (contrast, dissimilarity, homogeneity, and correlation), and study of the histograms of the generated images. Even when generated fake images look similar to the corresponding real images, the results present clear evidence that they differ significantly from the real images in entropy, contrast, dissimilarity, homogeneity, and correlation.
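The three-fold textural analysis mentioned above can be illustrated with a minimal NumPy sketch of the standard definitions: Shannon's entropy from the gray-level histogram, and Haralick's contrast, dissimilarity, homogeneity, and correlation from a normalized gray-level co-occurrence matrix (GLCM). This is a simplified illustration of the well-known formulas, not the authors' exact pipeline; a production version would more likely use scikit-image's `graycomatrix`/`graycoprops`.

```python
import numpy as np

def shannon_entropy(img):
    # Shannon entropy (bits) of the gray-level histogram.
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def glcm(img, dx=1, dy=0, levels=256):
    # Symmetric, normalized gray-level co-occurrence matrix
    # for pixel pairs at offset (dy, dx).
    m = np.zeros((levels, levels), dtype=float)
    a = img[: img.shape[0] - dy, : img.shape[1] - dx]
    b = img[dy:, dx:]
    np.add.at(m, (a.ravel(), b.ravel()), 1.0)  # accumulate pair counts
    m = m + m.T                                # make symmetric
    return m / m.sum()

def haralick_features(P):
    # Four of Haralick's texture features from a normalized GLCM P.
    levels = P.shape[0]
    i, j = np.indices((levels, levels))
    mu_i, mu_j = np.sum(i * P), np.sum(j * P)
    sd_i = np.sqrt(np.sum((i - mu_i) ** 2 * P))
    sd_j = np.sqrt(np.sum((j - mu_j) ** 2 * P))
    return {
        "contrast": np.sum(P * (i - j) ** 2),
        "dissimilarity": np.sum(P * np.abs(i - j)),
        "homogeneity": np.sum(P / (1.0 + (i - j) ** 2)),
        "correlation": np.sum(P * (i - mu_i) * (j - mu_j)) / (sd_i * sd_j),
    }
```

A real image such as a deepfake candidate would be loaded as a 2-D uint8 array and passed through these functions; comparing the resulting feature values between real and GAN-generated images is the kind of comparison the study performs.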



Author information

Correspondence to Saraju P. Mohanty.


Copyright information

© 2022 IFIP International Federation for Information Processing

About this paper

Cite this paper

Mitra, A., Mohanty, S.P., Corcoran, P., Kougianos, E. (2022). EasyDeep: An IoT Friendly Robust Detection Method for GAN Generated Deepfake Images in Social Media. In: Camarinha-Matos, L.M., Heijenk, G., Katkoori, S., Strous, L. (eds) Internet of Things. Technology and Applications. IFIPIoT 2021. IFIP Advances in Information and Communication Technology, vol 641. Springer, Cham. https://doi.org/10.1007/978-3-030-96466-5_14

  • DOI: https://doi.org/10.1007/978-3-030-96466-5_14

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-96465-8

  • Online ISBN: 978-3-030-96466-5

  • eBook Packages: Computer Science, Computer Science (R0)
