
Multi-modal Face Anti-spoofing Based on a Single Image

  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNIP,volume 13021))

Abstract

Using multi-modal data, such as VIS, IR, and Depth, for face anti-spoofing (FAS) is a robust and effective approach, because the complementary information between different modalities can better defend against various attacks. However, multi-modal data is expensive to capture in real application scenarios, which makes a model trained on multi-modal data unavailable at testing time. We define this phenomenon as the train-test inconsistency problem, which most existing methods ignore. To this end, we propose a novel multi-modal face anti-spoofing framework (GFF), which adopts multi-modal data during training but uses only a single modality during testing to simulate multi-modal input. Specifically, GFF is a two-step framework. In step I, we adopt a GAN model to fit the distributions of face images in the different modalities and learn the transformation strategies between distributions, so that the other modalities can be generated from a single real modality image. In step II, we select the real face images in one modality and the generated images in the other modalities according to actual needs to construct a simulation dataset, which is used to train the face anti-spoofing model. The advantage of GFF is that it achieves a good trade-off between data capture cost and model performance in real application scenarios. Experimental results show that the proposed method effectively overcomes the train-test inconsistency problem. On the CASIA-SURF CeFA dataset, GFF surpasses existing single-modality-based methods and, surprisingly, even some multi-modality-based methods.
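The step II construction described above can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: the functions `vis_to_ir` and `vis_to_depth` are hypothetical stand-ins for the trained step-I GAN translators, and images are represented as simple lists of pixel values.

```python
def vis_to_ir(vis_img):
    # Hypothetical stand-in for the step-I VIS->IR GAN generator.
    return [p * 0.5 for p in vis_img]

def vis_to_depth(vis_img):
    # Hypothetical stand-in for the step-I VIS->Depth GAN generator.
    return [p * 0.25 for p in vis_img]

def build_simulation_dataset(real_vis_images, labels):
    """Step II: pair each real VIS image with generated IR and Depth
    images to form a pseudo multi-modal training sample."""
    dataset = []
    for vis, label in zip(real_vis_images, labels):
        dataset.append({
            "vis": vis,                   # real captured modality
            "ir": vis_to_ir(vis),         # generated modality
            "depth": vis_to_depth(vis),   # generated modality
            "label": label,               # live (1) or spoof (0)
        })
    return dataset

# At test time the same generators map the single captured VIS image
# into the missing modalities, so the multi-modal FAS model receives
# the same input structure it was trained on.
demo = build_simulation_dataset([[0.2, 0.8]], [1])
```

The point of the sketch is the data flow: only one modality is ever captured, and the simulation dataset gives the downstream multi-modal classifier a consistent input format at train and test time.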

This is a student paper.

This project is supported by the NSFC (62076258).


Notes

  1. Because some of our inputs are generated rather than real, our model is a multi-modal-like model.


Author information

Corresponding author

Correspondence to Yuezhen Huang.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Zhang, Q., Liao, Z., Huang, Y., Lai, J. (2021). Multi-modal Face Anti-spoofing Based on a Single Image. In: Ma, H., et al. (eds.) Pattern Recognition and Computer Vision. PRCV 2021. Lecture Notes in Computer Science, vol. 13021. Springer, Cham. https://doi.org/10.1007/978-3-030-88010-1_35


  • DOI: https://doi.org/10.1007/978-3-030-88010-1_35


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-88009-5

  • Online ISBN: 978-3-030-88010-1

  • eBook Packages: Computer Science, Computer Science (R0)
