
CNN Based Predictor of Face Image Quality

  • Conference paper
Pattern Recognition. ICPR International Workshops and Challenges (ICPR 2021)

Abstract

We propose a novel method for training a Convolutional Neural Network, named CNN-FQ, which takes a face image and outputs a scalar summary of the image quality. The CNN-FQ is trained from triplets of faces that are automatically labeled based on the responses of a pre-trained face matcher. The quality scores extracted by the CNN-FQ are directly linked to the probability that the face matcher incorrectly ranks a randomly selected triplet of faces. We applied the proposed CNN-FQ, trained on the CASIA database, to select the best-quality image from a collection of face images capturing the same identity. The quality of the single-face representation was evaluated on the 1:1 Verification and 1:N Identification tasks defined by the challenging IJB-B protocol. We show that the recognition performance obtained when using faces selected based on the CNN-FQ scores is significantly higher than what can be achieved by competing state-of-the-art image quality extractors.
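To make the selection step concrete, the following minimal sketch keeps the highest-scoring image per identity. It is not the authors' released code: cnn_fq stands in for any model mapping aligned face crops to scalar quality scores, and faces_by_identity is a hypothetical input layout.

```python
import torch

@torch.no_grad()
def select_best_faces(cnn_fq, faces_by_identity):
    """faces_by_identity: dict mapping identity -> tensor of aligned crops, shape (N, 3, H, W)."""
    best = {}
    for identity, faces in faces_by_identity.items():
        scores = cnn_fq(faces).view(-1)          # one scalar quality score per face
        best[identity] = faces[scores.argmax()]  # keep the highest-quality crop
    return best
```

The selected crops would then be passed to the face matcher to form the single-image templates evaluated under the IJB-B protocol.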


Notes

  1. RetinaFace is available at https://github.com/biubug6/Pytorch_Retinaface.

  2. Pre-trained SENet is available at https://github.com/ox-vgg/vgg_face2.

  3. The ROC is calculated from two metrics, the True Acceptance Rate (TAR) and the False Acceptance Rate (FAR). TAR corresponds to the probability that the system correctly accepts an authorised person; it is estimated as the fraction of matching pairs whose cosine distance is below a decision threshold. FAR corresponds to the probability that the system incorrectly accepts a non-authorised person; it is estimated as the fraction of non-matching pairs whose cosine distance is below the decision threshold. A computational sketch of both rates is given after these notes.

  4. DET and CMC plots are calculated in terms of two metrics, the False Positive Identification Rate (FPIR) and the False Negative Identification Rate (FNIR). FPIR is defined as the proportion of non-mate searches that return any candidate below a decision threshold; only the candidate at rank 1 is considered. FNIR is defined as the proportion of mate searches for which the known individual is outside the top R = 20 ranks or has a cosine distance above the threshold. The second sketch after these notes illustrates both rates.

  5. https://github.com/yermandy/cnn-fq.
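The TAR/FAR definitions in note 3 amount to a few lines of code. The sketch below is only illustrative and not the official IJB-B evaluation code; it assumes the genuine (matching) and impostor (non-matching) cosine distances have already been computed by the face matcher.

```python
import numpy as np

def tar_far(match_dists, nonmatch_dists, threshold):
    """Genuine / impostor cosine distances -> (TAR, FAR) at a given threshold."""
    tar = float(np.mean(np.asarray(match_dists) < threshold))     # accepted genuine pairs
    far = float(np.mean(np.asarray(nonmatch_dists) < threshold))  # accepted impostor pairs
    return tar, far
```

Sweeping the threshold over the range of observed distances traces out the ROC curve.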
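In the same spirit, here is a minimal sketch of the FPIR/FNIR definitions from note 4; it is an illustration under stated assumptions rather than the benchmark's reference implementation. It assumes that for every mate search the rank and cosine distance of the correct identity are known, and that for every non-mate search the cosine distance of its rank-1 candidate is known.

```python
import numpy as np

def fpir(nonmate_rank1_dists, threshold):
    # Non-mate searches whose rank-1 candidate is (incorrectly) accepted.
    return float(np.mean(np.asarray(nonmate_rank1_dists) < threshold))

def fnir(mate_ranks, mate_dists, threshold, R=20):
    ranks = np.asarray(mate_ranks)
    dists = np.asarray(mate_dists)
    # A mate search fails if the correct identity is outside the top R ranks
    # or its cosine distance exceeds the decision threshold.
    return float(np.mean((ranks > R) | (dists > threshold)))
```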


Acknowledgments

The research was supported by the Czech Science Foundation project GACR GA19-21198S.

Author information


Corresponding author

Correspondence to Vojtech Franc.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Yermakov, A., Franc, V. (2021). CNN Based Predictor of Face Image Quality. In: Del Bimbo, A., et al. Pattern Recognition. ICPR International Workshops and Challenges. ICPR 2021. Lecture Notes in Computer Science, vol. 12666. Springer, Cham. https://doi.org/10.1007/978-3-030-68780-9_52

  • DOI: https://doi.org/10.1007/978-3-030-68780-9_52

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-68779-3

  • Online ISBN: 978-3-030-68780-9

  • eBook Packages: Computer Science, Computer Science (R0)
