
Using Statistical and Artificial Neural Networks Meta-learning Approaches for Uncertainty Isolation in Face Recognition by the Established Convolutional Models

  • Conference paper
  • In: Machine Learning, Optimization, and Data Science (LOD 2021)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 13164)

Abstract

We investigate ways of increasing trust in the verdicts of established Convolutional Neural Network models for the face recognition task. In mission-critical application settings, additional metrics of a model's uncertainty in its verdicts can be used to isolate low-trust verdicts into an additional 'uncertain' class, increasing the trusted accuracy of the model at the expense of the number of 'certain' verdicts. In this study, six established Convolutional Neural Network models are tested on a makeup and occlusions data set partitioned so as to emulate and exaggerate the training/test set disparity typical of real-life conditions. Simple A/B and meta-learning supervisor Artificial Neural Network solutions are tested for learning the error patterns of the underlying Convolutional Neural Networks.
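To make the abstract's idea concrete, the following is a minimal sketch, not the authors' implementation, of a supervisor network in the spirit described: it assumes (hypothetically) that the supervisor is a small PyTorch ANN fed the base CNN's softmax output vector, that it predicts whether the base verdict can be trusted, and that low-trust verdicts are routed to an extra 'uncertain' label. The names SupervisorANN, route_verdicts and trusted_accuracy, the architecture, and the choice of softmax scores as supervisor input are all illustrative assumptions.

    # Hypothetical sketch (not the paper's code): a supervisor ANN that learns
    # the error patterns of a frozen base CNN from its softmax outputs and
    # routes likely-wrong verdicts into an extra 'uncertain' class.
    import torch
    import torch.nn as nn

    class SupervisorANN(nn.Module):
        """Binary classifier: given the base CNN's softmax vector, predict
        whether the base model's argmax verdict should be trusted."""
        def __init__(self, n_classes: int, hidden: int = 64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_classes, hidden), nn.ReLU(),
                nn.Linear(hidden, 1), nn.Sigmoid(),
            )

        def forward(self, softmax_scores: torch.Tensor) -> torch.Tensor:
            # Returns a trust probability in [0, 1] for each sample.
            return self.net(softmax_scores).squeeze(-1)

    def route_verdicts(softmax_scores, supervisor, threshold=0.5, uncertain_label=-1):
        """Return base-CNN verdicts, replacing low-trust ones with 'uncertain'."""
        verdicts = softmax_scores.argmax(dim=1).clone()
        trust = supervisor(softmax_scores)
        verdicts[trust < threshold] = uncertain_label
        return verdicts

    def trusted_accuracy(verdicts, labels, uncertain_label=-1):
        """Accuracy computed only over the verdicts kept as 'certain'."""
        kept = verdicts != uncertain_label
        if kept.sum() == 0:
            return float("nan")
        return (verdicts[kept] == labels[kept]).float().mean().item()

    if __name__ == "__main__":
        torch.manual_seed(0)
        n_classes, batch = 10, 8
        # Stand-in for the base CNN's softmax outputs on a test batch.
        scores = torch.softmax(torch.randn(batch, n_classes), dim=1)
        labels = torch.randint(0, n_classes, (batch,))
        supervisor = SupervisorANN(n_classes)
        with torch.no_grad():
            verdicts = route_verdicts(scores, supervisor)
        print(trusted_accuracy(verdicts, labels))

In such a setup the supervisor would presumably be trained with a binary target indicating whether the base CNN's verdict on a held-out image was correct, while the simpler A/B alternative mentioned in the abstract could be emulated by thresholding the top softmax score directly instead of learning a supervisor.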




Author information

Correspondence to Stanislav Selitskiy.


Copyright information

© 2022 Springer Nature Switzerland AG

About this paper


Cite this paper

Selitskiy, S., Christou, N., Selitskaya, N. (2022). Using Statistical and Artificial Neural Networks Meta-learning Approaches for Uncertainty Isolation in Face Recognition by the Established Convolutional Models. In: Nicosia, G., et al. (eds.) Machine Learning, Optimization, and Data Science. LOD 2021. Lecture Notes in Computer Science, vol. 13164. Springer, Cham. https://doi.org/10.1007/978-3-030-95470-3_26


  • DOI: https://doi.org/10.1007/978-3-030-95470-3_26


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-95469-7

  • Online ISBN: 978-3-030-95470-3

  • eBook Packages: Computer Science, Computer Science (R0)
