Abstract
We investigate ways of increasing trust in the verdicts of established Convolutional Neural Network models on the face recognition task. In mission-critical application settings, additional metrics of a model's uncertainty in its verdicts can be used to isolate low-trust verdicts into an additional 'uncertain' class, increasing the trusted accuracy of the model at the expense of the number of 'certain' verdicts. In this study, six established Convolutional Neural Network models are tested on a makeup and occlusions data set partitioned to emulate and exaggerate the training/test set disparity typical of real-life conditions. Simple A/B and meta-learning supervisor Artificial Neural Network solutions are tested for their ability to learn the error patterns of the underlying Convolutional Neural Networks.
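The trusted-accuracy trade-off described above can be sketched as follows: a supervisor's 'uncertain' flag routes a verdict out of scoring, which raises accuracy on the retained verdicts while reducing their number (coverage). This is an illustrative sketch only, not the paper's evaluation code; the function name and toy data are hypothetical.

```python
import numpy as np

def trusted_accuracy(y_true, y_pred, is_certain):
    """Accuracy computed only over verdicts the supervisor marks 'certain'.

    y_true, y_pred : arrays of class labels
    is_certain     : boolean mask; False routes a verdict into the
                     'uncertain' class and removes it from scoring
    Returns (trusted accuracy, coverage).
    """
    certain = np.asarray(is_certain, dtype=bool)
    coverage = certain.mean()  # share of verdicts kept as 'certain'
    if coverage == 0:
        return 0.0, 0.0
    correct = np.asarray(y_true)[certain] == np.asarray(y_pred)[certain]
    return float(correct.mean()), float(coverage)

# Toy example: 6 verdicts, the supervisor rejects the two erroneous ones
y_true  = [0, 1, 2, 2, 1, 0]
y_pred  = [0, 1, 1, 2, 0, 0]
certain = [True, True, False, True, False, True]

acc, cov = trusted_accuracy(y_true, y_pred, certain)
print(acc, cov)  # all 4 retained verdicts are correct: accuracy 1.0, coverage 4/6
```

A supervisor that perfectly learns the underlying model's error pattern would, as here, push trusted accuracy toward 1.0 while coverage records how many verdicts were sacrificed to achieve it.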
© 2022 Springer Nature Switzerland AG
Cite this paper
Selitskiy, S., Christou, N., Selitskaya, N. (2022). Using Statistical and Artificial Neural Networks Meta-learning Approaches for Uncertainty Isolation in Face Recognition by the Established Convolutional Models. In: Nicosia, G., et al. Machine Learning, Optimization, and Data Science. LOD 2021. Lecture Notes in Computer Science(), vol 13164. Springer, Cham. https://doi.org/10.1007/978-3-030-95470-3_26
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-95469-7
Online ISBN: 978-3-030-95470-3
eBook Packages: Computer Science (R0)