
Adding Probabilistic Certainty to Improve Performance of Convolutional Neural Networks

  • Conference paper
  • High Performance Computing (CARLA 2019)
  • Part of the book series: Communications in Computer and Information Science (CCIS, volume 1087)

Abstract

Convolutional Neural Networks (CNNs) are successfully being used for many computer vision tasks, from labeling cancerous cells in medical images to identifying traffic signals for self-driving cars. Supervised CNNs classify raw input data according to the patterns learned from an input training set. This set is typically obtained by manually labeling the images, which can introduce uncertainty into the data. The level of expertise of the professionals labeling the training set sometimes varies widely, or some of the images used may not be clear and are difficult to label. This leads to data sets with pictures labeled differently by different experts, or to uncertainty in the experts' opinions.

These kinds of errors in the training set happen more frequently when the CNN's task is to classify numerous labels with similar characteristics. For example, when labeling damage to civil infrastructure after an earthquake, there are more than two hundred different labels, some of them similar to each other, and the experts labeling the sets frequently disagree on which one to use. In this paper, we use probabilistic analysis to evaluate both the likelihood of the labels in the training set (produced by the CNN) and the uncertainty of that likelihood. The uncertainty in the likelihood is represented by a probability density that spreads the CNN's likelihood estimate over a range of values dictated by the uncertainty in the truth set.
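
To make the idea concrete, the following minimal sketch (Python, not from the paper) spreads a CNN's softmax likelihood estimate into a density whose width is dictated by disagreement among the experts who labeled the image. The function name, the disagreement-to-width heuristic, and the choice of a triangular density are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): spread a CNN's point
# likelihood estimate into a probability density whose width reflects
# disagreement in the labeled truth set.
import numpy as np

def spread_likelihood(p_cnn, expert_votes, n_samples=10_000, rng=None):
    """Return Monte Carlo samples of the likelihood for one class.

    p_cnn        -- CNN softmax score for the class (point estimate in [0, 1])
    expert_votes -- 0/1 votes from the experts for this class on this image
    """
    rng = rng or np.random.default_rng()
    votes = np.asarray(expert_votes, dtype=float)
    # Expert disagreement widens the density; full agreement collapses it
    # toward a very narrow triangle around the CNN's estimate.
    half_width = max(votes.std(), 1e-3)
    lo = max(0.0, p_cnn - half_width)
    hi = min(1.0, p_cnn + half_width)
    # Triangular density whose mode sits at the CNN's point estimate.
    return rng.triangular(lo, p_cnn, hi, size=n_samples)

# Example: the CNN reports 0.8, but five experts split 3-2 on the label.
samples = spread_likelihood(0.8, [1, 1, 1, 0, 0])
print(f"mean={samples.mean():.3f}, spread={samples.std():.3f}")
```

Sampled this way, a class score is no longer a single number: downstream decisions can be made on the whole density (for example, on a credible interval), so they carry an explicit measure of how far truth-set uncertainty could move them.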


Notes

  1. At this stage of development, a triangular density function with a very narrow base has been used instead of an impulse (see the sketch after these notes).

  2. This is a restatement of Table 2, ordered by photo id.

  3. The outputs of the NN are simulated; we estimate the output values of an NN based on our previous paper [1], where we used a Single Shot MultiBox Detector (SSD) [18].
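
As a companion to Note 1, the short sketch below (Python, for illustration only; the half-width value is an assumption, not the paper's) shows how a triangular density with a very narrow base behaves numerically like an impulse at the CNN's estimate while remaining a proper density that can be sampled.

```python
# Note 1, minimal sketch: a triangular density with a very narrow base used
# in place of an impulse (Dirac delta) at a CNN likelihood estimate.
import numpy as np

rng = np.random.default_rng(0)
p_cnn = 0.8   # CNN's point likelihood estimate (illustrative value)
eps = 1e-3    # very narrow base half-width (assumed, not the paper's value)
samples = rng.triangular(p_cnn - eps, p_cnn, p_cnn + eps, size=100_000)
print(samples.mean())  # ~0.8: effectively an impulse at the estimate
print(samples.std())   # tiny but nonzero spread, so the density stays usable
```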

References

  1. Patterson, B., Leone, G., Pantoja, M., Behrouzi, A.: Deep learning for automated image classification of seismic damage to built infrastructure. In: Proceedings of the 11th National Conference in Earthquake Engineering (2018)

  2. Pantoja, M., Fabris, D., Behrouzi, A.: Deep learning basic overview. Concrete International Magazine, September 2018

  3. Tesla Crash Preliminary Report. US Department of Transportation, NHTSA PE 16-007

  4. Sun, S., Chen, C., Carin, L.: Learning structured weight uncertainty in Bayesian neural networks. In: Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS) 2017, JMLR: W&CP, vol. 54, Fort Lauderdale (2017)

  5. Kendall, A., Gal, Y.: What uncertainties do we need in Bayesian deep learning for computer vision? NIPS (2017). https://arxiv.org/abs/1703.04977

  6. Gal, Y., Ghahramani, Z.: Dropout as a Bayesian approximation: representing model uncertainty in deep learning. In: Proceedings of the 33rd International Conference on Machine Learning, PMLR, vol. 48, pp. 1050–1059 (2016)

  7. Deceus, T.: Handling imprecise and uncertain class labels in classification and clustering. Bayesian Deep Learning, COST Action IC 0702 Working Group C, Mallorca, 16 March 2009

  8. Gal, Y.: What my deep learning model doesn't know, 3 July 2015

  9. David, H.: The Certainty-Factor Model. In: Encyclopedia of Artificial Intelligence, 2nd edn., pp. 131–138. Wiley, New York

  10. Pearl, J.: Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Mateo

  11. Zadeh, L.A., Klir, G.J., Yuan, B. (eds.): Fuzzy Sets, Fuzzy Logic, and Fuzzy Systems: Selected Papers. Advances in Fuzzy Systems - Applications and Theory, vol. 6. World Scientific

  12. Knuth, D.E.: The Art of Computer Programming, vol. 2, Section 4.3.3, pp. 290–295

  13. Press, W.H.: Numerical Recipes in C, Section 8.10, pp. 329–343 (1986)

  14. Google Research Blog: AlphaGo: mastering the ancient game of Go with machine learning, 27 January 2016

  15. Kendall, A., Badrinarayanan, V., Cipolla, R.: Bayesian SegNet: model uncertainty in deep convolutional encoder-decoder architectures for scene understanding. CoRR (2015). http://arxiv.org/abs/1511.02680

  16. Weideman, H.: Quantifying uncertainty in neural networks. https://hjweide.github.io/quantifying-uncertainty-in-neural-networks

  17. Avis, D., Fukuda, K.: A pivoting algorithm for convex hulls and vertex enumeration of arrangements and polyhedra. Discrete Comput. Geom. 8(3), 295–313 (1992)

  18. Liu, W., et al.: SSD: single shot multibox detector. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9905, pp. 21–37. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46448-0_2

  19. GitHub. https://github.com/mpantoja314/ImageTagVER


Author information


Correspondence to Maria Pantoja.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Pantoja, M., Kleinhenz, R., Fabris, D. (2020). Adding Probabilistic Certainty to Improve Performance of Convolutional Neural Networks. In: Crespo-Mariño, J., Meneses-Rojas, E. (eds) High Performance Computing. CARLA 2019. Communications in Computer and Information Science, vol 1087. Springer, Cham. https://doi.org/10.1007/978-3-030-41005-6_17


  • DOI: https://doi.org/10.1007/978-3-030-41005-6_17

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-41004-9

  • Online ISBN: 978-3-030-41005-6

  • eBook Packages: Computer Science, Computer Science (R0)
