ABSTRACT
As the adoption of point-prediction models, such as deep neural networks and gradient boosting methods, continues to grow in critical decision-making systems, ensuring the reliability of their inferences has become a paramount concern. These deterministic models often exhibit excessively high confidence even on out-of-distribution data, leading to potentially costly errors in essential applications such as lending decisions and credit risk estimation. Accurate uncertainty quantification is therefore essential for practical and reliable deployment.
In this research, we propose to address this issue by employing predictive uncertainty. Through extensive experimentation, we demonstrate that employing probabilistic methods on distribution-shifted or out-of-distribution data leads to improved results, fostering more reliable predictions. We also introduce two novel methods that leverage gradient-boosting techniques to estimate model uncertainty effectively. These methods offer an alternative perspective on uncertainty estimation, contributing to the growing body of research in this domain.
To validate the efficacy of our proposed methods, we conduct experiments on the Lending Club loan dataset, showcasing their potential to enhance decision-making in critical scenarios. Ultimately, our study emphasizes the significance of uncertainty estimation in credit risk assessment and highlights the practical benefits of incorporating probabilistic methods in deep neural networks and gradient-boosting-based models.
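One common way to extract model uncertainty from gradient-boosting models, in the spirit of the ensemble-based probabilistic methods discussed above, is to train several boosted classifiers on bootstrap resamples and treat their disagreement as an epistemic-uncertainty signal. The sketch below is illustrative only: the dataset, ensemble size, and hyperparameters are assumptions for demonstration, not the paper's actual method.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Illustrative sketch (not the paper's exact method): a bootstrap ensemble of
# gradient-boosted classifiers whose disagreement serves as an uncertainty
# estimate. All settings below are hypothetical choices for demonstration.
rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

members = []
for seed in range(5):
    idx = rng.integers(0, len(X), size=len(X))  # bootstrap resample
    clf = GradientBoostingClassifier(n_estimators=50, random_state=seed)
    clf.fit(X[idx], y[idx])
    members.append(clf)

# Stack each member's predicted P(y=1) and summarise across the ensemble.
probs = np.stack([m.predict_proba(X)[:, 1] for m in members])
mean_prob = probs.mean(axis=0)    # ensemble prediction
uncertainty = probs.std(axis=0)   # spread across members = model uncertainty
```

Inputs far from the training distribution tend to produce larger spread across ensemble members, which is the behaviour such probabilistic approaches exploit to flag unreliable predictions.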