ABSTRACT
Graph Neural Networks (GNNs) provide a robust framework for representing and analyzing interconnected data, combining graph theory with machine learning. Most studies focus on predictive performance, while uncertainty quantification receives far less attention. In this study, we measure the predictive uncertainty of several GNN models to show that high accuracy does not guarantee reliable predictions. We keep dropout active during the inference phase to quantify the uncertainty of these GNN models. This method, known as Monte Carlo Dropout (MCD), is an effective low-complexity approximation for estimating uncertainty. We evaluate five GNN models on a benchmark dataset: Graph Convolutional Network (GCN), Graph Attention Network (GAT), Personalized Propagation of Neural Predictions (PPNP), its fast approximation (APPNP), and GraphSAGE. GAT proved superior to all the other models in both accuracy and uncertainty for node classification. Among the remaining models, some that fared better in accuracy fell behind when compared on classification uncertainty.
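The MCD procedure described above can be sketched in a few lines: dropout is left active at inference time, the model is run T times on the same input, and the spread of the stochastic predictions serves as the uncertainty estimate. The sketch below uses a single toy GCN-style layer with illustrative random weights and a hand-made 4-node graph (all values are assumptions for demonstration, not the paper's actual models or data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 4 nodes, adjacency with self-loops, symmetrically normalized
# as in a single GCN layer (D^-1/2 A D^-1/2). Values are illustrative only.
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
deg = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(deg, deg))

X = rng.normal(size=(4, 8))   # node features (hypothetical)
W = rng.normal(size=(8, 3))   # "trained" weights (random stand-in)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def forward(X, p_drop=0.5):
    # Dropout stays ACTIVE at inference: each call samples a fresh mask,
    # so repeated calls give different predictions.
    mask = rng.random(X.shape) > p_drop
    X_d = X * mask / (1.0 - p_drop)      # inverted-dropout scaling
    return softmax(A_hat @ X_d @ W)      # one GCN layer + softmax

T = 100                                  # number of stochastic passes
probs = np.stack([forward(X) for _ in range(T)])  # (T, nodes, classes)

pred_mean = probs.mean(axis=0)           # predictive distribution
pred_std = probs.std(axis=0)             # per-class spread = uncertainty
print("prediction:", pred_mean.argmax(axis=1))
print("uncertainty (mean std):", pred_std.mean(axis=1))
```

In a real framework the same effect is obtained by keeping the dropout layers in training mode during inference while freezing everything else; the mean over passes is the prediction and the standard deviation is the uncertainty score used to compare models.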
Index Terms
- Exploring Node Classification Uncertainty in Graph Neural Networks