ABSTRACT
Out-of-distribution (OOD) detection, which aims to distinguish OOD samples from in-distribution (ID) ones at test time, has become an essential problem in machine learning. However, existing work is mostly conducted on Euclidean data, and the problem on graph-structured data remains under-explored. Several recent works have begun to study graph OOD detection, but they all require training a graph neural network (GNN) from scratch at high computational cost. In this work, we make the first attempt to endow a well-trained GNN with OOD detection ability without modifying its parameters. To this end, we design a post-hoc framework with an Adaptive Amplifier for Graph OOD Detection, named AAGOD, concentrating on data-centric manipulation. The insight of AAGOD is to superimpose a parameterized amplifier matrix on the adjacency matrix of each original input graph. The amplifiers can be seen as prompts and are expected to emphasize the key patterns helpful for graph OOD detection, thereby enlarging the gap between OOD and ID graphs. The well-trained GNN can then be reused to encode the amplified graphs into vector representations, and pre-defined scoring functions can further convert these representations into detection scores. Specifically, we design a Learnable Amplifier Generator (LAG) to customize amplifiers for different graphs, and propose a Regularized Learning Strategy (RLS) to train the parameters with no OOD data required. Experimental results show that AAGOD can be applied to various GNNs to enable OOD detection. Compared with the state-of-the-art baseline in graph OOD detection, AAGOD achieves a 6.21% average relative improvement in AUC and a 34 times faster training speed. Code and data are available at https://github.com/BUPT-GAMMA/AAGOD.
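The pipeline sketched in the abstract — superimpose an amplifier matrix on the adjacency matrix, encode the amplified graph with the frozen GNN, then score the resulting embedding — can be illustrated with a minimal sketch. All function names, the mean-aggregation GNN stand-in, the random placeholder amplifier, and the negative-norm scoring function below are illustrative assumptions; the paper's actual LAG and RLS components are not reproduced here.

```python
# Minimal, assumption-laden sketch of the AAGOD pipeline: amplify the
# adjacency matrix, encode with a frozen GNN, score the embedding.
import numpy as np

rng = np.random.default_rng(0)

def frozen_gnn(adj, feats):
    """Stand-in for a well-trained, frozen GNN: one mean-aggregation
    message-passing layer followed by mean pooling into a graph vector."""
    deg = adj.sum(axis=1, keepdims=True) + 1e-8
    h = (adj @ feats) / deg          # neighborhood aggregation
    return h.mean(axis=0)            # graph-level readout

def amplifier(adj):
    """Placeholder for the Learnable Amplifier Generator (LAG): produces a
    non-negative matrix that reweights edges element-wise; here the weights
    are random rather than learned."""
    g = 1.0 + rng.random(adj.shape)
    return adj * g                   # superimpose on the adjacency matrix

def detection_score(z):
    """Pre-defined scoring function mapping a graph embedding to a scalar;
    here simply the negative L2 norm (higher = more ID-like)."""
    return -np.linalg.norm(z)

# A toy undirected 4-node graph with 3-dimensional node features.
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 1],
                [0, 1, 1, 0]], dtype=float)
feats = rng.random((4, 3))

z = frozen_gnn(amplifier(adj), feats)  # GNN parameters stay untouched
score = detection_score(z)
```

The key property this sketch preserves is that only the amplifier is (in the full method) trainable: the GNN encoder is reused as-is, which is what makes the framework post-hoc.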
A Data-centric Framework to Endow Graph Neural Networks with Out-Of-Distribution Detection Ability