Human-Driven Active Verification for Efficient and Trustworthy Graph Classification

  • Conference paper
  • In: Advances in Knowledge Discovery and Data Mining (PAKDD 2024)

Abstract

Graph representation learning methods have significantly transformed applications across many domains. However, their success often comes at the cost of interpretability, hindering their adoption in critical decision-making scenarios. In conventional graph classification, domain expertise is rarely integrated into model training, leading to discrepancies between human and model decisions. To address this, we introduce a novel framework that incorporates active human verification into the graph classification process. Our approach features a human-aligned representation learning component that integrates Graph Neural Network architectures with human domain knowledge and feedback. The framework enhances model transparency and interpretability and fosters collaborative decision-making between humans and AI systems. Extensive evaluations and user studies demonstrate the efficiency of our framework.
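
The abstract describes the human-aligned representation learning component only at a high level. The sketch below is a rough illustration of the general idea rather than the authors' actual method: a plain-PyTorch GNN graph classifier whose embedding space is additionally shaped by expert similarity feedback. All names (SimpleGNNEncoder, HumanAlignedClassifier, human_aligned_loss), the mean-aggregation message passing, and the contrastive feedback term are assumptions made for this example.

```python
# Hypothetical sketch (not the paper's implementation): a GNN graph classifier
# whose embeddings are aligned with human similarity judgments via an
# auxiliary contrastive loss on expert-labeled graph pairs.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleGNNEncoder(nn.Module):
    """Two rounds of mean-neighborhood message passing, then mean pooling."""

    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, hid_dim)

    def forward(self, x, adj):
        # x: (N, in_dim) node features, adj: dense (N, N) adjacency with self-loops
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        h = F.relu(self.lin1(adj @ x / deg))
        h = F.relu(self.lin2(adj @ h / deg))
        return h.mean(dim=0)  # graph-level embedding, shape (hid_dim,)


class HumanAlignedClassifier(nn.Module):
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.encoder = SimpleGNNEncoder(in_dim, hid_dim)
        self.head = nn.Linear(hid_dim, n_classes)

    def forward(self, x, adj):
        z = self.encoder(x, adj)
        return self.head(z), z


def human_aligned_loss(model, graph, label, feedback_pairs, margin=1.0, alpha=0.5):
    """Classification loss plus a contrastive term driven by expert feedback.

    feedback_pairs: iterable of ((x_other, adj_other), is_similar) judgments
    from a domain expert; graphs judged similar are pulled together in the
    embedding space, dissimilar ones are pushed apart.
    """
    x, adj = graph
    logits, z = model(x, adj)
    loss = F.cross_entropy(logits.unsqueeze(0), label.unsqueeze(0))
    for (x_o, adj_o), is_similar in feedback_pairs:
        _, z_o = model(x_o, adj_o)
        d = torch.norm(z - z_o)
        loss = loss + alpha * (d ** 2 if is_similar else F.relu(margin - d) ** 2)
    return loss


if __name__ == "__main__":
    # Toy usage: one 4-node graph with 3-dimensional node features.
    x = torch.randn(4, 3)
    adj = torch.eye(4)
    adj[0, 1] = adj[1, 0] = adj[1, 2] = adj[2, 1] = 1.0
    model = HumanAlignedClassifier(in_dim=3, hid_dim=16, n_classes=2)
    label = torch.tensor(1)
    feedback = [((torch.randn(4, 3), adj.clone()), True)]
    loss = human_aligned_loss(model, (x, adj), label, feedback)
    loss.backward()
    print(float(loss))
```

The contrastive form of the feedback term is chosen here only for brevity; a triplet or prototype-based objective would serve the same illustrative purpose.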



Acknowledgments

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. RS-2023-00222663, RS-2023-00262885).

Author information

Correspondence to Tien-Cuong Bui.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Bui, TC., Li, WS. (2024). Human-Driven Active Verification for Efficient and Trustworthy Graph Classification. In: Yang, DN., Xie, X., Tseng, V.S., Pei, J., Huang, JW., Lin, J.CW. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2024. Lecture Notes in Computer Science, vol 14645. Springer, Singapore. https://doi.org/10.1007/978-981-97-2242-6_9


  • DOI: https://doi.org/10.1007/978-981-97-2242-6_9

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-2241-9

  • Online ISBN: 978-981-97-2242-6

  • eBook Packages: Computer Science, Computer Science (R0)
