Enhancing Large Language Models with Graph-Based Node Sampling for Fault Attribution in Power Distribution Networks

  • Conference paper

Advanced Intelligent Computing Technology and Applications (ICIC 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14874)


Abstract

The increasing complexity of power distribution networks demands advanced methods of fault attribution analysis to maintain system reliability and stability. This paper introduces a novel approach that integrates large language models (LLMs) with domain-specific knowledge graphs to address the challenges posed by high-dimensional, intricate fault data in power distribution networks. A multi-dimensional fault ontology [13] is proposed to structure the fault data efficiently, enabling the construction of a comprehensive fault knowledge graph. To improve the LLM's diagnostic predictions, a reinforcement learning-based node selection algorithm strategically chooses pertinent nodes from the fault graph, strengthening the model's reasoning. Experimental results show that this approach surpasses both traditional statistical methods and direct LLM reasoning, achieving higher accuracy and efficiency in fault diagnosis. Selective sampling of knowledge-graph nodes filters irrelevant noise out of the fault data, sharpening the LLM's focus and mitigating "AI hallucinations," thereby improving analytical precision. Validation on a real-world dataset from a power company confirms the method's efficacy, enabling fast, accurate fault analysis and reducing the time required for power grid fault diagnosis.
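
The abstract describes the node-selection step only at a high level, so the sketch below is a hypothetical illustration rather than the authors' algorithm: a REINFORCE-style policy that learns which fault-graph nodes to sample, with the sampled nodes' descriptions assembled into the context given to the LLM. The toy graph, the two-dimensional node features, the overlap reward, and all identifiers are assumptions made for the demonstration.

```python
# Hypothetical sketch only: the paper's actual node-selection algorithm is not
# given in the abstract. This demonstrates the general idea with a
# REINFORCE-style policy over a toy fault knowledge graph.
import math
import random

# Toy fault knowledge graph: node name -> (feature vector, text snippet).
# Features and snippets are invented for illustration.
NODES = {
    "feeder_overload": ([1.0, 0.2], "Feeder F12 current exceeded rated capacity."),
    "breaker_trip":    ([0.8, 0.9], "Breaker B3 tripped at 14:02."),
    "weather_storm":   ([0.1, 0.7], "Thunderstorm reported in the service area."),
    "scheduled_maint": ([0.0, 0.1], "Routine maintenance logged last week."),
}
RELEVANT = {"feeder_overload", "breaker_trip"}  # assumed ground-truth labels
K = 2  # nodes sampled per episode

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def node_scores(theta):
    # Linear scoring of each node's features under policy parameters theta.
    return [sum(w * f for w, f in zip(theta, feats)) for feats, _ in NODES.values()]

def sample_nodes(theta):
    names = list(NODES)
    probs = softmax(node_scores(theta))
    return random.choices(names, weights=probs, k=K), probs

def reward(picked):
    # Illustrative reward: fraction of distinct sampled nodes that are relevant.
    return len(set(picked) & RELEVANT) / len(set(picked))

theta, lr = [0.0, 0.0], 0.5
for _ in range(200):                      # plain REINFORCE updates
    picked, probs = sample_nodes(theta)
    r = reward(picked)
    for i, name in enumerate(NODES):
        # d log-prob / d score_i for K independent categorical draws.
        grad = picked.count(name) - K * probs[i]
        feats = NODES[name][0]
        for j in range(len(theta)):
            theta[j] += lr * r * grad * feats[j]

# After training, sampled node descriptions form the context for the LLM query.
picked, _ = sample_nodes(theta)
context = "Fault context:\n" + "\n".join(NODES[n][1] for n in sorted(set(picked)))
print(context)
```

In the paper's setting the reward would presumably come from diagnostic accuracy rather than label overlap, but the loop of sampling graph nodes and then prompting the model with their contents has the same shape.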

References

  1. Achiam, J., et al.: GPT-4 technical report. arXiv preprint arXiv:2303.08774 (2023)

  2. Feng, Y., Chen, X., Lin, B.Y., Wang, P., Yan, J., Ren, X.: Scalable multi-hop relational reasoning for knowledge-aware question answering. arXiv preprint arXiv:2005.00646 (2020)

  3. Juliani, A.: Simple reinforcement learning with TensorFlow part 8: asynchronous actor-critic agents (A3C). Medium, 16 Dec 2016

  4. Kaelbling, L.P., Littman, M.L., Moore, A.W.: Reinforcement learning: a survey. J. Artif. Intell. Res. 4, 237–285 (1996)

  5. Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)

  6. Lewis, P., et al.: Retrieval-augmented generation for knowledge-intensive NLP tasks. In: Advances in Neural Information Processing Systems, vol. 33, pp. 9459–9474 (2020)

  7. Lin, B.Y., Chen, X., Chen, J., Ren, X.: KagNet: knowledge-aware graph networks for commonsense reasoning. arXiv preprint arXiv:1909.02151 (2019)

  8. Logan IV, R.L., Liu, N.F., Peters, M.E., Gardner, M., Singh, S.: Barack's wife Hillary: using knowledge graphs for fact-aware language modeling. arXiv preprint arXiv:1906.07241 (2019)

  9. Maleki, N., Padmanabhan, B., Dutta, K.: AI hallucinations: a misnomer worth clarifying (2024)

  10. Meng, Z., Liu, F., Shareghi, E., Su, Y., Collins, C., Collier, N.: Rewire-then-probe: a contrastive recipe for probing biomedical knowledge of pre-trained language models. arXiv preprint arXiv:2110.08173 (2021)

  11. OpenAI: ChatGPT. https://openai.com/blog/chatgpt (2022)

  12. Petroni, F., et al.: Language models as knowledge bases? arXiv preprint arXiv:1909.01066 (2019)

  13. Schaffer, J.: Is there a fundamental level? Noûs 37(3), 498–517 (2003)

  14. Sun, Y., Shi, Q., Qi, L., Zhang, Y.: JointLK: joint reasoning with language models and knowledge graphs for commonsense question answering. arXiv preprint arXiv:2112.02732 (2021)

  15. Sung, M., Lee, J., Yi, S., Jeon, M., Kim, S., Kang, J.: Can language models be biomedical knowledge bases? arXiv preprint arXiv:2109.07154 (2021)

  16. Swamy, V., Romanou, A., Jaggi, M.: Interpreting language models through knowledge graph extraction. arXiv preprint arXiv:2111.08546 (2021)

  17. Wilmot, D., Keller, F.: Memory and knowledge augmented language models for inferring salience in long-form stories. arXiv preprint arXiv:2109.03754 (2021)

  18. Wu, Y., Zhao, Y., Hu, B., Minervini, P., Stenetorp, P., Riedel, S.: An efficient memory-augmented transformer for knowledge-intensive NLP tasks. arXiv preprint arXiv:2210.16773 (2022)

  19. Yasunaga, M., Ren, H., Bosselut, A., Liang, P., Leskovec, J.: QA-GNN: reasoning with language models and knowledge graphs for question answering. arXiv preprint arXiv:2104.06378 (2021)

Acknowledgments

This work is supported by the Research Funds from State Grid Fujian (Research on Key Knowledge-Data Driven Event Knowledge Graph Technologies for Intelligent Decision-Making in Distribution Networks, 521304230008).

Author information

Corresponding author

Correspondence to Deng Pan.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Liu, B. et al. (2024). Enhancing Large Language Models with Graph-Based Node Sampling for Fault Attribution in Power Distribution Networks. In: Huang, D.S., Chen, W., Zhang, Q. (eds) Advanced Intelligent Computing Technology and Applications. ICIC 2024. Lecture Notes in Computer Science, vol 14874. Springer, Singapore. https://doi.org/10.1007/978-981-97-5618-6_37

  • DOI: https://doi.org/10.1007/978-981-97-5618-6_37

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-5617-9

  • Online ISBN: 978-981-97-5618-6

  • eBook Packages: Computer Science, Computer Science (R0)
