Abstract
The increasing complexity of power distribution networks demands advanced fault attribution methods to maintain system reliability and stability. This paper introduces an approach that integrates large language models (LLMs) with domain-specific knowledge graphs to address the challenges posed by high-dimensional, intricate fault data in power distribution networks. A multi-dimensional fault ontology [13] is proposed to structure the fault data efficiently, enabling the construction of a comprehensive fault knowledge graph. To improve the LLM's diagnostic predictions, a reinforcement learning-based node selection algorithm strategically chooses pertinent nodes from the fault graph, strengthening the model's reasoning. Experimental results show that this approach outperforms traditional statistical methods and direct LLM reasoning, achieving higher accuracy and efficiency in fault diagnosis. Selective knowledge graph node sampling filters irrelevant noise out of the fault data, sharpening the LLM's focus and mitigating "AI hallucinations," thereby improving analytical precision. Validation on a real-world dataset from a power company confirms the method's efficacy, enabling fast and accurate fault analysis while reducing the time required for power grid fault diagnosis.
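The pipeline the abstract describes, selecting a small set of relevant knowledge-graph nodes and using only those as LLM context, can be illustrated with a minimal sketch. All node names, the keyword-overlap scoring function, and the prompt format below are hypothetical stand-ins: the paper learns the selection policy with reinforcement learning, whereas this toy version ranks nodes by simple term overlap purely to show the sample-then-prompt structure.

```python
def relevance(node_text, query_terms):
    """Toy relevance score: fraction of query terms appearing in the node text.
    A stand-in for the learned RL selection policy described in the paper."""
    text = node_text.lower()
    hits = sum(1 for t in query_terms if t in text)
    return hits / len(query_terms)

def sample_nodes(graph, query, k=3):
    """Select the top-k graph nodes most relevant to the fault query,
    filtering out noise nodes before the LLM ever sees them."""
    terms = [t.lower() for t in query.split()]
    ranked = sorted(graph.items(),
                    key=lambda kv: relevance(kv[1], terms),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

def build_prompt(graph, query, k=3):
    """Assemble an LLM prompt from the sampled, noise-filtered context."""
    selected = sample_nodes(graph, query, k)
    context = "\n".join(f"- {n}: {graph[n]}" for n in selected)
    return f"Fault context:\n{context}\n\nQuestion: {query}"

# Hypothetical fault knowledge graph: node name -> description.
fault_kg = {
    "feeder_12": "feeder overload recorded during peak load on feeder 12",
    "breaker_a": "breaker A tripped after overcurrent on feeder 12",
    "transformer_t3": "transformer T3 oil temperature normal",
    "weather_log": "routine weather report, no storm activity",
}

prompt = build_prompt(fault_kg, "why did breaker A trip on feeder 12", k=2)
```

Only the two nodes most related to the query reach the prompt; low-scoring nodes such as the routine weather log are dropped, which is the noise-filtering effect the abstract attributes to selective node sampling.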
References
Achiam, J., et al.: GPT-4 technical report. arXiv preprint arXiv:2303.08774 (2023)
Feng, Y., Chen, X., Lin, B.Y., Wang, P., Yan, J., Ren, X.: Scalable multi-hop relational reasoning for knowledge-aware question answering. arXiv preprint arXiv:2005.00646 (2020)
Juliani, A.: Simple reinforcement learning with TensorFlow part 8: asynchronous actor-critic agents (A3C). Medium, 16 Dec 2016
Kaelbling, L.P., Littman, M.L., Moore, A.W.: Reinforcement learning: a survey. J. Artif. Intell. Res. 4, 237–285 (1996)
Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
Lewis, P., et al.: Retrieval-augmented generation for knowledge-intensive NLP tasks. In: Advances in Neural Information Processing Systems, vol. 33, pp. 9459–9474 (2020)
Lin, B.Y., Chen, X., Chen, J., Ren, X.: KagNet: knowledge-aware graph networks for commonsense reasoning. arXiv preprint arXiv:1909.02151 (2019)
Logan IV, R.L., Liu, N.F., Peters, M.E., Gardner, M., Singh, S.: Barack's wife Hillary: using knowledge graphs for fact-aware language modeling. arXiv preprint arXiv:1906.07241 (2019)
Maleki, N., Padmanabhan, B., Dutta, K.: AI hallucinations: a misnomer worth clarifying (2024)
Meng, Z., Liu, F., Shareghi, E., Su, Y., Collins, C., Collier, N.: Rewire-then-probe: a contrastive recipe for probing biomedical knowledge of pre-trained language models. arXiv preprint arXiv:2110.08173 (2021)
OpenAI: ChatGPT. https://openai.com/blog/chatgpt (2022)
Petroni, F., et al.: Language models as knowledge bases? arXiv preprint arXiv:1909.01066 (2019)
Schaffer, J.: Is there a fundamental level? Noûs 37(3), 498–517 (2003)
Sun, Y., Shi, Q., Qi, L., Zhang, Y.: JointLK: joint reasoning with language models and knowledge graphs for commonsense question answering. arXiv preprint arXiv:2112.02732 (2021)
Sung, M., Lee, J., Yi, S., Jeon, M., Kim, S., Kang, J.: Can language models be biomedical knowledge bases? arXiv preprint arXiv:2109.07154 (2021)
Swamy, V., Romanou, A., Jaggi, M.: Interpreting language models through knowledge graph extraction. arXiv preprint arXiv:2111.08546 (2021)
Wilmot, D., Keller, F.: Memory and knowledge augmented language models for inferring salience in long-form stories. arXiv preprint arXiv:2109.03754 (2021)
Wu, Y., Zhao, Y., Hu, B., Minervini, P., Stenetorp, P., Riedel, S.: An efficient memory-augmented transformer for knowledge-intensive NLP tasks. arXiv preprint arXiv:2210.16773 (2022)
Yasunaga, M., Ren, H., Bosselut, A., Liang, P., Leskovec, J.: QA-GNN: reasoning with language models and knowledge graphs for question answering. arXiv preprint arXiv:2104.06378 (2021)
Acknowledgments
This work is supported by the Research Funds from State Grid Fujian (Research on Key Knowledge-Data Driven Event Knowledge Graph Technologies for Intelligent Decision-Making in Distribution Networks, 521304230008).
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Liu, B. et al. (2024). Enhancing Large Language Models with Graph-Based Node Sampling for Fault Attribution in Power Distribution Networks. In: Huang, DS., Chen, W., Zhang, Q. (eds) Advanced Intelligent Computing Technology and Applications. ICIC 2024. Lecture Notes in Computer Science, vol 14874. Springer, Singapore. https://doi.org/10.1007/978-981-97-5618-6_37
Publisher Name: Springer, Singapore
Print ISBN: 978-981-97-5617-9
Online ISBN: 978-981-97-5618-6
eBook Packages: Computer Science (R0)