
Self-consistency, Extract and Rectify: Knowledge Graph Enhance Large Language Model for Electric Power Question Answering

  • Conference paper
Advanced Intelligent Computing Technology and Applications (ICIC 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14873)


Abstract

Electric power artificial intelligence has advanced rapidly in recent years, encompassing safety detection, assistant decision-making, and optimal scheduling. With the rise of Large Language Models (LLMs), knowledge-based AI is becoming increasingly prevalent across domains. However, in the electric power field, most knowledge-based AI centers on Knowledge Graph (KG) techniques, and comparatively little research has addressed power-domain LLMs. In this paper, inspired by Self-Consistency (SC), we propose SCER, a Self-Consistency, Extract and Rectify framework for applying KG-enhanced LLMs to power operations and maintenance (O&M) question answering. Specifically, we transfer SC from the general-purpose domain to the power domain and replace the original model with a Chinese sentence representation model to better localize it. We design an Extract Mechanism that generates evidence chains through multiple random walks on the POMKG, and a Rectify Mechanism that corrects the scores of the generated rationales. Extensive experiments and case studies on the POMQA dataset demonstrate the effectiveness of SCER in transferring and improving SC in the power field.
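The abstract describes three cooperating steps: extracting evidence chains via random walks over a knowledge graph, rectifying the scores of sampled rationales, and aggregating answers by self-consistency voting. The sketch below is a minimal illustration of that pipeline under stated assumptions, not the paper's implementation: `toy_kg`, the word-overlap `bow_similarity` stand-in for the Chinese sentence representation model, and the sampled (answer, rationale) pairs are all hypothetical.

```python
# Illustrative sketch only: a toy analogue of SCER's Extract (random-walk
# evidence chains over a KG) and Rectify (re-scoring sampled rationales)
# steps, combined with self-consistency voting. All names and data here
# (toy_kg, bow_similarity, the sample answers) are hypothetical stand-ins.
import random
from collections import defaultdict

# Toy power-O&M knowledge graph: entity -> list of (relation, entity) edges.
toy_kg = {
    "transformer": [("has_fault", "winding overheating"), ("part_of", "substation")],
    "winding overheating": [("caused_by", "overload"), ("detected_by", "thermal sensor")],
    "overload": [("mitigated_by", "load shedding")],
    "substation": [("monitored_by", "SCADA")],
}

def random_walk_chain(kg, start, max_hops=3, rng=random):
    """Extract one evidence chain by walking random edges from a start entity."""
    chain, node = [], start
    for _ in range(max_hops):
        edges = kg.get(node)
        if not edges:
            break
        rel, nxt = rng.choice(edges)
        chain.append((node, rel, nxt))
        node = nxt
    return chain

def extract_evidence(kg, question_entities, n_walks=5):
    """Extract Mechanism analogue: several random walks per question entity."""
    chains = []
    for ent in question_entities:
        if ent in kg:
            chains.extend(random_walk_chain(kg, ent) for _ in range(n_walks))
    return [c for c in chains if c]

def bow_similarity(a, b):
    """Stand-in for a sentence representation model: bag-of-words overlap."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / max(1, len(sa | sb))

def rectify_score(rationale, chains):
    """Rectify Mechanism analogue: score a rationale by its best match
    against any extracted evidence chain."""
    texts = [" ".join(f"{h} {r} {t}" for h, r, t in c) for c in chains]
    return max((bow_similarity(rationale, t) for t in texts), default=0.0)

def self_consistent_answer(samples, chains):
    """Self-consistency vote over sampled answers, each vote weighted by
    the rectified score of its rationale."""
    votes = defaultdict(float)
    for answer, rationale in samples:
        votes[answer] += rectify_score(rationale, chains)
    return max(votes, key=votes.get) if votes else None

if __name__ == "__main__":
    chains = extract_evidence(toy_kg, ["transformer"])
    # Hypothetical LLM samples: (answer, rationale) pairs.
    samples = [
        ("load shedding", "winding overheating caused_by overload mitigated_by load shedding"),
        ("replace transformer", "the transformer is old so replace it"),
        ("load shedding", "overload mitigated_by load shedding"),
    ]
    print(self_consistent_answer(samples, chains))  # expected: "load shedding"
```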



Acknowledgments

This work is supported by the Research Funds from State Grid Gansu (SGGSKY00XTJS2310058).

Author information


Corresponding author

Correspondence to Chentao Zhang.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Zhao, J., Ma, Z., Zhao, H., Zhang, X., Liu, Q., Zhang, C. (2024). Self-consistency, Extract and Rectify: Knowledge Graph Enhance Large Language Model for Electric Power Question Answering. In: Huang, DS., Pan, Y., Guo, J. (eds) Advanced Intelligent Computing Technology and Applications. ICIC 2024. Lecture Notes in Computer Science, vol 14873. Springer, Singapore. https://doi.org/10.1007/978-981-97-5615-5_40


  • DOI: https://doi.org/10.1007/978-981-97-5615-5_40

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-5614-8

  • Online ISBN: 978-981-97-5615-5

  • eBook Packages: Computer Science, Computer Science (R0)
