Abstract
Electric power artificial intelligence has advanced rapidly in recent years, spanning safety detection, decision support, and optimal scheduling. With the rise of Large Language Models (LLMs), knowledge-based AI is becoming increasingly prevalent across domains. In the electric power field, however, most knowledge-based AI centers on Knowledge Graph (KG) techniques, and research on power-domain LLMs remains limited. In this paper, inspired by Self-Consistency (SC), we propose a Self-Consistency, Extract and Rectify framework (SCER) for KG-enhanced LLMs in power operations and maintenance (O&M) question answering. Specifically, we transfer SC from the general domain to the power domain and replace the original model with a Chinese sentence representation model to better localize it. We design an Extract Mechanism that generates evidence chains through multiple random walks on the POMKG, and a Rectify Mechanism that corrects the scores of the generated rationales. Extensive experiments and case studies on the POMQA dataset demonstrate the effectiveness of the proposed SCER for SC transfer and improvement in the power field.
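The abstract only summarizes the two mechanisms, so the sketch below is a rough illustration rather than the authors' implementation: random walks over a triple-store KG to collect evidence chains (the Extract Mechanism's stated idea), and a simple self-consistency vote that groups sampled answers by embedding similarity (standing in for the Chinese sentence representation model). All names here (build_evidence_chains, self_consistency_vote, the triple format, toy_encode) are hypothetical assumptions for illustration.

```python
# Hypothetical sketch, NOT the paper's implementation.
import random
from collections import defaultdict

def build_evidence_chains(triples, start_entity, num_walks=5, walk_len=3, seed=0):
    """Run several random walks from start_entity over (head, relation, tail)
    triples; each walk yields a chain of triples usable as evidence."""
    rng = random.Random(seed)
    out_edges = defaultdict(list)
    for h, r, t in triples:
        out_edges[h].append((r, t))
    chains = []
    for _ in range(num_walks):
        node, chain = start_entity, []
        for _ in range(walk_len):
            if not out_edges[node]:
                break
            r, t = rng.choice(out_edges[node])
            chain.append((node, r, t))
            node = t
        if chain:
            chains.append(chain)
    return chains

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def self_consistency_vote(answers, encode, sim_threshold=0.9):
    """Cluster sampled answers whose embeddings are near-duplicates,
    then return a representative of the largest cluster (majority vote)."""
    clusters = []  # list of (representative_vector, member_answers)
    for ans in answers:
        vec = encode(ans)
        for rep_vec, members in clusters:
            if cosine(vec, rep_vec) >= sim_threshold:
                members.append(ans)
                break
        else:
            clusters.append((vec, [ans]))
    _, members = max(clusters, key=lambda c: len(c[1]))
    return members[0]

def toy_encode(text):
    # Stand-in for a real sentence encoder; a Chinese embedding model
    # would be used in practice per the abstract.
    vec = [0.0] * 64
    for ch in text:
        vec[ord(ch) % 64] += 1.0
    return vec

# Usage with made-up O&M triples and sampled answers:
triples = [
    ("transformer", "has_fault", "oil_leak"),
    ("oil_leak", "handled_by", "gasket_replacement"),
    ("transformer", "located_in", "substation_A"),
]
print(build_evidence_chains(triples, "transformer", num_walks=3, walk_len=2))
print(self_consistency_vote(["110 kV", "110kV", "35 kV"], toy_encode, 0.8))
```

In this toy run the two near-duplicate answers "110 kV" and "110kV" fall into one cluster and win the vote; the Rectify Mechanism's score correction over rationales is not shown here, as the abstract gives no detail on it.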
Acknowledgments
This work is supported by the Research Funds from State Grid Gansu (SGGSKY00XTJS2310058).
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Zhao, J., Ma, Z., Zhao, H., Zhang, X., Liu, Q., Zhang, C. (2024). Self-consistency, Extract and Rectify: Knowledge Graph Enhance Large Language Model for Electric Power Question Answering. In: Huang, DS., Pan, Y., Guo, J. (eds) Advanced Intelligent Computing Technology and Applications. ICIC 2024. Lecture Notes in Computer Science, vol 14873. Springer, Singapore. https://doi.org/10.1007/978-981-97-5615-5_40
Publisher Name: Springer, Singapore
Print ISBN: 978-981-97-5614-8
Online ISBN: 978-981-97-5615-5
eBook Packages: Computer Science (R0)