This study investigates the use of large language models to verbalize answers generated by knowledge graph question answering (KGQA) systems. In user-centric applications, such as dialogue systems and voice assistants, answer verbalization is an essential step to enhance the quality of interactions.
Methodology:
We experimented with different large language models to verbalize answers from knowledge-based question-answering systems. In particular, we fine-tuned three models (T5, BART, and PEGASUS) on different inputs, including SPARQL queries and triples, to determine which performs best for answer verbalization.
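As a rough illustration of the kind of input such fine-tuning relies on, the sketch below serializes a question, its SPARQL query, supporting triples, and the raw answer into a single sequence that a seq2seq model like T5 could be trained on. The function name, field labels, and serialization format are illustrative assumptions, not the paper's exact scheme.

```python
# Hypothetical sketch: flattening a KGQA record into one input string
# for a seq2seq verbalization model. Field labels ("question:", "sparql:",
# "triples:", "answer:") are assumed, not taken from the paper.

def build_verbalization_input(question, sparql, triples, answer):
    """Concatenate the question, SPARQL query, supporting triples,
    and the raw answer into a single model input sequence."""
    # Join each (subject, predicate, object) triple with spaces,
    # and separate triples with " ; ".
    triple_text = " ; ".join(" ".join(t) for t in triples)
    return (f"question: {question} "
            f"sparql: {sparql} "
            f"triples: {triple_text} "
            f"answer: {answer}")

record = build_verbalization_input(
    "Who wrote Hamlet?",
    "SELECT ?a WHERE { dbr:Hamlet dbo:author ?a }",
    [("dbr:Hamlet", "dbo:author", "dbr:William_Shakespeare")],
    "William Shakespeare",
)
print(record)
```

The target side of each training pair would then be the fluent natural-language answer (e.g. "Hamlet was written by William Shakespeare"), letting the model learn to exploit the query and triple context during generation.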
Findings:
We found that fine-tuning language models and introducing additional knowledge, such as SPARQL queries, achieves state-of-the-art results in verbalizing answers from KGQA systems.
Value:
Our approach can be used to generate answer verbalizations for different KGQA systems, supporting applications such as dialogue systems and voice assistants.