Abstract:
The accelerating aging of the global population brings a substantial demand for healthcare knowledge among the elderly. Large Language Model (LLM)-based Conversational Agents (CAs) hold significant promise for addressing the elderly's healthcare knowledge inquiries. Yet, general LLMs often fall short of providing professional and practically usable healthcare conversations due to a lack of domain-specific knowledge, possible hallucination issues, and contextual comprehension biases. To address these challenges, we first propose a cost-effective, domain-specific question-answering (QA) generation framework based on knowledge distillation (KD). Building on this framework, we then construct CareQA, the first Chinese healthcare QA dataset specifically for the elderly, comprising 41,694 QA pairs spanning multiple categories of geriatric diseases. A comprehensive benchmarking experiment, including both automated and human evaluation, is conducted to examine the usability of CareQA. The results demonstrate that LLMs fine-tuned on CareQA perform better in answering elderly healthcare-related questions.
Date of Conference: 03-06 December 2024
Date Added to IEEE Xplore: 10 January 2025