Abstract
Embodied conversational agents (ECAs) are digital characters that behave like humans and engage in humanlike dialogue. As digital twins of human coaches, they are cost-effective and readily accessible, whereas human coaches are typically expensive and have long wait times for appointments. Supporting a healthy lifestyle may require multiple health experts, and our multi-agent digital twins give access to all of the coaches within a single session. To motivate users and encourage adherence to the advice given, these coaches use relational cues in their dialogues, including empowerment, working-alliance and affirmation cues, which are found in actual human-human coaching sessions. Our digital twins simulate three coaches, experts in diet, physical activity and cognitive health respectively, who collaborate with each other and with the human user to provide a holistic coaching experience. This paper reports on our use of Generative AI to modify neutral dialogues with these relational cues. We recommend that both automated and human validation be undertaken when such modifications are made in the context of health advice.
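As a minimal sketch of the kind of pipeline the abstract describes, the snippet below prompts a general-purpose LLM to restate a neutral coaching utterance with a chosen relational cue, then runs an automated semantic-similarity check before flagging anything for human review. The model names, prompt wording, the helper functions (add_relational_cues, advice_preserved) and the 0.7 similarity threshold are illustrative assumptions, not the configuration used in the paper.

```python
# Hypothetical sketch: inject relational cues into a neutral coaching utterance
# with an LLM, then apply an automated semantic-similarity check. All names and
# thresholds below are assumptions for illustration only.
from openai import OpenAI
from sentence_transformers import SentenceTransformer, util

client = OpenAI()                                   # requires OPENAI_API_KEY
encoder = SentenceTransformer("all-MiniLM-L6-v2")   # any sentence encoder could be used

RELATIONAL_CUES = ["empowerment", "working alliance", "affirmation"]

def add_relational_cues(neutral_utterance: str, cue: str) -> str:
    """Ask the LLM to restate a neutral coaching line so it expresses the given cue."""
    prompt = (
        f"Rewrite the following health-coach utterance so that it expresses "
        f"{cue} cues, without changing the underlying health advice:\n\n"
        f"{neutral_utterance}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

def advice_preserved(original: str, rewritten: str, threshold: float = 0.7) -> bool:
    """Automated check: flag rewrites that drift too far from the original advice."""
    embeddings = encoder.encode([original, rewritten])
    similarity = util.cos_sim(embeddings[0], embeddings[1]).item()
    return similarity >= threshold

if __name__ == "__main__":
    neutral = "You should aim for 30 minutes of moderate physical activity most days."
    for cue in RELATIONAL_CUES:
        rewritten = add_relational_cues(neutral, cue)
        flag = "" if advice_preserved(neutral, rewritten) else "  [needs human review]"
        print(f"{cue}: {rewritten}{flag}")
```

An automated check of this kind can only confirm that the rewritten utterance stays close to the original advice; it cannot verify that the advice remains safe or clinically appropriate, which is why the abstract recommends pairing automated validation with human validation.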
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Salman, S., Richards, D. (2025). Collaborating Digital Twins for Health Coaching. In: Mathieu, P., De la Prieta, F. (eds.) Advances in Practical Applications of Agents, Multi-Agent Systems, and Digital Twins: The PAAMS Collection. PAAMS 2024. Lecture Notes in Computer Science, vol. 15157. Springer, Cham. https://doi.org/10.1007/978-3-031-70415-4_20
DOI: https://doi.org/10.1007/978-3-031-70415-4_20
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-70414-7
Online ISBN: 978-3-031-70415-4
eBook Packages: Computer Science, Computer Science (R0)