
Mobile Conversational Agents for Context-Aware Care Applications


Abstract

Smart mobile devices have fostered new interaction scenarios that demand sophisticated interfaces. The main mobile operating system vendors provide APIs with which developers can implement their own applications, including different solutions for graphical interfaces, sensor control, and voice interaction. Despite the usefulness of these resources, there are no defined strategies for coupling the multimodal interface with the capabilities that such devices offer to identify and adapt to user needs, which is particularly important in domains such as Ambient Assisted Living. In this paper, we propose a framework for developing context-aware multimodal conversational agents that dynamically incorporate user-specific requirements and preferences, as well as characteristics of the interaction environment, in order to improve and personalize the service provided. Our proposal integrates the facilities of the Android API into a modular architecture that emphasizes interaction management and context-awareness to build user-adapted, robust, and maintainable applications. As a proof of concept, we have used the proposed framework to develop an Android app for older adults with Alzheimer's disease that helps them preserve their cognitive abilities and enhance their relationship with their environment.
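The framework itself is not reproduced in this preview. As a rough illustration of the Android speech facilities the proposal builds on, the sketch below couples the platform TextToSpeech engine with RecognizerIntent-based speech recognition in a single activity. The class and helper names (ContextAwareAgentActivity, handleUserUtterance) are illustrative assumptions, not taken from the paper, and the context-aware dialogue-management step is indicated only by a placeholder comment.

```java
// Minimal sketch, assuming Android API level 21+ and a declared activity in the manifest.
// Names marked "illustrative" are not part of the authors' framework.
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognizerIntent;
import android.speech.tts.TextToSpeech;

import java.util.ArrayList;
import java.util.Locale;

public class ContextAwareAgentActivity extends Activity implements TextToSpeech.OnInitListener {

    private static final int REQ_SPEECH = 1;
    private TextToSpeech tts;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        tts = new TextToSpeech(this, this);   // spoken output channel
    }

    @Override
    public void onInit(int status) {
        if (status == TextToSpeech.SUCCESS) {
            tts.setLanguage(Locale.US);
            promptUser("What would you like to do?");
        }
    }

    // Speak a system prompt, then launch the platform speech recognizer for the user's reply.
    private void promptUser(String prompt) {
        tts.speak(prompt, TextToSpeech.QUEUE_FLUSH, null, "prompt");
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        startActivityForResult(intent, REQ_SPEECH);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == REQ_SPEECH && resultCode == RESULT_OK && data != null) {
            ArrayList<String> hypotheses =
                    data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            if (hypotheses != null && !hypotheses.isEmpty()) {
                handleUserUtterance(hypotheses.get(0));
            }
        }
    }

    // Illustrative placeholder: here a dialogue manager would combine the recognized
    // utterance with the context model (user profile, sensor readings) to choose the
    // next system action, as described in the paper.
    private void handleUserUtterance(String utterance) {
        tts.speak("You said: " + utterance, TextToSpeech.QUEUE_FLUSH, null, "echo");
    }

    @Override
    protected void onDestroy() {
        if (tts != null) tts.shutdown();
        super.onDestroy();
    }
}
```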


Notes

  1. https://www.ispeech.org/developers/android.

  2. http://www.w3.org/TR/soap.

  3. http://docs.oasis-open.org/ws-caf/ws-context/v1.0/wsctx.

  4. http://www.alzheimersblog.org.


Author information

Correspondence to David Griol.


About this article


Cite this article

Griol, D., Callejas, Z. Mobile Conversational Agents for Context-Aware Care Applications. Cogn Comput 8, 336–356 (2016). https://doi.org/10.1007/s12559-015-9352-x

