Abstract

Chatbots are increasingly used in many health care applications and are a useful tool for interacting with patients without the need for human intervention. However, if not carefully built and deployed, patients could face adverse effects from interacting with these systems, especially when they are in a vulnerable state of mind. This article explores some of the legal issues regarding the use of conversational agents aimed at offering psychological support.

Funded by the REMIDE project, Carlo Cattaneo University - LIUC.

Notes

1. Dr. Iulia Turc has argued that more attention should be paid to chatbots’ responses in cases of users’ suicidal ideation; see https://towardsdatascience.com/unconstrained-chatbots-condone-self-harm-e962509be2fa.

2. Replika has the fewest positive reviews compared to the other apps selected for this article, due to recent changes in the model and in the free services. Users stress that it does not seem intelligent and that the dialogue feels scripted; in addition, users report that their preferences on triggers were ignored. The majority of Wysa’s reviews contain positive feedback from users: some highlight how comfortable it is to have someone to talk to about their problems “anonymously”, claiming that it is better than seeking comfort from friends and family members; some claim that it has been useful for managing panic attacks and anxiety; others say that it helps them fall asleep. Negative reviews mostly focus on the fact that the conversation feels scripted, or that it is only available in English. Youper Therapy has some negative reviews due to technical issues, the lack of different languages, and repetitive scripts. Positive ratings highlight the possibility of performing mindfulness exercises and the fact that it helps users understand their feelings. InnerHour has many positive reviews as well, with only limited critical opinions from its users, who report quick mood improvements. Anima has many positive reviews, although it also has some mixed and negative ones. Users complain mostly about the price and the lack of different languages. Positive reviews report that it feels as if they were talking to a real person.

3. Reviews are now localized and ratings may vary across countries. The results shown are updated as of 4 May 2022.

4. See Wysa’s Privacy Policy: https://legal.wysa.io/privacy-policy. Data is only de-identified; therefore, it is still considered “pseudonymized” under the legal framework of the GDPR.

5. See Article 32 GDPR, first paragraph: “Taking into account the state of the art, the costs of implementation and the nature, scope, context and purposes of processing as well as the risk of varying likelihood and severity for the rights and freedoms of natural persons, the controller and the processor shall implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk”, and second paragraph: “In assessing the appropriate level of security account shall be taken in particular of the risks that are presented by processing, in particular from accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to personal data transmitted, stored or otherwise processed”.

6. In fact, many reviews point out that users feel more comfortable disclosing their problems to a chatbot than to family and friends, meaning that certain information is kept secret even from the closest persons in their lives.

7. See Wysa’s Privacy Policy: “No human has access to or gets to monitor or respond during your chat with the AI Coach”, ibid.

8. See Wysa’s Privacy Policy under the “How do we handle your data when used for research purposes?” section, ibid.

9. Civil Decision no. 9 of 13.04.2022; see the statement of the Romanian Data Protection Authority here: https://www.dataprotection.ro/?page=Comunicat_Presa_14_04_2022&lang=ro.

10. This is confirmed by the Guidelines issued by the Article 29 Working Party: “The controller cannot avoid the Article 22 provisions by fabricating human involvement. For example, if someone routinely applies automatically generated profiles to individuals without any actual influence on the result, this would still be a decision based solely on automated processing”.

11. As noted in the EDPB guidelines, “Pursuant to Article 5(1)(b) GDPR, obtaining valid consent is always preceded by the determination of a specific, explicit and legitimate purpose for the intended processing activity. The need for specific consent in combination with the notion of purpose limitation in Article 5(1)(b) functions as a safeguard against the gradual widening or blurring of purposes for which data is processed, after a data subject has agreed to the initial collection of the data. This phenomenon, also known as function creep, is a risk for data subjects, as it may result in unanticipated use of personal data by the controller or by third parties and in loss of data subject control”, and “controllers should provide specific information with each separate consent request about the data that are processed for each purpose, in order to make data subjects aware of the impact of the different choices they have”.

12. The explanatory memorandum of the AIA notes that “The regulation follows a risk-based approach, differentiating between uses of AI that create (i) an unacceptable risk, (ii) a high risk, and (iii) low or minimal risk”. Available at https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52021PC0206&from=EN.

13. See Article 7: “Where this Regulation refers to products or related services, such reference shall also be understood to include virtual assistants, insofar as they are used to access or control a product or related service”.

14. Recital 22 explains the scope of the Regulation with regard to virtual assistants and also hints at the relevance of chatbot apps.

15. The legal classification may differ as well: some authors have qualified those terms as mere juridical acts instead of contracts.

References

  1. Ahmed, A., et al.: A review of mobile chatbot apps for anxiety and depression and their self-care features. Comput. Methods Programs Biomed. Update 1, 100012 (2021)

  2. Alhasani, M., Mulchandani, D., Oyebode, O., Baghaei, N., Orji, R.: A systematic and comparative review of behavior change strategies in stress management apps: opportunities for improvement. Front. Public Health 10 (2022)

  3. de Almeida, R.S., da Silva, T.: AI chatbots in mental health: are we there yet? In: Digital Therapies in Psychosocial Rehabilitation and Mental Health, pp. 226–243 (2022)

4. Article 29 Data Protection Working Party: Guidelines on automated individual decision-making and profiling for the purposes of Regulation 2016/679, WP251 (2017)

  5. Callejas, Z., Griol, D.: Conversational agents for mental health and wellbeing. In: Lopez-Soto, T. (ed.) Dialog Systems. LAR, vol. 22, pp. 219–244. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-61438-6_11

  6. Corbett, C.F., Wright, P.J., Jones, K., Parmer, M.: Voice-activated virtual home assistant use and social isolation and loneliness among older adults: mini review. Front. Public Health 9 (2021)

  7. Denecke, K., Schmid, N., Nüssli, S., et al.: Implementation of cognitive behavioral therapy in e-mental health apps: literature review. J. Med. Internet Res. 24(3), e27791 (2022)

  8. Denecke, K., Vaaheesan, S., Arulnathan, A.: A mental health chatbot for regulating emotions (SERMO)-concept and usability test. IEEE Trans. Emerg. Topics Comput. 9(3), 1170–1182 (2020)

9. Dyoub, A., Costantini, S., Lisi, F.A.: An approach towards ethical chatbots in customer service. In: AIRO@AI*IA (2019)

10. Elshout, M., Elsen, M., Leenheer, J., Loos, M., Luzak, J., et al.: Study on consumers’ attitudes towards terms and conditions (T&Cs) (2016)

  11. Fitzpatrick, K.K., Darcy, A., Vierhile, M.: Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): a randomized controlled trial. JMIR Mental Health 4(2), e7785 (2017)

  12. Gaffney, H., Mansell, W., Tai, S., et al.: Conversational agents in the treatment of mental health problems: mixed-method systematic review. JMIR Mental Health 6(10), e14166 (2019)

  13. Gamble, A.: Artificial intelligence and mobile apps for mental healthcare: a social informatics perspective. Aslib J. Inf. Manage. 72(4), 509–523 (2020)

  14. Gerke, S., Minssen, T., Cohen, G.: Ethical and legal challenges of artificial intelligence-driven healthcare. In: Artificial Intelligence in Healthcare, pp. 295–336. Elsevier (2020)

  15. Goirand, M., Austin, E., Clay-Williams, R.: Implementing ethics in healthcare AI-based applications: a scoping review. Sci. Eng. Ethics 27(5), 1–53 (2021). https://doi.org/10.1007/s11948-021-00336-3

  16. Guarda, P., Petrucci, L.: Quando l’intelligenza artificiale parla: assistenti vocali e sanità digitale alla luce del nuovo regolamento generale in materia di protezione dei dati. BioLaw J.-Rivista di BioDiritto 2, 425–446 (2020)

  17. Hänold, S.: Profiling and automated decision-making: legal implications and shortcomings. In: Corrales, M., Fenwick, M., Forgó, N. (eds.) Robotics, AI and the Future of Law. PLBI, pp. 123–153. Springer, Singapore (2018). https://doi.org/10.1007/978-981-13-2874-9_6

  18. Hasal, M., Nowaková, J., Ahmed Saghair, K., Abdulla, H., Snášel, V., Ogiela, L.: Chatbots: security, privacy, data protection, and social aspects. Concurr. Comput. Pract. Exp. 33(19), e6426 (2021)

  19. Kretzschmar, K., Tyroll, H., Pavarini, G., Manzini, A., Singh, I., Group, N.Y.P.A.: Can your phone be your therapist? Young people’s ethical perspectives on the use of fully automated conversational agents (chatbots) in mental health support. Biomed. Inform. Insights 11, 1178222619829083 (2019)

20. Ludvigsen, K., Nagaraja, S., Angela, D.: When is software a medical device? Understanding and determining the “intention” and requirements for software as a medical device in European Union law. Eur. J. Risk Regul. 13(1), 78–93 (2022)

  21. Malgieri, G., Comandé, G.: Why a right to legibility of automated decision-making exists in the general data protection regulation. International Data Privacy Law (2017)

  22. Malik, T., Ambrose, A.J., Sinha, C., et al.: Evaluating user feedback for an artificial intelligence-enabled, cognitive behavioral therapy-based mental health app (Wysa): qualitative thematic analysis. JMIR Hum. Factors 9(2), e35668 (2022)

  23. Martinez-Martin, N.: Trusting the bot: addressing the ethical challenges of consumer digital mental health therapy. In: Developments in Neuroethics and Bioethics, vol. 3, pp. 63–91. Elsevier (2020)

  24. May, R., Denecke, K.: Security, privacy, and healthcare-related conversational agents: a scoping review. Inform. Health Soc. Care 47(2), 194–210 (2021)

  25. Möllmann, N.R., Mirbabaie, M., Stieglitz, S.: Is it alright to use artificial intelligence in digital health? A systematic literature review on ethical considerations. Health Inform. J. 27(4), 14604582211052392 (2021)

  26. Myers, A., Chesebrough, L., Hu, R., Turchioe, M.R., Pathak, J., Creber, R.M.: Evaluating commercially available mobile apps for depression self-management. In: AMIA Annual Symposium Proceedings, vol. 2020, p. 906. American Medical Informatics Association (2020)

  27. Parviainen, J., Rantala, J.: Chatbot breakthrough in the 2020s? An ethical reflection on the trend of automated consultations in health care. Med. Health Care Philos. 25(1), 61–71 (2022). https://doi.org/10.1007/s11019-021-10049-w

  28. Ricci, F.: Libertà e responsabilità nei contratti telematici. Studi in onore di Giuseppe Benedetti 3, 1593–1609 (2007)

  29. Ruane, E., Birhane, A., Ventresque, A.: Conversational AI: social and ethical considerations. In: AICS, pp. 104–115 (2019)

  30. Sedlakova, J., Trachsel, M.: Conversational artificial intelligence in psychotherapy: a new therapeutic tool or agent? Am. J. Bioeth. 1–10 (2022)

31. Sweeney, C., et al.: Can chatbots help support a person’s mental health? Perceptions and views from mental healthcare professionals and experts. ACM Trans. Comput. Healthc. 2(3), 1–15 (2021)

  32. Vaidyam, A.N., Wisniewski, H., Halamka, J.D., Kashavan, M.S., Torous, J.B.: Chatbots and conversational agents in mental health: a review of the psychiatric landscape. Can. J. Psychiatry 64(7), 456–464 (2019)

  33. Vanderlyn, L., Weber, G., Neumann, M., Väth, D., Meyer, S., Vu, N.T.: "It seemed like an annoying woman": on the perception and ethical considerations of affective language in text-based conversational agents. In: Proceedings of the 25th Conference on Computational Natural Language Learning, pp. 44–57 (2021)

  34. Veale, M., Edwards, L.: Clarity, surprises, and further questions in the article 29 working party draft guidance on automated decision-making and profiling. Comput. Law Secur. Rev. 34(2), 398–404 (2018)

Author information

Correspondence to Chiara Gallese.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Gallese, C. (2022). Legal Issues of the Use of Chatbot Apps for Mental Health Support. In: González-Briones, A., et al. Highlights in Practical Applications of Agents, Multi-Agent Systems, and Complex Systems Simulation. The PAAMS Collection. PAAMS 2022. Communications in Computer and Information Science, vol 1678. Springer, Cham. https://doi.org/10.1007/978-3-031-18697-4_21

  • DOI: https://doi.org/10.1007/978-3-031-18697-4_21

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-18696-7

  • Online ISBN: 978-3-031-18697-4
