
EmoAda: A Multimodal Emotion Interaction and Psychological Adaptation System

  • Conference paper: MultiMedia Modeling (MMM 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14557)

Abstract

In recent years, anxiety and depression have placed a significant burden on society, yet the supply of psychological services remains inadequate and costly. Advances in multimedia computing and large language model technologies offer hope of alleviating this shortage of psychological resources. In this demo paper, we propose a multimodal emotional interaction large language model (MEILLM) and develop EmoAda, a Multimodal Emotion Interaction and Psychological Adaptation System that provides users with cost-effective psychological support. EmoAda combines multimodal emotional perception, personalized emotional support dialogue, and multimodal emotional interaction capabilities, helping users alleviate psychological stress and improve psychological adaptation.
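
Only the abstract is freely available, so the following is a minimal, hypothetical sketch of the kind of pipeline it outlines: per-modality emotion perception, fusion of the estimates, and an emotion-conditioned support reply. Every name here (EmotionEstimate, perceive_face, fuse, support_reply) is illustrative and not taken from the paper; a real system would replace the stand-in scoring with trained perception models and an LLM call.

```python
from dataclasses import dataclass

@dataclass
class EmotionEstimate:
    label: str         # e.g. "anxious" or "calm"
    confidence: float  # in [0, 1]

def perceive_face(frame_features: list[float]) -> EmotionEstimate:
    """Stand-in for a facial-expression recognizer over video frames."""
    score = sum(frame_features) / max(len(frame_features), 1)
    return EmotionEstimate("anxious" if score > 0.5 else "calm", abs(score - 0.5) * 2)

def perceive_voice(prosody_features: list[float]) -> EmotionEstimate:
    """Stand-in for a speech-prosody emotion model."""
    score = sum(prosody_features) / max(len(prosody_features), 1)
    return EmotionEstimate("anxious" if score > 0.5 else "calm", abs(score - 0.5) * 2)

def fuse(estimates: list[EmotionEstimate]) -> EmotionEstimate:
    # Late fusion: trust the modality with the highest confidence.
    return max(estimates, key=lambda e: e.confidence)

def support_reply(user_text: str, emotion: EmotionEstimate) -> str:
    # Placeholder for an emotion-conditioned LLM call; a real system would
    # feed the fused emotion state into the dialogue model's prompt.
    if emotion.label == "anxious":
        return "It sounds like things feel heavy right now. Want to talk it through?"
    return f"Thanks for sharing. How do you feel about: {user_text!r}?"

if __name__ == "__main__":
    estimate = fuse([perceive_face([0.7, 0.9]), perceive_voice([0.2, 0.3])])
    print(f"[perceived: {estimate.label}]")
    print(support_reply("I've been stressed about exams.", estimate))
```

The late-fusion choice (taking the most confident modality) is only one option; weighted averaging or a learned fusion network are equally plausible for a system like the one described.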


Acknowledgments

This work was supported by the National Key R&D Program of China (2022YFC3803202), the Major Project of Anhui Province under Grant 202203a05020011, and the General Program of the National Natural Science Foundation of China (62376084).

Author information

Corresponding author

Correspondence to Xiao Sun.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Dong, T., Liu, F., Wang, X., Jiang, Y., Zhang, X., Sun, X. (2024). EmoAda: A Multimodal Emotion Interaction and Psychological Adaptation System. In: Rudinac, S., et al. MultiMedia Modeling. MMM 2024. Lecture Notes in Computer Science, vol 14557. Springer, Cham. https://doi.org/10.1007/978-3-031-53302-0_25

  • DOI: https://doi.org/10.1007/978-3-031-53302-0_25

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-53301-3

  • Online ISBN: 978-3-031-53302-0

  • eBook Packages: Computer Science, Computer Science (R0)
