DOI: 10.1145/3673971.3674018

Using Virtual Reality Technologies to Cope with Grief and Emotional Resilience of Absent Loved Ones

Published: 09 September 2024

Abstract

Parting and death are an inevitable part of life. Saying goodbye to a loved one or a pet stirs complex emotions, including deep attachment and endless longing. Beginning at the end of 2019, the novel coronavirus (COVID-19) spread rapidly across countries and grew into a global pandemic, and many people lost beloved relatives without warning. At the same time, in the digital age many parents have little time left for their children after returning home from work, and some work abroad for long periods and cannot meet their children at all. Motivated by these two problems, we developed a system that combines deepfake technology, voice cloning, and virtual reality (VR). In this study, we mainly use an autoencoder and PaddleSpeech to restore the faces and voices of deceased family members and combine the results with VR to provide an immersive experience, so that grieving family members can watch videos generated with deepfake technology in VR to soothe their longing. In addition, parents who are busy at work or have long worked abroad can use the technology proposed in this paper to reassure their children.
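As a concrete illustration of the face-restoration component named above, the sketch below shows the shared-encoder, two-decoder autoencoder scheme commonly used for this kind of face swapping: a single encoder learns a pose/expression latent space shared by both identities, each decoder reconstructs only its own identity, and the swap happens at inference time by decoding the actor's latent code with the relative's decoder. The network sizes, class names, and training loop here are illustrative assumptions written in PyTorch, not the authors' actual implementation; per the abstract, the matching voice track would come from a PaddleSpeech synthesis pipeline.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: compresses a 64x64 RGB face crop into a latent code."""
    def __init__(self, latent_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.1),    # -> 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.1),  # -> 16x16
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.LeakyReLU(0.1), # -> 8x8
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Identity-specific decoder: reconstructs one person's face from the latent code."""
    def __init__(self, latent_dim=512):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 256 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.1),  # -> 16x16
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.1),   # -> 32x32
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),           # -> 64x64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 256, 8, 8))

# One shared encoder, one decoder per identity (hypothetical roles for illustration).
encoder = Encoder()
decoder_actor = Decoder()     # trained on faces of the stand-in actor
decoder_relative = Decoder()  # trained on archival faces of the absent relative

params = (list(encoder.parameters())
          + list(decoder_actor.parameters())
          + list(decoder_relative.parameters()))
optimizer = torch.optim.Adam(params, lr=5e-5)
loss_fn = nn.L1Loss()

def train_step(actor_batch, relative_batch):
    """Each decoder learns to reconstruct its own identity from the shared latent space."""
    optimizer.zero_grad()
    loss = (loss_fn(decoder_actor(encoder(actor_batch)), actor_batch)
            + loss_fn(decoder_relative(encoder(relative_batch)), relative_batch))
    loss.backward()
    optimizer.step()
    return loss.item()

# Face swap at inference: encode the actor's frame, decode with the relative's decoder.
with torch.no_grad():
    actor_frame = torch.rand(1, 3, 64, 64)                 # placeholder for an aligned face crop
    swapped = decoder_relative(encoder(actor_frame))        # relative's face with the actor's pose/expression
```

In such a pipeline the swapped face crops would be composited back into the source video frame by frame before the footage is rendered in the VR headset, which is consistent with the workflow the abstract describes.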



Published In

ICMHI '24: Proceedings of the 2024 8th International Conference on Medical and Health Informatics
May 2024, 349 pages
ISBN: 9798400716874
DOI: 10.1145/3673971
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States


Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

ICMHI 2024
