
From Tissue to Sound: Model-Based Sonification of Medical Imaging

  • Conference paper
  • First Online:
Medical Image Computing and Computer Assisted Intervention – MICCAI 2023 (MICCAI 2023)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 14228))


Abstract

We introduce a general design framework for the interactive sonification of multimodal medical imaging data. The proposed approach operates on a physical model that is generated based on the structure of anatomical tissues. The model generates unique acoustic profiles in response to external interactions, enabling the user to learn how tissue characteristics differ from rigid to soft, dense to sparse, structured to scattered. The acoustic profiles are attained by leveraging the topological structure of the model with minimal preprocessing, making this approach applicable to a diverse array of applications. Unlike conventional methods that directly transform low-dimensional data into global sound features, this approach uses unsupervised mapping of features between an anatomical data model and a sound model, allowing for the processing of high-dimensional data. We verified the feasibility of the proposed method with an abdominal CT volume. The results show that the method can generate perceptually discernible acoustic signals in accordance with the underlying anatomical structure. In addition to improving the directness and richness of interactive sonification models, the proposed framework provides enhanced possibilities for designing multisensory applications for multimodal imaging data.
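The core idea — a physical model whose parameters are derived from tissue data and which emits sound when excited by interaction — can be sketched as a small mass-interaction simulation. The following is a minimal illustration, not the authors' implementation: the mapping from voxel intensity to stiffness and damping is hypothetical, chosen only to show how denser tissue could yield a stiffer, more damped (and thus brighter, shorter) acoustic response.

```python
import numpy as np

def sonify_profile(intensities, sr=44100, dur=1.0):
    """Excite a 1-D chain of unit masses whose spring stiffness and
    damping are derived from voxel intensities; return the resulting
    audio signal, read out at the far end of the chain.

    Hypothetical sketch: the intensity-to-parameter mapping below is
    an assumption for illustration, not the paper's actual model.
    """
    n = len(intensities)
    x = np.zeros(n)                     # displacements
    v = np.zeros(n)                     # velocities
    # Normalize intensities to [0, 1], then map to physical parameters:
    # denser tissue -> stiffer springs, heavier damping (assumed).
    norm = np.asarray(intensities, dtype=float) - np.min(intensities)
    norm = norm / (np.ptp(intensities) + 1e-9)
    k = 1e4 + 4e5 * norm                # per-mass grounding stiffness
    d = 0.5 + 5.0 * norm                # per-mass damping
    x[0] = 1.0                          # impulse excitation at one end
    dt = 1.0 / sr
    out = np.empty(int(sr * dur))
    for t in range(len(out)):
        # Grounding spring and damper on each mass, plus springs
        # coupling neighbouring masses.
        f = -k * x - d * v
        f[:-1] += k[1:] * (x[1:] - x[:-1])
        f[1:] += k[:-1] * (x[:-1] - x[1:])
        v += f * dt                     # symplectic Euler (stable for
        x += v * dt                     # the stiffness range above)
        out[t] = x[-1]                  # listen at the far end
    return out / (np.max(np.abs(out)) + 1e-9)
```

Because each voxel contributes its own stiffness and damping, the topology of the chain (or, in higher dimensions, the mesh) carries the anatomical structure directly into the acoustic response, with no hand-designed feature-to-parameter mapping.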



Corresponding author

Correspondence to Sasan Matinfar.

Electronic supplementary material


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Matinfar, S., Salehi, M., Dehghani, S., Navab, N. (2023). From Tissue to Sound: Model-Based Sonification of Medical Imaging. In: Greenspan, H., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2023. MICCAI 2023. Lecture Notes in Computer Science, vol 14228. Springer, Cham. https://doi.org/10.1007/978-3-031-43996-4_20


  • DOI: https://doi.org/10.1007/978-3-031-43996-4_20


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-43995-7

  • Online ISBN: 978-3-031-43996-4

  • eBook Packages: Computer Science (R0)
