Abstract
We introduce a general design framework for the interactive sonification of multimodal medical imaging data. The proposed approach operates on a physical model generated from the structure of anatomical tissues. The model produces distinct acoustic profiles in response to external interactions, enabling the user to perceive how tissue characteristics vary from rigid to soft, dense to sparse, and structured to scattered. The acoustic profiles are obtained by exploiting the topological structure of the model with minimal preprocessing, making the approach applicable to a wide range of applications. Unlike conventional methods that map low-dimensional data directly onto global sound features, this approach uses an unsupervised mapping of features between an anatomical data model and a sound model, allowing high-dimensional data to be processed. We verified the feasibility of the proposed method on an abdominal CT volume. The results show that the method generates perceptually discernible acoustic signals that reflect the underlying anatomical structure. Beyond improving the directness and richness of interactive sonification models, the proposed framework opens new possibilities for designing multisensory applications for multimodal imaging data.
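As a minimal sketch of the model-based sonification idea, assuming a mass-interaction formulation in the spirit of the framework: each voxel of a 1-D intensity profile becomes a unit point mass, neighboring masses are coupled by springs whose stiffness grows with local intensity, an impulsive interaction excites the chain, and the displacement of a listening mass is recorded as the sound signal. The stiffness range, damping, 1-D topology, and intensity mapping below are illustrative assumptions, not the framework's actual parameters, which operate on 3-D anatomical models.

```python
import numpy as np

def build_chain(patch, k_min=1e5, k_max=1e8, damping=4.0):
    """Map a 1-D profile of voxel intensities to spring stiffnesses.

    Denser tissue (higher intensity) yields stiffer springs and hence a
    brighter, more "rigid" sound; this linear mapping is an illustrative
    assumption, not the framework's actual parameterization.
    """
    p = (patch - patch.min()) / (np.ptp(patch) + 1e-9)  # normalize to [0, 1]
    stiffness = k_min + (k_max - k_min) * 0.5 * (p[:-1] + p[1:])
    return stiffness, damping

def sonify(patch, excite_idx=0, listen_idx=None, fs=44100, dur=1.0):
    """Strike one mass of the chain and record another mass's motion."""
    stiffness, damping = build_chain(patch)
    n = len(patch)
    listen_idx = n // 2 if listen_idx is None else listen_idx
    x = np.zeros(n)              # displacements (unit masses)
    v = np.zeros(n)              # velocities
    v[excite_idx] = 1.0          # impulsive external interaction
    dt = 1.0 / fs
    out = np.empty(int(dur * fs))
    for t in range(out.size):    # semi-implicit (symplectic) Euler
        f = np.zeros(n)
        elong = x[1:] - x[:-1]   # spring elongations
        fspring = stiffness * elong
        f[:-1] += fspring        # equal and opposite spring forces
        f[1:] -= fspring
        f -= damping * v         # viscous losses give a finite decay
        v += f * dt
        x += v * dt
        out[t] = x[listen_idx]
    return out / (np.abs(out).max() + 1e-9)

# Example: a synthetic soft-to-rigid intensity ramp; in practice `patch`
# would be a line of CT voxels crossing a tissue boundary.
signal = sonify(np.linspace(0.0, 1.0, 64))
```

Under these assumptions, distinct tissue profiles (e.g., a homogeneous soft region versus a ramp crossing a dense boundary) produce audibly different spectra and decay times, which is the effect the abstract describes as perceptually discernible acoustic profiles.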