Model-driven multicomponent volume exploration

  • Original Article
  • The Visual Computer

Abstract

Current multicomponent volume segmentation and labeling methods can rarely produce correct results fully automatically and rely heavily on expert assistance, which makes the related volume exploration time consuming, laborious, and prone to errors and omissions. To address this problem, we present a novel volume exploration method driven by an annotated model. We first apply Gaussian mixture models to segment the raw volume; however, different components with similar values remain mixed. To separate these components further, we use a region-growing principle to produce a fine-grained part segmentation. To label the different parts automatically, we find it helpful to take advantage of an annotated model, such as a human anatomy model (PlasticboyCC, http://www.plasticboy.co.uk/store/index.html, 2013). Labeling a segmented volume with a geometric model automatically is not straightforward, however. Inspired by electors voting (Au et al., Comput Graph Forum 29:645–654, 2010), we propose a volume-model correspondence scheme to overcome this challenge. Moreover, intuitive interaction is essential for interactive exploration, so we also develop practical and precise interaction techniques to assist volume exploration. Our experiments with various data sets and discussions with specialists show that our method provides an efficient and effective way to explore volume data.
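
The coarse-to-fine segmentation stage summarized above (Gaussian mixture clustering of voxel intensities, followed by a spatial refinement into parts) can be sketched in a few lines of code. The snippet below is only an illustrative sketch, not the authors' implementation: it assumes scikit-learn's GaussianMixture for the intensity clustering and uses a connected-component pass from SciPy as a simple stand-in for the paper's region-growing refinement; the function and variable names are hypothetical.

```python
# Illustrative sketch: GMM-based coarse segmentation followed by a
# spatial split into parts (a simplified stand-in for region growing).
import numpy as np
from scipy import ndimage
from sklearn.mixture import GaussianMixture


def segment_volume(volume: np.ndarray, n_classes: int = 4) -> np.ndarray:
    """Return an integer part-label array with the same shape as `volume`."""
    # 1) Coarse step: cluster voxel intensities with a Gaussian mixture model.
    intensities = volume.reshape(-1, 1).astype(np.float64)
    gmm = GaussianMixture(n_components=n_classes, random_state=0)
    coarse = gmm.fit_predict(intensities).reshape(volume.shape)

    # 2) Fine step: split each intensity class into spatially connected parts,
    #    so distinct structures that share similar values get separate labels.
    parts = np.zeros(volume.shape, dtype=np.int32)
    next_label = 1
    for c in range(n_classes):
        mask = coarse == c
        labeled, n_parts = ndimage.label(mask)  # face-connected components by default
        parts[mask] = labeled[mask] + (next_label - 1)
        next_label += n_parts
    return parts


# Example on a synthetic volume; real use would load CT/MRI data instead.
if __name__ == "__main__":
    toy = np.random.default_rng(0).normal(size=(32, 32, 32))
    labels = segment_volume(toy, n_classes=3)
    print("number of parts:", labels.max())
```

In the pipeline described in the abstract, the resulting part labels would then be matched against the annotated anatomy model through the voting-based correspondence scheme before labels are transferred to the volume.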

References

  1. Ames, M., Naaman, M.: Why we tag: motivations for annotation in mobile and online media. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 971–980 (2007)

  2. Attene, M., Robbiano, F., Spagnuolo, M., Falcidieno, B.: Characterization of 3D shape parts for semantic annotation. Comput. Aided Des. 41(10), 756–763 (2009)

  3. Au, O.K.C., Tai, C.L., Cohen-Or, D., Zheng, Y., Fu, H.: Electors voting for fast automatic shape correspondence. Comput. Graph. Forum 29(2), 645–654 (2010)

  4. Bourguignon, D., Cani, M.P., Drettakis, G.: Drawing for illustration and annotation in 3D. Comput. Graph. Forum 20(3), 114–122 (2001)

  5. Bruckner, S., Groller, M.E.: Volumeshop: an interactive system for direct volume illustration. In: IEEE Visualization, pp. 671–678 (2005)

  6. Cabezas, M., Oliver, A., Lladó, X., Freixenet, J., Bach Cuadra, M.: A review of atlas-based segmentation for magnetic resonance brain images. Comput. Methods Progr. Biomed. 104(3), e158–e177 (2011)

  7. Chan, M.Y., Qu, H., Chung, K.K., Mak, W.H., Wu, Y.: Relation-aware volume exploration pipeline. IEEE Trans. Vis. Comput. Graph. 14, 1683–1690 (2008)

  8. Chen, H.L.J., Samavati, F.F., Sousa, M.C., Mitchell, J.R.: Sketch-based volumetric seeded region growing. In: Eurographics Workshop on Sketch-Based Interfaces and Modeling, pp. 123–129 (2006)

  9. Chen, M., Ebert, D., Hagen, H., Laramee, R.S., van Liere, R., Ma, K.L., Ribarsky, W., Scheuermann, G., Silver, D.: Data, information, and knowledge in visualization. IEEE Comput. Graph. Appl. 29(1), 12–19 (2009)

  10. Correa, C., Ma, K.L.: Size-based transfer functions: a new volume exploration technique. IEEE Trans. Vis. Comput. Graph. 14(6), 1380–1387 (2008)

  11. Correa, C.D.: Visualizing what lies inside. ACM SIGGRAPH Comput. Graph. Build. Bridges Sci. Arts Technol. 43(2), 5:1–5:6 (2009)

  12. Correa, C.D., Ma, K.L.: Visibility histograms and visibility-driven transfer functions. IEEE Trans. Vis. Comput. Graph. 17(2), 192–204 (2011). doi:10.1109/tvcg.2010.35

  13. Drebin, R.A., Carpenter, L., Hanrahan, P.: Volume rendering. Comput. Graph. 22(4), 65–74 (1988)

  14. Engel, K., Hadwiger, M., Kniss, J., Rezk-Salama, C.: Real-Time Volume Graphics. A K Peters, Natick (2006)

  15. Fisher, M., Savva, M., Hanrahan, P.: Characterizing structural relationships in scenes using graph kernels. In: ACM SIGGRAPH 2011, pp. 34:1–34:12 (2011)

  16. Friese, K.I., Blanke, P., Wolter, F.E.: YaDiV: an open platform for 3D visualization and 3D segmentation of medical data. Vis. Comput. 27(2), 129–139 (2011)

  17. Gerl, M., Rautek, P., Isenberg, T., Gröller, E.: Technical section: semantics by analogy for illustrative volume visualization. Comput. Graph. 36(3), 201–213 (2012)

  18. Guo, H.Q., Mao, N.Y., Yuan, X.R.: WYSIWYG (what you see is what you get) volume visualization. IEEE Trans. Vis. Comput. Graph. 17(12), 2106–2114 (2011)

  19. Höhne, K.: Voxel-Man 3D-Navigator: inner organs. In: Regional, Systemic and Radiological Anatomy/Innere Organe. Topographische, Systematische Und Radiologische Anatomie. Springer, New York (2003)

  20. Jung, Y., Kim, J., Eberl, S., Fulham, M., Feng, D.: Visibility-driven PET-CT visualisation with region of interest (ROI) segmentation. Vis. Comput. 29(6–8), 805–815 (2013). doi:10.1007/s00371-013-0833-1

  21. van Kaick, O., Tagliasacchi, A., Sidi, O., Zhang, H., Cohen-Or, D., Wolf, L., Hamarneh, G.: Prior knowledge for part correspondence. Comput. Graph. Forum (Proc. Eurograph.) 30(2), 553–562 (2011)

  22. van Kaick, O., Zhang, H., Hamarneh, G., Cohen-Or, D.: A survey on shape correspondence. Comput. Graph. Forum 30(6), 1681–1707 (2011)

  23. Kalogerakis, E., Hertzmann, A., Singh, K.: Learning 3D mesh segmentation and labeling. ACM Trans. Graph. (SIGGRAPH issue) 29(4), 102:1–102:12 (2010)

  24. Kniss, J., Kindlmann, G., Hansen, C.: Multi-dimensional transfer functions for interactive volume rendering. IEEE Trans. Vis. Comput. Graph. 8(3), 270–285 (2002)

  25. Li, W., Ritter, L., Agrawala, M., Curless, B., Salesin, D.: Interactive cutaway illustrations of complex 3D models. ACM Trans. Graph. (SIGGRAPH issue) 26(3), 31:1–31:11 (2007)

  26. Muñoz-Moreno, E., Arbat-Plana, A., Batalle, D., Soria, G., Illa, M., Prats-Galino, A., Eixarch, E., Gratacos, E.: A magnetic resonance image based atlas of the rabbit brain for automatic parcellation. PLoS One 8(7), e67418 (2013)

  27. Nam, J.E., Maurer, M., Mueller, K.: A high-dimensional feature clustering approach to support knowledge-assisted visualization. Comput. Graph. 33(5), 607–615 (2009)

  28. Owada, S., Nielsen, F., Igarashi, T.: Volume catcher. In: Symposium on Interactive 3D Graphics and Games, pp. 111–116 (2005)

  29. Pagare, R., Shinde, A.: A study on image annotation techniques. Int. J. Comput. Appl. 37(6), 42–45 (2012)

  30. Papaleo, L., Floriani, L.: Semantic-based segmentation and annotation of 3D models. In: International Conference on Image Analysis and Processing, pp. 103–112 (2009)

  31. Paraboschi, L., Biasotti, S., Falcidieno, B.: 3D scene comparison using topological graphs. In: Eurographics Italian Chapter Conference, pp. 87–93 (2007)

  32. PlasticboyCC: Plasticboy anatomy models store (2013). http://www.plasticboy.co.uk/store/index.html. Accessed 25 July 2013

  33. Praßni, J.S., Ropinski, T., Mensmann, J., Hinrichs, K.: Shape-based transfer functions for volume visualization. In: IEEE Pacific Visualization Symposium, pp. 9–16 (2010)

  34. Rautek, P., Bruckner, S., Gröller, M.E.: Interaction-dependent semantics for illustrative volume rendering. Comput. Graph. Forum 27(3), 847–854 (2008)

  35. Rezk Salama, C., Keller, M., Kohlmann, P.: High-level user interfaces for transfer function design with semantics. IEEE Trans. Vis. Comput. Graph. 12(5), 1021–1028 (2006)

  36. Ruiz, M., Bardera, A., Boada, I., Viola, I., Feixas, M., Sbert, M.: Automatic transfer functions based on informational divergence. IEEE Trans. Vis. Comput. Graph. 17(12), 1932–1941 (2011)

  37. Schiemann, T., Tiede, U., Höhne, K.H.: Segmentation of the visible human for high-quality volume-based visualization. Med. Image Anal. 1(4), 263–270 (1997)

  38. Shen, E., Cheng, Z.Q., Xia, J., Li, S.: Intuitive volumetric eraser. In: Computational Visual Media Conference, pp. 250–257 (2012)

  39. Super, B.J.: Knowledge-based part correspondence. Pattern Recogn. 40(10), 2818–2825 (2007)

  40. Tzeng, F.Y., Lum, E.B., Ma, K.L.: An intelligent system approach to higher-dimensional classification of volume data. IEEE Trans. Vis. Comput. Graph. 11(3), 273–284 (2005)

  41. Verbeek, J.J., Vlassis, N., Kröse, B.: Efficient greedy learning of Gaussian mixture models. Neural Comput. 15(2), 469–485 (2003)

  42. Wang, Y., Chen, W., Zhang, J., Dong, T., Shan, G., Chi, X.: Efficient volume exploration using the Gaussian mixture model. IEEE Trans. Vis. Comput. Graph. 17(11), 1560–1573 (2011)

  43. Wenyin, L., Dumais, S., Sun, Y., Zhang, H., Czerwinski, M., Field, B.: Semi-automatic image annotation. In: Conference on Human–Computer Interaction, pp. 326–333 (2001)

  44. Yousefi, S., Kehtarnavaz, N., Gholipour, A.: Improved labeling of subcortical brain structures in atlas-based segmentation of magnetic resonance images. IEEE Trans. Biomed. Eng. 59(7), 1808–1817 (2012)

  45. Yuan, X., Zhang, N., Nguyen, M.X., Chen, B.: Volume cutout. Vis. Comput. 21(8–10), 745–754 (2005)

  46. Zhou, J.L., Takatsuka, M.: Automatic transfer function generation using contour tree controlled residue flow model and color harmonics. IEEE Trans. Vis. Comput. Graph. 15(6), 1481–1488 (2009)

Acknowledgments

The authors would like to thank the anonymous reviewers at TVCJ for their comments, which helped improve the quality of this manuscript. The authors would also like to thank J.Y. Huang for proofreading this manuscript. This research was supported by the National Natural Science Foundation of China under Grant No. 61170157 and the National Grand Fundamental Research 973 Program of China under Grant No. G2009CB72380.

Author information

Corresponding author

Correspondence to Enya Shen.

About this article

Cite this article

Shen, E., Xia, J., Cheng, Z. et al. Model-driven multicomponent volume exploration. Vis Comput 31, 441–454 (2015). https://doi.org/10.1007/s00371-014-0940-7
