SAVE: saliency-assisted volume exploration

  • Regular Paper
  • Published in: Journal of Visualization

Abstract

Interactive visualization has become a valuable tool for the visual exploration of scientific data. A prerequisite and fundamental issue is how to infer three-dimensional information from users’ two-dimensional input. Existing approaches commonly build on the assumption that user input is precise, which often fails because of data noise, the limited resolution of display devices, and casual user input. In this paper, we reconsider several design choices of previous methods and propose an alternative, effective algorithm for inferring the interaction position in scientific data, particularly in volume data exploration. Our method automatically assists user interaction with a newly defined saliency that integrates the data values, the corresponding transfer function, and the user input. Like existing methods, the resulting saliency indicates remarkable regions of the raw data; beyond that, it reflects the areas of the user’s concern and compensates for errors introduced by the data and the display device, helping users reach the region they focus on. Various experiments verify that our method reasonably refines user interaction and effectively helps users access features of interest.
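The abstract describes combining data saliency, the current transfer function, and the user's 2D click into a score that determines a 3D interaction position. A minimal sketch of this idea, for samples along the viewing ray through the clicked pixel, is given below. The weight parameters and both saliency terms (local contrast as the data term, visibility-weighted opacity as the transfer-function term) are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def pick_depth(values, opacity_tf, w_data=0.5, w_vis=0.5):
    """Pick a depth index along a viewing ray by maximizing a combined saliency.

    values     : scalar data samples along the ray through the clicked
                 pixel, ordered front to back.
    opacity_tf : callable mapping a data value to an opacity in [0, 1]
                 (the current transfer function).
    Weights and saliency terms here are illustrative stand-ins.
    """
    values = np.asarray(values, dtype=float)
    alpha = np.array([opacity_tf(v) for v in values])

    # Data term: deviation from the mean along the ray, a simple
    # stand-in for data saliency (center-surround contrast).
    data_term = np.abs(values - values.mean())
    if data_term.max() > 0:
        data_term = data_term / data_term.max()

    # Visibility term: opacity attenuated by accumulated transparency,
    # so occluded samples contribute less (front-to-back compositing).
    transparency = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    vis_term = alpha * transparency

    saliency = w_data * data_term + w_vis * vis_term
    return int(np.argmax(saliency))
```

In this toy setting, a sample that is both visible (high composited opacity) and distinctive (high contrast) wins the pick, even when the click ray also passes through noisy, low-opacity material in front of it.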


Figs. 1–12


Acknowledgments

The authors would like to thank the anonymous reviewers at JOV for their comments, which helped us improve the quality of this manuscript. The authors also thank English professor B. Li and J.Y. Huang for proofreading this manuscript. This research is supported by the National Natural Science Foundation of China under Grant No. 61170157, the National Grand Fundamental Research 973 Program of China under Grant No. G2009CB72380, and the Basic Research Program of NUDT.

Author information

Corresponding author

Correspondence to Enya Shen.

About this article

Cite this article

Shen, E., Li, S., Cai, X. et al. SAVE: saliency-assisted volume exploration. J Vis 18, 369–379 (2015). https://doi.org/10.1007/s12650-014-0237-y

