
Selective rendering quality for an efficient navigational aid in virtual urban environments on mobile platforms

Published: 08 December 2005

Abstract

Our perception of the world depends on the task we are currently performing in the environment: when driving a car, we pay attention to the objects that are visually important to that task, such as the road, road signs, and other vehicles. The same is true when we explore virtual environments. Creating high-fidelity 3D maps on mobile devices to aid navigation in urban environments is computationally very expensive, precluding this quality at interactive rates. In this paper we present a case study showing how the human visual system may be exploited, when viewers are undertaking a task, to reduce the overall quality of the displayed image without users being aware of the reduction. The displayed images are selectively rendered: the key features used to identify location and orientation in a 3D urban environment are produced in high quality, and the remainder of the image in low quality.
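The selective-rendering idea the abstract describes — high quality on task-relevant landmarks, low quality elsewhere — can be sketched as a per-pixel composite of two framebuffers driven by an importance mask. This is an illustrative sketch only, not the paper's implementation; the function name, toy frame sizes, and mask layout are all assumptions introduced here for demonstration.

```python
# Hypothetical sketch of task-based selective rendering compositing
# (illustrative only, not the paper's actual pipeline). We assume two
# pre-rendered framebuffers -- one expensive high-quality pass and one
# cheap low-quality pass -- plus an importance mask marking the
# task-relevant features (e.g. road signs or distinctive buildings
# used for orientation).
import numpy as np

def composite_selective(high_q: np.ndarray,
                        low_q: np.ndarray,
                        importance: np.ndarray) -> np.ndarray:
    """Blend per pixel: importance in [0, 1] selects high-quality detail."""
    assert high_q.shape == low_q.shape
    # Add a channel axis so the mask broadcasts over RGB.
    w = importance[..., None].astype(high_q.dtype)
    return w * high_q + (1.0 - w) * low_q

# Toy 4x4 RGB frames: only the top-left 2x2 region is task-relevant.
hq = np.ones((4, 4, 3))      # stands in for the expensive render
lq = np.zeros((4, 4, 3))     # stands in for the cheap render
mask = np.zeros((4, 4))
mask[:2, :2] = 1.0           # landmark region gets full quality
out = composite_selective(hq, lq, mask)
```

In a real renderer the mask would come from the task model or a saliency predictor rather than being hand-authored, and the "low quality" pass would typically save work upstream (fewer rays or coarser geometry) instead of being composited after the fact.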


Published In

MUM '05: Proceedings of the 4th international conference on Mobile and ubiquitous multimedia
December 2005
148 pages
ISBN:0473106582
DOI:10.1145/1149488

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. 3D maps
  2. inattentional blindness
  3. mobile devices
  4. visual perception

Qualifiers

  • Article

Conference

MUM05

Acceptance Rates

Overall Acceptance Rate 190 of 465 submissions, 41%

