Real-time adjustment of contrast saliency for improved information visibility in mobile augmented reality

  • Original Article
  • Published in Virtual Reality

Abstract

Augmented reality (AR) “augments” virtual information over the real-world medium and is emerging as an important information visualization technique. As such, the visibility and readability of the augmented information must be kept as high as possible amid the dynamically changing real-world surroundings and background. In this work, we present a technique based on image saliency analysis that improves the conspicuity of the foreground augmentation against the background real-world medium by adjusting the local brightness contrast. The proposed technique is implemented on a mobile platform, reflecting the typical usage context of AR. The saliency computation is carried out for the augmented object’s representative color rather than for all of its pixels, and the search for the brightness that yields the highest contrast saliency is restricted to a discrete number of levels, making real-time computation possible. While the resulting imagery may not be optimal due to this simplification, our tests showed that visibility was still significantly improved, with little difference from the “optimal” ground truth in terms of correctly perceiving and recognizing the augmented information. In addition, we present a second experiment that explores how the proposed algorithm can be applied in actual AR applications. The results suggest that users clearly preferred the automatic contrast modulation upon large movements in the scenery.
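As a rough illustration of the two simplifications just described, the following sketch reduces the augmentation to a single representative color and enumerates a small discrete set of brightness offsets. Both the mean-color choice and the particular offset values are assumptions made for illustration, not the authors’ exact settings.

```python
import numpy as np

def representative_color(obj_pixels: np.ndarray) -> np.ndarray:
    """Collapse the augmented object (an N x 3 or H x W x 3 RGB array)
    to one representative color; taking the mean is an assumed choice."""
    return obj_pixels.reshape(-1, 3).mean(axis=0)

# Discrete brightness offsets to search instead of a continuous range
# (illustrative values; the paper's actual levels may differ).
BRIGHTNESS_LEVELS = np.linspace(-80, 80, 9)
```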



Notes

  1. The symbol ‘\({ \ominus }\)’ denotes the across-scale difference between two maps: the coarser-scale image is interpolated up to the finer scale, and a point-by-point subtraction is performed (Itti et al. 1998).

  2. The symbol ‘\(\oplus\)’ denotes the across-scale addition between two maps: the coarser-scale image is interpolated up to the finer scale, and a point-by-point addition is performed (Itti et al. 1998). A minimal code sketch of both operations follows these notes.
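For concreteness, here is a minimal NumPy/OpenCV sketch of the two across-scale operations as defined above. Upsampling by bilinear interpolation is an assumption; Itti et al. (1998) do not mandate a particular interpolant.

```python
import cv2
import numpy as np

def across_scale_diff(coarse: np.ndarray, fine: np.ndarray) -> np.ndarray:
    """Across-scale difference (the '⊖' operator): upsample the coarser
    map to the finer map's resolution, then subtract point by point."""
    up = cv2.resize(coarse, (fine.shape[1], fine.shape[0]),
                    interpolation=cv2.INTER_LINEAR)
    return fine - up

def across_scale_add(coarse: np.ndarray, fine: np.ndarray) -> np.ndarray:
    """Across-scale addition (the '⊕' operator): upsample the coarser
    map to the finer map's resolution, then add point by point."""
    up = cv2.resize(coarse, (fine.shape[1], fine.shape[0]),
                    interpolation=cv2.INTER_LINEAR)
    return fine + up
```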

References

  • Achanta R, Hemami S, Estrada F, Susstrunk S (2009) Frequency-tuned salient region detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition 2009, IEEE, pp 1597–1604

  • Ackerman E (2013) Could Google Glass hurt your eyes? A Harvard vision scientist and Project Glass advisor responds. http://www.forbes.com/sites/eliseackerman/2013/03/04/could-google-glass-hurt-your-eyes-a-harvard-vision-scientist-and-project-glass-advisor-responds/

  • Avery B, Sandor C, Thomas BH (2009) Improving spatial perception for augmented reality x-ray vision. In: Proceedings of the IEEE conference on virtual reality 2009, IEEE, pp 79–82

  • Baker DH (2013) What is the primary cause of individual differences in contrast sensitivity? PLoS ONE 8(7):e69536


  • Bauer T, Erdogan B (2010) Organizational behavior, v. 1.1. Flat World Knowledge, Irvington


  • Birchfield S (2007) KLT: an implementation of the Kanade–Lucas–Tomasi feature tracker. http://www.ces.clemson.edu/~stb/klt/

  • Cheng MM, Zhang GX, Mitra NJ, Huang X, Hu SM (2011) Global contrast based salient region detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition 2011, IEEE, pp 409–416

  • Eadicicco L (2015) Sony just solved the biggest problem with Google Glass. http://www.businessinsider.com/sony-smartglasses-attach-solves-the-google-glass-style-problem-2015-1/

  • Ehrenstein WH (2003) Basics of seeing motion. Arq Bras Oftalmol 66(5):44–52


  • Gabbard JL, Swan JE, Hix D, Kim SJ, Fitch G (2007) Active text drawing styles for outdoor augmented reality: a user-based study and design implications. In: Proceedings of the IEEE conference on virtual reality 2007, IEEE, pp 35–42

  • GOOGLE (2015) Google Glass. http://www.google.com/glass/start/

  • HFES (2007) ANSI/HFES 100-2007 Human Factors Engineering of Computer Workstations. Human Factors and Ergonomic Society, Santa Monica


  • Hincapié-Ramos JD, Ivanchuk L, Sridharan SK, Irani P (2014) SmartColor: real-time color correction and contrast for optical see-through head-mounted displays. In: Proceedings of the IEEE international symposium on mixed and augmented reality 2014, IEEE, pp 187–194

  • Hou X, Zhang L (2007) Saliency detection: a spectral residual approach. In: Proceedings of the IEEE conference on computer vision and pattern recognition 2007, IEEE, pp 1–8

  • Human Benchmark (2017) Reaction time statistics. http://www.humanbenchmark.com/tests/reactiontime/statistics

  • ISO (1992) ISO 9241-3:1992: ergonomic requirements for office work with visual display terminals (VDTs)—part 3: visual display requirements. International Organization for Standardization, Geneva


  • Itti L, Koch C, Niebur E (1998) A model of saliency-based visual attention for rapid scene analysis. IEEE Trans Pattern Anal Mach Intell 20(11):1254–1259


  • Kalkofen D, Veas E, Zollmann S, Steinberger M, Schmalstieg D (2013) Adaptive ghosted views for augmented reality. In: Proceedings of the IEEE international symposium on mixed and augmented reality 2013, IEEE, pp 1–9

  • Kosinski RJ (2008) A literature review on reaction time. Clemson University, Clemson


  • Lee S, Kim GJ, Choi S (2009) Real-time tracking of visually attended objects in virtual environments and its application to LOD. IEEE Trans Vis Comput Graph 15(1):6–19


  • Loomis JM, Nakayama K (1973) A velocity analogue of brightness contrast. Perception 2(4):425–428


  • Ma YF, Zhang HJ (2003) Contrast-based image attention analysis by using fuzzy growing. In: Proceedings of the 11th ACM international conference on multimedia, ACM, pp 374–381

  • Meese TS, Hess RF, Williams CB (2005) Size matters, but not for everyone: individual differences for contrast discrimination. J Vis 5(11):928–947


  • National Geographic (2015) Africa’s Wild West. http://natgeotv.com/uk/africas-wild-west

  • Navab N, Traub J, Sielhorst T, Feuerstein M, Bichlmeier C (2007) Action-and workflow-driven augmented reality for computer-aided medical procedures. IEEE Comput Graph Appl 27(5):10–14


  • Perazzi F, Krähenbühl P, Pritch Y, Hornung A (2012) Saliency filters: contrast based filtering for salient region detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition 2012, IEEE, pp 733–740

  • Reid B (2014) Google Glass causing eye pain and muscle fatigue for some users. http://www.redmondpie.com/google-glass-causing-eye-pain-and-muscle-fatigue-for-some-users/

  • Sandor C, Cunningham A, Dey A, Mattila VV (2010) An augmented reality x-ray system based on visual saliency. In: Proceedings of the IEEE international symposium on mixed and augmented reality 2010, IEEE, pp 27–36

  • Supan P, Stuppacher I, Haller M (2006) Image based shadowing in real-time augmented reality. Int J Virtual Real 5(3):1–7


  • Tatzgern M, Kalkofen D, Schmalstieg D (2013) Dynamic compact visualizations for augmented reality. In: Proceedings of the IEEE conference on virtual reality 2013, IEEE, pp 3–6

  • Veas EE, Mendez E, Feiner SK, Schmalstieg D (2011) Directing attention and influencing memory with visual saliency modulation. In: Proceedings of the SIGCHI conference on human factors in computing systems 2011, ACM, pp 1471–1480

  • Ware C (2012) Information visualization, perception for design, 3rd edn. Morgan Kaufmann, Burlington


  • Zhai Y, Shah M (2006) Visual attention detection in video sequences using spatiotemporal cues. In: Proceedings of the 14th ACM international conference on multimedia, ACM, pp 815–824

  • Zollmann S, Kalkofen D, Mendez E, Reitmayr G (2010) Image-based ghostings for single layer occlusions in augmented reality. In: Proceedings of the IEEE international symposium on mixed and augmented reality 2010, IEEE, pp 19–26


Acknowledgements

This research was supported in part by the Basic Science Research Program funded by the National Research Foundation (NRF) and the Ministry of Science, ICT & Future Planning (MSIP) (No. 2011-0030079), and by the Institute for Information & communications Technology Promotion (IITP) grant, also funded by MSIP (No. 2017-0-00179, “HD Haptic Technology for Hyper Reality Contents”).

Author information

Corresponding author

Correspondence to Gerard Jounghyun Kim.

Appendix

This appendix provides the details of the three brightness modulation algorithms compared in Sect. 3. The first algorithm (Fig. 23, labeled “global” or “G” in Table 1) computes the ground-truth brightness adjustment value by considering all possible combinations of the foreground objects’ variations and maximizing the total saliency value. The second (Fig. 24, labeled “local” or “L” in Table 1) derives a near-ground-truth brightness by considering each augmentation object in isolation, and therefore maximizes only the local saliency value of each. The third is the proposed simplified algorithm (Fig. 25, labeled “S” in Table 1); a code sketch of this variant follows the figure captions below.

Fig. 23 Pseudocode for the ground-truth global algorithm (“G”)

Fig. 24 Pseudocode for the ground-truth local algorithm (“L”)

Fig. 25 Pseudocode for the proposed simplified real-time algorithm (“S”)
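Since Fig. 25 is available here only as an image, the following sketch reconstructs the gist of the simplified algorithm “S” from the descriptions in this appendix and the abstract: evaluate each discrete brightness level for the representative color only and keep the most salient one. The stand-in saliency measure (luminance contrast against the local background) is an assumption, not the paper’s exact saliency model.

```python
import numpy as np

# Candidate brightness offsets searched by the simplified algorithm "S"
# (illustrative values, as in the earlier sketch).
BRIGHTNESS_LEVELS = np.linspace(-80, 80, 9)

# Rec. 601 luma weights for converting RGB to luminance.
LUMA = np.array([0.299, 0.587, 0.114])

def contrast_saliency(rep_color: np.ndarray, bg_patch: np.ndarray) -> float:
    """Stand-in saliency: luminance contrast between the object's
    representative color and the mean of the background behind it."""
    obj_lum = float(rep_color @ LUMA)
    bg_lum = float(bg_patch.reshape(-1, 3).mean(axis=0) @ LUMA)
    return abs(obj_lum - bg_lum)

def best_brightness(rep_color: np.ndarray, bg_patch: np.ndarray) -> float:
    """Try each discrete brightness offset on the representative color
    only (not per pixel) and return the most salient offset."""
    best_offset, best_score = 0.0, -np.inf
    for offset in BRIGHTNESS_LEVELS:
        candidate = np.clip(rep_color + offset, 0, 255)
        score = contrast_saliency(candidate, bg_patch)
        if score > best_score:
            best_offset, best_score = offset, score
    return best_offset
```

Because only one color and a handful of levels are evaluated per object, the search cost is constant per frame, which is what makes the real-time mobile implementation feasible; the trade-off is that the chosen level may deviate slightly from the per-pixel ground truth, as the comparisons against “G” and “L” in Sect. 3 examine.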


About this article


Cite this article

Ahn, E., Lee, S. & Kim, G.J. Real-time adjustment of contrast saliency for improved information visibility in mobile augmented reality. Virtual Reality 22, 245–262 (2018). https://doi.org/10.1007/s10055-017-0319-y

