Abstract
We introduce methods for augmenting aerial visualizations of Earth (from tools such as Google Earth or Microsoft Virtual Earth) with dynamic information obtained from videos. Our goal is to create Augmented Earth Maps that visualize plausible live views of dynamic scenes in a city. We propose several approaches to analyzing videos of pedestrians and cars in real situations, under differing conditions, to extract dynamic information. We then augment Aerial Earth Maps (AEMs) with the extracted live and dynamic content. We also analyze natural phenomena (skies, clouds) and project this information onto the AEMs to add to the visual realism. Our primary contributions are: (1) analyzing videos with different viewpoints, coverage, and overlaps to extract relevant information about view geometry and movements, with limited user input; (2) projecting this information appropriately onto the viewpoint of the AEMs and modeling the scene dynamics from observations to allow inference (in case of missing data) and synthesis, which we demonstrate over a variety of camera configurations and conditions; and (3) registering the modeled information from videos to the AEMs to render appropriate movements and related dynamics, which we demonstrate with traffic flow, people movements, and cloud motions. These approaches are brought together in a prototype system for real-time visualization of a city that is alive and engaging.
Acknowledgments
This project was funded in part by a Google Research Award. We would like to thank Nick Diakopoulos, Matthias Grundmann, Myungcheol Doo and Dongryeol Lee for their help and comments on the work. Thanks also to the Georgia Tech Athletic Association (GTAA) for sharing videos of college football games with us for research purposes. Finally, thanks to the reviewers for their valuable comments.
Additional information
This work was done while authors Sangmin Oh and Jeonggyu Lee were with the Georgia Institute of Technology.
Project homepage URL: http://www.cc.gatech.edu/cpl/projects/augearth.
Cite this article
Kim, K., Oh, S., Lee, J. et al. Augmenting aerial earth maps with dynamic information from videos. Virtual Reality 15, 185–200 (2011). https://doi.org/10.1007/s10055-010-0186-2