Dynamic Insertion of Virtual Objects in Photographs

Rui Nóbrega, Nuno Correia
Copyright: © 2013 | Volume: 4 | Issue: 2 | Pages: 18
ISSN: 1947-3117 | EISSN: 1947-3125 | EISBN13: 9781466633537 | DOI: 10.4018/ijcicg.2013070102
Cite Article

MLA

Nóbrega, Rui, and Nuno Correia. "Dynamic Insertion of Virtual Objects in Photographs." IJCICG, vol. 4, no. 2, 2013, pp. 22-39. http://doi.org/10.4018/ijcicg.2013070102

APA

Nóbrega, R., & Correia, N. (2013). Dynamic Insertion of Virtual Objects in Photographs. International Journal of Creative Interfaces and Computer Graphics (IJCICG), 4(2), 22-39. http://doi.org/10.4018/ijcicg.2013070102

Chicago

Nóbrega, Rui, and Nuno Correia. "Dynamic Insertion of Virtual Objects in Photographs." International Journal of Creative Interfaces and Computer Graphics (IJCICG) 4, no. 2 (2013): 22-39. http://doi.org/10.4018/ijcicg.2013070102

Abstract

Introducing virtual objects into photographs or video sequences presents several challenges, such as estimating their pose and establishing visually correct interaction boundaries. This article discusses a framework for introducing virtual objects into user-captured photos, where the inserted objects are interactive and respond to the real physical environment. The proposed detection system is semi-automatic and therefore relies on the user to supply the elements it needs; this step must be simple enough for a non-expert user. The system analyses a photo taken by the user and detects high-level features such as vanishing points, the floor, and the scene orientation. With these features it is possible to build mixed and augmented reality applications in which the user takes one or more photos of a place and interactively introduces virtual objects or elements that blend with the picture in real time. The article discusses the techniques required to acquire images and information about the scene surrounding the user. To demonstrate the framework, a proof-of-concept implementation is presented and used in a user study evaluating the reliability of the concept. The results show high reliability in the scene detection and that users are able and motivated to use this type of system.
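To make the detection step more concrete, the following is a minimal illustrative sketch, not the authors' implementation, of how a dominant vanishing point might be estimated from a single photo: line segments are detected with a Canny edge detector and a probabilistic Hough transform, pairwise segment intersections are treated as vanishing point candidates, and the candidate supported by the most segments is kept. It assumes OpenCV and NumPy; the thresholds and the input file name "room.jpg" are arbitrary example values.

"""
Illustrative sketch (not the paper's implementation): estimate a single
dominant vanishing point by intersecting detected line segments and
keeping the intersection supported by the most segments.
Assumes OpenCV and NumPy; all thresholds are example values.
"""
import itertools

import cv2
import numpy as np


def detect_segments(image_bgr):
    """Detect straight line segments with Canny edges + probabilistic Hough."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=80, minLineLength=60, maxLineGap=10)
    return [] if segments is None else [tuple(s[0]) for s in segments]


def segment_supports(segment, point, angle_tol_deg=3.0):
    """True if the segment's direction roughly points toward the candidate."""
    x1, y1, x2, y2 = segment
    mid = np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0])
    direction = np.array([x2 - x1, y2 - y1], dtype=float)
    to_point = np.asarray(point, dtype=float) - mid
    if np.linalg.norm(direction) < 1e-6 or np.linalg.norm(to_point) < 1e-6:
        return False
    cos_a = abs(np.dot(direction, to_point) /
                (np.linalg.norm(direction) * np.linalg.norm(to_point)))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))) < angle_tol_deg


def estimate_vanishing_point(segments):
    """Brute-force vote over pairwise intersections; return the best-supported one."""
    best_point, best_votes = None, 0
    for seg_a, seg_b in itertools.combinations(segments, 2):
        # Homogeneous line through each segment's endpoints, then their intersection.
        la = np.cross([seg_a[0], seg_a[1], 1.0], [seg_a[2], seg_a[3], 1.0])
        lb = np.cross([seg_b[0], seg_b[1], 1.0], [seg_b[2], seg_b[3], 1.0])
        p = np.cross(la, lb)
        if abs(p[2]) < 1e-6:  # near-parallel segments, no finite intersection
            continue
        candidate = (p[0] / p[2], p[1] / p[2])
        votes = sum(segment_supports(s, candidate) for s in segments)
        if votes > best_votes:
            best_point, best_votes = candidate, votes
    return best_point, best_votes


if __name__ == "__main__":
    photo = cv2.imread("room.jpg")  # hypothetical user-captured photo
    segs = detect_segments(photo)
    vp, votes = estimate_vanishing_point(segs)
    print(f"Vanishing point: {vp} (supported by {votes} of {len(segs)} segments)")

In a full system along the lines described in the abstract, an estimate of this kind, combined with a detected floor region, would feed the ground-plane and scene-orientation estimation used to anchor and scale the inserted virtual objects.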
