Abstract
Most methods for synthesizing panoramas assume a static scene. A few methods synthesize stereo or motion panoramas, but little work has attempted panoramas that have both stereo and motion. Synthesizing stereo motion panoramas raises several challenges: ensuring temporal synchronization between the left and right views in each frame, avoiding spatial distortion of moving objects, and looping the video continuously in time. We have recently developed a stereo motion panorama method that addresses some of these challenges. The method blends space-time regions of a video XYT volume, such that the blending regions are distinct and translate over time. This article presents a perception experiment that evaluates one aspect of the method, namely how well observers can detect such blending regions. We measured detection time thresholds for different blending widths, for different scenes, and for monoscopic versus stereoscopic videos. Our results suggest that blending is more effective in image regions that do not contain coherent moving objects that can be tracked over time: for example, moving water and partly transparent smoke were blended more effectively than swaying branches. We also found that task performance was roughly the same for mono and stereo videos.
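The core operation described above, blending space-time regions of a video XYT volume across a seam that translates over time, can be illustrated with a minimal sketch. This is not the authors' implementation; the function `blend_xyt`, its parameters (`width` for the blending width, `speed` for the seam's horizontal velocity), and the linear ramp are all illustrative assumptions:

```python
import numpy as np

def blend_xyt(vol_a, vol_b, width, speed):
    """Blend two aligned video volumes of shape (T, H, W) across a
    vertical seam whose x-position translates over time.

    A linear alpha ramp of the given width spans the seam: pixels well
    to the left of the seam come from vol_a, pixels well to the right
    from vol_b. Illustrative sketch only, not the paper's method.
    """
    T, H, W = vol_a.shape
    out = np.empty((T, H, W), dtype=float)
    x = np.arange(W)
    for t in range(T):
        center = (speed * t) % W  # seam position translates over time
        # alpha rises linearly from 0 to 1 across the blending width
        alpha = np.clip((x - (center - width / 2.0)) / width, 0.0, 1.0)
        out[t] = (1.0 - alpha) * vol_a[t] + alpha * vol_b[t]
    return out
```

A wider `width` spreads the transition over more pixels, which is exactly the variable the experiment manipulates when measuring how detectable the blending region is.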