Abstract
With the recent advent of 360° cameras, spherical panorama images are becoming more popular and widely available. In a spherical panorama, aligning the scene orientation with the image axes is important for providing a comfortable and pleasant viewing experience on VR headsets and traditional displays. This paper presents an automatic method for upright adjustment of 360° spherical panorama images without any prior information, such as depth or gyro sensor data. We adopt the Atlanta world assumption and use the horizontal and vertical lines in the scene to formulate a cost function for upright adjustment. In addition to fast optimization of the cost function, our method includes outlier handling to improve the robustness and accuracy of upright adjustment. Our method produces visually pleasing results for a variety of real-world spherical panoramas in less than a second, and its accuracy is verified using ground-truth data.
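Once an up-vector has been estimated, the final adjustment amounts to rotating the equirectangular panorama so that the estimated up-vector aligns with the image's vertical axis. The sketch below illustrates only this resampling step in NumPy; the function names are hypothetical, and the line-based cost optimization that actually estimates the up-vector (the core contribution of the paper) is not reproduced here.

```python
import numpy as np

def rotation_to_upright(up):
    """Rotation matrix mapping an estimated scene up-vector to +z
    (Rodrigues' formula). The paper estimates `up` by optimizing a
    line-based cost under the Atlanta world assumption."""
    up = np.asarray(up, dtype=float)
    up /= np.linalg.norm(up)
    z = np.array([0.0, 0.0, 1.0])
    c = np.dot(up, z)
    if np.isclose(c, 1.0):            # already upright
        return np.eye(3)
    if np.isclose(c, -1.0):           # antipodal: flip about the x-axis
        return np.diag([1.0, -1.0, -1.0])
    v = np.cross(up, z)
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)

def rotate_equirect(img, R):
    """Resample an equirectangular panorama (H x W x C) under rotation R,
    using nearest-neighbor lookup for brevity."""
    h, w = img.shape[:2]
    j, i = np.meshgrid(np.arange(w), np.arange(h))
    lon = (j + 0.5) / w * 2 * np.pi - np.pi        # [-pi, pi)
    lat = np.pi / 2 - (i + 0.5) / h * np.pi        # [pi/2, -pi/2]
    # Unit ray for each output pixel, then inverse-rotate (d @ R == R.T d).
    dirs = np.stack([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)], axis=-1)
    src = dirs @ R
    lon_s = np.arctan2(src[..., 1], src[..., 0])
    lat_s = np.arcsin(np.clip(src[..., 2], -1.0, 1.0))
    js = ((lon_s + np.pi) / (2 * np.pi) * w).astype(int) % w
    is_ = ((np.pi / 2 - lat_s) / np.pi * h).astype(int).clip(0, h - 1)
    return img[is_, js]
```

A production version would use bilinear or spherical interpolation instead of nearest-neighbor lookup, but the geometry, i.e. mapping each output ray back through the inverse rotation, is the same.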
Acknowledgements
This work was supported by Institute for Information & communications Technology Promotion (IITP) grant (R0126-17-1078), the National Research Foundation of Korea (NRF) grant (NRF-2014R1A2A1A11052779), and Korea Creative Content Agency (KOCCA) grant (APP-0120150512002), funded by the Korea government (MSIP, MCST).
Cite this article
Jung, J., Kim, B., Lee, JY. et al. Robust upright adjustment of 360 spherical panoramas. Vis Comput 33, 737–747 (2017). https://doi.org/10.1007/s00371-017-1368-7