Abstract
We extend our prior research on light field view synthesis for volume data presented in the conference proceedings of VISIGRAPP 2019 [13]. In that prior research, we identified the best Convolutional Neural Network (CNN), depth heuristic, and image warping technique to employ in our light field synthesis method. Our research demonstrated that applying backward image warping using a depth map estimated during volume rendering, followed by a CNN, produced high quality results. In this body of work, we further address the generalisation of the CNN when applied to volumes and transfer functions different from those it was trained on. We show that the CNN fails to generalise on a large dataset of head magnetic resonance images. Additionally, we speed up our implementation to enable better timing comparisons while remaining functionally equivalent to our previous method. This produces a real-time application of light field synthesis for volume data, and the results are of high quality for low-baseline light fields.
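The backward warping step described above can be illustrated with a minimal sketch: for each pixel of a novel view, the depth map yields a disparity that tells us where to sample the already-rendered centre view. This is not the authors' implementation; the function name, nearest-neighbour sampling, and the simple pinhole disparity model (disparity = focal × baseline / depth) are illustrative assumptions.

```python
import numpy as np

def backward_warp(src, depth, baseline, focal):
    """Backward-warp a source view to a horizontally shifted novel view.

    For every pixel of the novel view, the disparity
    focal * baseline / depth gives the horizontal offset at which
    to sample the source image (nearest-neighbour for brevity).
    """
    h, w = depth.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    disparity = focal * baseline / depth              # per-pixel shift in pixels
    src_x = np.clip(np.round(xs + disparity).astype(int), 0, w - 1)
    return src[ys, src_x]                             # gather from source view
```

In a full pipeline, the warped image would then be passed to the CNN to fill disocclusions and repair warping artefacts; with a small baseline the warp alone is already close to the target view, which is consistent with the high quality reported for low-baseline light fields.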
References
Adelson, E.H., et al.: The plenoptic function and the elements of early vision. In: Computational Models of Visual Processing, pp. 3–20. MIT (1991)
Frayne, S.: The looking glass (2018). https://lookingglassfactory.com/. Accessed 22 Nov 2018
Gortler, S.J., Grzeszczuk, R., Szeliski, R., Cohen, M.F.: The lumigraph. In: Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, pp. 43–54. SIGGRAPH 1996. ACM (1996). https://doi.org/10.1145/237170.237200
Kalantari, N.K., Wang, T.C., Ramamoorthi, R.: Learning-based view synthesis for light field cameras. ACM Trans. Graph. 35(6), 193:1–193:10 (2016). https://doi.org/10.1145/2980179.2980251
Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems, pp. 1097–1105 (2012)
Lanman, D., Luebke, D.: Near-eye light field displays. ACM Trans. Graph. (TOG) 32(6), 220 (2013)
Levoy, M., Hanrahan, P.: Light field rendering. In: Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, pp. 31–42. SIGGRAPH 1996. ACM (1996)
Lim, B., Son, S., Kim, H., Nah, S., Lee, K.M.: Enhanced deep residual networks for single image super-resolution. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, vol. 1, p. 4 (2017)
Lin, Z., Shum, H.Y.: A geometric analysis of light field rendering. Int. J. Comput. Vis. 58(2), 121–138 (2004). https://doi.org/10.1023/B:VISI.0000015916.91741.27
Lochmann, G., Reinert, B., Buchacher, A., Ritschel, T.: Real-time novel-view synthesis for volume rendering using a piecewise-analytic representation. In: Vision, Modeling and Visualization. The Eurographics Association (2016)
Loshchilov, I., Hutter, F.: SGDR: stochastic gradient descent with warm restarts. In: International Conference on Learning Representations (2017)
Mark, W.R., McMillan, L., Bishop, G.: Post-rendering 3D warping. In: Proceedings of the 1997 Symposium on Interactive 3D Graphics, pp. 7–16. ACM (1997)
Martin, S., Bruton, S., Ganter, D., Manzke, M.: Using a depth heuristic for light field volume rendering. In: Proceedings of VISIGRAPP 2019, pp. 134–144, May 2019. https://www.scitepress.org/PublicationsDetail.aspx?ID=ZRRCGeI7xV8=&t=1
Mildenhall, B., et al.: Local light field fusion: practical view synthesis with prescriptive sampling guidelines. arXiv preprint arXiv:1905.00889 (2019)
Mueller, K., Shareef, N., Huang, J., Crawfis, R.: IBR-assisted volume rendering. In: Proceedings of IEEE Visualization 1999, pp. 5–8. Citeseer (1999)
Park, S., Kim, Y., Park, S., Shin, J.A.: The impacts of three-dimensional anatomical atlas on learning anatomy. Anat. Cell Biol. 52(1), 76–81 (2019). https://doi.org/10.5115/acb.2019.52.1.76, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6449593/
Paszke, A., et al.: Automatic differentiation in PyTorch (2017)
Penner, E., Zhang, L.: Soft 3D reconstruction for view synthesis. ACM Trans. Graph. 36(6), 235:1–235:11 (2017). https://doi.org/10.1145/3130800.3130855
Poldrack, R.A., et al.: A phenome-wide examination of neural and cognitive function. Sci. Data 3, 160110 (2016)
Qi, C.R., Su, H., Nießner, M., Dai, A., Yan, M., Guibas, L.J.: Volumetric and multi-view CNNs for object classification on 3D data. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5648–5656 (2016)
Roettger, S.: Heart volume dataset (2018). http://schorsch.efi.fh-nuernberg.de/data/volume/Subclavia.pvm.sav. Accessed 15 Aug 2018
Shade, J., Gortler, S., He, L.W., Szeliski, R.: Layered depth images. In: Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, pp. 231–242. SIGGRAPH 1998. ACM, New York (1998). https://doi.org/10.1145/280814.280882
Shi, L., Hassanieh, H., Davis, A., Katabi, D., Durand, F.: Light field reconstruction using sparsity in the continuous Fourier domain. ACM Trans. Graph. 34(1), 1–13 (2014). https://doi.org/10.1145/2682631
Shojaii, R., et al.: Reconstruction of 3-dimensional histology volume and its application to study mouse mammary glands. J. Vis. Exp.: JoVE 89, e51325 (2014). https://doi.org/10.3791/51325
Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
Srinivasan, P.P., Wang, T., Sreelal, A., Ramamoorthi, R., Ng, R.: Learning to synthesize a 4D RGBD light field from a single image. In: IEEE International Conference on Computer Vision (ICCV), pp. 2262–2270, October 2017. https://doi.org/10.1109/ICCV.2017.246
Sundén, E., et al.: Inviwo - an extensible, multi-purpose visualization framework. In: IEEE Scientific Visualization Conference (SciVis), pp. 163–164, October 2015. https://doi.org/10.1109/SciVis.2015.7429514
Vagharshakyan, S., Bregovic, R., Gotchev, A.: Light field reconstruction using shearlet transform. IEEE Trans. Pattern Anal. Mach. Intell. 40(1), 133–147 (2018). https://doi.org/10.1109/tpami.2017.2653101
Wang, T.-C., Zhu, J.-Y., Hiroaki, E., Chandraker, M., Efros, A.A., Ramamoorthi, R.: A 4D light-field dataset and CNN architectures for material recognition. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9907, pp. 121–138. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46487-9_8
Wanner, S., Goldluecke, B.: Variational light field analysis for disparity estimation and super-resolution. IEEE Trans. Pattern Anal. Mach. Intell. 36(3), 606–619 (2014). https://doi.org/10.1109/TPAMI.2013.147
Wanner, S., Meister, S., Goldluecke, B.: Datasets and benchmarks for densely sampled 4D light fields. In: Vision, Modeling, and Visualization (2013)
Wu, G., Liu, Y., Dai, Q., Chai, T.: Learning sheared EPI structure for light field reconstruction. IEEE Trans. Image Process. 28(7), 3261–3273 (2019). https://doi.org/10.1109/TIP.2019.2895463
Wu, G., et al.: Light field image processing: an overview. IEEE J. Sel. Top. Sig. Process. 11(7), 926–954 (2017). https://doi.org/10.1109/jstsp.2017.2747126
Wu, G., Zhao, M., Wang, L., Dai, Q., Chai, T., Liu, Y.: Light field reconstruction using deep convolutional network on EPI. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1638–1646, July 2017. https://doi.org/10.1109/CVPR.2017.178
Yoon, Y., Jeon, H.G., Yoo, D., Lee, J.Y., So Kweon, I.: Learning a deep convolutional network for light-field image super-resolution. In: Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 24–32, December 2015. https://doi.org/10.1109/ICCVW.2015.17
Zellmann, S., Aumüller, M., Lang, U.: Image-based remote real-time volume rendering: decoupling rendering from view point updates. In: ASME 2012 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, pp. 1385–1394. ASME (2012)
Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018)
Zhou, T., Tucker, R., Flynn, J., Fyffe, G., Snavely, N.: Stereo magnification: learning view synthesis using multiplane images. arXiv preprint arXiv:1805.09817 (2018)
Acknowledgements
This research has been conducted with the financial support of Science Foundation Ireland (SFI) under Grant Number 13/IA/1895.
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Martin, S.K., Bruton, S., Ganter, D., Manzke, M. (2020). Synthesising Light Field Volume Visualisations Using Image Warping in Real-Time. In: Cláudio, A., et al. Computer Vision, Imaging and Computer Graphics Theory and Applications. VISIGRAPP 2019. Communications in Computer and Information Science, vol 1182. Springer, Cham. https://doi.org/10.1007/978-3-030-41590-7_2
Print ISBN: 978-3-030-41589-1
Online ISBN: 978-3-030-41590-7