Abstract
High-quality depth images are required for stable and accurate camera tracking and 3D modeling. Deep-learning methods for improving depth image quality, however, require large and accurate training datasets in advance. The Middlebury Datasets, which are typical depth image datasets, do not always reproduce the noise characteristics of actual depth cameras, and the number of samples they provide is insufficient. In this study, a method for creating training datasets for deep learning was developed and evaluated. The proposed method improves the distance accuracy of each pixel by aligning, across multiple frames, the pixels that capture the same part of the real world. In addition to super-resolution and denoising, the images are preprocessed by patch division and data augmentation to eliminate holes in the ground-truth depth images. With this method, a large number of real-environment datasets can be created automatically. Two neural networks were trained, one on the Middlebury Datasets and one on the datasets generated by the proposed method, and each was used to produce high-quality depth images. To compare the two, we visually evaluated hole filling and the smoothness of object edges and surfaces in the results. The results showed that the network trained on the datasets created by the proposed method removes noise better than the one trained on the Middlebury Datasets, because the proposed datasets include the noise characteristics caused by the performance limits of depth cameras.
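The sketch below illustrates, under stated assumptions, the two ideas the abstract describes: aggregating depth measurements from multiple aligned frames to refine the per-pixel distance, and dividing the refined depth map into hole-free patches for training. It is not the authors' implementation; the camera intrinsics K, the relative poses T_list, and all function names are hypothetical placeholders introduced only for illustration.

```python
# Minimal sketch (not the authors' code) of multi-frame depth fusion and patch division.
# Assumptions: depth images are float arrays in metres with 0 meaning "no measurement",
# K is a 3x3 pinhole intrinsics matrix, and T_src_to_ref is a 4x4 source-to-reference pose.
import numpy as np

def backproject(depth, K):
    """Back-project a depth image (H, W) into camera-space 3D points (H, W, 3)."""
    h, w = depth.shape
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

def warp_depth_to_reference(depth, K, T_src_to_ref):
    """Transform a source depth image into the reference frame and re-project it."""
    h, w = depth.shape
    pts = backproject(depth, K).reshape(-1, 3)
    valid = pts[:, 2] > 0                                   # drop pixels with no measurement
    pts_ref = (T_src_to_ref[:3, :3] @ pts[valid].T).T + T_src_to_ref[:3, 3]
    z = pts_ref[:, 2]
    front = z > 0
    u = np.round(K[0, 0] * pts_ref[front, 0] / z[front] + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * pts_ref[front, 1] / z[front] + K[1, 2]).astype(int)
    warped = np.zeros((h, w), dtype=np.float32)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    warped[v[inside], u[inside]] = z[front][inside]         # simple splat; later writes win
    return warped

def fuse_frames(ref_depth, src_depths, K, T_list):
    """Average aligned depth values per pixel to suppress sensor noise and fill holes."""
    stack = [ref_depth] + [warp_depth_to_reference(d, K, T)
                           for d, T in zip(src_depths, T_list)]
    stack = np.stack(stack)                                  # (N, H, W)
    mask = stack > 0
    counts = np.maximum(mask.sum(axis=0), 1)
    return stack.sum(axis=0) / counts                        # per-pixel mean of valid observations

def to_patches(image, patch=64, stride=64):
    """Divide a depth image into fixed-size patches, keeping only patches without holes."""
    h, w = image.shape
    return [image[y:y + patch, x:x + patch]
            for y in range(0, h - patch + 1, stride)
            for x in range(0, w - patch + 1, stride)
            if np.all(image[y:y + patch, x:x + patch] > 0)]
```

In an actual pipeline the relative poses would come from camera tracking; the sketch only shows the aggregation and patching steps the abstract outlines, with simple per-pixel averaging standing in for whatever fusion rule the paper uses.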
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Murayama, M., Harazono, Y., Ishii, H., Shimoda, H., Taruta, Y., Koda, Y. (2021). Development of Real Environment Datasets Creation Method for Deep Learning to Improve Quality of Depth Image. In: Degen, H., Ntoa, S. (eds) Artificial Intelligence in HCI. HCII 2021. Lecture Notes in Computer Science, vol. 12797. Springer, Cham. https://doi.org/10.1007/978-3-030-77772-2_27
DOI: https://doi.org/10.1007/978-3-030-77772-2_27
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-77771-5
Online ISBN: 978-3-030-77772-2