
A Fast View Synthesis Implementation Method for Light Field Applications

Published: 12 November 2021

Abstract

View synthesis (VS) for light field images is a very time-consuming task due to the large number of pixels involved and the intensive computations required, which may prevent its use in practical real-time three-dimensional systems. In this article, we propose an acceleration approach for deep learning-based light field view synthesis, which significantly reduces computation by using compact-resolution (CR) representation and super-resolution (SR) techniques, together with lightweight neural networks. The proposed architecture comprises three cascaded neural networks: a CR network that generates a compact representation of the original input views, a VS network that synthesizes new views from the down-scaled compact views, and an SR network that reconstructs high-quality views at full resolution. All three networks are trained jointly with an integrated loss combining the CR, VS, and SR objectives. Moreover, exploiting the redundancy of deep neural networks, we apply an efficient lightweight strategy that prunes filters for simplification and inference acceleration. Experimental results demonstrate that the proposed method greatly reduces processing time and is much more computationally efficient while maintaining competitive image quality.
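The data flow of the three-stage cascade can be illustrated with a minimal sketch. The learned CR, VS, and SR networks are replaced here by trivial fixed operators (average pooling, a blend of two views, and nearest-neighbour upsampling), so this shows only the shape of the pipeline described in the abstract, not the paper's actual models:

```python
import numpy as np

def cr_net(view, scale=2):
    """Compact-resolution stage: 2x2 average pooling as a placeholder
    for the learned down-scaling network."""
    h, w = view.shape
    return view.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))

def vs_net(left, right, alpha=0.5):
    """View-synthesis stage: a simple blend of two compact input views
    stands in for the learned synthesis network."""
    return alpha * left + (1 - alpha) * right

def sr_net(view, scale=2):
    """Super-resolution stage: nearest-neighbour upsampling as a
    placeholder for the learned restoration network."""
    return np.repeat(np.repeat(view, scale, axis=0), scale, axis=1)

rng = np.random.default_rng(0)
v0, v1 = rng.random((64, 64)), rng.random((64, 64))

compact0, compact1 = cr_net(v0), cr_net(v1)   # full-res views -> 32x32 compact views
novel_lr = vs_net(compact0, compact1)         # synthesize the new view at low resolution
novel_hr = sr_net(novel_lr)                   # restore the new view to full resolution
print(novel_hr.shape)                         # (64, 64)
```

The computational saving comes from running the expensive VS stage on the down-scaled views: here it operates on one quarter of the original pixels, and the SR stage recovers full resolution afterwards.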




    Published In

ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 17, Issue 4 (November 2021), 529 pages
ISSN: 1551-6857
EISSN: 1551-6865
DOI: 10.1145/3492437

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 12 November 2021
    Accepted: 01 March 2021
    Revised: 01 February 2021
    Received: 01 October 2020
    Published in TOMM Volume 17, Issue 4


    Author Tags

    1. Deep learning-based view synthesis
    2. real-time acceleration
    3. compact representation
    4. filter pruning
    5. deep neural network compression
    6. light field systems

    Qualifiers

    • Research-article
    • Refereed

    Funding Sources

    • Ministry of Science and Technology of China - Science and Technology Innovations 2030
    • Natural Science Foundation of China
    • Guangdong Basic and Applied Basic Research Foundation
    • Shenzhen Science and Technology Plan Basic Research Project
    • Open Projects Program of National Laboratory of Pattern Recognition (NLPR)
    • CCF-Tencent Open Fund



Cited By

• (2024) Rethinking Feature Mining for Light Field Salient Object Detection. ACM Transactions on Multimedia Computing, Communications, and Applications 20:10, 1–24. DOI: 10.1145/3676967 (8-Jul-2024)
• (2024) Interpretable Task-inspired Adaptive Filter Pruning for Neural Networks Under Multiple Constraints. International Journal of Computer Vision 132:6, 2060–2076. DOI: 10.1007/s11263-023-01972-x (6-Jan-2024)
• (2024) Light field angular super-resolution by view-specific queries. The Visual Computer. DOI: 10.1007/s00371-024-03620-y (22-Sep-2024)
• (2024) Open-Source Projects for 3D Point Clouds. In Deep Learning for 3D Point Clouds, 255–272. DOI: 10.1007/978-981-97-9570-3_9 (10-Oct-2024)
• (2024) Deep-Learning-Based Point Cloud Enhancement II. In Deep Learning for 3D Point Clouds, 99–130. DOI: 10.1007/978-981-97-9570-3_4 (10-Oct-2024)
• (2024) Deep-Learning-Based Point Cloud Enhancement I. In Deep Learning for 3D Point Clouds, 71–97. DOI: 10.1007/978-981-97-9570-3_3 (10-Oct-2024)
• (2024) Learning Basics for 3D Point Clouds. In Deep Learning for 3D Point Clouds, 29–70. DOI: 10.1007/978-981-97-9570-3_2 (10-Oct-2024)
• (2024) Future Work on Deep Learning-Based Point Cloud Technologies. In Deep Learning for 3D Point Clouds, 301–315. DOI: 10.1007/978-981-97-9570-3_11 (10-Oct-2024)
• (2024) Typical Engineering Applications of 3D Point Clouds. In Deep Learning for 3D Point Clouds, 273–299. DOI: 10.1007/978-981-97-9570-3_10 (10-Oct-2024)
• (2024) Introduction to 3D Point Clouds: Datasets and Perception. In Deep Learning for 3D Point Clouds, 1–27. DOI: 10.1007/978-981-97-9570-3_1 (10-Oct-2024)
