ABSTRACT
We present a contextual generative network for 3D shapes based on a conditional variational autoencoder, which learns a subspace of plausible complementary parts in the context of a partial shape. With the learned part-subspace prior, which encodes bi-part spatial relations and geometry descriptions, a shape is generated via iterative "next-part reasoning", where each next part is sampled conditioned on the current partial shape. Furthermore, our conditional subspace allows not just one but a set of reasonable next parts to be generated, which adds controllability (e.g., via user selection) to the generative process. Our core idea of reasoning about next parts through conditional modeling offers a new way of understanding shape structures by modeling part correlations. Evaluations show the effectiveness of our approach and the diversity of the generated shapes.
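The iterative "next-part reasoning" loop described above can be sketched in a few lines. This is a minimal illustration only: the placeholder `encode_partial` and `decode_part` functions below stand in for the paper's learned conditional-VAE encoder and decoder, and the 8-dimensional vectors are arbitrary stand-ins for real part descriptors. The point is the control flow: at each step, several latent codes are drawn from the prior, decoded into candidate parts conditioned on the current partial shape, and one candidate is selected and appended.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_partial(partial_parts, dim=8):
    """Placeholder context encoder: summarizes the current partial shape
    into a fixed-size condition vector (stand-in for a learned network)."""
    if not partial_parts:
        return np.zeros(dim)
    return np.mean(np.stack(partial_parts), axis=0)

def decode_part(z, cond):
    """Placeholder conditional decoder: maps a latent sample z and the
    condition vector to a part descriptor (geometry + placement)."""
    return np.tanh(z + 0.5 * cond)

def sample_next_parts(partial_parts, n_candidates=3, latent_dim=8):
    """Sample a *set* of plausible next parts conditioned on the partial
    shape; multiple latent draws yield multiple candidates for selection."""
    cond = encode_partial(partial_parts, latent_dim)
    zs = rng.standard_normal((n_candidates, latent_dim))
    return [decode_part(z, cond) for z in zs]

def generate_shape(n_parts=4, select=lambda candidates: candidates[0]):
    """Iterative next-part reasoning: repeatedly sample candidate parts
    and add the selected one (here, the first) to the partial shape."""
    parts = []
    for _ in range(n_parts):
        candidates = sample_next_parts(parts)
        parts.append(select(candidates))
    return parts

shape = generate_shape()
print(f"generated {len(shape)} parts")
```

The `select` callback is where controllability enters: replacing the default with a user-driven choice among the sampled candidates steers generation without retraining.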