
Graphical Models

Volume 85, May 2016, Pages 46-55

Structure guided interior scene synthesis via graph matching

https://doi.org/10.1016/j.gmod.2016.03.004

Abstract

We present a method for reshuffle-based 3D interior scene synthesis guided by scene structures. Given several 3D scenes, we represent each scene as a structure graph in which each edge carries a relationship set. Considering both object similarity and relation similarity, we then establish a furniture-object-based matching between scene pairs via graph matching. This matching allows us to merge the structure graphs into a unified structure, the Augmented Graph (AG). Guided by the AG, we synthesize scenes by reshuffling objects through three simple operations, i.e., replacing, growing and transfer. We also introduce a synthesis compatibility measure, which accounts for the environment of each furniture object, to filter out poor-quality results. We show that our method generates high-quality scene variations and outperforms the state of the art.

Introduction

Recently, 3D interior scenes have received increasing attention due to strong demand from industries such as computer games and virtual reality. However, designing and creating 3D digital scenes is still time-consuming even for artists. Fisher et al. [1] provided an efficient solution for example-based 3D scene synthesis built on moderate-to-large scene datasets. However, such a learning-based algorithm remains complicated because of the cost of data collection. Besides, the learned probability model might not always satisfy user-desired constraints, such as rigid grid layouts or exact alignment relationships. These issues can be addressed by utilizing the original examples directly rather than learning from an example dataset.

Synthesizing scenes directly from a small set of 3D scene examples, without learning algorithms, therefore remains desirable. Starting from this point, Xie et al. [2] introduced a non-learning-based scene synthesis method that groups furniture objects into different types of units and reshuffles interchangeable objects from the same units. Although their method can generate diverse new scenes to some extent, it is still rather limited by the small number of grouping types. In addition, their local analysis ignores the scene’s layout structure, which is an important guidance cue for scene generation. We observed that there is a latent rule in the layout distribution of furniture objects, both locally and globally. Locally, furniture objects often contact each other following certain relations: for example, a chair often closely faces a table, and a bedside cabinet typically sits at one side of a bed with one side aligned. Globally, the furniture objects together with these local relationships form a layout structure. Based on these observations, we carefully analyze the layout structures of the exemplar scenes and synthesize new scenes utilizing the relations between layout structures, which were not explored by Xie et al. [2].

Given several 3D interior scenes as examples, our goal is to synthesize new scenes with variations using a geometric approach rather than a learning-based strategy. Although furniture objects vary greatly in geometry, they are latently related through a small set of recurring spatial relations. In this paper, we first define five kinds of relations between furniture objects (Fig. 4(a-e)), i.e., the support, vertical contact, facing, aligned and close relations, which widely exist in 3D interior scenes. We then represent each 3D scene as a structure graph. Our structure graph differs from previous ones [2] in that we associate a relationship set, rather than a single relationship, with each edge. We establish a matching between the layout subgraphs (Fig. 4(f)) via graph matching, which provides a cue to relate two structure graphs. Based on the matching, we merge the scene structures into an Augmented Graph (AG), which encodes all the layout structure information among the examples. We then use the AG to guide scene synthesis through several simple and efficient operations, i.e., replacing, growing and transfer; the growing operation is especially efficient for adding a new object. These operations provide a flexible and user-friendly way to synthesize diverse scenes. To evaluate scene quality and avoid low-quality scenes during synthesis, we introduce a synthesis compatibility value that measures each synthesis operation and the quality of the resulting scene.
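To make the representation concrete, a structure graph with a relationship set per edge can be sketched as follows; the class and member names are illustrative assumptions, not the paper's implementation:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Relation(Enum):
    # the five relations defined in the paper
    SUPPORT = auto()
    VERTICAL_CONTACT = auto()
    FACING = auto()
    ALIGNED = auto()
    CLOSE = auto()

@dataclass
class StructureGraph:
    """Nodes are furniture objects; each edge carries a *set* of relations."""
    objects: set = field(default_factory=set)
    edges: dict = field(default_factory=dict)  # (a, b) -> set of Relation

    def add_relation(self, a, b, rel):
        self.objects.update({a, b})
        key = tuple(sorted((a, b)))            # undirected edge
        self.edges.setdefault(key, set()).add(rel)

# a chair that both faces and is close to a table
g = StructureGraph()
g.add_relation("chair", "table", Relation.FACING)
g.add_relation("chair", "table", Relation.CLOSE)
```

Storing a set per edge, rather than a single label, is what lets one edge record that a chair simultaneously faces and is close to the same table.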

Our main contribution lies in the following three points:

  • (1)

    We represent a 3D interior scene as a structure graph whose edges carry relationship sets, and introduce a furniture-object matching method between scene pairs via graph matching. Our scene matching is general and efficient, and can be used for applications beyond scene synthesis.

  • (2)

    We introduce a unified structure, Augmented Graph, to encode all the layout information from examples, augmented from the matched structure graphs. Guided by the AG, we provide three simple reshuffle-based synthesis operations, i.e., replacing, growing and transfer, to generate diverse new scenes.

  • (3)

    We also introduce a synthesis compatibility metric to measure scene quality during synthesis, making it efficient to filter out poor-quality results.
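As a toy illustration of the matching in contribution (1), the following brute-force matcher picks the one-to-one assignment with the highest total similarity. The paper formulates this as graph matching over object and relation similarity; the precomputed scoring dictionary and function name here are assumed stand-ins:

```python
from itertools import permutations

def match_scenes(sim, nodes_a, nodes_b):
    """Brute-force one-to-one matching maximizing total similarity.

    sim[(a, b)] is assumed to already combine object-geometry similarity
    and the overlap of the relation sets on the incident edges.
    Exponential in scene size; fine only for tiny examples.
    """
    best, best_score = None, float("-inf")
    for perm in permutations(nodes_b, len(nodes_a)):
        score = sum(sim.get((a, b), 0.0) for a, b in zip(nodes_a, perm))
        if score > best_score:
            best, best_score = list(zip(nodes_a, perm)), score
    return best

sim = {("chair1", "chairA"): 0.9, ("table1", "tableA"): 0.8,
       ("chair1", "tableA"): 0.1, ("table1", "chairA"): 0.2}
m = match_scenes(sim, ["chair1", "table1"], ["chairA", "tableA"])
# m == [("chair1", "chairA"), ("table1", "tableA")]
```

A real implementation would replace the exhaustive search with a polynomial-time assignment or graph-matching solver; the point here is only the objective being maximized.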

Section snippets

Related work

Rapidly designing and creating 3D content, such as shapes and 3D scenes, remains a challenging problem. In recent years, continuous progress has been made in shape processing (see [3] for more details). Here we focus only on example-based manipulation and analysis of shapes and 3D scenes.

Overview

Inspired by part-based methods for shape synthesis and scene analysis (see the discussions in the previous section), we provide a structure-guided method for synthesizing 3D interior scenes from a small set of examples. Our input is a small set of exemplar 3D interior scenes, each segmented into single furniture objects (Fig. 1) and upright-oriented [23]. As discussed previously, our approach does not require the furniture objects to be semantically tagged or labeled.

Scene matching

In this section we first show how we determine the facing direction of each furniture object. First, we compute a symmetry plane (if any) (Fig. 3) for each furniture object. The facing direction is always parallel to the symmetry plane. Users can specify the facing direction manually if none or multiple symmetry planes exist. In general, furniture objects which are located in the boundary region of a 3D scene often have facing directions pointing to the scene’s center. This motivated us to
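The center-pointing heuristic for boundary objects can be sketched as follows; the function name and the 2D floor-plane simplification are assumptions for illustration, not the paper's symmetry-plane computation:

```python
import math

def facing_direction(obj_pos, scene_center):
    """Heuristic: a boundary object's facing direction points toward the
    scene center (a simplified stand-in for the paper's symmetry-plane
    analysis). Positions are 2D points on the floor plane."""
    dx = scene_center[0] - obj_pos[0]
    dy = scene_center[1] - obj_pos[1]
    norm = math.hypot(dx, dy) or 1.0   # guard against a zero-length vector
    return (dx / norm, dy / norm)

# a sofa on the right wall faces the room center
d = facing_direction((4.0, 0.0), (0.0, 0.0))
# d == (-1.0, 0.0)
```

In the paper this heuristic only seeds the choice among candidate directions parallel to the symmetry plane; ambiguous cases fall back to user specification.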

Scene synthesis guided by augmented graph

Once furniture objects are matched using the scene matching method described in the previous section, we can synthesize new scenes by simply reshuffling the matched furniture objects. However, two layout graphs often have different numbers of objects, so some object nodes may have no correspondence. Simply reshuffling corresponding furniture objects therefore cannot always lead to diverse new scenes. Inspired by the work in [13] for blending shapes,
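A minimal sketch of merging two matched structure graphs into one augmented graph, assuming relation sets stored per undirected edge; the `augment` helper and node names are hypothetical:

```python
def augment(edges_a, edges_b, correspondence):
    """Merge two structure graphs into one 'augmented' graph.

    Matched nodes of scene B are renamed to their partners in scene A;
    unmatched nodes and their edges are kept as-is, so the union encodes
    both layouts, including objects that exist in only one scene.
    """
    rename = dict(correspondence)              # node_b -> node_a
    merged = {k: set(v) for k, v in edges_a.items()}
    for (u, v), rels in edges_b.items():
        key = tuple(sorted((rename.get(u, u), rename.get(v, v))))
        merged.setdefault(key, set()).update(rels)
    return merged

# scene A: bed aligned with a cabinet; scene B: a bed with a nearby lamp
ea = {("bed", "cabinet"): {"aligned"}}
eb = {("bedB", "lamp"): {"close"}}
ag = augment(ea, eb, [("bedB", "bed")])
# ag == {("bed", "cabinet"): {"aligned"}, ("bed", "lamp"): {"close"}}
```

Keeping the unmatched lamp node is what later enables the growing operation: an object present in only one exemplar can still be introduced into a synthesized scene.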

Data

We tested our method on example sets of 3D interior scenes from the open dataset of Xu et al. [22]. That dataset contains varying categories of interior scenes, such as living rooms and meeting rooms. Some representative scene synthesis results are shown in Fig. 9. The scenes were segmented into meaningful single furniture objects. We extracted the structure graphs and matched them via graph matching; please refer to the supplementary materials for details of the graph matching. After

Conclusion

In this paper, we introduced a method to synthesize scenes directly from unlabeled 3D interior scenes. Each scene is formulated as a structure graph associated with a relationship set. We establish a one-to-one matching between the layout subgraphs of the structure graph pairs via graph matching, and augment them into a unified structure Augmented Graph. Based on the Augmented Graph, we define three synthesis operations, i.e., replacing, growing, transfer, providing a flexible way to synthesize

Acknowledgments

This work was supported by the Natural Science Foundation of China (Project Nos. 61521002 and 61120106007), a Research Grant of the Beijing Higher Institution Engineering Research Center, and the Tsinghua University Initiative Scientific Research Program. Hongbo Fu was partially supported by grants from the Research Grants Council of HKSAR, China (Project Nos. 113513, 11204014 and 11300615).

References (31)

  • M. Fisher et al.

    Example-based synthesis of 3d object arrangements

    ACM SIGGRAPH Asia 2012 papers

    (2012)
  • H. Xie et al.

    Reshuffle-based interior scene synthesis

    Proceedings of the 12th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and Its Applications in Industry

    (2013)
  • N. Mitra et al.

    Structure-aware shape processing

    SIGGRAPH Asia 2013 Courses

    (2013)
  • T.A. Funkhouser et al.

    Modeling by example

    ACM Trans. Graph.

    (2004)
  • A. Jain et al.

    Exploring shape variations by 3d-model decomposition and part-based recombination

    Comput. Graph. Forum (Proc. Eurograph. 2012)

    (2012)
  • Y. Zheng et al.

    Smart variations: functional substructures for part compatibility

    Comput. Graph. Forum (Eurograph.)

    (2013)
  • S.-S. Huang et al.

    Support substructures: support-induced part-level structural representation

    IEEE Trans. Vis. Comput. Graph.

    (2015)
  • H. Liu et al.

    Replaceable substructures for efficient part-based modeling

    Comput. Graph. Forum

    (2015)
  • E. Kalogerakis et al.

    A probabilistic model for component-based shape synthesis

    ACM Trans. Graph.

    (2012)
  • S. Chaudhuri et al.

    Probabilistic reasoning for assembly-based 3d modeling

    ACM Trans. Graph.

    (2011)
  • S. Chaudhuri et al.

    Data-driven suggestions for creativity support in 3d modeling

    ACM SIGGRAPH Asia 2010 Papers

    (2010)
  • K. Xu et al.

    Fit and diverse: set evolution for inspiring 3d shape galleries

    ACM Trans. Graph.

    (2012)
  • I. Alhashim et al.

    Topology-varying 3d shape creation via structural blending

    ACM Trans. Graph.

    (2014)
  • M. Fisher et al.

    Context-based search for 3d models

    ACM Transactions on Graphics (TOG)

    (2010)
  • M. Fisher et al.

    Characterizing structural relationships in scenes using graph kernels

    ACM Transactions on Graphics (TOG)

    (2011)