Publicly Available Published by Oldenbourg Wissenschaftsverlag November 24, 2017

Perceptual Rules for Building Enhancements in 3D Virtual Worlds

  • Patrick Tutzauer

    Patrick Tutzauer studied Geodesy and Geoinformatics at the University of Stuttgart. Since 2013 he has worked as a research associate and doctoral student at the Institute for Photogrammetry. His research focuses on 3D building reconstruction/modeling and Machine Learning techniques for urban data.

  • Susanne Becker

    Susanne Becker is a research associate at the Institute for Photogrammetry, where she is responsible for research and involved in teaching in the fields of Geoinformation, Pattern Recognition and Remote Sensing. Her research interests are 3D indoor reconstruction, 3D façade reconstruction and the semantic interpretation of urban data.

  • Norbert Haala

    Norbert Haala is professor at the Institute for Photogrammetry, University of Stuttgart, where he is responsible for research and teaching in Photogrammetric Computer Vision and Image Processing. His main research interests cover automatic approaches for image-based generation of high-quality 3D data with a special focus on virtual city modeling.

From the journal i-com

Abstract

While the generation of geometric 3D virtual models has become feasible to a great extent, the enrichment of the resulting urban building models with semantics remains an open research question in the field of geoinformation and geovisualisation. This additional information is not only valuable for applications like Building Information Modeling (BIM) but also offers possibilities to enhance the visual insight for humans when interacting with that kind of data. Depending on the application, presenting users with the highest level of detail of building models is often neither the most informative nor the most feasible option. When using mobile apps, for example, resources and display sizes are quite limited. A concrete use case is the imparting of building use types in urban scenes to users. Within our preliminary work, user studies helped to identify features that are important for the human ability to associate a building with its correct usage type. In this work we embed this knowledge into building category-specific grammars to automatically modify the geometry of a building and align its visual appearance with its underlying use type. If the building category for a model is not known beforehand, we investigate its feature space and try to derive its use type from there. Within the context of this work, we developed a Virtual Reality (VR) framework that gives the user the possibility to switch between different building representation types while moving in the VR world, thus enabling us in the future to evaluate the potential and effect of the grammar-enhanced building models in an immersive environment.

1 Introduction

In the constantly evolving digital domain, the presentation of 3D content has become a mainstream application. We watch movies in 3D, play games in 3D virtual worlds on our computers and game consoles, or dive even deeper into immersive experiences with the growing availability of Virtual Reality devices. These technological developments also trigger new applications and demands in the field of geoinformation and geovisualisation. In the past, the main focus of 3D geodata was on geometry. Recent research efforts allow the generation of dense and reliable geometric representations as meshed 3D point clouds; however, the automatic extraction and provision of semantic information still remains an open problem. We do not only want to get what we see, but also to implicitly describe what can be seen. This semantic data is not only valuable for planning and construction applications, as required by Building Information Modelling or aspired urban development concepts like Smart Cities, but also provides new possibilities for enhancing the visual insight for humans when interacting with that kind of geodata. Hence, there is a need to obtain semantic information about 3D virtual building models. Furthermore, this semantic information has to be incorporated into the actual representation of the building. Within the context of this work, we present an approach to estimate the building use category of yet unknown building models. This information is then used to enhance the visual appearance of the model by means of perceptual rules so that it becomes more understandable for humans. The contributions of this paper are the automated parsing of CityGML[1] to obtain key properties of a building and the feature space-based determination of the associated building use type in case this information is not known beforehand.
Additionally, feeding this information into category-specific rule sets allows us to enhance the model’s appearance, making it visually easier to understand. We see this pipeline as a contribution to the process of adapting virtual building representations for better perceptual insight.

2 Related Work

Virtual 3D cities are important tools for the visual communication of diverse urban-related information. Since humans are the direct recipients of this information transfer, it is vital that 3D city representations account for human spatial cognition. Sections 2.1 and 2.2 give a brief overview of different representation types and procedural modelling techniques, respectively. Approaches which consider perceptual issues when visualising virtual 3D city models are covered in section 2.3. Finally, section 2.4 presents our current work on a virtual reality framework which we will use to compare and evaluate different city representations in an immersive environment.

2.1 Geometric Representations of Virtual 3D Cities

The variety of geometric representations of urban scenes is wide: most virtual 3D cities are collections of 3D buildings given as boundary representations (BReps). Following CityGML, the OGC standard for 3D city models [13], [9], the geometric level of detail (LoD) of such 3D building representations can be differentiated into four degrees: 1) planar facades, flat roof structures (LoD1); 2) planar facades, detailed roof structures (LoD2); 3) facades with 3D structures (LoD3); 4) indoor models (LoD4). Since most existing 3D city models have been reconstructed from airborne data, the majority of 3D building models are of LoD2.

Triggered by the developments in the field of Computer Vision and Computer Graphics, urban scenes are meanwhile alternatively represented by dense 3D point clouds. These point clouds are either the direct output of laser scanning or, pushed by the development of Structure-from-Motion and dense multi-image matching techniques, the result of photogrammetric derivation from images [6], [10], [16]. Google Earth, for example, solely uses meshed point clouds for their representations. By this, they avoid the derivation of geometrically and semantically interpreted BReps with a defined LOD which, however, are required for all applications that go beyond pure visualizations.

2.2 Procedural Modelling of Virtual 3D Cities

3D building structures in the form of LoD1, LoD2 or LoD3 building models are reconstructed from airborne (nadir or oblique) or terrestrial LiDAR and image data [11], [5]. Automatic approaches that are robust to noisy or incomplete sensor data usually integrate object knowledge into the reconstruction process. Such knowledge about the object’s geometry can be represented by means of a formal description of the operations necessary for reconstructing the object. This structural knowledge comprises basic geometric primitives, i.e., the terminals, and production rules, which are a fundamental part of formal grammars and, as such, can be used for both the interpretation [4] and the procedural modelling of building structures [26], [18], [23]. Most such grammars are probabilistic, meaning that their rules are equipped with probabilities in order to allow for variety in the generated building structures. See [25] for a detailed overview of grammar-based approaches to building modelling.

Procedural modelling is an efficient technique to generate topologically correct 3D building models in large quantities. Typical applications are, for instance, urban planning and simulation. A prominent procedural software tool is Esri CityEngine. Based on a set of given rules, CityEngine synthesizes virtual 3D building models in a specific level of detail and architectural style defined in the rules. Another example for procedural software tools is Random3Dcity [3]. In contrast to CityEngine, this approach generates synthetic datasets of buildings in multiple LoDs. Generally, formal grammars can be applied to produce virtual 3D buildings from scratch. However, they can also be used to efficiently enhance already existing building representations, e.g., by augmenting the planar facades of LoD2 models by 3D window and door geometries [2] or 3D structures like stairs [21].

All these formal grammars developed and used for the efficient generation of 3D building structures contain objective knowledge about buildings, their construction as well as their meaning. Knowledge of a more subjective nature, i.e. knowledge about the way a building’s information content is perceived by a human when looking at the generated 3D representation, is not considered. But, it is exactly this kind of knowledge that is vital for an efficient communication of urban-related information.

2.3 Human Perception of Virtual 3D Cities

Research on the human perception of geometric objects stems from a variety of branches of science, e.g., geoinformatics and photogrammetry, geography, cartography and computer graphics. Findings of Gestalt theory play a particularly important role here. For example, Li et al. [14] exploit Gestalt principles for the grouping and generalization of 2D building footprints, and Michaelsen et al. [17] refer to Gestalt-based groupings for the detection of 2D window structures in terrestrial thermal imagery. Within the wide field of visualization approaches, Adabala [1] presents a perception-based technique for generating abstract 2D renderings of building façades, and Nan et al. [19] apply conjoining Gestalt rules for the abstraction of architectural 2D drawings.

Figure 1: Triangle mesh obtained by means of airborne photogrammetry, embedded in the VR application.

Approaches to the human perception of geometric building representations which are not restricted to 2D structures or 2D visualizations but instead are directly located in 3D space are often developed in the context of cartography. Prominent representatives are provided by Glander and Döllner [7] and Pasewaldt et al. [20], who use cognitive principles for generating abstract interactive visualizations of virtual 3D city models. These approaches, like most others dealing with perception-based abstraction of virtual 3D cities, focus on emphasizing landmarks while generalizing and suppressing buildings that are supposed to be unimportant from a tourist’s point of view (e.g. [8]). Semmo et al. [22] combine the use of different levels of detail and abstraction with different graphic styles in a single view. Seamless transitions are implemented by rendering objects several times in different styles and blending the intermediate image results. The best level of detail, abstraction and graphic style for each object in a scene are selected automatically. This is accomplished by exploiting knowledge about landmarks, the view distance, the vertical view angle, a predefined region of interest and the object’s category (e.g. building, green space, street, water, terrain). Targeted variations of building geometries with the aim of improving the understanding of building-related semantics (e.g. the building’s usage) are not supported.

2.4 Virtual Reality Framework

Tremendous developments in hardware and software have boosted the Virtual Reality sector in recent years. Decreasing costs and increasing computational capabilities mean that meanwhile even smartphones serve as VR devices. Apart from that, stationary graphics-card-powered devices like the Oculus Rift or HTC Vive, and even completely autonomous ones like the Mixed-Reality-focused Microsoft HoloLens, have hit the consumer market and enable deeply immersive experiences. We see this as a chance for our community to get an actual feel for photogrammetric products such as point clouds and triangle meshes derived from aerial/UAV or terrestrial imagery. This way, data results can be made attractive to a broader audience. In the future, VR will not only serve as an environment to inspect data, but also to edit it (for example, to smooth out imperfections in triangulated meshes) and save the edits back to the actual input data. However, this is not the only area where we see potential benefits of VR. Within the context of this work, we conceptualized a framework in Unity that gives users the possibility to move around in an urban environment and switch between different building representation types [12]. Our approach is designed for the HTC Vive system and offers the user several in-game functions: locomotion via teleport and two different flight modes (Superman and Bird View), a mini-map for orientation and quick navigation, switching between different building representation types, a distance measurement tool, calculation of the best-fitting plane in a selected area, and prototypic grabbing, rotating and scaling of building primitives such as windows. Figure 1a exhibits a use case for the measurement tool; Figure 1b depicts navigation and orientation using the mini-map. Due to the room-scale capabilities of the HTC Vive system, the user can walk around in the real world and cover equal distance units in VR.
However, when trying to cover larger distances in the virtual world, the most efficient and human-friendly way of locomotion is pointing at a location and teleporting there directly. Additionally, we implemented two flying modes for large scenes. In Superman mode, one controller serves as a 3D joystick to navigate through the air within a fixed velocity range to prevent motion sickness. Bird View mode imitates the movement of birds: the user mimics wings with his or her arms. Flapping the arms with the game controllers in hand gains speed, and tilting the arms, just like a plane, leads to flying a curve. We implemented this framework as a testbed to visualize content produced by means of photogrammetry, and we want to use it to study the influence of different representation types of urban data on users. Thus, it is possible to toggle between different 3D data types for the same urban area. The current setup contains three different representation types: untextured LoD2 models, textured LoD2 models for selected buildings (from the SketchUp 3D Warehouse) and a textured 2.5D triangle mesh. The first two are not too demanding in terms of CPU and GPU load. However, the textured mesh contains massive numbers of triangles at its highest level of detail. Therefore, we make use of a level-of-detail concept where only those parts very close to the camera are rendered at the highest level of detail. This way, a sufficiently high frame rate can be maintained at all times to ensure a smooth VR experience. Ultimately, we want to use the testbed to let users switch between different versions of the same building to examine whether our perception-based adapted building models enhance the level of insight and help users better conceive a building’s use type. A first proof of concept is discussed in section 4.3.
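The distance-dependent rendering described above can be sketched as a simple threshold lookup. This is a minimal illustration only; the function name and threshold values are assumptions, not taken from our Unity implementation:

```python
# Hypothetical sketch: pick a render level of detail (LoD) from camera distance.
# Thresholds (in metres) are illustrative placeholders.

def select_lod(distance_m: float, thresholds=(50.0, 200.0, 1000.0)) -> int:
    """Return 0 (full detail) up to len(thresholds) (coarsest proxy)."""
    for level, limit in enumerate(thresholds):
        if distance_m <= limit:
            return level
    return len(thresholds)

# Mesh tiles close to the camera get the dense mesh, far ones a decimated proxy.
print(select_lod(10.0))    # 0: full-resolution mesh
print(select_lod(500.0))   # 2: coarse mesh
```

The same lookup, evaluated per mesh tile each frame, keeps the triangle budget bounded so the frame rate stays high.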

3 Building Feature Space

Within our first user studies we designed features that can be used to describe each individual building. Those properties can either be directly derived from the model geometry, such as building height and footprint, or they are calculated by considering building-related semantic aspects, such as number of floors, entrances, windows and balconies. For our studies we distinguish between six different building categories: One-Family Buildings (OFB), Multi-Family Buildings (MFB), Residential Towers (RT), Buildings With Shops (BWS), Office Buildings (OFF) and Industrial Facilities (IF).

Figure 2: Workflow of the proposed approach.

Section 3.1 discusses in further detail the features used and how they are derived from unknown buildings. Section 3.2 elaborates on how we use the building features to determine which kind of building is at hand. Figure 2 gives a complete overview of the proposed approach.

3.1 Building Features

Each building has a set of essential geometric and semantic properties. Semantics within this context are considered the functional aspect of façade geometries like windows or doors. In previous work we identified building properties that seem to be relevant for humans to associate a building with a specific use type [24]. Those features are: building footprint, number of floors, floor height, total height, number of windows per façade, mean window surface area, window-to-wall-surface ratio, number of entrances, mean entrance surface area, number of balconies, mean balcony surface area, different arrangement of windows in the ground floor compared to the remaining floors, different window size in the ground floor with respect to the following floors, different window shapes in the ground floor, different shape of ground plan in the ground floor, and roof complexity. More information on those features is available in [24], where they were extracted manually.

Figure 3: Input CityGML model (left) and an excerpt of the resulting parsed XML (right).

However, since CityGML is the de facto standard in the GIS community, we utilize this data format and now derive the features in an automated manner. Since CityGML incorporates semantic information, we can access the related geometry of building primitives (such as windows and doors). This way, we can parse the façade for its geometry and semantics and encode this information into an XML file containing the essential information to describe the building. Figure 3 shows an excerpt of such a generated XML file. Note that the parsed building in this example actually has 22 floors; for better visibility we only show a truncated version.
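The parsing step can be sketched with Python's standard XML tooling. The CityGML fragment below is a hand-written, heavily reduced illustration (a real file carries full GML geometry); the namespace URI is the official CityGML 2.0 building namespace:

```python
import xml.etree.ElementTree as ET

# Minimal, hand-written CityGML-like fragment for illustration only.
CITYGML = """<bldg:Building xmlns:bldg="http://www.opengis.net/citygml/building/2.0">
  <bldg:boundedBy>
    <bldg:WallSurface>
      <bldg:opening><bldg:Window/></bldg:opening>
      <bldg:opening><bldg:Window/></bldg:opening>
      <bldg:opening><bldg:Door/></bldg:opening>
    </bldg:WallSurface>
  </bldg:boundedBy>
</bldg:Building>"""

NS = {"bldg": "http://www.opengis.net/citygml/building/2.0"}
root = ET.fromstring(CITYGML)

# Semantic tags let us collect window and door primitives directly.
windows = root.findall(".//bldg:Window", NS)
doors = root.findall(".//bldg:Door", NS)
print(len(windows), len(doors))  # 2 1
```

In the actual pipeline, each primitive's geometry would then be read from its GML posLists and written to the custom XML shown in Figure 3.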

Figure 4: Exemplary storey of a façade with window primitives. Three different instances can be identified. This façade string can be further compressed by merging the repeated sequence w1, w2, w1.

To extract relevant information from the CityGML model we first have to identify all primitives contained in each façade. These are not stored in a floor-wise manner. Therefore, we have to determine the planes dividing a building into different floors. First, all primitives contained in the façade are sorted by their vertical position. Then, comparable to a plane-sweep approach, we try to find the vertical positions where no primitives are intersected and locate the floor planes there. Based on this floor-wise division, all primitives can be sorted into their according storeys. Subsequently, unique primitive instances have to be identified in order to construct a façade string. A window sequence within a storey can be thought of as a string of symbols. Based on that string we can detect repetitive patterns and derive a hierarchical structure. Figure 4 shows an exemplary storey with different windows. Each window is investigated with respect to its geometry, and instances with the same spatial extents are grouped, leading to the notation wi, with i = 0, 1, 2, denoting that three window types were found. They represent terminal symbols of a grammar (see section 4.1). If we denote the storey in Figure 4 as St0 and derived repetitive patterns as Ψi, then the sequence can be described as follows:

St0 → w0 w1 w2 w1 w1 w2 w1 w0
St0 → w0 Ψ0 Ψ0 w0, with Ψ0 → w1 w2 w1
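The detection of the repeated pattern Ψ0 can be illustrated with a small sketch that greedily searches for the longest adjacent repeat in the façade string (one level of hierarchy only; the function and the placeholder name "Psi0" are our own):

```python
def compress(tokens):
    """Greedily find the longest adjacently repeated pattern and fold it."""
    n = len(tokens)
    for size in range(n // 2, 0, -1):              # prefer longer patterns
        for start in range(0, n - 2 * size + 1):
            pattern = tokens[start:start + size]
            reps = 1
            while tokens[start + reps * size:start + (reps + 1) * size] == pattern:
                reps += 1
            if reps > 1:
                head, tail = tokens[:start], tokens[start + reps * size:]
                # replace the run by a (name, pattern, repetitions) triple
                return head + [("Psi0", tuple(pattern), reps)] + tail
    return tokens

storey = ["w0", "w1", "w2", "w1", "w1", "w2", "w1", "w0"]
print(compress(storey))  # ['w0', ('Psi0', ('w1', 'w2', 'w1'), 2), 'w0']
```

Applied recursively, this yields the hierarchical structure described above.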

Besides deriving façade strings, we process the whole building to derive all essential building properties that form the key appearance of a building, such as height, depth, width and its ground plan. Those key properties are then exported to a custom XML as depicted in Figure 3 and will serve as seeds for the generation of new building instances in section 4.
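The floor-division step described earlier can be sketched as a plane sweep over the vertical extents of the façade primitives. Here primitive geometry is reduced to (bottom, top) height intervals; the step size and the example heights are illustrative assumptions:

```python
def floor_planes(primitives, step=0.1):
    """Sweep a horizontal plane upward over the facade; height bands that
    intersect no primitive are storey gaps, and each gap's midpoint is taken
    as a floor plane."""
    lo = min(y0 for y0, _ in primitives)
    hi = max(y1 for _, y1 in primitives)
    planes, gap_start, h = [], None, lo
    while h <= hi:
        hit = any(y0 <= h <= y1 for y0, y1 in primitives)
        if not hit and gap_start is None:
            gap_start = h                                 # gap opens
        elif hit and gap_start is not None:
            planes.append(round((gap_start + h) / 2, 3))  # gap closes
            gap_start = None
        h = round(h + step, 6)
    return planes

# Window primitives of two storeys: sills at 1.0 m / 1.3 m and 4.0 m.
windows = [(1.0, 2.5), (1.3, 2.5), (4.0, 5.5)]
print(floor_planes(windows))  # [3.3]
```

The resulting plane heights are then used to bin every primitive into its storey.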

3.2 Feature Space

Taking into account the features mentioned in section 3.1, we define an embedding for every building model. This means we can transform a building into a feature representation and describe similarities between buildings by means of distances in a feature space.
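Such an embedding can be sketched as a mapping from a building's property dictionary to a fixed-length vector. The feature names below are an assumed, illustrative subset of the 16 features; in practice the features would also be normalized before computing distances:

```python
import math

# Illustrative subset of the paper's 16 features; names are assumptions.
FEATURES = ["footprint_m2", "n_floors", "floor_height_m",
            "n_windows", "window_wall_ratio", "n_balconies"]

def embed(building: dict) -> list:
    """Map a building's property dict to a fixed-length feature vector."""
    return [float(building.get(f, 0.0)) for f in FEATURES]

def distance(a: dict, b: dict) -> float:
    """Euclidean distance in feature space as a (dis)similarity measure."""
    return math.dist(embed(a), embed(b))

ofb = {"footprint_m2": 120, "n_floors": 2, "floor_height_m": 2.6,
       "n_windows": 8, "window_wall_ratio": 0.15, "n_balconies": 0}
rt  = {"footprint_m2": 400, "n_floors": 18, "floor_height_m": 2.8,
       "n_windows": 180, "window_wall_ratio": 0.35, "n_balconies": 30}
print(distance(ofb, ofb))       # 0.0: identical buildings coincide
print(distance(ofb, rt) > 0.0)  # True: dissimilar buildings are far apart
```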

Figure 5: t-SNE representation of the mean buildings for each category. Blue lines denote the link between corresponding ground-truth and as-perceived instances.

We performed a t-Distributed Stochastic Neighbour Embedding (t-SNE) to give a better impression of the discrepancies between ground truth and as-perceived building categories in our previous tests. t-SNE is a technique for dimensionality reduction, especially suitable for high-dimensional data [15]. In a pre-processing step, Principal Component Analysis (PCA) is used to initially reduce the dimensionality of the input data. Subsequently, t-SNE maps the input to either a 2D or 3D visualization. In our case, the input space is 16-dimensional, since a total of 16 features describe a building. In total we map 12 different instances, describing the mean building of each class from the ground truth and as perceived, respectively. Figure 5 shows the t-SNE mapping; every coloured point denotes a building category instance, and blue lines link ground truth and the mean instances as classified by users. From this figure, some of our findings also become visually clear. One-Family Buildings and Industrial Facilities could be classified quite well; accordingly, they have small distances in the t-SNE mapping. However, users had problems distinguishing between Office Buildings and Buildings With Shops; correspondingly, the distances are larger and entanglements occur. To determine the building category of a new building, we first have to parse it as described in section 3.1. Once the essential properties are extracted, we can perform a classification that can be considered a supervised learning approach. Our trained classes are the mean ground-truth building instances, denoted Xgti, with i corresponding to each of the six building categories. A 1-Nearest Neighbour (1-NN) classifier can then be used to classify each new unknown building sample. To mitigate the curse of dimensionality, we also implemented a dimensionality reduction using PCA before searching for the nearest-neighbour category. However, in our examples there was no difference between 1-NN on the original features and 1-NN after PCA pre-processing.
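The classification step can be sketched as follows. The class means below are random placeholders standing in for the mean ground-truth buildings, so only the mechanics, not the values, reflect our setup:

```python
import numpy as np

rng = np.random.default_rng(0)
CATEGORIES = ["OFB", "MFB", "RT", "BWS", "OFF", "IF"]

# Placeholder class means in the 16-D feature space; in the paper these are
# the mean ground-truth buildings per category, here they are random.
class_means = rng.normal(size=(6, 16))

def classify_1nn(x, means=class_means, labels=CATEGORIES):
    """Assign the category whose mean building is nearest in feature space."""
    d = np.linalg.norm(means - x, axis=1)
    return labels[int(np.argmin(d))]

# A sample exactly at a class mean is trivially assigned to that class.
print(classify_1nn(class_means[2]))  # RT
```

An optional PCA projection of both the sample and the means onto the leading principal components could precede the distance computation without changing this interface.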

If a building is quite different from what we identified as the mean building of its category, this leads to a large distance in feature space. In consequence, a building actually belonging to category A may get classified as belonging to category B due to a smaller feature-space distance. This can be problematic for the subsequent building category-specific adaptation, since wrong rules might be applied. However, since CityGML is widely used by surveying offices, the building category is often contained in the XML structure. We take advantage of that fact: whenever the building usage is found during parsing, we pull the information from there and do not need to perform any feature-space processing.

4 Perception-Based Rules

In this section we address the process of enhancing buildings so that they become visually more distinctive for humans in terms of their type of use. Section 4.1 gives an introduction to and overview of grammars for buildings. In section 4.2, we discuss the properties and purpose of building category-specific rule sets, and in section 4.3 we show how those rules can be used either to enhance existing buildings or to create completely new building category-specific instances.

4.1 Grammars for Buildings

The concept of grammars is generic and can be used for a variety of tasks. The general concept is therefore commonly referred to as a formal grammar. A formal grammar consists of symbols (the alphabet) and a set of production rules to generate content (the syntax). In a formal language this content would be a string; in our case it is a 3D object. The notation of the grammar is as follows:

G=(N,Σ,P,S)

where N is a set of non-terminal and Σ a set of terminal symbols. These two sets are disjoint: terminals cannot be further decomposed, whereas non-terminals can be replaced and expressed in terms of a set of terminals. S is called the axiom, a non-terminal defining the starting point. P describes the set of production rules, which are expressed as follows:

id: lc < pred > rc : cond → succ : prob

where id is a consecutive rule number, lc is the left context, pred is the predecessor symbol, rc is the right context, cond is a condition under which the rule is applied, succ is the successor symbol and prob is the probability with which the rule is applied. Within the context of generating building structures, the so-called Split Grammar [26] and, later, as a continuation, the Computer Generated Architecture (CGA) Shape Grammar [18] were developed. In the following sections we will use this grammar to define specific rule sets for each building category and then use them to generate enhanced versions of the input building.
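A minimal sketch of such a probabilistic derivation, with contexts and conditions omitted and all rule and symbol names invented for illustration, could look as follows:

```python
import random
from dataclasses import dataclass, field

@dataclass
class Rule:
    """One production  id: pred -> succ : prob  (contexts/conditions omitted)."""
    rule_id: int
    pred: str
    succ: list
    prob: float = 1.0

RULES = [
    Rule(1, "Facade", ["GroundFloor", "UpperFloors"]),
    Rule(2, "UpperFloors", ["Floor", "UpperFloors"], prob=0.7),  # recurse
    Rule(3, "UpperFloors", ["Floor"], prob=0.3),                 # stop
]

def derive(symbol, rules, rng):
    """Expand non-terminals recursively; symbols with no rule are terminals."""
    matching = [r for r in rules if r.pred == symbol]
    if not matching:
        return [symbol]
    rule = rng.choices(matching, weights=[r.prob for r in matching])[0]
    return [t for s in rule.succ for t in derive(s, rules, rng)]

print(derive("Facade", RULES, random.Random(4)))
```

The probabilities on rules 2 and 3 make the number of generated floors vary between derivations, which is exactly the kind of controlled variety probabilistic grammars provide.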

Figure 6: Transfer of a CityGML Multi-Family Building (a) to the CGA-rule-generated version (b).

Figure 7: Application of building category-specific rules.

4.2 Building Category Specific Rule Sets

As discussed in section 3.1, every building has a set of properties that qualify it as belonging to a certain use type. To generate virtual representations of buildings that are easier for humans to understand, we need to refine or abstract (in general: adapt) those buildings in a certain manner. Thus, we designed specific rule sets for each building category. These rule sets incorporate geometric and semantic constraints extracted from our previous user studies by relating the aforementioned features of ground-truth buildings with the as-perceived classifications. Some of those rules are:

One-Family Building:

At least one visible entrance

Multi-Family Building:

Higher number of floors than OFB; keep balconies if they occur in original

Residential Tower:

If not existent, add balconies

Building With Shops:

Ground floor significantly different from remaining floors (window sizes, shape, arrangement)

Office Building:

Very high window-to-wall-surface ratio

Industrial Facility:

Very high floor height
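These constraints can also be encoded as simple predicate checks over the parsed building properties. The property names and thresholds below are illustrative assumptions, not values from our rule sets:

```python
# Hedged sketch: the category rules above as predicate checks over parsed
# building properties. Names and thresholds are illustrative assumptions.
RULE_CHECKS = {
    "OFB": lambda b: b["n_entrances"] >= 1,          # visible entrance
    "MFB": lambda b: b["n_floors"] > 2,              # more floors than OFB
    "RT":  lambda b: b["n_balconies"] > 0,           # balconies present
    "BWS": lambda b: b["ground_floor_differs"],      # distinct ground floor
    "OFF": lambda b: b["window_wall_ratio"] > 0.5,   # glass-dominated facade
    "IF":  lambda b: b["floor_height_m"] > 4.0,      # very high floors
}

def satisfies(category, building):
    """True if the building already exhibits the category's key visual cue;
    otherwise the grammar would adapt the model (e.g. add balconies)."""
    return RULE_CHECKS[category](building)

tower = {"n_entrances": 1, "n_floors": 18, "n_balconies": 0,
         "ground_floor_differs": False, "window_wall_ratio": 0.3,
         "floor_height_m": 2.8}
print(satisfies("RT", tower))  # False: the RT rule set would add balconies
```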

Figure 6 shows an example of a CityGML representation of a Multi-Family Building (Figure 6a) transferred to the CGA-rule-based representation (Figure 6b). Balconies are maintained, whereas the positions of windows and doors on the front-facing façade are only loosely coupled.

4.3 Generation of New Instances

We use the rule sets described in the previous section either to generate use type-specific buildings from scratch or to adapt existing ones. For the latter, it is important to maintain the key characteristics of the building (see section 3.1). Therefore, we feed the essential elements of the parsed XML into the modelling process.

As a first proof of concept, we used coarse building models as shown in Figure 7a and enhanced them with category-specific rule sets. We then ported the results into a VR environment and initially displayed only one of the representations depicted in Figures 7b and 7c. We asked the subjects to classify that building into one of the six pre-defined categories introduced in section 3. First results have shown that the as-perceived categories coincide with the building categories that were intended to be imparted. However, more extensive user studies based on this initial prototypic test have to be conducted in the future.

5 Conclusions and Future Work

In this paper we presented an approach to adapt 3D building models so that they become visually more comprehensible for humans. We use CityGML in order to utilize the contained semantics and parse the building for essential features. If the building use type is not contained in the CityGML file, we perform a feature space transformation and determine the most probable category there. Based on findings from previous studies, we designed building category-specific rules that incorporate perceptual constraints. Feeding essential building properties from the parsing process into the category-specific rule sets leads to semantically more comprehensible representations of the input buildings. However, building models that are geometrically quite different from their class mean will lead to misclassifications. More sophisticated Machine Learning approaches and more input building samples could be used to tackle this issue. Yet, if the building category is embedded in the CityGML file, we can directly access the information from there. For future work, we want to extend the presented concept to not only modify single buildings, but also to consider the architectural neighbourhood of a building and model interrelations between buildings. This way, a holistic, semantically consistent 3D virtual city could be accomplished. To verify the pursued approach, an actual user study should be set up. This could either be an extended version of the proof of concept mentioned above or use a web platform for 3D content and crowdsourcing for the evaluation.

Award Identifier / Grant number: D01

Funding statement: We would like to thank the Deutsche Forschungsgemeinschaft (DFG) for financial support within the project D01 of SFB/Transregio 161.


Acknowledgment

We would like to thank nFrames GmbH for providing the meshed point cloud data.


Published Online: 2017-11-24
Published in Print: 2017-12-20

© 2017 Walter de Gruyter GmbH, Berlin/Boston
