Boundary constrained voxel segmentation for 3D point clouds using local geometric differences

https://doi.org/10.1016/j.eswa.2020.113439

Highlights

  • A novel boundary control technique for point cloud segmentation is proposed.

  • An object segmentation benchmark dataset covering different shapes is prepared.

  • Two building segmentation benchmark datasets are prepared from outdoor scans.

  • The proposed method gives better segmentation results than the other tested methods.

  • The proposed method also runs faster than the other tested methods.

Abstract

In 3D point cloud processing, the spatial continuity of points is convenient for segmenting point clouds obtained by 3D laser scanners, RGB-D cameras and LiDAR (light detection and ranging) systems in general. In real life, the surface features of both objects and structures give meaningful information that enables them to be identified and distinguished. Segmenting the points by using their local plane directions (normals), which are estimated from point neighborhoods, is a method that has been widely used in the literature. The angle difference between two nearby local normals allows the continuity between the two planes to be measured. In real life, however, the surfaces of objects and structures are not simply planes. Surfaces can also take other forms, such as cylinders, smooth transitions and spheres. The voxel-based method developed in this paper addresses this problem by inspecting only the local curvatures with a new merging criterion and using a non-sequential region growing approach. The most prominent feature of the proposed method is that it pairs all of the adjoining boundary voxels between two adjacent segments one-to-one and examines the curvatures of all of the pairwise connections. The proposed method uses only one parameter apart from the size of the unit point group (the voxel size), and it does not use a mid-level over-segmentation process such as supervoxelization. The method checks the local surface curvatures using the unit normals of the voxels close to the boundary between two growing adjacent segments. Another contribution of this paper is that effective solutions are introduced for the noise units that do not have surface features. The method has been applied to one indoor and four outdoor datasets, and the visual and quantitative segmentation results are presented. As quantitative measurements, the accuracy (based on the number of correctly segmented points over all points) and the F1 score (based on the means of the precision and recall values of the reference segments) are used. The results of testing over five datasets show that, according to both measures, the proposed method is the fastest and achieves the best mean scores among the methods tested.

Introduction

Three-dimensional (3D) point clouds are densely and irregularly distributed point patterns obtained by scanning objects, structures, terrain, forests and city surfaces, usually via light detection and ranging (LiDAR) systems (Vo, Truong-Hong, Laefer, & Bertolotto, 2015; Vosselman, Coenen, & Rottensteiner, 2017; Wang & Tseng, 2010; Xu, Yao, Tuttas, Hoegner, & Stilla, 2018). A point cloud consists of clustered points such that each contains spatial (3D coordinates) and radiometric (e.g., RGB colors or intensity) values (Barnea & Filin, 2013; Wang & Tseng, 2011). Many types of geometric information, such as proximity, continuity, linearity, planarity and sphericity, can be extracted from the spatial values by examining the covariance matrices of point groups (Blomley, Jutzi, & Weinmann, 2016; Poux & Billen, 2019; Rusu, Blodow, & Beetz, 2009). The combination of these primitives broadens the use of point clouds in many real-life applications, such as road detection (Boyko & Funkhouser, 2011; Clode, Kootsookos, & Rottensteiner, 2004), forestry (Mayr et al., 2017; Morsdorf et al., 2004), disaster management (Laefer & Pradhan, 2006), cultural heritage (Lozes, Elmoataz, & Lezoray, 2015) and building reconstruction (Pu & Vosselman, 2009). The main steps of processing point clouds in real-life applications are the segmentation of points, feature extraction from the segments (characterizing the segments), semantic segmentation of the characterized segments, and classification (Mayr et al., 2017).

The segmentation step is an indispensable preliminary process that facilitates the solution of some sophisticated problems in computer vision, photogrammetry, remote sensing and robotics (Saglam & Baykan, 2017; Weinmann, Jutzi, Hinz, & Mallet, 2015). In the point cloud processing field, the segmentation process, in which spatially close points are gathered into individual subsets (segments) if they meet the merge criteria, has many benefits for further processing steps (Xu et al., 2018). First, the amount of data to be processed for semantic labeling and classification is reduced substantially. Second, the process maintains spatial integrity and reveals the boundaries of the parts of the objects. Third, the unity of the points in a segment produces new top-level geometric features (Lari & Habib, 2014; Xu, Yao, et al., 2018; Yang, Dong, Zhao, & Dai, 2015).

The geometric attributes of the points in a 3D point cloud provide important distinctive features for the segmentation process (Babahajiani, Fan, Kämäräinen, & Gabbouj, 2017; Poux & Billen, 2019). In addition to the proximity features between the points, the curvature features of neighboring points are widely used for point cloud segmentation in the literature (Xu, Tuttas, Hoegner, & Stilla, 2018). Objects in real life have inherently 3D spatial features, and their surfaces are planar or curved in nature. These characteristics of the surfaces are distinctive features used to separate the surfaces and, in this manner, the objects. When the surfaces of the objects are scanned by a laser scanner, the 3D spatial features of the surfaces are transferred to the digital environment at the point level. In this case, the orientations of the local surfaces, described by their "surface normals," must be determined so that the angular differences between two local surfaces can be computed and it can then be decided whether they belong to the same segment or to different segments (Rabbani, van den Heuvel, & Vosselman, 2006).
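
To make the angular test concrete, the following minimal C++ sketch computes the angle between two estimated unit normals and compares it against a smoothness threshold. The function names and the 10-degree default threshold are illustrative assumptions of ours, not values taken from the paper.

```cpp
#include <algorithm>
#include <array>
#include <cmath>

// Illustrative helper (not from the paper): the angular difference between two
// estimated unit normals, used as a simple smoothness test.
constexpr double kPi = 3.14159265358979323846;

double angleBetweenNormals(const std::array<double, 3>& n1,
                           const std::array<double, 3>& n2) {
    double dot = n1[0] * n2[0] + n1[1] * n2[1] + n1[2] * n2[2];
    // A normal and its negation describe the same local plane, so use |dot|,
    // clamped to [0, 1] for numerical safety.
    dot = std::min(1.0, std::max(0.0, std::fabs(dot)));
    return std::acos(dot);                       // radians, in [0, pi/2]
}

bool onSameSmoothSurface(const std::array<double, 3>& n1,
                         const std::array<double, 3>& n2,
                         double thresholdRad = 10.0 * kPi / 180.0) {
    return angleBetweenNormals(n1, n2) <= thresholdRad;
}
```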

In the literature, the surface normals are obtained either by finding the neighboring points of each point and computing a surface normal per point, which requires enormous processing time, or by sampling the points into specific units (Vo et al., 2015). Sampling neighboring point groups into cubic units (voxels) is a useful way to meet several needs, such as determining the closest points for surface normal estimation, operating on a regular structure instead of unorganized data, and suppressing outliers. This process is usually called "voxelization." Decomposing the data into an octree data structure is the most widely used voxelization technique; it is hierarchical, avoids unnecessary memory usage for point-free spaces and allows the voxels to be indexed (Wang & Tseng, 2011).
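
As an illustration of this idea, the sketch below groups points into voxels and derives a per-voxel centroid and unit normal from the covariance of the points. It uses a uniform hashed grid rather than the octree organization described in the paper, it ignores hash collisions, and it assumes the Eigen library; it is a simplified sketch, not the authors' implementation.

```cpp
#include <Eigen/Dense>
#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

// Simplified sketch: uniform voxel grid (instead of an octree) with a
// per-voxel centroid and a PCA normal (eigenvector of the covariance matrix
// corresponding to the smallest eigenvalue).
struct Voxel {
    std::vector<Eigen::Vector3d> points;
    Eigen::Vector3d centroid = Eigen::Vector3d::Zero();
    Eigen::Vector3d normal = Eigen::Vector3d::Zero();
};

static int64_t voxelKey(const Eigen::Vector3d& p, double voxelSize) {
    const int64_t x = static_cast<int64_t>(std::floor(p.x() / voxelSize));
    const int64_t y = static_cast<int64_t>(std::floor(p.y() / voxelSize));
    const int64_t z = static_cast<int64_t>(std::floor(p.z() / voxelSize));
    // Simple spatial hash; collisions are ignored in this sketch.
    return (x * 73856093) ^ (y * 19349663) ^ (z * 83492791);
}

std::unordered_map<int64_t, Voxel> voxelize(const std::vector<Eigen::Vector3d>& cloud,
                                            double voxelSize) {
    std::unordered_map<int64_t, Voxel> grid;
    for (const auto& p : cloud) grid[voxelKey(p, voxelSize)].points.push_back(p);

    for (auto& [key, v] : grid) {
        for (const auto& p : v.points) v.centroid += p;
        v.centroid /= static_cast<double>(v.points.size());

        Eigen::Matrix3d cov = Eigen::Matrix3d::Zero();
        for (const auto& p : v.points) {
            const Eigen::Vector3d d = p - v.centroid;
            cov += d * d.transpose();               // outer product
        }
        // Eigenvalues of a symmetric matrix are returned in increasing order,
        // so column 0 is the direction of least variance, i.e. the normal.
        Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> solver(cov);
        v.normal = solver.eigenvectors().col(0).normalized();
    }
    return grid;
}
```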

In this paper, a point cloud segmentation method that is voxel-based and uses only local surface curvatures of the points is developed and tested on several point cloud datasets.

Point cloud segmentation, which has been a fundamental step for many point cloud applications (Rabbani, Dijkman, van den Heuvel, & Vosselman, 2007; Rau, Jhan, & Hsu, 2015; Rusu, Marton, Blodow, Dolha, & Beetz, 2008; Schumann, Oeltze, Bade, Preim, & Peitgen, 2007), is a very important stage for extracting meaningful information from raw 3D point data. In recent years, point cloud segmentation methods have been broadly grouped into three categories by some researchers (Vo et al., 2015; Xu et al., 2018): clustering-based, model fitting-based and region growing-based methods.

Clustering, also known as unsupervised classification, can be defined as forming homogeneous groups from the given data elements. Unlike in supervised classification, the number of groups can vary during the clustering algorithm (Xu & Wunsch, 2005). As in many data clustering approaches in data mining, in point cloud clustering methods the points are assigned to iteratively redefined clusters, regardless of the spatial adjacency between the parts of a cluster. In general, these methods run iteratively, and the cluster properties are redefined along the iterations to optimize the centers of the cluster features and the dependencies of the validation parameters. This is the main deficiency of the approach, because optimizing the properties and the parameters is time consuming. Filin (2002) developed a surface clustering algorithm that assigns the points to categories, which are defined in the first step, using the local planes (Filin, 2002). The distinctive features that distinguish these categories are defined, and a measurement is introduced for the clustering process. Morsdorf et al. (2003) segmented single trees in a 3D point cloud created from a national park (Morsdorf, Meier, Allg, & Daniel, 2003). They defined six geometric categories that were 3-by-3 meters in size. Before defining the trees in the segmented cloud, they clustered the point cloud according to these categories using the K-means clustering algorithm to create segments. Biosca and Lerma (2008) combined the Fuzzy C-Means and the Possibilistic C-Means algorithms for surface clustering using three features of the points: the difference in height between a point and its neighbors, the unit surface normals, and the distance of the local planes to the origin (Biosca & Lerma, 2008). In their method, the initial cluster properties do not need to be defined, and the membership coefficients of the clusters are optimized iteratively. Yao et al. (2009) used the mean shift clustering algorithm, benefiting from the spatial characteristics of the points, to create initial segments from the raw point cloud (Yao, Hinz, & Stilla, 2009). They used these segments to form a graph structure for their object extraction method, which relies on the normalized graph cut. Lari and Habib (2014) introduced two clustering approaches to segment individual planar and linear/cylindrical features in a point cloud (Lari & Habib, 2014). They first identified the planar and linear/cylindrical features of each point using its local neighbors together with the point itself, and then clustered these features while reconstructing the segmentation parameters. Poux and Billen (2019) voxelized the points with the octree organization and extracted two feature sets from the voxels: the first is a low-level shape-based geometrical feature set, and the second is a connectivity and relationship feature set. They clustered the voxels into connected voxel clusters according to the similarity of these features before the semantic labeling stage (Poux & Billen, 2019).

Fitting a point group to specific geometric models is the principal segmentation technique in model fitting-based methods. This is usually accomplished with the 3D Hough transform (HT) (Borrmann, Elseberg, Lingemann, & Nüchter, 2011) or random sample consensus (RANSAC) (Fischler & Bolles, 1981) in the literature. In the image processing field, the HT is traditionally used for detecting lines and circles in an image. In recent decades, the HT has been extended to 3D shapes, such as planes, spheres and cylinders (Hulik, Spanel, Smrz, & Materna, 2014). This procedure optimizes the shape parameters in the feature space at a high computational cost. RANSAC iteratively fits randomly selected points to different types of shapes in a point cloud that may contain a large number of outliers. The algorithm first selects a minimal point set randomly and generates the candidate shapes that match the selected points. Afterward, the candidate shapes are tested against all points, with the shape parameters varying across the iterations (Schnabel, Wahl, & Klein, 2007). Although these procedures (the HT and RANSAC) are reliable and robust against outliers, their main deficiency is their high computational demand. Marshall et al. (2001) presented a model fitting-based segmentation method that least-squares fits the points to the tried shapes, namely spheres, cylinders, cones and tori, by using the principal curvatures of surfaces (Marshall, Lukacs, & Martin, 2001). Vosselman and Dijkman (2001) used the 3D HT to extract the planar building faces and the ground plane for the construction of urban environments (Vosselman & Dijkman, 2001). Rabbani and van den Heuvel (2005) proposed an efficient HT to detect cylinders automatically in a point cloud. Schnabel et al. (2007) extended the RANSAC algorithm to be fast and practical, although some insignificant parts of the surfaces are sacrificed (Schnabel et al., 2007). They used an approximate normal estimation method that quickly finds the surface normal for each point, and they scaled the approach to point clouds with large numbers of points using an octree structure. Chen et al. (2014) proposed a modified RANSAC segmentation algorithm to segment the planar patches of rooftops; the algorithm copes with outliers, maintains proximity, and balances the degree of segmentation (Chen, Zhang, Mathiopoulos, & Huang, 2014). Li et al. (2017) improved the RANSAC algorithm to run on a voxel space in a grid structure, named "normal distribution transformation" (NDT) cells in their study (Li et al., 2017). The method selects a minimum number of voxels in each iteration to ensure the correctness of the planes. Overby et al. (2017) used the HT to model buildings obtained by airborne laser scanners. They identified the intersecting planes without reusing already evaluated points and constructed a polygon mesh of the roof structure (Overby, Bodum, Kjems, & Ilsøe, 2017).
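
The general RANSAC procedure summarized above can be sketched as follows for the plane case. The iteration count and inlier distance are arbitrary example values, and the code is a minimal illustration of the sample-hypothesize-verify loop rather than any of the cited implementations.

```cpp
#include <Eigen/Dense>
#include <cmath>
#include <random>
#include <vector>

// Minimal RANSAC plane-fitting sketch: repeatedly sample three points, form a
// candidate plane, and keep the plane supported by the most inliers.
// Assumes the input cloud has at least three points.
struct Plane { Eigen::Vector3d normal; double d; };   // n . x + d = 0

Plane ransacPlane(const std::vector<Eigen::Vector3d>& pts,
                  int iterations = 500, double inlierDist = 0.02) {
    std::mt19937 rng(42);
    std::uniform_int_distribution<size_t> pick(0, pts.size() - 1);

    Plane best{Eigen::Vector3d::UnitZ(), 0.0};
    size_t bestInliers = 0;

    for (int it = 0; it < iterations; ++it) {
        const Eigen::Vector3d& a = pts[pick(rng)];
        const Eigen::Vector3d& b = pts[pick(rng)];
        const Eigen::Vector3d& c = pts[pick(rng)];
        Eigen::Vector3d n = (b - a).cross(c - a);
        if (n.norm() < 1e-9) continue;                 // degenerate (collinear) sample
        n.normalize();
        const double d = -n.dot(a);

        // Verification step: count points within the inlier distance.
        size_t inliers = 0;
        for (const auto& p : pts)
            if (std::abs(n.dot(p) + d) < inlierDist) ++inliers;

        if (inliers > bestInliers) { bestInliers = inliers; best = {n, d}; }
    }
    return best;
}
```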

Region growing-based methods grow initial small segments iteratively by spreading out from local seed regions. The initial seed region may be a point (Nurunnabi, Belton, & West, 2012; Rabbani et al., 2006), a voxel (Su, Bethel, & Hu, 2016; Vo et al., 2015), a supervoxel (Papon, Abramov, Schoeler, & Worgotter, 2013; Xu, Yao, et al., 2018), or a sub-segment (Vieira & Shimada, 2005; Wang & Tseng, 2011). The methods in this category theoretically aim to detect the boundaries of surfaces, and thus they are not dependent on the features of particular geometric shapes. They generally run in a greedy manner using neighborhood schemes and graph tools. Therefore, some of them run quite quickly compared to the methods in the other categories, especially when they operate on voxel-structured data (Vo et al., 2015; Wang & Tseng, 2011). For this reason, our method and the methods tested in this study use a region growing approach, and this category is examined in more detail than the others. The most noticeable deficiency of these methods is their sensitivity to noisy data (Xu, Yao, et al., 2018).

Region growing has been the most widely used approach in the surface segmentation field during the last few decades. Gorte (2002) developed a simple region growing method to segment airborne point clouds (Gorte, 2002). The facets of the roofs and the flat areas between buildings are first segmented to generate planar surfaces. After this first segmentation, a TIN (triangular irregular network) data structure is formed over the planar segments, and the planes are merged with respect to the neighborhood of the triangles. Similarly, Vieira and Shimada (2005) first partition the mesh point cloud into regions that have different shape characteristics (Vieira & Shimada, 2005). After the curvature and noise estimation of these regions, they are used as seed regions and merged by fitting their surfaces. Rabbani et al. (2006) presented a point cloud segmentation method known as the "Region Growing" (RG) method, which uses a smoothness constraint to find smoothly connected regions (Rabbani et al., 2006). The surface normal of each point is estimated using either the k-nearest or fixed-distance neighbors of the points, and each point acts like a seed point. The seed points are merged iteratively by considering their curvature information. Wang and Tseng (2011) developed a split-and-merge algorithm, Voxel Incremental Segmentation (VIS), for point cloud segmentation (Wang & Tseng, 2011). They first split the points until each part, which corresponds to a voxel, has a planar feature according to the determined criterion, and these planes are then merged with respect to their plane angles in two stages: the first is plane merging, and the second is surface merging. Nurunnabi et al. (2012) proposed a fast minimum covariance determinant-based robust PCA approach, which is less sensitive to outliers, to extract local saliency features (Nurunnabi et al., 2012). Papon et al. (2013) presented an over-segmentation algorithm known as "voxel cloud connectivity segmentation" (VCCS) that uses voxel connectivity and a supervoxel approach and ensures that the supervoxels do not cross object boundaries (Papon et al., 2013). They also use both the geometric features of the voxels, namely the 33 elements of the fast point feature histograms (FPFH) defined by Rusu et al. (2009) that describe the spatial distributions of local point groups, and the color information of the points. Xiao et al. (2013) presented a plane segmentation algorithm (Xiao, Zhang, Adler, Zhang, & Zhang, 2013). Initially, they separated the point clouds into grid cells, named "sub-windows" in the study, and these sub-windows were labeled as sparse, planar or nonplanar. To obtain the sub-windows, they employed an octree organization in which the voxel size depends on the number of points in a voxel. The labeled sub-windows are then merged according to their labels and the geometric merging criteria. The main difference between this method and the VIS method is that it divides the points according to the number of points in the small voxels and then labels the voxels as sparse, planar or nonplanar, while the VIS algorithm divides the points according to the planarity criterion of the large voxels being divided. Of these two approaches, the method proposed in this paper is more similar to that of Xiao et al. (2013), although there are some important differences in the region growing stage.
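
As a rough illustration of the seed-based growing scheme with a smoothness constraint (in the spirit of the RG method described above), the sketch below grows segments over precomputed per-point normals and neighbor lists. The data layout, the breadth-first frontier and the single angular threshold are our own simplifications, not the original algorithm's code.

```cpp
#include <Eigen/Dense>
#include <algorithm>
#include <cmath>
#include <queue>
#include <vector>

// Sketch of generic smoothness-constrained region growing. Per-point unit
// normals and neighbor indices (k-nearest or fixed-distance) are assumed to
// be precomputed. Returns a segment label per point.
std::vector<int> regionGrow(const std::vector<Eigen::Vector3d>& normals,
                            const std::vector<std::vector<int>>& neighbors,
                            double angleThreshRad) {
    const int n = static_cast<int>(normals.size());
    std::vector<int> label(n, -1);
    int current = 0;

    for (int seed = 0; seed < n; ++seed) {
        if (label[seed] != -1) continue;               // already assigned
        std::queue<int> frontier;
        frontier.push(seed);
        label[seed] = current;

        while (!frontier.empty()) {
            const int i = frontier.front(); frontier.pop();
            for (int j : neighbors[i]) {
                if (label[j] != -1) continue;
                // Grow only across smoothly connected points.
                const double dot = std::min(1.0, std::fabs(normals[i].dot(normals[j])));
                if (std::acos(dot) <= angleThreshRad) {
                    label[j] = current;
                    frontier.push(j);
                }
            }
        }
        ++current;                                     // start a new segment
    }
    return label;
}
```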

The supervoxel-based region growing approach is also widespread in the literature. Stein et al. (2014) proposed a supervoxel-based method named "locally convex connected patches" (LCCP) that applies a connectivity criterion to the supervoxels of Papon et al. (2013) (Stein, Schoeler, Papon, & Worgotter, 2014). They label the supervoxels as convex, concave or singular and then grow them, starting from an arbitrarily selected one, with respect to their merging criteria. Their method uses the VCCS method to form the supervoxels, with some modification regarding the voxel adjacency. The proposed method is inspired by this convexity criterion between supervoxels when weighting the connections between small voxels. Vo et al. (2015) introduced an "Octree-based Region Growing" (ORG) algorithm that grows the voxels while maintaining planarity and applies refinement processes to merge the planar regions with boundary voxels and isolated points (Vo et al., 2015). This method is also very similar to the VIS method with respect to its planar surface organization and region growing stage. Its main difference is that it grows the planar regions in one main stage rather than the two stages of the VIS method. Su et al. (2016) proposed a voxel growing algorithm for segmenting cylinders (pipes and vessels) and planes (walls) using graph tools (Su et al., 2016). They developed a merging criterion that combines the proximity, orientation and curvature features of the voxels. Xu et al. (2017) presented perceptual grouping laws for supervoxel-based point cloud segmentation, named "supervoxel and graph-based segmentation" (SVGS), using voxel connectivity and the geometric attributes of point groups (Xu, Hoegner, Tuttas, & Stilla, 2017; Xu, Tuttas, et al., 2018). The difference between this method and the LCCP method is that it constructs a graph structure between the supervoxels, which are then merged using the criterion put forward by Felzenszwalb and Huttenlocher (2004). This is similar to another study (Ben-Shabat, Avraham, Lindenbaum, & Fischer, 2018) that proposed a supervoxel segmentation method and merged the supervoxels for segmentation, registration, classification and modeling. In this approach, the connections between the supervoxels are first sorted in ascending order according to their weights and then evaluated for merging according to the criteria, starting from the minimum-weight connection. This approach is also used in the proposed method, as sketched below.
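
The edge-ordered merging strategy mentioned at the end of this paragraph can be sketched with a union-find structure as shown below. The merge predicate is left abstract because each method applies its own criterion (convexity, curvature, or Felzenszwalb and Huttenlocher's adaptive threshold); this is an illustrative sketch, not code from any of the cited works.

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// Adjacency connections are sorted by weight in ascending order and evaluated
// from the smallest weight upward, with a union-find structure tracking which
// units already belong to the same segment.
struct Edge { int a, b; double weight; };

struct UnionFind {
    std::vector<int> parent;
    explicit UnionFind(int n) : parent(n) { std::iota(parent.begin(), parent.end(), 0); }
    int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
    void unite(int x, int y) { parent[find(x)] = find(y); }
};

template <typename MergePredicate>
UnionFind mergeByAscendingWeight(int numUnits, std::vector<Edge> edges,
                                 MergePredicate shouldMerge) {
    std::sort(edges.begin(), edges.end(),
              [](const Edge& l, const Edge& r) { return l.weight < r.weight; });
    UnionFind uf(numUnits);
    for (const Edge& e : edges) {
        const int ra = uf.find(e.a), rb = uf.find(e.b);
        if (ra != rb && shouldMerge(ra, rb, e))        // criterion decides the merge
            uf.unite(ra, rb);
    }
    return uf;
}
```

A caller would supply the criterion as a lambda, e.g. `mergeByAscendingWeight(n, edges, [&](int, int, const Edge& e) { return e.weight < tau; });` for a simple global threshold `tau`; path compression in `find` keeps the merging pass close to linear in the number of connections.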

The method proposed in this paper is voxel-based and does not include a supervoxelization stage as an intermediate process. It first voxelizes the raw unorganized point cloud using the octree organization and then calculates the centroids and normals of the points in the voxels at the same time. After all of the connections between adjacent voxels are determined, the connections are weighted with our local surface curvature measure, which benefits from the angular difference between the normal vectors and the convexity measure used in the literature, and the connections are sorted in ascending order. Each voxel is initially labeled as its own segment. Starting from the minimum-weight connection, the segments are considered for merging. The main difference of the proposed method from those in the literature is that the boundary voxels between the two segments under consideration are paired one-to-one, and all connections between the paired voxels must meet the criterion for the segments to be merged. This stage is sensitive to non-surface voxels, so we also propose a method to handle the non-surface voxels involved. The most favorable advantage of the proposed method compared to the other region growing methods is that it requires only one parameter (apart from the voxel size) to control the degree of segmentation (under- or over-segmentation).
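
The following sketch reflects our reading of this boundary pairing idea: the boundary voxels of two candidate segments are paired one-to-one (here simply by nearest centroid), and the merge is accepted only if every paired connection satisfies the curvature criterion. The pairing rule, the stand-in weight function and the threshold are assumptions made for illustration and do not reproduce the paper's exact formulation, which is given in Section 2.

```cpp
#include <Eigen/Dense>
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

struct BoundaryVoxel {
    Eigen::Vector3d centroid;
    Eigen::Vector3d normal;
};

// Assumed stand-in for the paper's curvature-based connection weight:
// the angle between the two voxels' unit normals.
double weight(const BoundaryVoxel& a, const BoundaryVoxel& b) {
    const double dot = std::min(1.0, std::fabs(a.normal.dot(b.normal)));
    return std::acos(dot);
}

// Sketch of the boundary-constrained merge test: every one-to-one pairing of
// boundary voxels must satisfy the criterion for the two segments to merge.
bool canMergeSegments(const std::vector<BoundaryVoxel>& boundaryA,
                      const std::vector<BoundaryVoxel>& boundaryB,
                      double threshold) {
    std::vector<bool> used(boundaryB.size(), false);
    for (const auto& va : boundaryA) {
        // Pair this boundary voxel of A with its nearest unused counterpart in B.
        int best = -1;
        double bestDist = std::numeric_limits<double>::max();
        for (size_t j = 0; j < boundaryB.size(); ++j) {
            if (used[j]) continue;
            const double d = (va.centroid - boundaryB[j].centroid).squaredNorm();
            if (d < bestDist) { bestDist = d; best = static_cast<int>(j); }
        }
        if (best < 0) break;     // unmatched voxels are left unchecked in this sketch
        used[best] = true;
        // Every pairwise connection must meet the criterion for the merge to pass.
        if (weight(va, boundaryB[best]) > threshold) return false;
    }
    return true;
}
```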

The other contribution of this paper, besides the proposed method, is the comparison of the methods with two quantitative evaluation measures explained in the paper. To evaluate the methods, we prepare three datasets (one indoor and two outdoor) within the scope of this study and obtain two additional outdoor datasets from the literature. The quantitative and visual segmentation results of the methods, together with their running times, are compared for different parameter values. According to the results, our method achieves the best mean scores over the five datasets among the methods tested.
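
Under one plausible reading of the two measures (accuracy as the fraction of correctly segmented points, and F1 computed from the mean precision and recall of the reference segments matched to their best-overlapping predicted segments), they could be computed as sketched below. The matching rule is our assumption and may differ from the exact definitions used in the paper.

```cpp
#include <algorithm>
#include <map>
#include <vector>

// Hedged sketch of point-wise segmentation evaluation: `reference` and
// `predicted` give a segment label per point. Assumes non-empty inputs.
struct Scores { double accuracy; double f1; };

Scores evaluate(const std::vector<int>& reference, const std::vector<int>& predicted) {
    // Count overlaps between every (reference, predicted) label pair.
    std::map<int, std::map<int, size_t>> overlap;   // ref -> pred -> count
    std::map<int, size_t> refSize, predSize;
    for (size_t i = 0; i < reference.size(); ++i) {
        ++overlap[reference[i]][predicted[i]];
        ++refSize[reference[i]];
        ++predSize[predicted[i]];
    }

    size_t truePoints = 0;
    double sumPrecision = 0.0, sumRecall = 0.0;
    for (const auto& [ref, row] : overlap) {
        // Best-matching predicted segment for this reference segment.
        auto best = std::max_element(row.begin(), row.end(),
            [](const auto& a, const auto& b) { return a.second < b.second; });
        const size_t tp = best->second;
        truePoints += tp;
        sumPrecision += static_cast<double>(tp) / predSize[best->first];
        sumRecall    += static_cast<double>(tp) / refSize[ref];
    }

    const double p = sumPrecision / overlap.size();
    const double r = sumRecall / overlap.size();
    return { static_cast<double>(truePoints) / reference.size(),
             2.0 * p * r / (p + r) };
}
```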

In Section 2, the main structure of the proposed method is briefly discussed. This section consists of four subsections, each of which covers one stage of the proposed method. Section 2.1 describes the octree voxelization stage. In Section 2.2, the proposed weighting measure is explained. In Section 2.3, a weighting method for the connections to non-surface voxels is proposed, and Section 2.4 covers the merging process, which is the last stage of the proposed method. Section 3 explains how the datasets put forward in this study are obtained and prepared. All datasets used to test the methods are also shown visually with their reference data in this section. Section 4 presents the segmentation results with the quantitative and visual outputs of our experiments. Finally, the authors' conclusions are presented in Section 5.

Section snippets

Proposed method

The proposed method in this paper uses only planar primitives of the voxels and the differences between the geometric orientations of the adjacent voxels. The voxel adjacency maintains the spatial proximity in the resulting segments. Taking into account only the differences between the adjacent voxels yields complex shaped segments in addition to the planar segments. Therefore, a new measurement for the geometric differences between the adjacent voxels is defined in this paper within the new

Datasets

To verify the success of the proposed method, three point cloud segmentation benchmark datasets are prepared in the scope of this work. For this purpose, three scans are conducted in different environments (one indoor and two outdoors) using a FARO® Laser Scanner Focus3D X 330 system (Info, 2015). The system is adjusted to the density of 27,777 points/m2 from a 10-m distance for each scan. After that, raw point clouds are obtained, sample objects and structures are cropped from the scan data,

Experimental results

The method proposed in this paper is implemented in the C++ programming language, as are the reference methods from the literature, which are RG (Rabbani et al., 2006), VIS (Wang & Tseng, 2011), ORG (Vo et al., 2015), LCCP (Stein et al., 2014) and SVGS (Xu et al., 2017). The VIS and ORG methods were implemented in this work, while the source codes of RG and LCCP were obtained from the PCL library (Rusu & Cousins, 2011), and that of SVGS was obtained from the webpage "//github.com/Yusheng-Xu/VGS-SVGS-Segmentation

Conclusion

The segmentation step is a highly crucial intermediate step in the field of machine learning, and it enormously affects the results of the subsequent steps. The segmentation process provides high-quality discrimination for the classification process, besides ignoring some unnecessary details and grouping similar elements to reduce the massive amount of data to be processed. Point cloud segmentation is also a problem for the machine learning field because raw point clouds come with a spatially

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

CRediT authorship contribution statement

Ali Saglam: Conceptualization, Methodology, Investigation, Formal analysis, Project administration, Writing - original draft, Writing - review & editing. Hasan Bilgehan Makineci: Conceptualization, Methodology, Investigation, Formal analysis, Project administration, Writing - original draft, Writing - review & editing. Nurdan Akhan Baykan: Conceptualization, Methodology, Investigation, Formal analysis, Project administration, Writing - original draft, Writing - review & editing. Ömer Kaan

Acknowledgments

This work was supported by TÜBİTAK (The Scientific and Technological Research Council of Turkey), Turkey [grant number 119E012].

References (61)

  • Xu, Y., et al. (2018). Voxel-based segmentation of 3D point clouds from construction sites using a probabilistic connectivity model. Pattern Recognition Letters.
  • Awrangjeb, M., et al. (2014). An automatic and threshold-free performance evaluation system for building extraction techniques from airborne LIDAR data. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.
  • Babahajiani, P., et al. (2017). Urban 3D segmentation and modelling from street view images and LiDAR point clouds. Machine Vision and Applications.
  • Ben-Shabat, Y., et al. (2018). Graph based over-segmentation methods for 3D point clouds. Computer Vision and Image Understanding.
  • Biosca, J. M., et al. (2008). Unsupervised robust planar segmentation of terrestrial laser scanner point clouds based on fuzzy clustering methods. ISPRS Journal of Photogrammetry and Remote Sensing.
  • Blomley, R., Jutzi, B., & Weinmann, M. (2016). 3D semantic labeling of ALS point clouds by exploiting multi-scale,...
  • Borrmann, D., et al. (2011). The 3D Hough Transform for plane detection in point clouds: A review and a new accumulator design. 3D Research.
  • Boyko, A., et al. (2011). Extracting roads from dense point clouds in large scale urban environment. ISPRS Journal of Photogrammetry and Remote Sensing.
  • Chen, D., et al. (2014). A methodology for automated segmentation and reconstruction of urban 3-D buildings from ALS point clouds. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.
  • Clode, S., et al. The automatic extraction of roads from lidar data.
  • Felzenszwalb, P. F., et al. (2004). Efficient graph-based image segmentation. International Journal of Computer Vision.
  • Filin, S. (2002). Surface clustering from airborne laser scanning data. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences.
  • Fischler, M. A., et al. (1981). Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM.
  • Gorte, B. (2002). Segmentation of TIN-structured surface models. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences.
  • Hulik, R., et al. (2014). Continuous plane detection in point-cloud data based on 3D Hough Transform. Journal of Visual Communication and Image Representation.
  • Info, F. P. (2015). FARO Laser Scanner Focus3D X 330: Features, benefits & technical specifications.
  • Laefer, D. F., et al. (2006). Evacuation route selection based on tree-based hazards using light detection and ranging and GIS. Journal of Transportation Engineering-ASCE.
  • Li, L., et al. (2017). An improved RANSAC for 3D point cloud plane segmentation based on normal distribution transformation cells. Remote Sensing.
  • Lozes, F., et al. (2015). PDE-based graph signal processing for 3-D color point clouds: Opportunities for cultural heritage. IEEE Signal Processing Magazine.
  • Marshall, D., et al. (2001). Robust segmentation of primitives from range data in the presence of geometric degeneracy. IEEE Transactions on Pattern Analysis and Machine Intelligence.