Abstract
In computer vision and pattern recognition, 3D object recognition has long been one of the most challenging problems and has become an important direction of current image recognition research. This paper comprehensively introduces the main methods of 3D object recognition and their key technologies, and compares the advantages and disadvantages of the various methods, aiming to provide a comprehensive understanding of 3D object recognition and to further clarify future research directions.
1 Introduction
Object recognition refers to acquiring environmental information with a set of sensors and identifying specific objects in the scene through computer analysis. Its task is to recognize specific objects in the scene and give their position and posture. The general pipeline comprises object detection, feature extraction and recognition. Roughly 80% of the information humans perceive about the external world comes from vision. Vision-based object recognition has therefore become a popular research topic in recent years and is widely used in fields such as robot navigation, industrial inspection, aerospace and military reconnaissance.
As the complexity of the objects to be identified increases, traditional 2D image recognition can no longer meet the demands of practical applications, whereas 3D object recognition can objectively describe shape and structure and improve the recognition rate. Based on the current state of research in China and abroad, existing 3D object recognition methods fall roughly into five categories: geometric or model-based methods, appearance or view-based methods, feature matching-based methods, depth image-based methods and intelligent algorithm-based methods. Each is introduced and compared below.
2 Geometric or Model-Based Method
Methods that exploit prior knowledge of the shape and structure of the object are generally called geometric or model-based 3D object recognition [1]. Such a method obtains a 3D geometric feature description from the input data and matches it against the model description to recognize and localize the object.
Qian [2] proposed a new 3D object recognition method. The method segments a 3D point set into a number of planar patches and extracts the Inter-Plane Relationships (IPRs) for all patches. Based on the IPRs, the High Level Feature (HLF) for each patch is determined. A Gaussian-Mixture-Model-based plane classifier is then employed to classify each patch into one belonging to a certain model object. Finally, a recursive plane clustering procedure is performed to cluster the classified planes into the model objects.
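The first step of this pipeline, segmenting the point set into planar patches, rests on a simple primitive: fitting a plane to a set of 3D points. A minimal least-squares sketch via SVD is shown below; the planar patch is synthetic, and this illustrates only the plane-fitting primitive, not Qian's full IPR pipeline.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: returns (centroid, unit normal).

    The normal is the right singular vector associated with the
    smallest singular value of the centered point set.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

# Synthetic planar patch: z = 0.5x - 0.2y + 1, with small noise.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
z = 0.5 * xy[:, 0] - 0.2 * xy[:, 1] + 1 + rng.normal(0, 0.01, 200)
patch = np.column_stack([xy, z])

centroid, normal = fit_plane(patch)
```

Comparing the recovered normal with the known plane coefficients confirms the fit; real pipelines would run this inside a region-growing or RANSAC loop to handle outliers.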
Lin et al. [3] described the contour of the object with a combined geometric component model and established an ordered chain structure to measure the degree of matching. The method can solve the matching problem for complex object contours, and its detection time is 60%–90% lower than that of previous methods; it works best for rigid objects with clear contours. Ding [4] used a forward method to build a 3D scattering center model offline from the object's CAD model; this model can effectively predict the object in arbitrary postures.
This class of methods generally suits objects with regular shapes, and the shape comparison is intuitive and easy to understand. However, the algorithms are computationally expensive and require an explicit geometric model, so they are unsuitable for environments with complex backgrounds and noise interference. Occlusion between objects also degrades recognition.
3 Appearance or View-Based Method
3.1 Single-View Feature-Based Method
This method analyzes the image of the object observed from a single viewpoint and identifies the object through feature extraction and feature matching. It requires that the object's posture be relatively stable and its structure relatively simple.
Eigen et al. [5] used a multi-scale deep network to obtain depth information from single-view images; the improvement is limited to scale, and the method has limitations for other 3D geometric information. Lee et al. [6] proposed an automatic pose estimation method that obtains depth values from a single image and suits various image sequences containing objects with different appearances and poses. Yan et al. [7] used point, line and surface information in the image to correct the input image and eliminate distortion; however, such methods are mostly applied to symmetric building scenes, and their robustness needs further improvement.
In such recognition systems, higher-dimensional features are generally required to represent the object, and the extracted feature vector is compared with a template feature vector to complete recognition. Single-view acquisition is susceptible to factors such as viewing angle, lighting and complex backgrounds.
3.2 Multi-view Feature-Based Method
Building on single-view recognition, multi-view methods compensate for misidentification caused by similar 2D images of different objects and by background occlusion. Feature matching between images from two different viewing angles enables camera calibration and recovery of the 3D coordinates of spatial points; structure from motion (SFM) [8], three-view [9] and multi-view [10] methods were subsequently developed along this line.
Chen [11] extracted features and reduced dimensions, then input these features into the SVM [12] for classification and identification, which solved the problems of classification complexity and low recognition efficiency caused by the increase of feature dimension. Zhan [13] extracted multiple features and then used PCA to eliminate the redundant information between the features. Finally, the genetic algorithm-optimized SVM is used for classification and recognition, which improves the accuracy and speed of 3D object recognition.
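Zhan's pipeline, multiple features reduced by PCA and then classified, can be sketched in a few lines of NumPy. To keep the example self-contained, the SVM is replaced here by a nearest-centroid classifier (an assumption, not the cited method), and the "multi-view features" are synthetic clusters:

```python
import numpy as np

def pca_project(X, k):
    """Project rows of X onto the top-k principal components
    (computed from the SVD of the centered data)."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return (X - mean) @ vt[:k].T

def nearest_centroid(train, labels, test):
    """Toy stand-in for the SVM of [12]/[13]: assign each test row
    to the class with the nearest centroid in the reduced space."""
    classes = np.unique(labels)
    centroids = np.array([train[labels == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(test[:, None] - centroids[None], axis=2)
    return classes[np.argmin(d, axis=1)]

# Two synthetic "multi-view feature" clusters in 10-D, reduced to 2-D.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 10)), rng.normal(5, 1, (50, 10))])
y = np.array([0] * 50 + [1] * 50)
Z = pca_project(X, 2)
pred = nearest_centroid(Z, y, Z)
```

The point of the PCA step is exactly what the text describes: redundant dimensions are removed before classification, so the classifier works in a low-dimensional, less correlated space.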
To identify an object objectively and accurately, a larger number of views is usually required, which significantly increases the complexity of classification; using fewer views, in turn, reduces recognition accuracy.
3.3 Optical Operation-Based Method
The basic principle is to obtain 2D graphics or images by optical imaging and to identify the object according to optical characteristic parameters [14]. During recognition, the similarity between the object to be identified and the template is measured, and a set of related features is used to determine the category, position and posture of the object.
In the classical optical flow method, motion in the scene is projected onto the image plane, and the resulting changes in intensity across the discretely sampled sensor pixels form the optical flow field. The method is accurate and adapts well to motion, but its heavy computation and sensitivity to the environment limit its application.
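The brightness-constancy idea behind optical flow can be illustrated with a minimal Lucas-Kanade-style estimate on a synthetic image pair: solve Ix·u + Iy·v = -It in the least-squares sense over all pixels. A single global motion is assumed here for simplicity; real methods solve per-window or per-pixel.

```python
import numpy as np

# Two synthetic frames: the pattern moves 0.3 pixels to the right.
x = np.arange(64, dtype=float)
X, Y = np.meshgrid(x, x)
pattern = lambda X, Y: np.sin(0.3 * X) + np.cos(0.2 * Y)
d = 0.3
frame1 = pattern(X, Y)
frame2 = pattern(X - d, Y)          # content shifted by +d in x

# Brightness constancy: Ix*u + Iy*v + It = 0, solved by least squares
# over all pixels (one global motion vector for the whole image).
Iy, Ix = np.gradient(frame1)        # np.gradient: axis 0 is rows (y)
It = frame2 - frame1
A = np.column_stack([Ix.ravel(), Iy.ravel()])
flow, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
u, v = flow
```

The recovered (u, v) is close to the true motion (0.3, 0); the sub-pixel accuracy comes from the smoothness of the synthetic pattern.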
Zhang [15] encoded the depth information of a 3D object into a 2D image and applied optical 2D image recognition to identify the object; however, the method is limited to simple spherical objects, and its impact in practical applications has not been assessed. Vallmitjana [16] designed different filters for different views of the object and integrated all the data into an object-centered coordinate system, but excessive use of filters is likely to introduce noise in practical applications.
Optical recognition is fast and can process information in parallel, although the overall amount of computation is large and time-consuming. It is therefore necessary to extract 2D information from the 3D object in order to realize optical 3D object recognition.
4 Feature Matching-Based Method
4.1 Global Feature-Based Method
The traditional image description method selects features that can represent the whole from a large number of images containing the object, such as color and texture, and uses statistical classification to recognize the object. The color histogram [17] describes the proportion of each color in the entire image, but it does not capture the specific distribution or spatial position of the colors. Texture features [18] describe a surface property while ignoring the object's other properties, and they perform poorly when higher-level image content must be captured.
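The color histogram as a global feature, and a standard way of comparing two of them (histogram intersection), can be sketched as follows; the bin count and the similarity measure are illustrative choices:

```python
import numpy as np

def color_hist(img, bins=8):
    """Normalized 3D RGB histogram: a global feature that records the
    proportion of each color but not its spatial position."""
    h, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                          range=((0, 256),) * 3)
    return h.ravel() / h.sum()

def intersection(h1, h2):
    """Histogram intersection similarity in [0, 1]."""
    return np.minimum(h1, h2).sum()

rng = np.random.default_rng(0)
img_a = rng.integers(0, 256, (32, 32, 3))
img_b = rng.integers(0, 256, (32, 32, 3))
sim_same = intersection(color_hist(img_a), color_hist(img_a))
sim_diff = intersection(color_hist(img_a), color_hist(img_b))
```

An image compared with itself scores 1.0; the fact that any spatial rearrangement of the pixels would also score 1.0 is precisely the weakness the text describes.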
The selected features are representative of the whole object, cheap to compute and easy to implement, but they are weak at resolving detail, sensitive to occlusion and background, and require the object to be isolated with complete data, so their range of application is limited [19]. They may have the following three shortcomings:
(1) Under complex image structures, image segmentation affects object recognition;
(2) The amount of learning data is large and the training time is long;
(3) When the object undergoes a large deformation, the global feature changes abruptly.
The model-based and view-based methods mentioned above show disadvantages in this respect.
4.2 Local Feature-Based Method
The local feature refers to the set of attributes that can objectively and stably describe the object, and combines the local features to form the feature vector, thereby realizing the effective representation of the object. The algorithm based on local feature matching has achieved good results in the field of object recognition [20,21,22].
The selected feature points must satisfy the following conditions [23]: (1) they can be extracted repeatably; (2) they can define a unique 3D coordinate system; (3) their neighborhood contains valid descriptive information. Feature point matching can then be performed once feature point selection is complete [24]. The most widely used detectors and descriptors are SIFT [25], SURF [26], the Harris detector [27], the Hessian detector [28], HOG [29] and LBP [30].
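Descriptor matching is typically done by nearest-neighbor search with Lowe's ratio test, which rejects a candidate match when the second-best neighbor is nearly as close as the best. A small sketch on synthetic descriptors, standing in for real SIFT/SURF output:

```python
import numpy as np

def ratio_match(desc1, desc2, ratio=0.75):
    """For each descriptor in desc1, find its two nearest neighbors in
    desc2 and keep the match only if it passes Lowe's ratio test."""
    matches = []
    for i, d in enumerate(desc1):
        dist = np.linalg.norm(desc2 - d, axis=1)
        j0, j1 = np.argsort(dist)[:2]
        if dist[j0] < ratio * dist[j1]:
            matches.append((i, int(j0)))
    return matches

# Synthetic 128-D descriptors; desc2 is a shuffled, slightly noisy copy.
rng = np.random.default_rng(0)
desc1 = rng.random((20, 128))
perm = rng.permutation(20)
desc2 = desc1[perm] + rng.normal(0, 0.01, (20, 128))

matches = ratio_match(desc1, desc2)
```

Because the noisy copies are far closer than any unrelated descriptor, every match passes the ratio test and recovers the shuffling permutation.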
Wei et al. [31] extracted the feature description of the invariant angle contour, obtained the feature vector of the object by invariant moment transformation, and compared the cosine of the angle to achieve feature matching. This method can be used for object recognition in complex scenes.
Although existing local feature-based techniques achieve high accuracy and can handle occlusion and clutter, they still have high computational complexity. To address this, the literature [32] proposed a keypoints-based surface representation (KSR) that avoids computing local feature descriptors, instead using the geometric relationships between detected 3D keypoints to represent the local surface, which suppresses noise to some extent.
Local features are stable and not easily affected by environmental factors; even when the amount of data is very large, fast registration can be achieved, though at the cost of algorithmic complexity and additional computation. Global features are invariant, cheap to compute and easy to understand. The two can therefore be combined to improve the recognition rate while reducing computation.
5 Depth Image-Based Method
In the narrow sense, a depth image [33] is one whose depth information is acquired with a depth sensor such as microwave or laser. The methods most frequently used to obtain depth images are stereoscopic vision [34], microwave ranging and lidar imaging [35].
The more commonly used depth image types are grid representation [36] and point cloud representation [37].
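A depth image is converted to a point cloud by back-projecting each pixel through the pinhole camera model; the intrinsics below (focal lengths fx, fy and principal point cx, cy) are illustrative toy values, not calibrated parameters.

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth image to an N x 3 point cloud using the
    pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# A flat surface 1 unit from the camera, with toy intrinsics.
cloud = depth_to_pointcloud(np.ones((4, 4)), fx=1.0, fy=1.0, cx=0.0, cy=0.0)
```

Each row of the result is the 3D point seen by one pixel; grid and point cloud representations discussed below are both built from exactly this kind of data.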
5.1 Grid Representation
The grid consists of points, edges and planes. It is an irregular data structure and has a rich description of the shape and other details.
Fang et al. [38] introduced the grid structure into multi-view images, so that each grid point position corresponds to a viewpoint image feature vector, and then built the model from the local invariant feature statistics of the object. Wang et al. [39] proposed an end-to-end deep learning framework that generates a 3D mesh directly from a single color image: a CNN represents the 3D mesh, and features extracted from the input image produce the correct geometry.
Grid data is informative and carries topology. However, when rendering a large scene, grid reconstruction brings problems such as long computation times and large storage requirements.
5.2 Point Cloud Representation
The point cloud is a set of 3D point coordinates of a scene or an object. Due to the huge amount of point cloud scene data, each object contains a large number of features, and each feature corresponds to a high-dimensional description vector, resulting in large computational complexity and low computational efficiency [40].
The PointNet network [41] can process unordered and rotated point cloud data. Building on it, PointNet++ [42] adds a hierarchical structure to the network to process local features. The SO-Net [43] architecture models the spatial distribution of the point cloud with a self-organizing map (SOM). On ModelNet40 classification, PointNet achieved 86.2%, PointNet++ is markedly stronger than PointNet, and SO-Net reached 90.8%.
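The core idea that lets PointNet consume unordered point sets is a shared per-point MLP followed by a symmetric (max) pooling function. The untrained, randomly weighted sketch below only demonstrates this structural property, that the resulting global feature is invariant to point ordering:

```python
import numpy as np

def pointnet_feature(points, W1, b1, W2, b2):
    """Toy PointNet-style encoder: the same two-layer MLP is applied
    to every point, then max pooling aggregates a global feature."""
    h = np.maximum(points @ W1 + b1, 0.0)   # shared MLP, layer 1 (ReLU)
    h = np.maximum(h @ W2 + b2, 0.0)        # shared MLP, layer 2 (ReLU)
    return h.max(axis=0)                    # symmetric pooling over points

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 64)), np.zeros(64)
W2, b2 = rng.normal(size=(64, 128)), np.zeros(128)

points = rng.normal(size=(32, 3))           # an unordered 3D point set
feat = pointnet_feature(points, W1, b1, W2, b2)
feat_shuffled = pointnet_feature(points[rng.permutation(32)], W1, b1, W2, b2)
```

Because max pooling is order-independent, shuffling the input points leaves the global feature unchanged; the real network additionally learns the MLP weights and input transforms.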
Graph-based methods are a newer direction for 3D point cloud object recognition. Wang et al. [44] proposed the EdgeConv module in DGCNN; by stacking or reusing EdgeConv modules, global shape information can be extracted, and DGCNN improves performance by 0.5% over PointNet++. The key to RS-CNN [45] is learning from relations, i.e., the geometric topological constraints among points. RS-CNN reduces the error rate of PointNet++ by 31.2% and is more robust than PointNet, PointNet++ and DGCNN.
6 Intelligent Algorithm-Based Method
Intelligent algorithms are engineering algorithms realized on computers that simulate and reproduce mechanisms from biological systems, human intelligence, and physical and chemical processes. They are widely used in object recognition and image matching. Several major intelligent algorithms are introduced below:
6.1 Ant Colony Algorithm
Inspired by the foraging behavior of ant colonies, a population-based simulated evolutionary algorithm called Ant Colony Optimization was proposed [46].
The idea is that while foraging, ants exchange and transmit information and choose their next path according to the lengths of the paths already taken, producing a positive feedback effect [47]. When an ant can no longer move, the path it has taken corresponds to a feasible solution of the optimization problem. Zhang et al. [48] applied this idea to image edge detection: the relative difference between the gradient value and the statistical mean guides the ants' search for image edges. In the future, parallel ACO algorithms can further reduce the computational complexity.
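The positive-feedback mechanism (probabilistic path choice biased by pheromone, evaporation, and deposits inversely proportional to path length) is easiest to see on the classic travelling salesman problem rather than on edge detection. The sketch below is a minimal generic ACO with illustrative parameters, not the algorithm of [48]:

```python
import numpy as np

def aco_tsp(dist, n_ants=10, n_iter=30, alpha=1.0, beta=2.0,
            rho=0.5, q=1.0, seed=0):
    """Minimal Ant Colony Optimization for the TSP: ants build tours
    probabilistically, pheromone evaporates, and shorter tours
    receive larger pheromone deposits (positive feedback)."""
    rng = np.random.default_rng(seed)
    n = len(dist)
    tau = np.ones((n, n))                    # pheromone levels
    eta = 1.0 / (dist + np.eye(n))           # heuristic: prefer short edges
    best_tour, best_len = None, np.inf
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            tour = [int(rng.integers(n))]
            while len(tour) < n:
                i = tour[-1]
                mask = np.ones(n, dtype=bool)
                mask[tour] = False           # exclude visited cities
                w = (tau[i] ** alpha) * (eta[i] ** beta) * mask
                tour.append(int(rng.choice(n, p=w / w.sum())))
            length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        tau *= 1.0 - rho                     # evaporation
        for tour, length in tours:           # deposit: shorter => more
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a, b] += q / length
                tau[b, a] += q / length
    return best_tour, best_len

# Six cities on a unit circle; the optimal tour follows the circle.
angles = np.linspace(0, 2 * np.pi, 6, endpoint=False)
cities = np.column_stack([np.cos(angles), np.sin(angles)])
dist = np.linalg.norm(cities[:, None] - cities[None], axis=2)
tour, length = aco_tsp(dist)
```

Each returned tour is a permutation of the cities; over iterations the pheromone concentrates on the short edges, which is the positive feedback the text describes.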
6.2 Particle Swarm Optimization
Inspired by the foraging behavior of bird flocks, Kennedy and Eberhart proposed Particle Swarm Optimization [49]. The flock is treated as a group of random particles, each with a direction and a distance (velocity); guided by the best position each particle has found and the best position found by the whole population, the swarm converges on the region closest to the food, which corresponds to the best solution of the problem.
Because diversity is lost rapidly, PSO suffers from premature convergence. To improve performance, Wang et al. [50] proposed a hybrid PSO algorithm (DNSPSO) that uses a diversity enhancement mechanism and neighborhood search strategies; combining the two achieves a trade-off between exploration and exploitation. Compared with standard PSO, DNSPSO does not increase computation time and gives better results on low-dimensional problems.
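For reference, standard PSO (not the DNSPSO variant) fits in a few lines; each particle is pulled toward its personal best and the global best. The sphere function is used as a toy objective, and the inertia and acceleration coefficients are conventional illustrative values:

```python
import numpy as np

def pso(f, dim=2, n_particles=30, n_iter=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal standard particle swarm minimizer."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()                                  # personal bests
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()              # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(pbest_f.min())

best, best_f = pso(lambda p: float(np.sum(p ** 2)))
```

On this smooth unimodal objective the swarm converges close to the optimum at the origin; premature convergence, the weakness DNSPSO targets, shows up on multimodal objectives instead.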
6.3 Artificial Fish-Swarm Algorithm
The Artificial Fish-Swarm Algorithm [51] is derived from the movement of fish. In a body of water, fish gather according to their foraging behavior, so the place where the most fish gather has the best nutritional water quality, corresponding to the best solution of the problem.
Because the artificial fish-swarm algorithm is computationally complex and converges slowly in later stages, Ma et al. [52] proposed an adaptive-vision artificial fish-swarm algorithm (AVAFSA) that shrinks the visual range of the foraging fish as the algorithm iterates and stops when the visual range falls below half of its initial value. The improved algorithm converges quickly with little computation, and is more accurate and stable than the basic algorithm.
6.4 Genetic Algorithm
The Genetic Algorithm (GA) [53] is an evolutionary algorithm that exploits the natural laws of the biological world. The parameters of the optimization problem are treated as chromosomes, the chromosomes in the population are iteratively optimized through selection, crossover and mutation, and the chromosomes that satisfy the optimization objective are the feasible solutions.
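The selection-crossover-mutation loop can be sketched on the classic OneMax toy problem (maximize the number of 1 bits in a chromosome); tournament selection, one-point crossover and elitism are common choices assumed here:

```python
import random

def ga_onemax(n_bits=20, pop_size=40, n_gen=80, p_mut=0.05, seed=0):
    """Minimal generational GA maximizing the number of 1 bits."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    fitness = lambda ind: sum(ind)

    def tournament():
        a, b = rng.sample(pop, 2)           # binary tournament selection
        return a if fitness(a) >= fitness(b) else b

    for _ in range(n_gen):
        elite = max(pop, key=fitness)       # elitism: keep the best
        children = [elite]
        while len(children) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)  # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]
            children.append(child)
        pop = children
    best = max(pop, key=fitness)
    return best, fitness(best)

best, best_fit = ga_onemax()
```

Elitism guarantees the best fitness never decreases between generations, which is one simple guard against losing good solutions; it does not by itself prevent the premature convergence that the immune GA addresses.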
To address the defects of GA, the immune genetic algorithm combines the immune algorithm [54] with GA, solving GA's premature convergence and preserving the diversity of the population [55]. Tao et al. [56] combined GA with an SVM to classify data, and the classification accuracy was greatly improved.
6.5 Simulated Annealing Optimization
Simulated Annealing Optimization [57] models the physical process of heating and slowly cooling a solid as an analogy for solving general optimization problems. Shieh et al. [58] proposed a hybrid algorithm combining particle swarm optimization with simulated annealing behavior (SA-PSO), which pairs the good solution quality of simulated annealing with the fast search capability of particle swarms, increasing efficiency and speeding up convergence.
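The heating-and-cooling analogy translates into a simple numeric recipe: always accept improving moves, accept worsening moves with probability exp(-Δ/T), and cool T geometrically. A minimal sketch on a 1D multimodal test function; the function, schedule and step size are illustrative choices:

```python
import math
import random

def simulated_annealing(f, x0, t0=10.0, t_min=1e-3, cool=0.999,
                        step=1.0, seed=0):
    """Minimal simulated annealing: accept worse moves with
    probability exp(-delta / T), cooling T geometrically."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    t = t0
    while t > t_min:
        x_new = x + rng.gauss(0, step)       # random neighbor
        f_new = f(x_new)
        if f_new < fx or rng.random() < math.exp(-(f_new - fx) / t):
            x, fx = x_new, f_new             # accept the move
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cool                            # geometric cooling
    return best_x, best_f

# 1D Rastrigin: many local minima, global minimum 0 at x = 0.
rastrigin = lambda x: x * x - 10 * math.cos(2 * math.pi * x) + 10
best_x, best_f = simulated_annealing(rastrigin, x0=4.0)
```

Early on, the high temperature lets the search climb out of the local minima near the integers; as T shrinks, the acceptance rule degenerates to greedy descent.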
It can be improved by combining with other algorithms, such as the combination with PSO [59], GA [60] and ant colony algorithm [61].
6.6 Neural Networks
A neural network [62] is a mathematical model that imitates the way humans learn regularities from the natural world: it solves problems by adjusting the connection weights between internal nodes to adapt to the processing of different information.
The advantages of neural networks are that they are self-learning, their learning rules are simple and easy to implement on a computer, and they have broad application prospects. The disadvantages are that they cannot explain their own reasoning process or its basis, and when data are insufficient they lose the ability to work properly.
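The phrase "adjusting the connections between internal nodes" corresponds concretely to gradient-based weight updates. A tiny from-scratch network learning XOR illustrates this; the layer sizes, learning rate and epoch count are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR is not linearly separable, so a hidden layer is required.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

lr = 1.0
losses = []
for _ in range(2000):
    h = sigmoid(X @ W1 + b1)               # forward pass
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # backward pass: mean-squared-error gradients through the sigmoids
    d_out = 2 * (out - y) * out * (1 - out) / len(X)
    d_h = d_out @ W2.T * h * (1 - h)
    W2 -= lr * h.T @ d_out                 # "adjust the connections"
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)
```

The loss falls as the weights adapt, which is the self-learning behavior described above; the weakness is equally visible here, since nothing in the trained weight matrices explains *why* the network answers as it does.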
Intelligent algorithms are an emerging research direction, and Table 1 lists the comparison of these six algorithms.
7 Conclusion
Vision-based 3D object recognition has long been a research hotspot in computer vision. As discussed above, both method (1) and method (2) allow intuitive shape comparison; when no shape description of the object is available, method (2) can be used. However, both require the object to be isolated with complete data and are sensitive to occlusion and background, so their range of application is limited. In contrast, method (3) is more robust to overlap and complex backgrounds and has become the most common approach. Method (4) captures the spatial shape and contour of the object, offering advantages that an ordinary CCD camera does not and changing the mindset of 2D image recognition. Method (5) differs in that it applies optimization strategies in combination with the first four methods to improve them.
At present, the most widely applied methods target scenes with uniformly distributed point clouds or few objects. 3D point cloud data is sensitive to noise, and its density distribution can be uneven. How to reduce point cloud noise, mitigate the impact of uneven density distribution, and transfer the mature techniques of 2D object recognition to 3D point cloud data will be important research directions. Table 2 lists the comparison of the various recognition algorithms.
References
Ying, C., Ji, Z., Hua, C.: 3-D model matching based on distributed estimation algorithm. In: 2009 Chinese Control and Decision Conference, CCDC 2009, pp. 5063–5067. IEEE (2009)
Qian, X., Ye, C.: 3D object recognition by geometric context and Gaussian-mixture-model-based plane classification. In: IEEE International Conference on Robotics and Automation. IEEE (2014)
Lin, Y.D., He, H.J., Chen, F., et al.: A rigid object detection model based on geometric sparse representation of profile and its hierarchical detection algorithm. Acta Automatica Sinica 41(4), 843–853 (2015)
Ding, B., Wen, G.: Target reconstruction based on 3-D scattering center model for robust SAR ATR. IEEE Trans. Geosci. Remote Sens. 56, 3772–3785 (2018)
Eigen, D., Puhrsch, C., Fergus, R.: Depth map prediction from a single image using a multi-scale deep network. In: International Conference on Neural Information Processing Systems, pp. 2366–2374. MIT Press (2014)
Lee, J., Kim, Y., Lee, S., et al.: High-quality depth estimation using an exemplar 3D model for stereo conversion. IEEE Trans. Visual Comput. Graph. 21(7), 835–847 (2015)
Miao, Y.W., Feng, X.H., Yu, L.J., et al.: 3D building interactive progressive modeling based on single image. J. Comput. Aided Des. Comput. Graph. 28(09), 1410–1419 (2016)
Widya, A.R., Torii, A., Okutomi, M.: Structure from motion using dense CNN features with keypoint relocalization. IPSJ Trans. Comput. Vis. Appl. 10(1), 6 (2018)
Wu, C.: Towards linear-time incremental structure from motion. In: International Conference on 3D Vision, pp. 127–134. IEEE Computer Society (2013)
Zou, G.F., Fu, G.X., Li, H.T.: A survey of multi-pose face recognition. Pattern Recogn. Artif. Intell. 28(7), 613–625 (2015)
Chen, G., Deng, C.W.: Research on 3D object recognition based on KPCA-SVM. Comput. CD Softw. Appl. 07, 77–78 (2012)
Gedam, A.G., Shikalpure, S.G.: Direct kernel method for machine learning with support vector machine. In: International Conference on Intelligent Computing. IEEE (2018)
Zhan, N.: Three-dimensional object recognition method based on multiple features and support vector machine. Comput. Simul. 30(3), 375–380 (2013)
Xu, S.: Research on three dimensional object recognition. University of Electronic Science and Technology of China (2010)
Zhang, H.H.: Study on three-dimensional recognition of the spatial object based on optical correlation pattern recognition. Hubei University of Technology (2009)
Vallmitjana, S., Juvells, I.P., Carnicer, A., et al.: Optical correlation from projections of 3D objects. In: Proceedings of SPIE - The International Society for Optical Engineering, vol. 81, pp. 148–169 (2017)
Aloraiqat, A.M., Kostyukova, N.S.: A modified image comparison algorithm using histogram features (2018)
Tyagi, V.: Texture feature. Content-Based Image Retrieval (2017)
Guo, Y.L., Lu, M., Tan, Z.G., Wan, J.W.: Survey of local feature extraction on range image. Pattern Recogn. Artif. Intell. 25(05), 783–791 (2012)
Xiao, Q., Luo, Y., Hu, X.: Object detection based on local feature matching and segmentation. In: IEEE International Conference on Signal Processing. IEEE (2012)
Zhou, D.B., Huo, L.J., Gang, L.I., et al.: Automatic object recognition based on local invariant features. Acta Photonica Sinica 44(2) (2015)
Kechagias-Stamatis, O., Aouf, N., Gray, G., et al.: Local feature based automatic object recognition for future 3D active homing seeker missiles. Aerosp. Sci. Technol. 73, 309–317 (2018)
Mian, A., Bennamoun, M., Owens, R.: On the repeatability and quality of keypoints for local feature-based 3D object retrieval from cluttered scenes. Int. J. Comput. Vis. 89(2–3), 348–361 (2010)
Wei, X.: The research of image matching method and application based on local feature detection. Anhui University (2015)
Xie, J., Xu, Z., Liu, Y., et al.: A remote sensing image object recognition method based on SIFT algorithm. In: International Conference on Mechatronics, Robotics and Automation (2015)
Guan, F., Liu, X., Feng, W., et al.: Multi object recognition based on SURF algorithm. In: International Congress on Image and Signal Processing, pp. 448–453. IEEE (2013)
Karthik, O.S., Varun, D., Ramasangu, H.: Localized Harris-FAST interest point detector. In: India Conference, pp. 1–6. IEEE (2017)
Tahery, S., Drew, M.S.: A novel colour Hessian and its applications. Electron. Imaging (2017)
Jebril, N.A., Al-Zoubi, H.R., Al-Haija, Q.A.: Recognition of handwritten arabic characters using histograms of oriented gradient (HOG). Pattern Recogn. Image Anal. 28(2), 321–345 (2018)
Fan, H., Cosman, P.C., Hou, Y., et al.: High speed railway fastener detection based on line local binary pattern. IEEE Signal Process. Lett. 25, 788–792 (2018)
Yong-Chao, W., Feng, C., Xia, Z., et al.: 3D target recognition based on invariant angle contour. J. Sichuan Univ. (Nat. Sci. Edn.) (2017)
Shah, S.A.A., Bennamoun, M., Boussaid, F.: Keypoints-based surface representation for 3D modeling and 3D object recognition. Pattern Recogn. 64, 29–38 (2017)
Fisher, R.B., Breckon, T.P., Dawson-Howe, K., et al.: Dictionary of Computer Vision and Image Processing. Wiley, New York (2014)
Kaehler, A., Bradski, G.: Learning OpenCV 3: Computer Vision in C++ with the OpenCV Library. O’Reilly Media Inc., Sebastopol (2016)
Wang, Y., Huang, J., Liu, Y., et al.: Simulation of lidar imaging for space object. Infrared Laser Eng. 45(9) (2016)
Li, Y.: Research on key techniques of 3D surface reconstruction based on depth camera. Zhejiang University (2015)
Zhuang, Z.Y., Zhang, J., Sun, G.F.: Extended point feature histograms for 3D point cloud representation. J. Natl. Univ. Defense Technol. 38(6), 124–129 (2016)
Fang, X., Yu, R.X.: Grid-based statistical model for 3D object recognition. Comput. Mod. 2014(4), 24–28 (2014)
Wang, N., Zhang, Y., Li, Z., Fu, Y., Liu, W., Jiang, Y.-G.: Pixel2Mesh: generating 3D mesh models from single RGB images. In: Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y. (eds.) ECCV 2018. LNCS, vol. 11215, pp. 55–71. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-01252-6_4
Hao, W., Wang, Y.H., Ning, X.J., et al.: Survey of 3D object recognition for point clouds. Comput. Sci. 44(09), 11–16 (2017)
Qi, C.R., Su, H., Mo, K., et al.: PointNet: deep learning on point sets for 3D classification and segmentation (2016)
Qi, C.R., Yi, L., Su, H., et al.: PointNet++: deep hierarchical feature learning on point sets in a metric space (2017)
Li, J., Chen, B.M., Lee, G.H.: SO-Net: self-organizing network for point cloud analysis (2018)
Wang, Y., Sun, Y., Liu, Z., et al.: Dynamic graph CNN for learning on point clouds. ACM Trans. Graph. (TOG) 38(5), 146 (2018)
Liu, Y., Fan, B., Xiang, S., et al.: Relation-shape convolutional neural network for point cloud analysis (2019)
Bisht, A., Kumar, R.: An efficient multi-level clustering approach using improved ant colony optimization. In: International Conference on Advances in Computing, Communication and Automation, pp. 1–5. IEEE (2018)
Xia, X.Y., Zhou, Y.R.: Advances in theoretical research of ant colony optimization. CAAI Trans. Intell. Syst. 11(01), 27–36 (2016)
Zhang, J., He, K., Zheng, X., et al.: An ant colony optimization algorithm for image edge detection. In: International Conference on Artificial Intelligence and Computational Intelligence, pp. 215–219. IEEE (2010)
Jain, N.K., Nangia, U., Jain, J.: A review of particle swarm optimization. J. Inst. Eng. 99, 407–411 (2018)
Wang, H., Sun, H., Li, C., et al.: Diversity enhanced particle swarm optimization with neighborhood search. Inf. Sci. 223, 119–135 (2013)
Zhang, L.H., Dou, Z.Q., Sun, G.L.: An improved artificial fish-swarm algorithm using cluster analysis. In: Qiao, F., Patnaik, S., Wang, J. (eds.) ICMIR 2017. AISC, vol. 690, pp. 49–54. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-65978-7_8
Ma, X.-M., Liu, N.: Improved artificial fish-swarm algorithm based on adaptive vision for solving the shortest path problem. J. Commun. (2014)
Roberge, V., Tarbouchi, M., Labonte, G.: Comparison of parallel genetic algorithm and particle swarm optimization for real-time UAV path planning. IEEE Trans. Ind. Inform. 9(1), 132–141 (2013)
Jia, C., Fan, Y.: Application of immune genetic algorithm in image segmentation. Beijing Surv. Mapp. (2018)
Shi, J., Su, Y.D., Xie, M.: Research on application of IGA (immune genetic algorithm) to the solution of Course-Timetabling Problem. In: International Conference on Computer Science and Education, pp. 1105–1109. IEEE (2009)
Tao, Y., Zhou, J.: Automatic apple recognition based on the fusion of color and 3D feature for robotic fruit picking. Comput. Electron. Agric. 142, 388–396 (2017)
Awange, J.L., Paláncz, B., Lewis, R.H., et al.: Simulated annealing (2018)
Shieh, H.L., Kuo, C.C., Chiang, C.M.: Modified particle swarm optimization algorithm with simulated annealing behavior and its numerical verification. Appl. Math. Comput. 218(8), 4365–4383 (2011)
Chen, S., Ren, L., Xin, F.: Reactive power optimization based on Particle Swarm Optimization and Simulated Annealing cooperative algorithm. In: Control Conference, pp. 7210–7215. IEEE (2012)
Mann, M., Sangwan, O.P., Tomar, P., et al.: Automatic goal-oriented test data generation using a genetic algorithm and simulated annealing. In: International Conference - Cloud System and Big Data Engineering, pp. 83–87. IEEE (2016)
Rong, X.J.: Research on hybrid task scheduling algorithm simulation of ant colony algorithm and simulated annealing algorithm in virtual environment. In: International Conference on Computer Science and Education, pp. 562–565. IEEE (2015)
Schmidhuber, J.: Deep learning in neural networks. Neural Netw. 61, 85–117 (2015)
© 2019 Springer Nature Switzerland AG
Dong, T., Qi, X., Zhang, Q., Li, W., Xiong, L. (2019). Overview on Vision-Based 3D Object Recognition Methods. In: Zhao, Y., Barnes, N., Chen, B., Westermann, R., Kong, X., Lin, C. (eds) Image and Graphics. ICIG 2019. Lecture Notes in Computer Science(), vol 11902. Springer, Cham. https://doi.org/10.1007/978-3-030-34110-7_21