Highly Parallelizable Algorithm for Keypoint Detection in 3-D Point Clouds

Part of the book series: Lecture Notes in Electrical Engineering (LNEE, volume 383)


Abstract

In computer vision, reliable recognition and classification of objects is an essential milestone on the way to autonomous scene understanding, and keypoint detection is a key prerequisite for its successful implementation. The aim of keypoint algorithms is to identify those areas within 2-D or 3-D representations of objects that have particularly high saliency and are as unambiguous as possible. While keypoints are widely used in the 2-D domain, their 3-D counterparts remain rare in practice, often because of their long computation times. We present a highly parallelizable algorithm for 3-D keypoint detection that can be implemented on modern GPUs for fast execution. Besides its speed, the algorithm is characterized by high robustness against rotations and translations of the objects and moderate robustness against noise. We evaluate our approach in a direct comparison with state-of-the-art keypoint detection algorithms in terms of repeatability and computation time.
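The abstract evaluates detectors by repeatability. As a point of orientation, the sketch below shows the usual form of this criterion in 3-D keypoint evaluations such as [15]: a keypoint detected on a transformed model counts as repeated if, after mapping it back into the reference frame, it lies within a small threshold of a keypoint on the reference model. Function names, the threshold, and the exact matching rule are our assumptions, not the paper's definition.

```python
# Hedged sketch of keypoint repeatability (names and details are ours).
import math

def repeatability(ref_kps, transformed_kps, transform, eps):
    """Fraction of keypoints from the transformed model that land
    within distance eps of some reference keypoint after mapping
    them back into the reference frame with `transform`."""
    if not transformed_kps:
        return 0.0
    hits = 0
    for p in transformed_kps:
        q = transform(p)  # map back into the reference frame
        if any(math.dist(q, r) <= eps for r in ref_kps):
            hits += 1
    return hits / len(transformed_kps)

# Toy example: identity transform, perfectly repeated keypoints.
kps = [(0.0, 0.0, 0.0), (1.0, 2.0, 3.0)]
print(repeatability(kps, kps, lambda p: p, eps=0.01))  # -> 1.0
```

A rigid rotation or translation of the model would be passed in as `transform`; with a noise-free detector the score stays at 1.0, and noise or sampling changes pull it toward 0.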


References

  1. Adamson, A., Alexa, M.: Ray tracing point set surfaces. In: Shape Modeling International, pp. 272–279. IEEE (2003)

  2. Bay, H., Tuytelaars, T., Van Gool, L.: SURF: speeded up robust features. In: Computer Vision – ECCV 2006, pp. 404–417. Springer (2006)

  3. Dutagaci, H., Cheung, C.P., Godil, A.: Evaluation of 3D interest point detection techniques via human-generated ground truth. Vis. Comput. 28(9), 901–917 (2012)

  4. Filipe, S., Alexandre, L.A.: A comparative evaluation of 3D keypoint detectors. In: 9th Conference on Telecommunications, Conftele 2013, Castelo Branco, Portugal, pp. 145–148 (2013)

  5. Flint, A., Dick, A., van den Hengel, A.: THRIFT: local 3D structure recognition. In: 9th Biennial Conference of the Australian Pattern Recognition Society on Digital Image Computing Techniques and Applications, pp. 182–188. IEEE (2007)

  6. Gelfand, N., Mitra, N.J., Guibas, L.J., Pottmann, H.: Robust global registration. In: Symposium on Geometry Processing, vol. 2, p. 5 (2005)

  7. Guo, Y., Bennamoun, M., Sohel, F., Lu, M., Wan, J.: 3D object recognition in cluttered scenes with local surface features: a survey. IEEE Trans. Pattern Anal. Mach. Intell. 99(PrePrints), 1 (2014)

  8. Hornung, A., Kobbelt, L.: Robust reconstruction of watertight 3D models from non-uniformly sampled point clouds without normal information. In: Proceedings of the Fourth Eurographics Symposium on Geometry Processing, SGP '06, pp. 41–50. Eurographics Association, Aire-la-Ville, Switzerland (2006)

  9. Lai, K., Bo, L., Ren, X., Fox, D.: A large-scale hierarchical multi-view RGB-D object dataset. In: IEEE International Conference on Robotics and Automation (ICRA), pp. 1817–1824. IEEE (2011)

  10. Lai, K., Bo, L., Ren, X., Fox, D.: A scalable tree-based approach for joint object and pose recognition. In: AAAI (2011)

  11. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 60(2), 91–110 (2004)

  12. Matei, B., Shan, Y., Sawhney, H.S., Tan, Y., Kumar, R., Huber, D., Hebert, M.: Rapid object indexing using locality sensitive hashing and joint 3D-signature space estimation. IEEE Trans. Pattern Anal. Mach. Intell. 28(7), 1111–1126 (2006)

  13. Mian, A., Bennamoun, M., Owens, R.: On the repeatability and quality of keypoints for local feature-based 3D object retrieval from cluttered scenes. Int. J. Comput. Vis. 89(2–3), 348–361 (2010)

  14. Pauly, M., Keiser, R., Gross, M.: Multi-scale feature extraction on point-sampled surfaces. In: Computer Graphics Forum, vol. 22, pp. 281–289. Wiley (2003)

  15. Salti, S., Tombari, F., Di Stefano, L.: A performance evaluation of 3D keypoint detectors. In: International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission (3DIMPVT), pp. 236–243. IEEE (2011)

  16. Scott, D.W.: On optimal and data-based histograms. Biometrika 66(3), 605–610 (1979)

  17. Sipiran, I., Bustos, B.: Harris 3D: a robust extension of the Harris operator for interest point detection on 3D meshes. Vis. Comput. 27(11), 963–976 (2011)

  18. Smith, S.M., Brady, J.M.: SUSAN: a new approach to low level image processing. Int. J. Comput. Vis. 23(1), 45–78 (1997)

  19. The Stanford 3D Scanning Repository (2014). http://graphics.stanford.edu/data/3Dscanrep

  20. Unnikrishnan, R., Hebert, M.: Multi-scale interest regions from unorganized point clouds. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW '08), pp. 1–8. IEEE (2008)

  21. Zhong, Y.: Intrinsic shape signatures: a shape descriptor for 3D object recognition. In: IEEE 12th International Conference on Computer Vision Workshops (ICCV Workshops), pp. 689–696. IEEE (2009)


Author information


Correspondence to Jens Garstka.


Appendix—Additional Examples


Happy Buddha

This object is obtained from ‘The Stanford 3-D Scanning Repository’ [19] and is characterized by the following properties:

  • Points: 144647

  • \(pcr = 0.00071\)

  • Voxel grid: \(135 \times 299 \times 135\)

  • \(r_{conv} = 10 \cdot pcr\)

  • \(\sigma = 0.124884\)

  • Bins: 121

  • Keypoints: 210

The histogram below illustrates the distribution of convolution values for the ‘Happy Buddha’. To save space, the axis labels are omitted; they correspond to those in Fig. 5, i.e., the abscissa shows the bin number and the ordinate the number of elements per bin.

The 3-D point cloud of the ‘Happy Buddha’ shown on the right combines the two types of figures already used to illustrate the results for the ‘Stanford Bunny’: the color gradient, introduced in Fig. 4, tints the points according to their convolution values from the smallest (red) to the largest (blue), and the purple markers, as in Fig. 6d, indicate the final keypoints.

[Figure a: histogram of convolution values; Figure b: annotated point cloud]
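The parameters listed above are linked by simple relations: the convolution radius is defined as ten times the point-cloud resolution (pcr), and the bin counts are plausibly derived from a data-driven rule such as Scott's [16], which the reference list cites. The following Python sketch reproduces the radius for the ‘Happy Buddha’; the function names are ours, and the use of Scott's rule for the listed bin counts is an assumption, not a statement of the authors' exact procedure.

```python
# Sketch of the appendix parameter relations (names are ours).

def conv_radius(pcr: float, factor: float = 10.0) -> float:
    """Convolution radius r_conv = factor * pcr, as listed in the appendix."""
    return factor * pcr

def scott_bin_width(sigma: float, n: int) -> float:
    """Scott's rule [16]: bin width h = 3.5 * sigma * n**(-1/3).
    Whether this produced the listed bin counts is our assumption."""
    return 3.5 * sigma * n ** (-1.0 / 3.0)

# 'Happy Buddha' values from the appendix:
r = conv_radius(0.00071)              # -> 0.0071
h = scott_bin_width(0.124884, 144647)
print(r, h)
```

Dividing the range of convolution values by `h` would then give the number of bins; that range is not listed in the appendix, so it is left out here.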

Dragon

This object is obtained from ‘The Stanford 3-D Scanning Repository’ [19] and is characterized by the following properties:

  • Points: 100250

  • \(pcr = 0.00097\)

  • Voxel grid: \(236 \times 174 \times 120\)

  • \(r_{conv} = 10 \cdot pcr\)

  • \(\sigma = 0.124507\)

  • Bins: 107

  • Keypoints: 92

The histogram below illustrates the distribution of convolution values for the ‘Dragon’. To save space, the axis labels are omitted; they correspond to those in Fig. 5, i.e., the abscissa shows the bin number and the ordinate the number of elements per bin.

[Figure c: histogram of convolution values; Figure d: annotated point cloud]

The 3-D point cloud of the ‘Dragon’ shown above combines the two types of figures already used to illustrate the results for the ‘Stanford Bunny’: the color gradient, introduced in Fig. 4, tints the points according to their convolution values from the smallest (red) to the largest (blue), and the purple markers, as in Fig. 6d, indicate the final keypoints.


Copyright information

© 2016 Springer International Publishing Switzerland

About this paper

Cite this paper

Garstka, J., Peters, G. (2016). Highly Parallelizable Algorithm for Keypoint Detection in 3-D Point Clouds. In: Filipe, J., Madani, K., Gusikhin, O., Sasiadek, J. (eds) Informatics in Control, Automation and Robotics: 12th International Conference, ICINCO 2015, Colmar, France, July 21–23, 2015, Revised Selected Papers. Lecture Notes in Electrical Engineering, vol. 383. Springer, Cham. https://doi.org/10.1007/978-3-319-31898-1_15

  • DOI: https://doi.org/10.1007/978-3-319-31898-1_15

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-31896-7

  • Online ISBN: 978-3-319-31898-1