
Contour-based next-best view planning from point cloud segmentation of unknown objects

Published in: Autonomous Robots

Abstract

A novel strategy is presented to determine the next-best view for a robot arm, equipped with a depth camera in an eye-in-hand configuration, aimed at the autonomous exploration of unknown objects. Instead of maximizing the total expected volume of unknown space that becomes visible, the next-best view is chosen to observe the border of incomplete objects. Salient regions of space that belong to the objects are detected, without any prior knowledge, by applying a point cloud segmentation algorithm. The system uses a Kinect V2 sensor, which had not been considered in previous work on next-best view planning, and it exploits KinectFusion to maintain a volumetric representation of the environment. A low-level procedure that reduces the number of invalid points returned by the Kinect V2 is also presented. The viability of the approach has been demonstrated in a real setup in which the robot is fully autonomous. Experiments indicate that the proposed method enables the robot to actively explore unknown objects faster than a standard next-best view algorithm.
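To make the idea concrete, below is a minimal sketch of the contour-based selection step, not the authors' implementation: object voxels adjacent to unknown space are treated as the border of an incomplete object, and candidate views are ranked by how many border voxels fall inside a simple conical frustum. The grid labels, the frustum test, the units, and all function names are illustrative assumptions (Python with NumPy); the actual system operates on a KinectFusion volume and performs visibility checks that are omitted here.

```python
# Illustrative sketch of contour-based next-best view selection.
# Assumptions (not from the paper): a labelled voxel grid with states
# UNKNOWN/EMPTY/OCCUPIED, a boolean mask for one segmented object,
# coordinates in voxel units, and a conical frustum with no occlusion test.
import numpy as np

UNKNOWN, EMPTY, OCCUPIED = 0, 1, 2

def object_border_voxels(grid, object_mask):
    """Return object voxels with at least one 6-connected UNKNOWN neighbour."""
    border = []
    for idx in np.argwhere((grid == OCCUPIED) & object_mask):
        for off in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                    (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = idx + off
            if np.all(n >= 0) and np.all(n < grid.shape) and grid[tuple(n)] == UNKNOWN:
                border.append(idx)
                break  # one unknown neighbour is enough to mark a border voxel
    return np.asarray(border, dtype=float).reshape(-1, 3)

def view_score(cam_pos, cam_dir, border_pts, fov_deg=60.0, z_min=0.5, z_max=4.5):
    """Count border voxels inside a conical frustum (range + field-of-view test)."""
    v = border_pts - cam_pos                      # rays from camera to voxels
    dist = np.linalg.norm(v, axis=1)
    in_range = (dist > z_min) & (dist < z_max)    # illustrative depth limits
    cosang = (v @ cam_dir) / np.maximum(dist, 1e-9)
    in_fov = cosang > np.cos(np.radians(fov_deg / 2))
    return int(np.sum(in_range & in_fov))

def next_best_view(candidates, border_pts):
    """Pick the (position, direction) candidate that sees the most border voxels."""
    scores = [view_score(p, d / np.linalg.norm(d), border_pts)
              for p, d in candidates]
    return candidates[int(np.argmax(scores))], max(scores)

# Tiny usage example: one object voxel bordering unknown space; the view
# facing the object outscores the view facing away from it.
grid = np.full((3, 3, 3), EMPTY)
grid[1, 1, 1] = OCCUPIED
grid[1, 1, 2] = UNKNOWN
mask = np.zeros_like(grid, dtype=bool)
mask[1, 1, 1] = True
border = object_border_voxels(grid, mask)
cands = [(np.array([1.0, 1.0, 3.0]), np.array([0.0, 0.0, -1.0])),   # facing object
         (np.array([1.0, 1.0, -1.0]), np.array([0.0, 0.0, -1.0]))]  # facing away
best, score = next_best_view(cands, border)  # selects the first candidate
```

Scoring views by visible border voxels, rather than by total newly revealed volume, concentrates sensing effort on completing the objects themselves; that is the distinction the abstract draws against standard volumetric next-best view criteria.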





Author information

Correspondence to Jacopo Aleotti.

Additional information

This is one of several papers published in Autonomous Robots comprising the Special Issue on Active Perception.

Electronic supplementary material

Supplementary material 1 (AVI, 35,844 KB)


About this article


Cite this article

Monica, R., Aleotti, J. Contour-based next-best view planning from point cloud segmentation of unknown objects. Auton Robot 42, 443–458 (2018). https://doi.org/10.1007/s10514-017-9618-0

