
Vision-Based Solutions for Robotic Manipulation and Navigation Applied to Object Picking and Distribution

AI Transfer | Published in: KI - Künstliche Intelligenz

Abstract

This paper presents a robotic demonstrator for the manipulation and distribution of objects. The demonstrator relies on robust 3D vision-based solutions for navigation, object detection, and detection of graspable surfaces, using the rc_visard, a self-registering stereo vision sensor. Suitable software modules were developed for SLAM and for model-free suction gripping. These modules run onboard the sensor, so the demonstrator works as a standalone application that requires no additional host PC. The modules are interfaced with ROS, which allows quick implementation of a fully functional robotic application.
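The abstract mentions model-free suction gripping based on detected graspable surfaces. As an illustrative sketch only (the paper does not detail its method here, and all function names and the flatness tolerance below are assumptions), one plausible core of such a check is fitting a least-squares plane to a local point-cloud patch and accepting the patch for a suction cup only if the residual is small:

```python
# Hedged sketch, NOT the authors' implementation: decide whether a local
# point-cloud patch is planar enough for a suction cup by fitting a
# least-squares plane z = a*x + b*y + c and thresholding the RMS residual.

def solve3(m, v):
    """Solve a 3x3 linear system m @ x = v by Gaussian elimination."""
    a = [row[:] + [v[i]] for i, row in enumerate(m)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(a[r][i]))  # partial pivoting
        a[i], a[p] = a[p], a[i]
        for r in range(i + 1, 3):
            f = a[r][i] / a[i][i]
            for c in range(i, 4):
                a[r][c] -= f * a[i][c]
    x = [0.0, 0.0, 0.0]
    for i in range(2, -1, -1):
        x[i] = (a[i][3] - sum(a[i][c] * x[c] for c in range(i + 1, 3))) / a[i][i]
    return x

def fit_plane_z(points):
    """Least-squares fit of z = a*x + b*y + c to (x, y, z) samples."""
    sxx = sxy = syy = sx = sy = n = 0.0
    sxz = syz = sz = 0.0
    for x, y, z in points:
        sxx += x * x; sxy += x * y; syy += y * y
        sx += x; sy += y; sz += z
        sxz += x * z; syz += y * z
        n += 1.0
    return solve3([[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]],
                  [sxz, syz, sz])

def suction_feasible(points, tol=0.005):
    """Accept the patch if the RMS plane-fit residual is within tol (m)."""
    a, b, c = fit_plane_z(points)
    rms = (sum((z - (a * x + b * y + c)) ** 2
               for x, y, z in points) / len(points)) ** 0.5
    return rms <= tol, (a, b, c)
```

A real pipeline would operate on depth data from the stereo sensor and would also need to handle near-vertical patches (where a z = f(x, y) parameterization degenerates), but the flatness test conveys the model-free idea: no object model is needed, only local surface geometry.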




Funding

This project was partially funded by the European Union's Horizon 2020 research and innovation programme under project ROSIN (grant agreement no. 732287), through the Focused Technical Project (FTP) VISARD4ROS.

Author information

Corresponding author: Máximo A. Roa-Garzón.


About this article


Cite this article

Roa-Garzón, M.A., Gambaro, E.F., Florek-Jasinska, M. et al. Vision-Based Solutions for Robotic Manipulation and Navigation Applied to Object Picking and Distribution. Künstl Intell 33, 171–180 (2019). https://doi.org/10.1007/s13218-019-00588-z


Keywords

Navigation