Abstract
With the global growth of internet-based trade, the transportation of large volumes of merchandise poses significant logistics challenges. Technologies such as computer vision and industrial robotics offer clear advantages in the speed and reliability of palletization tasks, a critical step in the merchandise transportation chain. This paper presents a computer vision strategy for localizing and recognizing boxes in a palletization process carried out by a robotic manipulator. The system captures the scene with a Kinect v2 depth camera and processes the resulting point cloud. The results show simultaneous recognition of up to 15 boxes within the robot's workspace, including their position in space and their dimensions, with an average error of approximately 3 cm.
Copyright information
© 2018 Springer Nature Switzerland AG
About this paper
Cite this paper
Rodriguez-Garavito, C.H., Camacho-Munoz, G., Álvarez-Martínez, D., Cardenas, K.V., Rojas, D.M., Grimaldos, A. (2018). 3D Object Pose Estimation for Robotic Packing Applications. In: Figueroa-García, J., Villegas, J., Orozco-Arroyave, J., Maya Duque, P. (eds) Applied Computer Sciences in Engineering. WEA 2018. Communications in Computer and Information Science, vol 916. Springer, Cham. https://doi.org/10.1007/978-3-030-00353-1_40
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-00352-4
Online ISBN: 978-3-030-00353-1