
Integration of Computervision and Artificial Intelligence Subsystems with Robot Operating System Based Motion Planning for Industrial Robots

Published in: Automatic Control and Computer Sciences

Abstract

The paper proposes a flexible system, based on the Robot Operating System (ROS) framework, for integrating 3D computer vision and artificial intelligence algorithms with industrial robots to automate industrial tasks. The system is flexible with respect to both the 3D computer vision hardware and the industrial robot components, allowing different hardware to be tested with only minor software changes. An experimental system consisting of a Kinect V2 RGB+Depth camera and a Universal Robots UR5 robot was set up. In this setup, a pick-and-place task was implemented in which two types of randomly arranged objects (tubes and cans) were picked from a container and sorted into two separate containers. The average full cycle time for the task was measured to be 19.675 s.
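
For readers unfamiliar with how such a pipeline is typically wired together in ROS, the following is a minimal sketch of the general pattern the abstract describes: a node that receives Kinect V2 point clouds, hands them to a detection step, and sends the resulting grasp pose to the UR5 through MoveIt. It is an illustration, not the authors' implementation; the topic name /kinect2/sd/points, the planning-group name "manipulator", and the detect_object() helper (with its fixed example pose) are assumptions based on the standard iai_kinect2 and ROS-Industrial universal_robot packages.

    #!/usr/bin/env python
    # Minimal sketch of a ROS pick-and-place node (not the authors' code).
    # Topic and group names are assumptions; detect_object() is a hypothetical
    # placeholder for the paper's 3D vision and AI step.
    import sys
    import rospy
    import moveit_commander
    from geometry_msgs.msg import Pose
    from sensor_msgs.msg import PointCloud2

    latest_cloud = None

    def cloud_callback(msg):
        # Cache the most recent Kinect V2 point cloud for the detection step.
        global latest_cloud
        latest_cloud = msg

    def detect_object(cloud):
        # Placeholder: a real system would segment the cloud, classify the
        # object (tube vs. can) and return a grasp pose in the robot base
        # frame. A fixed pose stands in for that result here.
        pose = Pose()
        pose.position.x, pose.position.y, pose.position.z = 0.4, 0.1, 0.25
        pose.orientation.w = 1.0
        return pose

    def main():
        moveit_commander.roscpp_initialize(sys.argv)
        rospy.init_node("pick_and_place_sketch")

        # "manipulator" is the planning group name commonly used by the
        # ROS-Industrial universal_robot package for the UR5.
        group = moveit_commander.MoveGroupCommander("manipulator")

        # Kinect V2 point clouds as published by the iai_kinect2 bridge.
        rospy.Subscriber("/kinect2/sd/points", PointCloud2, cloud_callback)

        rate = rospy.Rate(1)
        while not rospy.is_shutdown():
            if latest_cloud is not None:
                grasp_pose = detect_object(latest_cloud)
                group.set_pose_target(grasp_pose)
                group.go(wait=True)      # plan and execute motion to the pose
                group.stop()
                group.clear_pose_targets()
            rate.sleep()

    if __name__ == "__main__":
        main()

In a complete system, the detection step would run the vision/AI algorithms, and gripper actuation plus the place motion into the sorting container would follow each grasp; the sketch only shows how the camera, detection and motion-planning subsystems are decoupled through ROS topics and MoveIt.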

ACKNOWLEDGMENTS

The research leading to these results has received funding from the research project “Competency Centre of Latvian Electric and Optical Equipment Productive Industry” of EU Structural funds, contract no. 1.2.1.1/16/A/002 signed between LEO Competence Centre and Central Finance and Contracting Agency, Research no. 11 “The research on the development of computer vision techniques for the automation of industrial processes.”

Author information

Corresponding authors

Correspondence to Janis Arents, Ricards Cacurs or Modris Greitans.

Additional information

The article is published in the original.

About this article

Cite this article

Arents, J., Cacurs, R. & Greitans, M. Integration of Computervision and Artificial Intelligence Subsystems with Robot Operating System Based Motion Planning for Industrial Robots. Aut. Control Comp. Sci. 52, 392–401 (2018). https://doi.org/10.3103/S0146411618050024
