Obstacle detection in a greenhouse environment using the Kinect sensor

https://doi.org/10.1016/j.compag.2015.02.001

Highlights

  • Obstacle detection in a greenhouse using the Kinect 3D sensor.

  • Depth is processed into slope information.

  • Obstacle decision is based on color, texture and slope.

  • The system is real-time and provides good results.

Abstract

In many agricultural robotic applications, the robotic vehicle must detect obstacles in its way in order to navigate correctly. This is also true for robotic spray vehicles that autonomously explore greenhouses. In this study, we present an approach to obstacle detection in a greenhouse environment using the Kinect 3D sensor, which provides synchronized color and depth information. First, the depth data are processed by applying slope computation. This creates an obstacle map that labels pixels exceeding a predefined slope as suspected obstacles and the rest as surface. Then, the system uses both color and texture features to classify the suspected obstacle pixels. The obstacle detection decision is made using information on each pixel's slope, its intensity, and its neighboring pixels. The proposed sensor and algorithm are demonstrated on data recorded by the Kinect in a greenhouse. We show that the system produces satisfactory results (all obstacles were detected, with only a few false positives) and is fast enough to run on a computer with limited resources.

Introduction

In a greenhouse environment, to reduce human involvement in toxic or labor-intensive activities such as spraying and fruit collection, a high degree of system autonomy is required. The paths between pepper plants often contain objects, some of which can impede the passage of a robotic vehicle. The robotic spray vehicle needs to be able to navigate along greenhouse paths and thus must be equipped to distinguish between obstacles and non-obstacles. The ability to detect obstacles plays an important role in robot autonomy. Currently, the most common systems in use for this task are based on data from stereo images (Kumano et al., 2000, Zhao and Yuta, 1993, Reina and Milella, 2012, Rankin et al., 2005, Konolige et al., 2008), or on a combination of visual imagery and 3D scanning (Quigley et al., 2009). These systems require multiple sensors in order to perceive multiple aspects of their environment (Krotkov et al., 2007, Luo and Kay, 1990, Stentz et al., 2003, Dahlkamp et al., 2006). Passive sensors, such as color and infrared cameras, are used for sensing the appearance of the environment, whereas active sensors, such as lasers, are used for geometric sensing. Implementing multiple sensors in a single system requires a large platform for sensor installation and often results in a complicated and expensive product.

Nowadays, cameras that capture RGB images with depth information are available. The Microsoft Kinect (Kinect, 2010) is a low cost peripheral, designed for use as a controller for the Xbox 360 game system. In addition to its color (RGB) image, the depth image is obtained using an infrared (IR) emitter which projects a predefined irregular pattern of IR dots of varying intensities (Shpunt et al., 2007), and an infrared camera which acquires it. The appearance of the light pattern in a certain camera image region varies with the camera-object distance. This effect is utilized to generate a distance image of the acquired scene. This new technology was recently used in object detection systems (Khan et al., 2012, Lopez et al., 2012).
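The triangulation principle behind this structured-light depth measurement can be illustrated with a small sketch. The calibration values below are hypothetical examples, not the Kinect's actual parameters:

```python
def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Structured-light triangulation: the observed shift (disparity) of a
    projected IR dot in the camera image is inversely proportional to the
    camera-object distance, z = f * b / d, where f is the focal length in
    pixels and b the emitter-camera baseline in metres."""
    return focal_px * baseline_m / disparity_px

# Illustrative values only: a larger dot shift means a closer object.
near = depth_from_disparity(100.0, 580.0, 0.075)  # ~0.44 m
far = depth_from_disparity(10.0, 580.0, 0.075)    # ~4.35 m
```

This inverse relation is also why the Kinect's depth resolution degrades with distance: at long range, a one-pixel change in disparity corresponds to a large change in depth.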

The Kinect 3D sensor, which provides synchronized color and depth information, was originally designed for the gaming industry in a living-room environment. The goal of this study was to develop a reliable, inexpensive system based on the Kinect’s color and depth information that detects obstacles in the greenhouse environment without any a priori knowledge of the greenhouse layout. Several objectives were formulated. The first was to record the Kinect’s color and depth data in a greenhouse, a setting that involves outdoor conditions and a moving camera; note that the Kinect was built for gaming, where in most applications the sensor is stationary. The second was to develop an obstacle detection system based on a range of information, such as depth, color and texture, to ensure the robustness of the obstacle detection system.

Section snippets

Obstacle detection sensors and technology

Several technologies including stereovision, ladar and microwave radar have been investigated for obstacle detection in agricultural autonomous robotics (Rouveure et al., 2012). An examination of the sensors in use in the obstacle detection field shows that each of them has its own unique advantages and disadvantages (Discant et al., 2007). Stereovision allows three-dimensional scene reconstruction using two or more cameras placed at different fixed locations in space. By matching corresponding…

System overview

Obstacle detection is performed separately for each video frame based on the depth and color maps. The first step is calculating the slope at each pixel of the depth image. A small slope for a surface patch indicates ground surface, which is therefore traversable; pixels that can potentially be interpreted as obstacles for a robotic vehicle have a slope above a certain threshold value. This first step is based on depth only, taken as raw data from the sensor, in order to obtain an efficient…
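A minimal sketch of this first step could look as follows. The gradient-based slope estimate and the threshold value are illustrative stand-ins, since the paper's exact slope computation is not shown in this snippet:

```python
import numpy as np

def slope_obstacle_map(depth, threshold_deg=30.0, pixel_pitch=1.0):
    """Label pixels whose local depth slope exceeds a threshold.

    depth: 2-D array of range values from the depth sensor.
    pixel_pitch: spacing between samples, in the same units as depth.
    Returns a boolean obstacle map: True = suspected obstacle,
    False = traversable surface.
    """
    # Local rate of change of depth along image rows and columns.
    gy, gx = np.gradient(depth.astype(float), pixel_pitch)
    # Convert the gradient magnitude into a slope angle in degrees.
    slope_deg = np.degrees(np.arctan(np.hypot(gx, gy)))
    return slope_deg > threshold_deg
```

A flat depth patch yields no suspected obstacles, while a sharp depth discontinuity (e.g. an object standing on the path) produces slope values well above the threshold.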

Experimental setup

We conducted the experiments in a greenhouse at the Volcani Center, Israel. The Kinect sensor was connected to a laptop computer, and its data were recorded while following several greenhouse paths. The Kinect sensor was mounted on a flat surface, and its field of view (FOV) contained the ground and pepper bush areas. The data were recorded in the afternoon hours, when the sunlight is sufficient for the Kinect’s RGB camera operation yet does not interfere with the Kinect’s…

Experimental results

We applied texture features only to frames where at least one obstacle had already been detected by the color classifier. In such cases the texture features were used to find further obstacles that were not detected based on color. A final detection decision was positive if either the color or the texture classifier was positive. This guaranteed that the system would stay on the safe side, presenting all detected obstacles so that the robotic vehicle would not bump into an obstacle. Table 2 shows the…
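The decision rule described above (run the texture classifier only on frames where the color classifier already fired, then take the union of both detections) can be sketched as follows. The classifier functions here are placeholders, not the paper's actual color and texture features:

```python
import numpy as np

def detect_obstacles(frame, color_classify, texture_classify):
    """Fuse color- and texture-based obstacle masks conservatively.

    color_classify / texture_classify: functions mapping a frame to a
    boolean per-pixel obstacle mask (placeholders for the paper's
    classifiers). A pixel is flagged if EITHER classifier flags it,
    so no detection from either classifier is ever suppressed.
    """
    color_mask = color_classify(frame)
    if not color_mask.any():
        # Texture features are computed only on frames where the color
        # classifier already found at least one obstacle.
        return color_mask
    return color_mask | texture_classify(frame)
```

The union (logical OR) biases the system toward false positives rather than missed obstacles, which is the safe trade-off for a vehicle that must never collide.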

Conclusion

The experimental results demonstrate that the presented approach can efficiently distinguish between obstacles and non-obstacles. By using varied information (depth, color and texture), this method achieves satisfactory performance, and its simple computation enables a real-time solution. The Kinect sensor, providing both color and depth information, enabled the development of an inexpensive obstacle detection system that uses only one sensor to accomplish the…

References (49)

  • J. Borenstein et al. Real-time obstacle avoidance for fast mobile robots. IEEE Trans. Syst. Man Cybern. (1989)
  • J. Borenstein et al. Obstacle avoidance with ultrasonic sensors. IEEE J. Robot. Autom. (1988)
  • Corke, P., Winstanley, G., Roberts, J., Duff, E., Sikka, P., 1999. Robotics for the mining industry: opportunities and...
  • Dahlkamp, H., Kaehler, A., Stavens, D., Thrun, S., Bradski, G., 2006. Self-supervised monocular road detection in...
  • Daily, M.J., Harris, J.G., Reiser, K., 1987. Detecting obstacles in range imagery. In: Image Understanding Workshop,...
  • Discant, A., Rogozan, A., Rusu, C., Bensrhair, A., 2007. Sensors for obstacle detection – a survey. In: Electronics...
  • Foessel, A., Chheda, S., Apostolopoulos, D., 1999. Short-range millimeter wave radar perception in a polar environment....
  • Freedman, B., Shpunt, A., Machline, M., Arieli, Y., May 2010. Depth mapping using projected patterns. In: United States...
  • Z. Guo et al. A completed modeling of local binary pattern operator for texture classification. IEEE Trans. Image Process. (2010)
  • Harper, N., Mckerrow, P., 1999. Detecting plants for landmarks with ultrasonic sensing. In: Proceedings of the...
  • Henry, P., Krainin, M., Herbst, E., Ren, X., Fox, D., December 2010. RGB-D mapping: using depth cameras for dense 3D...
  • Khan, A., Moideen, F., Lopez, J., Khoo, W.L., Zhu, Z., 2012. Kindectect: Kinect detecting objects. In: International...
  • K. Khoshelham et al. Accuracy and resolution of Kinect depth data for indoor mapping applications. Sensors (2012)
  • Kinect, 2010....