
Sensor Fusion of Intensity and Depth Cues using the ChiNet for Semantic Segmentation of Road Scenes


Abstract:

Vision-based environment perception is an important research topic for autonomous driving and advanced driver assistance systems. Vision sensors, such as the monocular camera and the stereo camera, are widely used for environment perception. The monocular camera provides appearance information such as intensity, while the stereo camera provides depth information. The appearance and depth information are complementary, and their effective fusion would result in robust environment perception. Consequently, in this paper, we propose a novel deep learning framework, termed the ChiNet, for the effective sensor fusion of appearance and depth information for free space and road object estimation. The ChiNet has two input branches and two output branches: the input branches process the intensity and depth information separately, and the output branches perform semantic segmentation of free space and road objects separately. A comparative evaluation of the proposed framework against state-of-the-art baseline algorithms is performed on an acquired dataset. Moreover, a detailed parameter analysis is performed to validate the ChiNet architecture as well as the advantages of sensor fusion. The experimental results show that the ChiNet outperforms the baseline algorithms. We also show that the proposed ChiNet architecture outperforms other variants of the ChiNet architecture.
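
The exact ChiNet architecture is not given on this page. As an illustration only, the following is a minimal PyTorch-style sketch of the general two-input, two-output layout the abstract describes: separate encoder branches for the intensity image and the depth map, feature fusion at a shared bottleneck, and separate decoder branches for free-space and road-object segmentation. All layer sizes, the concatenation-based fusion, and the names (TwoBranchFusionNet, conv_block, up_block) are assumptions, not the authors' implementation.

# Illustrative sketch only -- the actual ChiNet layers and fusion strategy
# are not specified in this abstract; sizes and names below are assumptions.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions followed by 2x downsampling."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )


def up_block(in_ch, out_ch):
    """2x upsampling followed by a 3x3 convolution."""
    return nn.Sequential(
        nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class TwoBranchFusionNet(nn.Module):
    """Two input branches (intensity, depth) fused at the bottleneck,
    two output branches (free space, road objects)."""

    def __init__(self, num_object_classes=2):
        super().__init__()
        # Separate encoders for the intensity image and the depth map.
        self.intensity_enc = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
        self.depth_enc = nn.Sequential(conv_block(1, 32), conv_block(32, 64))
        # Fuse the two feature maps by channel-wise concatenation.
        self.fuse = nn.Conv2d(128, 128, 1)
        # Separate decoders for the two segmentation tasks.
        self.freespace_dec = nn.Sequential(
            up_block(128, 64), up_block(64, 32), nn.Conv2d(32, 2, 1))
        self.object_dec = nn.Sequential(
            up_block(128, 64), up_block(64, 32),
            nn.Conv2d(32, num_object_classes, 1))

    def forward(self, intensity, depth):
        fused = self.fuse(torch.cat(
            [self.intensity_enc(intensity), self.depth_enc(depth)], dim=1))
        return self.freespace_dec(fused), self.object_dec(fused)


if __name__ == "__main__":
    net = TwoBranchFusionNet()
    rgb = torch.randn(1, 3, 128, 256)    # intensity (camera) input
    dep = torch.randn(1, 1, 128, 256)    # depth (stereo) input
    free_space, objects = net(rgb, dep)
    print(free_space.shape, objects.shape)  # per-pixel logits for each task

Under these assumptions, each task keeps its own decoder head while sharing the fused intensity-depth features, which is one common way to realize the two-input, two-output structure described above.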
Date of Conference: 26-30 June 2018
Date Added to IEEE Xplore: 21 October 2018
ISBN Information:
Print on Demand (PoD) ISSN: 1931-0587
Conference Location: Changshu, China
