
Enhanced Scene Understanding and Situation Awareness for Autonomous Vehicles Based on Semantic Segmentation


Abstract:

Accurate visual perception and comprehensive scene understanding are critical for the safety and reliability of autonomous vehicles (AVs). Nevertheless, the efficacy of visual perception systems can be impaired by the intricacy of road scenes, and existing scene understanding approaches may be insufficient. Consequently, this study proposes an enhanced scene understanding model to achieve precise awareness of driving situations. Recognizing the limitations posed by the oversimplification of samples in current urban scene datasets, we selected critical frames from 336,000 video frames, sourced from real-world driving environments, to assemble a more complex road scene (CRS) dataset. We integrated Residual Neural Network (ResNet) and Pyramid Scene Parsing Network (PSPNet) architectures and refined them through class mapping and targeted network fine-tuning. Based on the segmentation outputs and the XGBoost algorithm, we identified the driving scenarios for the ego vehicle, enabling instantaneous driving situation analysis. The predictive model also evaluated the trajectories of interactive vehicles and estimated their kinematic states. Furthermore, we conducted a thorough evaluation of scenario complexity by integrating the features described above. The findings indicate that our model achieves a segmentation accuracy of 78.8% in CRSs, with a twofold improvement in training efficiency. We also confirmed the effectiveness of the scene understanding approach through real-world road testing in China. This research provides insight into situation awareness within CRSs, thereby enhancing the visual perception capabilities of AVs. These results have substantial implications for autonomous driving tests and for advancing decision-making and control algorithms.
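The following is a minimal sketch of the two-stage pipeline described in the abstract: a PSPNet-style segmentation head on a ResNet backbone, whose per-class pixel proportions are fed to an XGBoost classifier that labels the ego vehicle's driving scenario. The class counts, scenario labels, feature choice, and layer sizes are illustrative assumptions, not the authors' published configuration.

```python
# Hypothetical sketch of the segmentation + XGBoost scenario-classification pipeline.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50
import numpy as np
from xgboost import XGBClassifier

NUM_SEG_CLASSES = 19   # assumed Cityscapes-style class count
NUM_SCENARIOS = 4      # assumed scenario labels (e.g., straight, merge, intersection, congestion)

class PyramidPooling(nn.Module):
    """PSPNet-style pyramid pooling: pool at several grid sizes, project, upsample, concatenate."""
    def __init__(self, in_ch, bins=(1, 2, 3, 6)):
        super().__init__()
        out_ch = in_ch // len(bins)
        self.stages = nn.ModuleList(
            nn.Sequential(nn.AdaptiveAvgPool2d(b),
                          nn.Conv2d(in_ch, out_ch, 1, bias=False),
                          nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
            for b in bins
        )
        self.out_ch = in_ch + out_ch * len(bins)

    def forward(self, x):
        h, w = x.shape[2:]
        feats = [x] + [F.interpolate(s(x), size=(h, w), mode="bilinear", align_corners=False)
                       for s in self.stages]
        return torch.cat(feats, dim=1)

class PSPSegmenter(nn.Module):
    """ResNet-50 backbone + pyramid pooling + per-pixel classifier (untrained, for illustration)."""
    def __init__(self, num_classes=NUM_SEG_CLASSES):
        super().__init__()
        backbone = resnet50(weights=None)
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])  # drop avgpool and fc
        self.ppm = PyramidPooling(2048)
        self.head = nn.Sequential(nn.Conv2d(self.ppm.out_ch, 512, 3, padding=1, bias=False),
                                  nn.BatchNorm2d(512), nn.ReLU(inplace=True),
                                  nn.Conv2d(512, num_classes, 1))

    def forward(self, x):
        h, w = x.shape[2:]
        logits = self.head(self.ppm(self.encoder(x)))
        return F.interpolate(logits, size=(h, w), mode="bilinear", align_corners=False)

def class_proportions(seg_logits):
    """Scene-level feature: fraction of pixels assigned to each semantic class."""
    pred = seg_logits.argmax(dim=1)  # (B, H, W)
    counts = torch.stack([(pred == c).float().mean(dim=(1, 2))
                          for c in range(seg_logits.shape[1])], dim=1)
    return counts.cpu().numpy()      # (B, num_classes)

if __name__ == "__main__":
    model = PSPSegmenter().eval()
    frames = torch.randn(8, 3, 256, 512)                   # stand-in for video frames
    with torch.no_grad():
        feats = class_proportions(model(frames))
    labels = np.arange(8) % NUM_SCENARIOS                  # stand-in scenario labels
    clf = XGBClassifier(n_estimators=50, max_depth=4)
    clf.fit(feats, labels)
    print("predicted scenarios:", clf.predict(feats))
```

In practice, the segmentation network would be fine-tuned on the CRS dataset and the scenario classifier trained on labeled driving situations; the per-class pixel proportions used here are just one plausible scene-level feature derived from the segmentation output.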
Published in: IEEE Transactions on Systems, Man, and Cybernetics: Systems (Volume: 54, Issue: 11, November 2024)
Page(s): 6537 - 6549
Date of Publication: 10 June 2024