An Interpretable Lane Change Detector Algorithm based on Deep Autoencoder Anomaly Detection


Abstract:

In this paper, we address the challenge of employing Machine Learning (ML) algorithms in safety critical driving functions. Despite ML algorithms demonstrating good performance in various driving tasks, e.g., detecting when other vehicles are going to change lanes, the challenge of validating these methods has been neglected. To this end, we introduce an interpretable Lane Change Detector (LCD) algorithm which takes advantage of the performance of modern ML-based anomaly detection methods. We independently train three Deep Autoencoders (DAEs) on different driving maneuvers: lane keeping, right lane changes, and left lane changes. The lane changes are subsequently detected by observing the reconstruction errors at the output of each DAE. Since the detection is purely based on the reconstruction errors of independently trained DAEs, we show that the classification outputs are completely interpretable. We compare the introduced algorithm with black-box Recurrent Neural Network (RNN)-based classifiers, and train all methods on realistic highway driving data. We discuss both the costs and the benefits of an interpretable classification, and demonstrate the inherent interpretability of the algorithm.
Date of Conference: 11-17 July 2021
Date Added to IEEE Xplore: 01 November 2021
Conference Location: Nagoya, Japan
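
As a rough illustration of the detection scheme summarized in the abstract, the sketch below classifies a feature window by the smallest reconstruction error among three independently trained autoencoders (lane keeping, left lane change, right lane change). It is a minimal sketch, not the paper's implementation: the architecture, feature dimension, and names such as `DAE` and `classify` are illustrative assumptions.

```python
# Minimal sketch: maneuver classification by minimum reconstruction error
# across three autoencoders. Architecture and dimensions are assumptions,
# not the exact setup from the paper.
import torch
import torch.nn as nn


class DAE(nn.Module):
    """A small fully connected autoencoder used as a stand-in for the paper's DAEs."""

    def __init__(self, n_features: int, latent: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, latent)
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent, 32), nn.ReLU(), nn.Linear(32, n_features)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


def classify(window: torch.Tensor, daes: dict) -> tuple:
    """Return the maneuver whose DAE reconstructs the window best,
    together with all reconstruction errors (the interpretable evidence)."""
    errors = {}
    with torch.no_grad():
        for label, dae in daes.items():
            recon = dae(window)
            errors[label] = torch.mean((recon - window) ** 2).item()
    return min(errors, key=errors.get), errors


# Example usage with untrained (random) models and a dummy feature window.
daes = {m: DAE(n_features=20) for m in ("lane_keep", "left_change", "right_change")}
label, errs = classify(torch.randn(20), daes)
print(label, errs)
```

Because the decision reduces to comparing per-maneuver reconstruction errors, the errors themselves can be inspected directly, which is the source of the interpretability claimed in the abstract.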
