Abstract:
Despite the extraordinary progress of Advanced Driver Assistance Systems (ADAS), an alarming number of over 1.2 million people are still fatally injured in traffic accidents every year. Human error is mostly responsible for such casualties, as by the time the ADAS has alerted the driver, it is often too late. We present a vision-based system built on deep neural networks with 3D convolutions and residual learning for anticipating the future maneuver from driver observation. While previous work focuses on hand-crafted features (e.g., head pose), our model predicts the intention directly from video in an end-to-end fashion. Our architecture consists of three components: a neural network for extraction of optical flow, a 3D residual network for maneuver classification, and a Long Short-Term Memory network (LSTM) for handling temporal data of varying length. To evaluate our idea, we conduct thorough experiments on the publicly available Brain4Cars benchmark, which covers both inside and outside views for future maneuver anticipation. Our model predicts driver intention with an accuracy of 83.12% and 4.07 s before the beginning of the maneuver, outperforming state-of-the-art approaches while considering the inside view only.
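To make the three-component architecture concrete, below is a minimal PyTorch sketch of a 3D residual network feeding clip embeddings into an LSTM for maneuver classification. It is an illustration under assumptions, not the authors' implementation: the class name ManeuverAnticipationNet, the use of torchvision's r3d_18 as the 3D residual backbone, the hidden size, and the five-class output are all assumed, and the optical-flow extraction network is left out (the sketch simply consumes precomputed flow or RGB clips).

```python
# Sketch of the pipeline described in the abstract: 3D ResNet over short
# clips, followed by an LSTM that aggregates a variable number of clips.
# All sizes and names are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18


class ManeuverAnticipationNet(nn.Module):
    def __init__(self, num_maneuvers: int = 5, hidden_size: int = 128):
        super().__init__()
        # 3D residual network over clips of (optical-flow) frames.
        backbone = r3d_18(weights=None)
        backbone.fc = nn.Identity()  # keep the 512-d clip embedding
        self.backbone = backbone
        # LSTM handles a sequence of clip embeddings of varying length.
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden_size,
                            batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_maneuvers)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, num_clips, channels, frames, height, width)
        b, n = clips.shape[:2]
        feats = self.backbone(clips.flatten(0, 1))   # (b * n, 512)
        feats = feats.view(b, n, -1)
        out, _ = self.lstm(feats)                    # (b, n, hidden_size)
        return self.classifier(out[:, -1])           # logits per maneuver


# Example: two videos, each split into 4 clips of 16 frames at 112x112.
model = ManeuverAnticipationNet()
dummy = torch.randn(2, 4, 3, 16, 112, 112)
print(model(dummy).shape)  # torch.Size([2, 5])
```

The split into per-clip 3D convolutions plus a recurrent aggregator mirrors the abstract's motivation: the residual 3D network captures short-term motion within a clip, while the LSTM allows prediction from observation windows of varying length before the maneuver begins.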
Published in: 2019 IEEE Intelligent Vehicles Symposium (IV)
Date of Conference: 09-12 June 2019
Date Added to IEEE Xplore: 29 August 2019