Authors:
Zelin Zhang
and
Jun Ohya
Affiliation:
Department of Modern Mechanical Engineering, Waseda University, Tokyo, Japan
Keyword(s):
Autonomous Driving, Deep Learning, End-to-End, Vehicle-to-Vehicle Communication.
Abstract:
In recent years, autonomous driving through deep learning has attracted increasing attention. This paper proposes a novel Vehicle-to-Vehicle (V2V) communication-based autonomous vehicle driving system that takes advantage of both spatial and temporal information. The proposed system consists of a novel combination of CNN layers and LSTM layers that controls the steering angle and speed by exploiting information from both the autonomous vehicle and a cooperative vehicle. The CNN layers process the input sequential image frames, and the LSTM layers process historical data to predict the steering angle and speed of the autonomous vehicle. To confirm the validity of the proposed system, we conducted experiments evaluating the MSE of the steering angle and vehicle speed on the Udacity dataset. The experimental results are summarized as follows. (1) Driving “with a cooperative vehicle” works significantly better than “without”. (2) Among all the tested networks, ResNet performs best. (3) Adding the LSTM, which processes the historical motion data, to the ResNet performs better than using no LSTM. (4) Regarding the number of input sequential frames, eight frames work best. (5) Regarding the distance between the autonomous host vehicle and the cooperative vehicle, ten to forty meters achieves robust autonomous driving control.
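To make the described architecture concrete, the following is a minimal sketch, assuming a PyTorch implementation: a shared ResNet backbone encodes each frame from both the host and the cooperative vehicle, an LSTM aggregates the eight-frame history, and two linear heads regress steering angle and speed. The class name V2VDrivingNet and all parameter values here are hypothetical illustrations, not taken from the paper.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class V2VDrivingNet(nn.Module):
    """Hypothetical sketch of the CNN+LSTM controller described in the abstract.

    A shared ResNet backbone extracts per-frame features for both the host
    and cooperative vehicle; an LSTM processes the frame history; two heads
    regress steering angle and speed.
    """
    def __init__(self, hidden_size=256):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()          # keep the 512-d feature vector
        self.cnn = backbone
        # host and cooperative features are concatenated per time step
        self.lstm = nn.LSTM(input_size=1024, hidden_size=hidden_size,
                            batch_first=True)
        self.steering_head = nn.Linear(hidden_size, 1)
        self.speed_head = nn.Linear(hidden_size, 1)

    def forward(self, host_frames, coop_frames):
        # host_frames, coop_frames: (B, T, 3, H, W), T = number of frames
        b, t = host_frames.shape[:2]
        feats = []
        for frames in (host_frames, coop_frames):
            f = self.cnn(frames.flatten(0, 1)).view(b, t, -1)  # (B, T, 512)
            feats.append(f)
        seq = torch.cat(feats, dim=-1)       # (B, T, 1024)
        out, _ = self.lstm(seq)
        last = out[:, -1]                    # state after the final frame
        return self.steering_head(last), self.speed_head(last)

# Example: eight 224x224 RGB frames from each vehicle (the best-performing
# sequence length reported in the abstract)
model = V2VDrivingNet()
host = torch.randn(2, 8, 3, 224, 224)
coop = torch.randn(2, 8, 3, 224, 224)
steer, speed = model(host, coop)             # each of shape (2, 1)
```

Consistent with the evaluation metric in the abstract, such a model would naturally be trained with an MSE loss on the steering-angle and speed targets.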