Classifying Pedestrian Actions in Advance Using Predicted Video of Urban Driving Scenes


Abstract:

Fig. 1. Generating predictions of a future for a pedestrian attempting to cross the street. We pick out two key frames from the (a) input sequence and the (b) ground truth sequence, 16 frames apart. Image (c) shows our prediction at the same time instant as the ground truth.

We explore prediction of urban pedestrian actions by generating a video future of the traffic scene, and show promising results in classifying pedestrian behaviour before it is observed. We compare several encoder-decoder network models that predict 16 frames (400-600 milliseconds of video) from the preceding 16 frames. Our main contribution is a method for learning a sequence of representations that iteratively transform features learnt from the input into the future. We then use a binary action classifier network to determine a pedestrian's crossing intent from the predicted video. Our results show an average precision of 81%, significantly higher than previous methods. The model with the best classification performance runs in 117 ms on a commodity GPU, giving an effective look-ahead of 416 ms.
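The pipeline the abstract describes (encode 16 observed frames, iteratively transform the latent representation to roll out 16 future frames, then classify crossing intent from the predicted future) can be sketched as below. This is a hypothetical toy illustration, not the authors' model: all layer shapes, weight names (`W_enc`, `W_step`, `W_dec`, `w_cls`), and the use of randomly initialised weights in place of trained parameters are assumptions for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

T, H, W = 16, 32, 32          # 16 frames of 32x32 grayscale (toy resolution)
LATENT = 64                   # illustrative latent size

# Randomly initialised matrices stand in for trained network parameters.
W_enc = rng.normal(scale=0.01, size=(H * W, LATENT))
W_step = rng.normal(scale=0.01, size=(LATENT, LATENT))
W_dec = rng.normal(scale=0.01, size=(LATENT, H * W))
w_cls = rng.normal(scale=0.01, size=(LATENT,))

def encode(frames):
    """Flatten each frame and project it into the latent space."""
    return frames.reshape(T, H * W) @ W_enc          # (T, LATENT)

def predict_future(latents):
    """Iteratively transform the last observed latent into 16 future latents."""
    z = latents[-1]
    future = []
    for _ in range(T):
        z = np.tanh(z @ W_step)                      # one transform per future step
        future.append(z)
    return np.stack(future)                          # (T, LATENT)

def decode(latents):
    """Project future latents back to pixel space (the predicted video)."""
    return (latents @ W_dec).reshape(T, H, W)

def crossing_intent(latents):
    """Binary classifier head: P(pedestrian will cross) from future latents."""
    logit = latents.mean(axis=0) @ w_cls
    return 1.0 / (1.0 + np.exp(-logit))

past = rng.normal(size=(T, H, W))                    # 16 observed frames
future_latents = predict_future(encode(past))
predicted_video = decode(future_latents)             # 16 predicted frames
p_cross = crossing_intent(future_latents)
```

The key design point mirrored here is that classification runs on the *predicted* frames' representations rather than the observed ones, which is what buys the look-ahead the abstract reports.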
Date of Conference: 20-24 May 2019
Date Added to IEEE Xplore: 12 August 2019
Conference Location: Montreal, QC, Canada

