Optical Flow Augmented Semantic Segmentation Networks for Automated Driving

Authors: Hazem Rashed 1; Senthil Yogamani 2; Ahmad El-Sallab 1; Pavel Křížek 3 and Mohamed El-Helw 4

Affiliations: 1 CDV AI Research, Cairo, Egypt; 2 Valeo Vision Systems, Ireland; 3 Valeo R&D DVS, Prague, Czech Republic; 4 Nile University, Cairo, Egypt

Keyword(s): Semantic Segmentation, Visual Perception, Dense Optical Flow, Automated Driving.

Related Ontology Subjects/Areas/Topics: Computer Vision, Visualization and Computer Graphics; Image and Video Analysis; Segmentation and Grouping

Abstract: Motion is a dominant cue in automated driving systems. Optical flow is typically computed to detect moving objects and to estimate depth using triangulation. In this paper, our motivation is to leverage the existing dense optical flow to improve the performance of semantic segmentation. To provide a systematic study, we construct four different architectures which use RGB only, flow only, RGBF concatenated and two-stream RGB + flow. We evaluate these networks on two automotive datasets, namely Virtual KITTI and Cityscapes, using the state-of-the-art flow estimator FlowNet v2. We also make use of the ground-truth optical flow in Virtual KITTI to serve as an ideal estimator, and a standard Farneback optical flow algorithm to study the effect of noise. Using the flow ground truth in Virtual KITTI, the two-stream architecture achieves the best results with an improvement of 4% IoU. As expected, there is a large improvement for moving objects like trucks, vans and cars, with 38%, 28% and 6% increases in IoU respectively. FlowNet produces an improvement of 2.4% in average IoU, with larger improvements for the moving objects: 26%, 11% and 5% for trucks, vans and cars. In Cityscapes, flow augmentation provided an improvement for moving objects like motorcycle and train, with an increase of 17% and 7% in IoU.
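
The abstract describes four input configurations: RGB only, flow only, RGBF concatenation and two-stream RGB + flow. As a rough illustration of the two-stream variant only, here is a minimal PyTorch-style sketch that fuses features from separate RGB and flow encoders before a shared segmentation decoder. The layer choices, channel sizes and fusion point are assumptions for illustration, not the paper's exact architecture.

import torch
import torch.nn as nn

class TwoStreamSegNet(nn.Module):
    """Illustrative two-stream model: one encoder for the RGB frame, one for
    the 2-channel dense optical flow map, mid-level concatenation fusion,
    then a small decoder producing per-pixel class logits."""

    def __init__(self, num_classes: int = 19, flow_channels: int = 2):
        super().__init__()
        self.rgb_encoder = self._make_encoder(3)
        self.flow_encoder = self._make_encoder(flow_channels)
        # Fuse the two feature maps by channel concatenation + 1x1 conv.
        self.fuse = nn.Conv2d(128 + 128, 128, kernel_size=1)
        self.decoder = nn.Sequential(
            nn.Conv2d(128, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(64, num_classes, kernel_size=1),
        )

    @staticmethod
    def _make_encoder(in_ch: int) -> nn.Sequential:
        # Two stride-2 convolutions: output is 1/4 resolution, 128 channels.
        return nn.Sequential(
            nn.Conv2d(in_ch, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, rgb: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
        f_rgb = self.rgb_encoder(rgb)      # (N, 128, H/4, W/4)
        f_flow = self.flow_encoder(flow)   # (N, 128, H/4, W/4)
        fused = self.fuse(torch.cat([f_rgb, f_flow], dim=1))
        return self.decoder(fused)         # (N, num_classes, H, W)

# Usage: an RGB frame plus a 2-channel dense flow map at the same resolution,
# e.g. produced by FlowNet v2 or a Farneback estimator.
net = TwoStreamSegNet(num_classes=19)
rgb = torch.randn(1, 3, 256, 512)
flow = torch.randn(1, 2, 256, 512)
logits = net(rgb, flow)  # torch.Size([1, 19, 256, 512])

The RGB-only and flow-only baselines follow by keeping a single stream, and the RGBF variant by concatenating the RGB and flow channels at the input instead of fusing mid-network.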

CC BY-NC-ND 4.0

Paper citation in several formats:
Rashed, H.; Yogamani, S.; El-Sallab, A.; Křížek, P. and El-Helw, M. (2019). Optical Flow Augmented Semantic Segmentation Networks for Automated Driving. In Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2019) - Volume 5: VISAPP; ISBN 978-989-758-354-4; ISSN 2184-4321, SciTePress, pages 165-172. DOI: 10.5220/0007248301650172

@conference{visapp19,
author={Hazem Rashed and Senthil Yogamani and Ahmad El{-}Sallab and Pavel K\v{r}\'{\i}\v{z}ek and Mohamed El{-}Helw},
title={Optical Flow Augmented Semantic Segmentation Networks for Automated Driving},
booktitle={Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2019) - Volume 5: VISAPP},
year={2019},
pages={165-172},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0007248301650172},
isbn={978-989-758-354-4},
issn={2184-4321},
}

TY - CONF

JO - Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2019) - Volume 5: VISAPP
TI - Optical Flow Augmented Semantic Segmentation Networks for Automated Driving
SN - 978-989-758-354-4
IS - 2184-4321
AU - Rashed, H.
AU - Yogamani, S.
AU - El-Sallab, A.
AU - Křížek, P.
AU - El-Helw, M.
PY - 2019
SP - 165
EP - 172
DO - 10.5220/0007248301650172
PB - SciTePress
ER -