Abstract:
Video-based traffic sign recognition poses a highly challenging problem due to the large number of possible classes and the wide variance of recording conditions in natural environments. Gathering a sufficient amount of data to solve this task with machine learning techniques remains a persistent issue. In this study, we assess the suitability of automatically generated traffic sign images for training corresponding image classifiers. To this end, we adapt the recently proposed cycle-consistent generative adversarial networks to transfer automatically rendered prototypical traffic sign images, for which we control type, pose, and, to a degree, background, into their true-to-life counterparts. We evaluate the proposed system through extensive experiments on the German Traffic Sign Recognition Benchmark dataset [1] and find that both a HOG-feature-based SVM classifier and a state-of-the-art CNN achieve reasonable performance when trained solely on artificial data. The approach is therefore well suited as a data augmentation method and allows for covering uncommon cases and classes.
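The abstract centers on cycle-consistent adversarial training between rendered prototypes and real-world sign images. As a rough illustration (not the authors' code), the sketch below shows the cycle-consistency term that such an adaptation typically optimizes; the generator names `G_AB`/`G_BA` (prototype-to-realistic and back) and the weight `lambda_cyc` are illustrative assumptions, not details from the paper.

```python
# Illustrative sketch of a CycleGAN-style cycle-consistency loss (PyTorch),
# assuming G_AB maps rendered sign prototypes (domain A) to realistic images
# (domain B) and G_BA maps back. Not the authors' implementation.
import torch.nn.functional as F

def cycle_consistency_loss(G_AB, G_BA, real_A, real_B, lambda_cyc=10.0):
    """L1 reconstruction error after full A->B->A and B->A->B round trips."""
    rec_A = G_BA(G_AB(real_A))  # prototype -> realistic -> prototype
    rec_B = G_AB(G_BA(real_B))  # realistic -> prototype -> realistic
    return lambda_cyc * (F.l1_loss(rec_A, real_A) + F.l1_loss(rec_B, real_B))
```

This term is added to the usual adversarial losses of both generators; in the original CycleGAN formulation it is what constrains the translation to preserve sign content (type and pose) while changing appearance.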
Published in: 2019 IEEE Intelligent Vehicles Symposium (IV)
Date of Conference: 09-12 June 2019
Date Added to IEEE Xplore: 29 August 2019