Abstract:
Semantic segmentation of diverse real-world traffic scenes is a challenging task that autonomous vehicles must solve to obtain a reliable understanding of their surrounding environment. Although deep visual models show remarkable performance on traffic scene segmentation, they require significant improvement when the source and target scene distributions differ. Differences in weather and illumination between source and target datasets cause dramatic distribution shifts that degrade target segmentation performance. This article develops an unsupervised domain adaptation (UDA) model to address complex domain shift problems in which the dataset, weather, and illumination of source and target traffic scenes all differ. We devise a sparse adversarial multitarget UDA approach that captures powerful domain-invariant features for segmenting traffic scenes under different conditions. First, a sparse representation of the source traffic scenes is captured via a new spectral low-rank dictionary learning technique in the latent space of a deep encoder–decoder segmentation architecture. This feature sparsity gives the deep feature extractor high generalization capacity, enabling it to compute complex visual patterns of source images. Then, the distribution of the source sparse features is learned using a generative adversarial framework. Finally, the sparse representations of source and target scenes are aligned via a sparse domain-invariant feature extractor trained by min–max optimization. The aligned features serve as domain-invariant scene representations that best describe both source and target scenes, thereby providing deep domain adaptation for traffic scene semantic segmentation. Experiments on a real-world dataset demonstrate the superiority of the proposed model over state-of-the-art methods.
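To make the adversarial alignment step concrete, the sketch below illustrates the general min–max idea the abstract describes: a shared encoder produces latent scene features, and a domain discriminator is trained to distinguish source from target features while the encoder is trained to fool it. This is not the authors' implementation; the Encoder and DomainDiscriminator modules, the soft-threshold step (a crude stand-in for the paper's spectral low-rank dictionary learning), and the random placeholder batches are all assumptions for illustration, and the supervised segmentation loss on source labels is omitted.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Hypothetical latent feature extractor for traffic scenes."""
    def __init__(self, in_ch=3, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        z = self.net(x)
        # Soft-thresholding as a sparsity surrogate; the paper instead uses
        # spectral low-rank dictionary learning in the latent space.
        return torch.sign(z) * torch.clamp(z.abs() - 0.1, min=0.0)

class DomainDiscriminator(nn.Module):
    """Hypothetical discriminator that classifies features as source/target."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(feat_dim, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
        )

    def forward(self, z):
        return self.net(z)

encoder, disc = Encoder(), DomainDiscriminator()
opt_e = torch.optim.Adam(encoder.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

source = torch.randn(4, 3, 64, 64)  # placeholder labeled source scenes
target = torch.randn(4, 3, 64, 64)  # placeholder unlabeled target scenes

for step in range(5):
    zs, zt = encoder(source), encoder(target)

    # Discriminator step: separate source features (label 1) from target (label 0).
    d_loss = bce(disc(zs.detach()), torch.ones(4, 1)) + \
             bce(disc(zt.detach()), torch.zeros(4, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Encoder step: fool the discriminator so target features align with source,
    # i.e., the adversarial half of the min-max feature alignment.
    g_loss = bce(disc(zt), torch.ones(4, 1))
    opt_e.zero_grad(); g_loss.backward(); opt_e.step()
```

In a full pipeline, the encoder would additionally be trained with a segmentation loss on labeled source images, so the aligned latent features remain useful for the downstream decoder.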
Published in: IEEE Transactions on Industrial Informatics (Volume: 20, Issue: 2, February 2024)