Abstract:
Various unfavourable conditions such as fog, snow, and rain may degrade image quality and pose tremendous threats to the safety of autonomous driving. Numerous image-denoising solutions have been proposed to improve visibility under adverse weather conditions. However, previous studies have been limited in robustness, generalization ability, and interpretability because they were designed for specific scenarios. To address this problem, we introduce an interpretable image-denoising framework via Dual Disentangled Representation Learning (DDRL) that enhances robustness and interpretability by decomposing an image into content factors (e.g., objects) and context factors (e.g., weather conditions). DDRL consists of two Disentangled Representation Learning (DRL) blocks. In each DRL block, an input image is decomposed into a latent content distribution and a weather distribution by minimizing their mutual information. To mitigate the impact of weather styles, we incorporate a content discriminator and adversarial objectives to learn the decomposable interaction between the two DRL blocks. Furthermore, we standardize the weather feature space, making our method applicable to various downstream tasks such as diverse degraded-image generation. We evaluated DDRL under three weather conditions: fog, rain, and snow. The experimental results demonstrate that DDRL achieves competitive performance with good generalization capability and high robustness under numerous weather conditions. Furthermore, quantitative analysis shows that DDRL can capture interpretable variations of weather factors and decompose them for safe and reliable all-weather autonomous driving.
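The core mechanism the abstract describes — encoding each image into separate content and weather latents and penalizing the statistical dependence between them — can be sketched roughly as follows. This is an illustrative toy, not the paper's implementation: the linear encoders stand in for convolutional networks, and the squared cross-correlation penalty is a cheap proxy for the mutual-information objective; all names and dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Linear encoder stand-in for the paper's convolutional encoders."""
    return np.tanh(x @ W)

# Hypothetical dimensions: flattened image patches -> two latent codes.
d_in, d_content, d_context = 64, 8, 4
W_content = rng.normal(scale=0.1, size=(d_in, d_content))
W_context = rng.normal(scale=0.1, size=(d_in, d_context))

batch = rng.normal(size=(32, d_in))       # stand-in for a batch of images
z_content = encode(batch, W_content)      # "object" (content) factors
z_context = encode(batch, W_context)      # "weather" (context) factors

def dependence_proxy(a, b):
    """Mean squared cross-correlation between two latent codes.

    A crude stand-in for the mutual-information term: it is zero only
    when the codes are linearly uncorrelated, so driving it down pushes
    the content and weather representations apart.
    """
    a = (a - a.mean(0)) / (a.std(0) + 1e-8)
    b = (b - b.mean(0)) / (b.std(0) + 1e-8)
    corr = a.T @ b / len(a)               # (d_content, d_context) matrix
    return float((corr ** 2).mean())

# In training this term would be minimized alongside reconstruction and
# adversarial (content-discriminator) losses.
loss_mi = dependence_proxy(z_content, z_context)
```

In the actual framework this penalty would be one term in a joint objective with reconstruction and adversarial losses, and a true mutual-information estimator would replace the correlation proxy.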
Published in: IEEE Transactions on Intelligent Vehicles ( Volume: 9, Issue: 1, January 2024)