
Computers & Graphics

Volume 97, June 2021, Pages 117-125

Special Section on CAD & Graphics 2021
Single Image Deraining via detail-guided Efficient Channel Attention Network

https://doi.org/10.1016/j.cag.2021.04.014

Highlights

  • A new detail-guided ECA is proposed, which can extract both global information and detailed information adaptively.

  • Based on detail-guided ECA, the dense cascaded rain streaks removal sub-network and the background details recovery sub-network are proposed.

  • The proposed method outperforms the state-of-the-art methods on four synthetic datasets and two real-world rainy image sets.

Abstract

Single image deraining is an important problem in many computer vision tasks since rain streaks can severely hamper and degrade the visibility of images. Existing methods either focus on extracting rain streaks while ignoring background recovery, or rely on extremely complex network structures with a large number of parameters. Although some methods address background restoration, they generally ignore effective contextual information and produce unsatisfactory results. In this paper, we propose a novel network, single image Deraining via detail-guided Efficient Channel Attention Network (DECAN), to remove rain streaks from rainy images. Specifically, we introduce two sub-networks with a comprehensive loss function that synergize to remove rain streaks and recover the background of the derained image. To remove rain streaks, we construct a rain streaks removal network with a detail-guided efficient-channel-attention module that identifies effective low-level features. For background recovery, we present a specialized background repair network built from well-designed blocks, named the background details recovery network, which repairs the background with effective contextual information to eliminate image degradations. Experiments on four synthetic datasets and several real-world rainy image sets show that the proposed method considerably improves on the state of the art both visually and numerically.

Introduction

Many outdoor computer vision tasks, such as video surveillance [1], visual navigation [2], object tracking [3], and self-driving cars [4], fail to work when rain streaks severely degrade the visibility of images. Removing rain streaks from rainy images is therefore a meaningful problem.

Single image deraining methods focus on removing rain streaks from a rainy image, either drawing support from priors, such as Gaussian mixture models [6], [7], low-rank representation [8], [9], [10], and sparse coding [11], [12], or feeding a large dataset into well-designed deep networks [13], [14], [15], [16]. Recently, great improvements in image deraining have been obtained, yielding ideal results on light-rain images; nevertheless, existing methods struggle to both remove rain streaks completely and preserve background details in images captured in extremely bad weather. The main reason is that rain streaks in heavy-rain scenes are more complicated and more difficult to decouple from the background. Although some methods, such as [5], [17], try to remove rain streaks under heavy rain, they still face a problem: in removing the rain streaks of heavy rain, the background details are also damaged to a considerable extent. As shown in Fig. 1(b), the derained results of [5] fail to accurately identify and extract rain streaks, and there is a certain loss of background information.

To maintain the details of the background while removing the rain streaks of heavy rain, our method starts from the overall situation and restores the background in the process of removing rain streaks. A balance needs to be struck here: the global information must be strong enough to remove rain streaks well and provide guidance for restoring the background; at the same time, the detailed information must be rich enough to effectively restore the scene. Global information refers to the overall context of the image, e.g., high-level semantic information, while detailed information refers to low-level and local information, e.g., textures and edges. These requirements conflict: when the detailed information is too rich, the rain streaks may not be effectively removed, and when it is insufficient, the rain streaks may be reconstructed by mistake.

Inspired by Efficient Channel Attention (ECA) [18], we observe that ECA extracts global information well at fairly low computational cost, which makes it easy to embed into a deraining network. However, ordinary ECA does not accurately extract detailed information, which hinders the recovery of scene details. Based on this observation, we propose detail-guided ECA, an attention mechanism that learns detailed and global information simultaneously without introducing excessive computational burden, so that it can still be embedded in the rain removal network.
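For reference, standard ECA [18] reduces each channel to a scalar via global average pooling and then gates the channels with a 1-D convolution across neighboring channel descriptors. The following is a minimal NumPy sketch of that standard mechanism only; the detail-guided variant proposed in this paper is not reproduced, and the fixed averaging kernel stands in for the learned 1-D convolution weights:

```python
import numpy as np

def eca_attention(x, k=3):
    """Sketch of standard ECA (Wang et al. [18]): global average pooling,
    a 1-D convolution of kernel size k across channels, then a sigmoid gate.
    x has shape (C, H, W). The learned conv weights are replaced here by a
    fixed averaging kernel for illustration."""
    c = x.shape[0]
    # Global average pooling: one scalar descriptor per channel.
    y = x.mean(axis=(1, 2))                                   # shape (C,)
    # 1-D convolution over neighboring channels (edge padding keeps length C).
    pad = k // 2
    yp = np.pad(y, pad, mode="edge")
    w = np.ones(k) / k                                        # illustrative kernel
    z = np.array([np.dot(yp[i:i + k], w) for i in range(c)])  # shape (C,)
    # Sigmoid gate re-weights each channel of the input feature map.
    a = 1.0 / (1.0 + np.exp(-z))
    return x * a[:, None, None]
```

Note that the only learned parameters in ECA are the k weights of the 1-D convolution, which is why the module is cheap enough to embed densely throughout a network.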

Like most methods [15], [19], we also use a two-step approach to remove rain streaks from the image. The first step is the detection and removal of rain streaks, and the second step is the restoration of image details. Different from previous methods, we build both steps on detail-guided ECA. First, we propose a rain streaks removal network that combines several densely cascaded detail-guided rain extraction blocks (DREB), which iteratively extract detailed and global information, reduce information loss, compensate for the interference caused by the confrontation between the two kinds of information, and extract accurate rain streaks. In the second step, we propose a background details recovery network that incorporates detail-guided context aggregation blocks (DCAB) under a global receptive field for detail recovery. Detail recovery is thus always constrained by global information, avoiding incorrect reconstruction of rain streaks. As shown in Fig. 1, the derained results of the proposed method (Fig. 1(c)) achieve better visual quality than those of [5] (Fig. 1(b)).

Overall, this paper makes the following contributions:

  • A new detail-guided ECA is proposed, which can extract both global information and detailed information adaptively. The proposed module is extremely lightweight and can be densely embedded in the network.

  • Based on detail-guided ECA, a dense cascaded rain streaks removal network is proposed, which can effectively alleviate the mutual interference between detailed information and global information and extract accurate rain streaks. Furthermore, a background details recovery network under a global receptive field is also proposed, which constrains the detailed reconstruction of the background with global information to avoid erroneous reconstruction of rain streaks.

  • The proposed method outperforms the state-of-the-art methods on four synthetic datasets and two real-world rainy image sets.

The remainder of this paper is organized as follows. Section 2 discusses the related works. Section 3 introduces the proposed network. Then, experimental results are presented in Section 4. Finally, Section 5 concludes the paper.


Related work

To remove rain streaks from rainy images, many single image deraining methods have been proposed to separate the rain layer from rainy images. These methods can be categorized into two groups: traditional methods and deep learning-based methods. In the following parts, we review these methods briefly.

Proposed method

Our goal is to model the rain distribution and estimate the rain layer R. By subtracting R from the rainy image I, we can obtain the background B. Owing to the ill-posed inverse nature of deraining, it is impractical to remove rain streaks with an ordinary convolutional neural network. Thus we propose a novel single image Deraining via detail-guided Efficient Channel Attention Network (DECAN) to remove rain streaks and recover background details (as shown in Fig. 2).
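The additive decomposition underlying this formulation, I = B + R, can be sketched as follows; `estimate_rain` is a hypothetical stand-in for the trained rain-layer estimator (the actual sub-network is a deep CNN, not reproduced here):

```python
import numpy as np

def derain(I, estimate_rain):
    """Additive rain model: a rainy image I is assumed to decompose as
    I = B + R, so the background estimate is obtained by subtracting the
    predicted rain layer. `estimate_rain` is a placeholder for the
    learned rain streaks removal sub-network."""
    R = estimate_rain(I)          # predicted rain layer, same shape as I
    B = I - R                     # subtract rain streaks from the rainy image
    return np.clip(B, 0.0, 1.0)  # keep intensities in the valid range
```

The clipping step simply guards against the estimator over-subtracting; in the actual network the background is further refined by the details recovery stage.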

Experiments

We conduct experiments on four synthetic datasets and several real-world rainy image datasets and compare the proposed method with the state-of-the-art methods: DSC [21], LP [6], JCAS [11], DDN [13], JORDER [26], DID-MDN [27], DualCNN [29], ID_CGAN [28], RESCAN [32], PReNet [39], MSPFN [5] and DRD-Net [19]. Deraining performance on synthetic datasets is evaluated in terms of peak signal-to-noise ratio (PSNR) [41] and structural similarity (SSIM) [42]. Since the ground truth of real-world rainy image is
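PSNR, one of the two reported metrics, is computed from the mean squared error between the ground truth and the derained result. A minimal sketch (SSIM is more involved and is typically taken from a library such as scikit-image):

```python
import numpy as np

def psnr(ref, test, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means the derained image
    is closer to the ground truth. max_val is the dynamic range of the
    images (1.0 for float images, 255 for 8-bit)."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```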

Conclusions

In this paper, we propose a novel end-to-end network with two sub-networks synergizing for deraining single images. To obtain the required effective features more accurately, we propose a detail-guided efficient channel attention module named D-ECA. Furthermore, the RSRN sub-network in DECAN is designed to remove rain streaks from rainy images, while the other sub-network, the background details recovery network (BDRN), recovers details in the derained images. Experiments on synthetic datasets and

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

We would like to thank all the anonymous reviewers for their helpful comments. This work is funded by the National Natural Science Foundation of China (61775139, 62072126, 61772164, 61872242, 61972157), Shanghai Municipal Commission of Economy and Information (XX-RGZN-01-19-6348) and Opening Topic of Key Laboratory of Embedded Systems and Service Computing of Ministry of Education (ESSCKF 2019-03).

References (42)

  • X. Guo et al. Robust low-rank subspace segmentation with finite mixture noise. Pattern Recognit (2019)

  • Y. Chang et al. Transformed low-rank model for line pattern noise removal. Proceedings of the IEEE International Conference on Computer Vision (2017)

  • S. Gu et al. Joint convolutional analysis and synthesis sparse representation for single image layer separation. Proceedings of the IEEE International Conference on Computer Vision (2017)

  • Y. Wang et al. Rain removal by image quasi-sparsity priors (2018)

  • X. Fu et al. Removing rain from single images via a deep detail network. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017)

  • T. Wang et al. Spatial attentive single-image deraining with a high quality real rain dataset. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2019)

  • R. Qian et al. Attentive generative adversarial network for raindrop removal from a single image. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2018)

  • R. Yasarla et al. Syn2real transfer learning for image deraining using Gaussian processes. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2020)

  • R. Li et al. Heavy rain image restoration: integrating physics model and conditional adversarial learning. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2019)

  • Q. Wang et al. ECA-Net: efficient channel attention for deep convolutional neural networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2020)

  • S. Deng et al. Detail-recovery image deraining via context aggregation networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2020)
1 Equal Contribution.