Single image rain removal via multi-module deep grid network

https://doi.org/10.1016/j.cviu.2020.103106

Highlights

  • In this paper, we design a GridDerainNet composed of multiple modules. Useful information for deraining can be incorporated through the interactions of these modules.

  • Rainy image features are extracted by incorporating multiple residual dense blocks at different scales, which makes effective use of multi-scale information to remove rain streaks.

  • Qualitative and quantitative experiments show that our proposed method outperforms several popular state-of-the-art methods on both synthetic and real images.

Abstract

Rain streaks severely degrade the performance of image/video processing tasks; therefore, effective methods for removing rain streaks are required for a wide range of practical applications. In this paper, we introduce an end-to-end deep network, called GridDerainNet, to remove rain streaks from a single image under different conditions. The architecture of GridDerainNet consists of three modules: pre-processing, a multi-scale attentive module and post-processing. The pre-processing module effectively generates several variants of the given rainy image, in order to extract more key features from the input. The multi-scale attentive module implements a novel attention mechanism, which allows more flexible information exchange and aggregation, making full use of the diversity of a given image. Finally, the post-processing module further reduces residual artifacts left by the previous two steps. Quantitative and qualitative experimental results demonstrate that the proposed algorithm outperforms several state-of-the-art methods on both synthetic and real-world images.

Introduction

The rain removal problem has been extensively studied in the fields of computer vision and image processing. Under rainy conditions, rain streaks of various directions and shapes make the background scene misty, which seriously affects many visual tasks, including visual tracking (Zhang et al., 2018a), object detection (Li et al., 2018c) and person re-identification (Zhao et al., 2019). Therefore, removing undesired rain streaks has become an essential task for obtaining high-quality images for both computer vision and image understanding.

A rainy image I can be decomposed into two separate components: IB corresponding to the clean background image and IR corresponding to the rain streaks. Mathematically, this can be expressed by a linear model: I = IB + IR. Given an image impaired by rain streaks, our goal is to remove the rain streaks and restore a clean background, as shown in Fig. 1. Similar to image denoising and dehazing, image rain removal can be viewed as separating two components from a rainy image. This is an ill-posed problem, since the number of unknowns to be recovered is twice that of the input.
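The linear model above can be illustrated with a small NumPy sketch (the helper names are ours, for illustration only): composing a rainy image is an addition, and recovering the background given an estimated rain layer is the corresponding subtraction.

```python
import numpy as np

def synthesize_rainy(background: np.ndarray, rain: np.ndarray) -> np.ndarray:
    """Compose a rainy image under the linear model I = IB + IR."""
    return np.clip(background + rain, 0.0, 1.0)

def residual_background(rainy: np.ndarray, estimated_rain: np.ndarray) -> np.ndarray:
    """Recover the background by subtracting an estimated rain layer."""
    return np.clip(rainy - estimated_rain, 0.0, 1.0)

# Toy example: a flat gray background plus one bright horizontal "streak".
bg = np.full((4, 4), 0.5)
rain = np.zeros((4, 4))
rain[1, :] = 0.3
I = synthesize_rainy(bg, rain)
recovered = residual_background(I, rain)
```

The ill-posedness is visible here: given only `I`, infinitely many (`bg`, `rain`) pairs sum to the same image, which is why learned priors are needed.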

In recent years, various rain removal methods (Zhang and Patel, 2018, Fu et al., 2017b, Fu et al., 2019, Ren et al., 2019b, Chen et al., 2018, Liu et al., 2018b) have been proposed for both video and single images. Compared with the video rain removal (VRR) problem, single image rain removal (SIRR) is evidently more challenging, due to the lack of beneficial temporal information in the latter case (Liu et al., 2018a, Fu et al., 2017a). Besides, a main limitation of existing SIRR methods is the semantic gap between the synthetic datasets used for training and the real-world images used for testing. Although state-of-the-art approaches have addressed diverse kinds of rain, such as Fu et al. (2017b) and Zhang and Patel (2018), their models still lack the ability to remove a large range of real-world rain streaks. Therefore, a more adaptive and generalized derainer is needed.

Recently, benefiting from large-scale training datasets, deep-learning-based methods have been proposed for image restoration problems (Li et al., 2019, Ren et al., 2019a, Pan et al., 2018). Nonetheless, several challenges remain for SIRR. First, the scarcity of real-world rainy images is still a bottleneck for applying deep learning, so large synthetic datasets are generally utilized as an alternative. To create synthetic datasets, most works adopt the nonlinear screen blend mode from Photoshop to add fake rain streaks to clean images (Luo et al., 2015). However, synthetic rainy images still cannot cover a sufficiently wide range of rain streaks, which have diverse directions and shapes and blur the scene in different ways, as shown in Fig. 1. Second, many existing methods remove rain streaks based on image patches, neglecting the contextual information in large regions. We find that multi-scale image processing (Ren, 2008) can tackle these issues by keeping the simplicity of a low-dimensional model while enjoying the non-locality of larger regions in the image. However, this processing is commonly done in a successive manner, so its performance is often limited by the effects of information streams.
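The screen blend mode mentioned above lightens the base image wherever the streak layer is bright, following the standard formula 1 − (1 − base)(1 − streaks). A minimal NumPy sketch (function name is illustrative):

```python
import numpy as np

def screen_blend(base: np.ndarray, streaks: np.ndarray) -> np.ndarray:
    """Photoshop-style screen blend on [0, 1] images:
    result = 1 - (1 - base) * (1 - streaks).
    Black streak pixels leave the base unchanged; white pixels saturate to 1."""
    return 1.0 - (1.0 - base) * (1.0 - streaks)

base = np.full((2, 2), 0.4)
streaks = np.array([[0.0, 0.5],
                    [0.0, 1.0]])
out = screen_blend(base, streaks)
```

Unlike the additive model, screen blending never overflows the valid range, which is one reason it is popular for synthesizing rain.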

In order to address the aforementioned challenges, we propose an end-to-end deep network composed of multiple modules, named GridDerainNet, which can remove rain streaks effectively. The main contributions of this work are as follows:

(1) Different from previous deep-learning-based SIRR methods, we design a GridDerainNet composed of multiple modules, which work together to remove rain streaks. Useful information for deraining can be incorporated through the interactions of these modules.

(2) Based on the observation that residual learning can improve image feature representation, we extract image features by incorporating multiple residual dense blocks at different scales. Meanwhile, we implement a channel-wise attention mechanism to fuse feature maps, which improves the capability of capturing information. More flexible information can be exchanged and aggregated to guide deraining in the network. By making full use of the diversity of given images, our network is suitable for recovering rainy images with various rain streaks.

(3) Qualitative and quantitative experiments show that our proposed method outperforms several popular state-of-the-art methods on both synthetic and real images.
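The channel-wise attention in contribution (2) is not specified in detail here; as a generic sketch, a squeeze-and-excitation-style rescaling in NumPy looks like the following (weight shapes and the reduction ratio are our assumptions, not the paper's design):

```python
import numpy as np

def channel_attention(feats, w1, w2):
    """Squeeze-and-excitation-style channel attention (illustrative sketch).
    feats: (C, H, W) feature maps; w1: (C//r, C) and w2: (C, C//r) are
    fully-connected weights with reduction ratio r."""
    squeeze = feats.mean(axis=(1, 2))                # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)           # FC + ReLU -> (C//r,)
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # FC + sigmoid -> (C,)
    return feats * weights[:, None, None]            # rescale each channel

rng = np.random.default_rng(0)
feats = rng.random((8, 4, 4))            # 8 feature channels on a 4x4 grid
w1 = 0.1 * rng.standard_normal((2, 8))   # reduction ratio r = 4
w2 = 0.1 * rng.standard_normal((8, 2))
refined = channel_attention(feats, w1, w2)
```

The sigmoid weights lie in (0, 1), so each channel is attenuated according to a learned global statistic; this is the standard mechanism by which channel attention emphasizes informative feature maps during fusion.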

The remainder of this paper is organized as follows. Section 2 gives a brief overview of related work. In Section 3, we present the details of our proposed model. Experimental results are presented in Section 4. Finally, Section 5 concludes the paper.

Related work

In the last few decades, many methods have been proposed for removing rain streaks. Depending on the input format, these existing methods can be categorized into two groups: video-based methods and single-image-based methods.

GridDerainNet

In this work, we propose an end-to-end deep network, named GridDerainNet, for removing rain streaks from a single image.

As mentioned above, our model is capable of removing rain streaks using flexible contextual information at different scales. The main motivation of our model is the difference between layers that connect feature maps horizontally and those that connect them vertically. These connections, called computation streams, have a large receptive field and can acquire more contextual information.

The streams are first

Experiments and results

In this section, we present the experimental details and evaluation results on both synthetic and real-world datasets. Deraining performance on the synthetic datasets is evaluated in terms of PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index) (Wang et al., 2004). Performance of comparison methods on real-world images is evaluated visually since no ground truth exists. Our proposed method is compared with the following recent state-of-the-art methods: DDN (Fu et al., 2017b)
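PSNR, one of the two evaluation metrics above, can be computed directly from the mean squared error between the derained result and the ground truth; a minimal sketch follows (SSIM is considerably more involved and is omitted here).

```python
import numpy as np

def psnr(reference: np.ndarray, derained: np.ndarray, peak: float = 1.0) -> float:
    """Peak Signal-to-Noise Ratio in dB for images with dynamic range [0, peak]."""
    mse = np.mean((reference.astype(np.float64) - derained.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

gt = np.zeros((8, 8))
pred = np.full((8, 8), 0.1)
score = psnr(gt, pred)  # MSE = 0.01 -> 20 dB
```

Higher PSNR indicates a derained image closer to the rain-free ground truth, which is why it can only be reported on synthetic pairs.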

Conclusion

In this work, we proposed a novel GridDerainNet for single image rain removal, which can effectively remove rain streaks under different conditions. We introduced a multi-module deep network combined with an attention mechanism, which allows more image contextual information to interact. Our proposed model makes full use of image features at different scales, preserving the diversity of the input image. Moreover, a visual attention mechanism was employed to effectively fuse feature information

CRediT authorship contribution statement

Nanfeng Jiang: Carried out the experiment, Writing - original draft, Supervised the project, Conceived the study, Overall direction and planning. Weiling Chen: Supervised the project. Liqun Lin: Supervised the project. Tiesong Zhao: Writing - original draft, Supervised the project, Conceived the study, Overall direction and planning.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgment

This research was supported by the National Natural Science Foundation of China (Grants 61671152 and 61901119).

References (53)

  • Eigen, D., et al. Restoring an image taken through a window covered with dirt or rain
  • Fan, Z., et al. Residual-guide network for single image deraining
  • Fu, X., et al. Clearing the skies: A deep network architecture for single-image rain removal. IEEE Trans. Image Process. (2017)
  • Fu, X., Huang, J., Zeng, D., Huang, Y., Ding, X., Paisley, J., 2017b. Removing rain from single images via a deep...
  • Fu, X., et al. Lightweight pyramid networks for image deraining. IEEE Trans. Neural Netw. Learn. Syst. (2019)
  • Garg, K., et al. Detection and removal of rain from videos
  • Garg, K., et al. Vision and rain. Int. J. Comput. Vis. (2007)
  • Johnson, J., et al. Perceptual losses for real-time style transfer and super-resolution
  • Kang, L.-W., et al. Automatic single-image-based rain streaks removal via image decomposition. IEEE Trans. Image Process. (2011)
  • Kim, J.-H., et al. Single-image deraining using an adaptive nonlocal means filter
  • Kingma, D.P., et al. Adam: A method for stochastic optimization (2014)
  • Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang,...
  • Li, G., et al. Non-locally enhanced encoder-decoder network for single image de-raining
  • Li, R., et al. Single image dehazing via conditional generative adversarial network
  • Li, Y., et al. Rain streak removal using layer priors
  • Li, X., et al. Multi-task structure-aware context modeling for robust keypoint-based object tracking. IEEE Trans. Pattern Anal. Mach. Intell. (2018)