
Rain Rendering for Evaluating and Improving Robustness to Bad Weather


Abstract

Rain fills the atmosphere with water particles, which breaks the common assumption that light travels unaltered from the scene to the camera. While it is well known that rain affects computer vision algorithms, quantifying its impact is difficult. In this context, we present a rain rendering pipeline that enables the systematic evaluation of common computer vision algorithms under controlled amounts of rain. We present three different ways to add synthetic rain to existing image datasets: entirely physics-based, entirely data-driven, and a combination of both. The physics-based rain augmentation combines a physical particle simulator with accurate rain photometric modeling. We validate our rendering methods with a user study, demonstrating that our rain is judged as much as 73% more realistic than the state of the art. Using our rain-augmented KITTI, Cityscapes, and nuScenes datasets, we conduct a thorough evaluation of object detection, semantic segmentation, and depth estimation algorithms and show that their performance decreases in degraded weather, on the order of 15% for object detection and 60% for semantic segmentation, with a 6-fold increase in depth estimation error. Fine-tuning on our augmented synthetic data yields improvements of 21% on object detection, 37% on semantic segmentation, and 8% on depth estimation.


Notes

  1. Assuming a stationary camera with KITTI calibration (Geiger et al. 2013), we computed that only 1.24% of the drops project onto one or more pixels in 50 mm/h rain, and 0.7% at 5 mm/h. This follows intuition: the heavier the rain, the higher the probability of having large drops.

  2. The distribution and dynamics of drops vary across the Earth with gravity and atmospheric conditions. We use here the widely adopted physical models derived from measurements in Ottawa, Canada (Marshall and Palmer 1948; Atlas et al. 1973).

  3. We computed that, for KITTI, 98.7% of the drops are within 4 m of the camera center at a 50 mm/h rainfall rate. Therefore, computing location-dependent environment maps would not be significantly more accurate, while incurring a very high processing cost.

  4. The circle of confusion C of an object at distance d is defined as \(C = \frac{(d - f_{\text {p}}) f^2}{d(f_{\text {p}} - f) f_{\text {N}}}\), with \(f_\text {p}\) the focus plane distance, f the focal length, and \(f_\text {N}\) the lens f-number. f and \(f_\text {N}\) come from the intrinsic calibration, and \(f_\text {p}\) is set to 6 m (a short sketch of this formula is given after these notes).

  5. While the streak database of Garg and Nayar (2006) does not provide an alpha channel, it is easily computed since the drops were rendered on a black background under white ambient lighting.

  6. Note that only the front camera and the annotated key frames are used, for consistency and ground-truth accuracy.

  7. Weather database at https://openweathermap.org.
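
A minimal Python sketch of the circle-of-confusion formula from note 4; the function and argument names are ours, and the 6 m default focus plane follows the note.

```python
def circle_of_confusion(d, f, f_number, focus_plane=6.0):
    """Circle of confusion of an object at distance d (m), for a lens of focal
    length f (m) and the given f-number, focused at focus_plane (m) (note 4)."""
    return ((d - focus_plane) * f ** 2) / (d * (focus_plane - f) * f_number)
```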

References

  • Atlas, D., Srivastava, R., & Sekhon, R. S. (1973). Doppler radar characteristics of precipitation at vertical incidence. Reviews of Geophysics, 11(1), 1–35.


  • Bansal, A., Ma, S., Ramanan, D., & Sheikh, Y. (2018). Recycle-GAN: Unsupervised video retargeting. In European conference on computer vision.

  • Barnum, P. C., Narasimhan, S., & Kanade, T. (2010). Analysis of rain and snow in frequency space. International Journal of Computer Vision, 86(2–3), 256.


  • Barron, J. T., & Poole, B. (2016). The fast bilateral solver. In European conference on computer vision.

  • Bengio, Y., Louradour, J., Collobert, R., & Weston, J. (2009). Curriculum learning. In International conference on machine learning.

  • Bijelic, M., Mannan, F., Gruber, T., Ritter, W., Dietmayer, K., & Heide, F. (2020). Seeing through fog without seeing fog: Deep sensor fusion in the absence of labeled training data. In IEEE conference on computer vision and pattern recognition.

  • Brock, A., Donahue, J., & Simonyan, K. (2018). Large scale GAN training for high fidelity natural image synthesis. ArXiv preprint arXiv:1809.11096.

  • Caesar, H., Bankiti, V., Lang, A. H., Vora, S., Liong, V. E., Xu, Q., Krishnan, A., Pan, Y., Baldan, G., & Beijbom, O. (2020). nuScenes: A multimodal dataset for autonomous driving. In IEEE conference on computer vision and pattern recognition.

  • Cameron, C. (2005). Hallucinating environment maps from single images. Technical report.

  • Chen, Y. L., & Hsu, C. T. (2013). A generalized low-rank appearance model for spatio-temporally correlated rain streaks. In IEEE international conference on computer vision.

  • Cordts, M., Omran, M., Ramos, S., Rehfeld, T., Enzweiler, M., Benenson, R., Franke, U., Roth, S., & Schiele, B. (2016). The Cityscapes dataset for semantic urban scene understanding. In IEEE conference on computer vision and pattern recognition.

  • Creus, C., & Patow, G. A. (2013). R4: Realistic rain rendering in realtime. Computers and Graphics, 37(1–2), 33–40.


  • Dai, J., Li, Y., He, K., & Sun, J. (2016). R-FCN: Object detection via region-based fully convolutional networks. In Advances in neural information processing systems.

  • de Charette, R., Tamburo, R., Barnum, P. C., Rowe, A., Kanade, T., & Narasimhan, S. G. (2012). Fast reactive control for illumination through rain and snow. In IEEE international conference on computational photography.

  • Dong, C., Loy, C. C., He, K., & Tang, X. (2015). Image super-resolution using deep convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(2), 295–307.


  • Eigen, D., Krishnan, D., & Fergus, R. (2013). Restoring an image taken through a window covered with dirt or rain. In IEEE international conference on computer vision.

  • Garg, K., & Nayar, S. K. (2004). Detection and removal of rain from videos. In IEEE conference on computer vision and pattern recognition.

  • Garg, K., & Nayar, S. K. (2005). When does a camera see rain? In IEEE international conference on computer vision.

  • Garg, K., & Nayar, S. K. (2006). Photorealistic rendering of rain streaks. ACM Transactions on Graphics (SIGGRAPH), 25(3), 996–1002.


  • Garg, K., & Nayar, S. K. (2007). Vision and rain. International Journal of Computer Vision, 75(1), 3–27.


  • Geiger, A., Lenz, P., Stiller, C., & Urtasun, R. (2013). Vision meets robotics: The KITTI dataset. International Journal of Robotics Research, 32(11), 1231–1237.


  • Geiger, A., Lenz, P., & Urtasun, R. (2012). Are we ready for autonomous driving? The KITTI vision benchmark suite. In IEEE conference on computer vision and pattern recognition.

  • Godard, C., Mac Aodha, O., & Brostow, G. J. (2017). Unsupervised monocular depth estimation with left-right consistency. In IEEE conference on computer vision and pattern recognition.

  • Godard, C., Mac Aodha, O., Firman, M., & Brostow, G. J. (2019). Digging into self-supervised monocular depth estimation. In IEEE international conference on computer vision.

  • Gruber, T., Bijelic, M., Heide, F., & Ritter, W., Dietmayer, K. (2019). Pixel-accurate depth evaluation in realistic driving scenarios. In International conference on 3D vision.

  • Halder, S. S., Lalonde, J. F., & de Charette, R. (2019). Physics-based rendering for improving robustness to rain. In IEEE international conference on computer vision.

  • Halimeh, J. C., & Roser, M. (2009). Raindrop detection on car windshields using geometric-photometric environment construction and intensity-based correlation. In IEEE intelligent vehicles symposium.

  • Hao, Z., You, S., Li, Y., Li, K., & Lu, F. (2019). Learning from synthetic photorealistic raindrop for single image raindrop removal. In IEEE international conference on computer vision workshops.

  • Hold-Geoffroy, Y., Athawale, A., & Lalonde, J. F. (2019). Deep sky modeling for single image outdoor lighting estimation. In IEEE conference on computer vision and pattern recognition.

  • Hold-Geoffroy, Y., Sunkavalli, K., Hadap, S., Gambaretto, E., & Lalonde, J. F. (2017). Deep outdoor illumination estimation. In IEEE conference on computer vision and pattern recognition.

  • Horn, B. K. P. (1986). Robot vision. Cambridge, MA: MIT Press.


  • Huang, X., Liu, M. Y., Belongie, S., & Kautz, J. (2018). Multimodal unsupervised image-to-image translation. In European conference on computer vision.

  • Isola, P., Zhu, J. Y., Zhou, T., & Efros, A. A. (2017). Image-to-image translation with conditional adversarial networks. In IEEE conference on computer vision and pattern recognition.

  • Jacobs, N., Roman, N., & Pless, R. (2007). Consistent temporal variations in many outdoor scenes. In IEEE conference on computer vision and pattern recognition.

  • Jaritz, M., de Charette, R., Wirbel, E., Perrotton, X., & Nashashibi, F. (2018). Sparse and dense data with CNNs: Depth completion and semantic segmentation. In International conference on 3D vision.

  • Johnson, J., Alahi, A., & Fei-Fei, L. (2016). Perceptual losses for real-time style transfer and super-resolution. In European conference on computer vision.

  • Johnson-Roberson, M., Barto, C., Mehta, R., Sridhar, S. N., Rosaen, K., & Vasudevan, R. (2016). Driving in the matrix: Can virtual worlds replace human-generated annotations for real world tasks? In International conference on robotics and automation.

  • Khan, S., Phan, B., Salay, R., & Czarnecki, K. (2019). ProcSy: Procedural synthetic dataset generation towards influence factor studies of semantic segmentation networks. In IEEE conference on computer vision and pattern recognition workshops.

  • Laffont, P. Y., Ren, Z., Tao, X., Qian, C., & Hays, J. (2014). Transient attributes for high-level understanding and editing of outdoor scenes. ACM Transactions on Graphics (SIGGRAPH), 33(4), 1–11.


  • Lalonde, J. F., Efros, A. A., & Narasimhan, S. G. (2009). Webcam clip art: Appearance and illuminant transfer from time-lapse sequences. ACM Transactions on Graphics (SIGGRAPH), 28(5), 1–10.


  • Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., & Wang, Z., et al. (2017). Photo-realistic single image super-resolution using a generative adversarial network. In IEEE conference on computer vision and pattern recognition.

  • Lee, J. H., Han, M. K., Ko, D. W., & Suh, I. H. (2020). From big to small: Multi-scale local planar guidance for monocular depth estimation. ArXiv preprint arXiv:1907.10326v5.

  • Li, P., Liang, X., Jia, D., & Xing, E. P. (2018). Semantic-aware grad-GAN for virtual-to-real urban scene adaption. ArXiv preprint arXiv:1801.01726.

  • Li, R., Tan, R. T., & Cheong, L. F. (2018). Robust optical flow in rainy scenes. In European conference on computer vision.

  • Li, Y., Tan, R. T., Guo, X., Lu, J., & Brown, M. S. (2016). Rain streak removal using layer priors. In IEEE conference on computer vision and pattern recognition.

  • Liu, M. Y., Breuel, T., & Kautz, J. (2017). Unsupervised image-to-image translation networks. In Advances in neural information processing systems.

  • Liu, M. Y., Huang, X., Mallya, A., Karras, T., Aila, T., Lehtinen, J., & Kautz, J. (2019). Few-shot unsupervised image-to-image translation. In IEEE international conference on computer vision.

  • Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C. Y., & Berg, A. C. (2016). SSD: Single shot multibox detector. In European conference on computer vision.

  • Liu, X., Suganuma, M., Sun, Z., & Okatani, T. (2019). Dual residual networks leveraging the potential of paired operations for image restoration. In IEEE conference on computer vision and pattern recognition.

  • Luo, Y., Xu, Y., & Ji, H. (2015). Removing rain from a single image via discriminative sparse coding. In IEEE international conference on computer vision.

  • Maddern, W., Pascoe, G., Linegar, C., & Newman, P. (2017). 1 Year, 1000 km: The Oxford RobotCar Dataset. International Journal of Robotics Research, 36(1), 3–15.


  • Marshall, J. S., & Palmer, W. M. K. (1948). The distribution of raindrops with size. Journal of Meteorology, 5(4), 165–166.


  • Narasimhan, S. G., & Nayar, S. K. (2002). Vision and the atmosphere. International Journal of Computer Vision, 48(3), 233–254.


  • Narasimhan, S. G., Wang, C., & Nayar, S. K. (2002). All the images of an outdoor scene. In European conference on computer vision.

  • Pizzati, F., Cerri, P., & de Charette, R. (2020). Model-based occlusions disentanglement for image-to-image translation. In European conference on computer vision.

  • Porav, H., Bruls, T., & Newman, P. (2019). I can see clearly now: Image restoration via de-raining. In IEEE international conference on robotics and automation.

  • Potmesil, M., & Chakravarty, I. (1981). A lens and aperture camera model for synthetic image generation. ACM Transactions on Graphics (SIGGRAPH), 15(3), 297–305.


  • Redmon, J., & Farhadi, A. (2017). YOLO9000: Better, faster, stronger. In IEEE conference on computer vision and pattern recognition.

  • Ren, S., He, K., Girshick, R., & Sun, J. (2015). Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems.

  • Romera, E., Alvarez, J. M., Bergasa, L. M., & Arroyo, R. (2018). ERFNet: Efficient residual factorized convnet for real-time semantic segmentation. IEEE Transactions on Intelligent Transportation Systems, 19(1), 263–272.


  • Roser, M., & Geiger, A. (2009). Video-based raindrop detection for improved image registration. In IEEE international conference on computer vision workshops.

  • Roser, M., Kurz, J., & Geiger, A. (2010). Realistic modeling of water droplets for monocular adherent raindrop recognition using Bezier curves. In Asian conference on computer vision.

  • Rousseau, P., Jolivet, V., & Ghazanfarpour, D. (2006). Realistic real-time rain rendering. Computers and Graphics, 30(4), 507–518.


  • Mehta, S., Rastegari, M., Caspi, A., Shapiro, L., & Hajishirzi, H. (2018). ESPNet: Efficient spatial pyramid of dilated convolutions for semantic segmentation. In European conference on computer vision.

  • Sakaridis, C., Dai, D., & Van Gool, L. (2018). Semantic foggy scene understanding with synthetic data. International Journal of Computer Vision, 126(9), 973–992.


  • Shen, Z., Liu, Z., Li, J., Jiang, Y. G., Chen, Y., & Xue, X. (2017). DSOD: Learning deeply supervised object detectors from scratch. In IEEE international conference on computer vision.

  • Tasar, O., Happy, S., Tarabalka, Y., & Alliez, P. (2020). SemI2I: Semantically consistent image-to-image translation for domain adaptation of remote sensing data. ArXiv preprint arXiv:2002.05925.

  • Tatarchuk, N. (2006). Artist-directable real-time rain rendering in city environments. In SIGGRAPH 2006 courses (pp. 23–64). ACM.

  • Tsai, Y. H., Hung, W. C., Schulter, S., Sohn, K., Yang, M. H., & Chandraker, M. (2018). Learning to adapt structured output space for semantic segmentation. In IEEE conference on computer vision and pattern recognition.

  • van Boxel, J. H., et al. (1997). Numerical model for the fall speed of rain drops in a rain fall simulator. In Workshop on wind and water erosion.

  • Weber, Y., Jolivet, V., Gilet, G., & Ghazanfarpour, D. (2015). A multiscale model for rain rendering in real-time. Computers and Graphics, 50, 61–70.


  • Xie, Y., Franz, E., Chu, M., & Thuerey, N. (2018). tempoGAN: A temporally coherent, volumetric GAN for super-resolution fluid flow. ACM Transactions on Graphics (SIGGRAPH), 37(4), 1–15.


  • Yang, F., Choi, W., & Lin, Y. (2016). Exploit all the layers: Fast and accurate CNN object detector with scale dependent pooling and cascaded rejection classifiers. In IEEE conference on computer vision and pattern recognition.

  • Yang, W., Tan, R. T., Feng, J., Liu, J., Guo, Z., & Yan, S. (2017). Deep joint rain detection and removal from a single image. In IEEE conference on computer vision and pattern recognition.

  • Yi, Z., Zhang, H., Tan, P., & Gong, M. (2017). DualGAN: Unsupervised dual learning for image-to-image translation. In IEEE international conference on computer vision.

  • Yu, F., Xian, W., Chen, Y., Liu, F., Liao, M., Madhavan, V., & Darrell, T. (2018). BDD100K: A diverse driving video database with scalable annotation tooling. ArXiv preprint arXiv:1805.04687.

  • Zendel, O., Honauer, K., Murschitz, M., Steininger, D., & Fernandez Dominguez, G. (2018). WildDash: Creating hazard-aware benchmarks. In European conference on computer vision.

  • Zhang, H., & Patel, V. M. (2018). Density-aware single image de-raining using a multi-stream dense network. In IEEE conference on computer vision and pattern recognition.

  • Zhang, H., Sindagi, V., & Patel, V. M. (2019). Image de-raining using a conditional generative adversarial network. IEEE Transactions on Circuits and Systems for Video Technology.

  • Zhang, J., Sunkavalli, K., Hold-Geoffroy, Y., Hadap, S., Eisenman, J., & Lalonde, J. F. (2019). All-weather deep outdoor lighting estimation. In IEEE conference on computer vision and pattern recognition.

  • Zhao, H., Qi, X., Shen, X., Shi, J., & Jia, J. (2018). ICNet for real-time semantic segmentation on high-resolution images. In European conference on computer vision.

  • Zhao, H., Shi, J., Qi, X., Wang, X., & Jia, J. (2017). Pyramid scene parsing network. In IEEE conference on computer vision and pattern recognition.

  • Zhu, J. Y., Park, T., Isola, P. & Efros, A. A. (2017). Unpaired image-to-image translation using cycle-consistent adversarial networks. In IEEE international conference on computer vision.

Download references

Acknowledgements

This work was partially supported by the Service de coopération et d’action culturelle du Consulat général de France à Québec, as well as the FRQ-NT with the Samuel-de-Champlain grant. We gratefully thank Pierre Bourré for his priceless technical help, Aitor Gomez Torres for his initial input, and Srinivas Narasimhan for letting us reuse the physical simulator. We also thank the NVIDIA Corporation for the donation of the GPU used in this research.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Maxime Tremblay.

Additional information

Communicated by Dengxin Dai, Robby T. Tan, Vishal Patel, Jiri Matas, Bernt Schiele and Luc Van Gool.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Supplementary material 1 (PDF 38109 KB)

Appendices

A Field of View of a Drop in a Sphere

Fig. 19 Geometrical construction to compute a drop FOV. \(\mathbf {X}_\mathbf{0}\) and \(\mathbf {X}_\mathbf{1}\) are the drop positions at shutter opening and closing, respectively; we assume a constant drop position \(\mathbf {X} = \frac{\mathbf {X}_\mathbf{0}+\mathbf {X}_\mathbf{1}}{2}\) during the few milliseconds of exposure time. For simplicity, only a slice of the drop FOV is drawn; a full 3D visualization would show a cone. The drop FOV in the environment map is the projection of the 3D drop FOV onto the scene sphere of constant distance (refer to the text for details)

We estimate the field of view (FOV) of a drop when projected on a sphere to compute the radiance and chromaticity of each streak, as detailed in Sect. 3.1.2 of the main paper. Despite the drop's motion, we assume a constant field of view within a given exposure time. This is acceptable given the short exposure times used here (i.e. 2 ms for KITTI, 5 ms for Cityscapes). For each drop, the simulator outputs the start position (i.e. shutter opening) and end position (i.e. shutter closing) in both the 3D camera-centered and the 2D image coordinate frames.

We refer to Fig. 19 for a geometrical illustration of the following. Let us consider an imaged drop D, with 3D start position \(\mathbf {X}_\mathbf{0}\) and end position \(\mathbf {X}_\mathbf{1}\). We first compute \(\mathbf {X} = \frac{\mathbf {X}_\mathbf{0}+\mathbf {X}_\mathbf{1}}{2}\), the assumed constant position for which we estimate the corresponding FOV. The position being camera-centered, the drop viewing direction is therefore \(\mathbf {d}=\frac{\mathbf {X}}{||\mathbf {X}||}\).
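
As a minimal NumPy sketch of this step (the helper name is ours, not the authors' implementation):

```python
import numpy as np

def drop_position_and_direction(X0, X1):
    """Assumed constant drop position over the exposure and its unit viewing
    direction, both expressed in the camera-centered frame (Fig. 19)."""
    X = 0.5 * (np.asarray(X0, dtype=float) + np.asarray(X1, dtype=float))
    d = X / np.linalg.norm(X)
    return X, d
```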

We compute the equation of the plane P going through \(\mathbf {X}\) and orthogonal to the viewing direction \(\mathbf {d}\):

$$\begin{aligned} P: \mathbf {d}_x x+\mathbf {d}_y y+\mathbf {d}_z z-\mathbf {d}\cdot \mathbf {X} = 0, \end{aligned}$$
(6)

where \((x, y, z)\) are the coordinates of a point on P and \(\cdot \) denotes the dot product; we then select a random unit vector \(\mathbf {u}\) (\(||\mathbf {u}|| = 1\)) lying on P. Accounting for the field of view of the drop, \(\theta \approx 165^\circ \) (according to Garg and Nayar 2007), we compute an arbitrary vector \(\mathbf {v}\) on the viewing cone through the drop

$$\begin{aligned} \mathbf {v} = \mathbf {d}\cdot \mathbf {R}_{\mathbf {u}}(\theta /2), \end{aligned}$$
(7)

with \(\mathbf {R}_{\mathbf {u}}(\theta /2)\) the \(3\times 3\) rotation matrix of angle \(\theta /2\) about vector \(\mathbf {u}\). We use \(\theta /2\) because the cone is symmetric about the viewing direction, so the complete cone field of view is \(\theta \). The set \(V'\) of vectors forming the viewing cone through the drop is obtained by rotating \(\mathbf {v}\) all around the viewing direction. Formally,

$$\begin{aligned} V' = \{\mathbf {v}\cdot \mathbf {R}_{\mathbf {d}}(\alpha ) ~| ~\forall ~\alpha \in [0, 2\pi [\}, \end{aligned}$$
(8)

with \(\mathbf {R}_{\mathbf {d}}(\alpha )\) the rotation matrix of angle \(\alpha \) about vector \(\mathbf {d}\). In practice, \(V'\) is a finite set of radially equidistant vectors (for computational reasons we use \(|V'| = 20\)).
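
A minimal sketch of this cone construction (Eqs. 7-8) using Rodrigues rotations; the use of SciPy and the helper name are our own assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def viewing_cone(d, theta=np.deg2rad(165.0), n=20):
    """Sample n vectors on the drop viewing cone of aperture theta around d (Eq. 8)."""
    d = np.asarray(d, dtype=float)
    d = d / np.linalg.norm(d)
    # Any unit vector orthogonal to d lies on the plane P of Eq. (6); the paper
    # picks a random one, but a deterministic choice works just as well here.
    helper = np.array([1.0, 0.0, 0.0]) if abs(d[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(d, helper)
    u = u / np.linalg.norm(u)
    # Tilt d by theta / 2 about u to get one vector v on the cone (Eq. 7).
    v = R.from_rotvec(u * theta / 2.0).apply(d)
    # Rotate v around d to sample the full cone (Eq. 8).
    alphas = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.stack([R.from_rotvec(d * a).apply(v) for a in alphas])
```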

To compute the coordinates of the drop FOV in the environment, we assume a projection sphere S of radius 10 m. Hence, we compute the set \(Q = \{\phi (S, \mathbf {v'}) ~| ~\forall ~\mathbf {v'} \in V'\}\) of points where these vectors intersect the environment sphere, considering only the positive viewing direction axis. Given that the sphere is centered on the camera position and all drops' 3D positions are expressed in the camera reference frame, the intersection \(\phi (S, \mathbf {v'})\) of a vector \(\mathbf {v'}\) and the sphere S of radius \(S_{\rho }\) is straightforward:

$$\begin{aligned} \begin{aligned} \phi (S, \mathbf {v'})&= \mathbf {v'} + t\mathbf {d}\,\text {with},\\ t&= \frac{-b + \sqrt{b^2 - 4ac}}{2a},\\ a&= \mathbf {d}_x^2 + \mathbf {d}_y^2 + \mathbf {d}_z^2,\\ b&= 2(\mathbf {d}_x\mathbf {v'}_x + \mathbf {d}_y\mathbf {v'}_y + \mathbf {d}_z\mathbf {v'}_z),\\ c&= \mathbf {v'}_x^2 + \mathbf {v'}_y^2 + \mathbf {v'}_z^2 - S_{\rho }^2. \end{aligned} \end{aligned}$$
(9)
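
Equation (9) is the standard positive root of the quadratic ray-sphere intersection; a minimal sketch (with our naming) is:

```python
import numpy as np

def sphere_intersection(v_prime, d, radius=10.0):
    """Intersection of the ray v' + t * d with the camera-centered sphere of the
    given radius, keeping the positive root as in Eq. (9)."""
    a = np.dot(d, d)
    b = 2.0 * np.dot(d, v_prime)
    c = np.dot(v_prime, v_prime) - radius ** 2
    t = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return v_prime + t * d
```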

Having computed Q, the finite set of 3D positions intersecting our environment sphere S, the set \(Q'\) of spherical coordinates (azimuth, altitude) is obtained by a simple Cartesian-to-spherical mapping and directly transferred to the environment map. Thus, \(Q'\) is the projection of the drop FOV onto the environment map.
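
A possible Cartesian-to-spherical mapping is sketched below; the axis convention (z forward, x right, y up) is an assumption made for illustration and should match the environment-map convention actually used.

```python
import numpy as np

def to_azimuth_altitude(p):
    """Map a 3D point on the environment sphere to (azimuth, altitude) in radians."""
    x, y, z = p
    azimuth = np.arctan2(x, z)                    # assumed convention: z forward, x right
    altitude = np.arcsin(y / np.linalg.norm(p))   # assumed convention: y up
    return azimuth, altitude
```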

Table 3 Data splits for all experiments

As an implementation detail, note that \(Q'\) is a discrete representation of the contour of the drop field of view. In practice, a polygon-filling algorithm is used to obtain the drop FOV F, which we use to compute the photometry of a rain streak (cf. Sect. 3.1.4 of the main paper).
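
One possible rasterization of that contour, here with OpenCV's polygon filling; the choice of library and the helper name are ours.

```python
import cv2
import numpy as np

def drop_fov_mask(env_map_hw, contour_px):
    """Fill the discrete FOV contour Q' (already converted to environment-map pixel
    coordinates, as an (N, 2) integer array) into a binary mask F."""
    mask = np.zeros(env_map_hw, dtype=np.uint8)
    cv2.fillPoly(mask, [np.asarray(contour_px, dtype=np.int32)], 255)
    return mask.astype(bool)
```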

B Compositing a Rain Streak with Different Exposure Time

In their seminal work, Garg and Nayar (2005) demonstrated that the streak appearance is closely related to the amount of time \(\tau \) a drop stays on a pixel. It is thus important to account for the differences in exposure time between the streak appearance database (Garg and Nayar 2006) and the images to which rain is added. Given that the appearance database does not provide enough calibration data to recompute the exact original \(\tau _0\), we estimate it using observations made in Appendix 10.3 of Garg and Nayar (2007), which states that for a constant exposure time, \(\tau \) can be safely approximated as \(\sqrt{a}/50\) (with a the drop diameter, in meters). We use this approximation to compute \(\tau _0\) according to the simulation settings of Garg and Nayar (2006).
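
In code, this approximation is a one-liner; the function name is ours.

```python
import math

def time_on_pixel(drop_diameter_m):
    """Approximate time (s) a drop stays on a pixel for a constant exposure:
    tau ~ sqrt(a) / 50, with a the drop diameter in meters
    (Appendix 10.3 of Garg and Nayar 2007)."""
    return math.sqrt(drop_diameter_m) / 50.0
```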

Using the notation defined in Eq. (5) from the main paper, the radiance of streak \(S'\) is corrected with

$$\begin{aligned} S'\frac{\tau _1}{\tau _0}, \end{aligned}$$
(10)

where \(\tau _1\) is the time the current drop stays on a pixel, as obtained streak-wise from the physical simulator. Notably, Garg and Nayar (2007) also emphasize that, for a given streak, the variation of \(\tau \) across pixels is negligible, so \(\tau \) can safely be assumed constant.

Finally, after normalization, the alpha of each streak is scaled according to \(\tau _1\) and the targeted exposure time T. According to the Garg and Nayar equations (cf. Eq. (18) of Garg and Nayar 2007), the composite rainy image is an alpha blending of the background image \(I_{bg}\) and the rain layer \(I_r\). For a pixel \(\mathbf {x}\) corresponding to \(\mathbf {x'}\) in the streak coordinates, this leads to:

$$\begin{aligned} \begin{aligned} I_{rainy}(\mathbf {x})&= \alpha {}I_{bg}(\mathbf {x}) + I_{r}(\mathbf {x'}),\\&= \frac{T-S'_{\alpha }(\mathbf {x'})\tau _1}{T}I_{bg}(\mathbf {x}) + S'(\mathbf {x'})\frac{\tau _1}{\tau _0}.\\ \end{aligned} \end{aligned}$$
(11)
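
A minimal per-pixel sketch of Eqs. (10)-(11), assuming the streak radiance S and its alpha S_alpha have already been warped onto the background footprint; the names and array conventions are ours.

```python
import numpy as np

def composite_streak(I_bg, S, S_alpha, tau0, tau1, T):
    """Blend one rendered streak into the background following Eqs. (10)-(11):
    the streak radiance is rescaled by tau1 / tau0 and the background is
    attenuated by (T - S_alpha * tau1) / T. All inputs are float arrays."""
    alpha = (T - S_alpha * tau1) / T
    return alpha * I_bg + S * (tau1 / tau0)
```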

C Experiments Data Splits

Table 3 details the data splits used in the various experimental steps of this paper.


Cite this article

Tremblay, M., Halder, S.S., de Charette, R. et al. Rain Rendering for Evaluating and Improving Robustness to Bad Weather. Int J Comput Vis 129, 341–360 (2021). https://doi.org/10.1007/s11263-020-01366-3
