
Neutralizing the impact of atmospheric turbulence on complex scene imaging via deep learning

Abstract

A turbulent medium with eddies of different scales gives rise to fluctuations in the index of refraction during wave propagation, which interferes with the original spatial relationships, phase relationships and optical paths. The outputs of two-dimensional imaging systems suffer from anamorphosis brought about by this effect. Randomness, along with multiple types of degradation, makes it a challenging task to analyse the reciprocal physical process. Here, we present a generative adversarial network (TSR-WGAN), which integrates temporal and spatial information embedded in the three-dimensional input to learn the representation of the residual between the observed and latent ideal data. Vision-friendly and credible sequences are produced without extra assumptions on the scale and strength of turbulence. The capability of TSR-WGAN is demonstrated through tests on our dataset, which contains 27,458 sequences with 411,870 frames of algorithm simulated data, physical simulated data and real data. TSR-WGAN exhibits promising visual quality and a deep understanding of the disparity between random perturbations and object movements. These preliminary results also shed light on the potential of deep learning to parse stochastic physical processes from particular perspectives and to solve complicated image reconstruction problems given limited data.
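The degradation described above, eddy-induced refractive-index fluctuations that randomly displace and blur image content, can be illustrated with a minimal NumPy/SciPy sketch. This is a simplified stand-in, not the simulation method used to build the paper's dataset; the function name and all parameters (`strength`, `corr`, `blur`) are illustrative assumptions. The last line computes the residual between the observed and latent ideal data, which is the quantity the TSR-WGAN generator is trained to represent.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def simulate_turbulence(frame, strength=2.0, corr=8.0, blur=1.0, rng=None):
    """Apply a spatially smooth random geometric distortion plus blur to a
    2D frame -- a toy model of imaging through atmospheric turbulence.
    All parameter names and values here are illustrative assumptions."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = frame.shape
    # Smooth random displacement fields model eddy-induced local tilts:
    # white noise low-pass filtered at the eddy correlation scale `corr`.
    dy = gaussian_filter(rng.standard_normal((h, w)), corr) * strength * corr
    dx = gaussian_filter(rng.standard_normal((h, w)), corr) * strength * corr
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    # Resample the frame at the perturbed coordinates (geometric distortion).
    warped = map_coordinates(frame, [yy + dy, xx + dx], order=1, mode='reflect')
    # Add a blur on top of the distortion, mimicking turbulence-induced defocus.
    return gaussian_filter(warped, blur)

clean = np.zeros((64, 64))
clean[24:40, 24:40] = 1.0                      # toy static scene
degraded = simulate_turbulence(clean, rng=np.random.default_rng(0))
residual = degraded - clean                    # what a residual-learning generator models
```

In this residual formulation, a restoration network G does not output the clean frame directly; it predicts a correction that is added to the observation, so the restored frame is `observed + G(observed)`.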


Fig. 1: Schematic of imaging through atmospheric turbulence.
Fig. 2: A comparison of turbulence mitigation models for both stabilizing and preserving motion information.
Fig. 3: A comparison of sequence-processing methods on a static region.
Fig. 4: Column sequences with the same temporal index.
Fig. 5: A subjective comparison of the turbulence-mitigation models.
Fig. 6: A sketch of the physical simulation platform.

Data availability

The full datasets, including the algorithm simulated data, physical simulated data and real-world data presented in this Article, are publicly available in the Zenodo repository at https://doi.org/10.5281/zenodo.5101910 (ref. 51).

Code availability

The code presented in this Article is available through a Code Ocean compute capsule (https://doi.org/10.24433/CO.3517894.v1; ref. 52), together with a subset of data to test the network.

References

  1. Xia, Z. H. Multiple states in turbulence. Chin. Sci. Bull. 64, 373–383 (2019).

  2. Wyngaard, J. C. Atmospheric turbulence. Annu. Rev. Fluid Mech. 24, 205–234 (1992).

  3. Lohse, D. & Xia, K. Q. Small-scale properties of turbulent Rayleigh–Bénard convection. Annu. Rev. Fluid Mech. 42, 335–364 (2010).

  4. Xi, H. D. & Xia, K. Q. Flow mode transitions in turbulent thermal convection. Phys. Fluids 20, 055104 (2008).

  5. Zhu, X. & Milanfar, P. Removing atmospheric turbulence via space-invariant deconvolution. IEEE Trans. Pattern Anal. Mach. Intell. 35, 157–170 (2013).

  6. Wu, C. S., Ko, J. & Davis, C. C. Imaging through strong turbulence with a light field approach. Opt. Express 24, 11975–11986 (2016).

  7. Rigaut, F. & Neichel, B. Multiconjugate adaptive optics for astronomy. Annu. Rev. Astron. Astr. 56, 277–314 (2018).

  8. Hope, D. A., Jefferies, S. M., Hart, M. & Nagy, J. G. High-resolution speckle imaging through strong atmospheric turbulence. Opt. Express 24, 12116–12129 (2016).

  9. Law, N. M., Mackay, C. D. & Baldwin, J. Lucky imaging: high angular resolution imaging in the visible from the ground. Astron. Astrophys. 446, 739–745 (2006).

  10. Anantrasirichai, N., Achim, A., Kingsbury, N. G. & Bull, D. R. Atmospheric turbulence mitigation using complex wavelet-based fusion. IEEE Trans. Image Process. 22, 2398–2408 (2013).

  11. Hirsch, M., Sra, S., Schölkopf, B. & Harmeling, S. Efficient filter flow for space-variant multiframe blind deconvolution. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 607–614 (IEEE, 2010).

  12. Xie, Y. et al. Removing turbulence effect via hybrid total variation and deformation-guided kernel regression. IEEE Trans. Image Process. 25, 4943–4958 (2016).

  13. Oreifej, O., Li, X. & Shah, M. Simultaneous video stabilization and moving object detection in turbulence. IEEE Trans. Pattern Anal. Mach. Intell. 35, 450–462 (2013).

  14. Halder, K. K., Tahtali, M. & Anavatti, G. Geometric correction of atmospheric turbulence-degraded video containing moving objects. Opt. Express 23, 5091–5101 (2015).

  15. Iqbal, A., Khan, R. & Karayannis, T. Developing brain atlas through deep learning. Nat. Mach. Intell. 1, 277–287 (2019).

  16. Tolkach, Y., Dohmgörgen, T., Toma, M. & Kristiansen, G. High-accuracy prostate cancer pathology using deep learning. Nat. Mach. Intell. 2, 411–418 (2020).

17. Richards, B. A., Lillicrap, T. P. et al. A deep learning framework for neuroscience. Nat. Neurosci. 22, 1761–1770 (2019).

  18. Baldi, P., Sadowski, P. & Whiteson, D. Searching for exotic particles in high-energy physics with deep learning. Nat. Commun. 5, 4308 (2014).

  19. Pang, L. et al. An equation-of-state-meter of quantum chromodynamics transition from deep learning. Nat. Commun. 9, 210 (2018).

  20. LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015).

  21. Dai, T., Cai, J., Zhang, Y., Xia, S. & Zhang, L. Second-order attention network for single image super-resolution. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 11065–11074 (IEEE, 2019).

  22. Yu, J. H. et al. Free-form image inpainting with gated convolution. In Proc. IEEE International Conference on Computer Vision 4471–4480 (IEEE, 2019).

  23. Fried, D. L. Optical resolution through a randomly inhomogeneous medium for very long and very short exposures. J. Opt. Soc. Am. 56, 1372–1379 (1966).

  24. Wang, Z., Bovik, A. C., Sheikh, H. R. & Simoncelli, E. P. Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13, 600–612 (2004).

  25. Xue, W., Zhang, L., Mou, X. & Bovik, A. C. Gradient magnitude similarity deviation: a highly efficient perceptual image quality index. IEEE Trans. Image Process. 23, 684–695 (2014).

  26. Zhang, L., Zhang, L., Mou, X. & Zhang, D. FSIM: a feature similarity index for image quality assessment. IEEE Trans. Image Process. 20, 2378–2386 (2011).

  27. Zhang, L., Shen, Y. & Lee, H. VSI: a visual saliency-induced index for perceptual image quality assessment. IEEE Trans. Image Process. 23, 4270–4281 (2014).

  28. Soundararajan, R. & Bovik, A. C. Video quality assessment by reduced reference spatio-temporal entropic differencing. IEEE Trans. Circuits Syst. Video Technol. 23, 684–694 (2013).

  29. Bradley, R. A. & Terry, M. E. Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika 39, 324–345 (1952).

  30. Xue, B. D. et al. Video stabilization in atmosphere turbulent conditions based on the Laplacian–Riesz pyramid. Opt. Express 24, 28092–28103 (2016).

  31. Lou, Y. F., Kang, S. H., Soatto, S. & Bertozzi, A. L. Video stabilization of atmospheric turbulence distortion. Inverse Probl. Imag. 7, 839–861 (2013).

  32. Chan, S. H., Khoshabeh, R., Gibson, K. B., Gill, P. E. & Nguyen, T. Q. An augmented Lagrangian method for total variation video restoration. IEEE Trans. Image Process. 20, 3097–3111 (2011).

  33. Su, S. C. et al. Deep video deblurring for hand-held cameras. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 1279–1288 (IEEE, 2017).

  34. Kupyn, O., Martyniuk, T., Wu, J. & Wang, Z. Y. DeblurGAN-v2: deblurring (orders-of-magnitude) faster and better. In Proc. IEEE International Conference on Computer Vision 8878–8887 (IEEE, 2019).

  35. Zhang, K. H. et al. Adversarial spatio-temporal learning for video deblurring. IEEE Trans. Image Process. 28, 291–301 (2019).

  36. Kim, T. H., Lee, K. M., Scholkopf, B. & Hirsch, M. Online video deblurring via dynamic temporal blending network. In Proc. IEEE International Conference on Computer Vision 4038–4047 (IEEE, 2017).

  37. Pan, J. S., Bai, H. R. & Tang, J. H. Cascaded deep video deblurring using temporal sharpness prior. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 3043–3051 (IEEE, 2020).

  38. Xiang, X. G., Wei, H. & Pan, J. S. Deep video deblurring using sharpness features from exemplars. IEEE Trans. Image Process. 29, 8976–8987 (2020).

  39. Repasi, E. & Weiss, R. Analysis of image distortions by atmospheric turbulence and computer simulation of turbulence effects. In Proc. SPIE, Infrared Imaging Systems: Design, Analysis, Modeling and Testing XIX Vol. 6941, 1–13 (SPIE, 2008).

  40. Arbeláez, P., Maire, M., Fowlkes, C. & Malik, J. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 33, 898–916 (2011).

  41. Soomro, K., Zamir, A. R. & Shah, M. UCF101: a dataset of 101 human actions classes from videos in the wild. Preprint at https://arxiv.org/pdf/1212.0402.pdf (2012).

  42. Smith, F. G. (ed.) The Infrared & Electro-Optical Systems Handbook Vol. 2 (SPIE, 1996).

  43. Badrinarayanan, V., Kendall, A. & Cipolla, R. SegNet: a deep convolutional encoder–decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 39, 2481–2495 (2017).

  44. Kupyn, O., Budzan, V., Mykhailych, M., Mishkin, D. & Matas, J. DeblurGAN: blind motion deblurring using conditional adversarial networks. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 8183–8192 (IEEE, 2018).

  45. He, K. M., Zhang, X. Y., Ren, S. Q. & Sun, J. Deep residual learning for image recognition. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 770–778 (IEEE, 2016).

  46. Isola, P., Zhu, J. Y., Zhou, T. H. & Efros, A. A. Image-to-image translation with conditional adversarial networks. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 1125–1134 (IEEE, 2017).

  47. Zhu, J. Y., Park, T., Isola, P. & Efros, A. A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proc. IEEE International Conference on Computer Vision 2223–2232 (IEEE, 2017).

  48. Johnson, J., Alahi, A. & Li, F. F. Perceptual losses for real-time style transfer and super-resolution. In Proc. European Conference on Computer Vision 694–711 (Springer, 2016).

  49. Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V. & Courville, A. Improved training of Wasserstein GANs. In Proc. Annual Conference on Neural Information Processing Systems 5767–5777 (NIPS, 2017).

  50. Paszke, A. et al. PyTorch: an imperative style, high-performance deep learning library. In Proc. Annual Conference on Neural Information Processing Systems 8024–8035 (NIPS, 2019).

  51. Jin, D. R. et al. Atmospheric turbulence distorted video sequence dataset. Zenodo https://doi.org/10.5281/zenodo.5101910 (2021).

  52. Jin, D. R. et al. Temporal-spatial residual perceiving Wasserstein GAN for turbulence distorted sequence restoration (TSR-WGAN) (CodeOcean, 2021); https://codeocean.com/capsule/9958894/tree/v1

Download references

Acknowledgements

This work was supported by grants from the National Natural Science Foundation of China (no. U1736217) and partly by grants from the National Key Research and Development Program of China (no. 2019YFB1311301).

Author information

Authors and Affiliations

Authors

Contributions

D.J. and X.B. conceived the idea and were responsible for the methodology. D.J., Y.C., Y.L., Z.L. and S.G. performed the experiments and created the dataset. D.J. and J.C. developed the software and designed the subjective evaluation. Y.C. and P.W. participated in the model design. X.B. supervised the project. D.J. and X.B. wrote the manuscript.

Corresponding author

Correspondence to Xiangzhi Bai.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Peer review information Nature Machine Intelligence thanks Soumik Sarkar and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Information

Supplementary Figs. 1–13 and Tables 1–4.

Supplementary Video 1

Dynamic results of algorithm simulated data.

Supplementary Video 2

Dynamic results of algorithm simulated data.

Supplementary Video 3

Dynamic results of algorithm simulated data.

Supplementary Video 4

Dynamic results of physical simulated data.

Supplementary Video 5

Dynamic results of physical simulated data.

Supplementary Video 6

Dynamic results of physical simulated data.

Supplementary Video 7

Dynamic results of real-world data.

Supplementary Video 8

Dynamic results of real-world data.

Supplementary Video 9

Dynamic results of real-world data.

About this article

Cite this article

Jin, D., Chen, Y., Lu, Y. et al. Neutralizing the impact of atmospheric turbulence on complex scene imaging via deep learning. Nat Mach Intell 3, 876–884 (2021). https://doi.org/10.1038/s42256-021-00392-1
