
CloudU-Netv2: A Cloud Segmentation Method for Ground-Based Cloud Images Based on Deep Learning


Abstract

Accurately acquiring cloud information through cloud image segmentation is of great importance for weather forecasting, environmental monitoring, observatory site selection, and the analysis of climate evolution. In this paper, a deep-learning-based cloud segmentation method, called CloudU-Netv2, is proposed to segment daytime and nighttime ground-based cloud images. CloudU-Netv2 consists of an encoder, dual attention modules, and a decoder. The contributions of this paper are fourfold. First, the upsampling layers in CloudU-Net are replaced with bilinear upsampling. Second, position and channel attention modules are added to the structure to improve the discriminative ability of the feature representations. Third, rectified Adam (RAdam) is chosen as the optimizer for the CloudU-Netv2 structure. Finally, ablation experiments are conducted on the key components of CloudU-Netv2, and the method is compared with four existing advanced methods using six evaluation metrics. The results show that the key components of the model play a pivotal role in improving segmentation performance, and that the proposed CloudU-Netv2 achieves the best segmentation performance for daytime and nighttime ground-based cloud images among the compared methods.
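
The abstract summarizes the architectural changes without implementation detail. Below is a minimal PyTorch sketch of those ideas for orientation only; it is not the authors' released code, and the class names, channel sizes, and test shapes are illustrative assumptions. It shows position and channel attention modules in the style of the dual attention network, bilinear upsampling for the decoder, and the rectified Adam (RAdam) optimizer (available as torch.optim.RAdam in recent PyTorch releases).

```python
# Minimal sketch (assumed PyTorch implementation), not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PositionAttention(nn.Module):
    """Spatial self-attention: each position is re-weighted by its
    similarity to every other position in the feature map."""

    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C//8)
        k = self.key(x).flatten(2)                      # (B, C//8, HW)
        attn = torch.softmax(torch.bmm(q, k), dim=-1)   # (B, HW, HW)
        v = self.value(x).flatten(2)                    # (B, C, HW)
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x


class ChannelAttention(nn.Module):
    """Channel self-attention: channel maps are re-weighted by their
    inter-channel similarity."""

    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        q = x.flatten(2)                                                # (B, C, HW)
        attn = torch.softmax(torch.bmm(q, q.transpose(1, 2)), dim=-1)   # (B, C, C)
        out = torch.bmm(attn, q).view(b, c, h, w)
        return self.gamma * out + x


def bilinear_upsample(x, scale=2):
    """Decoder upsampling done with fixed bilinear interpolation
    instead of a learned upsampling layer."""
    return F.interpolate(x, scale_factor=scale, mode="bilinear", align_corners=False)


if __name__ == "__main__":
    feats = torch.randn(1, 64, 32, 32)                  # dummy encoder features
    fused = ChannelAttention()(PositionAttention(64)(feats))
    print(bilinear_upsample(fused).shape)               # torch.Size([1, 64, 64, 64])
    # Training would use rectified Adam, e.g.:
    # optimizer = torch.optim.RAdam(model.parameters(), lr=1e-3)
```

How the two attention branches are fused with the encoder and decoder features in the full CloudU-Netv2 architecture is described in the paper itself and is not covered by this sketch.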




Acknowledgements

This work was supported by the Joint Research Fund in Astronomy under a cooperative agreement between the National Natural Science Foundation of China (NSFC) and the Chinese Academy of Sciences (CAS) under Grant U1931134, the Hebei Province Foundation for Returned Overseas Scholars (CL201707), and the Hebei Province Natural Science Foundation (F2019202364).

Author information


Corresponding author

Correspondence to Yatong Zhou.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Shi, C., Zhou, Y. & Qiu, B. CloudU-Netv2: A Cloud Segmentation Method for Ground-Based Cloud Images Based on Deep Learning. Neural Process Lett 53, 2715–2728 (2021). https://doi.org/10.1007/s11063-021-10457-2

