Sentinel SAR-optical fusion for crop type mapping using deep learning and Google Earth Engine
ISPRS Journal of Photogrammetry and Remote Sensing (IF 12.7). Pub Date: 2021-03-24. DOI: 10.1016/j.isprsjprs.2021.02.018
Jarrett Adrian, Vasit Sagan, Maitiniyazi Maimaitijiang

Accurate crop type mapping provides numerous benefits for a deeper understanding of food systems and for yield prediction. Ever-increasing big data, easy access to high-resolution imagery, and cloud-based analytics platforms such as Google Earth Engine have drastically improved scientists' ability to advance data-driven agriculture with improved algorithms for crop type mapping using remote sensing, computer vision, and machine learning. Crop type mapping techniques have mainly relied on standalone SAR or optical imagery; few studies have investigated the potential of SAR-optical data fusion coupled with virtual constellations and 3-dimensional (3D) deep learning networks. To this end, we use a deep learning approach that utilizes the denoised backscatter and texture information from multi-temporal Sentinel-1 SAR data and the spectral information from multi-temporal Sentinel-2 optical data to map ten different crop types, as well as water, soil, and urban areas. Multi-temporal Sentinel-1 data were fused with multi-temporal Sentinel-2 optical data to improve classification accuracy for crop types. We compared the results of a 3D U-Net with state-of-the-art deep learning networks, including SegNet and 2D U-Net, as well as a commonly used machine learning method, Random Forest. The results showed that (1) fusing multi-temporal SAR and optical data yields higher training overall accuracies (OA) (3D U-Net 0.992, 2D U-Net 0.943, SegNet 0.871) and testing OA (3D U-Net 0.941, 2D U-Net 0.847, SegNet 0.643) for crop type mapping than standalone multi-temporal SAR or optical data; (2) optical data fused with SAR data denoised via a denoising convolutional neural network (OA 0.912) performed better for crop type mapping than optical data fused with boxcar- (OA 0.880), Lee- (OA 0.881), or median-filtered (OA 0.887) SAR data; and (3) 3D convolutional neural networks perform better than 2D convolutional neural networks for crop type mapping (SAR OA 0.912, optical OA 0.937, fused OA 0.992).
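The abstract describes building multi-temporal Sentinel-1 and Sentinel-2 composites in Google Earth Engine and fusing them into a single stack before classification. The sketch below illustrates that kind of SAR-optical fusion with the Google Earth Engine Python API; the study area, date windows, band selection, and cloud threshold are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch (not the paper's exact workflow): build per-period
# Sentinel-1 and Sentinel-2 composites and stack them into one fused
# multi-band image for per-pixel crop type classification.
import ee

ee.Initialize()

# Hypothetical study area (replace with the actual region of interest).
roi = ee.Geometry.Rectangle([-90.5, 38.4, -89.9, 38.9])

def s1_composite(start, end):
    """Median VV/VH backscatter composite for one time step."""
    col = (ee.ImageCollection('COPERNICUS/S1_GRD')
           .filterBounds(roi)
           .filterDate(start, end)
           .filter(ee.Filter.eq('instrumentMode', 'IW'))
           .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VV'))
           .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
           .select(['VV', 'VH']))
    return col.median()

def s2_composite(start, end):
    """Cloud-filtered median composite of selected Sentinel-2 bands."""
    col = (ee.ImageCollection('COPERNICUS/S2_SR')
           .filterBounds(roi)
           .filterDate(start, end)
           .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20))
           .select(['B2', 'B3', 'B4', 'B8', 'B11', 'B12']))
    return col.median()

# Example growing-season time steps (dates are assumptions).
periods = [('2019-05-01', '2019-06-01'),
           ('2019-07-01', '2019-08-01'),
           ('2019-09-01', '2019-10-01')]

layers = []
for start, end in periods:
    layers.append(s1_composite(start, end))
    layers.append(s2_composite(start, end))

# Stack the SAR and optical composites into one multi-band image; a
# classifier (e.g., a 3D U-Net treating the periods as a temporal axis,
# or Random Forest) would then be trained on this fused stack.
fused = ee.Image.cat(layers)
```

In practice, the paper additionally denoises the Sentinel-1 backscatter (comparing a denoising CNN against boxcar, Lee, and median filters) and derives texture features before fusion; those steps are omitted here for brevity.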



