Land cover classification of spaceborne multifrequency SAR and optical multispectral data using machine learning
Advances in Space Research (IF 2.8), Pub Date: 2021-06-21, DOI: 10.1016/j.asr.2021.06.028
Rajat Garg, Anil Kumar, Manish Prateek, Kamal Pandey, Shashi Kumar

This study compares the utility of multifrequency SAR and optical multispectral data for land-cover classification of Mumbai city and its nearby regions, with a special focus on water-body mapping. L-band ALOS-2 PALSAR-2, X-band TerraSAR-X, C-band RISAT-1, and Sentinel-2 datasets were used in this work, which serves as a retrospective study for the dual-frequency L- and S-band NASA-ISRO Synthetic Aperture Radar (NISAR) mission. The ALOS-2 PALSAR-2 data was pre-processed before machine learning algorithms were applied for image segmentation: multi-looking was performed to generate square pixels of size 5.78 m, and target decomposition was then applied to generate a false-color composite RGB image. In the case of the TerraSAR-X and RISAT-1 datasets, no multi-looking was performed; target decomposition was applied directly to generate false-color composite RGB images. Similarly, for the optical dataset, which has a resolution of 10 m, a true-color composite and a false-color composite RGB image were generated. For the comparative study between the ALOS-2 PALSAR-2 and Sentinel-2 datasets, the RGB images were divided into smaller chunks of 500 × 500 pixels each to create training and testing datasets. Ten image patches were taken from the large dataset; eight were used to train the machine learning models Random Forest (RF), K-Nearest Neighbors (KNN), and Support Vector Machine (SVM), and two were kept for testing and validation. For training the machine learning models, feature vectors were generated using the Gabor, Scharr, Gaussian, and median filters.
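The filter-based feature extraction described above can be sketched with plain numpy. The paper does not give its filter parameters, so the kernel sizes, Gabor frequency/orientation, and the single-band 500 × 500 input below are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np

def conv2d(img, kernel):
    """'Same' 2-D convolution with edge padding (no external deps)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def gabor_kernel(freq, theta, sigma, size=9):
    """Real part of a Gabor kernel (illustrative parameters)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def feature_stack(band):
    """Per-pixel features: raw value, Gabor response, Scharr gradient
    magnitude, Gaussian smoothing, and a 3x3 median-filter response."""
    scharr_x = np.array([[3, 0, -3], [10, 0, -10], [3, 0, -3]], dtype=float)
    gauss = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 16.0
    gab = conv2d(band, gabor_kernel(freq=0.25, theta=0.0, sigma=2.0))
    grad = np.hypot(conv2d(band, scharr_x), conv2d(band, scharr_x.T))
    smooth = conv2d(band, gauss)
    # 3x3 median via stacked sliding windows over an edge-padded image
    p = np.pad(band, 1, mode="edge")
    windows = np.stack([p[i:i + band.shape[0], j:j + band.shape[1]]
                        for i in range(3) for j in range(3)], axis=-1)
    med = np.median(windows, axis=-1)
    return np.stack([band, gab, grad, smooth, med], axis=-1)  # (H, W, 5)

rng = np.random.default_rng(0)
patch = rng.random((500, 500))           # stand-in for one 500 x 500 chunk
X = feature_stack(patch).reshape(-1, 5)  # one feature row per pixel
print(X.shape)  # (250000, 5)
```

Each pixel's feature row would then be fed to the RF, KNN, or SVM classifier (e.g. scikit-learn's `RandomForestClassifier`) alongside its ground-truth class label.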
For patch 1, the mIOU for the true-color-composite optical image varies from 0.2323 to 0.2866, with the RF classifier performing best; the mIOU for the false-color-composite optical image varies from 0.4130 to 0.4941, again with RF performing best; and for ALOS-2 PALSAR-2 data, the mIOU varies from 0.4033 to 0.4663, with RF outperforming the KNN and SVM classifiers. For patch 2, the mIOU for the true-color-composite optical data varies from 0.3451 to 0.4517, with KNN performing best; the mIOU for the false-color-composite optical image varies from 0.5156 to 0.5832, with RF performing best; and for ALOS-2 PALSAR-2 data, the mIOU varies from 0.4600 to 0.5178, with RF again outperforming KNN and SVM. The gap between ALOS-2 PALSAR-2 data and Sentinel-2 optical data appears when the IOU of the water class (IOUw) is compared: IOUw reaches a maximum of 0.2525 for the true-color-composite optical image and 0.7366 for the false-color-composite optical image, while ALOS-2 PALSAR-2 data achieves a maximum IOUw of 0.7948. The better performance of SAR data compared to the true-color-composite optical image is due to the misclassification of the ground and water classes as urban and forest in the true-color-composite optical dataset, which can be attributed to the high similarity between the water and forest classes in true-color-composite optical data, whereas both classes are easily separable in SAR data. This issue is resolved by using the false-color-composite optical image dataset for the classification task, which performs slightly better than ALOS-2 PALSAR-2 data overall. However, SAR data works best for water-body detection, as is evident from its high IOU for the water class.
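The reported mIOU and per-class IOUw figures follow the standard intersection-over-union definition. A minimal numpy sketch on toy 4-class label maps (the class indices and maps are illustrative, not from the paper):

```python
import numpy as np

def class_iou(pred, true, cls):
    """Intersection-over-union for one class label."""
    inter = np.logical_and(pred == cls, true == cls).sum()
    union = np.logical_or(pred == cls, true == cls).sum()
    return inter / union if union else float("nan")

def mean_iou(pred, true, classes):
    """mIOU: average of the per-class IoU over the given labels."""
    ious = [class_iou(pred, true, c) for c in classes]
    return float(np.nanmean(ious)), dict(zip(classes, ious))

# toy label maps (0=water, 1=urban, 2=forest, 3=ground)
true = np.array([[0, 0, 1, 1],
                 [0, 2, 2, 1],
                 [3, 3, 2, 1],
                 [3, 3, 2, 2]])
pred = np.array([[0, 0, 1, 1],
                 [2, 2, 2, 1],
                 [3, 3, 2, 1],
                 [3, 0, 2, 2]])
miou, per_class = mean_iou(pred, true, classes=[0, 1, 2, 3])
print(round(miou, 4), per_class[0])  # 0.7708 0.5
```

Here `per_class[0]` plays the role of IOUw: the water class is scored on its own, separately from the mean over all classes.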
In addition to the comparative analysis between Sentinel-2 optical and ALOS-2 PALSAR-2 data, land-cover classification was performed on X-band TerraSAR-X and C-band RISAT-1 data on a single patch. The RF classifier again performed best, recording an mIOU of 0.5815 for X-band TerraSAR-X data, 0.4031 for C-band RISAT-1 data, and 0.6153 for L-band ALOS-2 data.




Updated: 2021-06-21