Incorporating DeepLabv3+ and object-based image analysis for semantic segmentation of very high resolution remote sensing images
International Journal of Digital Earth (IF 5.1), Pub Date: 2020-10-09, DOI: 10.1080/17538947.2020.1831087
Shouji Du, Shihong Du, Bo Liu, Xiuyuan Zhang

ABSTRACT

Semantic segmentation of remote sensing images is an important but unsolved problem in the remote sensing community. Advanced semantic segmentation models such as DeepLabv3+ have achieved impressive performance in semantically labeling very high resolution (VHR) remote sensing images. However, these models struggle to capture the precise outlines of ground objects and to exploit the contextual information that reveals relationships among image objects for optimizing segmentation results. Consequently, this study proposes a semantic segmentation method for VHR images that incorporates a deep learning semantic segmentation model (DeepLabv3+) and object-based image analysis (OBIA), wherein the digital surface model (DSM) supplies geometric information to enhance the interpretation of VHR images. The proposed method first obtains two initial probabilistic labeling predictions: one from a DeepLabv3+ network applied to the spectral image, and one from a random forest (RF) classifier applied to hand-crafted features. These two predictions are then fused by Dempster-Shafer (D-S) evidence theory and fed into an object-constrained higher-order conditional random field (CRF) framework, which estimates the final semantic labeling while accounting for spatial contextual information. Applied to the ISPRS 2D semantic labeling benchmark, the method achieves competitive overall accuracies of 90.6% and 85.0% on the Vaihingen and Potsdam datasets, respectively.
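The fusion step can be illustrated with a minimal sketch of Dempster's rule of combination. This is not the authors' implementation: it assumes each classifier's per-pixel class probabilities are treated as mass assigned to singleton classes only (a common simplification), in which case the combined mass is the normalized element-wise product of the two probability maps.

```python
import numpy as np

def ds_fuse(p_cnn: np.ndarray, p_rf: np.ndarray) -> np.ndarray:
    """Fuse two per-pixel class-probability maps with Dempster's rule,
    assuming all mass sits on singleton classes.

    Both inputs have shape (..., n_classes) and rows summing to 1.
    """
    # Agreement mass: element-wise product of the two mass functions.
    joint = p_cnn * p_rf
    # With singleton-only masses, the normalizer (1 - conflict K)
    # equals the sum of the agreeing products.
    norm = joint.sum(axis=-1, keepdims=True)
    return joint / norm

# Toy example: 2 pixels, 3 classes (e.g. building / vegetation / car).
p_cnn = np.array([[0.7, 0.2, 0.1],
                  [0.4, 0.4, 0.2]])
p_rf  = np.array([[0.6, 0.3, 0.1],
                  [0.1, 0.8, 0.1]])
fused = ds_fuse(p_cnn, p_rf)
```

Note how the rule sharpens agreement: for the second pixel, the CNN is undecided between classes 0 and 1, but the RF's strong vote for class 1 tips the fused prediction there. The fused probabilities would then serve as the unary term of the CRF.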




Updated: 2020-10-09