High-resolution remote sensing image semantic segmentation based on a deep feature aggregation network
Measurement Science and Technology (IF 2.4), Pub Date: 2021-06-03, DOI: 10.1088/1361-6501/abfbfd
Zhen Wang 1,2, Jianxin Guo 2, Wenzhun Huang 2, Shanwen Zhang 2

Semantic segmentation of high-resolution remote sensing images has a wide range of applications, such as territorial planning, geographic monitoring and smart cities. Accurate semantic segmentation of remote sensing images remains challenging because of the complex and diverse transitions between different ground areas. Although several convolutional neural networks (CNNs) have been developed for remote sensing semantic segmentation, their performance still falls short of expectations. This study presents a deep feature aggregation network (DFANet) for remote sensing image semantic segmentation. It is composed of a basic feature representation layer, an intermediate feature aggregation layer, a deep feature aggregation layer and a feature aggregation module (FAM). Specifically, the basic feature representation layer is used to obtain feature maps at different resolutions; the intermediate and deep feature aggregation layers fuse features of various resolutions and scales; the FAM splices these features to form richer spatial feature maps; and a conditional random field module is used to refine the segmentation results. We have performed extensive experiments on the ISPRS two-dimensional Vaihingen and Potsdam remote sensing image datasets and compared the proposed method with several variants of semantic segmentation networks. The experimental results show that DFANet outperforms the other state-of-the-art approaches.
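
To make the described pipeline more concrete, the following is a minimal, hypothetical PyTorch sketch of a network with the same high-level structure: a basic feature representation stage that produces maps at several resolutions, intermediate and deep aggregation stages that fuse them, a FAM that splices the streams into a richer spatial map, and a segmentation head. All channel widths, layer counts and module names are illustrative assumptions; the abstract does not specify the actual DFANet configuration, and the CRF refinement step is omitted here.

# Hypothetical sketch of the DFANet-style pipeline described above (PyTorch).
# Layer widths, the backbone and module names are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_bn_relu(in_ch, out_ch, stride=1):
    """3x3 convolution followed by batch normalization and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class DFANetSketch(nn.Module):
    def __init__(self, num_classes=6):
        super().__init__()
        # Basic feature representation layer: feature maps at 1/2, 1/4 and 1/8 resolution.
        self.stage1 = conv_bn_relu(3, 32, stride=2)
        self.stage2 = conv_bn_relu(32, 64, stride=2)
        self.stage3 = conv_bn_relu(64, 128, stride=2)
        # Intermediate feature aggregation: fuses adjacent-resolution features.
        self.agg_mid = conv_bn_relu(32 + 64, 64)
        # Deep feature aggregation: fuses the intermediate result with the deepest features.
        self.agg_deep = conv_bn_relu(64 + 128, 128)
        # Feature aggregation module (FAM): splices all streams into a richer spatial map.
        self.fam = conv_bn_relu(32 + 64 + 128, 128)
        self.classifier = nn.Conv2d(128, num_classes, 1)

    def forward(self, x):
        f1 = self.stage1(x)   # 1/2 resolution
        f2 = self.stage2(f1)  # 1/4 resolution
        f3 = self.stage3(f2)  # 1/8 resolution
        # Upsample the deeper maps to f1's spatial size before fusion.
        up = lambda t: F.interpolate(t, size=f1.shape[-2:], mode="bilinear", align_corners=False)
        mid = self.agg_mid(torch.cat([f1, up(f2)], dim=1))
        deep = self.agg_deep(torch.cat([mid, up(f3)], dim=1))
        fused = self.fam(torch.cat([f1, mid, deep], dim=1))
        logits = self.classifier(fused)
        # The CRF post-processing mentioned in the abstract would refine these logits; omitted here.
        return F.interpolate(logits, size=x.shape[-2:], mode="bilinear", align_corners=False)


if __name__ == "__main__":
    model = DFANetSketch(num_classes=6)
    out = model(torch.randn(1, 3, 256, 256))
    print(out.shape)  # torch.Size([1, 6, 256, 256])

The sketch only mirrors the information flow named in the abstract (multi-resolution extraction, two aggregation stages, feature splicing, dense prediction); the paper's actual layer designs and training details should be taken from the full text.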




Updated: 2021-06-03