ABCNet: Attentive bilateral contextual network for efficient semantic segmentation of Fine-Resolution remotely sensed imagery
ISPRS Journal of Photogrammetry and Remote Sensing (IF 10.6), Pub Date: 2021-09-16, DOI: 10.1016/j.isprsjprs.2021.09.005
Rui Li, Shunyi Zheng, Ce Zhang, Chenxi Duan, Libo Wang, Peter M. Atkinson

Semantic segmentation of remotely sensed imagery plays a critical role in many real-world applications, such as environmental change monitoring, precision agriculture, environmental protection, and economic assessment. Following rapid developments in sensor technologies, vast numbers of fine-resolution satellite and airborne remote sensing images are now available, for which semantic segmentation is potentially a valuable method. However, because of the rich complexity and heterogeneity of information provided with an ever-increasing spatial resolution, state-of-the-art deep learning algorithms commonly adopt complex network structures for segmentation, which often result in significant computational demand. In particular, the frequently used fully convolutional network (FCN) relies heavily on fine-grained spatial detail (fine spatial resolution) and contextual information (large receptive fields), both imposing high computational costs. This impedes the practical utility of FCN for real-world applications, especially those requiring real-time data processing. In this paper, we propose a novel Attentive Bilateral Contextual Network (ABCNet), a lightweight convolutional neural network (CNN) with a spatial path and a contextual path. Extensive experiments, including a comprehensive ablation study, demonstrate that ABCNet has strong discrimination capability with competitive accuracy compared with state-of-the-art benchmark methods while achieving significantly increased computational efficiency. Specifically, the proposed ABCNet achieves a 91.3% overall accuracy (OA) on the Potsdam test dataset and significantly outperforms all lightweight benchmark methods. The code is freely available at https://github.com/lironui/ABCNet.
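To make the two-path idea concrete, below is a minimal sketch of a bilateral segmentation network in PyTorch: a shallow spatial path that preserves fine spatial detail at high resolution, and a deeper contextual path that aggregates context at lower resolution and re-weights it with a simple channel attention before fusion. The module names, channel widths, depths, and the attention form are illustrative assumptions for exposition only, not the design published in the paper; the authors' actual implementation is available at the GitHub link above.

```python
# Sketch of a two-path ("bilateral") segmentation network.
# Layer widths and the attention block are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_bn_relu(in_ch, out_ch, stride=1):
    """3x3 convolution followed by BatchNorm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class SpatialPath(nn.Module):
    """Shallow, wide path: downsample only 8x to keep spatial detail."""
    def __init__(self, in_ch=3, out_ch=128):
        super().__init__()
        self.layers = nn.Sequential(
            conv_bn_relu(in_ch, 64, stride=2),
            conv_bn_relu(64, 64, stride=2),
            conv_bn_relu(64, out_ch, stride=2),
        )

    def forward(self, x):
        return self.layers(x)


class ContextPath(Module := nn.Module):
    """Deeper path: downsample 16x, then re-weight channels with a
    squeeze-and-excitation style attention to emphasise useful context."""
    def __init__(self, in_ch=3, out_ch=128):
        super().__init__()
        self.layers = nn.Sequential(
            conv_bn_relu(in_ch, 64, stride=2),
            conv_bn_relu(64, 64, stride=2),
            conv_bn_relu(64, 128, stride=2),
            conv_bn_relu(128, out_ch, stride=2),
        )
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_ch, out_ch, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        feat = self.layers(x)
        return feat * self.attn(feat)  # channel attention re-weighting


class BilateralSegNet(nn.Module):
    """Fuse the two paths and predict per-pixel class scores."""
    def __init__(self, num_classes=6):
        super().__init__()
        self.spatial = SpatialPath()
        self.context = ContextPath()
        self.fuse = conv_bn_relu(256, 128)
        self.head = nn.Conv2d(128, num_classes, 1)

    def forward(self, x):
        sp = self.spatial(x)                                  # 1/8 resolution
        cp = self.context(x)                                  # 1/16 resolution
        cp = F.interpolate(cp, size=sp.shape[2:], mode="bilinear",
                           align_corners=False)               # match 1/8 grid
        out = self.head(self.fuse(torch.cat([sp, cp], dim=1)))
        return F.interpolate(out, size=x.shape[2:], mode="bilinear",
                             align_corners=False)             # back to input size


if __name__ == "__main__":
    net = BilateralSegNet(num_classes=6)      # e.g. the 6 ISPRS Potsdam classes
    logits = net(torch.randn(1, 3, 512, 512))
    print(logits.shape)                       # torch.Size([1, 6, 512, 512])
```

The design intuition this sketch illustrates is the efficiency argument in the abstract: only the shallow spatial path runs at high resolution, while the context and attention computation happen on heavily downsampled feature maps, keeping the overall cost low.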




Updated: 2021-09-17