Unsupervised scene adaptation for semantic segmentation of urban mobile laser scanning point clouds
ISPRS Journal of Photogrammetry and Remote Sensing (IF 12.7) | Pub Date: 2020-10-06 | DOI: 10.1016/j.isprsjprs.2020.10.002
Haifeng Luo, Kourosh Khoshelham, Lina Fang, Chongcheng Chen

Semantic segmentation is a fundamental task in understanding urban mobile laser scanning (MLS) point clouds. Recently, deep learning-based methods have become prominent for semantic segmentation of MLS point clouds, and many recent works have achieved state-of-the-art performance on open benchmarks. However, due to differences between objects across scenes, such as varying building heights and different forms of the same roadside objects, the existing open benchmarks (source scenes) often differ significantly from the actual application datasets (target scenes). As a result, semantic segmentation networks trained on source scenes underperform when applied to target scenes. In this paper, we propose a novel method for unsupervised scene adaptation in semantic segmentation of urban MLS point clouds. First, we demonstrate the scene transfer phenomenon in urban MLS point clouds. Then, we propose a new pointwise attentive transformation module (PW-ATM) to adaptively perform data alignment. Next, a maximum classifier discrepancy-based (MCD-based) adversarial learning framework is adopted to further achieve feature alignment. Finally, an end-to-end alignment deep network architecture is designed for unsupervised scene-adaptive semantic segmentation of urban MLS point clouds. To experimentally evaluate the performance of the proposed approach, two large-scale labeled source scenes and two different target scenes were used for training, and four actual application scenes were used for testing. The experimental results indicate that our approach can effectively achieve scene adaptation for semantic segmentation of urban MLS point clouds.
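The core of the MCD-based adversarial step mentioned above can be sketched in a few lines. The sketch below follows the generic maximum classifier discrepancy formulation (two classifier heads on a shared feature extractor, with their disagreement on target-scene points measured as an L1 distance between class-probability outputs); it is an illustrative NumPy sketch of that loss, not the paper's exact network or training code, and the function names are ours.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over the class axis."""
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mcd_discrepancy(logits_f1, logits_f2):
    """Discrepancy between two classifier heads F1 and F2 on the same
    (target-scene) points: mean L1 distance between their per-point
    class-probability distributions.

    In MCD-style adversarial training this quantity is used twice:
      1. the classifier heads are updated to MAXIMIZE it on target data
         (so they disagree exactly where target features fall outside
         the source distribution), then
      2. the feature extractor is updated to MINIMIZE it
         (pulling target features toward the source feature space).
    """
    p1 = softmax(logits_f1)  # (num_points, num_classes)
    p2 = softmax(logits_f2)
    return np.mean(np.abs(p1 - p2))

# Toy example: per-point logits from two hypothetical classifier heads.
logits_a = np.array([[2.0, 0.5, -1.0],
                     [0.1, 0.2,  0.3]])
logits_b = logits_a + np.array([1.0, -1.0, 0.0])  # heads disagree

print(mcd_discrepancy(logits_a, logits_a))  # identical heads -> 0.0
print(mcd_discrepancy(logits_a, logits_b))  # disagreeing heads -> positive
```

Points where the two heads disagree are, by construction, points the source-trained decision boundaries handle inconsistently; driving this discrepancy down via the feature extractor is what aligns target-scene features with the source scenes.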




Updated: 2020-10-07