Directionally constrained fully convolutional neural network for airborne LiDAR point cloud classification
ISPRS Journal of Photogrammetry and Remote Sensing ( IF 12.7 ) Pub Date : 2020-02-18 , DOI: 10.1016/j.isprsjprs.2020.02.004
Congcong Wen , Lina Yang , Xiang Li , Ling Peng , Tianhe Chi

Point cloud classification plays an important role in a wide range of airborne light detection and ranging (LiDAR) applications, such as topographic mapping, forest monitoring, power line detection, and road detection. However, due to sensor noise and the high redundancy, incompleteness, and complexity of airborne LiDAR data, point cloud classification remains challenging. Traditional point cloud classification methods mostly focus on developing handcrafted point geometry features and employ machine learning-based classification models to label points. In recent years, advances in deep learning have led researchers to shift their focus towards learning-based models, specifically deep neural networks, for classifying airborne LiDAR point clouds. These methods typically start by transforming the unstructured 3D point sets into regular 2D representations, such as collections of feature images, and then employ a 2D CNN for point classification. Moreover, they usually need to compute additional local geometry features, such as planarity, sphericity, and roughness, to exploit the local structural information of the original 3D space. Nonetheless, the 3D-to-2D conversion incurs information loss. In this paper, we propose a directionally constrained fully convolutional neural network (D-FCN) that takes the original 3D coordinates and LiDAR intensity as input; thus, it can be applied directly to unstructured 3D point clouds for semantic labeling. Specifically, we first introduce a novel directionally constrained point convolution (D-Conv) module to extract locally representative features of 3D point sets from projected 2D receptive fields. To make full use of the orientation information of neighboring points, the proposed D-Conv module performs convolution in an orientation-aware manner by using a directionally constrained nearest neighborhood search.
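As an illustration of the directionally constrained neighborhood search described above, one common way to ensure that the selected neighbors cover all directions around a query point is to partition the projected XY plane into angular sectors and keep the nearest point(s) per sector. This is only a sketch of that idea under our own assumptions; the function name, sector count, and partitioning scheme below are illustrative, not the authors' implementation:

```python
import numpy as np

def directional_knn(points, query_idx, n_sectors=8, k_per_sector=1):
    """Sketch of a directionally constrained nearest-neighbor search:
    neighbors are grouped by angular sector in the projected XY plane,
    and only the nearest point(s) in each sector are kept, so every
    direction around the query point is represented."""
    query = points[query_idx]
    offsets = points[:, :2] - query[:2]                 # project to the XY plane
    dists = np.linalg.norm(offsets, axis=1)
    angles = np.arctan2(offsets[:, 1], offsets[:, 0])   # angle in [-pi, pi]
    sector_ids = ((angles + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors

    neighbors = []
    for s in range(n_sectors):
        # dists > 0 excludes the query point itself
        in_sector = np.where((sector_ids == s) & (dists > 0))[0]
        if in_sector.size:
            nearest = in_sector[np.argsort(dists[in_sector])[:k_per_sector]]
            neighbors.extend(nearest.tolist())
    return neighbors
```

Compared with a plain k-nearest-neighbor search, this sector-wise selection prevents all neighbors from clustering on one side of the query point, which is what makes the subsequent convolution orientation-aware.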
Then, we design a multiscale fully convolutional neural network with downsampling and upsampling blocks to enable multiscale point feature learning. The proposed D-FCN model can therefore process input point clouds of arbitrary size and directly predict the semantic labels of all input points in an end-to-end manner. Without requiring additional geometry features as input, the proposed method demonstrates superior performance on the International Society for Photogrammetry and Remote Sensing (ISPRS) 3D labeling benchmark dataset. The results show that our model achieves new state-of-the-art performance on the powerline, car, and facade categories. Moreover, to demonstrate the generalization ability of the proposed method, we conduct further experiments on the 2019 Data Fusion Contest dataset. Our method outperforms the comparison methods, achieving an overall accuracy of 95.6% and an average F1 score of 0.810.
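The two reported metrics, overall accuracy and average (macro) F1, follow the standard definitions. The sketch below shows how they are typically computed; it is a generic illustration, not code from the paper:

```python
import numpy as np

def overall_accuracy(y_true, y_pred):
    """Fraction of points whose predicted label matches the ground truth."""
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))

def average_f1(y_true, y_pred, classes):
    """Macro-averaged F1: per-class F1 scores averaged with equal weight,
    so rare classes (e.g. powerline) count as much as common ones."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    f1_scores = []
    for c in classes:
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        f1_scores.append(f1)
    return float(np.mean(f1_scores))
```

Macro averaging is the usual choice for benchmarks like the ISPRS 3D labeling dataset, where class sizes are highly imbalanced.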




Updated: 2020-02-18