True Orthoimage Generation Using Airborne LiDAR Data with Generative Adversarial Network-Based Deep Learning Model
Journal of Sensors (IF 1.4), Pub Date: 2021-06-14, DOI: 10.1155/2021/4304548
Young Ha Shin, Dong-Cheon Lee

An orthoimage, which is geometrically equivalent to a map, is one of the most important geospatial products. Displacement and occlusion in optical images are caused by perspective projection, camera tilt, and object relief. A digital surface model (DSM) is essential for generating true orthoimages, both to correct displacement and to recover occluded areas. Light detection and ranging (LiDAR) data collected by an airborne laser scanner (ALS) system is a major source of DSMs. Traditional methods require sophisticated procedures to produce a true orthoimage: most use the 3D coordinates of the DSM together with overlapping multiview images to orthorectify displacement and to detect and recover occluded areas. LiDAR point cloud data provides not only 3D coordinates but also intensity information reflected from object surfaces in a georeferenced, orthoprojected space. This paper proposes true orthoimage generation based on generative adversarial network (GAN) deep learning (DL) with the Pix2Pix model, using the intensity and DSM derived from the LiDAR data. The major advantage of using LiDAR data is that, in terms of projection geometry, it is already an occlusion-free true orthoimage, except where image quality is low. Intensive experiments were performed using the benchmark datasets provided by the International Society for Photogrammetry and Remote Sensing (ISPRS). The results demonstrate that the proposed approach can efficiently generate true orthoimages directly from LiDAR data. However, appropriate preprocessing to improve the quality of the LiDAR intensity data remains crucial for producing higher-quality true orthoimages.
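To make the Pix2Pix-based mapping described above concrete, the following is a minimal PyTorch sketch of a conditional GAN that translates a rasterized LiDAR raster into an orthoimage. This is not the authors' released implementation; the network sizes, the 2-channel input layout (normalized intensity + DSM), the 3-channel output, and the L1 weight are assumptions that follow the standard Pix2Pix objective (adversarial loss plus L1 reconstruction loss).

```python
# Minimal Pix2Pix-style sketch (assumed setup, not the authors' code):
# 2-channel LiDAR raster (intensity, DSM) -> 3-channel true orthoimage.
import torch
import torch.nn as nn

class UNetGenerator(nn.Module):
    """Small U-Net mapping a 2-channel LiDAR raster to a 3-channel orthoimage."""
    def __init__(self, in_ch=2, out_ch=3, base=64):
        super().__init__()
        self.down1 = nn.Sequential(nn.Conv2d(in_ch, base, 4, 2, 1), nn.LeakyReLU(0.2))
        self.down2 = nn.Sequential(nn.Conv2d(base, base * 2, 4, 2, 1),
                                   nn.BatchNorm2d(base * 2), nn.LeakyReLU(0.2))
        self.up1 = nn.Sequential(nn.ConvTranspose2d(base * 2, base, 4, 2, 1),
                                 nn.BatchNorm2d(base), nn.ReLU())
        self.up2 = nn.Sequential(nn.ConvTranspose2d(base * 2, out_ch, 4, 2, 1), nn.Tanh())

    def forward(self, x):
        d1 = self.down1(x)
        d2 = self.down2(d1)
        u1 = self.up1(d2)
        return self.up2(torch.cat([u1, d1], dim=1))  # skip connection

class PatchDiscriminator(nn.Module):
    """PatchGAN discriminator on the concatenated (LiDAR input, orthoimage) pair."""
    def __init__(self, in_ch=2 + 3, base=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(base, base * 2, 4, 2, 1), nn.BatchNorm2d(base * 2), nn.LeakyReLU(0.2),
            nn.Conv2d(base * 2, 1, 4, 1, 1))  # per-patch real/fake scores

    def forward(self, lidar, image):
        return self.net(torch.cat([lidar, image], dim=1))

def train_step(G, D, opt_g, opt_d, lidar, target, lambda_l1=100.0):
    """One training step with the usual Pix2Pix objective: BCE adversarial + L1 loss."""
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
    fake = G(lidar)

    # Discriminator update: real pairs -> 1, generated pairs -> 0
    opt_d.zero_grad()
    pred_real = D(lidar, target)
    pred_fake = D(lidar, fake.detach())
    loss_d = 0.5 * (bce(pred_real, torch.ones_like(pred_real)) +
                    bce(pred_fake, torch.zeros_like(pred_fake)))
    loss_d.backward()
    opt_d.step()

    # Generator update: fool the discriminator and stay close to the reference orthoimage
    opt_g.zero_grad()
    pred_fake = D(lidar, fake)
    loss_g = bce(pred_fake, torch.ones_like(pred_fake)) + lambda_l1 * l1(fake, target)
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

# Example usage with hypothetical patch sizes:
# G, D = UNetGenerator(), PatchDiscriminator()
# opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
# opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
# lidar  = torch.randn(4, 2, 256, 256)   # normalized intensity + DSM patches
# target = torch.randn(4, 3, 256, 256)   # reference true-orthoimage patches
# train_step(G, D, opt_g, opt_d, lidar, target)
```

In this kind of setup, the L1 term keeps the generated orthoimage geometrically consistent with the reference, while the PatchGAN term sharpens local texture; the abstract's remark about intensity preprocessing suggests that normalizing or denoising the intensity channel before rasterization would matter most for output quality.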

Updated: 2021-06-14