A fully learnable context-driven object-based model for mapping land cover using multi-view data from unmanned aircraft systems
Remote Sensing of Environment (IF 11.1) Pub Date: 2018-10-01, DOI: 10.1016/j.rse.2018.06.031
Tao Liu, Amr Abd-Elrahman, Alina Zare, Bon A. Dewitt, Luke Flory, Scot E. Smith

Abstract Context information is rarely used in object-based land cover classification. Previous models that attempted to utilize this information usually required the user to input empirical values for critical model parameters, leading to suboptimal performance. Multi-view image information is useful for improving classification accuracy, but methods to assimilate multi-view information for use in context-driven models have not been explored in the literature. Here we propose a novel method that exploits multi-view information to generate class membership probabilities. Moreover, we develop a new conditional random field (CRF) model that integrates multi-view information and context information to further improve land cover classification accuracy. This model does not require the user to manually input parameters, because all parameters of the CRF model are learned from the training dataset using a gradient descent approach. Using multi-view data extracted from small unmanned aircraft systems (UAS), we experimented with Gaussian Mixture Model (GMM), Random Forest (RF), Support Vector Machine (SVM), and Deep Convolutional Neural Network (DCNN) classifiers to test model performance. The results showed that our model improved average overall accuracies from 58.3% to 74.7% for the GMM classifier, from 75.8% to 87.3% for the RF classifier, from 75.0% to 84.4% for the SVM classifier, and from 80.3% to 86.3% for the DCNN classifier. Although the degree of improvement depends on the specific classifier, the proposed model significantly improves classification accuracy irrespective of classifier type.
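The abstract outlines two components: fusing per-view class-membership probabilities into a single term per image object, and a CRF over adjacent objects whose parameters are learned by gradient descent rather than set empirically. The following is a minimal NumPy sketch of that kind of pipeline, not the authors' implementation: the averaging fusion rule, the Potts-style pairwise potential, pseudolikelihood training, and all function names (fuse_views, site_scores, fit_crf, icm_predict) are illustrative assumptions, and the paper's actual potentials and learning objective may differ.

```python
import numpy as np

def fuse_views(view_probs):
    # view_probs: (n_views, n_objects, n_classes) class-membership
    # probabilities for the same image objects seen from different views.
    # Simple fusion by averaging; the paper's actual fusion rule is not
    # specified in the abstract.
    p = view_probs.mean(axis=0)
    return p / p.sum(axis=1, keepdims=True)

def site_scores(i, unary, nbrs, labels, w):
    # Conditional score of each class at object i given its neighbours:
    # w[0] weights the (log) fused classifier probability, w[1] weights
    # the number of neighbours already carrying that class (Potts term).
    agree = np.bincount(labels[nbrs[i]], minlength=unary.shape[1])
    return w[0] * np.log(unary[i] + 1e-12) + w[1] * agree

def fit_crf(unary, nbrs, labels, lr=0.05, steps=200):
    # Learn both CRF weights by gradient ascent on the log-pseudolikelihood
    # of the training labels, so no parameter is set by hand.
    w = np.ones(2)
    for _ in range(steps):
        grad = np.zeros(2)
        for i in range(len(labels)):
            s = site_scores(i, unary, nbrs, labels, w)
            p = np.exp(s - s.max())
            p /= p.sum()
            feats = np.stack([np.log(unary[i] + 1e-12),
                              np.bincount(labels[nbrs[i]], minlength=len(p))])
            grad += feats[:, labels[i]] - feats @ p  # observed - expected
        w += lr * grad / len(labels)
    return w

def icm_predict(unary, nbrs, w, iters=10):
    # Approximate MAP labelling by iterated conditional modes.
    labels = unary.argmax(axis=1)
    for _ in range(iters):
        for i in range(len(labels)):
            labels[i] = int(site_scores(i, unary, nbrs, labels, w).argmax())
    return labels
```

With unary = fuse_views(view_probs) and nbrs listing each object's adjacent objects in the segmentation, fit_crf returns weights that icm_predict then uses for context-driven labelling; the same unary term works for any base classifier (GMM, RF, SVM, or DCNN), consistent with the abstract's classifier-agnostic results.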

Updated: 2018-10-01