MDEAN: Multi-View Disparity Estimation with an Asymmetric Network
Electronics (IF 2.6) Pub Date: 2020-06-02, DOI: 10.3390/electronics9060924
Zhao Pei, Deqiang Wen, Yanning Zhang, Miao Ma, Min Guo, Xiuwei Zhang, Yee-Hong Yang

In recent years, disparity estimation of a scene based on deep learning methods has been extensively studied and significant progress has been made. In contrast, traditional image disparity estimation methods require considerable resources and consume much time in processes such as stereo matching and 3D reconstruction. At present, most deep-learning-based disparity estimation methods focus on estimating disparity from monocular images. Motivated by the finding from traditional methods that multi-view approaches are more accurate than monocular ones, especially for textureless scenes and scenes with thin structures, in this paper we present MDEAN, a new deep convolutional neural network that estimates disparity from multi-view images with an asymmetric encoder-decoder network structure. First, our method takes an arbitrary number of multi-view images as input. Next, we use these images to produce a set of plane-sweep cost volumes, which are combined to compute a high-quality disparity map using an end-to-end asymmetric network. The results show that our method performs better than state-of-the-art methods, in particular for outdoor scenes with sky, flat surfaces, and buildings.
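The plane-sweep step mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes rectified views so each sweep plane reduces to a horizontal pixel shift, uses mean absolute intensity difference as the matching cost, and replaces the learned asymmetric network with a simple winner-take-all argmin over the disparity axis. The function names `plane_sweep_cost_volume` and `wta_disparity` are hypothetical.

```python
import numpy as np

def plane_sweep_cost_volume(ref, views, max_disp):
    """Build a toy plane-sweep cost volume for rectified multi-view input.

    For each candidate disparity d, every neighboring view is shifted
    horizontally by d pixels toward the reference image; the cost is the
    absolute intensity difference, averaged over all views.
    ref:   (H, W) reference grayscale image
    views: list of (H, W) neighboring grayscale images
    Returns a cost volume of shape (max_disp, H, W).
    """
    H, W = ref.shape
    cost = np.zeros((max_disp, H, W), dtype=np.float32)
    for d in range(max_disp):
        per_view = []
        for v in views:
            warped = np.empty_like(v)
            if d == 0:
                warped[:] = v
            else:
                warped[:, d:] = v[:, :-d]   # shift right by d pixels
                warped[:, :d] = v[:, :1]    # pad with the edge column
            per_view.append(np.abs(ref - warped))
        cost[d] = np.mean(per_view, axis=0)
    return cost

def wta_disparity(cost):
    """Winner-take-all: pick the lowest-cost disparity per pixel."""
    return np.argmin(cost, axis=0)
```

In MDEAN the cost volumes from the different views are instead fed into the end-to-end asymmetric encoder-decoder network, which learns to aggregate and regularize them; the argmin above only stands in for that learned aggregation.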
