Coupled Real-Synthetic Domain Adaptation for Real-World Deep Depth Enhancement.
IEEE Transactions on Image Processing (IF 10.6). Pub Date: 2020-04-23. DOI: 10.1109/tip.2020.2988574
Xiao Gu , Yao Guo , Fani Deligianni , Guang-Zhong Yang

Advances in depth sensing technologies have enabled simultaneous acquisition of color and depth data under diverse environments. However, most depth sensors have a lower resolution than the associated color channels, and this mismatch can affect applications that require accurate depth recovery. Existing depth enhancement methods rely on simplistic noise models and generalize poorly under real-world conditions. In this paper, a coupled real-synthetic domain adaptation method is proposed that enables domain transfer between high-quality depth simulators and real depth camera data for super-resolution depth recovery. The method first applies realistic degradation to synthetic depth images, and then enhances the degraded depth data to high quality with a color-guided sub-network. The key advantage of this work is that it generalizes well to real-world datasets without further training or fine-tuning. Detailed quantitative and qualitative results demonstrate that the proposed method outperforms previous methods fine-tuned on the specific datasets.
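To make the "color-guided" idea concrete, the sketch below implements a classical hand-crafted baseline for color-guided depth super-resolution: joint bilateral upsampling, where a low-resolution depth map is upsampled using a high-resolution color image as a guide. This is not the paper's learned sub-network — the proposed method replaces such fixed filters with a trained network — and all function names and parameters here are illustrative.

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, color_hr, scale, radius=2,
                             sigma_spatial=1.0, sigma_range=0.1):
    """Upsample a low-res depth map using a high-res color image as guide.

    Classical joint bilateral upsampling: each high-res output pixel is a
    weighted average of nearby low-res depth samples, with weights combining
    spatial proximity and color similarity in the high-res guide image.
    """
    h, w = color_hr.shape[:2]
    out = np.zeros((h, w), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            # Corresponding (fractional) position in the low-res depth map.
            yl, xl = y / scale, x / scale
            acc, wsum = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ys = int(round(yl)) + dy
                    xs = int(round(xl)) + dx
                    if not (0 <= ys < depth_lr.shape[0]
                            and 0 <= xs < depth_lr.shape[1]):
                        continue
                    # Spatial weight, measured in low-res coordinates.
                    ws = np.exp(-((ys - yl) ** 2 + (xs - xl) ** 2)
                                / (2 * sigma_spatial ** 2))
                    # Range weight from the high-res color guide: depth is
                    # only smoothed across pixels with similar color, which
                    # preserves depth edges that coincide with color edges.
                    gy = min(h - 1, int(ys * scale))
                    gx = min(w - 1, int(xs * scale))
                    diff = color_hr[y, x] - color_hr[gy, gx]
                    wr = np.exp(-np.dot(diff, diff) / (2 * sigma_range ** 2))
                    acc += ws * wr * depth_lr[ys, xs]
                    wsum += ws * wr
            out[y, x] = (acc / wsum if wsum > 0
                         else depth_lr[int(yl), int(xl)])
    return out
```

The learned approach in the paper plays the same structural role — fusing high-resolution color cues into a low-quality depth channel — but additionally models realistic sensor degradation so the enhancement network transfers from synthetic training data to real cameras.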

Updated: 2020-04-23