Self-supervised Learning of Detailed 3D Face Reconstruction.
IEEE Transactions on Image Processing (IF 10.6) Pub Date: 2020-08-27, DOI: 10.1109/tip.2020.3017347
Yajing Chen, Fanzi Wu, Zeyu Wang, Yibing Song, Yonggen Ling, Linchao Bao

In this article, we present an end-to-end learning framework for detailed 3D face reconstruction from a single image. Our approach uses a 3DMM-based coarse model and a displacement map in UV-space to represent a 3D face. Unlike previous work addressing the problem, our learning framework does not require supervision from surrogate ground-truth 3D models computed with traditional approaches. Instead, we utilize the input image itself as supervision during learning. In the first stage, we combine a photometric loss and a facial perceptual loss between the input face and the rendered face to regress a 3DMM-based coarse model. In the second stage, both the input image and the regressed texture of the coarse model are unwrapped into UV-space and then sent through an image-to-image translation network to predict a displacement map in UV-space. The displacement map and the coarse model are used to render a final detailed face, which again can be compared with the original input image to serve as a photometric loss for the second stage. The advantage of learning the displacement map in UV-space is that face alignment is done explicitly during the unwrapping, so facial details are easier to learn from a large amount of data. Extensive experiments demonstrate the superiority of our method over previous work.
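The two-stage supervision described in the abstract amounts to a loss computation: a photometric plus facial perceptual loss on the rendered coarse face, followed by a second photometric loss on the detailed face rendered with the predicted UV-space displacement map. The following is a minimal sketch of that setup in PyTorch; all module and helper names here (CoarseRegressor, DisplacementNet, render_face, unwrap_to_uv, perceptual_features) are hypothetical placeholders standing in for the paper's components (a differentiable renderer, the fixed 3DMM UV unwrapping, and a pretrained face-recognition feature extractor), not the authors' actual implementation.

```python
# Hedged sketch of the two-stage self-supervised losses. All names are
# placeholders assumed for illustration, not the paper's real code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoarseRegressor(nn.Module):
    """Stage 1: regress 3DMM coefficients (shape/expression/texture/pose/lighting)
    from a single input face image."""
    def __init__(self, n_coeffs=257):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_coeffs),
        )
    def forward(self, img):
        return self.backbone(img)

class DisplacementNet(nn.Module):
    """Stage 2: image-to-image translation in UV-space that predicts a
    per-texel displacement map from the unwrapped input and coarse texture."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, uv_input, uv_coarse_tex):
        return self.net(torch.cat([uv_input, uv_coarse_tex], dim=1))

# Placeholder differentiable steps; the paper uses a differentiable renderer
# and a fixed UV unwrapping of the 3DMM topology instead of these stubs.
def render_face(coeffs, displacement=None, size=224):
    b = coeffs.shape[0]
    base = torch.sigmoid(coeffs[:, :3]).view(b, 3, 1, 1).expand(b, 3, size, size)
    if displacement is not None:
        base = base + 0.1 * F.interpolate(displacement, size=(size, size))
    return base

def unwrap_to_uv(img, uv_size=256):
    return F.interpolate(img, size=(uv_size, uv_size))

def perceptual_features(img):
    # Stand-in for a pretrained face-recognition network providing the
    # facial perceptual (identity) loss features.
    return F.avg_pool2d(img, 32).flatten(1)

def two_stage_losses(img, coarse_net, disp_net):
    # Stage 1: photometric + facial perceptual loss on the rendered coarse face.
    coeffs = coarse_net(img)
    coarse_render = render_face(coeffs)
    loss_photo1 = F.l1_loss(coarse_render, img)
    loss_percep = F.mse_loss(perceptual_features(coarse_render),
                             perceptual_features(img))
    # Stage 2: unwrap the input and the coarse texture into UV-space, predict a
    # displacement map, re-render the detailed face, apply a photometric loss.
    uv_input = unwrap_to_uv(img)
    uv_coarse = unwrap_to_uv(coarse_render)
    disp = disp_net(uv_input, uv_coarse)
    detailed_render = render_face(coeffs, displacement=disp)
    loss_photo2 = F.l1_loss(detailed_render, img)
    return loss_photo1 + loss_percep + loss_photo2

if __name__ == "__main__":
    img = torch.rand(2, 3, 224, 224)  # batch of input face images
    loss = two_stage_losses(img, CoarseRegressor(), DisplacementNet())
    loss.backward()
    print(float(loss))
```

Because both stages are supervised only by comparisons against the input image itself, no ground-truth 3D scans or fitted surrogate models enter the training loop, which is the self-supervision the abstract emphasizes.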

Updated: 2020-09-08