Image-Guided Human Reconstruction via Multi-Scale Graph Transformation Networks
IEEE Transactions on Image Processing (IF 10.8) Pub Date: 2021-05-19, DOI: 10.1109/tip.2021.3080177
Kun Li , Hao Wen , Qiao Feng , Yuxiang Zhang , Xiongzheng Li , Jing Huang , Cunkuan Yuan , Yu-Kun Lai , Yebin Liu

3D human reconstruction from a single image is a challenging problem. Existing methods have difficulty inferring 3D clothed human models with consistent topologies across various poses. In this paper, we propose an efficient and effective method using a hierarchical graph transformation network. To deal with large deformations and avoid distorted geometries, 3D human shapes are represented not by Euclidean coordinates directly but by a vertex-based deformation representation that effectively encodes the deformation and copes well with large deformations. To infer a 3D human mesh consistent with the input real image, we also use a perspective projection layer to incorporate perceptual image features into the deformation representation. Our model is easy to train, converges quickly, and has a short test time. Besides, we present the D²Human (Dynamic Detailed Human) dataset, which includes variously posed 3D human meshes with consistent topologies and rich geometric details, together with the captured color images and SMPL models; it is useful for training and evaluating deep frameworks, particularly graph neural networks. Experimental results demonstrate that our method achieves more plausible and complete 3D human reconstruction from a single image compared with several state-of-the-art methods. The code and dataset are available for research purposes at http://cic.tju.edu.cn/faculty/likun/projects/MGTnet.
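The abstract mentions a perspective projection layer that attaches perceptual image features to the mesh representation. Below is a minimal sketch, not the authors' implementation, of how such a layer is commonly realized: 3D vertices are projected through a pinhole camera and per-vertex features are bilinearly sampled from a CNN feature map. The function names, tensor shapes, camera intrinsics, and the use of `grid_sample` are all assumptions for illustration.

```python
# Hypothetical sketch of a perspective projection + per-vertex feature
# sampling layer, assuming a pinhole camera and a PyTorch feature map.
import torch
import torch.nn.functional as F

def project_vertices(vertices, intrinsics):
    """Project 3D vertices (B, N, 3) to pixel coordinates (B, N, 2)
    with a pinhole camera given intrinsics (B, 3, 3)."""
    cam = torch.bmm(vertices, intrinsics.transpose(1, 2))   # (B, N, 3)
    return cam[..., :2] / cam[..., 2:3].clamp(min=1e-6)     # divide by depth

def sample_vertex_features(feature_map, pixel_coords, image_size):
    """Bilinearly sample a CNN feature map (B, C, H, W) at projected
    vertex locations (x, y), returning per-vertex features (B, N, C)."""
    # Normalize pixel coordinates to the [-1, 1] range expected by grid_sample.
    norm = 2.0 * pixel_coords / (image_size - 1.0) - 1.0     # (B, N, 2)
    grid = norm.unsqueeze(2)                                 # (B, N, 1, 2)
    sampled = F.grid_sample(feature_map, grid, align_corners=True)
    return sampled.squeeze(-1).transpose(1, 2)               # (B, N, C)

# Toy example: 6890 SMPL-like vertices, a 64-channel 56x56 feature map.
B, N, C, H, W = 1, 6890, 64, 56, 56
verts = torch.randn(B, N, 3) + torch.tensor([0.0, 0.0, 3.0])  # in front of camera
K = torch.tensor([[[50.0, 0.0, 28.0], [0.0, 50.0, 28.0], [0.0, 0.0, 1.0]]])
feats = torch.randn(B, C, H, W)
pix = project_vertices(verts, K)
per_vertex = sample_vertex_features(
    feats, pix, image_size=torch.tensor([W, H], dtype=torch.float32))
print(per_vertex.shape)  # torch.Size([1, 6890, 64])
```

The sampled per-vertex features would then be concatenated with the vertex-based deformation representation before being processed by the graph network; that wiring is specific to the paper and is not reproduced here.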

Updated: 2021-05-19