Robust and Accurate 3D Self-Portraits in Seconds
IEEE Transactions on Pattern Analysis and Machine Intelligence (IF 20.8) Pub Date: 2021-09-16, DOI: 10.1109/tpami.2021.3113164
Zhe Li, Tao Yu, Zerong Zheng, Yebin Liu

In this paper, we propose an efficient method for robust and accurate 3D self-portraits using a single RGBD camera. Our method can generate detailed and realistic 3D self-portraits in seconds and can handle subjects wearing extremely loose clothes. To achieve highly efficient and robust reconstruction, we propose PIFusion, which combines learning-based 3D recovery with volumetric non-rigid fusion to generate accurate sparse partial scans of the subject. Meanwhile, a non-rigid volumetric deformation method is proposed to continuously refine the learned shape prior. Moreover, a lightweight bundle adjustment algorithm is proposed to guarantee that all the partial scans not only “loop” with each other but also remain consistent with the selected live key observations. Finally, to further improve realism, we propose a non-rigid texture optimization that improves texture quality. Additionally, we contribute a benchmark for single-view 3D self-portrait reconstruction: an evaluation dataset containing 10 single-view RGBD sequences of a self-rotating performer wearing various clothes, together with the corresponding ground-truth 3D model for the first frame of each sequence. Results and experiments on this dataset show that the proposed method outperforms state-of-the-art methods in accuracy, efficiency, and generality.
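To make the pipeline described in the abstract concrete, below is a minimal, hypothetical sketch of how a single-view RGBD self-portrait pipeline of this kind could be organized. It is not the authors' implementation: the function names (learned_shape_prior, nonrigid_fuse, lightweight_bundle_adjustment, texture_optimization), the pinhole intrinsics, and the synthetic input are all placeholders, and each stage is a drastically simplified stand-in for the learned implicit-surface prior, canonical-volume non-rigid fusion, bundle-adjustment solver, and texture optimization that the paper describes.

```python
"""Hypothetical skeleton of a single-view RGBD self-portrait pipeline.
Every stage is a simplified placeholder, not the method from the paper."""
import numpy as np


def learned_shape_prior(rgb, depth):
    """Placeholder for learning-based 3D recovery: here we only back-project
    the depth map into a 3D point cloud using assumed pinhole intrinsics."""
    h, w = depth.shape
    fx = fy = 500.0                      # assumed focal length (placeholder)
    cx, cy = w / 2.0, h / 2.0
    v, u = np.nonzero(depth > 0)
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)   # (N, 3) partial scan


def lightweight_bundle_adjustment(partial_scans, key_scan):
    """Toy stand-in for bundle adjustment: translate each partial scan so its
    centroid matches the key observation, bringing all scans into a common
    frame. A real solver would jointly optimize non-rigid warps and
    loop-closure consistency with the selected key frames."""
    target = key_scan.mean(axis=0)
    return [s - s.mean(axis=0) + target for s in partial_scans]


def nonrigid_fuse(partial_scans):
    """Placeholder for volumetric non-rigid fusion: simply merge the aligned
    scans (a real system would warp them into a canonical TSDF volume)."""
    return np.concatenate(partial_scans, axis=0)


def texture_optimization(fused_points, rgbs):
    """Placeholder texture step: attach one average color to every point."""
    color = np.mean([img.reshape(-1, 3).mean(axis=0) for img in rgbs], axis=0)
    return np.hstack([fused_points, np.tile(color, (len(fused_points), 1))])


def reconstruct(rgbd_sequence):
    """End-to-end pipeline over a self-rotation sequence of (rgb, depth) pairs."""
    scans = [learned_shape_prior(rgb, depth) for rgb, depth in rgbd_sequence]
    scans = lightweight_bundle_adjustment(scans, key_scan=scans[0])
    fused = nonrigid_fuse(scans)
    return texture_optimization(fused, [rgb for rgb, _ in rgbd_sequence])


if __name__ == "__main__":
    # Synthetic stand-in for a short self-rotation RGBD capture.
    rng = np.random.default_rng(0)
    seq = [(rng.random((240, 320, 3)), rng.random((240, 320)) + 1.0)
           for _ in range(4)]
    model = reconstruct(seq)
    print("colored point model:", model.shape)   # (N, 6): xyz + rgb
```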

中文翻译:


几秒内完成稳健且准确的 3D 自画像



在本文中,我们提出了一种使用单个 RGBD 相机进行稳健且准确的 3D 自画像重建的高效方法。我们的方法可以在几秒钟内生成细节丰富、逼真的 3D 自画像,并能够处理穿着极其宽松衣物的拍摄对象。为了实现高效且稳健的重建,我们提出了 PIFusion,它将基于学习的 3D 恢复与体积非刚性融合相结合,以生成拍摄对象的精确稀疏部分扫描。同时,我们提出了一种非刚性体积变形方法,用于不断细化学习到的形状先验。此外,我们还提出了一种轻量级集束调整(bundle adjustment)算法,以保证所有部分扫描不仅能够相互“闭环”,还能与所选的实时关键观测保持一致。最后,为了生成更逼真的肖像,我们提出了非刚性纹理优化来提高纹理质量。此外,我们还贡献了一个用于单视图 3D 自画像重建的基准:该评估数据集包含 10 个单视图 RGBD 序列,记录了穿着各种服装的自转表演者,并提供每个序列第一帧对应的真值 3D 模型。基于该数据集的结果和实验表明,所提出的方法在准确性、效率和通用性方面均优于最先进的方法。
更新日期:2021-09-16