Weakly Supervised Learning for Single Depth-Based Hand Shape Recovery
IEEE Transactions on Image Processing (IF 10.8), Pub Date: 2020-11-17, DOI: 10.1109/tip.2020.3037479
Xiaoming Deng, Yuying Zhu, Yinda Zhang, Zhaopeng Cui, Ping Tan, Wentian Qu, Cuixia Ma, Hongan Wang

Recent emerging technologies such as AR/VR and HCI are creating high demand for more comprehensive hand shape understanding, which requires not only the 3D hand skeleton pose but also the hand shape geometry. In this paper, we propose a deep learning framework to produce 3D hand shape from a single depth image. To address the challenge that capturing ground-truth 3D hand shape for the training dataset is non-trivial, we leverage synthetic data to construct a statistical hand shape model and adopt weak supervision from widely accessible hand skeleton pose annotations. To bridge the gap caused by the different hand skeleton definitions in existing public datasets, we propose a joint regression network for hand pose adaptation. To reconstruct the hand shape, we use a Chamfer loss between the predicted hand shape and the point cloud derived from the input depth to learn the shape reconstruction model in a weakly supervised manner. Experiments demonstrate that our model adapts well to real data and produces accurate hand shapes, outperforming state-of-the-art methods both qualitatively and quantitatively.
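The Chamfer loss referred to above measures the bidirectional nearest-neighbor distance between the predicted hand mesh vertices and the point cloud back-projected from the input depth image. The paper's text does not include code, so the following NumPy sketch is only an illustration of the general idea; the function name, the symmetric squared-distance formulation, and the equal weighting of the two directions are assumptions, not the authors' implementation.

```python
import numpy as np

def chamfer_loss(pred_vertices, depth_points):
    """Symmetric Chamfer distance between two 3D point sets (illustrative sketch).

    pred_vertices: (N, 3) vertices of the predicted hand shape.
    depth_points:  (M, 3) point cloud back-projected from the input depth image.
    Returns the sum of mean nearest-neighbor squared distances in both directions.
    """
    # Pairwise squared distances between the two sets, shape (N, M).
    diff = pred_vertices[:, None, :] - depth_points[None, :, :]
    d2 = np.sum(diff ** 2, axis=-1)

    # Each predicted vertex to its closest depth point, and vice versa.
    pred_to_depth = d2.min(axis=1).mean()
    depth_to_pred = d2.min(axis=0).mean()
    return pred_to_depth + depth_to_pred
```

Because this loss only requires the raw depth-derived point cloud rather than ground-truth meshes, it can supervise shape reconstruction weakly, which is the role it plays in the framework described above.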

Last updated: 2020-11-27