Spatial-aware stacked regression network for real-time 3D hand pose estimation
Neurocomputing (IF 5.5), Pub Date: 2021-01-18, DOI: 10.1016/j.neucom.2021.01.045
Pengfei Ren, Haifeng Sun, Weiting Huang, Jiachang Hao, Daixuan Cheng, Qi Qi, Jingyu Wang, Jianxin Liao

Making full use of the spatial information in depth data is crucial for 3D hand pose estimation from a single depth image. In this paper, we propose a Spatial-aware Stacked Regression Network (SSRN) for fast, robust, and accurate 3D hand pose estimation from a single depth image. By adopting a differentiable pose re-parameterization process, our method efficiently encodes the pose-dependent 3D spatial structure of the depth data as spatial-aware representations. Taking these spatial-aware representations as inputs, the stacked regression network exploits multi-joint spatial context and the 3D spatial relationship between the estimated pose and the depth data to predict a refined hand pose. To further improve estimation accuracy, we adopt a spatial attention mechanism that suppresses features irrelevant to pose regression. To improve the running speed of the network, we propose a cross-stage self-distillation mechanism that distills knowledge within the network itself. Experiments on four datasets show that our proposed method achieves state-of-the-art accuracy while running at around 330 FPS on a single GPU and 35 FPS on a single CPU.
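The abstract describes the pipeline only at a high level. For a concrete picture, the following PyTorch sketch illustrates the general idea under stated assumptions: a two-stage stacked regressor whose second stage consumes a spatial-aware re-encoding of the first-stage pose (here simplified to per-joint 2D Gaussian heatmaps), a single-channel spatial attention gate, and a cross-stage self-distillation term in the training loss. All module names, tensor shapes, and the heatmap re-parameterization are illustrative assumptions, not the authors' SSRN implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_JOINTS = 21  # a common hand-joint count; an assumption, not from the paper


def pose_to_heatmaps(pose_uvd, size=32, sigma=2.0):
    """Differentiably re-encode a (B, J, 3) pose, with u/v normalized to
    [0, 1], as per-joint 2D Gaussian heatmaps of shape (B, J, size, size).
    A stand-in for the paper's pose re-parameterization, which also encodes
    the 3D structure of the depth data."""
    B, J, _ = pose_uvd.shape
    ys, xs = torch.meshgrid(
        torch.arange(size, dtype=pose_uvd.dtype, device=pose_uvd.device),
        torch.arange(size, dtype=pose_uvd.dtype, device=pose_uvd.device),
        indexing="ij",
    )
    cx = pose_uvd[:, :, 0].view(B, J, 1, 1) * (size - 1)
    cy = pose_uvd[:, :, 1].view(B, J, 1, 1) * (size - 1)
    return torch.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))


class SpatialAttention(nn.Module):
    """Gate feature maps with a learned single-channel spatial mask."""

    def __init__(self, channels):
        super().__init__()
        self.mask = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):
        return x * torch.sigmoid(self.mask(x))


class StackedPoseRegressor(nn.Module):
    """Two-stage regressor: stage 1 predicts an initial pose from depth
    features; stage 2 refines it from features fused with a spatial-aware
    encoding of the stage-1 estimate."""

    def __init__(self, feat_ch=64, joints=NUM_JOINTS):
        super().__init__()
        self.joints = joints
        self.backbone = nn.Sequential(  # tiny depth-image encoder
            nn.Conv2d(1, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.stage1 = nn.Linear(feat_ch, joints * 3)
        self.fuse = nn.Conv2d(feat_ch + joints, feat_ch, kernel_size=1)
        self.attn = SpatialAttention(feat_ch)
        self.stage2 = nn.Linear(feat_ch, joints * 3)

    def forward(self, depth):                        # depth: (B, 1, 128, 128)
        feats = self.backbone(depth)                 # (B, C, 32, 32)
        pose1 = torch.sigmoid(
            self.stage1(feats.mean(dim=(2, 3)))).view(-1, self.joints, 3)
        heatmaps = pose_to_heatmaps(pose1, size=feats.shape[-1])
        refined = self.attn(self.fuse(torch.cat([feats, heatmaps], dim=1)))
        pose2 = torch.sigmoid(
            self.stage2(refined.mean(dim=(2, 3)))).view(-1, self.joints, 3)
        return pose1, pose2


def training_loss(pose1, pose2, gt):
    """Supervise both stages and let the early stage mimic the detached
    later stage, a simple form of cross-stage self-distillation."""
    return (F.smooth_l1_loss(pose2, gt)
            + F.smooth_l1_loss(pose1, gt)
            + 0.5 * F.smooth_l1_loss(pose1, pose2.detach()))
```

An SSRN-style model would additionally encode depth values in the spatial-aware representation and could stack further refinement stages; this sketch only mirrors the overall data flow of estimate, re-encode, attend, and refine, with the distillation term nudging the cheaper first stage toward the refined output.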



Updated: 2021-02-05