Learning to Estimate 3D Human Pose from Point Cloud
IEEE Sensors Journal (IF 4.3) Pub Date: 2020-10-15, DOI: 10.1109/jsen.2020.2999849
Yufan Zhou , Haiwei Dong , Abdulmotaleb El Saddik

3D pose estimation is a challenging problem in computer vision. Most existing neural-network-based approaches process color or depth images with convolutional neural networks (CNNs). In this paper, we study the task of 3D human pose estimation from depth images. Unlike existing CNN-based human pose estimation methods, we propose a deep human pose network for 3D pose estimation that takes point cloud data as input to model the surfaces of complex human structures. We first cast the 3D human pose estimation problem from 2D depth images into 3D point clouds and directly predict the 3D joint positions. Experiments on two public datasets show that our approach achieves higher accuracy than previous state-of-the-art methods. The results reported on both the ITOP and EVAL datasets demonstrate the effectiveness of our method on the targeted tasks.
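To make the pipeline concrete, the sketch below illustrates the two steps the abstract describes: back-projecting a depth image into a 3D point cloud, and regressing joint coordinates directly from that cloud. It is not the authors' network; the camera intrinsics (fx, fy, cx, cy), the PointNet-style architecture, the layer widths, and the 15-joint output (the ITOP joint count) are all assumptions made for illustration.

```python
import numpy as np
import torch
import torch.nn as nn


def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (H x W, in metres) into an N x 3 point cloud
    using the pinhole camera model. fx, fy, cx, cy are assumed intrinsics."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                   # drop pixels with no depth


class PointPoseNet(nn.Module):
    """Hypothetical PointNet-style regressor: a shared per-point MLP,
    global max pooling over points, and a fully connected head that
    outputs num_joints x 3 joint coordinates."""

    def __init__(self, num_joints=15):
        super().__init__()
        self.num_joints = num_joints
        self.point_mlp = nn.Sequential(               # applied to every point
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.ReLU(),
        )
        self.head = nn.Sequential(                    # global feature -> joints
            nn.Linear(1024, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, num_joints * 3),
        )

    def forward(self, pts):                           # pts: (B, N, 3)
        x = pts.transpose(1, 2)                       # (B, 3, N) for Conv1d
        feat = self.point_mlp(x)                      # (B, 1024, N)
        global_feat = feat.max(dim=2).values          # order-invariant pooling
        return self.head(global_feat).view(-1, self.num_joints, 3)


if __name__ == "__main__":
    depth = np.random.uniform(1.0, 3.0, size=(240, 320))      # fake depth frame
    cloud = depth_to_point_cloud(depth, fx=285.0, fy=285.0, cx=160.0, cy=120.0)
    sample = torch.from_numpy(cloud[:2048]).float().unsqueeze(0)
    joints = PointPoseNet()(sample)                   # (1, 15, 3) predicted joints
    print(joints.shape)
```

The max pooling over the point dimension makes the global feature invariant to point ordering, which is why a point-cloud network of this kind can consume the unordered back-projected points directly instead of a gridded depth image.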
