DeMoCap: Low-Cost Marker-Based Motion Capture
International Journal of Computer Vision (IF 11.6), Pub Date: 2021-10-15, DOI: 10.1007/s11263-021-01526-z
Anargyros Chatzitofis 1,2, Stefanos Kollias 1, Dimitrios Zarpalas 2, Petros Daras 2
Optical marker-based motion capture (MoCap) remains the predominant way to acquire high-fidelity articulated body motions. We introduce DeMoCap, the first data-driven approach for end-to-end marker-based MoCap, using only a sparse setup of spatio-temporally aligned, consumer-grade infrared-depth cameras. While trading off some of the typical features of high-end solutions, our approach is the sole robust option for marker-based MoCap at a far lower cost. We introduce an end-to-end differentiable markers-to-pose model that addresses a set of challenges such as under-constrained position estimates, noisy input data, and spatial configuration invariance. We simultaneously handle depth and marker-detection noise, label and localize the markers, and estimate the 3D pose by introducing a novel spatial 3D coordinate regression technique under a multi-view rendering and supervision concept. DeMoCap is driven by a special dataset captured with four spatio-temporally aligned, low-cost Intel RealSense D415 sensors and a professional 24-camera MXT40S MoCap system, used as input and ground truth, respectively.
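As a rough illustration of the differentiable spatial coordinate regression mentioned in the abstract, the sketch below implements a generic soft-argmax over a per-marker 3D heat volume, which is the standard way to make coordinate extraction differentiable end to end. The function name, tensor shapes, and voxel-grid formulation are assumptions for illustration only and are not taken from the DeMoCap paper.

```python
# Minimal sketch (assumed formulation, not the authors' exact model):
# differentiable soft-argmax coordinate regression over 3D heat volumes.
import torch

def soft_argmax_3d(heat: torch.Tensor) -> torch.Tensor:
    """heat: (B, M, D, H, W) per-marker activation volumes.
    Returns (B, M, 3) expected (x, y, z) coordinates in voxel units."""
    b, m, d, h, w = heat.shape
    # Normalize each marker's volume into a probability distribution.
    probs = torch.softmax(heat.reshape(b, m, -1), dim=-1).reshape(b, m, d, h, w)
    zs = torch.arange(d, dtype=heat.dtype, device=heat.device)
    ys = torch.arange(h, dtype=heat.dtype, device=heat.device)
    xs = torch.arange(w, dtype=heat.dtype, device=heat.device)
    # Expected coordinate along each axis = sum of probability * index.
    z = torch.einsum('bmdhw,d->bm', probs, zs)
    y = torch.einsum('bmdhw,h->bm', probs, ys)
    x = torch.einsum('bmdhw,w->bm', probs, xs)
    return torch.stack((x, y, z), dim=-1)

# Example: 2 samples, 53 markers, a hypothetical 32x64x64 voxel grid.
coords = soft_argmax_3d(torch.randn(2, 53, 32, 64, 64))
print(coords.shape)  # torch.Size([2, 53, 3])
```

Because the expectation is a smooth function of the network's activations, gradients from a pose-level supervision signal can flow back through the regressed marker coordinates, which is the property an end-to-end markers-to-pose model relies on.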



Updated: 2021-10-17