Full-body motion capture for multiple closely interacting persons
Graphical Models ( IF 1.7 ) Pub Date : 2020-05-22 , DOI: 10.1016/j.gmod.2020.101072
Kun Li , Yali Mao , Yunke Liu , Ruizhi Shao , Yebin Liu

Human shape and pose estimation is a popular but challenging problem, especially when the body, hands, feet, and face must be captured jointly for multiple closely interacting persons. Existing methods achieve total motion capture only for a single person, or for multiple persons without close interaction. In this paper, we present a fully automatic and effective method to capture full-body human performance, including body poses, face poses, hand gestures, and feet orientations, for multiple closely interacting persons. We predict 2D keypoints corresponding to the poses of the body, face, hands, and feet of each person, and associate the same person across multi-view videos by computing personalized appearance descriptors, which reduces ambiguities and uncertainties. To handle occlusions and obtain temporally coherent human shapes, we estimate the shape and pose of each person with spatio-temporal tracking and constraints. Experimental results demonstrate that our method outperforms state-of-the-art methods.
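The cross-view association step described in the abstract can be illustrated with a small sketch. Assuming each detected person in each view carries an appearance descriptor vector, persons are matched across views by maximizing total descriptor similarity; the function names and the brute-force assignment strategy here are hypothetical illustrations, not the authors' actual implementation:

```python
from itertools import permutations
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two appearance descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def associate(view_a, view_b):
    """Match persons across two views by maximizing the total
    appearance-descriptor similarity (brute-force assignment over
    all permutations; real systems would use the Hungarian algorithm).
    Returns a list of (index_in_view_a, index_in_view_b) pairs."""
    best, best_score = None, float("-inf")
    for perm in permutations(range(len(view_b))):
        score = sum(cosine(view_a[i], view_b[j])
                    for i, j in enumerate(perm))
        if score > best_score:
            best, best_score = list(enumerate(perm)), score
    return best

# Toy example: two persons seen from two views, with 3-D descriptors.
view1 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
view2 = [[0.0, 0.9, 0.1], [0.9, 0.1, 0.0]]
print(associate(view1, view2))  # -> [(0, 1), (1, 0)]
```

In practice the descriptors would come from a learned re-identification network, and per-pair similarities would feed an optimal assignment solver rather than an exhaustive search.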




Updated: 2020-05-22