VI-Net: View-Invariant Quality of Human Movement Assessment
Sensors (IF 3.4), Pub Date: 2020-09-15, DOI: 10.3390/s20185258
Faegheh Sardari 1, Adeline Paiement 2, Sion Hannuna 1, Majid Mirmehdi 1
We propose a view-invariant method for assessing the quality of human movements that does not rely on skeleton data. Our end-to-end convolutional neural network consists of two stages: first, a view-invariant trajectory descriptor for each body joint is generated from RGB images; then the collection of trajectories for all joints is processed by an adapted, pre-trained 2D convolutional neural network (CNN) (e.g., VGG-19 or ResNeXt-50) to learn the relationships amongst the different body parts and deliver a score for movement quality. We release the only publicly available, multi-view, non-skeleton, non-mocap rehabilitation movement dataset (QMAR), and provide results for both cross-subject and cross-view scenarios on this dataset. We show that VI-Net achieves an average rank correlation of 0.66 cross-subject and 0.65 on unseen views when trained on only two views. We also evaluate the proposed method on the single-view rehabilitation dataset KIMORE and obtain a rank correlation of 0.66 against a baseline of 0.62.
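The reported numbers (0.66, 0.65, 0.62) are rank correlations between predicted and ground-truth movement-quality scores. As a minimal, dependency-free sketch of how such a score is computed (the function names here are illustrative, not taken from the paper's code), Spearman's rank correlation is the Pearson correlation of the two rank vectors, with tied values assigned their average rank:

```python
import math


def average_ranks(xs):
    """Rank values from 1..n, averaging ranks over ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        # extend j over the run of tied values starting at i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks


def spearman_rank_correlation(predicted, ground_truth):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    ra, rb = average_ranks(predicted), average_ranks(ground_truth)
    n = len(ra)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(ra, rb))
    var_a = sum((a - ma) ** 2 for a in ra)
    var_b = sum((b - mb) ** 2 for b in rb)
    return cov / math.sqrt(var_a * var_b)
```

Predictions that order the test movements exactly as the ground-truth quality scores do give a rank correlation of 1.0; a value of 0.66 indicates a strongly but imperfectly consistent ordering.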

Updated: 2020-09-15