Automated work efficiency analysis for smart manufacturing using human pose tracking and temporal action localization
Journal of Visual Communication and Image Representation ( IF 2.6 ) Pub Date : 2020-10-12 , DOI: 10.1016/j.jvcir.2020.102948
Hao Sun , Guanghan Ning , Zhiqun Zhao , Zhongchao Huang , Zhihai He

In this paper, we aim to develop an automatic system that monitors and evaluates worker efficiency for smart manufacturing, based on human pose tracking and temporal action localization. First, we explore generative adversarial networks (GANs) to achieve significantly improved estimation of human body joints. Second, we formulate automated worker efficiency analysis as a temporal action localization problem, in which an action video performed by the worker is matched against a reference video performed by a teacher. We extract invariant spatio-temporal features from the human body pose sequences and perform cross-video matching using dynamic time warping. Our proposed human pose estimation method achieves state-of-the-art performance on the benchmark dataset. Our automated work efficiency analysis achieves action localization with an average IoU (intersection over union) score larger than 0.9. This represents one of the first systems to provide automated worker efficiency evaluation.
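The cross-video matching step described in the abstract relies on dynamic time warping (DTW) over pose-derived feature sequences. The paper's implementation details are not given here, so the following is a minimal sketch of standard DTW; the feature dimension and the per-frame Euclidean distance are assumptions, not the authors' exact choices:

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two per-frame feature sequences.

    seq_a: (T1, D) array, e.g. pose features of the worker's video
    seq_b: (T2, D) array, e.g. pose features of the teacher's reference video
    """
    t1, t2 = len(seq_a), len(seq_b)
    # cost[i, j] = minimal accumulated cost aligning seq_a[:i] with seq_b[:j]
    cost = np.full((t1 + 1, t2 + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, t1 + 1):
        for j in range(1, t2 + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # skip a worker frame
                                 cost[i, j - 1],      # skip a reference frame
                                 cost[i - 1, j - 1])  # align the two frames
    return cost[t1, t2]
```

Because DTW allows non-linear time alignment, a worker performing the same action more slowly than the teacher still yields a low matching cost, which is what makes it suitable for comparing videos of different durations.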




Updated: 2020-10-30