A Computer Vision Approach for Classifying Isometric Grip Force Exertion Levels
Ergonomics (IF 2.4) · Pub Date: 2020-04-10 · DOI: 10.1080/00140139.2020.1745898
Hamed Asadi, Guoyang Zhou, Jae Joong Lee, Vaneet Aggarwal, Denny Yu
Abstract: Exposure to high and/or repetitive force exertions can lead to musculoskeletal injuries. However, measuring worker force exertion levels is challenging: existing techniques can be intrusive, can interfere with the human–machine interface, and/or are limited by subjectivity. In this work, computer vision techniques are developed to detect isometric grip exertions using facial videos and a wearable photoplethysmogram. Eighteen participants (19–24 years) performed isometric grip exertions at varying levels of maximum voluntary contraction. Novel force-predictive features were identified and extracted from the video and photoplethysmogram data. Two experiments, with two (High/Low) and three (0%MVC/50%MVC/100%MVC) labels, were performed to classify exertions. The deep neural network classifier performed best, with 96% and 87% accuracy for two- and three-level classification, respectively. The approach was robust to leaving subjects out during cross-validation (86% accuracy when 3 subjects were left out) and robust to noise (89% accuracy in correctly classifying talking activities as low force exertions). Practitioner summary: Forceful exertions are contributing factors to musculoskeletal injuries, yet they remain difficult to measure in work environments. This paper presents an approach to estimating force exertion levels that is less distracting to workers, easier for practitioners to implement, and potentially usable in a wide variety of workplaces. Abbreviations: MSD: musculoskeletal disorder; ACGIH: American Conference of Governmental Industrial Hygienists; HAL: hand activity level; MVC: maximum voluntary contraction; PPG: photoplethysmogram; DNN: deep neural network; LOSO: leave-one-subject-out; ROC: receiver operating characteristic; AUC: area under curve
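The subject-robustness result above relies on leave-subjects-out cross-validation, where all trials from held-out subjects are excluded from training. The following is a minimal sketch of that evaluation scheme using scikit-learn; the synthetic features, High/Low labels, subject count, and small MLP architecture are placeholders, not the paper's actual data or network.

```python
# Hedged sketch: leave-one-subject-out (LOSO) cross-validation for a
# binary (High/Low) exertion classifier. All data here is random and
# stands in for the video/PPG features described in the abstract.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

n_subjects, trials_per_subject, n_features = 6, 20, 8
X = rng.normal(size=(n_subjects * trials_per_subject, n_features))
y = rng.integers(0, 2, size=n_subjects * trials_per_subject)   # 0 = Low, 1 = High
groups = np.repeat(np.arange(n_subjects), trials_per_subject)  # subject ID per trial

accuracies = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    # Each fold trains on all subjects except one and tests on the
    # held-out subject, so no subject appears in both splits.
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    accuracies.append(clf.score(X[test_idx], y[test_idx]))

print(f"LOSO folds: {len(accuracies)}, mean accuracy: {np.mean(accuracies):.2f}")
```

With random labels the sketch hovers near chance; the point is the splitting discipline, which generalizes to leaving out multiple subjects per fold (as in the 3-subjects-left-out experiment) via `GroupKFold` or `LeavePGroupsOut`.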
