Estimating Trunk Angle Kinematics During Lifting Using a Computationally Efficient Computer Vision Method.
Human Factors: The Journal of the Human Factors and Ergonomics Society (IF 2.9), Pub Date: 2020-09-24, DOI: 10.1177/0018720820958840
Runyu L. Greene, Ming-Lun Lu, Menekse Salar Barim, Xuan Wang, Marie Hayden, Yu Hen Hu, Robert G. Radwin

OBJECTIVE: A computer vision method was developed for estimating the trunk flexion angle, angular speed, and angular acceleration by extracting simple features from the moving image during lifting.
BACKGROUND: Trunk kinematics is an important risk factor for lower back pain but is often difficult for practitioners to measure in lifting risk assessments.
METHODS: Mannequins representing a wide range of hand locations for different lifting postures were systematically generated using the University of Michigan 3DSSPP software. A bounding box was drawn tightly around each mannequin, and regression models estimated trunk angles. The estimates were validated against human posture data for 216 lifts collected using a laboratory-grade motion capture system and synchronized video recordings. Trunk kinematics, based on bounding box dimensions drawn around the subjects in the video recordings of the lifts, were modeled over consecutive video frames.
RESULTS: The mean absolute difference between predicted and motion-capture-measured trunk angles was 14.7°, and there was a significant linear relationship between predicted and measured trunk angles (R² = .80, p < .001). The training error for the kinematics model was 2.3°.
CONCLUSION: Using simple computer vision-extracted features, the bounding box method indirectly estimated trunk angle and associated kinematics, albeit with limited precision.
APPLICATION: This computer vision method may be implemented on handheld devices such as smartphones to facilitate automatic lifting risk assessments in the workplace.
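The core idea described in the abstract is to map simple bounding-box measurements from each video frame to a trunk flexion angle with a regression model, then differentiate the per-frame estimates to obtain angular speed and acceleration. The Python sketch below illustrates that pipeline under stated assumptions: the aspect-ratio feature, the regression coefficients, and the frame rate are illustrative placeholders, not the authors' fitted model.

```python
import numpy as np

def bounding_box_features(box_width, box_height):
    """Features from the person bounding box in one frame.

    The bias term plus width/height aspect ratio is a hypothetical
    feature choice; the paper only states that simple bounding-box
    dimensions were used.
    """
    return np.array([1.0, box_width / box_height])

def estimate_trunk_angle(features, coeffs):
    """Linear regression estimate of trunk flexion angle (degrees)."""
    return float(features @ coeffs)

def trunk_kinematics(angles_deg, fps=30.0):
    """Angular speed (deg/s) and acceleration (deg/s^2) from per-frame
    angle estimates, using finite differences over consecutive frames."""
    dt = 1.0 / fps
    speed = np.gradient(angles_deg, dt)
    accel = np.gradient(speed, dt)
    return speed, accel

# Illustrative use with made-up coefficients and per-frame box sizes.
coeffs = np.array([-20.0, 85.0])                  # hypothetical fit
boxes = [(120, 300), (150, 280), (190, 250), (220, 230)]  # (w, h) pixels
angles = np.array([estimate_trunk_angle(bounding_box_features(w, h), coeffs)
                   for w, h in boxes])
speed, accel = trunk_kinematics(angles)
print(angles, speed, accel)
```

Computing kinematics by differencing per-frame angle estimates keeps the method lightweight enough for handheld devices, which is consistent with the smartphone application the authors envision.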
