Yet it moves: Learning from Generic Motions to Generate IMU data from YouTube videos
arXiv - CS - Artificial Intelligence. Pub Date: 2020-11-23, DOI: arXiv-2011.11600
Vitor Fortes Rey, Kamalveer Kaur Garewal, Paul Lukowicz

Human activity recognition (HAR) using wearable sensors has benefited much less from recent advances in machine learning than fields such as computer vision and natural language processing. This is largely due to the lack of large-scale repositories of labeled training data. In our research we aim to facilitate the use of online videos, which exist in ample quantity for most activities and are much easier to label than sensor data, to simulate labeled wearable motion sensor data. In previous work we demonstrated some preliminary results in this direction, focusing on very simple, activity-specific simulation models and a single sensor modality (acceleration norm)\cite{10.1145/3341162.3345590}. In this paper we show how to train a regression model on generic motions for both accelerometer and gyroscope signals and then apply it to videos of the target activities to generate synthetic IMU data (acceleration and gyroscope norms) that can be used to train and/or improve HAR models. We demonstrate that systems trained on simulated data generated by our regression model come within around 10% of the mean F1 score of a system trained on real sensor data. Furthermore, we show that the remaining advantage of real sensor data can eventually be equalized, either by including a small amount of real sensor data for model calibration or simply by exploiting the fact that, in general, we can generate much more simulated data from video than we can collect as real sensor data.
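The target quantities in the abstract are the norms of the accelerometer and gyroscope signals. As a minimal illustration of that target signal (not the paper's learned regression model, and with the function name, sampling rate, and gravity handling all assumed for the sketch), the acceleration norm a body-worn sensor would report can be approximated from a video-derived 3D joint trajectory by second-order finite differences plus gravity:

```python
import numpy as np

def acceleration_norm_from_trajectory(positions, fps=30.0, g=9.81):
    """Approximate an IMU acceleration norm from a 3D joint trajectory.

    positions: (T, 3) array of joint positions in metres (e.g. a wrist
    keypoint lifted from video pose estimation). Returns (T-2,) norms.
    """
    dt = 1.0 / fps
    # Second-order central differences estimate linear acceleration.
    accel = (positions[2:] - 2 * positions[1:-1] + positions[:-2]) / dt**2
    # A real accelerometer also measures gravity; add it on the z axis
    # (this assumes a fixed sensor orientation, which a learned model
    # would not need to assume).
    accel[:, 2] += g
    return np.linalg.norm(accel, axis=1)

# A stationary trajectory should yield norms equal to gravity.
static = np.zeros((10, 3))
norms = acceleration_norm_from_trajectory(static)
```

Such an analytic mapping only captures the kinematics; the paper's contribution is to replace it with a regression model trained on generic motions, which can also produce the gyroscope norm and absorb sensor placement effects.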

Updated: 2020-11-25