Fusing Object Information and Inertial Data for Activity Recognition
Sensors (IF 3.9), Pub Date: 2019-09-23, DOI: 10.3390/s19194119
Alexander Diete, Heiner Stuckenschmidt

In the field of pervasive computing, wearable devices have been widely used for recognizing human activities. One important area in this research is the recognition of activities of daily living, where inertial sensors and interaction sensors (such as RFID tags with scanners) are especially popular choices as data sources. Using interaction sensors, however, has one drawback: they may not differentiate between a proper interaction and the mere touching of an object. A positive signal from an interaction sensor is not necessarily caused by a performed activity, e.g., when an object is only touched but no interaction occurs afterwards. There are, however, many scenarios, like medicine intake, that rely heavily on correctly recognized activities. In our work, we aim to address this limitation and present a multimodal, egocentric-based activity recognition approach. Our solution relies on object detection that recognizes activity-critical objects in a frame. As it is infeasible to always expect a high-quality camera view, we enrich the vision features with inertial sensor data that monitors the user's arm movement. In this way, we try to overcome the drawbacks of each respective sensor. We present our results of combining inertial and video features to recognize human activities in different types of scenarios, where we achieve an F1-measure of up to 79.6%.
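The fusion described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the feature dimensions, the synthetic data, and the nearest-centroid classifier are all assumptions chosen for brevity; the idea shown is simply feature-level (early) fusion, where per-window vision features (e.g. object-detection confidences) are concatenated with inertial features before classification.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_window(label):
    """Build one fused feature vector for a synthetic activity window.

    Hypothetical layout: 5 object-confidence scores from the camera view
    plus 6 summary statistics of the wrist-worn inertial sensor.
    """
    vision = rng.normal(loc=label, scale=0.3, size=5)
    inertial = rng.normal(loc=label, scale=0.3, size=6)
    # Early fusion: concatenate both modalities into one feature vector.
    return np.concatenate([vision, inertial])

# Two toy activity classes (e.g. "interaction" vs. "touch only"), 20 windows each.
X = np.stack([make_window(c) for c in (0, 1) for _ in range(20)])
y = np.array([c for c in (0, 1) for _ in range(20)])

# A deliberately simple classifier: nearest class centroid in fused feature space.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(x):
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

preds = np.array([predict(x) for x in X])
accuracy = (preds == y).mean()
```

In practice a stronger classifier (e.g. a random forest or neural network) would replace the centroid rule, and the vision features would come from an actual object detector, but the fusion step itself stays this simple: align both streams per time window, then concatenate.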

Updated: 2019-09-23