Human Activity Recognition from Multiple Sensors Data Using Multi-fusion Representations and CNNs
ACM Transactions on Multimedia Computing, Communications, and Applications (IF 5.1). Pub Date: 2020-05-25. DOI: 10.1145/3377882
Farzan Majeed Noori, Michael Riegler, Md Zia Uddin, Jim Torresen

With the emerging interest in the field of ubiquitous sensing, it has become possible to build assistive technologies that accompany people through their daily activities and provide personalized feedback and services. For instance, an individual's behavioral patterns (e.g., physical activity, location, and mood) can be detected using sensors embedded in smartwatches and smartphones. Multi-sensor environments, however, also bring challenges, such as how to fuse and combine data from different sources. In this article, we explore several methods for fusing multiple representations of sensor data. Multiple representations are generated from the raw sensor signals and then fused at the data level, feature level, and decision level. The presented approaches utilize deep Convolutional Neural Networks (CNNs), and a generic architecture for fusing different sensors is proposed. The methods were evaluated on three publicly available human activity recognition (HAR) datasets and show promising performance, with the best results reaching an overall accuracy of 98.4% on the Context-Awareness via Wrist-Worn Motion Sensors (HANDY) dataset and 98.7% on the Wireless Sensor Data Mining (WISDM version 1.1) dataset. Both results outperform previous approaches.
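The three fusion levels mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the sensor names, window shapes, and the stand-in feature extractor and classifier below are all hypothetical placeholders for the per-sensor CNNs the authors actually use; only the fusion structure (fuse raw inputs, fuse extracted features, or fuse classifier decisions) follows the abstract.

```python
import numpy as np

# Hypothetical toy setup: two sensors (accelerometer, gyroscope), each a
# window of 128 samples x 3 axes. Shapes are illustrative only.
rng = np.random.default_rng(0)
acc = rng.standard_normal((128, 3))
gyro = rng.standard_normal((128, 3))

def extract_features(window):
    """Stand-in for a per-sensor CNN feature extractor: simple
    per-axis statistics instead of learned convolutional features."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

def classify(features, n_classes=6):
    """Stand-in classifier head: softmax over toy logits derived
    from the feature vector (a real system would learn these)."""
    logits = np.array([features.sum() + c for c in range(n_classes)])
    e = np.exp(logits - logits.max())
    return e / e.sum()

# 1) Data-level fusion: stack the raw signals into one multi-channel
#    input and feed a single model.
fused_input = np.concatenate([acc, gyro], axis=1)          # (128, 6)
p_data = classify(extract_features(fused_input))

# 2) Feature-level fusion: extract features per sensor, then
#    concatenate the feature vectors before the classifier.
fused_features = np.concatenate([extract_features(acc),
                                 extract_features(gyro)])
p_feature = classify(fused_features)

# 3) Decision-level fusion: run an independent classifier per sensor
#    and combine the class-probability outputs (here, by averaging).
p_decision = (classify(extract_features(acc)) +
              classify(extract_features(gyro))) / 2.0

print(p_data.shape, p_feature.shape, p_decision.shape)
```

Each strategy trades off where the sensors interact: data-level fusion lets the model exploit cross-sensor correlations in the raw signal, feature-level fusion keeps per-sensor processing separate until a shared classifier, and decision-level fusion is the most modular, since each sensor's pipeline can fail or be swapped independently.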
