An Accurate Device-Free Action Recognition System Using Two-stream Network
IEEE Transactions on Vehicular Technology (IF 6.8), Pub Date: 2020-07-01, DOI: 10.1109/tvt.2020.2993901
Biyun Sheng , Yuanrun Fang , Fu Xiao , Lijuan Sun

With the popularization of Wi-Fi signals and the urgent demand for passive human action recognition, wireless-sensing-based activity recognition has become a hot topic in recent years. Most existing research relies on traditional hand-crafted features, and limited work focuses on how to effectively extract deep features with spatial-temporal information. In this paper, we develop an accurate device-free action recognition system using a Commodity Off-The-Shelf (COTS) router and propose a novel deep learning framework (termed the two-stream network) that mines spatial-temporal cues in Channel State Information (CSI). Specifically, an entire action sample is segmented into a series of coherent sub-activity clips. We then capture complementary features: appearance from the original CSI clips and motion between CSI frames. The spatial and temporal information is processed by separate networks, which are then integrated for the final recognition task. Extensive experiments on data collected from two indoor environments reach recognition accuracies of 97.6% and 96.9%, respectively.
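The pipeline the abstract describes — segmenting a CSI action sample into sub-activity clips, deriving a motion (temporal) stream from frame differences, and fusing it with the appearance (spatial) stream — can be sketched as below. This is a minimal illustration, not the authors' implementation: the CSI dimensions, clip length, and the stand-in feature extractor are all assumptions, and the paper's learned networks are replaced here by simple pooling statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical CSI action sample: 300 time steps x 30 subcarriers.
# (Illustrative shapes only; the paper does not specify these dimensions.)
csi_sample = rng.standard_normal((300, 30))

def segment_into_clips(sample, clip_len=50):
    """Split an action sample into a series of coherent sub-activity clips."""
    n_clips = sample.shape[0] // clip_len
    return sample[: n_clips * clip_len].reshape(n_clips, clip_len, -1)

def motion_stream(clips):
    """Temporal cue: frame-to-frame differences within each clip."""
    return np.diff(clips, axis=1)

def toy_features(x):
    """Stand-in for a learned deep feature extractor (mean/std pooling)."""
    flat = x.reshape(x.shape[0], -1)
    return np.concatenate(
        [flat.mean(axis=1, keepdims=True), flat.std(axis=1, keepdims=True)],
        axis=1,
    )

def late_fusion(spatial_feats, temporal_feats):
    """Integrate per-clip features from both streams for final recognition."""
    return np.concatenate([spatial_feats, temporal_feats], axis=1)

clips = segment_into_clips(csi_sample)   # spatial stream input: (6, 50, 30)
fused = late_fusion(toy_features(clips), toy_features(motion_stream(clips)))
print(clips.shape, fused.shape)          # (6, 50, 30) (6, 4)
```

In the actual system, `toy_features` would be replaced by the two separate deep networks, and the fused representation would feed a classifier over the action categories.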

Updated: 2020-07-01