Hand gesture recognition based on attentive feature fusion
Concurrency and Computation: Practice and Experience (IF 1.5). Pub Date: 2020-06-30. DOI: 10.1002/cpe.5910
Bin Yu, Zhiming Luo, Huangbin Wu, Shaozi Li

Video-based hand gesture recognition plays an important role in human-computer interaction (HCI). Recent advanced methods usually adopt 3D convolutional neural networks to capture information along both the spatial and temporal dimensions. However, these methods require large-scale training data and incur high computational complexity. To address this issue, we propose an attentive feature fusion framework for efficient hand-gesture recognition. In our model, a shallow two-stream CNN captures low-level features from the original video frames and their corresponding optical flow. Next, an attentive feature fusion module selectively combines useful information from the two streams based on an attention mechanism. Finally, we obtain a compact embedding of a video by concatenating features from several short segments. To evaluate the effectiveness of the framework, we train and test our method on Jester, a large-scale video-based hand gesture recognition dataset. Experimental results show that our approach achieves very competitive performance on Jester, with a classification accuracy of 95.77%.
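The fusion-and-concatenation scheme described above can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the projection vector `w` stands in for the learned attention parameters, the function names are invented for illustration, and per-stream scalar gating is a simplification of the actual attention module.

```python
import numpy as np

def softmax(x, axis=0):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attentive_fusion(rgb_feat, flow_feat, w):
    """Fuse RGB and optical-flow features with attention weights (sketch).

    `w` is a hypothetical stand-in for learned attention parameters:
    each stream's feature vector is projected to a scalar score, and a
    softmax over the two scores yields stream weights that sum to 1.
    """
    stacked = np.stack([rgb_feat, flow_feat])   # shape (2, C)
    scores = stacked @ w                        # one attention logit per stream
    alpha = softmax(scores, axis=0)             # stream weights, alpha.sum() == 1
    fused = alpha[0] * rgb_feat + alpha[1] * flow_feat
    return fused, alpha

def video_embedding(segments, w):
    """Compact video embedding: concatenate the fused features of
    several short segments, each given as an (rgb, flow) feature pair."""
    return np.concatenate([attentive_fusion(r, f, w)[0] for r, f in segments])
```

For a video split into K segments with C-dimensional segment features, this yields a K*C-dimensional embedding, which a final classifier would map to gesture classes.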
