Fused behavior recognition model based on attention mechanism.
Visual Computing for Industry, Biomedicine, and Art (IF 3.2) Pub Date: 2020-03-12, DOI: 10.1186/s42492-020-00045-x
Lei Chen 1, Rui Liu 1, Dongsheng Zhou 1, Xin Yang 2, Qiang Zhang 1,2

With the rapid development of deep learning technology, behavior recognition based on video streams has made great progress in recent years. However, some problems remain to be solved: (1) to improve recognition performance, models have tended to become deeper, wider, and more complex, which in turn introduces new problems such as degraded real-time performance; (2) some actions in existing datasets are so similar that they are difficult to distinguish. To address these problems, this study constructs the ResNet34-3DRes18 model, a lightweight and efficient fusion of two-dimensional (2D) and three-dimensional (3D) networks. The model uses a 2D convolutional neural network (2DCNN) to obtain feature maps of the input images and a 3D convolutional neural network (3DCNN) to model the temporal relationships between frames, so it not only exploits the 3DCNN's advantages in video temporal modeling but also reduces model complexity. Compared with state-of-the-art models, this method shows excellent performance at a faster speed. Furthermore, to distinguish between similar motions in the datasets, an attention gate mechanism is added, yielding the Res34-SE-IM-Net attention recognition model. Res34-SE-IM-Net achieves top-1 accuracies of 71.85%, 92.196%, and 36.5% on the test sets of the HMDB51, UCF101, and Something-Something v1 datasets, respectively (the predicted label is the class with the largest value in the model's output probability vector; the classification is correct if this label matches the target label of the motion).
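To make the fusion idea concrete, below is a minimal PyTorch sketch of a 2D-backbone-plus-3D-head model with a squeeze-and-excitation (SE) style attention gate. The layer sizes, the truncation point of the ResNet34 backbone, the shape of the 3D head, and the gate placement are illustrative assumptions, not the authors' exact ResNet34-3DRes18 / Res34-SE-IM-Net design.

```python
# Minimal sketch of a 2D/3D fused recognizer with an SE-style gate.
# All architectural details here are assumptions for illustration.
import torch
import torch.nn as nn
import torchvision.models as models


class SEGate(nn.Module):
    """Squeeze-and-excitation channel attention over a 5D feature map."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (N, C, T, H, W)
        w = x.mean(dim=(2, 3, 4))               # squeeze over time and space
        w = self.fc(w).view(x.size(0), -1, 1, 1, 1)
        return x * w                            # re-weight channels


class Fused2D3DNet(nn.Module):
    """2D backbone applied per frame, then a small 3D head over time."""
    def __init__(self, num_classes: int):
        super().__init__()
        resnet34 = models.resnet34(weights=None)
        # Keep the 2D backbone up to its last conv stage (512 channels);
        # it produces a feature map for each input frame.
        self.backbone2d = nn.Sequential(*list(resnet34.children())[:-2])
        self.se = SEGate(512)
        # A small 3D block standing in for the 3D-ResNet-18 branch.
        self.head3d = nn.Sequential(
            nn.Conv3d(512, 256, kernel_size=3, padding=1),
            nn.BatchNorm3d(256),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),
        )
        self.fc = nn.Linear(256, num_classes)

    def forward(self, clip):                    # clip: (N, T, 3, H, W)
        n, t = clip.shape[:2]
        feats = self.backbone2d(clip.flatten(0, 1))   # (N*T, 512, h, w)
        feats = feats.view(n, t, *feats.shape[1:])    # (N, T, 512, h, w)
        feats = feats.permute(0, 2, 1, 3, 4)          # (N, 512, T, h, w)
        feats = self.se(feats)                        # channel attention
        out = self.head3d(feats).flatten(1)           # (N, 256)
        return self.fc(out)


model = Fused2D3DNet(num_classes=51)            # e.g., HMDB51 has 51 classes
logits = model(torch.randn(2, 16, 3, 224, 224)) # batch of two 16-frame clips
print(logits.shape)                             # torch.Size([2, 51])
```

Running the 2D backbone per frame keeps most parameters in comparatively cheap 2D convolutions, and only the small 3D head pays the cost of temporal modeling, which illustrates how such a fusion can reduce complexity relative to a fully 3D network.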
