Long-Term Temporal Convolutions for Action Recognition
IEEE Transactions on Pattern Analysis and Machine Intelligence (IF 23.6), Pub Date: 2017-06-06, DOI: 10.1109/tpami.2017.2712608
Gul Varol , Ivan Laptev , Cordelia Schmid

Typical human actions last several seconds and exhibit characteristic spatio-temporal structure. Recent methods attempt to capture this structure and learn action representations with convolutional neural networks. Such representations, however, are typically learned at the level of a few video frames, failing to model actions at their full temporal extent. In this work we learn video representations using neural networks with long-term temporal convolutions (LTC). We demonstrate that LTC-CNN models with increased temporal extents improve the accuracy of action recognition. We also study the impact of different low-level representations, such as raw video pixel values and optical flow vector fields, and demonstrate the importance of high-quality optical flow estimation for learning accurate action models. We report state-of-the-art results on two challenging benchmarks for human action recognition: UCF101 (92.7%) and HMDB51 (67.2%).
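The abstract contrasts networks operating on a few frames with LTC networks that convolve over much longer clips. As a rough, hypothetical illustration (not the paper's actual implementation), the sketch below runs a naive single-channel 3D space-time convolution over a short clip and a longer one; the clip lengths and filter size here are illustrative assumptions, chosen only to show how the temporal extent of the input propagates through the convolution.

```python
import numpy as np

def conv3d_valid(clip, kernel):
    """Naive 3D (space-time) convolution with 'valid' padding.

    clip:   (T, H, W) single-channel video clip
    kernel: (t, h, w) spatio-temporal filter
    Returns an array of shape (T-t+1, H-h+1, W-w+1).
    """
    T, H, W = clip.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                # Each output value pools information from a t-frame window.
                out[i, j, k] = np.sum(clip[i:i + t, j:j + h, k:k + w] * kernel)
    return out

# Illustrative clip lengths (assumptions, not the paper's exact settings):
# a short clip vs. a longer, LTC-style clip covering more of the action.
short_clip = np.random.randn(16, 20, 20)
long_clip = np.random.randn(60, 20, 20)
kernel = np.random.randn(3, 3, 3)  # small 3x3x3 space-time filter

print(conv3d_valid(short_clip, kernel).shape)  # -> (14, 18, 18)
print(conv3d_valid(long_clip, kernel).shape)   # -> (58, 18, 18)
```

Stacking several such layers grows the temporal receptive field, so a network fed longer clips can, in principle, model an action across its full duration rather than a few-frame snippet.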

Updated: 2018-05-05