View-independent representation with frame interpolation method for skeleton-based human action recognition
International Journal of Machine Learning and Cybernetics (IF 3.1) Pub Date: 2020-05-05, DOI: 10.1007/s13042-020-01132-4
Yingguo Jiang, Jun Xu, Tong Zhang

Human action recognition is an important branch of computer vision. With skeletal data it is a challenging task because of the joints' complex spatiotemporal information. In this work, we propose an action recognition method that consists of three parts: a view-independent representation, frame interpolation, and a combined model. First, each action sequence is transformed into a representation that is independent of the viewpoint. Second, when the judgment conditions are met, differentiated frame interpolation is applied to expand the information along the temporal dimension. Then, a combined model extracts features from these representations and classifies the actions. Experimental results on two multi-view benchmark datasets, Northwestern-UCLA and NTU RGB+D, demonstrate the effectiveness of the complete method. Although it uses only one type of action feature and a combined model with a simple architecture, our method still outperforms most of the referenced state-of-the-art methods and shows strong robustness.
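The abstract does not spell out the implementation, but the first two components can be pictured with a short NumPy sketch: a body-centred rotation that makes the skeleton representation independent of the camera view, and linear interpolation that lengthens short sequences along time. The joint indices, the hip/spine/shoulder anchors, and the plain linear scheme below are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

# Hypothetical joint indices; real datasets (e.g. NTU RGB+D) define their own layout.
HIP, SPINE, L_SHOULDER, R_SHOULDER = 0, 1, 4, 8

def view_normalize(seq):
    """Translate and rotate a skeleton sequence into a body-centred,
    view-independent coordinate frame.

    seq: float array of shape (T, J, 3): T frames, J joints, xyz coordinates.
    """
    seq = seq - seq[:, HIP:HIP + 1, :]           # put the hip centre at the origin
    x = seq[0, R_SHOULDER] - seq[0, L_SHOULDER]  # shoulder axis of the first frame
    x = x / np.linalg.norm(x)
    y = seq[0, SPINE] / np.linalg.norm(seq[0, SPINE])  # spine direction (hip is origin)
    z = np.cross(x, y)
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)                           # re-orthogonalise the basis
    R = np.stack([x, y, z])                      # rows are the new basis vectors
    return seq @ R.T                             # express every joint in that basis

def interpolate_frames(seq, target_len):
    """Expand a short sequence along time by linear interpolation
    between neighbouring frames (requires at least 2 frames)."""
    T = seq.shape[0]
    if T >= target_len:                          # only pad sequences that are too short
        return seq
    t_old = np.linspace(0.0, 1.0, T)
    t_new = np.linspace(0.0, 1.0, target_len)
    idx = np.clip(np.searchsorted(t_old, t_new, side="right") - 1, 0, T - 2)
    w = ((t_new - t_old[idx]) / (t_old[idx + 1] - t_old[idx]))[:, None, None]
    return (1.0 - w) * seq[idx] + w * seq[idx + 1]

# Example: a random 20-frame, 25-joint sequence, normalised then stretched to 64 frames.
seq = np.random.rand(20, 25, 3)
out = interpolate_frames(view_normalize(seq), 64)
print(out.shape)  # (64, 25, 3)
```

The normalised, length-adjusted sequence would then be fed to the combined model for feature extraction and classification; the paper's "judgment conditions" presumably decide which sequences are short enough to warrant interpolation.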


