A Novel Two-Stage Knowledge Distillation Framework for Skeleton-Based Action Prediction
IEEE Signal Processing Letters (IF 3.9), Pub Date: 2022-09-05, DOI: 10.1109/lsp.2022.3204190
Cuiwei Liu, Xiaoxue Zhao, Zhaokui Li, Zhuo Yan, Chong Du

This letter addresses the challenging problem of action prediction with partially observed sequences of skeletons. Towards this goal, we propose a novel two-stage knowledge distillation framework, which transfers prior knowledge to assist the early prediction of ongoing actions. In the first stage, the action prediction model (also referred to as the student) learns from a couple of teachers to adaptively distill action knowledge at different progress levels for partial sequences. Then the learned student acts as a teacher in the next stage, with the objective of optimizing a better action prediction model in a self-training manner. We design an adaptive self-training strategy from the perspective of undermining the supervision from the annotated labels, since this hard supervision is actually too strict for partial sequences without enough discriminative information. Finally, the action prediction models trained in the two stages jointly constitute a two-stream architecture for action prediction. Extensive experiments on the large-scale NTU RGB+D dataset validate the effectiveness of the proposed method.
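The abstract outlines a two-stage training scheme: the student first distills soft targets from multiple teachers, then a second student is trained in a self-training manner with the hard-label supervision deliberately weakened for early partial sequences. The sketch below illustrates that idea in PyTorch under stated assumptions; the loss names, the uniform averaging over teachers, and the use of the observation ratio to weight the hard labels are illustrative choices, not the authors' released implementation.

```python
# Hypothetical sketch of the two-stage distillation losses described in the abstract.
import torch
import torch.nn.functional as F

def stage1_loss(student_logits, teacher_logits_list, labels,
                temperature=4.0, alpha=0.5):
    """Stage 1: the student learns from several teachers (e.g. trained at
    different progress levels) in addition to the ground-truth labels."""
    ce = F.cross_entropy(student_logits, labels)
    log_p_student = F.log_softmax(student_logits / temperature, dim=1)
    kd = 0.0
    for t_logits in teacher_logits_list:
        p_teacher = F.softmax(t_logits.detach() / temperature, dim=1)
        kd = kd + F.kl_div(log_p_student, p_teacher,
                           reduction="batchmean") * temperature ** 2
    kd = kd / len(teacher_logits_list)  # simple average over teachers (assumption)
    return alpha * ce + (1.0 - alpha) * kd

def stage2_loss(new_student_logits, stage1_teacher_logits, labels,
                observation_ratio, temperature=4.0):
    """Stage 2: self-training with the stage-1 model as teacher; the weight on
    the one-hot labels grows with the observation ratio, so early partial
    sequences are supervised mainly by the softer teacher distribution."""
    ce = F.cross_entropy(new_student_logits, labels)
    p_teacher = F.softmax(stage1_teacher_logits.detach() / temperature, dim=1)
    kd = F.kl_div(F.log_softmax(new_student_logits / temperature, dim=1),
                  p_teacher, reduction="batchmean") * temperature ** 2
    w_hard = observation_ratio  # illustrative weighting: trust hard labels more as more frames arrive
    return w_hard * ce + (1.0 - w_hard) * kd
```

At inference, the abstract states that the models from the two stages are combined as a two-stream architecture; a straightforward realization would be to average (or otherwise fuse) their class scores for each partial sequence.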

Updated: 2022-09-05