RED-Net: A Recurrent Encoder–Decoder Network for Video-Based Face Alignment
International Journal of Computer Vision (IF 11.6), Pub Date: 2018-05-23, DOI: 10.1007/s11263-018-1095-1
Xi Peng, Rogerio S. Feris, Xiaoyu Wang, Dimitris N. Metaxas

We propose a novel method for real-time face alignment in videos based on a recurrent encoder–decoder network model. Our proposed model predicts 2D facial point heat maps regularized by both detection and regression losses, while uniquely exploiting recurrent learning along both the spatial and temporal dimensions. At the spatial level, we add a feedback loop connection between the combined output response map and the input, in order to enable iterative coarse-to-fine face alignment using a single network model, instead of relying on traditional cascaded model ensembles. At the temporal level, we first decouple the features in the bottleneck of the network into temporal-variant factors, such as pose and expression, and temporal-invariant factors, such as identity information. Temporal recurrent learning is then applied to the decoupled temporal-variant features. We show that such feature disentangling yields better generalization and significantly more accurate results at test time. We perform a comprehensive experimental analysis, showing the importance of each component of our proposed model, as well as results superior to the state of the art and to several variations of our method on standard datasets.
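The abstract describes three mechanisms: heat-map prediction with an encoder–decoder, a spatial feedback loop that feeds the combined response map back to the input, and temporal recurrence applied only to the decoupled temporal-variant bottleneck features. The sketch below shows one way these pieces could fit together in PyTorch; the layer sizes, the four-step spatial recurrence, the gated temporal update, and the `align_video` helper are illustrative assumptions, not the authors' actual implementation.

```python
# A minimal, hypothetical sketch of the ideas described in the abstract:
# an encoder-decoder predicting landmark heat maps, a spatial feedback loop
# that concatenates the previous response map with the input image, and a
# temporal recurrence applied only to the "temporal-variant" half of the
# bottleneck features. All sizes and module choices are assumptions.

import torch
import torch.nn as nn


class EncoderDecoder(nn.Module):
    """Predicts K landmark heat maps from an image plus the fed-back response map."""

    def __init__(self, num_landmarks=68, feat_dim=256):
        super().__init__()
        # Input = 3 RGB channels + 1 channel for the fed-back response map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3 + 1, 64, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat_dim, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, num_landmarks, 4, stride=2, padding=1),
        )

    def forward(self, image, prev_response, temporal_state):
        x = torch.cat([image, prev_response], dim=1)
        z = self.encoder(x)
        # Decouple the bottleneck: first half = temporal-variant (pose/expression),
        # second half = temporal-invariant (identity). Only the variant part is
        # carried through time; the gated update below stands in for a proper
        # recurrent cell purely for brevity.
        z_var, z_inv = z.chunk(2, dim=1)
        if temporal_state is None:
            temporal_state = torch.zeros_like(z_var)
        gate = torch.sigmoid(z_var)
        temporal_state = gate * z_var + (1 - gate) * temporal_state
        z = torch.cat([temporal_state, z_inv], dim=1)
        heatmaps = self.decoder(z)
        return heatmaps, temporal_state


def align_video(frames, num_landmarks=68, spatial_steps=4):
    """Run spatial (coarse-to-fine) and temporal recurrence over a frame sequence."""
    model = EncoderDecoder(num_landmarks)
    b, _, h, w = frames[0].shape
    temporal_state = None
    results = []
    for image in frames:
        # Spatial recurrence: feed the combined response map back as input,
        # refining the alignment with a single network instead of a cascade.
        response = torch.zeros(b, 1, h, w)
        for _ in range(spatial_steps):
            heatmaps, temporal_state = model(image, response, temporal_state)
            response = heatmaps.sum(dim=1, keepdim=True)  # combined response map
        results.append(heatmaps)
    return results


if __name__ == "__main__":
    video = [torch.randn(1, 3, 128, 128) for _ in range(5)]  # 5 dummy frames
    out = align_video(video)
    print(out[0].shape)  # torch.Size([1, 68, 128, 128])
```

The key design point the abstract emphasizes is that only the temporal-variant features enter the temporal recurrence, so identity-like information is not smeared across frames while pose and expression are tracked.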

Updated: 2018-05-23