Facial Action Unit Detection by Learning the Deformation Coefficients of a 3D Morphable Model
Sensors (IF 3.9) Pub Date: 2021-01-15, DOI: 10.3390/s21020589
Luigi Ariano, Claudio Ferrari, Stefano Berretti, Alberto Del Bimbo

Facial Action Units (AUs) correspond to the deformation/contraction of individual facial muscles or their combinations. As such, each AU affects just a small portion of the face, with deformations that are asymmetric in many cases. Generating and analyzing AUs in 3D is particularly relevant for the potential applications it can enable. In this paper, we propose a solution for 3D AU detection and synthesis built on a newly defined 3D Morphable Model (3DMM) of the face. Differently from most 3DMMs in the literature, which mainly model global variations of the face and show limitations in adapting to local and asymmetric deformations, the proposed solution is specifically devised to cope with such difficult morphings. During a training phase, we learn the deformation coefficients that enable the 3DMM to deform to 3D target scans showing the neutral and expressive face of the same individual, thus decoupling expression deformations from identity deformations. Such deformation coefficients are then used, on the one hand, to train an AU classifier; on the other hand, they can be applied to a 3D neutral scan to generate AU deformations in a subject-independent manner. The proposed approach for AU detection is validated on the Bosphorus dataset, reporting competitive results with respect to the state of the art, even in a challenging cross-dataset setting. We further show that the learned coefficients are general enough to synthesize realistic 3D face instances with activated AUs.
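To make the pipeline described above concrete, the sketch below illustrates the general idea of fitting 3DMM deformation coefficients to a neutral/expressive scan pair and reusing them for classification or synthesis. It is a minimal NumPy example under stated assumptions: the component matrix, mesh size, and helper names are illustrative, not the authors' actual model or data layout.

```python
# Minimal sketch of the deformation-coefficient idea (illustrative, not the
# authors' implementation): an expressive scan is approximated as a neutral
# scan plus a linear combination of 3DMM deformation components.
import numpy as np

rng = np.random.default_rng(0)

n_vertices = 5023      # hypothetical mesh resolution
n_components = 50      # hypothetical number of 3DMM deformation components

# Deformation components: each column displaces the flattened (x, y, z)
# coordinates of every vertex of the template mesh.
C = rng.standard_normal((3 * n_vertices, n_components))

def fit_coefficients(neutral, target, components):
    """Least-squares fit of coefficients alpha so that
    neutral + components @ alpha approximates the expressive target scan.
    Assumes both scans are in dense correspondence with the 3DMM topology."""
    residual = target - neutral  # expression-only displacement (identity removed)
    alpha, *_ = np.linalg.lstsq(components, residual, rcond=None)
    return alpha

def apply_coefficients(neutral, components, alpha):
    """Transfer the fitted AU deformation to any neutral scan
    (subject-independent synthesis)."""
    return neutral + components @ alpha

# Toy data standing in for registered 3D scans (flattened vertex coordinates).
neutral_scan = rng.standard_normal(3 * n_vertices)
true_alpha = rng.standard_normal(n_components)
expressive_scan = neutral_scan + C @ true_alpha

alpha = fit_coefficients(neutral_scan, expressive_scan, C)

# The coefficients serve two purposes, mirroring the abstract:
# 1) as a feature vector for an AU classifier (e.g., one binary classifier per AU);
# 2) to synthesize the same AU activation on a different subject's neutral scan.
other_neutral = rng.standard_normal(3 * n_vertices)
synthesized = apply_coefficients(other_neutral, C, alpha)
```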

Updated: 2021-01-15