Facial Expression Imitation Method for Humanoid Robot Based on Smooth-Constraint Reversed Mechanical Model (SRMM)
IEEE Transactions on Human-Machine Systems (IF 3.6). Pub Date: 2020-12-01. DOI: 10.1109/thms.2020.3017781
Zhong Huang, Fuji Ren, Min Hu, Sugen Chen

To improve the space–time similarity and motion smoothness of facial expression imitation (FEI), a real-time FEI method for a humanoid robot is proposed based on a smooth-constraint reversed mechanical model (SRMM), which combines a sequence-to-sequence deep learning model with a motion-smoothing constraint. First, on the basis of facial data from a Kinect capture device, a facial feature vector is constructed by cascading 3 head postures, 17 facial animation units, and facial geometric deformation expressed in Laplacian coordinates. Second, a reversed mechanical model is built with a multilayer long short-term memory (LSTM) neural network to learn a direct mapping from facial feature sequences to motor position sequences. Additionally, to overcome motor chattering during real-time FEI, a high-order polynomial is fitted to the motor position sequences, and the SRMM is designed based on the deviations in position, velocity, and acceleration. Finally, to imitate the real-time facial feature sequences of a performer captured by the Kinect, the optimal position sequences generated by the SRMM are sent to the hardware system so that the space–time characteristics remain consistent with those of the performer. The experimental results demonstrate that the motor position deviation of the SRMM is less than 8%, the space–time similarity between the robot and the performer is greater than 85%, and the motion smoothness of online FEI exceeds 90%. Compared with related methods, the proposed method achieves remarkable improvements in motor position deviation, space–time similarity, and motion smoothness.
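
As a rough illustration of the feature-construction step, the Python sketch below cascades the 3 head-pose angles, the 17 facial animation units, and Laplacian-coordinate geometric deformation into a single feature vector; the function names, landmark topology, and dimensions are hypothetical and not taken from the paper.

```python
import numpy as np

def laplacian_coordinates(landmarks, neighbors):
    """Differential (Laplacian) coordinate of each landmark: the landmark
    position minus the mean of its neighbors' positions."""
    lap = np.empty_like(landmarks)
    for i, nbrs in enumerate(neighbors):
        lap[i] = landmarks[i] - landmarks[nbrs].mean(axis=0)
    return lap

def facial_feature_vector(head_pose, action_units, landmarks, neighbors):
    """Cascade 3 head-pose angles, 17 facial animation units, and the
    flattened Laplacian coordinates into one feature vector."""
    return np.concatenate([head_pose, action_units,
                           laplacian_coordinates(landmarks, neighbors).ravel()])

# Toy example with 4 landmarks on a ring topology (purely illustrative).
landmarks = np.random.rand(4, 3)
neighbors = [[1, 3], [0, 2], [1, 3], [0, 2]]
vec = facial_feature_vector(np.zeros(3), np.zeros(17), landmarks, neighbors)
print(vec.shape)  # (3 + 17 + 4*3,) = (32,)
```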

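The reversed mechanical model maps facial feature sequences directly to motor position sequences with a multilayer LSTM. A minimal PyTorch sketch is given below; the feature dimension, hidden size, layer count, and motor count are assumed values, not those of the paper's robot head.

```python
import torch
import torch.nn as nn

class ReversedMechanicalModel(nn.Module):
    """Multilayer LSTM that maps a facial feature sequence to a motor
    position sequence (a sequence-to-sequence regression)."""
    def __init__(self, feature_dim=32, hidden_dim=128, num_layers=3, num_motors=14):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, num_layers, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_motors)

    def forward(self, features):            # features: (batch, time, feature_dim)
        hidden, _ = self.lstm(features)     # hidden:   (batch, time, hidden_dim)
        return self.out(hidden)             # positions:(batch, time, num_motors)

# Map a 30-frame captured feature sequence to motor position commands.
model = ReversedMechanicalModel()
features = torch.randn(1, 30, 32)           # dummy Kinect-derived features
motor_positions = model(features)           # shape: (1, 30, 14)
```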
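
To suppress motor chattering, the smoothing constraint fits a high-order polynomial to each motor's raw position sequence and evaluates deviations in position, velocity, and acceleration. The NumPy sketch below conveys that idea only; the polynomial degree and the RMS deviation measure are assumptions, not the paper's exact formulation.

```python
import numpy as np

def smooth_motor_trajectory(t, raw_positions, degree=6):
    """Fit a high-order polynomial to one motor's raw position sequence and
    report RMS deviations of position, velocity, and acceleration."""
    coeffs = np.polyfit(t, raw_positions, degree)         # smoothed trajectory
    smoothed = np.polyval(coeffs, t)

    vel_coeffs = np.polyder(coeffs, 1)                    # analytic velocity
    acc_coeffs = np.polyder(coeffs, 2)                    # analytic acceleration
    raw_vel = np.gradient(raw_positions, t)               # finite-difference velocity
    raw_acc = np.gradient(raw_vel, t)                     # finite-difference acceleration

    deviation = {
        "position": np.sqrt(np.mean((smoothed - raw_positions) ** 2)),
        "velocity": np.sqrt(np.mean((np.polyval(vel_coeffs, t) - raw_vel) ** 2)),
        "acceleration": np.sqrt(np.mean((np.polyval(acc_coeffs, t) - raw_acc) ** 2)),
    }
    return smoothed, deviation

# Smooth a chattering 1-second position sequence sampled at 30 Hz (dummy data).
t = np.linspace(0.0, 1.0, 30)
raw = 0.5 * np.sin(2 * np.pi * t) + 0.03 * np.random.randn(30)
smoothed, deviation = smooth_motor_trajectory(t, raw)
print(deviation)
```
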
Updated: 2020-12-01