Deep interaction: Wearable robot-assisted emotion communication for enhancing perception and expression ability of children with Autism Spectrum Disorders
Future Generation Computer Systems (IF 7.5) Pub Date: 2020-03-18, DOI: 10.1016/j.future.2020.03.022
Wenjing Xiao, Miao Li, Min Chen, Ahmed Barnawi

Recent changes in the social economy and in people's living conditions and habits have influenced the incidence of Autism Spectrum Disorder (ASD), which imposes a heavy economic and psychological burden on society and has become an urgent public health problem. The main symptom of autism in children is impaired social interaction, and one of its main causes is a deficit in emotion cognition. However, existing autism-treatment systems for children pay little attention to this emotion cognition disorder; moreover, they emphasize interaction with children at the expense of the timeliness and mobility of the systems themselves. To address these shortcomings, this work focuses on the emotion cognition disorder and designs a feasible solution for enhancing the perception and expression abilities of children with ASD. First, a first-view emotional care system for children with ASD (First-ECS) is developed, using a wearable robot as the system carrier to realize deep emotional interaction with autistic children from a first-person perspective. Emotion communication is adopted to meet the strict timeliness requirements of emotion transmission in First-ECS. Next, an emotional interaction mechanism applicable to both line-of-sight and non-line-of-sight communication scenarios is introduced, and the emotion perception engine and emotion expression engine are designed. Subsequently, multimodal data collection and processing by the wearable affective robot are discussed. In addition, the paper introduces a multimodal data fusion method from the angle of emotion relevance, together with an emotion computing model based on audio-visual data. Finally, a demo platform is built to verify the feasibility of the proposed system.
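The abstract does not specify how emotion relevance enters the fusion step, but the general idea of weighting audio and visual emotion estimates by how informative each modality currently is can be illustrated with a minimal sketch. Everything below (the function name fuse_emotions, the emotion label set, and the relevance-weighted late-fusion scheme) is an illustrative assumption, not the paper's actual model:

import numpy as np

# Illustrative emotion categories; the paper does not enumerate its label set.
EMOTIONS = ["happy", "sad", "angry", "fearful", "neutral"]

def fuse_emotions(p_audio, p_visual, r_audio, r_visual):
    """Late-fuse per-modality emotion probability distributions.

    p_audio, p_visual: probability vectors over EMOTIONS produced by the
    audio and visual recognizers (assumed given here).
    r_audio, r_visual: scalar relevance scores in [0, 1] estimating how
    informative each modality is at the moment, e.g. visual relevance
    would drop in a non-line-of-sight scenario.
    """
    w_audio = r_audio / (r_audio + r_visual)
    w_visual = r_visual / (r_audio + r_visual)
    fused = w_audio * np.asarray(p_audio) + w_visual * np.asarray(p_visual)
    return fused / fused.sum()  # renormalize to a valid distribution

if __name__ == "__main__":
    # Example: the voice strongly suggests "sad", but the face is partly
    # occluded, so the visual estimate is discounted via low relevance.
    p_audio = [0.05, 0.70, 0.05, 0.05, 0.15]
    p_visual = [0.30, 0.20, 0.10, 0.10, 0.30]
    fused = fuse_emotions(p_audio, p_visual, r_audio=0.9, r_visual=0.4)
    print(dict(zip(EMOTIONS, np.round(fused, 3))))

Under these assumptions the fused distribution leans toward the audio channel, which matches the intuition of discounting a modality whose relevance is low; the actual First-ECS fusion method may differ substantially.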



