Affective database for e-learning and classroom environments using Indian students’ faces, hand gestures and body postures
Future Generation Computer Systems ( IF 6.2 ) Pub Date : 2020-02-28 , DOI: 10.1016/j.future.2020.02.075
Ashwin T.S. , Ram Mohana Reddy Guddeti

Automatic recognition of students' affective states is a challenging task. These states are recognized from students' facial expressions, hand gestures, and body postures. An intelligent tutoring system or smart classroom environment can be made more personalized through affective state analysis, which is typically performed with machine or deep learning techniques. Effective recognition of affective states depends largely on the quality of the database used, yet very few standard databases exist for students' affective state recognition and analysis that work in both e-learning and classroom environments. In this paper, we propose a new affective database for both e-learning and classroom environments based on students' facial expressions, hand gestures, and body postures. The database contains both posed (acted) and spontaneous (natural) expressions, with single and multiple persons per image frame, and comprises more than 4000 manually annotated image frames with object localization. Classification was performed manually through a gold-standard study covering both Ekman's basic emotions and learning-centered emotions, including neutral. The annotators agree reliably when distinguishing among the recognized affective states, with Cohen's κ = 0.48. The database is robust in that it covers various image variants such as occlusion, background clutter, pose, illumination, cultural and regional background, intra-class variation, cropped images, multiple viewpoints, and deformation. Further, we analyzed the classification accuracy on our database using several state-of-the-art machine and deep learning techniques. Experimental results demonstrate that a convolutional neural network based architecture achieved accuracies of 83% and 76% for detection and classification, respectively.
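The inter-annotator agreement figure above (Cohen's κ = 0.48) is a chance-corrected measure computed from paired labels. A minimal sketch of the computation, using hypothetical per-frame affective-state labels (the label names and values below are illustrative, not taken from the dataset):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: chance-corrected agreement between two annotators."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of frames both annotators labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from each annotator's marginal label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical labels from two annotators over six image frames.
ann1 = ["engaged", "bored", "neutral", "engaged", "confused", "engaged"]
ann2 = ["engaged", "neutral", "neutral", "engaged", "bored", "engaged"]
print(round(cohens_kappa(ann1, ann2), 3))  # → 0.5
```

A κ of 0.48, as reported for this database, is conventionally read as moderate agreement, which is common for subjective affective-state annotation.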




Updated: 2020-02-28