How Does Augmented Observation Facilitate Multimodal Representational Thinking? Applying Deep Learning to Decode Complex Student Construct
Journal of Science Education and Technology (IF 4.4) | Pub Date: 2020-09-16 | DOI: 10.1007/s10956-020-09856-2
Shannon H. Sung, Chenglu Li, Guanhua Chen, Xudong Huang, Charles Xie, Joyce Massicotte, Ji Shen

In this paper, we demonstrate how machine learning can be used to quickly assess a student's multimodal representational thinking. Multimodal representational thinking is the complex construct that encodes how students form conceptual, perceptual, graphical, or mathematical symbols in their minds. Augmented reality (AR) technology was adopted to diversify students' representations. The AR setup used a low-cost, high-resolution thermal camera attached to a smartphone, allowing students to explore the unseen world of thermodynamics. Ninth-grade students (N = 314) engaged in a prediction–observation–explanation (POE) inquiry cycle scaffolded to leverage the augmented observation provided by this device. The objective was to investigate how machine learning could expedite the automated assessment of multimodal representational thinking about heat energy. Two automated text classification methods were adopted to decode the different mental representations students used to explain their haptic perceptions, thermal imaging, and graph data collected in the lab. Since current automated assessment in science education rarely considers multilabel classification, we employed a state-of-the-art deep learning technique, bidirectional encoder representations from transformers (BERT). The BERT model classified open-ended responses into appropriate categories with higher precision than the traditional machine learning method. The satisfactory accuracy of deep learning in assigning multiple labels marks a significant advance in processing qualitative data, since complex student constructs such as multimodal representational thinking are rarely mutually exclusive. The study thus offers a convenient technique for analyzing qualitative data that do not satisfy the mutual-exclusiveness assumption. Implications and future studies are discussed.
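The key point of the multilabel setup described above is that, unlike single-label (softmax) classification, each category is scored independently, so one student response can receive several labels at once, or none. The sketch below illustrates that decision rule only; the label names and threshold are hypothetical and do not come from the paper's coding scheme, and a real pipeline would obtain the per-label logits from a fine-tuned BERT model rather than hard-coded values.

```python
import math

# Hypothetical label set for illustration; the study's actual categories differ.
LABELS = ["conceptual", "perceptual", "graphical", "mathematical"]

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def decode_multilabel(logits, threshold=0.5):
    """Threshold each label's sigmoid score independently.

    Because every label gets its own yes/no decision, the labels are not
    mutually exclusive: a response may be tagged with zero, one, or many
    categories, which is what a softmax over the same logits cannot express.
    """
    return [lab for lab, z in zip(LABELS, logits) if sigmoid(z) >= threshold]

# A response whose (illustrative) logits score high on two categories at once:
print(decode_multilabel([-2.0, 1.3, 0.7, -0.4]))  # ['perceptual', 'graphical']
```

In practice the logits would come from a BERT sequence-classification head trained with a per-label binary cross-entropy loss, which is what makes the multilabel behavior possible.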




Updated: 2020-09-16