Knowledge structure enhanced graph representation learning model for attentive knowledge tracing
International Journal of Intelligent Systems ( IF 5.0 ) Pub Date : 2021-11-24 , DOI: 10.1002/int.22763
Wenbin Gan 1 , Yuan Sun 1 , Yi Sun 2
Knowledge tracing (KT) is a fundamental personalized-tutoring technique for learners in online learning systems. Recent KT methods employ flexible deep neural network-based models that excel at this task. However, the adequacy of KT is still challenged by the sparseness of the learners' exercise data. To alleviate the sparseness problem, most existing KT studies are performed at the skill level rather than the question level, as questions are often numerous and associated with far fewer skills. At the skill level, however, KT neglects the distinctive information related to the questions themselves and their relations. In this case, the models may imprecisely infer the learners' knowledge states and might fail to capture the long-term dependencies in the exercising sequences. In the knowledge domain, skills are naturally linked as a graph (with the edges being the prerequisite relations between pedagogical concepts). We refer to such a graph as a knowledge structure (KS). Incorporating a KS into the KT procedure can potentially resolve both the sparseness and information-loss problems, but this avenue has been underexplored because obtaining the complete KS of a domain is challenging and labor-intensive. In this paper, we propose a novel KS-enhanced graph representation learning model for KT with an attention mechanism (KSGKT). We first explore eight methods that automatically infer the domain KS from learner response data and integrate it into the KT procedure. Leveraging a graph representation learning model, we then obtain the question and skill embeddings from the KS-enhanced graph. To incorporate more distinctive information on the questions, we extract the cognitive question difficulty from the learning history of each learner. We then propose a convolutional representation method that fuses these distinctive features, thus obtaining a comprehensive representation of each question.
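The abstract does not specify the eight KS-inference methods. As a purely illustrative sketch of the general idea (not the paper's actual algorithms), one simple heuristic flags skill A as a prerequisite of skill B when learners who mastered both tended to master A first. All names, the `threshold` parameter, and the toy logs below are hypothetical:

```python
from collections import defaultdict

def infer_prerequisite_edges(sequences, threshold=0.7):
    # sequences: per-learner lists of (skill, correct) in time order.
    # Illustrative heuristic only: emit edge A -> B if, among learners
    # who answered both skills correctly at least once, A was mastered
    # first in at least `threshold` of the cases.
    before = defaultdict(int)   # (A, B) -> count of "A mastered before B"
    both = defaultdict(int)     # unordered-pair support count
    for seq in sequences:
        first_correct = {}
        for t, (skill, correct) in enumerate(seq):
            if correct and skill not in first_correct:
                first_correct[skill] = t
        skills = sorted(first_correct)
        for i, a in enumerate(skills):
            for b in skills[i + 1:]:
                both[(a, b)] += 1
                if first_correct[a] < first_correct[b]:
                    before[(a, b)] += 1
                else:
                    before[(b, a)] += 1
    edges = []
    for (a, b), n in both.items():
        if before[(a, b)] / n >= threshold:
            edges.append((a, b))
        elif before[(b, a)] / n >= threshold:
            edges.append((b, a))
    return edges

# Toy response logs: most learners master "fractions" before "ratios".
logs = [
    [("fractions", 1), ("ratios", 0), ("ratios", 1)],
    [("fractions", 1), ("ratios", 1)],
    [("fractions", 1), ("ratios", 1)],
    [("ratios", 1), ("fractions", 1)],
]
print(infer_prerequisite_edges(logs))  # → [('fractions', 'ratios')]
```

The resulting edge set defines the skill graph over which question and skill embeddings would then be learned.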
These representations are input to the proposed KT model, and the long-term dependencies are handled by the attention mechanism. The model finally predicts the learner's performance on new problems. Extensive experiments conducted from six perspectives on three real-world data sets demonstrated the superiority and interpretability of our model for learner-performance modeling. Based on the KT results, we also suggest three potential applications of our model.
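The prediction step described above can be sketched in miniature: a new question's embedding attends over past interaction embeddings to summarize the knowledge state, and a simple logistic readout scores the probability of a correct response. The embedding dimensions, random features, and readout are all hypothetical stand-ins, not the model's actual architecture:

```python
import numpy as np

def scaled_dot_attention(query, keys, values):
    # Scaled dot-product attention of one query over past interactions.
    scores = keys @ query / np.sqrt(query.shape[0])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ values

rng = np.random.default_rng(0)
d = 8
past_interactions = rng.normal(size=(3, d))  # fused question+response embeddings
target_question = rng.normal(size=d)         # embedding of the new question

# Knowledge state = attention-weighted summary of past interactions;
# a logistic readout then gives the probability of answering correctly.
state = scaled_dot_attention(target_question, past_interactions, past_interactions)
p_correct = 1.0 / (1.0 + np.exp(-(state @ target_question)))
print(0.0 < p_correct < 1.0)  # → True
```

In the full model this attention mechanism is what captures the long-term dependencies in the exercising sequence that skill-level models tend to miss.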

Updated: 2021-11-24