MAKT: Multichannel Attention Networks based Knowledge Tracing with Representation Learning

Xuyang Jiang, Y. Ouyang, Zhuang Liu, Wenge Rong, Zhang Xiong
{"title":"MAKT: Multichannel Attention Networks based Knowledge Tracing with Representation Learning","authors":"Xuyang Jiang, Y. Ouyang, Zhuang Liu, Wenge Rong, Zhang Xiong","doi":"10.1109/TALE54877.2022.00055","DOIUrl":null,"url":null,"abstract":"As an effective and emerging component of intelligent education, Knowledge Tracing(KT) achieves the combination of artificial intelligence and individualized learning, whose aim is to assess students’ mastery of knowledge concepts and assist in developing learning plans. Several existing KT models either use concepts sequence as input and evaluate students’ knowledge state or treat exercise as input to predict students’ future performance. In this paper, we introduce a constraint factor to extract concepts’ and exercises’ relation matrix, design three methods in representation learning, and propose a Multichannel Attention Networks based KT model(MAKT). Specifically, we restrict the co-occurrence relationship within a time window to extract the relation matrix and then train their representations via graph generative learning, graph contrastive learning, and matrix decomposition, respectively. In MAKT, a sliding window is implemented by multichannel where input sequence is sequentially lagged in turn by one position and attention mechanism is applied. We conduct experiments on several benchmark datasets and demonstrate that MAKT with concepts’ and exercises’ representation trained by matrix decomposition outperforms state-of-the-art models.","PeriodicalId":369501,"journal":{"name":"2022 IEEE International Conference on Teaching, Assessment and Learning for Engineering (TALE)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Teaching, Assessment and Learning for Engineering (TALE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TALE54877.2022.00055","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

As an effective and emerging component of intelligent education, Knowledge Tracing (KT) combines artificial intelligence with individualized learning; its aim is to assess students’ mastery of knowledge concepts and to assist in developing learning plans. Several existing KT models either take concept sequences as input to evaluate students’ knowledge state or treat exercises as input to predict students’ future performance. In this paper, we introduce a constraint factor to extract the relation matrix of concepts and exercises, design three representation learning methods, and propose a Multichannel Attention Networks based KT model (MAKT). Specifically, we restrict the co-occurrence relationship to a time window to extract the relation matrix and then train the representations via graph generative learning, graph contrastive learning, and matrix decomposition, respectively. In MAKT, a sliding window is implemented through multiple channels: the input sequence is lagged by one additional position in each successive channel, and an attention mechanism is applied to each channel. Experiments on several benchmark datasets demonstrate that MAKT, with concept and exercise representations trained by matrix decomposition, outperforms state-of-the-art models.
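The paper does not publish code, so the following is only a minimal sketch of how the relation-matrix step might be realized, assuming the "constraint factor" acts as a minimum co-occurrence count inside a time window and that the matrix-decomposition variant corresponds to a truncated SVD of that matrix. The function names (`cooccurrence_matrix`, `embed_by_decomposition`) and all parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def cooccurrence_matrix(sequences, num_items, window=5, min_count=2):
    """Count how often two items (concepts/exercises) appear within
    `window` steps of each other across all interaction sequences;
    pairs below `min_count` (the assumed constraint factor) are zeroed."""
    R = np.zeros((num_items, num_items))
    for seq in sequences:
        for i, a in enumerate(seq):
            for b in seq[i + 1 : i + 1 + window]:
                R[a, b] += 1
                R[b, a] += 1
    R[R < min_count] = 0.0
    return R

def embed_by_decomposition(R, dim=16):
    """Low-rank factorization of the relation matrix: the top-`dim`
    singular vectors, scaled by the singular values, serve as embeddings."""
    U, S, _ = np.linalg.svd(R, full_matrices=False)
    return U[:, :dim] * np.sqrt(S[:dim])

# Toy usage: 3 items, two short interaction sequences.
seqs = [[0, 1, 2, 1, 0], [2, 1, 0, 2]]
R = cooccurrence_matrix(seqs, num_items=3, window=2, min_count=1)
E = embed_by_decomposition(R, dim=2)
print(E.shape)  # (3, 2)
```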
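Similarly, the multichannel mechanism can be pictured as running attention over several lagged copies of the same embedded interaction sequence. The sketch below is a plain NumPy illustration under that reading; the channel count, the causal mask, and the averaging of channel outputs are assumptions, not details confirmed by the abstract.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v, mask=None):
    """Scaled dot-product attention over a single sequence."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    if mask is not None:
        scores = np.where(mask, scores, -1e9)
    return softmax(scores) @ v

def multichannel_lagged_attention(x, num_channels=3):
    """Apply attention to `num_channels` copies of the embedded sequence,
    each shifted (lagged) one further position to the right, and average
    the channel outputs."""
    T, d = x.shape
    outputs = []
    causal = np.tril(np.ones((T, T), dtype=bool))
    for c in range(num_channels):
        lagged = np.zeros_like(x)
        if c < T:
            lagged[c:] = x[: T - c]  # lag the sequence by c positions
        outputs.append(attention(lagged, lagged, lagged, mask=causal))
    return np.mean(outputs, axis=0)

# Toy usage: a sequence of 6 interactions embedded in 4 dimensions.
x = np.random.randn(6, 4)
out = multichannel_lagged_attention(x, num_channels=3)
print(out.shape)  # (6, 4)
```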