{"title":"MPSKT: Multi-head ProbSparse Self-Attention for Knowledge Tracing","authors":"Lingmei Ai, Xiaoying Zhang, Ximing Hu","doi":"10.1145/3565387.3565420","DOIUrl":null,"url":null,"abstract":"Over the past two years, COVID-19 has led to a widespread rise in online education, and knowledge tracing has been used on various educational platforms. However, most existing knowledge tracing models still suffer from long-term dependence. To address this problem, we propose a Multi-head ProbSparse Self-Attention for Knowledge Tracing(MPSKT). Firstly, the temporal convolutional network is used to encode the position information of the input sequence. Then, the Multi-head ProbSparse Self-Attention in the encoder and decoder blocks is used to capture the relationship between the input sequences, and the convolution and pooling layers in the encoder block are used to shorten the length of the input sequence, which greatly reduces the time complexity of the model and better solves the problem of long-term dependence of the model. Finally, experimental results on three public online education datasets demonstrate the effectiveness of our proposed model.","PeriodicalId":182491,"journal":{"name":"Proceedings of the 6th International Conference on Computer Science and Application Engineering","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 6th International Conference on Computer Science and Application Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3565387.3565420","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Over the past two years, COVID-19 has driven a widespread shift to online education, and knowledge tracing is now used on many educational platforms. However, most existing knowledge tracing models still struggle to capture long-term dependencies in learner interaction sequences. To address this problem, we propose Multi-head ProbSparse Self-Attention for Knowledge Tracing (MPSKT). First, a temporal convolutional network encodes the positional information of the input sequence. Then, Multi-head ProbSparse Self-Attention in the encoder and decoder blocks captures the relationships within the input sequence, while convolution and pooling layers in the encoder block progressively shorten the sequence. This greatly reduces the model's time complexity and better addresses the long-term dependency problem. Finally, experimental results on three public online education datasets demonstrate the effectiveness of the proposed model.
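The paper's code is not included here, but the core mechanism it builds on, ProbSparse self-attention as introduced in Informer, can be illustrated concretely. The minimal PyTorch sketch below shows the idea: score each query's "sparsity" on a sampled subset of keys, run full attention only for the top-u most informative queries, and fill the remaining positions with the mean of the values. All names (ProbSparseSelfAttention, d_model, n_heads, factor) are illustrative assumptions rather than the authors' implementation, and the causal masking a knowledge-tracing decoder would need is omitted for brevity.

```python
# Minimal sketch of ProbSparse multi-head self-attention (Informer-style),
# assumed to approximate the attention used in MPSKT. Not the authors' code.
import math
import torch
import torch.nn as nn


class ProbSparseSelfAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int, factor: int = 5):
        super().__init__()
        assert d_model % n_heads == 0
        self.h, self.d_k = n_heads, d_model // n_heads
        self.factor = factor  # u = factor * ln(L) queries are kept "active"
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, L, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Reshape each to (B, heads, L, d_k)
        q, k, v = (t.view(B, L, self.h, self.d_k).transpose(1, 2)
                   for t in (q, k, v))

        # 1) Estimate each query's sparsity score on ~ln(L) sampled keys.
        u = max(1, min(L, int(self.factor * math.ceil(math.log(L + 1)))))
        sample_idx = torch.randint(L, (u,), device=x.device)
        s = q @ k[:, :, sample_idx].transpose(-2, -1) / math.sqrt(self.d_k)
        # M(q_i) = max_j s_ij - mean_j s_ij: high when attention is "peaked"
        sparsity = s.max(-1).values - s.mean(-1)

        # 2) Full attention only for the top-u queries; lazy queries get the
        #    mean of V (valid for the unmasked encoder case).
        top = sparsity.topk(u, dim=-1).indices  # (B, heads, u)
        idx = top.unsqueeze(-1).expand(-1, -1, -1, self.d_k)
        q_top = q.gather(2, idx)
        attn = torch.softmax(
            q_top @ k.transpose(-2, -1) / math.sqrt(self.d_k), dim=-1)
        ctx = v.mean(dim=2, keepdim=True).expand(B, self.h, L, self.d_k).clone()
        ctx.scatter_(2, idx, attn @ v)

        return self.out(ctx.transpose(1, 2).reshape(B, L, -1))


# Toy usage on a batch of interaction-embedding sequences
x = torch.randn(2, 50, 64)                        # (batch, seq_len, d_model)
y = ProbSparseSelfAttention(d_model=64, n_heads=4)(x)
print(y.shape)                                    # torch.Size([2, 50, 64])
```

Because only O(ln L) queries attend to all keys, the cost drops from O(L^2) to roughly O(L log L) per layer, which, together with the encoder's convolution-and-pooling sequence distillation, is the complexity reduction the abstract refers to.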