GCKT: Context-Aware Gating of Heterogeneous Learning Features With Transformer for Cognitive Knowledge Tracing in Intelligent Tutoring Systems
Zhifeng Wang, Jinyu Liu, Chunyan Zeng
International Journal of Intelligent Systems, vol. 2026, no. 1. Published 2026-03-03. DOI: 10.1155/int/3037960
Journal rank: JCR Q1 (Computer Science, Artificial Intelligence), impact factor 3.7
Article: https://onlinelibrary.wiley.com/doi/10.1155/int/3037960
PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/3037960
Citations: 0
Abstract
With the rapid growth of online education, Knowledge Tracing (KT) has become central to adaptive learning systems. Yet existing models struggle to integrate the multidimensional and heterogeneous signals generated during learning—such as exercise attributes, response behaviors, temporal factors, and hierarchical knowledge structure. Many methods rely on naive feature concatenation or fixed weighting, limiting their ability to capture synergistic interactions among features. We propose Gated full-features Transformer Cognitive Knowledge Tracing (GCKT), a Transformer-based model with a gated fusion mechanism that dynamically integrates multiple inputs. The model first embeds exercise, response correctness, response time, and hierarchical knowledge features (topics and concepts). Topic and concept embeddings are linearly projected into a unified knowledge representation. The exercise, time, correctness, and unified knowledge embeddings are then concatenated and passed through a learnable gating network (linear layer with sigmoid) to produce context-aware importance weights. These weights are applied element-wise to adaptively scale each feature before projection into a fused representation for the sequence encoder, enabling the Transformer to more accurately model the evolution of students’ cognitive states. Extensive experiments on public datasets, including MOOCRadar and Math, show that GCKT consistently outperforms strong baselines—such as DKT, AKT, and SAINT+—on key metrics (AUC and F1), delivering robust gains across settings. The results demonstrate that dynamic, fine-grained feature fusion substantially improves KT performance and that GCKT offers a general, effective approach for modeling complex learning scenarios.
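The fusion step described in the abstract (concatenate the exercise, time, correctness, and unified knowledge embeddings; pass the concatenation through a linear layer with a sigmoid to obtain importance weights; scale each feature element-wise; then project into a fused representation for the sequence encoder) can be sketched as follows. This is a minimal NumPy illustration of the mechanism as the abstract describes it, not the authors' implementation: all dimensions, weight shapes, and initializations here are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(exercise, time, correctness, knowledge, w_gate, b_gate, w_proj):
    """Context-aware gated fusion (sketch).

    Each input is a (d,) embedding. Concatenate the four embeddings,
    compute sigmoid gate values as context-aware importance weights,
    rescale the concatenated features element-wise, and project the
    result into the representation fed to the sequence encoder.
    """
    x = np.concatenate([exercise, time, correctness, knowledge])  # (4d,)
    g = sigmoid(w_gate @ x + b_gate)  # (4d,) importance weights in (0, 1)
    fused = w_proj @ (g * x)          # (d_model,) fused representation
    return fused, g

# Toy usage with random embeddings and small random weights.
rng = np.random.default_rng(0)
d, d_model = 8, 16
embeddings = [rng.standard_normal(d) for _ in range(4)]
w_gate = rng.standard_normal((4 * d, 4 * d)) * 0.1
b_gate = np.zeros(4 * d)
w_proj = rng.standard_normal((d_model, 4 * d)) * 0.1
fused, gate = gated_fusion(*embeddings, w_gate, b_gate, w_proj)
```

Because the gate is a sigmoid rather than a softmax, each feature dimension is scaled independently in (0, 1), so the model can down-weight, say, a noisy response-time signal without suppressing the knowledge features.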
Journal Description
The International Journal of Intelligent Systems serves as a forum for individuals interested in tapping into the vast theories based on intelligent systems construction. With its peer-reviewed format, the journal explores several fascinating editorials written by today's experts in the field. Because new developments are being introduced each day, there's much to be learned — examination, analysis, creation, information retrieval, man–computer interactions, and more. The International Journal of Intelligent Systems uses charts and illustrations to demonstrate these ground-breaking issues, and encourages readers to share their thoughts and experiences.