Computers and Education Artificial Intelligence: Latest Publications

A LLM-based pedagogical framework for active, inquiry-based and adaptive learning in L2 writing
Computers and Education Artificial Intelligence Pub Date: 2026-06-01 Epub Date: 2025-12-20 DOI: 10.1016/j.caeai.2025.100535
Ruonan Wang , Yan Yin , Yongbo Cao
Abstract: Traditional L2 writing instruction often struggles to provide personalized, process-oriented feedback and to engage student motivation. While generative AI such as ChatGPT offers a potential solution, its application lacks a robust pedagogical foundation. This study proposes an innovative framework that integrates ChatGPT into L2 writing through a synthesis of active, inquiry-based, and adaptive learning principles. Within the framework, learners occupy the central position: they learn inquisitively through a six-step, LLM-based writing instruction process; acquire actively through a three-dimensional writing assessment; and improve adaptively through reflective planning for subsequent writing instruction. A quasi-experimental study involving 50 sophomores showed the framework to be effective, significantly enhancing learners' writing outcomes and motivation. These findings add to the limited body of research on the use of ChatGPT in education, providing valuable implications for research and pedagogical practice in L2 writing.
Volume 10, Article 100535.
Citations: 0
Undergraduate students' learning outcomes with ChatGPT: A meta-analytic study
Computers and Education Artificial Intelligence Pub Date: 2026-06-01 Epub Date: 2025-12-22 DOI: 10.1016/j.caeai.2025.100536
Fangfang Mo , Jing Huang , Yao Yang , Zafer Özen , Yukiko Maeda , F. Richard Olenchak
Abstract: ChatGPT has gained substantial attention in higher education, particularly for its potential to enhance undergraduate students' learning outcomes. To better understand ChatGPT's impact, we conducted a meta-analysis evaluating the effects of ChatGPT applications on undergraduate students' learning outcomes, with data collected from studies published between January 1, 2023, and May 31, 2025. Our search across nine academic databases identified 5555 potential studies, of which 66 met the pre-defined inclusion criteria and were selected for meta-analysis. The meta-analysis incorporated 129 effect sizes, allowing us to estimate the overall impact of ChatGPT on undergraduate students' learning across a variety of academic disciplines. The results suggested that ChatGPT applications had a large positive effect (Hedges' g = 1.14, SE = 0.185) on undergraduate students' learning outcomes and highlight undergraduate students' overall positive experiences with ChatGPT. These findings contribute to the growing body of literature on the role of artificial intelligence (AI) in higher education, offering critical insights for educators, administrators, and policymakers seeking to enhance undergraduate students' learning outcomes by integrating AI technologies like ChatGPT into academic curricula.
Volume 10, Article 100536.
Citations: 0
How reliable are large language models in analyzing the quality of written lesson plans? A mixed-methods study from a teacher internship program
Computers and Education Artificial Intelligence Pub Date: 2026-06-01 Epub Date: 2025-12-23 DOI: 10.1016/j.caeai.2025.100538
Dennis Hauk, Nina Soujon
Abstract: This study investigates the reliability of Large Language Models (LLMs) in evaluating the quality of written lesson plans from pre-service teachers. A total of 32 lesson plans, each 60 to 100 pages long, were collected during a teacher internship program for civic-education pre-service teachers. Using the ChatGPT-o1 reasoning model, we compared a human expert standard with LLM coding outcomes in a two-phase explanatory sequential mixed-methods design that combined quantitative reliability testing with a qualitative follow-up analysis to interpret inter-dimensional patterns of agreement. Quantitatively, overall reliability across six qualitative components of written lesson plans (Content Transformation, Task Creation, Adaptation, Goal Clarification, Contextualization, and Sequencing) reached moderate alignment in identifying explicit instructional features (α = .689; 73.8% exact agreement). Qualitative analyses further revealed that the LLM struggled with high-inference criteria, such as the depth of pedagogical reasoning and the coherence of instructional decisions, as it often relied on surface-level textual cues rather than deeper contextual understanding. These findings indicate that LLMs can support teacher educators and educational researchers as a design-stage screening tool, but human judgment remains essential for interpreting complex pedagogical constructs in written lesson plans and for ensuring the ethical and pedagogical integrity of evaluation processes. We outline implications for integrating LLM-based analysis into teacher education and emphasize improved prompt design and systematic human oversight to ensure reliable qualitative use.
Volume 10, Article 100538.
Citations: 0
Directive, metacognitive, or a blend of both? A comparison of AI-generated feedback types on student engagement, confidence, and outcomes
Computers and Education Artificial Intelligence Pub Date: 2026-06-01 Epub Date: 2026-02-10 DOI: 10.1016/j.caeai.2026.100553
Omar Alsaiari , Nilufar Baghaei , Jason M. Lodge , Omid Noroozi , Dragan Gašević , Marie Boden , Hassan Khosravi
Abstract: Effective feedback is a central component of successful student learning, and extensive research has examined how best to implement it in educational settings. Increasingly, feedback is generated by artificial intelligence (AI), offering scalable and adaptive responses. Two widely studied approaches are directive feedback, which gives explicit explanations and reduces cognitive load to speed up learning, and metacognitive feedback, which prompts learners to reflect, track their progress, and develop self-regulated learning (SRL) skills. While both approaches have clear theoretical advantages, their comparative effects on engagement, confidence, and quality of work remain underexplored. This study presents a semester-long randomised controlled trial with 329 students in an introductory design and programming course using an adaptive educational platform. Participants were assigned to receive directive, metacognitive, or hybrid AI-generated feedback that blended elements of both. Results showed that revision behaviour differed across feedback conditions, with the Hybrid condition prompting the most revisions compared with the Directive and Metacognitive conditions. Confidence ratings were uniformly high, and resource-quality outcomes were comparable across conditions. These findings highlight the promise of AI in delivering feedback that balances clarity with reflection. Hybrid approaches, in particular, illustrate how AI-generated feedback can be structured to support both clarity and reflection, although more work is required to evaluate their broader impact.
Volume 10, Article 100553.
Citations: 0
Exploring the relationship between empowerment in using artificial intelligence for problem-solving and artificial intelligence ethical awareness: Multi-group structural equation modelling
Computers and Education Artificial Intelligence Pub Date: 2026-06-01 Epub Date: 2026-02-08 DOI: 10.1016/j.caeai.2026.100555
Siu Cheung Kong , Jinyu Zhu
Abstract: While increasing attention has been given to cultivating students' artificial intelligence (AI) ethical awareness, the factors that contribute to its development remain underexplored. Psychological empowerment is a crucial motivational factor influencing students' intentions to incorporate ethical norms when developing AI-based solutions. Hence, this study addressed this gap by exploring the relationship between empowerment in using AI for problem-solving and AI ethical awareness. Empowerment in using AI for problem-solving comprised three components: impact, self-efficacy, and meaningfulness. Data were collected from 681 students from secondary schools and a university in Hong Kong. Structural equation modelling (SEM) results revealed that the impact of using AI for problem-solving positively predicted the human-autonomy, beneficence, and fairness components of ethical awareness. Meaningfulness in using AI for problem-solving was positively associated with beneficence. However, self-efficacy in using AI for problem-solving negatively predicted beneficence. Multi-group SEM results revealed that gender significantly moderated the structural paths between impact/self-efficacy/meaningfulness in using AI for problem-solving and human autonomy. These findings highlight that psychological empowerment is an effective intervention for cultivating students' ethical awareness in using AI for problem-solving, with a project-based learning approach providing a supportive environment for AI applications.
Volume 10, Article 100555.
Citations: 0
The AI literacy heptagon: A structured approach to AI literacy in higher education
Computers and Education Artificial Intelligence Pub Date: 2026-06-01 Epub Date: 2026-01-05 DOI: 10.1016/j.caeai.2026.100540
Veronika Hackl , Alexandra Elena Müller , Maximilian Sailer
Abstract: This integrative literature review addresses the conceptualization and implementation of AI Literacy (AIL) in Higher Education (HE) by examining recent research literature. Through an analysis of publications (2021–2024), we explore (1) how AIL is defined and conceptualized in current research, particularly in HE, and how it can be delineated from related concepts such as Data Literacy, Media Literacy, and Computational Literacy; (2) how various definitions can be synthesized into a comprehensive working definition; and (3) how scientific insights can be effectively translated into educational practice. Our analysis identifies seven central dimensions of AIL: technical, applicational, critical thinking, ethical, social, integrational, and legal. These are synthesized in the AI Literacy Heptagon, deepening conceptual understanding and supporting the structured development of AIL in HE. The study aims to bridge the gap between theoretical AIL conceptualizations and their practical implementation in academic curricula.
Volume 10, Article 100540.
Citations: 0
Less stress, better scores, same learning: The dissociation of performance and learning in AI-supported programming education
Computers and Education Artificial Intelligence Pub Date: 2026-06-01 Epub Date: 2025-12-24 DOI: 10.1016/j.caeai.2025.100537
Patrick Bassner, Ben Lenk-Ostendorf, Ramona Beinstingel, Tobias Wasner, Stephan Krusche
Abstract:
Introduction: Generative AI is reshaping programming education, yet its effects on conceptual learning, intrinsic motivation, and cognitive load remain unclear. This study tests whether AI assistance deepens understanding or primarily boosts task completion, and how scaffolded versus answer-giving designs matter.
Objectives: This study compares performance, learning, cognitive load, frustration, and motivation across three AI support types, and examines students' perceptions.
Methods: A three-arm randomized controlled trial was conducted in an introductory programming (CS1) course at TUM (N = 275). Participants completed a 90-minute exercise on concurrency, implementing a parallel sum with threading, in one of three conditions: (1) Iris, a scaffolded tutor providing calibrated hints while withholding full solutions; (2) ChatGPT, unrestricted assistance that can provide complete solutions; (3) a no-AI control using traditional web resources. Pre- and post-knowledge tests and a code comprehension task measured learning, while auto-graded test coverage measured performance. Validated scales captured intrinsic, germane, and extraneous cognitive load, frustration, and intrinsic motivation.
Results: Both AI groups achieved substantially higher exercise scores than the control group, with distinct distributions: ChatGPT users clustered at high scores, control participants at low scores, and Iris users spread across the full range. Despite these performance gains, neither AI condition produced greater pre–post knowledge gains or code-comprehension advantages. Both AI groups reported lower frustration and reduced extraneous and germane load than the control group, while intrinsic load did not differ. Only Iris increased intrinsic motivation. Students rated ChatGPT as easier to use and more helpful.
Conclusion: In this setting, generative AI acted primarily as a performance aid rather than a learning enhancer. A scaffolded, hint-first design preserved motivational benefits, whereas AI providing unrestricted solutions encouraged a "comfort trap" in which students' preferences misaligned with pedagogical effectiveness. These findings motivate scaffolded AI integration and assessment designs resilient to environments where performance no longer reliably tracks understanding.
Volume 10, Article 100537.
Citations: 0
Evaluating lab assistant chatbot on student learning and behaviors in a programming short course
Computers and Education Artificial Intelligence Pub Date: 2026-06-01 Epub Date: 2025-12-11 DOI: 10.1016/j.caeai.2025.100527
Thanapon Noraset, Akara Supratak, Chaiyong Ragkhitwetsagul, Nubthong Worathong, Suppawong Tuarob
Abstract: The rise of generative AI has increased interest in its application as an intelligent lab assistant in programming education, but concerns persist over its educational value and potential for exploitation. While previous work supports using a customized chatbot as an assistant that provides specific guidance rather than allowing students to prompt responses freely, empirical evidence directly comparing these approaches is still lacking. This study evaluates the impact of two chatbot designs, Unrestricted and Assistant, on student learning and behavior in a short Python programming course. Through a controlled experiment involving 42 participants, we found that students using the Assistant chatbot, which provided guidance through preset and free-text prompts without offering direct solutions, showed significantly greater improvement from pre- to post-test than those using the Unrestricted chatbot. Analysis of over 1000 chatbot interactions revealed a strong preference for free-text input and a high rate of attempted exploits among participants. Additionally, prompt-injection tests demonstrated the Assistant chatbot's partial vulnerability to hijacking attempts. These findings highlight the benefits and limitations of AI assistants in programming education, underscoring the importance of guided interaction design to support learning while minimizing exploitation.
Volume 10, Article 100527.
Citations: 0
A framework for evaluation of large language models in essay assessment: Reliability, alignment, and causal reasoning
Computers and Education Artificial Intelligence Pub Date: 2026-06-01 Epub Date: 2026-03-07 DOI: 10.1016/j.caeai.2026.100565
Tongxi Liu , Luyao Ye , Wei Yan
Abstract: Recent advances in large language models (LLMs) have revitalized research on automated essay evaluation, yet critical concerns remain regarding their reliability, validity, and interpretability. This study presents a comparative analysis of five LLMs (GPT-4.1, Llama 4 Maverick, Gemini 2.5 Flash, Claude Sonnet 4, and DeepSeek R1) in the assessment of long English essays authored by non-native speakers in higher education. The analysis draws on LLM-generated scores for 60 essays to examine (a) intra-model reliability across repeated scoring runs, (b) the degree of alignment between model outputs and expert human ratings, and (c) causal feature dependencies that clarify how linguistic characteristics influence model scoring behavior. Findings reveal substantial variation: some models achieved near-perfect reproducibility and strong alignment with human raters, whereas others displayed inconsistency, score compression, or systematic underestimation. Causal discovery analysis further uncovered distinct evaluative heuristics: most models prioritized lexical precision and fluency, while others emphasized syntactic complexity or cross-domain integration. Collectively, these results establish model-specific reliability profiles and application contexts, providing empirical benchmarks and practical guidance for the responsible use of LLMs in educational writing assessment.
Volume 10, Article 100565.
Citations: 0
Potential risks of generative artificial intelligence integration into K-12 education: A scoping review
Computers and Education Artificial Intelligence Pub Date: 2026-06-01 Epub Date: 2026-02-25 DOI: 10.1016/j.caeai.2026.100561
Sisi Tao , Min Lan , Minjuan Wang , Hui Li
Abstract: The rapid integration of generative AI (GenAI) into K-12 education has outpaced our understanding of its potential risks and unintended consequences. Existing reviews have prioritized higher education and technical promise, overlooking the developmental risks GenAI poses to children and adolescents. This scoping review synthesizes 22 empirical studies from K-12 contexts to map the types of risks and concerns reported in research, compile mitigation strategies, and identify priority gaps that could guide more developmentally informed future studies. Across the included studies, reported risks clustered into three domains: (a) risks to psychological wellbeing, with evidence of emotional disconnection and social isolation; (b) risks to intellectual agency, comprising cognitive dependency, distorted self-assessment, and the erosion of creative authorship; and (c) risks to ecological environments, including limited institutional readiness, unclear governance, equity gaps, and privacy concerns that complicate safe and consistent student engagement with GenAI. Promising mitigation strategies identified include designing tasks that value process over product, using GenAI to generate scaffolding (hints) rather than direct solutions, and embedding tools within critical AI-literacy curricula. Ultimately, this review suggests that without intentional, developmentally responsive governance, GenAI risks displacing the productive struggle and authentic expression necessary for learning and identity formation. We conclude that safe integration requires shifting the focus from technical adoption to ecological protection, ensuring that tools function as transparent scaffolds for human cognition rather than opaque substitutes for student agency in the K-12 context.
Volume 10, Article 100561.
Citations: 0