Impact of assignment completion assisted by Large Language Model-based chatbot on middle school students’ learning

Yumeng Zhu, Caifeng Zhu, Tao Wu, Shulei Wang, Yiyun Zhou, Jingyuan Chen, Fei Wu, Yan Li

Journal: Education and Information Technologies (JCR Q1, EDUCATION & EDUCATIONAL RESEARCH; Impact Factor 4.8)
DOI: https://doi.org/10.1007/s10639-024-12898-3
Published: 2024-07-19 (Journal Article)
Citations: 0
Abstract
With the prevalence of Large Language Model-based chatbots, middle school students are increasingly likely to use these tools to complete their assignments, raising concerns about their potential to harm students’ learning motivation and learning outcomes. However, little is known about their real impact. Through quasi-experimental research with 127 Chinese middle school students, we examined the impact of completing assignments with a Large Language Model-based chatbot, wisdomBot, on middle school students’ assignment performance, learning outcomes, learning motivation, learning satisfaction, and learning experiences; we also summarized teachers’ reflections on learning design. Compared to the control groups, the chatbot-assisted group demonstrated significantly higher assignment submission rates, word counts, and assignment scores. However, they scored significantly lower on materials refinement and knowledge tests. No significant differences were observed in learning motivation, satisfaction, enjoyment, or students’ ability to transfer their knowledge. The majority of students expressed satisfaction and a willingness to continue using the tool. We also identified three key gaps in learning design: providing scaffolds for potential prompts, suggesting group collaboration modes, and relinquishing the authority of the teacher. Our findings provide insights into how Large Language Model-based chatbots could help us better design assignment assessment tools, facilitate students’ autonomous learning, provide emotional support, propose guidelines and instructions for applying Large Language Model-based chatbots in K-12 education, and design specialized educational Large Language Model-based chatbots.
About the journal
Education and Information Technologies (EAIT) is a platform for the range of debates and issues in the field of Computing Education, as well as the many uses of information and communication technology (ICT) across educational subjects and sectors. It probes the use of computing to improve education and learning in a variety of settings, platforms and environments.
The journal aims to provide perspectives at all levels, from the micro level of specific pedagogical approaches in Computing Education and applications or instances of use in classrooms, to macro concerns of national policies and major projects; from pre-school classes to adults in tertiary institutions; from teachers and administrators to researchers and designers; from institutions to online and lifelong learning.

The journal is embedded in the research and practice of professionals within the contemporary global context, and its breadth and scope encourage debate on fundamental issues at all levels and from different research paradigms and learning theories. The journal does not proselytize on behalf of the technologies (whether they be mobile, desktop, interactive, virtual, games-based or learning management systems) but rather provokes debate on all the complex relationships within and between computing and education, whether in informal or formal settings. It probes state-of-the-art technologies in Computing Education, and it also considers the design and evaluation of digital educational artefacts.

The journal aims to maintain and expand its international standing by careful selection on merit of the papers submitted, thus providing a credible ongoing forum for debate and scholarly discourse. Special Issues are occasionally published to cover particular issues in depth. EAIT invites readers to submit papers that draw inferences, probe theory and create new knowledge that informs practice, policy and scholarship. Readers are also invited to comment and reflect upon the arguments and opinions published. EAIT is the official journal of the Technical Committee on Education of the International Federation for Information Processing (IFIP), in partnership with UNESCO.