ChatGPT improves creative problem-solving performance in university students: An experimental study

IF 8.9 | Tier 1 (Education) | Q1 Computer Science, Interdisciplinary Applications
Marek Urban, Filip Děchtěrenko, Jiří Lukavský, Veronika Hrabalová, Filip Svacha, Cyril Brom, Kamila Urban
{"title":"ChatGPT 提高了大学生创造性解决问题的能力:一项实验研究","authors":"Marek Urban ,&nbsp;Filip Děchtěrenko ,&nbsp;Jiří Lukavský ,&nbsp;Veronika Hrabalová ,&nbsp;Filip Svacha ,&nbsp;Cyril Brom ,&nbsp;Kamila Urban","doi":"10.1016/j.compedu.2024.105031","DOIUrl":null,"url":null,"abstract":"<div><p>University students often employ generative artificial intelligence tools such as ChatGPT in resolution of ill-defined problem-solving tasks. However, the experimental evidence about effects of ChatGPT on complex problem-solving performance is still missing. In this preregistered experiment, the impact of ChatGPT on performance in a complex creative problem-solving task was investigated in 77 university students solving a task with ChatGPT in comparison to 68 students solving a task without it. ChatGPT use significantly improved self-efficacy for task resolution (<em>d</em> = 0.65) and enhanced the quality (<em>d</em> = 0.69), elaboration (<em>d</em> = 0.61), and originality (<em>d</em> = 0.55) of solutions. Moreover, participants with ChatGPT assistance perceived task as easier (<em>d</em> = 0.56) and requiring less mental effort (<em>d</em> = 0.58). However, use of ChatGPT did not make task resolution more interesting (<em>d</em> = 0.08), and the impact of ChatGPT on metacognitive monitoring accuracy was unclear. Although there were no significant differences in absolute accuracy between students solving the task with and without the assistance of ChatGPT, the absence of correlation between self-evaluation judgments and performance suggests that participants struggled to calibrate their self-evaluations when using ChatGPT. Notably, the perceived usefulness of ChatGPT appeared to inform self-evaluation judgments, resulting in higher inaccuracy. The implications for hybrid human-AI regulation (HHAIR) theory are discussed. To regulate effectively, students using AI tools should focus on valid metacognitive cues instead of the perceived ease of ChatGPT-assisted problem-solving.</p></div>","PeriodicalId":10568,"journal":{"name":"Computers & Education","volume":"215 ","pages":"Article 105031"},"PeriodicalIF":8.9000,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"ChatGPT improves creative problem-solving performance in university students: An experimental study\",\"authors\":\"Marek Urban ,&nbsp;Filip Děchtěrenko ,&nbsp;Jiří Lukavský ,&nbsp;Veronika Hrabalová ,&nbsp;Filip Svacha ,&nbsp;Cyril Brom ,&nbsp;Kamila Urban\",\"doi\":\"10.1016/j.compedu.2024.105031\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>University students often employ generative artificial intelligence tools such as ChatGPT in resolution of ill-defined problem-solving tasks. However, the experimental evidence about effects of ChatGPT on complex problem-solving performance is still missing. In this preregistered experiment, the impact of ChatGPT on performance in a complex creative problem-solving task was investigated in 77 university students solving a task with ChatGPT in comparison to 68 students solving a task without it. ChatGPT use significantly improved self-efficacy for task resolution (<em>d</em> = 0.65) and enhanced the quality (<em>d</em> = 0.69), elaboration (<em>d</em> = 0.61), and originality (<em>d</em> = 0.55) of solutions. Moreover, participants with ChatGPT assistance perceived task as easier (<em>d</em> = 0.56) and requiring less mental effort (<em>d</em> = 0.58). 
However, use of ChatGPT did not make task resolution more interesting (<em>d</em> = 0.08), and the impact of ChatGPT on metacognitive monitoring accuracy was unclear. Although there were no significant differences in absolute accuracy between students solving the task with and without the assistance of ChatGPT, the absence of correlation between self-evaluation judgments and performance suggests that participants struggled to calibrate their self-evaluations when using ChatGPT. Notably, the perceived usefulness of ChatGPT appeared to inform self-evaluation judgments, resulting in higher inaccuracy. The implications for hybrid human-AI regulation (HHAIR) theory are discussed. To regulate effectively, students using AI tools should focus on valid metacognitive cues instead of the perceived ease of ChatGPT-assisted problem-solving.</p></div>\",\"PeriodicalId\":10568,\"journal\":{\"name\":\"Computers & Education\",\"volume\":\"215 \",\"pages\":\"Article 105031\"},\"PeriodicalIF\":8.9000,\"publicationDate\":\"2024-03-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computers & Education\",\"FirstCategoryId\":\"95\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0360131524000459\",\"RegionNum\":1,\"RegionCategory\":\"教育学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers & Education","FirstCategoryId":"95","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0360131524000459","RegionNum":1,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0

Abstract


University students often employ generative artificial intelligence tools such as ChatGPT in the resolution of ill-defined problem-solving tasks. However, experimental evidence about the effects of ChatGPT on complex problem-solving performance is still missing. In this preregistered experiment, the impact of ChatGPT on performance in a complex creative problem-solving task was investigated in 77 university students solving the task with ChatGPT, compared to 68 students solving it without. ChatGPT use significantly improved self-efficacy for task resolution (d = 0.65) and enhanced the quality (d = 0.69), elaboration (d = 0.61), and originality (d = 0.55) of solutions. Moreover, participants with ChatGPT assistance perceived the task as easier (d = 0.56) and as requiring less mental effort (d = 0.58). However, the use of ChatGPT did not make task resolution more interesting (d = 0.08), and its impact on metacognitive monitoring accuracy was unclear. Although there were no significant differences in absolute accuracy between students solving the task with and without the assistance of ChatGPT, the absence of a correlation between self-evaluation judgments and performance suggests that participants struggled to calibrate their self-evaluations when using ChatGPT. Notably, the perceived usefulness of ChatGPT appeared to inform self-evaluation judgments, resulting in greater inaccuracy. The implications for hybrid human-AI regulation (HHAIR) theory are discussed. To regulate effectively, students using AI tools should focus on valid metacognitive cues instead of the perceived ease of ChatGPT-assisted problem-solving.
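
The effect sizes above are Cohen's d values for two independent groups (n1 = 77 with ChatGPT, n2 = 68 without). The abstract does not spell out the formula, so the following is the conventional formulation rather than the authors' exact computation:

$$ d = \frac{M_1 - M_2}{s_{\mathrm{pooled}}}, \qquad s_{\mathrm{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}} $$

Similarly, absolute accuracy of metacognitive monitoring is usually taken as the absolute deviation between a self-evaluation judgment J and actual performance P on a common scale, |J − P| (lower values mean better calibration), while the judgment-performance correlation whose absence is reported here indexes relative accuracy. Both readings are standard in the metacognition literature but are stated here as assumptions, not as the paper's exact operationalization.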

Source journal
Computers & Education (Engineering & Technology: Computer Science, Interdisciplinary Applications)
CiteScore: 27.10
Self-citation rate: 5.80%
Annual articles: 204
Review time: 42 days
Journal description: Computers & Education seeks to advance understanding of how digital technology can improve education by publishing high-quality research that expands both theory and practice. The journal welcomes research papers exploring the pedagogical applications of digital technology, with a focus broad enough to appeal to the wider education community.