{"title":"没有先验知识的学生可以用ChatGPT答题吗?实证研究","authors":"Abdulhadi Shoufan","doi":"10.1145/3628162","DOIUrl":null,"url":null,"abstract":"With the immense interest in ChatGPT worldwide, education has seen a mix of both excitement and skepticism. To properly evaluate its impact on education, it is crucial to understand how far it can help students without prior knowledge answer assessment questions. This study aims to address this question as well as the impact of the question type. We conducted multiple experiments with computer engineering students (experiment group: n = 41 to 56), who were asked to use ChatGPT to answer previous test questions before learning about the related topics. Their scores were then compared with the scores of previous-term students who answered the same questions in a quiz or exam setting (control group: n = 24 to 61). The results showed a wide range of effect sizes, from -2.55 to 1.23, depending on the question type and content. The experiment group performed best answering code analysis and conceptual questions but struggled with code completion and questions that involved images. On the other hand, the performance in code generation tasks was inconsistent. Overall, the ChatGPT group’s answers lagged slightly behind the control group’s answers with an effect size of − 0.16. We conclude that ChatGPT, at least in the field of this study, is not yet ready to rely on by students who don’t have sufficient background to evaluate generated answers. We suggest that educators try using ChatGPT and educate students on effective questioning techniques and how to assess the generated responses. This study provides insights into the capabilities and limitations of ChatGPT in education and informs future research and development.","PeriodicalId":48764,"journal":{"name":"ACM Transactions on Computing Education","volume":"162 1","pages":"0"},"PeriodicalIF":3.2000,"publicationDate":"2023-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Can students without prior knowledge use ChatGPT to answer test questions? An empirical study\",\"authors\":\"Abdulhadi Shoufan\",\"doi\":\"10.1145/3628162\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"With the immense interest in ChatGPT worldwide, education has seen a mix of both excitement and skepticism. To properly evaluate its impact on education, it is crucial to understand how far it can help students without prior knowledge answer assessment questions. This study aims to address this question as well as the impact of the question type. We conducted multiple experiments with computer engineering students (experiment group: n = 41 to 56), who were asked to use ChatGPT to answer previous test questions before learning about the related topics. Their scores were then compared with the scores of previous-term students who answered the same questions in a quiz or exam setting (control group: n = 24 to 61). The results showed a wide range of effect sizes, from -2.55 to 1.23, depending on the question type and content. The experiment group performed best answering code analysis and conceptual questions but struggled with code completion and questions that involved images. On the other hand, the performance in code generation tasks was inconsistent. Overall, the ChatGPT group’s answers lagged slightly behind the control group’s answers with an effect size of − 0.16. 
We conclude that ChatGPT, at least in the field of this study, is not yet ready to rely on by students who don’t have sufficient background to evaluate generated answers. We suggest that educators try using ChatGPT and educate students on effective questioning techniques and how to assess the generated responses. This study provides insights into the capabilities and limitations of ChatGPT in education and informs future research and development.\",\"PeriodicalId\":48764,\"journal\":{\"name\":\"ACM Transactions on Computing Education\",\"volume\":\"162 1\",\"pages\":\"0\"},\"PeriodicalIF\":3.2000,\"publicationDate\":\"2023-10-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACM Transactions on Computing Education\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3628162\",\"RegionNum\":3,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"EDUCATION, SCIENTIFIC DISCIPLINES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Computing Education","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3628162","RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION, SCIENTIFIC DISCIPLINES","Score":null,"Total":0}
Can students without prior knowledge use ChatGPT to answer test questions? An empirical study
With the immense worldwide interest in ChatGPT, education has seen a mix of excitement and skepticism. To properly evaluate its impact on education, it is crucial to understand how far it can help students without prior knowledge answer assessment questions. This study addresses that question as well as the impact of question type. We conducted multiple experiments with computer engineering students (experiment group: n = 41 to 56), who were asked to use ChatGPT to answer previous test questions before learning about the related topics. Their scores were then compared with those of previous-term students who had answered the same questions in a quiz or exam setting (control group: n = 24 to 61). The results showed a wide range of effect sizes, from −2.55 to 1.23, depending on the question type and content. The experiment group performed best on code-analysis and conceptual questions but struggled with code-completion questions and questions that involved images; performance on code-generation tasks was inconsistent. Overall, the ChatGPT group's answers lagged slightly behind the control group's, with an effect size of −0.16. We conclude that ChatGPT, at least in the field of this study, cannot yet be relied on by students who lack sufficient background to evaluate its generated answers. We suggest that educators try ChatGPT themselves and teach students effective questioning techniques and how to assess the generated responses. This study provides insights into the capabilities and limitations of ChatGPT in education and informs future research and development.
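The effect sizes above express the gap between the two groups' mean scores in units of their pooled spread. The abstract does not name the exact measure used, so the following is a minimal sketch assuming Cohen's d with a pooled standard deviation; the score arrays are invented for illustration and are not data from the study.

```python
import numpy as np

def cohens_d(experiment: np.ndarray, control: np.ndarray) -> float:
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    n1, n2 = len(experiment), len(control)
    s1, s2 = experiment.std(ddof=1), control.std(ddof=1)
    # Pool the two sample variances, weighting each by its degrees of freedom.
    pooled_sd = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (experiment.mean() - control.mean()) / pooled_sd

# Hypothetical quiz scores for illustration only.
chatgpt_scores = np.array([6.5, 7.0, 5.5, 8.0, 6.0])
control_scores = np.array([7.0, 7.5, 6.5, 8.5, 7.0])

# A negative d means the ChatGPT group scored lower than the control group.
print(f"d = {cohens_d(chatgpt_scores, control_scores):.2f}")
```

Under this convention, the reported overall value of −0.16 corresponds to the ChatGPT group averaging slightly below the control group, while per-question values from −2.55 to 1.23 indicate that ChatGPT helped substantially on some question types and hurt substantially on others.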
Journal introduction:
ACM Transactions on Computing Education (TOCE), formerly JERIC (Journal on Educational Resources in Computing), covers diverse aspects of computing education: traditional computer science, computer engineering, information technology, and informatics; emerging aspects of computing; and applications of computing to other disciplines. Papers published in TOCE share a scholarly approach to teaching and learning, a broad appeal to educational practitioners, and a clear connection to student learning.