{"title":"创建编程练习以克服ChatGPT的建议","authors":"Jonnathan Berrezueta-Guzman, Stephan Krusche","doi":"10.1109/CSEET58097.2023.00031","DOIUrl":null,"url":null,"abstract":"Large language models, such as ChatGPT, possess the potential to revolutionize educational practices across various domains. Nonetheless, the deployment of these models can inadvertently foster academic dishonesty due to their facile accessibility. In practical courses like programming, where hands-on experience is crucial for learning, relying solely on ChatGPT can hinder students’ ability to engage with the exercises, consequently impeding the attainment of learning outcomes.This paper conducts an experimental analysis of GPT 3.5 and GPT 4, gauging their proficiencies and constraints in resolving a compendium of 22 programming exercises. We discern and categorize exercises based on ChatGPT’s ability to furnish viable solutions, alongside those that remain unaddressed. Moreover, an evaluation of the malleability of the solutions proposed by ChatGPT is undertaken. Subsequently, we propound a series of recommendations aimed at curtailing undue dependence on ChatGPT, thereby fostering authentic competency development in programming. 
The efficaciousness of these recommendations is underpinned by their integration into the design and delivery of an examination as part of the corresponding course.","PeriodicalId":256885,"journal":{"name":"2023 IEEE 35th International Conference on Software Engineering Education and Training (CSEE&T)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Recommendations to Create Programming Exercises to Overcome ChatGPT\",\"authors\":\"Jonnathan Berrezueta-Guzman, Stephan Krusche\",\"doi\":\"10.1109/CSEET58097.2023.00031\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Large language models, such as ChatGPT, possess the potential to revolutionize educational practices across various domains. Nonetheless, the deployment of these models can inadvertently foster academic dishonesty due to their facile accessibility. In practical courses like programming, where hands-on experience is crucial for learning, relying solely on ChatGPT can hinder students’ ability to engage with the exercises, consequently impeding the attainment of learning outcomes.This paper conducts an experimental analysis of GPT 3.5 and GPT 4, gauging their proficiencies and constraints in resolving a compendium of 22 programming exercises. We discern and categorize exercises based on ChatGPT’s ability to furnish viable solutions, alongside those that remain unaddressed. Moreover, an evaluation of the malleability of the solutions proposed by ChatGPT is undertaken. Subsequently, we propound a series of recommendations aimed at curtailing undue dependence on ChatGPT, thereby fostering authentic competency development in programming. 
The efficaciousness of these recommendations is underpinned by their integration into the design and delivery of an examination as part of the corresponding course.\",\"PeriodicalId\":256885,\"journal\":{\"name\":\"2023 IEEE 35th International Conference on Software Engineering Education and Training (CSEE&T)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IEEE 35th International Conference on Software Engineering Education and Training (CSEE&T)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CSEET58097.2023.00031\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE 35th International Conference on Software Engineering Education and Training (CSEE&T)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CSEET58097.2023.00031","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Recommendations to Create Programming Exercises to Overcome ChatGPT
Large language models such as ChatGPT have the potential to transform educational practice across many domains. However, their easy accessibility can inadvertently foster academic dishonesty. In practical courses like programming, where hands-on experience is crucial for learning, relying solely on ChatGPT can keep students from engaging with the exercises and thus from attaining the intended learning outcomes. This paper presents an experimental analysis of GPT-3.5 and GPT-4, assessing their capabilities and limitations in solving a set of 22 programming exercises. We identify and categorize the exercises for which ChatGPT furnishes viable solutions and those it fails to solve, and we further evaluate how adaptable its proposed solutions are. Based on these findings, we propose a series of recommendations for designing programming exercises that curb undue dependence on ChatGPT and thereby foster authentic competency development in programming. The effectiveness of these recommendations is supported by their integration into the design and delivery of an examination in the corresponding course.