Engineering Students' Experiences With ChatGPT to Generate Code for Disciplinary Programming

IF 2.2 · CAS Zone 3 (Engineering) · JCR Q3, Computer Science, Interdisciplinary Applications
Camilo Vieira, Jose L. De la Hoz, Alejandra J. Magana, David Restrepo
DOI: 10.1002/cae.70090
Journal: Computer Applications in Engineering Education, Vol. 33, Issue 6
Published: 2025-10-01 (Journal Article)
Full text: https://onlinelibrary.wiley.com/doi/10.1002/cae.70090
Citations: 0

Abstract


Large Language Models (LLMs) are transforming several aspects of our lives, including text and code generation. Their potential as “copilots” in computer programming is significant, yet their effective use is not straightforward. Even experts may have to generate multiple prompts before getting the desired output, and the generated code may contain bugs that are difficult for novice programmers to identify and fix. Although some prompting methods have been shown to be effective, the primary approach still involves a trial-and-error process. This study explores mechanical engineering students' experiences after engaging with ChatGPT to generate code for a Finite Element Analysis (FEA) course, aiming to provide insights into integrating LLMs into engineering education. The course included a scaffolded progression for students to develop an understanding of MATLAB programming and the implementation of FEA algorithms. After that, the students engaged with ChatGPT to automatically generate similar code and reflected on their experiences with this tool. We designed this activity guided by the productive failure framework: since LLMs do not necessarily produce correct code from a single prompt, students would need to use these failures to give feedback, potentially increasing their own understanding of MATLAB coding and FEA. The results suggest that while students find ChatGPT useful for efficient code generation, they struggle to: (1) understand a more sophisticated algorithm than the one they had experienced in class; (2) find and fix bugs in the generated code; (3) learn disciplinary concepts while they are also trying to fix the code; and (4) identify effective prompting strategies to instruct ChatGPT on how to complete the task. While LLMs show promise in supporting coding tasks for both professionals and students, using them requires strong background knowledge.
When integrated into disciplinary courses, LLMs do not replace the need for effective pedagogical strategies. Our approach involved implementing a use-modify-create sequence, culminating in a productive failure activity in which students, through conversations with the LLM, encountered desirable difficulties. Our findings suggest that students faced challenges in trying to obtain correct, working FEA code, and felt as though they were teaching the model, which in some cases led to frustration. Thus, future research should explore additional forms of support and guidance to address these issues.
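For context, the disciplinary programming the study refers to (implementing FEA in MATLAB) follows the standard finite element workflow: build element stiffness matrices, assemble them into a global system, apply boundary conditions, and solve. The paper does not publish the course code; the following is a minimal illustrative sketch in Python rather than the course's MATLAB, for a 1D bar under an axial tip load:

```python
import numpy as np

def assemble_1d_bar(n_elem, L, E, A):
    """Assemble the global stiffness matrix for a 1D bar of length L
    discretized into n_elem linear two-node elements."""
    n_nodes = n_elem + 1
    le = L / n_elem  # uniform element length
    # Element stiffness matrix for a linear bar element: (EA/le) * [[1,-1],[-1,1]]
    ke = (E * A / le) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    K = np.zeros((n_nodes, n_nodes))
    for e in range(n_elem):
        K[e:e + 2, e:e + 2] += ke  # scatter element matrix into the global system
    return K

def solve_tip_load(n_elem, L, E, A, F):
    """Fix the left end (node 0), apply axial force F at the right end,
    and return the nodal displacements."""
    K = assemble_1d_bar(n_elem, L, E, A)
    f = np.zeros(n_elem + 1)
    f[-1] = F
    # Enforce u[0] = 0 by reducing the system to the free degrees of freedom.
    u = np.zeros(n_elem + 1)
    u[1:] = np.linalg.solve(K[1:, 1:], f[1:])
    return u

# Steel-like bar: E = 200 GPa, A = 1 cm^2, L = 2 m, F = 1 kN.
u = solve_tip_load(n_elem=4, L=2.0, E=200e9, A=1e-4, F=1000.0)
# For this problem linear FEA is exact: tip displacement = F*L/(E*A) = 1e-4 m.
print(u[-1])
```

The debugging the students describe typically happens in exactly these steps: a wrong sign or index in the scatter loop, or a misapplied boundary condition, still produces code that runs but returns physically wrong displacements.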

Source Journal
Computer Applications in Engineering Education (Engineering: Multidisciplinary)
CiteScore: 7.20
Self-citation rate: 10.30%
Articles per year: 100
Review time: 6-12 weeks
Journal description: Computer Applications in Engineering Education provides a forum for publishing peer-reviewed, timely information on the innovative uses of computers, the Internet, and software tools in engineering education. Besides new courses and software tools, the CAE journal covers areas that support the integration of technology-based modules into the engineering curriculum, and promotes discussion of the assessment and dissemination issues associated with these new implementation methods.