Engineering Students' Experiences With ChatGPT to Generate Code for Disciplinary Programming
Camilo Vieira, Jose L. De la Hoz, Alejandra J. Magana, David Restrepo
Computer Applications in Engineering Education, vol. 33, no. 6 (published 2025-10-01). DOI: 10.1002/cae.70090
Citations: 0
Abstract
Large Language Models (LLMs) are transforming several aspects of our lives, including text and code generation. Their potential as “copilots” in computer programming is significant, yet their effective use is not straightforward. Even experts may have to write multiple prompts before getting the desired output, and the generated code may contain bugs that are difficult for novice programmers to identify and fix. Although some prompting methods have been shown to be effective, the primary approach remains trial and error. This study explores mechanical engineering students' experiences using ChatGPT to generate code for a Finite Element Analysis (FEA) course, aiming to provide insights into integrating LLMs into engineering education. The course included a scaffolded progression in which students developed an understanding of MATLAB programming and the implementation of FEA algorithms. The students then engaged with ChatGPT to automatically generate similar code and reflected on their experiences with the tool. We designed this activity guided by the productive failure framework: since LLMs do not necessarily produce correct code from a single prompt, students would need to use these failures to give feedback, potentially deepening their own understanding of MATLAB coding and FEA. The results suggest that while students find ChatGPT useful for efficient code generation, they struggle to: (1) understand an algorithm more sophisticated than the one they had experienced in class; (2) find and fix bugs in the generated code; (3) learn disciplinary concepts while also trying to fix the code; and (4) identify effective prompting strategies to instruct ChatGPT on how to complete the task. While LLMs show promise in supporting coding tasks for both professionals and students, using them effectively requires strong background knowledge. When integrated into disciplinary courses, LLMs do not replace the need for effective pedagogical strategies. Our approach implemented a use-modify-create sequence, culminating in a productive failure activity in which students conversing with the LLM encountered desirable difficulties. Our findings suggest that students faced challenges in obtaining correct, working FEA code and felt as though they were teaching the model, which in some cases led to frustration. Future research should therefore explore additional forms of support and guidance to address these issues.
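For readers unfamiliar with the kind of task involved, the following is a minimal sketch of the sort of MATLAB FEA code a student might prompt an LLM to produce. It is purely illustrative and does not come from the study: a 1D bar discretized into two-node elements, with assumed values for the material properties (E, A), geometry (L, nElem), and applied load.

% Minimal 1D bar FEA sketch in MATLAB -- illustrative only, not code
% from the study. E, A, L, nElem, and the load are assumed values.
E = 210e9;        % Young's modulus [Pa]
A = 1e-4;         % cross-sectional area [m^2]
L = 1.0;          % total bar length [m]
nElem = 4;        % number of elements
nNode = nElem + 1;
Le = L / nElem;   % element length

K = zeros(nNode);              % global stiffness matrix
ke = (E*A/Le) * [1 -1; -1 1];  % two-node bar element stiffness
for e = 1:nElem
    idx = [e, e+1];                  % global DOFs of element e
    K(idx, idx) = K(idx, idx) + ke;  % assemble into global matrix
end

F = zeros(nNode, 1);
F(end) = 1000;    % 1 kN axial load at the free end

% Fix node 1 (u1 = 0) and solve the reduced system
free = 2:nNode;
u = zeros(nNode, 1);
u(free) = K(free, free) \ F(free);
disp(u)           % nodal displacements [m]

Even a small solver like this involves assembly indexing and boundary-condition handling, which is where subtle bugs in LLM-generated code tend to hide for novice programmers.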
Journal Introduction
Computer Applications in Engineering Education provides a forum for publishing timely, peer-reviewed information on innovative uses of computers, the Internet, and software tools in engineering education. Besides new courses and software tools, the CAE journal covers areas that support the integration of technology-based modules into the engineering curriculum and promotes discussion of the assessment and dissemination issues associated with these new implementation methods.