Title: Assessing Learning of Computer Programing Skills in the Age of Generative Artificial Intelligence
Authors: Sara Ellen Wilson, Matthew Nishimoto
Journal: Journal of Biomechanical Engineering-Transactions of the ASME
Publication date: 2024-03-01
DOI: https://doi.org/10.1115/1.4064364
Citations: 0
Abstract
Generative artificial intelligence (AI) tools such as ChatGPT, Bard, and Claude have recently become a concern in the delivery of engineering education. For courses focused on computer coding, such tools are capable of creating working computer code across a range of programming languages and computing platforms. In a course for mechanical engineers focused on C++ coding for the Arduino microcontroller and on coding engineering problems in MATLAB, a new approach to assessing programming homework assignments was developed. This assessment shifted the focus of assigned points from the correctness of the code to the effort and understanding of the code demonstrated by the student during in-person grading. Students who participated fully in in-person grading performed significantly better on a midterm exam. Relative to a previous semester, in which grading focused on correct code, students had a slightly higher average midterm exam score. This approach appears to be effective in supporting computational learning in the face of evolving tools that could be used to circumvent learning. Future work should examine how to also encourage responsible use of generative AI in computational learning.
Journal overview:
Artificial Organs and Prostheses; Bioinstrumentation and Measurements; Bioheat Transfer; Biomaterials; Biomechanics; Bioprocess Engineering; Cellular Mechanics; Design and Control of Biological Systems; Physiological Systems.