ChatGPT-3.5 and -4.0 and mechanical engineering: Examining performance on the FE mechanical engineering and undergraduate exams

Matthew Frenkel, Hebah Emara
Journal Article | DOI: 10.1002/cae.22781 | Published 2024-07-14
The launch of the Chat Generative Pre-trained Transformer (ChatGPT) at the end of 2022 generated widespread interest in possible applications of artificial intelligence (AI) in science, technology, engineering, and mathematics (STEM) education and among STEM professions. As a result, many questions surrounding the capabilities of generative AI tools inside and outside the classroom have been raised and are beginning to be explored. This study examines the capabilities of ChatGPT within the discipline of mechanical engineering, aiming to identify the use cases and pitfalls of such a technology in classroom and professional settings. ChatGPT was presented with a set of questions from junior- and senior-level mechanical engineering exams given at a large private university, as well as a set of practice questions for the Fundamentals of Engineering (FE) exam in mechanical engineering. The responses of two ChatGPT models, one free to use and one requiring a paid subscription, were analyzed. The paper found that the subscription model (GPT-4, May 12, 2023) greatly outperformed the free version (GPT-3.5, May 12, 2023), achieving 76% correct versus 51% correct, but the text-only input limitation of both models makes neither likely to pass the FE exam. The results confirm findings in the literature regarding the types of errors and pitfalls made by ChatGPT. Due to its inconsistency and tendency to confidently produce incorrect answers, the tool is best suited to users with expert knowledge.