Performance of a Large-Language Model in scoring construction management capstone design projects

Gabriel Castelblanco, Laura Cruz-Castro, Zhenlin Yang

Journal: Computer Applications in Engineering Education, vol. 32, issue 6
Publication date: 2024-09-14 (Journal Article)
DOI: 10.1002/cae.22796 (https://onlinelibrary.wiley.com/doi/10.1002/cae.22796)
Impact factor: 2.0 · JCR: Q3 (Computer Science, Interdisciplinary Applications) · CAS: Region 3 (Engineering & Technology)
Citations: 0
Abstract
Grading is one of the most significant burdens for instructors, diverting their focus from developing engaging learning activities, preparing classes, and attending to students' questions. Institutions and instructors continuously look for alternatives that reduce the time educators spend on grading, frequently resulting in the hiring of teaching assistants whose inexperience and frequent rotation can lead to inconsistent and subjective evaluations. Large Language Models (LLMs) such as GPT-4 may alleviate these grading challenges; however, research in this field is limited for assignments that require specialized knowledge, complex critical thinking, subjective judgment, and creativity. This research investigates whether GPT-4's scores correlate with human grading in a construction capstone project and how supplying criteria and rubrics to GPT influences this correlation. Projects were graded by two human graders and by GPT-4 under three configurations: no detailed criteria, paraphrased criteria, and explicit rubrics. Each configuration was tested across 10 iterations to evaluate GPT's consistency. The results challenge GPT-4's potential to grade argumentative assignments. GPT-4's scores correlated slightly better (although poorly overall) with human evaluations when no additional information was provided, underscoring the limited impact of the specificity of grading materials on GPT scoring for this type of assignment. Despite LLMs' promise, their limitations include inconsistency across iterations and reliance on statistical pattern matching, which can lead to misleading evaluations, along with privacy concerns when handling sensitive student data. Educators must carefully design grading guidelines to harness the potential of LLMs in academic assessment, balancing AI's efficiency with the need for nuanced human judgment.
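The study correlates GPT-4's scores with human grades across repeated iterations. The paper itself includes no code; as a rough illustration of how such a rank-correlation check might be computed, the sketch below implements Spearman's rho in plain Python over entirely hypothetical project scores (the `human` and `gpt_mean` values are invented, not the study's data):

```python
def rank(xs):
    """Assign 1-based ranks, averaging ranks for tied values."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # average position in the sorted order
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman(a, b):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    ra, rb = rank(a), rank(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    den = (sum((x - ma) ** 2 for x in ra) * sum((y - mb) ** 2 for y in rb)) ** 0.5
    return num / den

# Hypothetical example: human grades vs. GPT-4 scores averaged over 10 runs
human = [85, 78, 92, 70, 88]
gpt_mean = [80, 82, 90, 72, 85]
print(spearman(human, gpt_mean))  # 0.9
```

Averaging each project's GPT score over the 10 iterations before correlating (as in `gpt_mean` above) is one way to separate GPT-4's run-to-run variability from its agreement with human graders; per-iteration correlations could be computed the same way to quantify consistency.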
Journal Introduction
Computer Applications in Engineering Education provides a forum for publishing timely, peer-reviewed information on the innovative uses of computers, the Internet, and software tools in engineering education. Besides new courses and software tools, the CAE journal covers areas that support the integration of technology-based modules into the engineering curriculum and promotes discussion of the assessment and dissemination issues associated with these new implementation methods.