Can Multimodal Large Language Models Grade Like an Expert? A Study on UML Class Diagram Assessment Accuracy
María Blanca Ibáñez, María Lucía Barrón-Estrada, Ramón Zatarain-Cabada
Computer Applications in Engineering Education, Vol. 33, No. 5, published 2025-09-22. DOI: 10.1002/cae.70080
Abstract
This study investigates the potential of Multimodal Large Language Models (MLLMs) to evaluate the quality of Unified Modeling Language (UML) class diagrams, with a focus on their ability to assess class structures and attribute information in alignment with object-oriented design principles. Thirty-four engineering students completed a design task involving the application of five object-oriented design principles known collectively as the S.O.L.I.D. principles (Single Responsibility, Open/Closed, Liskov Substitution, Interface Segregation, and Dependency Inversion). Their solutions were independently assessed by three expert instructors and four Multimodal Large Language Models: ChatGPT-4, Gemini, Amazon AI, and Claude 3.5 Sonnet. Quantitative analysis compared AI-generated scores to instructor consensus ratings using inter-rater reliability metrics, while a grounded theory approach was used to qualitatively identify and classify AI evaluation errors. Results indicate that while MLLMs demonstrate promising partial scoring alignment with experts, they consistently exhibit significant limitations in semantic interpretation and evaluative reasoning, often leading to inconsistencies. These findings highlight that, despite their potential, MLLMs are not yet reliable replacements for human expertise, and they underscore the critical need for improved model alignment with domain-specific assessment practices. They also suggest future directions for carefully integrated hybrid instructor-AI evaluation workflows in educational settings.
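The abstract does not include the analysis code. The following is a minimal sketch of what an inter-rater agreement comparison between one MLLM and the instructor consensus could look like, assuming each of the 34 diagrams receives an ordinal rubric score (here 0–5). The placeholder score arrays and the choice of quadratic-weighted Cohen's kappa plus Spearman correlation are illustrative assumptions, not the authors' actual procedure or metrics.

```python
# Illustrative sketch (not the authors' code): compare one MLLM's rubric scores
# against the instructor consensus using two common inter-rater reliability metrics.
# Scores below are hypothetical ordinal grades (0-5) for 34 student diagrams.

import numpy as np
from sklearn.metrics import cohen_kappa_score
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
instructor_consensus = rng.integers(0, 6, size=34)                        # placeholder data
mllm_scores = np.clip(instructor_consensus + rng.integers(-1, 2, size=34), 0, 5)

# Quadratic-weighted kappa penalizes large disagreements more heavily than
# small ones, which suits ordinal rubric scores.
kappa = cohen_kappa_score(instructor_consensus, mllm_scores, weights="quadratic")

# Rank correlation gives a complementary view of scoring alignment.
rho, p_value = spearmanr(instructor_consensus, mllm_scores)

print(f"Quadratic-weighted kappa: {kappa:.2f}")
print(f"Spearman rho: {rho:.2f} (p = {p_value:.3f})")
```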
About the journal:
Computer Applications in Engineering Education provides a forum for publishing timely, peer-reviewed information on the innovative uses of computers, the Internet, and software tools in engineering education. Besides new courses and software tools, the CAE journal covers areas that support the integration of technology-based modules into the engineering curriculum and promotes discussion of the assessment and dissemination issues associated with these new implementation methods.