{"title":"Towards standarized benchmarks of LLMs in software modeling tasks: a conceptual framework","authors":"Javier Cámara, Lola Burgueño, Javier Troya","doi":"10.1007/s10270-024-01206-9","DOIUrl":null,"url":null,"abstract":"<p>The integration of Large Language Models (LLMs) in software modeling tasks presents both opportunities and challenges. This Expert Voice addresses a significant gap in the evaluation of these models, advocating for the need for standardized benchmarking frameworks. Recognizing the potential variability in prompt strategies, LLM outputs, and solution space, we propose a conceptual framework to assess their quality in software model generation. This framework aims to pave the way for standardization of the benchmarking process, ensuring consistent and objective evaluation of LLMs in software modeling. Our conceptual framework is illustrated using UML class diagrams as a running example.</p>","PeriodicalId":49507,"journal":{"name":"Software and Systems Modeling","volume":null,"pages":null},"PeriodicalIF":2.0000,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Software and Systems Modeling","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s10270-024-01206-9","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
引用次数: 0
Abstract
The integration of Large Language Models (LLMs) into software modeling tasks presents both opportunities and challenges. This Expert Voice addresses a significant gap in the evaluation of these models, advocating the adoption of standardized benchmarking frameworks. Recognizing the potential variability in prompt strategies, LLM outputs, and the solution space, we propose a conceptual framework to assess the quality of LLM-generated software models. This framework aims to pave the way for a standardized benchmarking process, ensuring consistent and objective evaluation of LLMs in software modeling. We illustrate the framework using UML class diagrams as a running example.
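The abstract leaves the framework's concrete metrics unspecified, but benchmarks for model generation commonly compare an LLM-generated model against a reference solution. The minimal Python sketch below is our own illustration, not the paper's framework: it scores a generated UML class diagram by matching its class names against a reference and computing an F1 score. The example diagrams and the exact-name matching are hypothetical assumptions.

def f1_score(generated: set[str], reference: set[str]) -> float:
    """F1 over named model elements: harmonic mean of precision and recall."""
    matched = len(generated & reference)   # elements the LLM got right
    if matched == 0:
        return 0.0
    precision = matched / len(generated)   # fraction of generated elements that are correct
    recall = matched / len(reference)      # fraction of reference elements that were produced
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: class names in a reference solution vs. an LLM's output.
reference_classes = {"Library", "Book", "Member", "Loan"}
generated_classes = {"Library", "Book", "User", "Loan"}
print(f"class-name F1 = {f1_score(generated_classes, reference_classes):.2f}")  # 0.75

A benchmark in the spirit the authors advocate would additionally need to handle associations and attributes, semantically equivalent but differently named elements, and the variability across prompt strategies and repeated LLM runs that the abstract highlights.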
Journal introduction:
We invite authors to submit papers that discuss and analyze research challenges and experiences pertaining to software and system modeling languages, techniques, tools, practices and other facets. The following are some of the topic areas that are of special interest, but the journal publishes on a wide range of software and systems modeling concerns:
Domain-specific models and modeling standards;
Model-based testing techniques;
Model-based simulation techniques;
Formal syntax and semantics of modeling languages such as the UML;
Rigorous model-based analysis;
Model composition, refinement and transformation;
Software language engineering;
Modeling languages in science and engineering;
Language adaptation and composition;
Metamodeling techniques;
Measuring quality of models and languages;
Ontological approaches to model engineering;
Generating test and code artifacts from models;
Model synthesis;
Methodology;
Model development tool environments;
Modeling cyber-physical systems;
Data-intensive modeling;
Derivation of explicit models from data;
Case studies and experience reports with significant modeling lessons learned;
Comparative analyses of modeling languages and techniques;
Scientific assessment of modeling practices.