{"title":"Assessing output reliability and similarity of large language models in software development: A comparative case study approach","authors":"Dae-Kyoo Kim , Hua Ming","doi":"10.1016/j.infsof.2025.107787","DOIUrl":null,"url":null,"abstract":"<div><h3>Context:</h3><div>Generative large language models (LLMs) are increasingly used across various activities in software development, offering significant potential to enhance productivity. However, there is a lack of systematic study examining the reliability and similarity of the outputs from these models.</div></div><div><h3>Objective:</h3><div>This work presents a comparative analysis of the reliability – defined as the consistency and correctness of software artifacts – and similarity of LLM outputs in software development.</div></div><div><h3>Method:</h3><div>To accomplish the objective, we introduce a structured approach for assessing the reliability and similarity of outputs from five prominent LLMs – ChatGPT, Claude, Copilot, Gemini, and Meta – and apply it within two case studies focused on developing a food order and delivery system and a smart wallet system.</div></div><div><h3>Results:</h3><div>The study found that the overall output reliability of the models is rated at 0.82 with Claude outperforming other models at 0.92, followed by ChatGPT at 0.90, Copilot at 0.80, Meta at 0.75, and Gemini at 0.71. The models demonstrated an overall 57% similarity and 43% variability in their outputs, highlighting the uniqueness of models.</div></div><div><h3>Conclusions:</h3><div>While overall, LLMs exhibit decent reliability in their outputs with varying degrees, they still require human oversight and review of their outputs before implementation. LLMs present unique characteristics that practitioners should consider before adoption.</div></div>","PeriodicalId":54983,"journal":{"name":"Information and Software Technology","volume":"185 ","pages":"Article 107787"},"PeriodicalIF":3.8000,"publicationDate":"2025-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information and Software Technology","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0950584925001260","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
Context:
Generative large language models (LLMs) are increasingly used across various activities in software development, offering significant potential to enhance productivity. However, there is a lack of systematic study examining the reliability and similarity of the outputs from these models.
Objective:
This work presents a comparative analysis of the reliability – defined as the consistency and correctness of software artifacts – and similarity of LLM outputs in software development.
Method:
To accomplish the objective, we introduce a structured approach for assessing the reliability and similarity of outputs from five prominent LLMs – ChatGPT, Claude, Copilot, Gemini, and Meta – and apply it within two case studies focused on developing a food order and delivery system and a smart wallet system.
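The abstract does not specify how the reliability and similarity scores are computed. As an illustration only, the sketch below assumes that each generated artifact is rated for consistency and correctness on a 0 to 1 scale and averaged into a per-model reliability score, and that similarity is measured as pairwise Jaccard overlap between the design elements each model produces; the ratings, element sets, and rubric are hypothetical, not the paper's actual instrument.

```python
from itertools import combinations
from statistics import mean

# Hypothetical per-artifact ratings (0 to 1) for consistency and correctness;
# the paper's actual rubric and scale are not given in this abstract.
ratings = {
    "Claude":  [{"consistency": 0.95, "correctness": 0.90},
                {"consistency": 0.90, "correctness": 0.92}],
    "ChatGPT": [{"consistency": 0.90, "correctness": 0.88},
                {"consistency": 0.92, "correctness": 0.90}],
}

def reliability(artifacts):
    """Average consistency and correctness across a model's artifacts."""
    return mean((a["consistency"] + a["correctness"]) / 2 for a in artifacts)

# Hypothetical design elements (e.g., classes or use cases) extracted from each
# model's output for the food order and delivery case study.
elements = {
    "Claude":  {"Order", "Customer", "Payment", "Courier", "Menu"},
    "ChatGPT": {"Order", "Customer", "Payment", "Menu", "Restaurant"},
}

def jaccard(a, b):
    """Share of design elements two models have in common."""
    return len(a & b) / len(a | b)

per_model_reliability = {m: reliability(arts) for m, arts in ratings.items()}
pairwise_similarity = {pair: jaccard(elements[pair[0]], elements[pair[1]])
                       for pair in combinations(elements, 2)}
print(per_model_reliability)   # e.g., {'Claude': 0.9175, 'ChatGPT': 0.9}
print(pairwise_similarity)     # e.g., {('Claude', 'ChatGPT'): 0.666...}
```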
Results:
The study found that the overall output reliability of the models is rated at 0.82, with Claude outperforming the other models at 0.92, followed by ChatGPT at 0.90, Copilot at 0.80, Meta at 0.75, and Gemini at 0.71. The models demonstrated an overall 57% similarity and 43% variability in their outputs, highlighting the uniqueness of each model.
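As a quick check on the reported figures, and assuming the overall reliability is the plain arithmetic mean of the five per-model scores (the abstract does not state the aggregation rule):

```python
from statistics import mean

per_model = {"Claude": 0.92, "ChatGPT": 0.90, "Copilot": 0.80,
             "Meta": 0.75, "Gemini": 0.71}

overall = mean(per_model.values())
print(round(overall, 2))  # 0.82, matching the reported overall reliability

# The reported 57% similarity and 43% variability partition the outputs.
similarity, variability = 0.57, 0.43
assert abs(similarity + variability - 1.0) < 1e-9
```

The mean of the five scores is 0.816, which rounds to the reported 0.82 and is consistent with a simple-average aggregation.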
Conclusions:
While LLMs overall exhibit decent output reliability, albeit to varying degrees, their outputs still require human oversight and review before implementation. Each LLM also presents unique characteristics that practitioners should consider before adoption.
About the journal:
Information and Software Technology is the international archival journal focusing on research and experience that contributes to the improvement of software development practices. The journal's scope includes methods and techniques to better engineer software and manage its development. Articles submitted for review should have a clear component of software engineering or address ways to improve the engineering and management of software development. Areas covered by the journal include:
• Software management, quality and metrics
• Software processes
• Software architecture, modelling, specification, design and programming
• Functional and non-functional software requirements
• Software testing and verification & validation
• Empirical studies of all aspects of engineering and managing software development
Short Communications is a new section dedicated to short papers addressing new ideas, controversial opinions, "Negative" results and much more. Read the Guide for authors for more information.
The journal encourages and welcomes submissions of systematic literature studies (reviews and maps) within the scope of the journal. Information and Software Technology is the premier outlet for systematic literature studies in software engineering.