{"title":"Metric-based defect prediction from class diagram","authors":"Batnyam Battulga, Lkhamrolom Tsoodol, Enkhzol Dovdon, Naranchimeg Bold, Oyun-Erdene Namsrai","doi":"10.1016/j.array.2025.100438","DOIUrl":null,"url":null,"abstract":"<div><div>A software defect refers to a fault, failure, or error in software. With the rapid development and increasing reliance on software products, it is essential to identify these defects as early and easily as possible, given the efforts and budget invested in their creation and maintenance. In the literature, various approaches such as machine learning (ML) and deep learning (DL), have been proposed and proven effective in detecting defects in source code during the implementation or testing phases of the software development life cycle (SDLC). A promising approach is crucial for predicting defects at earlier stages of the SDLC, particularly during the design phase, with the goal of enhancing software quality while reducing time, effort, and costs. Meanwhile, software metrics provide a quantifiable way to analyze the software, making it easier to identify defects. Many researchers have leveraged these metrics to predict defects using ML and DL methods, achieving state-of-the-art performance. The objective of this paper is to present a novel approach to predict defects in class diagram (i.e., at design stage) using ML and DL with software metrics. Due to a lack of defect datasets extracted from class diagram, firstly, we created a model-based metric dataset using reverse engineering from a code-based dataset. Then, we apply various ML and DL techniques to the newly created dataset to predict defects in classes by classifying them as either defective or clean. The study utilizes a large dataset called the Unified Bug Dataset, which comprises five publicly available sub-datasets. We compare ML and DL models in terms of accuracy, precision, recall, F-measure, AUC and provide a performance comparison against code-based methods. 
Finally, we conducted a cross-dataset experiment to evaluate the generalizability of our approach.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"27 ","pages":"Article 100438"},"PeriodicalIF":4.5000,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Array","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2590005625000657","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
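The abstract describes training ML classifiers on class-level metric features and evaluating them with accuracy, precision, recall, F-measure, and AUC. The sketch below illustrates that evaluation pipeline on synthetic data; the feature matrix, labels, and choice of random forest are illustrative assumptions, not the paper's actual dataset or models.

```python
# Hypothetical sketch of metric-based defect classification: a binary
# classifier labels each class as defective (1) or clean (0), scored with
# the measures named in the abstract. Data here is synthetic, standing in
# for model-based metrics (e.g., attribute/method counts, coupling).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, roc_auc_score)

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 6))  # six illustrative metric features per class
# Synthetic defect labels correlated with the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
proba = clf.predict_proba(X_te)[:, 1]  # defect probability for AUC

scores = {
    "accuracy": accuracy_score(y_te, pred),
    "precision": precision_score(y_te, pred),
    "recall": recall_score(y_te, pred),
    "f_measure": f1_score(y_te, pred),
    "auc": roc_auc_score(y_te, proba),
}
print(scores)
```

A cross-dataset experiment like the one the abstract mentions would follow the same pattern, but fit on one sub-dataset and score on another instead of using a random split.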
Citation count: 0