{"title":"用于软件缺陷预测的可解释AI框架","authors":"Bahar Gezici Geçer, Ayça Kolukısa Tarhan","doi":"10.1002/smr.70018","DOIUrl":null,"url":null,"abstract":"<div>\n \n <p>Software engineering plays a critical role in improving the quality of software systems, because identifying and correcting defects is one of the most expensive tasks in software development life cycle. For instance, determining whether a software product still has defects before distributing it is crucial. The customer's confidence in the software product will decline if the defects are discovered after it has been deployed. Machine learning-based techniques for predicting software defects have lately started to yield encouraging results. The software defect prediction system's prediction results are raised by machine learning models. More accurate models tend to be more complicated, which makes them harder to interpret. As the rationale behind machine learning models' decisions are obscure, it is challenging to employ them in actual production. In this study, we employ five different machine learning models which are random forest (RF), gradient boosting (GB), naive Bayes (NB), multilayer perceptron (MLP), and neural network (NN) to predict software defects and also provide an explainable artificial intelligence (XAI) framework to both locally and globally increase openness throughout the machine learning pipeline. While global explanations identify general trends and feature importance, local explanations provide insights into individual instances, and their combination allows for a holistic understanding of the model. This is accomplished through the utilization of Explainable AI algorithms, which aim to reduce the “black-boxiness” of ML models by explaining the reasoning behind a prediction. The explanations provide quantifiable information about the characteristics that affect defect prediction. These justifications are produced using six XAI methods, namely, SHAP, anchor, ELI5, LIME, partial dependence plot (PDP), and ProtoDash. We use the KC2 dataset to apply these methods to the software defect prediction (SDP) system, and provide and discuss the results.</p>\n </div>","PeriodicalId":48898,"journal":{"name":"Journal of Software-Evolution and Process","volume":"37 4","pages":""},"PeriodicalIF":1.7000,"publicationDate":"2025-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Explainable AI Framework for Software Defect Prediction\",\"authors\":\"Bahar Gezici Geçer, Ayça Kolukısa Tarhan\",\"doi\":\"10.1002/smr.70018\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n <p>Software engineering plays a critical role in improving the quality of software systems, because identifying and correcting defects is one of the most expensive tasks in software development life cycle. For instance, determining whether a software product still has defects before distributing it is crucial. The customer's confidence in the software product will decline if the defects are discovered after it has been deployed. Machine learning-based techniques for predicting software defects have lately started to yield encouraging results. The software defect prediction system's prediction results are raised by machine learning models. More accurate models tend to be more complicated, which makes them harder to interpret. As the rationale behind machine learning models' decisions are obscure, it is challenging to employ them in actual production. 
In this study, we employ five different machine learning models which are random forest (RF), gradient boosting (GB), naive Bayes (NB), multilayer perceptron (MLP), and neural network (NN) to predict software defects and also provide an explainable artificial intelligence (XAI) framework to both locally and globally increase openness throughout the machine learning pipeline. While global explanations identify general trends and feature importance, local explanations provide insights into individual instances, and their combination allows for a holistic understanding of the model. This is accomplished through the utilization of Explainable AI algorithms, which aim to reduce the “black-boxiness” of ML models by explaining the reasoning behind a prediction. The explanations provide quantifiable information about the characteristics that affect defect prediction. These justifications are produced using six XAI methods, namely, SHAP, anchor, ELI5, LIME, partial dependence plot (PDP), and ProtoDash. We use the KC2 dataset to apply these methods to the software defect prediction (SDP) system, and provide and discuss the results.</p>\\n </div>\",\"PeriodicalId\":48898,\"journal\":{\"name\":\"Journal of Software-Evolution and Process\",\"volume\":\"37 4\",\"pages\":\"\"},\"PeriodicalIF\":1.7000,\"publicationDate\":\"2025-04-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Software-Evolution and Process\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/smr.70018\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, SOFTWARE ENGINEERING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Software-Evolution and Process","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/smr.70018","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Explainable AI Framework for Software Defect Prediction
Software engineering plays a critical role in improving the quality of software systems, because identifying and correcting defects is one of the most expensive tasks in the software development life cycle. In particular, determining whether a software product still contains defects before it is released is crucial: customers' confidence in a product declines if defects are discovered only after deployment. Machine learning-based techniques for predicting software defects have recently begun to yield encouraging results, and machine learning models improve the predictive performance of software defect prediction systems. However, more accurate models tend to be more complex and therefore harder to interpret, and because the rationale behind a machine learning model's decisions is opaque, employing such models in actual production is challenging. In this study, we employ five machine learning models, namely random forest (RF), gradient boosting (GB), naive Bayes (NB), multilayer perceptron (MLP), and neural network (NN), to predict software defects, and we provide an explainable artificial intelligence (XAI) framework that increases transparency throughout the machine learning pipeline both locally and globally. While global explanations identify general trends and feature importance, local explanations provide insights into individual instances, and their combination allows for a holistic understanding of the model. This is accomplished through explainable AI algorithms, which aim to reduce the “black-boxiness” of ML models by explaining the reasoning behind a prediction. The explanations provide quantifiable information about the characteristics that affect defect prediction. These explanations are produced using six XAI methods, namely SHAP, Anchors, ELI5, LIME, partial dependence plots (PDPs), and ProtoDash. We apply these methods to a software defect prediction (SDP) system using the KC2 dataset, and we present and discuss the results.
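
To make the described pipeline concrete, the sketch below trains one of the five classifiers (a random forest) on KC2-style static-code metrics and derives one global explanation (mean absolute SHAP values) and one local explanation (LIME) for a single module. This is a minimal illustration, not the authors' implementation: the synthetic data, the assumed subset of McCabe/Halstead feature names, and all hyperparameters are placeholders chosen for the example, and real KC2 rows would be loaded from the PROMISE repository instead.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
import shap                                          # pip install shap
from lime.lime_tabular import LimeTabularExplainer   # pip install lime

# Illustrative stand-in for KC2: module-level McCabe/Halstead-style metrics.
rng = np.random.default_rng(0)
feature_names = ["loc", "v(g)", "ev(g)", "iv(g)", "n", "v"]  # assumed subset
X = rng.gamma(shape=2.0, scale=10.0, size=(500, len(feature_names)))
y = (X[:, 0] + X[:, 1] + rng.normal(0.0, 5.0, 500) > 45).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Global explanation: mean |SHAP value| per feature over the test set.
explainer = shap.TreeExplainer(model)
sv = np.asarray(explainer.shap_values(X_test))
# Depending on the shap version, a binary classifier yields a list of two
# (n_samples, n_features) arrays or one (n_samples, n_features, 2) array;
# reduce to the positive ("defective") class either way.
if sv.ndim == 3:
    sv = sv[1] if sv.shape[0] == 2 else sv[..., 1]
for name, imp in sorted(zip(feature_names, np.abs(sv).mean(axis=0)),
                        key=lambda t: -t[1]):
    print(f"{name:>6}: {imp:.3f}")

# Local explanation: LIME weights for one individual module.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names,
    class_names=["non-defective", "defective"], mode="classification")
exp = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=4)
print(exp.as_list())  # (feature condition, weight) pairs for this prediction

The same pattern would extend to the other explainers named in the abstract (for example, sklearn's PartialDependenceDisplay for PDPs) and to the remaining four classifiers, with TreeExplainer replaced by a model-agnostic alternative such as shap.KernelExplainer for the non-tree models.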