Machine-Learning Models for Software Quality: A Compromise between Performance and Intelligibility

H. Lounis, T. Gayed, M. Boukadoum
2011 IEEE 23rd International Conference on Tools with Artificial Intelligence, 7 November 2011
DOI: 10.1109/ICTAI.2011.155
Building powerful machine-learning assessment models is an important achievement of empirical software engineering research, but it is not the only one. The intelligibility of such models is also needed, especially in software engineering, a domain where exploration and knowledge capture remain a challenge. Several algorithms, drawn from various machine-learning approaches, are selected and run on software data collected from medium-sized applications. Some of these approaches produce models with very high quantitative performance; others yield interpretable, intelligible, "glass-box" models that are highly complementary. We consider that integrating both, in automated decision-making systems for assessing software product quality, is desirable to reach a compromise between performance and intelligibility.
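The abstract does not name the specific algorithms or publish the dataset, but the "glass-box" end of the trade-off it describes can be sketched with a minimal, hypothetical example: a one-rule (OneR-style) classifier learned over an invented software-metrics table. The metric name (`coupling`) and the data points are assumptions for illustration only, not from the paper; the point is that the resulting model is a single human-readable rule, unlike a high-performing black box.

```python
# Minimal sketch of a "glass-box" model: a single-threshold (OneR-style) rule
# over a hypothetical software-metrics dataset. The metric name and data are
# invented for illustration; the cited paper does not publish its dataset.

# (coupling, fault_prone) pairs -- hypothetical training data
data = [(2, 0), (3, 0), (4, 0), (5, 1), (7, 1), (9, 1), (6, 1), (1, 0)]

def learn_threshold_rule(samples):
    """Find the threshold t maximizing training accuracy of the rule
    'predict fault-prone iff metric >= t'."""
    best_t, best_acc = None, -1.0
    for t in sorted({x for x, _ in samples}):
        acc = sum((x >= t) == bool(y) for x, y in samples) / len(samples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

t, acc = learn_threshold_rule(data)
# The entire learned model is one inspectable sentence:
print(f"rule: fault_prone if coupling >= {t} (training accuracy {acc:.2f})")
```

A black-box counterpart (e.g. an ensemble or neural model) might score higher on held-out data, but it cannot be summarized as a single rule a reviewer can audit, which is the compromise the paper argues should be balanced inside one decision-making system.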