Comparative study of machine learning test case prioritization for continuous integration testing

IF 1.7 · CAS Tier 3 (Computer Science) · JCR Q3 (Computer Science, Software Engineering)
Dusica Marijan
{"title":"Comparative study of machine learning test case prioritization for continuous integration testing","authors":"Dusica Marijan","doi":"10.1007/s11219-023-09646-0","DOIUrl":null,"url":null,"abstract":"There is a growing body of research indicating the potential of machine learning to tackle complex software testing challenges. One such challenge pertains to continuous integration testing, which is highly time-constrained, and generates a large amount of data coming from iterative code commits and test runs. In such a setting, we can use plentiful test data for training machine learning predictors to identify test cases able to speed up the detection of regression bugs introduced during code integration. However, different machine learning models can have different fault prediction performance depending on the context and the parameters of continuous integration testing, for example, variable time budget available for continuous integration cycles, or the size of test execution history used for learning to prioritize failing test cases. Existing studies on test case prioritization rarely study both of these factors, which are essential for the continuous integration practice. In this study, we perform a comprehensive comparison of the fault prediction performance of machine learning approaches that have shown the best performance on test case prioritization tasks in the literature. We evaluate the accuracy of the classifiers in predicting fault-detecting tests for different values of the continuous integration time budget and with different lengths of test history used for training the classifiers. In evaluation, we use real-world and augmented industrial datasets from a continuous integration practice. The results show that different machine learning models have different performance for different size of test history used for model training and for different time budgets available for test case execution. Our results imply that machine learning approaches for test prioritization in continuous integration testing should be carefully configured to achieve optimal performance.","PeriodicalId":21827,"journal":{"name":"Software Quality Journal","volume":"10 1","pages":"0"},"PeriodicalIF":1.7000,"publicationDate":"2023-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Software Quality Journal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s11219-023-09646-0","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
引用次数: 2

Abstract

There is a growing body of research indicating the potential of machine learning to tackle complex software testing challenges. One such challenge pertains to continuous integration testing, which is highly time-constrained and generates a large amount of data from iterative code commits and test runs. In such a setting, we can use plentiful test data for training machine learning predictors to identify test cases able to speed up the detection of regression bugs introduced during code integration. However, different machine learning models can have different fault prediction performance depending on the context and the parameters of continuous integration testing, for example, the variable time budget available for continuous integration cycles, or the size of the test execution history used for learning to prioritize failing test cases. Existing studies on test case prioritization rarely study both of these factors, which are essential for continuous integration practice. In this study, we perform a comprehensive comparison of the fault prediction performance of machine learning approaches that have shown the best performance on test case prioritization tasks in the literature. We evaluate the accuracy of the classifiers in predicting fault-detecting tests for different values of the continuous integration time budget and with different lengths of test history used for training the classifiers. In the evaluation, we use real-world and augmented industrial datasets from a continuous integration practice. The results show that different machine learning models perform differently for different sizes of the test history used for model training and for different time budgets available for test case execution. Our results imply that machine learning approaches for test prioritization in continuous integration testing should be carefully configured to achieve optimal performance.
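To make the setup described above concrete, the sketch below shows one way a history-based failure predictor could be trained and used to prioritize tests under a CI time budget. The classifier choice (scikit-learn's RandomForestClassifier), the features built from the last k verdicts of each test, and the greedy budget selection are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal sketch of ML-based test case prioritization under a CI time budget.
# Assumption: the features, classifier, and selection policy are illustrative,
# not the paper's actual configuration.
from dataclasses import dataclass, field
import numpy as np
from sklearn.ensemble import RandomForestClassifier

@dataclass
class TestCase:
    name: str
    duration: float   # expected execution time in seconds
    history: list = field(default_factory=list)  # past verdicts, 1 = failed, 0 = passed (newest last)

def features(tc: TestCase, k: int) -> list:
    """Fixed-length feature vector from the last k verdicts of one test case."""
    recent = tc.history[-k:]
    padded = [0] * (k - len(recent)) + recent  # pad short histories with passes
    fail_rate = sum(tc.history) / max(len(tc.history), 1)
    return padded + [fail_rate, tc.duration]

def train(tests, labels, k=10):
    """labels[i] = 1 if tests[i] failed in the next (held-out) CI cycle.
    Assumes both passing and failing outcomes appear in the training data."""
    X = np.array([features(t, k) for t in tests])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, np.array(labels))
    return clf

def prioritize(clf, tests, budget, k=10):
    """Rank tests by predicted failure probability; keep those fitting the budget."""
    X = np.array([features(t, k) for t in tests])
    failure_proba = clf.predict_proba(X)[:, 1]
    ranked = sorted(zip(tests, failure_proba), key=lambda p: p[1], reverse=True)
    selected, used = [], 0.0
    for tc, _ in ranked:
        if used + tc.duration <= budget:
            selected.append(tc)
            used += tc.duration
    return selected
```

In the study's terms, sweeping k (the length of test history used for training) and budget (the time available in a CI cycle) over a grid, and measuring how many of the actually failing tests land in the selected set, reproduces the kind of two-factor comparison the abstract describes.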

Source journal: Software Quality Journal (Engineering & Technology - Computer Science: Software Engineering)
CiteScore: 4.90
Self-citation rate: 5.30%
Annual publications: 26
Review time: >12 weeks
Aims and scope: The aims of the Software Quality Journal are:
(1) To promote awareness of the crucial role of quality management in the effective construction of the software systems developed, used, and/or maintained by organizations in pursuit of their business objectives.
(2) To provide a forum for the exchange of experiences and information on software quality management and the methods, tools, and products used to measure and achieve it.
(3) To provide a vehicle for the publication of academic papers related to all aspects of software quality.
The Journal addresses all aspects of software quality from both a practical and an academic viewpoint. It invites contributions from practitioners and academics, as well as national and international policy and standards-making bodies, and sets out to be the definitive international reference source for such information. The Journal accepts research, technique, case study, survey, and tutorial submissions that address quality-related issues including, but not limited to: internal and external quality standards, management of quality within organizations, technical aspects of quality, quality aspects for product vendors, software measurement and metrics, software testing and other quality assurance techniques, total quality management and cultural aspects, and other technical issues with regard to software quality, including data management, formal methods, safety-critical applications, and CASE.