Reinforcement learning for test case prioritization based on LLEed K-means clustering and dynamic priority factor
Zhongsheng Qian, Qingyuan Yu, Hui Zhu, Jinping Liu, Tingfeng Fu
Information and Software Technology, vol. 179, Article 107654. Published 2024-12-18. DOI: 10.1016/j.infsof.2024.107654
https://www.sciencedirect.com/science/article/pii/S0950584924002593
Citations: 0
Abstract
Integrating reinforcement learning (RL) into test case prioritization (TCP) aims to cope with the dynamic nature and time constraints of continuous integration (CI) testing. However, achieving optimal ranking across CI cycles is challenging if the RL agent starts from an unfavorable initial environment and deals with a dynamic environment characterized by continuous errors during learning. To mitigate the influence of adverse environments, this work proposes an approach to Test Case Prioritization which incorporates Locally Linear Embedding-based K-means Clustering and a Dynamic Priority Factor into Reinforcement Learning (TCP-KDRL). Firstly, we exploit the K-means clustering method with Locally Linear Embedding (LLE) to mine the relationships between test cases, and then assign initial priority factors to the test cases. These test cases are ranked by their initial factors, providing an improved initial learning environment for the agent in RL. Secondly, as the agent learns the ranking strategy across cycles, we design a comprehensive reward indicator that considers both running discrepancy and the relative positions of test cases. Additionally, based on the reward values, the dynamic priority factors of the ranked test cases are adaptively updated in each learning round of RL and the sequence is locally fine-tuned. The fine-tuning strategy provides ample feedback to the agent and enables real-time correction of the erroneous ranking environment, enhancing the generalization of RL across cycles. Finally, the experimental results demonstrate that TCP-KDRL, as an enhanced RL-based TCP method, outperforms other competitive TCP approaches. Specifically, incorporating both the reward indicator and the fine-tuning strategy yields significantly better results than any other combination of two components. For instance, across 12 projects, the average improvements are 0.1548 in APFD and 0.0793 in NRPA. Compared to other TCP methods, the proposed method achieves a notable enhancement, with an increase of 0.6902 in APFD and 0.3816 in NRPA.
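The first step the abstract describes (cluster test cases by feature similarity, then seed each case with an initial priority factor from its cluster) can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the paper clusters LLE-reduced features, while this sketch runs a minimal plain K-means directly on two made-up features per test case (historical failure rate and normalized execution time) and uses the cluster's mean failure rate as the initial priority factor.

```python
def kmeans(points, k, iters=20):
    """Minimal K-means with deterministic init (first k points as centroids)."""
    centroids = [list(p) for p in points[:k]]
    assign = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest centroid by squared Euclidean distance
        for i, p in enumerate(points):
            assign[i] = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centroids[c])))
        # update step: move each centroid to the mean of its members
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign

# toy test cases: name -> (historical failure rate, normalized execution time)
cases = {"t1": (0.9, 0.2), "t2": (0.8, 0.3), "t3": (0.1, 0.9), "t4": (0.05, 0.8)}
names = list(cases)
labels = kmeans([cases[n] for n in names], k=2)

# initial priority factor: mean historical failure rate of the case's cluster
cluster_rate = {c: sum(cases[n][0] for n, l in zip(names, labels) if l == c)
                   / labels.count(c) for c in set(labels)}
priority = {n: cluster_rate[l] for n, l in zip(names, labels)}
order = sorted(names, key=lambda n: -priority[n])
print(order)  # ['t1', 't2', 't3', 't4'] -- failure-prone cluster ranked first
```

In the paper this initial ordering only seeds the RL agent's environment; the agent then refines it cycle by cycle via the reward indicator and dynamic priority factors.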
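APFD (Average Percentage of Faults Detected), the first metric quoted above, has a standard closed form: for an ordering of n tests that together detect m faults, APFD = 1 - (TF_1 + ... + TF_m) / (n * m) + 1 / (2n), where TF_i is the 1-based position of the first test exposing fault i. A small worked example, with a hypothetical 4-test suite and fault mapping:

```python
def apfd(order, faults_of):
    """APFD of a test ordering.

    order: list of test names in execution order.
    faults_of: test name -> set of fault ids that test exposes.
    """
    all_faults = set().union(*faults_of.values())
    n, m = len(order), len(all_faults)
    # record the first position at which each fault is exposed
    first_pos = {}
    for pos, test in enumerate(order, start=1):
        for f in faults_of.get(test, ()):
            first_pos.setdefault(f, pos)
    return 1 - sum(first_pos[f] for f in all_faults) / (n * m) + 1 / (2 * n)

# hypothetical suite: t1 exposes faults 1 and 2, t2 exposes fault 3
faults = {"t1": {1, 2}, "t2": {3}, "t3": set(), "t4": set()}
print(round(apfd(["t1", "t2", "t3", "t4"], faults), 4))  # 0.7917: faults found early
print(round(apfd(["t4", "t3", "t2", "t1"], faults), 4))  # 0.2083: faults found late
```

Higher APFD means faults surface earlier in the run, which is exactly what a CI prioritizer wants; the abstract's 0.1548 and 0.6902 figures are deltas on this 0-to-1 scale.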
Journal introduction:
Information and Software Technology is the international archival journal focusing on research and experience that contributes to the improvement of software development practices. The journal's scope includes methods and techniques to better engineer software and manage its development. Articles submitted for review should have a clear component of software engineering or address ways to improve the engineering and management of software development. Areas covered by the journal include:
• Software management, quality and metrics
• Software processes
• Software architecture, modelling, specification, design and programming
• Functional and non-functional software requirements
• Software testing and verification & validation
• Empirical studies of all aspects of engineering and managing software development
Short Communications is a new section dedicated to short papers addressing new ideas, controversial opinions, "Negative" results and much more. Read the Guide for authors for more information.
The journal encourages and welcomes submissions of systematic literature studies (reviews and maps) within the scope of the journal. Information and Software Technology is the premier outlet for systematic literature studies in software engineering.