Testing reinforcement learning systems: A comprehensive review
Amal Sunba, Jameleddine Hassine, Moataz Ahmed
Journal of Systems and Software, Volume 231, Article 112563
DOI: 10.1016/j.jss.2025.112563
Published online: 2025-08-06
URL: https://www.sciencedirect.com/science/article/pii/S0164121225002328
Reinforcement Learning (RL) enables autonomous decision-making in dynamic environments, making it well suited for complex, high-stakes domains such as healthcare and defense systems. However, RL’s high dimensionality and non-deterministic behavior pose testing challenges. This study presents the first literature review on testing RL systems, analyzing 49 studies published between 2013 and May 2025. The review categorizes RL testing techniques by key workflow components: testing objectives, test generation, test oracles, and test adequacy. It identifies eleven primary gaps, including the lack of validation of RL testing frameworks in real-world applications and the need for specialized testing to verify RL-specific objectives, such as fairness and generalization. Additionally, the review highlights four key challenges: stochasticity leading to inconsistent fault detection; scalability and efficiency constraints in test adequacy; fault-identification complexity arising from RL-specific failure definitions; and validation limitations stemming from reliance on simple tasks and underdeveloped test oracles. Our analysis shows that current research focuses on single-agent RL, robustness, and safety, yet these areas still contain gaps that require further exploration. The findings highlight that testing RL has become an active research area, peaking in 2023 and 2024, with 57% of the reviewed papers published in those two years. The identified challenges and gaps present opportunities for future research, guiding efforts toward more comprehensive and effective RL testing methodologies.
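To make the "stochasticity leading to inconsistent fault detection" challenge concrete, a common mitigation in the surveyed literature is to replace a single-trajectory pass/fail check with a statistical oracle aggregated over many seeded runs. The sketch below is illustrative only and not from the paper: `ToyEnv`, `greedy_policy`, and `statistical_oracle` are hypothetical names standing in for a gym-style environment, an agent under test, and a repeated-run oracle.

```python
import random
import statistics

class ToyEnv:
    """A toy stochastic environment standing in for a gym-style RL task."""

    def __init__(self, seed):
        # Seeding per run makes each episode reproducible on replay.
        self.rng = random.Random(seed)

    def run_episode(self, policy):
        # Episode return of a fixed policy under noisy dynamics:
        # 10 steps, each step's reward perturbed by Gaussian noise.
        return sum(policy() + self.rng.gauss(0.0, 0.1) for _ in range(10))

def greedy_policy():
    # A trivial "agent" that earns roughly 1 reward per step.
    return 1.0

def statistical_oracle(n_runs=30, threshold=9.0, seed=0):
    # Because any single episode is noisy, the oracle aggregates many
    # seeded runs and passes/fails on the mean return rather than on
    # one trajectory -- reducing flaky fault reports.
    returns = [ToyEnv(seed + i).run_episode(greedy_policy)
               for i in range(n_runs)]
    mean_return = statistics.mean(returns)
    return mean_return >= threshold, mean_return

passed, mean_return = statistical_oracle()
```

The design choice here is the one the abstract hints at: determinizing what can be determinized (per-run seeds) and treating the rest statistically, so that a failed test signals a genuine behavioral regression rather than an unlucky sample.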
Journal introduction:
The Journal of Systems and Software publishes papers covering all aspects of software engineering and related hardware-software-systems issues. All articles should include a validation of the idea presented, e.g. through case studies, experiments, or systematic comparisons with other approaches already in practice. Topics of interest include, but are not limited to:
•Methods and tools for, and empirical studies on, software requirements, design, architecture, verification and validation, maintenance and evolution
•Agile, model-driven, service-oriented, open source and global software development
•Approaches for mobile, multiprocessing, real-time, distributed, cloud-based, dependable and virtualized systems
•Human factors and management concerns of software development
•Data management and big data issues of software systems
•Metrics and evaluation, data mining of software development resources
•Business and economic aspects of software development processes
The journal welcomes state-of-the-art surveys and reports of practical experience for all of these topics.