Testing reinforcement learning systems: A comprehensive review

IF 4.1 · CAS Region 2 (Computer Science) · Q1 Computer Science, Software Engineering
Amal Sunba, Jameleddine Hassine, Moataz Ahmed
DOI: 10.1016/j.jss.2025.112563
Journal: Journal of Systems and Software, Volume 231, Article 112563
Publication date: 2025-08-06 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0164121225002328
Citations: 0

Abstract

Reinforcement Learning (RL) enables autonomous decision-making in dynamic environments, making it suited for complex, high-stakes domains like healthcare and defense systems. However, RL’s high dimensionality and non-deterministic behavior pose testing challenges. This study presents the first literature review on testing RL systems, analyzing 49 studies published between 2013 and May 2025. The review categorizes testing RL techniques based on key workflow components: testing objectives, test generation, test oracles, and test adequacy. It identifies eleven primary gaps, including the lack of validation for testing RL frameworks in real-world applications and the need for specialized testing to verify RL-specific objectives, such as fairness and generalization. Additionally, the review highlights four key challenges: stochasticity leading to inconsistent fault detection, scalability and efficiency constraints in testing adequacy, fault identification complexity due to RL-specific failure definitions, and validation limitations due to reliance on simple tasks and underdeveloped test oracles. Our analysis shows that current research focuses on single-agent RL, robustness, and safety, yet these areas still contain gaps that require further exploration. The findings highlight that testing RL has become an active research area, peaking in 2023 and 2024, with 57% of the reviewed papers published these years. The identified challenges and gaps present opportunities for future research, guiding efforts toward more comprehensive and effective methodologies for testing RL.
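The stochasticity challenge the abstract highlights — the same policy can pass or fail a test depending on random seeds — is commonly tamed by aggregating a test oracle's verdict over several seeded runs. A minimal sketch of that idea (the toy environment, function names, and threshold are all hypothetical illustrations, not taken from the paper):

```python
import random

def toy_env_step(state, action, rng):
    # Hypothetical 1-D chain world (illustrative only): the intended move
    # succeeds with probability 0.8; otherwise noise pushes the agent back.
    move = action if rng.random() < 0.8 else -action
    state = max(0, state + move)
    done = state >= 10
    reward = 1.0 if done else -0.1
    return state, reward, done

def run_episode(policy, seed, max_steps=50):
    # Fix the environment's random stream per seed so each run is replayable.
    rng = random.Random(seed)
    state, total = 0, 0.0
    for _ in range(max_steps):
        state, reward, done = toy_env_step(state, policy(state), rng)
        total += reward
        if done:
            break
    return total

def oracle_verdicts(policy, seeds, threshold=-2.0):
    # Per-seed verdicts flag individual runs; the aggregate verdict averages
    # returns over seeds to damp flaky, stochasticity-induced failures.
    returns = [run_episode(policy, s) for s in seeds]
    per_seed = [r >= threshold for r in returns]
    aggregate = sum(returns) / len(returns) >= threshold
    return per_seed, aggregate

always_right = lambda state: 1  # trivial policy: always step toward the goal
per_seed, aggregate = oracle_verdicts(always_right, seeds=range(10))
```

Comparing the per-seed verdicts against the aggregate verdict makes the inconsistent-fault-detection problem concrete: a single unlucky seed can fail while the seed-averaged oracle still passes.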
Source journal
Journal of Systems and Software
Category: Engineering & Technology – Computer Science: Theory & Methods
CiteScore: 8.60
Self-citation rate: 5.70%
Articles per year: 193
Review time: 16 weeks
Journal description: The Journal of Systems and Software publishes papers covering all aspects of software engineering and related hardware-software-systems issues. All articles should include a validation of the idea presented, e.g. through case studies, experiments, or systematic comparisons with other approaches already in practice. Topics of interest include, but are not limited to:
•Methods and tools for, and empirical studies on, software requirements, design, architecture, verification and validation, maintenance and evolution
•Agile, model-driven, service-oriented, open source and global software development
•Approaches for mobile, multiprocessing, real-time, distributed, cloud-based, dependable and virtualized systems
•Human factors and management concerns of software development
•Data management and big data issues of software systems
•Metrics and evaluation, data mining of software development resources
•Business and economic aspects of software development processes
The journal welcomes state-of-the-art surveys and reports of practical experience for all of these topics.