Evaluating Reinforcement Learning Agents for Autonomous Cyber Defence

Abby Morris, Rachael Procter, Caroline Wallbank
Applied AI Letters, vol. 6, no. 3. Published 2025-06-27. DOI: 10.1002/ail2.125. Available at https://onlinelibrary.wiley.com/doi/10.1002/ail2.125.

Abstract

Artificial Intelligence (AI) is set to become an essential tool for defending against machine-speed attacks on increasingly connected cyber networks and systems. It will allow self-defending and self-recovering cyber-defence agents to be developed, which can respond to attacks in a timely manner. But how can these agents be trusted to perform as expected, and how can they be evaluated responsibly and thoroughly? To answer these questions, a Test and Evaluation (T&E) process has been developed to assess cyber-defence agents. The process evaluates the performance, effectiveness, resilience, and generalizability of agents in both low- and high-fidelity cyber environments. This paper demonstrates the low-fidelity part of the process by performing an example evaluation in the Cyber Operations Research Gym (CybORG) environment on Reinforcement Learning (RL) agents trained as part of Cyber Autonomy Gym for Experimentation (CAGE) Challenge 2. The process makes use of novel Measures of Effectiveness (MoE) metrics, which can be used in combination with performance metrics such as the RL reward. MoE are tailored for cyber defence, allowing a greater understanding of agents' defensive abilities within a cyber environment. Agents are evaluated against multiple conditions that perturb the environment to investigate their robustness to scenarios not seen during training. The results from this evaluation process will help inform decisions around the benefits and risks of integrating autonomous agents into existing or future cyber systems.

