EPPTA: Efficient partially observable reinforcement learning agent for penetration testing applications

Zegang Li, Qian Zhang, Guangwen Yang
Journal: Engineering Reports
DOI: 10.1002/eng2.12818
Published: 2023-12-15 (Journal Article)
Citations: 0

Abstract

In recent years, penetration testing (pen-testing) has emerged as a crucial process for evaluating the security of network infrastructure by simulating real-world cyber-attacks. Automating pen-testing with reinforcement learning (RL) enables more frequent assessments, reduces human effort, and improves scalability. However, real-world pen-testing tasks often involve incomplete knowledge of the target network system, and effectively managing this intrinsic uncertainty via partially observable Markov decision processes (POMDPs) remains a persistent challenge. Moreover, RL agents must formulate intricate strategies to cope with partially observable environments, which increases computational and time costs. To address these issues, this study introduces EPPTA (efficient POMDP-driven penetration testing agent), an agent built on an asynchronous RL framework and designed for pen-testing tasks in partially observable environments. EPPTA incorporates an implicit belief module, grounded in the belief update formula of the classical POMDP model, which represents the agent's probabilistic estimate of the current environment state. Furthermore, by integrating the algorithm with the high-performance RL framework Sample Factory, EPPTA significantly reduces convergence time compared with existing pen-testing methods, achieving an approximately 20-fold speedup. Empirical results across various pen-testing scenarios validate EPPTA's superior task reward performance and enhanced scalability, providing substantial support for efficient and advanced evaluation of network infrastructure security.
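The belief update that the abstract refers to is the standard Bayesian filter for discrete POMDPs: after taking action a and receiving observation o, the new belief is b'(s') ∝ O(o | s', a) · Σ_s T(s' | s, a) · b(s). The sketch below is a minimal illustration of that formula only; the tensors, the two-state "host vulnerable vs. patched" example, and all names are illustrative assumptions, not details taken from the paper or from EPPTA's implementation.

```python
import numpy as np

def belief_update(belief, action, obs, T, O):
    """Discrete POMDP belief update: b'(s') ∝ O[a, s', o] * sum_s T[a, s, s'] * b(s)."""
    predicted = belief @ T[action]            # prediction step: sum_s T(s'|s,a) b(s)
    updated = O[action, :, obs] * predicted   # correction step: weight by obs likelihood
    return updated / updated.sum()            # normalize to a probability distribution

# Toy example: 2 hidden states (0 = host vulnerable, 1 = patched),
# 1 action (a scan), 2 observations (0 = open port seen, 1 = nothing seen).
T = np.array([[[0.9, 0.1],
               [0.0, 1.0]]])                  # T[a, s, s']
O = np.array([[[0.8, 0.2],
               [0.3, 0.7]]])                  # O[a, s', o]

b0 = np.array([0.5, 0.5])                     # uniform prior over hidden states
b1 = belief_update(b0, action=0, obs=0, T=T, O=O)
# Observing an open port shifts probability mass toward "vulnerable".
```

An implicit belief module, as described in the abstract, would learn a representation playing the role of b from observation histories rather than computing this update explicitly, but the explicit formula is the grounding it approximates.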