Hackphyr: A Local Fine-Tuned LLM Agent for Network Security Environments

Maria Rigaki, Carlos Catania, Sebastian Garcia
arXiv:2409.11276 (arXiv - CS - Cryptography and Security), published 2024-09-17

Abstract

Large Language Models (LLMs) have shown remarkable potential across various domains, including cybersecurity. Using commercial cloud-based LLMs may be undesirable due to privacy concerns, costs, and network connectivity constraints. In this paper, we present Hackphyr, a locally fine-tuned LLM to be used as a red-team agent within network security environments. Our fine-tuned 7 billion parameter model can run on a single GPU card and achieves performance comparable with much larger and more powerful commercial models such as GPT-4. Hackphyr clearly outperforms other models, including GPT-3.5-turbo, and baselines such as Q-learning agents, in complex, previously unseen scenarios. To achieve this performance, we generated a new task-specific cybersecurity dataset to enhance the base model's capabilities. Finally, we conducted a comprehensive analysis of the agents' behaviors that provides insights into the planning abilities and potential shortcomings of such agents, contributing to the broader understanding of LLM-based agents in cybersecurity contexts.
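The agent setting the abstract describes, a language model choosing red-team actions step by step inside a network security environment, can be sketched as an observe-prompt-act loop. The toy environment, the action names, and the `query_model` stub below are illustrative assumptions, not the paper's actual interface; in practice the stub would be replaced by a call to the locally served fine-tuned 7B model.

```python
# Hypothetical sketch of an LLM red-team agent loop. The environment,
# action set, and model stub are illustrative, not the paper's code.

ACTIONS = ["ScanNetwork", "FindServices", "ExploitService", "ExfiltrateData"]

class ToyNetworkEnv:
    """Minimal stand-in for a network security environment."""
    def __init__(self):
        self.stage = 0  # progress through a simple kill chain

    def observation(self):
        return {"stage": self.stage, "known_hosts": self.stage + 1}

    def step(self, action):
        # Reward progress only when the action matches the expected stage.
        if action == ACTIONS[self.stage]:
            self.stage += 1
            reward = 1.0
        else:
            reward = -0.1
        done = self.stage == len(ACTIONS)
        return self.observation(), reward, done

def query_model(prompt):
    """Placeholder for a locally hosted fine-tuned model; here it just
    parses the stage out of the prompt and picks the matching action."""
    stage = int(prompt.split("stage=")[1].split(";")[0])
    return ACTIONS[stage]

def run_episode(env, max_steps=10):
    total = 0.0
    for _ in range(max_steps):
        obs = env.observation()
        prompt = f"state: stage={obs['stage']}; choose next red-team action"
        action = query_model(prompt)
        obs, reward, done = env.step(action)
        total += reward
        if done:
            break
    return total

print(run_episode(ToyNetworkEnv()))  # 4.0 with this deterministic stub
```

The same loop structure also accommodates the Q-learning baselines mentioned above: only `query_model` changes, from prompting a language model to looking up a learned Q-table.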