IF 4.0 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE
Aditya K Sood, Sherali Zeadally, EenKee Hong
{"title":"The Paradigm of Hallucinations in AI-driven cybersecurity systems: Understanding taxonomy, classification outcomes, and mitigations","authors":"Aditya K Sood ,&nbsp;Sherali Zeadally ,&nbsp;EenKee Hong","doi":"10.1016/j.compeleceng.2025.110307","DOIUrl":null,"url":null,"abstract":"<div><div>The adoption of AI to solve cybersecurity problems is occurring exponentially. However, AI-driven cybersecurity systems face significant challenges due to the impact of hallucinations in Large Language Models (LLMs). In AI-driven cybersecurity systems, hallucinations refer to instances when an AI model generates fabricated, inaccurate, and misleading information that impacts the security posture of organizations. This failure to recognize and misreport security threats identifies benign activities as malicious, invents insights not grounded to actual cyber threats, and causes real threats to go undetected due to erroneous interpretations. Hallucinations are a critical problem in AI-driven cybersecurity because they can lead to severe vulnerabilities, erode trust in automated systems, and divert resources to address non-existent threats. In cybersecurity, where real-time, accurate insights are vital, hallucinated outputs—such as mistakenly generated alerts, can cause a misallocation of time and resources. It is crucial to address hallucinations by improving LLM accuracy, grounding outputs in real-time data, and implementing human oversight mechanisms to ensure that AI-based cybersecurity systems remain trustworthy, reliable, and capable of defending against sophisticated threats. We present a taxonomy of hallucinations in LLMs for cybersecurity, including mapping LLM responses to classification outcomes (confusion matrix components). Finally, we discuss mitigation strategies to combat hallucinations.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":"124 ","pages":"Article 110307"},"PeriodicalIF":4.0000,"publicationDate":"2025-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers & Electrical Engineering","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0045790625002502","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

The Paradigm of Hallucinations in AI-driven cybersecurity systems: Understanding taxonomy, classification outcomes, and mitigations

Abstract
The adoption of AI to solve cybersecurity problems is growing exponentially. However, AI-driven cybersecurity systems face significant challenges due to the impact of hallucinations in Large Language Models (LLMs). In AI-driven cybersecurity systems, hallucinations refer to instances in which an AI model generates fabricated, inaccurate, or misleading information that impacts the security posture of organizations. Such failures to recognize or correctly report security threats can label benign activities as malicious, invent insights not grounded in actual cyber threats, and allow real threats to go undetected because of erroneous interpretations. Hallucinations are a critical problem in AI-driven cybersecurity because they can lead to severe vulnerabilities, erode trust in automated systems, and divert resources to addressing non-existent threats. In cybersecurity, where real-time, accurate insights are vital, hallucinated outputs, such as mistakenly generated alerts, can cause a misallocation of time and resources. It is crucial to address hallucinations by improving LLM accuracy, grounding outputs in real-time data, and implementing human oversight mechanisms to ensure that AI-based cybersecurity systems remain trustworthy, reliable, and capable of defending against sophisticated threats. We present a taxonomy of hallucinations in LLMs for cybersecurity, including a mapping of LLM responses to classification outcomes (confusion matrix components). Finally, we discuss mitigation strategies to combat hallucinations.
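The abstract's mapping of LLM responses to confusion-matrix components can be made concrete with a small sketch. The snippet below is illustrative only and not taken from the paper: the event representation and labels are hypothetical, with hallucinated alerts landing in the false-positive cell and missed real threats in the false-negative cell.

```python
# Illustrative sketch (not from the paper): tally LLM security verdicts
# against ground truth as confusion-matrix components.
from collections import Counter

def confusion_matrix(events):
    """Each event is a (llm_verdict, ground_truth) pair,
    where both values are either "malicious" or "benign"."""
    counts = Counter()
    for llm_verdict, ground_truth in events:
        if llm_verdict == "malicious" and ground_truth == "malicious":
            counts["TP"] += 1  # real threat correctly flagged
        elif llm_verdict == "malicious" and ground_truth == "benign":
            counts["FP"] += 1  # hallucinated alert: benign activity flagged as malicious
        elif llm_verdict == "benign" and ground_truth == "malicious":
            counts["FN"] += 1  # real threat missed due to erroneous interpretation
        else:
            counts["TN"] += 1  # benign activity correctly ignored
    return counts

# Hypothetical example: two hallucinated alerts (FP) and one missed threat (FN)
events = [
    ("malicious", "malicious"),
    ("malicious", "benign"),
    ("malicious", "benign"),
    ("benign", "malicious"),
    ("benign", "benign"),
]
print(confusion_matrix(events))  # expected tallies: FP=2, TP=1, FN=1, TN=1
```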
Source journal
Computers & Electrical Engineering
Category: Engineering Technology - Engineering: Electronic & Electrical
CiteScore: 9.20
Self-citation rate: 7.00%
Articles published: 661
Review time: 47 days
Journal description: The impact of computers has nowhere been more revolutionary than in electrical engineering. The design, analysis, and operation of electrical and electronic systems are now dominated by computers, a transformation that has been motivated by the natural ease of interface between computers and electrical systems, and the promise of spectacular improvements in speed and efficiency. Published since 1973, Computers & Electrical Engineering provides rapid publication of topical research into the integration of computer technology and computational techniques with electrical and electronic systems. The journal publishes papers featuring novel implementations of computers and computational techniques in areas like signal and image processing, high-performance computing, parallel processing, and communications. Special attention will be paid to papers describing innovative architectures, algorithms, and software tools.