Too much of a good thing: How varying levels of automation impact user performance in a simulated intrusion detection task

Impact Factor: 4.9 · Q1 (Psychology, Experimental)
Robert Thomson, Daniel N. Cassenti, Thom Hawkins
{"title":"Too much of a good thing: How varying levels of automation impact user performance in a simulated intrusion detection task","authors":"Robert Thomson ,&nbsp;Daniel N. Cassenti ,&nbsp;Thom Hawkins","doi":"10.1016/j.chbr.2024.100511","DOIUrl":null,"url":null,"abstract":"<div><div>Cyber analysts face a demanding task when prioritizing alerts from intrusion detection systems, balancing the challenge of numerous false positives from rule-based methods with the critical need to detect genuine cyber threats, necessitating unwavering vigilance and imposing a significant cognitive burden. In this field, there exists pressure to incorporate artificial intelligence techniques to enhance the automation of analyst workflows, yet without a clear grasp of how elevating the <em>Level of Automation</em> impacts the allocation of attentional and cognitive resources among analysts. This paper describes a simulated AI-assisted intrusion detection task which varies five degrees of automation as well as the sensitivity of the assistant, evaluating performance-based (e.g., accuracy, response time, sensitivity, response bias) and subjective (e.g., surveys on workload and trust) measures. Participants white-listed a series of time-sensitive alerts in a simulated Snort® environment. Our findings indicate that elevating the level of automation altered participants’ behavior, evident in their tendency to display a response bias towards rejecting hits (reduced hit rate and false alarm rate) when overriding an AI’s decision. Additionally, participants subjectively reported experiencing a decreased cognitive workload with a more precise algorithm, irrespective of any variance in their actual performance. Our findings suggest the necessity for additional research before implementing further automation into analyst workflows, as the demands of tasks evolve with escalating levels of automation.</div></div>","PeriodicalId":72681,"journal":{"name":"Computers in human behavior reports","volume":"16 ","pages":"Article 100511"},"PeriodicalIF":4.9000,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in human behavior reports","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2451958824001441","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
Citations: 0

Abstract

Cyber analysts face a demanding task when prioritizing alerts from intrusion detection systems: they must sift through the numerous false positives produced by rule-based methods without missing genuine cyber threats, a balance that demands sustained vigilance and imposes a significant cognitive burden. There is growing pressure in the field to apply artificial intelligence techniques to further automate analyst workflows, yet little is understood about how raising the Level of Automation affects how analysts allocate attentional and cognitive resources. This paper describes a simulated AI-assisted intrusion detection task that varies five levels of automation as well as the sensitivity of the AI assistant, evaluating both performance-based measures (e.g., accuracy, response time, sensitivity, response bias) and subjective measures (e.g., surveys of workload and trust). Participants white-listed a series of time-sensitive alerts in a simulated Snort® environment. Our findings indicate that raising the level of automation altered participants' behavior: when overriding the AI's decision, they showed a conservative response bias toward rejecting alerts, reducing both hit rate and false alarm rate. In addition, participants reported a lower subjective cognitive workload when paired with a more accurate algorithm, irrespective of any change in their actual performance. These findings suggest that further research is needed before additional automation is introduced into analyst workflows, because task demands evolve as the level of automation escalates.
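For readers unfamiliar with the performance measures named above, the sketch below shows how sensitivity (d′) and response bias (criterion c) are conventionally derived from hit and false-alarm counts in signal detection theory. It is an illustrative Python snippet, not the authors' analysis code; the log-linear correction used here is only one common choice for handling extreme rates.

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute standard signal detection measures from response counts.

    Applies a log-linear correction (add 0.5 to each cell, 1 to each
    denominator) so perfect hit or false-alarm rates do not yield
    infinite z-scores. This correction is an assumption, not a detail
    taken from the paper.
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF

    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)

    d_prime = z(hit_rate) - z(fa_rate)             # sensitivity
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # bias; c > 0 is conservative
    return hit_rate, fa_rate, d_prime, criterion

# Example: a participant who becomes more conservative rejects more alerts,
# lowering both hit rate and false-alarm rate while the criterion c rises.
print(sdt_measures(hits=30, misses=20, false_alarms=5, correct_rejections=45))
```

Under this framing, the reported pattern (lower hit rate together with lower false alarm rate when overriding the AI) corresponds to a shift in criterion rather than a change in sensitivity.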