A framework for automated evaluation of security metrics

M. Zaber, S. Nair
{"title":"用于自动评估安全度量的框架","authors":"M. Zaber, S. Nair","doi":"10.1145/3407023.3409197","DOIUrl":null,"url":null,"abstract":"Observation is the foundation of scientific experimentation. We consider observations to be measurements when they are quantified with respect to an agreed upon scale, or measurement unit. A number of metrics have been proposed in the literature which attempt to quantify some property of cyber security, but no systematic validation has been conducted to characterize the behaviour of these metrics as measurement instruments, or to understand how the quantity being measured is related to the security of the system under test. In this paper we broadly classify the body of available security metrics against the recently released Cyber Security Body of Knowledge, and identify common attributes across metric classes which may be useful anchors for comparison. We propose a general four stage evaluation pipeline to encapsulate the processing specifics of each metric, encouraging a separation of the actual measurement logic from the model it is often paired with in publication. Decoupling these stages allows us to systematically apply a range of inputs to a set of metrics, and we demonstrate some important results in our proof of concept. First, we determine a metric's suitability for use as a measurement instrument against validation criteria like operational range, sensitivity, and precision by observing performance over controlled variations of a reference input. Then we show how evaluating multiple metrics against common reference sets allows direct comparison of results and identification of patterns in measurement performance. Consequently, development and operations teams can also use this strategy to evaluate security tradeoffs between competing input designs or to measure the effects of incremental changes during production deployments.","PeriodicalId":121225,"journal":{"name":"Proceedings of the 15th International Conference on Availability, Reliability and Security","volume":"14 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"A framework for automated evaluation of security metrics\",\"authors\":\"M. Zaber, S. Nair\",\"doi\":\"10.1145/3407023.3409197\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Observation is the foundation of scientific experimentation. We consider observations to be measurements when they are quantified with respect to an agreed upon scale, or measurement unit. A number of metrics have been proposed in the literature which attempt to quantify some property of cyber security, but no systematic validation has been conducted to characterize the behaviour of these metrics as measurement instruments, or to understand how the quantity being measured is related to the security of the system under test. In this paper we broadly classify the body of available security metrics against the recently released Cyber Security Body of Knowledge, and identify common attributes across metric classes which may be useful anchors for comparison. We propose a general four stage evaluation pipeline to encapsulate the processing specifics of each metric, encouraging a separation of the actual measurement logic from the model it is often paired with in publication. 
Decoupling these stages allows us to systematically apply a range of inputs to a set of metrics, and we demonstrate some important results in our proof of concept. First, we determine a metric's suitability for use as a measurement instrument against validation criteria like operational range, sensitivity, and precision by observing performance over controlled variations of a reference input. Then we show how evaluating multiple metrics against common reference sets allows direct comparison of results and identification of patterns in measurement performance. Consequently, development and operations teams can also use this strategy to evaluate security tradeoffs between competing input designs or to measure the effects of incremental changes during production deployments.\",\"PeriodicalId\":121225,\"journal\":{\"name\":\"Proceedings of the 15th International Conference on Availability, Reliability and Security\",\"volume\":\"14 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-08-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 15th International Conference on Availability, Reliability and Security\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3407023.3409197\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 15th International Conference on Availability, Reliability and Security","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3407023.3409197","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

Observation is the foundation of scientific experimentation. We consider observations to be measurements when they are quantified with respect to an agreed-upon scale or measurement unit. A number of metrics have been proposed in the literature which attempt to quantify some property of cyber security, but no systematic validation has been conducted to characterize the behaviour of these metrics as measurement instruments, or to understand how the quantity being measured is related to the security of the system under test. In this paper we broadly classify the body of available security metrics against the recently released Cyber Security Body of Knowledge, and identify common attributes across metric classes which may be useful anchors for comparison. We propose a general four-stage evaluation pipeline to encapsulate the processing specifics of each metric, encouraging a separation of the actual measurement logic from the model it is often paired with in publication. Decoupling these stages allows us to systematically apply a range of inputs to a set of metrics, and we demonstrate some important results in our proof of concept. First, we determine a metric's suitability for use as a measurement instrument against validation criteria like operational range, sensitivity, and precision by observing performance over controlled variations of a reference input. Then we show how evaluating multiple metrics against common reference sets allows direct comparison of results and identification of patterns in measurement performance. Consequently, development and operations teams can also use this strategy to evaluate security tradeoffs between competing input designs or to measure the effects of incremental changes during production deployments.
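To make the decoupling concrete, below is a minimal Python sketch of a pipeline with this shape. The stage names (prepare, build_model, measure, analyze), the toy open-port metric, and the summary statistics are illustrative assumptions, not the actual stages or metrics defined in the paper.

from dataclasses import dataclass
from statistics import mean, stdev
from typing import Callable, Sequence

# Hypothetical stage interfaces -- the paper defines its own four stages;
# these names are illustrative only.

@dataclass
class MetricPipeline:
    prepare: Callable[[object], object]          # stage 1: normalise the raw input
    build_model: Callable[[object], object]      # stage 2: build the metric-specific model
    measure: Callable[[object], float]           # stage 3: the actual measurement logic
    analyze: Callable[[Sequence[float]], dict]   # stage 4: aggregate and summarise scores

    def run(self, inputs: Sequence[object]) -> dict:
        scores = [self.measure(self.build_model(self.prepare(x))) for x in inputs]
        return self.analyze(scores)

def validation_summary(scores: Sequence[float]) -> dict:
    # Simple stand-ins for the validation criteria named in the abstract:
    # observed operational range and a dispersion-based precision proxy.
    return {
        "range": (min(scores), max(scores)),
        "mean": mean(scores),
        "spread": stdev(scores) if len(scores) > 1 else 0.0,
    }

# Toy metric (hypothetical): attack-surface proxy = count of open ports.
pipeline = MetricPipeline(
    prepare=lambda host: sorted(set(host["open_ports"])),
    build_model=lambda ports: {"ports": ports},
    measure=lambda model: float(len(model["ports"])),
    analyze=validation_summary,
)

# Controlled variations of a reference input: add one port at a time and
# observe whether the metric responds (a crude sensitivity check).
reference = {"open_ports": [22, 80, 443]}
variations = [
    {"open_ports": reference["open_ports"] + list(range(8000, 8000 + k))}
    for k in range(5)
]
print(pipeline.run(variations))

Because the measurement logic is a plain callable, swapping in a different metric while holding the reference inputs fixed yields the kind of direct, like-for-like comparison the abstract describes.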