{"title":"用于自动评估安全度量的框架","authors":"M. Zaber, S. Nair","doi":"10.1145/3407023.3409197","DOIUrl":null,"url":null,"abstract":"Observation is the foundation of scientific experimentation. We consider observations to be measurements when they are quantified with respect to an agreed upon scale, or measurement unit. A number of metrics have been proposed in the literature which attempt to quantify some property of cyber security, but no systematic validation has been conducted to characterize the behaviour of these metrics as measurement instruments, or to understand how the quantity being measured is related to the security of the system under test. In this paper we broadly classify the body of available security metrics against the recently released Cyber Security Body of Knowledge, and identify common attributes across metric classes which may be useful anchors for comparison. We propose a general four stage evaluation pipeline to encapsulate the processing specifics of each metric, encouraging a separation of the actual measurement logic from the model it is often paired with in publication. Decoupling these stages allows us to systematically apply a range of inputs to a set of metrics, and we demonstrate some important results in our proof of concept. First, we determine a metric's suitability for use as a measurement instrument against validation criteria like operational range, sensitivity, and precision by observing performance over controlled variations of a reference input. Then we show how evaluating multiple metrics against common reference sets allows direct comparison of results and identification of patterns in measurement performance. Consequently, development and operations teams can also use this strategy to evaluate security tradeoffs between competing input designs or to measure the effects of incremental changes during production deployments.","PeriodicalId":121225,"journal":{"name":"Proceedings of the 15th International Conference on Availability, Reliability and Security","volume":"14 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"A framework for automated evaluation of security metrics\",\"authors\":\"M. Zaber, S. Nair\",\"doi\":\"10.1145/3407023.3409197\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Observation is the foundation of scientific experimentation. We consider observations to be measurements when they are quantified with respect to an agreed upon scale, or measurement unit. A number of metrics have been proposed in the literature which attempt to quantify some property of cyber security, but no systematic validation has been conducted to characterize the behaviour of these metrics as measurement instruments, or to understand how the quantity being measured is related to the security of the system under test. In this paper we broadly classify the body of available security metrics against the recently released Cyber Security Body of Knowledge, and identify common attributes across metric classes which may be useful anchors for comparison. We propose a general four stage evaluation pipeline to encapsulate the processing specifics of each metric, encouraging a separation of the actual measurement logic from the model it is often paired with in publication. 
Decoupling these stages allows us to systematically apply a range of inputs to a set of metrics, and we demonstrate some important results in our proof of concept. First, we determine a metric's suitability for use as a measurement instrument against validation criteria like operational range, sensitivity, and precision by observing performance over controlled variations of a reference input. Then we show how evaluating multiple metrics against common reference sets allows direct comparison of results and identification of patterns in measurement performance. Consequently, development and operations teams can also use this strategy to evaluate security tradeoffs between competing input designs or to measure the effects of incremental changes during production deployments.\",\"PeriodicalId\":121225,\"journal\":{\"name\":\"Proceedings of the 15th International Conference on Availability, Reliability and Security\",\"volume\":\"14 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-08-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 15th International Conference on Availability, Reliability and Security\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3407023.3409197\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 15th International Conference on Availability, Reliability and Security","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3407023.3409197","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A framework for automated evaluation of security metrics
Observation is the foundation of scientific experimentation. We consider observations to be measurements when they are quantified with respect to an agreed-upon scale, or measurement unit. A number of metrics have been proposed in the literature that attempt to quantify some property of cyber security, but no systematic validation has been conducted to characterize the behaviour of these metrics as measurement instruments, or to understand how the quantity being measured relates to the security of the system under test. In this paper we broadly classify the body of available security metrics against the recently released Cyber Security Body of Knowledge, and identify common attributes across metric classes which may serve as useful anchors for comparison. We propose a general four-stage evaluation pipeline to encapsulate the processing specifics of each metric, encouraging a separation of the actual measurement logic from the model it is often paired with in publication. Decoupling these stages allows us to systematically apply a range of inputs to a set of metrics, and we demonstrate some important results in our proof of concept. First, we determine a metric's suitability for use as a measurement instrument against validation criteria such as operational range, sensitivity, and precision by observing performance over controlled variations of a reference input. Then we show how evaluating multiple metrics against common reference sets allows direct comparison of results and identification of patterns in measurement performance. Consequently, development and operations teams can also use this strategy to evaluate security tradeoffs between competing input designs or to measure the effects of incremental changes during production deployments.
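To make the decoupling idea concrete, below is a minimal Python sketch of what such a staged pipeline could look like. The stage names (prepare, measure, validate, compare), the toy host-counting metric, and the monotonicity check are illustrative assumptions for this sketch only; they are not the interface, stages, or metrics defined in the paper.

```python
# Hypothetical sketch of a decoupled, staged metric-evaluation pipeline.
# Stage names, interfaces, and the toy metric are illustrative assumptions,
# not the authors' implementation.
from dataclasses import dataclass
from typing import Callable, Iterable


@dataclass
class Stage:
    name: str
    run: Callable  # each stage transforms the data produced by the previous stage


def build_pipeline(prepare, measure, validate, compare) -> list[Stage]:
    """Keep the measurement logic (measure) separate from the comparison model (compare)."""
    return [
        Stage("prepare", prepare),
        Stage("measure", measure),
        Stage("validate", validate),
        Stage("compare", compare),
    ]


def evaluate(pipeline: Iterable[Stage], reference_inputs):
    """Push controlled variations of a reference input through each stage in order."""
    data = reference_inputs
    for stage in pipeline:
        data = stage.run(data)
    return data


# Toy example: a "metric" scoring a network description by host count,
# evaluated over controlled variations of a reference input.
reference_inputs = [{"hosts": n} for n in (10, 20, 40, 80)]  # controlled variation

pipeline = build_pipeline(
    prepare=lambda inputs: inputs,                                         # normalise inputs
    measure=lambda inputs: [(x["hosts"], x["hosts"] * 0.9) for x in inputs],  # metric score per input
    validate=lambda scores: [(h, s) for h, s in scores],                   # range/sensitivity checks would go here
    compare=lambda scores: {"monotonic": all(b[1] > a[1] for a, b in zip(scores, scores[1:]))},
)

print(evaluate(pipeline, reference_inputs))  # {'monotonic': True}
```

The point of the structure is that the measurement function can be swapped out independently of the validation and comparison logic, so multiple metrics can be run over the same reference inputs and their results compared directly, in the spirit of the separation the abstract describes.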