Using an ASG Based Generative Policy to Model Human Rules
G. White, J. Ingham, Mark Law, A. Russo
DOI: 10.1109/SMARTCOMP.2019.00036
Published in: 2019 IEEE International Conference on Smart Computing (SMARTCOMP), June 2019
Citations: 4
Abstract
Generative policies have recently been researched as a method for next-generation security policies. They are created using either traditional machine learning techniques or, more recently, inductive learning of answer set programs. The latter method targets the learning of Answer Set Grammars (ASG), a new notion of generative policy model for security policies that has the benefit of transparent explainability of the learned outcomes. This paper proposes a military scenario based on logistical resupply from a military base to coalition forces located in a nearby urban area or city. We describe the scenario and accompanying policy such that the context of the resupply missions (and therefore the policy) changes over time. The set of policies and related changes over time have been manually defined using a set of human-created rules, replicating how security policies would currently be written by humans in such scenarios. We show how inductive learning of answer set programs can successfully learn ASG generative policy models that capture the human-driven rules from just example traces and decisions made at different time points, and with respect to different contextual situations that can arise during the resupply mission. These results demonstrate the utility of ASG generative policies as a method for modelling human-driven policy rules.
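To make the idea of context-dependent, human-driven policy rules concrete, here is a minimal hypothetical sketch (not taken from the paper): a hand-written resupply policy whose decision depends on the mission context, of the kind an ASG generative model would be trained to reproduce from example traces. The context names, escort requirement, and convoy-size threshold are illustrative assumptions, not the paper's actual rule set.

```python
# Hypothetical hand-written policy rules for a resupply mission.
# Contexts, thresholds, and the escort condition are illustrative
# assumptions, not the rules used in the paper.

def resupply_allowed(context: str, convoy_size: int, escort: bool) -> bool:
    """Return True if a resupply mission is permitted under the policy."""
    if context == "peacetime":
        return True                           # no restrictions
    if context == "elevated":
        return escort                         # convoys must be escorted
    if context == "hostile":
        return escort and convoy_size <= 3    # escorted, small convoys only
    return False                              # unknown context: deny by default

print(resupply_allowed("elevated", 5, escort=True))   # True
print(resupply_allowed("hostile", 5, escort=True))    # False
```

Traces of such calls (context, inputs, decision) at different time points are exactly the kind of example data from which, per the abstract, the inductive learner recovers an explainable ASG policy model.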