{"title":"Quantitative access control with partially-observable Markov decision processes","authors":"F. Martinelli, C. Morisset","doi":"10.1145/2133601.2133623","DOIUrl":null,"url":null,"abstract":"This paper presents a novel access control framework reducing the access control problem to a traditional decision problem, thus allowing a policy designer to reuse tools and techniques from the decision theory. We propose here to express, within a single framework, the notion of utility of an access, decisions beyond the traditional allowing/denying of an access, the uncertainty over the effect of executing a given decision, the uncertainty over the current state of the system, and to optimize this process for a (probabilistic) sequence of requests. We show that an access control mechanism including these different concepts can be specified as a (Partially Observable) Markov Decision Process, and we illustrate this framework with a running example, which includes notions of conflict, critical resource, mitigation and auditing decisions, and we show that for a given sequence of requests, it is possible to calculate an optimal policy different from the naive one. This optimization is still possible even for several probable sequences of requests.","PeriodicalId":90472,"journal":{"name":"CODASPY : proceedings of the ... ACM conference on data and application security and privacy. ACM Conference on Data and Application Security & Privacy","volume":"45 1","pages":"169-180"},"PeriodicalIF":0.0000,"publicationDate":"2012-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"22","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"CODASPY : proceedings of the ... ACM conference on data and application security and privacy. ACM Conference on Data and Application Security & Privacy","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2133601.2133623","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 22
Abstract
This paper presents a novel access control framework that reduces the access control problem to a traditional decision problem, allowing a policy designer to reuse tools and techniques from decision theory. We propose to express, within a single framework, the notion of the utility of an access, decisions beyond the traditional allowing or denying of an access, the uncertainty over the effect of executing a given decision, and the uncertainty over the current state of the system, and to optimize this process for a (probabilistic) sequence of requests. We show that an access control mechanism combining these concepts can be specified as a (Partially Observable) Markov Decision Process, and we illustrate the framework with a running example that includes notions of conflict, critical resources, mitigation, and auditing decisions. For a given sequence of requests, we show that it is possible to compute an optimal policy that differs from the naive one; this optimization remains possible even for several probable sequences of requests.
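To make the idea concrete, here is a minimal sketch of how an access control decision problem can be cast as a decision-theoretic optimization and solved with standard tools. For brevity it uses a fully-observable MDP solved by finite-horizon value iteration rather than the paper's POMDP formulation, and every state, decision, probability, and utility value below is an illustrative assumption, not taken from the paper's running example.

```python
# Illustrative sketch: access control as a small MDP solved by value iteration.
# All names and numbers are hypothetical; the paper itself uses a POMDP, which
# additionally models uncertainty over the current state via observations.

# System states: whether a critical resource is currently in a conflicting use.
STATES = ["safe", "conflict"]

# Decisions available to the mechanism, beyond plain allow/deny.
DECISIONS = ["allow", "deny", "allow_and_audit"]

# Transition model P[s][d]: distribution over next states, capturing
# uncertainty over the effect of executing a given decision.
P = {
    "safe": {
        "allow":           {"safe": 0.8, "conflict": 0.2},
        "deny":            {"safe": 1.0, "conflict": 0.0},
        "allow_and_audit": {"safe": 0.9, "conflict": 0.1},
    },
    "conflict": {
        "allow":           {"safe": 0.0, "conflict": 1.0},
        "deny":            {"safe": 0.5, "conflict": 0.5},
        "allow_and_audit": {"safe": 0.3, "conflict": 0.7},
    },
}

# Immediate utility R[s][d]: granting access is useful, granting it in a
# conflict state is costly, and auditing carries a small overhead.
R = {
    "safe":     {"allow": 1.0,  "deny": 0.0,  "allow_and_audit": 0.8},
    "conflict": {"allow": -5.0, "deny": -0.5, "allow_and_audit": -1.0},
}


def value_iteration(horizon, gamma=0.95):
    """Finite-horizon value iteration; returns state values and per-step policies."""
    V = {s: 0.0 for s in STATES}
    policies = []
    for _ in range(horizon):
        new_V, policy = {}, {}
        for s in STATES:
            best_d, best_q = None, float("-inf")
            for d in DECISIONS:
                # Expected utility of taking decision d in state s.
                q = R[s][d] + gamma * sum(p * V[s2] for s2, p in P[s][d].items())
                if q > best_q:
                    best_d, best_q = d, q
            new_V[s], policy[s] = best_q, best_d
        V = new_V
        policies.append(policy)
    return V, policies


if __name__ == "__main__":
    values, policies = value_iteration(horizon=10)
    print("State values:", values)
    # policies[-1] is the policy for the full remaining horizon (first request).
    print("First-step policy:", policies[-1])
```

With utilities like these, the optimal policy can differ from the naive "always allow when safe, always deny otherwise" rule; for instance, auditing may become preferable when the risk of drifting into a conflict state outweighs the audit overhead. Extending the sketch to a POMDP would replace the known state with a belief distribution updated from observations, which is the setting the paper actually addresses.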