{"title":"Controlled query evaluation in description logics through consistent query answering","authors":"Gianluca Cima , Domenico Lembo , Riccardo Rosati , Domenico Fabio Savo","doi":"10.1016/j.artint.2024.104176","DOIUrl":null,"url":null,"abstract":"<div><p>Controlled Query Evaluation (CQE) is a framework for the protection of confidential data, where a <em>policy</em> given in terms of logic formulae indicates which information must be kept private. Functions called <em>censors</em> filter query answering so that no answers are returned that may lead a user to infer data protected by the policy. The preferred censors, called <em>optimal</em> censors, are the ones that conceal only what is necessary, thus maximizing the returned answers. Typically, given a policy over a data or knowledge base, several optimal censors exist.</p><p>Our research on CQE is based on the following intuition: confidential data are those that violate the logical assertions specifying the policy, and thus censoring them in query answering is similar to processing queries in the presence of inconsistent data as studied in Consistent Query Answering (CQA). In this paper, we investigate the relationship between CQE and CQA in the context of Description Logic ontologies. We borrow the idea from CQA that query answering is a form of skeptical reasoning that takes into account all possible optimal censors. This approach leads to a revised notion of CQE, which allows us to avoid making an arbitrary choice on the censor to be selected, as done by previous research on the topic.</p><p>We then study the data complexity of query answering in our CQE framework, for conjunctive queries issued over ontologies specified in the popular Description Logics <span><math><msub><mrow><mtext>DL-Lite</mtext></mrow><mrow><mi>R</mi></mrow></msub></math></span> and <span><math><msub><mrow><mi>EL</mi></mrow><mrow><mo>⊥</mo></mrow></msub></math></span>. In our analysis, we consider some variants of the censor language, which is the language used by the censor to enforce the policy. Whereas the problem is in general intractable for simple censor languages, we show that for <span><math><msub><mrow><mtext>DL-Lite</mtext></mrow><mrow><mi>R</mi></mrow></msub></math></span> ontologies it is first-order rewritable, and thus in AC<sup>0</sup> in data complexity, for the most expressive censor language we propose.</p></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"334 ","pages":"Article 104176"},"PeriodicalIF":5.1000,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0004370224001127/pdfft?md5=ee177d55b636c08d6ce8c57b16674343&pid=1-s2.0-S0004370224001127-main.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Intelligence","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0004370224001127","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Controlled Query Evaluation (CQE) is a framework for the protection of confidential data, where a policy given in terms of logic formulae indicates which information must be kept private. Functions called censors filter query answering so that no answers are returned that may lead a user to infer data protected by the policy. The preferred censors, called optimal censors, are the ones that conceal only what is necessary, thus maximizing the returned answers. Typically, given a policy over a data or knowledge base, several optimal censors exist.
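To make these notions concrete, the following toy instance is an illustrative assumption, not an example from the paper: the facts, the single policy denial, and the restriction of the censor to ground facts are all made up for exposition. It shows how one denial already gives rise to two incomparable optimal censors:

```latex
% Hypothetical instance (requires amsmath); not taken from the paper.
\begin{align*}
  \mathcal{A} &= \{\ \mathit{Patient}(\mathit{ann}),\ \mathit{Treats}(\mathit{ann},\mathit{chemo})\ \}\\
  \mathcal{P} &= \{\ \forall x.\ \mathit{Patient}(x) \wedge \mathit{Treats}(x,\mathit{chemo}) \rightarrow \bot\ \}\\
  \mathit{cens}_1(\mathcal{A}) &= \{\ \mathit{Treats}(\mathit{ann},\mathit{chemo})\ \} \quad \text{(conceals } \mathit{Patient}(\mathit{ann})\text{)}\\
  \mathit{cens}_2(\mathcal{A}) &= \{\ \mathit{Patient}(\mathit{ann})\ \} \quad \text{(conceals } \mathit{Treats}(\mathit{ann},\mathit{chemo})\text{)}
\end{align*}
```

Each censor exposes a maximal set of facts that violates no denial, so neither can be extended; selecting either one over the other would be an arbitrary choice.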
Our research on CQE is based on the following intuition: confidential data are those that violate the logical assertions specifying the policy, and thus censoring them in query answering is similar to processing queries in the presence of inconsistent data as studied in Consistent Query Answering (CQA). In this paper, we investigate the relationship between CQE and CQA in the context of Description Logic ontologies. We borrow the idea from CQA that query answering is a form of skeptical reasoning that takes into account all possible optimal censors. This approach leads to a revised notion of CQE, which allows us to avoid making an arbitrary choice on the censor to be selected, as done by previous research on the topic.
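The sketch below illustrates the intersection-based (skeptical) semantics on the toy instance above. It rests on strong simplifying assumptions: the TBox is ignored, the censor language contains only ground facts, optimal censors are enumerated naively, and all names (violates, optimal_censors, certain_answers) are hypothetical. It is not the paper's formalization or algorithm, only an illustration of the intersection idea.

```python
from itertools import combinations

# Minimal, illustrative sketch of intersection-based ("skeptical") CQE over
# all optimal censors.  Simplifying assumptions: the ABox is a set of ground
# facts, the policy is a set of denials (sets of facts that must never be
# jointly exposed), the TBox is ignored, and censors are enumerated naively.

def violates(exposed, policy):
    """True if the exposed facts jointly violate some denial of the policy."""
    return any(denial <= exposed for denial in policy)

def optimal_censors(abox, policy):
    """Subset-maximal sets of facts whose exposure violates no denial."""
    facts = sorted(abox)
    safe = [frozenset(s)
            for r in range(len(facts) + 1)
            for s in combinations(facts, r)
            if not violates(frozenset(s), policy)]
    return [c for c in safe if not any(c < d for d in safe)]

def certain_answers(query, abox, policy):
    """Keep an answer only if *every* optimal censor returns it."""
    answer_sets = [query(censor) for censor in optimal_censors(abox, policy)]
    return set.intersection(*answer_sets) if answer_sets else set()

if __name__ == "__main__":
    abox = {("Patient", "ann"), ("Treats", "ann", "chemo")}
    policy = [frozenset(abox)]  # never expose both facts together
    who_is_a_patient = lambda facts: {f[1] for f in facts if f[0] == "Patient"}
    print(certain_answers(who_is_a_patient, abox, policy))  # set()
```

On this instance the two optimal censors disagree on the query "who is a patient?", so under the intersection semantics its set of certain answers is empty.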
We then study the data complexity of query answering in our CQE framework, for conjunctive queries issued over ontologies specified in the popular Description Logics DL-Lite_R and EL_⊥. In our analysis, we consider some variants of the censor language, which is the language used by the censor to enforce the policy. Whereas the problem is in general intractable for simple censor languages, we show that for DL-Lite_R ontologies it is first-order rewritable, and thus in AC^0 in data complexity, for the most expressive censor language we propose.
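As a rough intuition for first-order rewritability (illustration only: this is not the rewriting algorithm defined in the paper, and the shape shown is an assumption that merely happens to be correct on the toy instance above), a query asking for patients can be answered safely by conjoining the negation of the remaining part of the denial:

```latex
% Illustration only (requires amssymb); not the paper's rewriting algorithm.
\[
  q(x) \leftarrow \mathit{Patient}(x)
  \qquad\rightsquigarrow\qquad
  q'(x) \;=\; \mathit{Patient}(x) \wedge \neg\,\mathit{Treats}(x,\mathit{chemo})
\]
```

Evaluating q' directly over the data returns no answers on the toy instance, matching the intersection semantics; being a first-order query, it can be evaluated in AC^0 data complexity.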
Journal Description:
The journal Artificial Intelligence (AIJ) welcomes papers covering a broad spectrum of AI topics, including cognition, automated reasoning, computer vision, machine learning, and more. Papers should demonstrate advancements in AI and propose innovative approaches to AI problems. The journal also accepts papers describing AI applications, provided they focus on how new methods enhance performance rather than reiterating conventional approaches. In addition to regular papers, AIJ accepts Research Notes, Research Field Reviews, Position Papers, Book Reviews, and summary papers on AI challenges and competitions.