{"title":"Bayesian Knowledge Base Distance-Based Tuning","authors":"Chase Yakaboski, E. Santos","doi":"10.1109/WI.2018.0-106","DOIUrl":null,"url":null,"abstract":"In order to rigorously characterize the difference or similarity between two probabilistic knowledge bases, a distance/divergence metric must be established. This becomes increasingly important when conducting parameter learning or tuning of a knowledge base. When tuning a knowledge base, it is essential to characterize the global probabilistic belief change as the knowledge base is tuned to ensure the underlying probability distribution of the knowledge base is not drastically compromised. In this paper, we develop an entropy-based distance measure for a Bayesian Knowledge Base (BKB) derived from the Chan-Darwiche distance measure used in a variety of probabilistic belief network analyses. Through this distance measure it is possible to calculate the theoretical minimum distance required to correct a BKB when the system's answer contradicts that of an expert. Having a theoretical minimization limit on distance allows for a quick calculation to test the integrity of either the BKB or the expert's judgement, since a high minimum distance would suggest the necessary tuning is not consistent with the rest of the knowledge base. Further still, this distance measure can be used to prove that in some cases the standard single causal rule set BKB tuning procedure (discussed in our prior work) in fact minimizes the global BKB distance. The final portion of this paper presents a few practical examples which analyze the utility of the BKB distance measure along with evidence supporting the proposition that the Causal Rule Set tuning procedure minimizes the distance between the original and tuned knowledge base in many cases.","PeriodicalId":405966,"journal":{"name":"2018 IEEE/WIC/ACM International Conference on Web Intelligence (WI)","volume":"74 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE/WIC/ACM International Conference on Web Intelligence (WI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WI.2018.0-106","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
To rigorously characterize the difference or similarity between two probabilistic knowledge bases, a distance/divergence metric must be established. This becomes increasingly important when conducting parameter learning or tuning of a knowledge base. When tuning a knowledge base, it is essential to characterize the global change in probabilistic belief so that the knowledge base's underlying probability distribution is not drastically compromised. In this paper, we develop an entropy-based distance measure for a Bayesian Knowledge Base (BKB), derived from the Chan-Darwiche distance measure used in a variety of probabilistic belief network analyses. This distance measure makes it possible to calculate the theoretical minimum distance required to correct a BKB when the system's answer contradicts that of an expert. A theoretical lower bound on the distance allows a quick check of the integrity of either the BKB or the expert's judgement, since a high minimum distance suggests the required tuning is not consistent with the rest of the knowledge base. Furthermore, this distance measure can be used to prove that in some cases the standard single causal rule set BKB tuning procedure (discussed in our prior work) in fact minimizes the global BKB distance. The final portion of this paper presents several practical examples that analyze the utility of the BKB distance measure, along with evidence supporting the proposition that the Causal Rule Set tuning procedure minimizes the distance between the original and tuned knowledge base in many cases.
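As background for the measure the abstract references, the generic Chan-Darwiche distance between two joint distributions Pr and Pr' over the same set of worlds is ln max_w Pr'(w)/Pr(w) − ln min_w Pr'(w)/Pr(w), with the convention 0/0 = 1. The sketch below is a minimal, illustrative Python implementation of that generic measure only; the paper's entropy-based, BKB-specific variant and its tuning bound are not reproduced here, and the function name and dictionary representation of distributions are assumptions for the example.

```python
import math

def chan_darwiche_distance(p, q):
    """Generic Chan-Darwiche distance between two distributions p and q,
    each given as a dict mapping world -> probability over the same worlds.

    CD(p, q) = ln max_w q(w)/p(w) - ln min_w q(w)/p(w),
    with 0/0 taken to be 1; the distance is infinite when one distribution
    assigns positive probability to a world the other rules out.
    """
    ratios = []
    for w in p:
        pw, qw = p[w], q.get(w, 0.0)
        if pw == 0.0 and qw == 0.0:
            ratios.append(1.0)      # convention: 0/0 = 1
        elif pw == 0.0:
            return math.inf         # q supports a world p excludes
        else:
            ratios.append(qw / pw)
    if min(ratios) == 0.0:
        return math.inf             # q excludes a world p supports
    return math.log(max(ratios)) - math.log(min(ratios))

# Small usage example on a three-world distribution:
p = {"a": 0.5, "b": 0.3, "c": 0.2}
q = {"a": 0.4, "b": 0.4, "c": 0.2}
print(chan_darwiche_distance(p, q))  # ln(4/3) - ln(4/5) ≈ 0.511
```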