Improving Candidate Quality of Probabilistic Logic Models

Joana Côrte-Real, Anton Dries, I. Dutra, Ricardo Rocha
DOI: 10.4230/OASIcs.ICLP.2018.6
Published in: International Conference on Logic Programming (ICLP), 2018-07-01
Citations: 1

Abstract

Many real-world phenomena exhibit both relational structure and uncertainty. Probabilistic Inductive Logic Programming (PILP) extends Inductive Logic Programming (ILP) with probabilistic facts to produce meaningful and interpretable models of real-world phenomena. This combination of First Order Logic (FOL) theories and uncertainty makes PILP a well-suited tool for knowledge representation and extraction. However, this flexibility comes with a problem, inherited from ILP, of exponential search space growth, so often only a subset of all possible models can be explored with limited resources. Furthermore, the probabilistic evaluation of FOL theories, performed by the underlying probabilistic logic language and its solver, is also computationally demanding. This work introduces a prediction-based pruning strategy, which reduces the search space based on the probabilistic evaluation of models, together with a safe pruning criterion, which guarantees that the optimal model is not pruned away, and two alternative, more aggressive criteria that do not provide this guarantee. Experiments on three benchmarks from different areas show that prediction pruning is effective in (i) maintaining predictive accuracy for all criteria and experimental settings; (ii) reducing execution time, compared to no pruning, when using some of the more aggressive criteria; and (iii) selecting better candidate models in limited-resource settings, also compared to no pruning.

2012 ACM Subject Classification: Computing methodologies → Probabilistic reasoning
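The pruning idea described in the abstract can be sketched generically. The following is a minimal illustration, assuming a refinement-based candidate search with a known optimistic score bound; the names (`refine`, `score`, `optimistic_bound`) and the toy problem are hypothetical stand-ins, not the paper's actual PILP algorithm or probabilistic evaluation.

```python
# Hedged sketch: candidate search with prediction-based pruning.
# A candidate is pruned when its predicted (optimistic) score bound
# cannot beat the best model found so far.

def search(initial, refine, score, optimistic_bound, safe=True, margin=0.0):
    """Depth-first candidate search with prediction-based pruning.

    safe=True prunes a candidate only when its optimistic score bound
    cannot exceed the incumbent's score, so the optimal score is never
    lost (the "safe pruning criterion"). safe=False additionally prunes
    candidates whose bound falls within `margin` of the incumbent:
    faster, but without the optimality guarantee ("aggressive" pruning).
    """
    best, best_score = initial, score(initial)
    frontier = [initial]
    while frontier:
        candidate = frontier.pop()
        for child in refine(candidate):
            threshold = best_score + (0.0 if safe else margin)
            if optimistic_bound(child) <= threshold:
                continue  # pruned: predicted unable to beat the best
            child_score = score(child)
            if child_score > best_score:
                best, best_score = child, child_score
            frontier.append(child)
    return best, best_score

# Toy problem (illustrative only): build a bit-string left to right,
# maximising a weighted sum of the chosen bits.
weights = [0.5, -0.2, 0.8]

def refine(c):
    """Refinement operator: extend the partial bit-string by one bit."""
    return [] if len(c) >= len(weights) else [c + (0,), c + (1,)]

def score(c):
    return sum(w * b for w, b in zip(weights, c))

def optimistic_bound(c):
    """Current score plus the best possible contribution of unset bits."""
    return score(c) + sum(max(w, 0.0) for w in weights[len(c):])

model, value = search((), refine, score, optimistic_bound, safe=True)
# model == (1, 0, 1); value == 1.3
```

With `safe=True`, subtrees such as `(0, ...)` are discarded as soon as their bound drops below the incumbent's score, yet the optimum is still found; setting `safe=False` with a positive `margin` prunes more but may discard it.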