Lifted algorithms for symmetric weighted first-order model sampling

IF 5.1 · CAS Zone 2 (Computer Science) · JCR Q1 (Computer Science, Artificial Intelligence)
Yuanhong Wang, Juhua Pu, Yuyi Wang, Ondřej Kuželka
{"title":"对称加权一阶模型采样的提升算法","authors":"Yuanhong Wang ,&nbsp;Juhua Pu ,&nbsp;Yuyi Wang ,&nbsp;Ondřej Kuželka","doi":"10.1016/j.artint.2024.104114","DOIUrl":null,"url":null,"abstract":"<div><p>Weighted model counting (WMC) is the task of computing the weighted sum of all satisfying assignments (i.e., models) of a propositional formula. Similarly, weighted model sampling (WMS) aims to randomly generate models with probability proportional to their respective weights. Both WMC and WMS are hard to solve exactly, falling under the #<span>P</span>-hard complexity class. However, it is known that the counting problem may sometimes be tractable, if the propositional formula can be compactly represented and expressed in first-order logic. In such cases, model counting problems can be solved in time polynomial in the domain size, and are known as <em>domain-liftable</em>. The following question then arises: Is it also the case for WMS? This paper addresses this question and answers it affirmatively. Specifically, we prove the <em>domain-liftability under sampling</em> for the two-variables fragment of first-order logic with counting quantifiers in this paper, by devising an efficient sampling algorithm for this fragment that runs in time polynomial in the domain size. We then further show that this result continues to hold even in the presence of cardinality constraints. To empirically validate our approach, we conduct experiments over various first-order formulas designed for the uniform generation of combinatorial structures and sampling in statistical-relational models. The results demonstrate that our algorithm outperforms a state-of-the-art WMS sampler by a substantial margin, confirming the theoretical results.</p></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"331 ","pages":"Article 104114"},"PeriodicalIF":5.1000,"publicationDate":"2024-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Lifted algorithms for symmetric weighted first-order model sampling\",\"authors\":\"Yuanhong Wang ,&nbsp;Juhua Pu ,&nbsp;Yuyi Wang ,&nbsp;Ondřej Kuželka\",\"doi\":\"10.1016/j.artint.2024.104114\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Weighted model counting (WMC) is the task of computing the weighted sum of all satisfying assignments (i.e., models) of a propositional formula. Similarly, weighted model sampling (WMS) aims to randomly generate models with probability proportional to their respective weights. Both WMC and WMS are hard to solve exactly, falling under the #<span>P</span>-hard complexity class. However, it is known that the counting problem may sometimes be tractable, if the propositional formula can be compactly represented and expressed in first-order logic. In such cases, model counting problems can be solved in time polynomial in the domain size, and are known as <em>domain-liftable</em>. The following question then arises: Is it also the case for WMS? This paper addresses this question and answers it affirmatively. Specifically, we prove the <em>domain-liftability under sampling</em> for the two-variables fragment of first-order logic with counting quantifiers in this paper, by devising an efficient sampling algorithm for this fragment that runs in time polynomial in the domain size. We then further show that this result continues to hold even in the presence of cardinality constraints. 
To empirically validate our approach, we conduct experiments over various first-order formulas designed for the uniform generation of combinatorial structures and sampling in statistical-relational models. The results demonstrate that our algorithm outperforms a state-of-the-art WMS sampler by a substantial margin, confirming the theoretical results.</p></div>\",\"PeriodicalId\":8434,\"journal\":{\"name\":\"Artificial Intelligence\",\"volume\":\"331 \",\"pages\":\"Article 104114\"},\"PeriodicalIF\":5.1000,\"publicationDate\":\"2024-03-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Artificial Intelligence\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S000437022400050X\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Intelligence","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S000437022400050X","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Weighted model counting (WMC) is the task of computing the weighted sum of all satisfying assignments (i.e., models) of a propositional formula. Similarly, weighted model sampling (WMS) aims to randomly generate models with probability proportional to their respective weights. Both WMC and WMS are hard to solve exactly, falling under the #P-hard complexity class. However, the counting problem is known to become tractable in some cases where the propositional formula can be compactly represented in first-order logic: model counting can then be solved in time polynomial in the domain size, and such problems are called domain-liftable. A natural question then arises: does the same hold for WMS? This paper answers the question affirmatively. Specifically, we prove domain-liftability under sampling for the two-variable fragment of first-order logic with counting quantifiers, by devising an efficient sampling algorithm for this fragment that runs in time polynomial in the domain size. We further show that this result continues to hold even in the presence of cardinality constraints. To empirically validate our approach, we conduct experiments over various first-order formulas designed for the uniform generation of combinatorial structures and for sampling in statistical-relational models. The results demonstrate that our algorithm outperforms a state-of-the-art WMS sampler by a substantial margin, confirming the theoretical results.
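To make the definitions concrete, the following is a standard propositional formalization consistent with the abstract (the notation is ours, not quoted from the paper). Given a formula \varphi over variables x_1, ..., x_n and weights w(x), \bar{w}(x) for each variable's positive and negative assignment:

    \mathrm{WMC}(\varphi) = \sum_{\mu \models \varphi} \prod_{x : \mu(x)=1} w(x) \prod_{x : \mu(x)=0} \bar{w}(x),
    \qquad
    \Pr_{\mathrm{WMS}}[\mu] = \frac{\prod_{x : \mu(x)=1} w(x) \prod_{x : \mu(x)=0} \bar{w}(x)}{\mathrm{WMC}(\varphi)}.

In the symmetric first-order setting, weights are typically attached to predicates rather than to individual ground atoms, which is what makes all domain elements interchangeable.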

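A small worked example of domain-liftability (illustrative, not taken from the paper): over a domain of size n, the two-variable sentence \forall x \forall y\, (R(x,y) \to R(y,x)) is satisfied exactly by the symmetric binary relations. A model is fixed by one independent choice per unordered pair of domain elements, including the diagonal, so the model count is

    2^{\binom{n}{2} + n} = 2^{n(n+1)/2},

computable directly from n instead of by enumerating all 2^{n^2} binary relations. Uniform sampling is equally lifted here: flip one fair coin per unordered pair. The paper asks whether such polynomial-in-n sampling extends to the full two-variable fragment with counting quantifiers and weights, and answers yes.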
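For contrast with the lifted approach, here is a minimal brute-force WMS sampler at the propositional level (a sketch under our own CNF encoding and function names; it is not the paper's algorithm). It enumerates all 2^n truth assignments, which is precisely the exponential cost that a domain-lifted sampler avoids on liftable first-order inputs.

    from itertools import product
    import random

    def model_weight(mu, w, w_bar):
        # Product weight of an assignment: w[i] if variable i is true, else w_bar[i].
        p = 1.0
        for i, v in enumerate(mu):
            p *= w[i] if v else w_bar[i]
        return p

    def satisfies(mu, clauses):
        # CNF check: literal k > 0 reads "variable k-1 is true"; k < 0 reads "it is false".
        return all(any(mu[abs(l) - 1] == (l > 0) for l in clause) for clause in clauses)

    def wms_sample(clauses, n, w, w_bar, rng=random):
        # Enumerate all 2^n assignments, keep the models, and draw one with
        # probability proportional to its weight; the normalizer z equals WMC(phi).
        models = [mu for mu in product([False, True], repeat=n) if satisfies(mu, clauses)]
        weights = [model_weight(mu, w, w_bar) for mu in models]
        z = sum(weights)
        r = rng.uniform(0.0, z)
        acc = 0.0
        for mu, wt in zip(models, weights):
            acc += wt
            if r <= acc:
                return mu
        return models[-1]  # guard against floating-point round-off

    # Example: phi = (x1 OR x2) AND (NOT x1 OR x2); true literals weigh 2, false weigh 1.
    # The models (False, True) and (True, True) are drawn with probabilities 1/3 and 2/3.
    print(wms_sample([[1, 2], [-1, 2]], n=2, w=[2.0, 2.0], w_bar=[1.0, 1.0]))

A lifted sampler produces the same distribution for liftable first-order inputs while manipulating only polynomially many quantities in the domain size, rather than the 2^n assignments enumerated above.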
Source journal: Artificial Intelligence (Engineering & Technology / Computer Science: Artificial Intelligence)
CiteScore: 11.20
Self-citation rate: 1.40%
Annual publications: 118
Review time: 8 months
Journal description: The Journal of Artificial Intelligence (AIJ) welcomes papers covering a broad spectrum of AI topics, including cognition, automated reasoning, computer vision, machine learning, and more. Papers should demonstrate advancements in AI and propose innovative approaches to AI problems. Additionally, the journal accepts papers describing AI applications, focusing on how new methods enhance performance rather than reiterating conventional approaches. In addition to regular papers, AIJ also accepts Research Notes, Research Field Reviews, Position Papers, Book Reviews, and summary papers on AI challenges and competitions.