Matching papers and reviewers at large conferences
Kevin Leyton-Brown, Mausam, Yatin Nandwani, Hedayat Zarkoob, Chris Cameron, Neil Newman, Dinesh Raghu
Artificial Intelligence, Volume 331, Article 104119. DOI: 10.1016/j.artint.2024.104119. Published 2024-03-25 (open access).

Abstract: Peer-reviewed conferences, the main publication venues in CS, rely critically on matching highly qualified reviewers for each paper. Because of the growing scale of these conferences, the tight timelines on which they operate, and a recent surge in explicitly dishonest behavior, there is now no alternative to performing this matching in an automated way. This paper introduces Large Conference Matching (LCM), a novel reviewer–paper matching approach that was recently deployed in the 35th AAAI Conference on Artificial Intelligence (AAAI 2021), and has since been adopted (wholly or partially) by other conferences including ICML 2022, AAAI 2022-2024, and IJCAI 2022-2024. LCM has three main elements: (1) collecting and processing input data to identify problematic matches and generate reviewer–paper scores; (2) formulating and solving an optimization problem to find good reviewer–paper matchings; and (3) a two-phase reviewing process that shifts reviewing resources away from papers likely to be rejected and towards papers closer to the decision boundary. This paper also describes an evaluation of these innovations based on an extensive post-hoc analysis on real data, including a comparison with the matching algorithm used in AAAI's previous (2020) iteration, and supplements this with additional numerical experimentation.

Almost proportional allocations of indivisible chores: Computation, approximation and efficiency
Haris Aziz, Bo Li, Hervé Moulin, Xiaowei Wu, Xinran Zhu
Artificial Intelligence, Volume 331, Article 104118. DOI: 10.1016/j.artint.2024.104118. Published 2024-03-24.

Abstract: Proportionality (PROP) is one of the simplest and most intuitive fairness criteria for allocating items among agents with additive utilities. However, when the items are indivisible, PROP can no longer be guaranteed, which has led to increased focus on its relaxations. In this paper, we focus on proportionality up to any item (PROPX), under which proportionality must be satisfied after the removal of any single item from each agent's allocation. We show that PROPX is an appealing fairness notion for the allocation of indivisible chores that approximately implies some share-based notions, such as maximin share (MMS) and AnyPrice share (APS). We further provide a comprehensive understanding of PROPX allocations regarding computation, approximation, and compatibility with efficiency. Finally, we extend the study to scenarios where agents do not share equal liability for the chores, and we approximate PROPX allocations using partial information about agents' utilities.

Knowledge-driven profile dynamics
Eduardo Fermé, Marco Garapa, Maurício D.L. Reis, Yuri Almeida, Teresa Paulino, Mariana Rodrigues
Artificial Intelligence, Volume 331, Article 104117. DOI: 10.1016/j.artint.2024.104117. Published 2024-03-21 (open access).

Abstract: Over the last decades, user profiles have been used in several areas of information technology. Most research works and systems in the literature focus on the creation of profiles, using data mining techniques based on a user's navigation or interaction history. Profile dynamics, in turn, are generally handled by systematically recreating the profiles from scratch, without reusing the previous ones. In this paper we formalize the creation, representation, and dynamics of profiles from a knowledge-driven perspective, introducing and axiomatically characterizing four operators for changing profiles using a belief-change-inspired approach.
{"title":"Regular decision processes","authors":"Ronen I. Brafman , Giuseppe De Giacomo","doi":"10.1016/j.artint.2024.104113","DOIUrl":"10.1016/j.artint.2024.104113","url":null,"abstract":"<div><p>We introduce and study Regular Decision Processes (RDPs), a new, compact model for domains with non-Markovian dynamics and rewards, in which the dependence on the past is regular, in the language theoretic sense. RDPs are an intermediate model between MDPs and POMDPs. They generalize <em>k</em>-order MDPs and can be viewed as a POMDP in which the hidden state is a regular function of the entire history. In factored RDPs, transition and reward functions are specified using formulas in linear temporal logics over finite traces, or using regular expressions. This allows specifying complex dependence on the past using intuitive and compact formulas, and building models of partially observable domains without specifying an underlying state space.</p></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"331 ","pages":"Article 104113"},"PeriodicalIF":14.4,"publicationDate":"2024-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140276630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Lifted algorithms for symmetric weighted first-order model sampling
Yuanhong Wang, Juhua Pu, Yuyi Wang, Ondřej Kuželka
Artificial Intelligence, Volume 331, Article 104114. DOI: 10.1016/j.artint.2024.104114. Published 2024-03-19.

Abstract: Weighted model counting (WMC) is the task of computing the weighted sum of all satisfying assignments (i.e., models) of a propositional formula. Similarly, weighted model sampling (WMS) aims to randomly generate models with probability proportional to their respective weights. Both WMC and WMS are hard to solve exactly, falling under the #P-hard complexity class. However, the counting problem may sometimes be tractable if the propositional formula can be compactly represented and expressed in first-order logic. In such cases, model counting can be solved in time polynomial in the domain size, and the problem is known as domain-liftable. The question then arises: is the same true for WMS? This paper answers this question affirmatively. Specifically, we prove domain-liftability under sampling for the two-variable fragment of first-order logic with counting quantifiers, by devising an efficient sampling algorithm for this fragment that runs in time polynomial in the domain size. We then show that this result continues to hold even in the presence of cardinality constraints. To empirically validate our approach, we conduct experiments over various first-order formulas designed for the uniform generation of combinatorial structures and for sampling in statistical-relational models. The results demonstrate that our algorithm outperforms a state-of-the-art WMS sampler by a substantial margin, confirming the theoretical results.
{"title":"Embedding justification theory in approximation fixpoint theory","authors":"Simon Marynissen , Bart Bogaerts , Marc Denecker","doi":"10.1016/j.artint.2024.104112","DOIUrl":"10.1016/j.artint.2024.104112","url":null,"abstract":"<div><p>Approximation Fixpoint Theory (AFT) and Justification Theory (JT) are two frameworks to unify logical formalisms. AFT studies semantics in terms of fixpoints of lattice operators, and JT in terms of so-called justifications, which are explanations of why certain facts do or do not hold in a model. While the approaches differ, the frameworks were designed with similar goals in mind, namely to study the different semantics that arise in (mainly) non-monotonic logics. The first contribution of our current paper is to provide a formal link between the two frameworks. To be precise, we show that every justification frame induces an approximator and that this mapping from JT to AFT preserves all major semantics. The second contribution exploits this correspondence to extend JT with a novel class of semantics, namely <em>ultimate semantics</em>: we formally show that ultimate semantics can be obtained in JT by a syntactic transformation on the justification frame, essentially performing a sort of resolution on the rules.</p></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"331 ","pages":"Article 104112"},"PeriodicalIF":14.4,"publicationDate":"2024-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140182566","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Hyperbolic Secant representation of the logistic function: Application to probabilistic Multiple Instance Learning for CT intracranial hemorrhage detection
Francisco M. Castro-Macías, Pablo Morales-Álvarez, Yunan Wu, Rafael Molina, Aggelos K. Katsaggelos
Artificial Intelligence, Volume 331, Article 104115. DOI: 10.1016/j.artint.2024.104115. Published 2024-03-15 (open access).

Abstract: Multiple Instance Learning (MIL) is a weakly supervised paradigm that has been successfully applied to many different scientific areas and is particularly well suited to medical imaging. Probabilistic MIL methods, and more specifically Gaussian Processes (GPs), have achieved excellent results due to their high expressiveness and uncertainty quantification capabilities. One of the most successful GP-based MIL methods, VGPMIL, resorts to a variational bound to handle the intractability of the logistic function. Here, we formulate VGPMIL using Pólya-Gamma random variables. This approach yields the same variational posterior approximations as the original VGPMIL, which is a consequence of the two representations that the Hyperbolic Secant distribution admits. This leads us to propose a general GP-based MIL method that takes different forms by simply leveraging distributions other than the Hyperbolic Secant one. Using the Gamma distribution we arrive at a new approach that obtains competitive or superior predictive performance and efficiency. This is validated in a comprehensive experimental study including one synthetic MIL dataset, two well-known MIL benchmarks, and a real-world medical problem. We expect that this work provides useful ideas beyond MIL that can foster further research in the field.

A neurosymbolic cognitive architecture framework for handling novelties in open worlds
Shivam Goel, Panagiotis Lymperopoulos, Ravenna Thielstrom, Evan Krause, Patrick Feeney, Pierrick Lorang, Sarah Schneider, Yichen Wei, Eric Kildebeck, Stephen Goss, Michael C. Hughes, Liping Liu, Jivko Sinapov, Matthias Scheutz
Artificial Intelligence, Volume 331, Article 104111. DOI: 10.1016/j.artint.2024.104111. Published 2024-03-15.

Abstract: "Open world" environments are those in which novel objects, agents, events, and more can appear and contradict previous understandings of the environment. This runs counter to the "closed world" assumption used in most AI research, where the environment is assumed to be fully understood and unchanging. The types of environments AI agents can be deployed in are limited by the inability to handle the novelties that occur in open world environments. This paper presents a novel cognitive architecture framework to handle open-world novelties. This framework combines symbolic planning, counterfactual reasoning, reinforcement learning, and deep computer vision to detect and accommodate novelties. We introduce general algorithms for exploring open worlds using inference and machine learning methodologies to facilitate novelty accommodation. The ability to detect and accommodate novelties allows agents built on this framework to successfully complete tasks despite a variety of novel changes to the world. Both the framework components and the entire system are evaluated in Minecraft-like simulated environments. Our results indicate that agents are able to efficiently complete tasks while accommodating "concealed novelties" not shared with the architecture development team.

A differentiable first-order rule learner for inductive logic programming
Kun Gao, Katsumi Inoue, Yongzhi Cao, Hanpin Wang
Artificial Intelligence, Volume 331, Article 104108. DOI: 10.1016/j.artint.2024.104108. Published 2024-03-15.

Abstract: Learning first-order logic programs from relational facts yields intuitive insights into the data. Inductive logic programming (ILP) models are effective in learning first-order logic programs from observed relational data. Symbolic ILP models support rule learning in a data-efficient manner, but they are not robust when learning from noisy data. Neuro-symbolic ILP models utilize neural networks to learn logic programs in a differentiable manner, which improves robustness. However, most neuro-symbolic methods need a strong language bias to learn logic programs, which reduces the usability and flexibility of ILP models and limits the forms that learned programs can take. In addition, most neuro-symbolic ILP methods cannot learn logic programs effectively from both small datasets and large ones such as knowledge graphs. In this paper, we introduce a novel differentiable ILP model called the differentiable first-order rule learner (DFORL), which scales to rule learning from both small and large datasets. Moreover, DFORL needs only the number of variables in the learned logic programs as input; hence it is easy to use and does not require a strong language bias. We demonstrate that DFORL performs well on several standard ILP datasets, knowledge graphs, and probabilistic relational facts, and outperforms several well-known differentiable ILP models. Experimental results indicate that DFORL is a precise, robust, scalable, and computationally cheap differentiable ILP model.
{"title":"Non-deterministic approximation fixpoint theory and its application in disjunctive logic programming","authors":"Jesse Heyninck , Ofer Arieli , Bart Bogaerts","doi":"10.1016/j.artint.2024.104110","DOIUrl":"https://doi.org/10.1016/j.artint.2024.104110","url":null,"abstract":"<div><p>Approximation fixpoint theory (AFT) is an abstract and general algebraic framework for studying the semantics of nonmonotonic logics. It provides a unifying study of the semantics of different formalisms for nonmonotonic reasoning, such as logic programming, default logic and autoepistemic logic. In this paper, we extend AFT to dealing with <em>non-deterministic constructs</em> that allow to handle indefinite information, represented e.g. by disjunctive formulas. This is done by generalizing the main constructions and corresponding results of AFT to non-deterministic operators, whose ranges are sets of elements rather than single elements. The applicability and usefulness of this generalization is illustrated in the context of disjunctive logic programming.</p></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"331 ","pages":"Article 104110"},"PeriodicalIF":14.4,"publicationDate":"2024-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0004370224000468/pdfft?md5=cd0fbaeca4863aca2ff57b389adc84ed&pid=1-s2.0-S0004370224000468-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140113628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}