{"title":"An α-regret analysis of adversarial bilateral trade","authors":"Yossi Azar , Amos Fiat , Federico Fusco","doi":"10.1016/j.artint.2024.104231","DOIUrl":"10.1016/j.artint.2024.104231","url":null,"abstract":"<div><div>We study sequential bilateral trade where sellers' and buyers' valuations are completely arbitrary (<em>i.e.</em>, determined by an adversary). Sellers and buyers are strategic agents with private valuations for the good, and the goal is to design a mechanism that maximizes efficiency (or gain from trade) while being incentive compatible, individually rational, and budget balanced. In this paper we consider gain from trade, which is harder to approximate than social welfare.</div><div>We consider a variety of feedback scenarios and distinguish the case where the mechanism posts a single price from the case where it can post different prices for buyer and seller. We show several surprising separations between the different scenarios. In particular, we show that (a) it is impossible to achieve sublinear <em>α</em>-regret for any <span><math><mi>α</mi><mo><</mo><mn>2</mn></math></span>; (b) with full feedback, sublinear 2-regret is achievable; (c) with a single price and partial feedback, one cannot get sublinear <em>α</em>-regret for any constant <em>α</em>; (d) nevertheless, posting two prices, even with one-bit feedback, achieves sublinear 2-regret; and (e) there is a provable separation in the 2-regret bounds between full and partial feedback.</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"337 ","pages":"Article 104231"},"PeriodicalIF":5.1,"publicationDate":"2024-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142421612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On trivalent logics, probabilistic weak deduction theorems, and a general import-export principle","authors":"Angelo Gilio , David E. Over , Niki Pfeifer , Giuseppe Sanfilippo","doi":"10.1016/j.artint.2024.104229","DOIUrl":"10.1016/j.artint.2024.104229","url":null,"abstract":"<div><div>In this paper we first recall some results for conditional events, compound conditionals, conditional random quantities, p-consistency, and p-entailment. We discuss the equivalence between conditional bets and bets on conditionals, and review de Finetti's trivalent analysis of conditionals. We then go beyond de Finetti's early trivalent logical analysis and his later ideas, aiming to take his proposals further. We examine two recent articles that explore trivalent logics for conditionals and their definitions of logical validity, and compare them with the approach to compound conditionals introduced by Gilio and Sanfilippo within the framework of conditional random quantities. As we use the notion of p-entailment, the full deduction theorem does not hold. We prove a Probabilistic Weak Deduction Theorem for conditional events, study several variants of it with further results, and present several examples. Moreover, we illustrate how to derive new inference rules related to selected Aristotelian syllogisms. We focus on iterated conditionals and the invalidity of the Import-Export principle in the light of our Probabilistic Weak Deduction Theorem. We use the inference from a disjunction, <em>A or B</em>, to the conditional, <em>if not-A then B</em>, as an example to show the invalidity of this principle. We introduce a General Import-Export principle by examining examples and counterexamples. In particular, when considering the inference rules of System P, we find that a General Import-Export principle is satisfied even if the assumptions of the Probabilistic Weak Deduction Theorem do not hold. We also examine further aspects of p-entailment and p-consistency. Finally, we briefly discuss some related work relevant to AI.</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"337 ","pages":"Article 104229"},"PeriodicalIF":5.1,"publicationDate":"2024-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142328068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive large-neighbourhood search for optimisation in answer-set programming","authors":"Thomas Eiter , Tobias Geibinger , Nelson Higuera Ruiz , Nysret Musliu , Johannes Oetsch , Dave Pfliegler , Daria Stepanova","doi":"10.1016/j.artint.2024.104230","DOIUrl":"10.1016/j.artint.2024.104230","url":null,"abstract":"<div><div>Answer-set programming (ASP) is a prominent approach to declarative problem solving that is increasingly used to tackle challenging optimisation problems. We present an approach to leverage ASP optimisation by using large-neighbourhood search (LNS), a meta-heuristic in which parts of a solution are iteratively destroyed and reconstructed in an attempt to improve an overall objective. In our LNS framework, neighbourhoods can be specified either declaratively, as part of the ASP encoding, or automatically generated by code. Furthermore, our framework is self-adaptive, i.e., it also incorporates portfolios for the LNS operators along with selection strategies to adjust search parameters on the fly. The implementation of our framework, the system ALASPO, currently supports the ASP solver clingo, as well as its extensions clingo-dl and clingcon, which allow for difference and full integer constraints, respectively. It utilises multi-shot solving to efficiently realise the LNS loop, thereby avoiding program regrounding. We describe our LNS framework for ASP as well as its implementation, discuss methodological aspects, and demonstrate the effectiveness of the adaptive LNS approach for ASP on different optimisation benchmarks, some of which are notoriously difficult, as well as on real-world applications for shift planning, configuration of railway-safety systems, parallel machine scheduling, and test laboratory scheduling.</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"337 ","pages":"Article 104230"},"PeriodicalIF":5.1,"publicationDate":"2024-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0004370224001668/pdfft?md5=ab34b67efbd10758275677814fac17d7&pid=1-s2.0-S0004370224001668-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142316277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
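The destroy-and-repair loop described in the abstract above is a generic meta-heuristic. As a rough illustration only (a toy Python sketch on a number-partitioning problem; `lns_partition`, its greedy repair, and its adaptation rule are invented for this example and are not the ALASPO system or its ASP encodings), an adaptive LNS might look like:

```python
import random

def lns_partition(values, iters=500, seed=0):
    """Toy adaptive LNS: split `values` into two groups minimising the
    absolute difference of their sums (hypothetical example, not ALASPO)."""
    rng = random.Random(seed)
    assign = [rng.randint(0, 1) for _ in values]  # initial random solution

    def obj(a):
        s0 = sum(v for v, g in zip(values, a) if g == 0)
        s1 = sum(v for v, g in zip(values, a) if g == 1)
        return abs(s0 - s1)

    best, best_obj = assign[:], obj(assign)
    k = 2  # neighbourhood size, adapted on the fly
    for _ in range(iters):
        cand = best[:]
        # destroy: free k random positions
        freed = rng.sample(range(len(values)), k)
        for i in freed:
            cand[i] = None
        # repair: greedily reassign freed items (largest first) to the lighter group
        s = [sum(v for v, g in zip(values, cand) if g == side) for side in (0, 1)]
        for i in sorted(freed, key=lambda i: -values[i]):
            side = 0 if s[0] <= s[1] else 1
            cand[i] = side
            s[side] += values[i]
        c_obj = obj(cand)
        if c_obj < best_obj:  # accept improving moves, shrink the neighbourhood
            best, best_obj = cand, c_obj
            k = max(2, k - 1)
        else:                 # adapt: widen the neighbourhood after a failed repair
            k = min(len(values) // 2, k + 1)
    return best, best_obj
```

The self-adaptation here is deliberately minimal (grow the neighbourhood on failure, shrink it on success); ALASPO's operator portfolios and selection strategies are far richer than this sketch.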
{"title":"Integration of memory systems supporting non-symbolic representations in an architecture for lifelong development of artificial agents","authors":"François Suro, Fabien Michel, Tiberiu Stratulat","doi":"10.1016/j.artint.2024.104228","DOIUrl":"10.1016/j.artint.2024.104228","url":null,"abstract":"<div><p>Compared to autonomous agent learning, lifelong agent learning tackles the additional challenge of accumulating skills in a way that favours long-term development. What an agent learns at a given moment can become an element in the future creation of behaviours of greater complexity, whose purpose cannot be anticipated.</p><p>Beyond its initial low-level sensorimotor development phase, the agent is expected to acquire, in the same manner as skills, values and goals which support the development of complex behaviours beyond the reactive level. To do so, it must have a way to represent and memorize such information.</p><p>In this article, we identify the properties suitable for a representation system supporting the lifelong development of agents through a review of a wide range of memory systems and related literature. Following this analysis, our second contribution is the proposal and implementation of such a representation system in MIND, a modular architecture for lifelong development. The new <em>variable module</em> acts as a simple memory system that is strongly integrated with the hierarchies of skill modules of MIND, and allows for the progressive structuring of behaviour around persistent non-symbolic representations. <em>Variable modules</em> have many applications for the development and structuring of complex behaviours, and also offer designers and operators explicit models of values and goals, facilitating human interaction, control and explainability.</p><p>We show through experiments two possible uses of <em>variable modules</em>. In the first experiment, skills exchange information by using a variable representing the concept of “target”, which allows the generalization of navigation behaviours. In the second experiment, we show how a non-symbolic representation can be learned and memorized to develop beyond simple reactive behaviour, and to keep track of the steps of a process whose state cannot be inferred by observing the environment.</p></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"337 ","pages":"Article 104228"},"PeriodicalIF":5.1,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0004370224001644/pdfft?md5=6e7585d9f3b58a33fd4b372451648a7b&pid=1-s2.0-S0004370224001644-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142241363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PathLAD+: Towards effective exact methods for subgraph isomorphism problem","authors":"Yiyuan Wang , Chenghou Jin , Shaowei Cai","doi":"10.1016/j.artint.2024.104219","DOIUrl":"10.1016/j.artint.2024.104219","url":null,"abstract":"<div><p>The subgraph isomorphism problem (SIP) is a challenging problem with wide practical applications. In the last decade, despite SIP being theoretically hard, researchers have designed various algorithms for solving it. In this work, we propose five main strategies and develop an improved exact algorithm for SIP. First, we design a probing search procedure that tests whether the search can obtain a solution at first sight. Second, we design a novel matching ordering strategy as a value-ordering heuristic, which uses information obtained from the probing search procedure to preferentially select promising target vertices. Third, we discuss the characteristics of different propagation methods in the context of SIP and present an adaptive propagation method that strikes a good balance between them. Moreover, to further improve performance on large graphs, we propose an enhanced implementation of the edge constraint method and a domain limitation strategy, which aim to accelerate the search process. Experimental results on a broad range of classic and graph-database benchmarks show that our proposed algorithm performs better than several state-of-the-art algorithms for SIP.</p></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"337 ","pages":"Article 104219"},"PeriodicalIF":5.1,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142230639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interval abstractions for robust counterfactual explanations","authors":"Junqi Jiang, Francesco Leofante, Antonio Rago, Francesca Toni","doi":"10.1016/j.artint.2024.104218","DOIUrl":"10.1016/j.artint.2024.104218","url":null,"abstract":"<div><p>Counterfactual Explanations (CEs) have emerged as a major paradigm in explainable AI research, providing recourse recommendations for users affected by the decisions of machine learning models. However, CEs found by existing methods often become invalid when slight changes occur in the parameters of the model they were generated for. The literature lacks a way to provide exhaustive robustness guarantees for CEs under model changes: existing methods to improve CEs' robustness are mostly heuristic, and their robustness is evaluated empirically using only a limited number of retrained models. To bridge this gap, we propose a novel interval abstraction technique for parametric machine learning models, which allows us to obtain provable robustness guarantees for CEs under a possibly infinite set of plausible model changes Δ. Based on this idea, we formalise a robustness notion for CEs, which we call Δ-robustness, in both binary and multi-class classification settings. We present procedures to verify Δ-robustness based on Mixed Integer Linear Programming, and use them to propose algorithms that generate Δ-robust CEs. In an extensive empirical study involving neural networks and logistic regression models, we demonstrate the practical applicability of our approach. We discuss two strategies for determining the appropriate hyperparameters in our method, and we quantitatively benchmark CEs generated by eleven methods, highlighting the effectiveness of our algorithms in finding robust CEs.</p></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"336 ","pages":"Article 104218"},"PeriodicalIF":5.1,"publicationDate":"2024-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0004370224001541/pdfft?md5=8e8f378cd7774b1862ed4a88c2531907&pid=1-s2.0-S0004370224001541-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142130124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Polynomial calculus for optimization","authors":"Ilario Bonacina , Maria Luisa Bonet , Jordi Levy","doi":"10.1016/j.artint.2024.104208","DOIUrl":"10.1016/j.artint.2024.104208","url":null,"abstract":"<div><p>MaxSAT is the problem of finding an assignment satisfying the maximum number of clauses in a CNF formula. We consider a natural generalization of this problem to generic sets of polynomials and propose a weighted version of Polynomial Calculus to address it.</p><p>Weighted Polynomial Calculus is a natural generalization of the systems MaxSAT-Resolution and weighted Resolution. Unlike those systems, weighted Polynomial Calculus manipulates polynomials with coefficients in a finite field and with weights in either <span><math><mi>N</mi></math></span> or <span><math><mi>Z</mi></math></span>. We show the soundness and completeness of weighted Polynomial Calculus via an algorithmic procedure.</p><p>Weighted Polynomial Calculus, with weights in <span><math><mi>N</mi></math></span> and coefficients in <span><math><msub><mrow><mi>F</mi></mrow><mrow><mn>2</mn></mrow></msub></math></span>, is able to prove efficiently that Tseitin formulas on a connected graph are minimally unsatisfiable. Using weights in <span><math><mi>Z</mi></math></span>, it also proves efficiently that the Pigeonhole Principle is minimally unsatisfiable.</p></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"337 ","pages":"Article 104208"},"PeriodicalIF":5.1,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0004370224001449/pdfft?md5=dff7733d570b1ecd6ce03a4fc7392fcb&pid=1-s2.0-S0004370224001449-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142157921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Approximating problems in abstract argumentation with graph convolutional networks","authors":"Lars Malmqvist, Tangming Yuan, Peter Nightingale","doi":"10.1016/j.artint.2024.104209","DOIUrl":"10.1016/j.artint.2024.104209","url":null,"abstract":"<div><p>In this article, we present a novel approximation approach for abstract argumentation using a customized Graph Convolutional Network (GCN) architecture and a tailored training method. Our approach demonstrates promising results in approximating abstract argumentation tasks across various semantics, setting a new state of the art for performance on certain tasks. We provide a detailed analysis of approximation and runtime performance and propose a new scheme for evaluation. By advancing the state of the art for approximating the acceptability status of abstract arguments, we make theoretical and empirical advances in understanding the limits and opportunities for approximation in this field. Our approach shows potential for creating both general purpose and task-specific approximators and offers insights into the performance differences across benchmarks and semantics.</p></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"336 ","pages":"Article 104209"},"PeriodicalIF":5.1,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0004370224001450/pdfft?md5=01068bd413e8769bb4469a717c95128e&pid=1-s2.0-S0004370224001450-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142095894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Characterising harmful data sources when constructing multi-fidelity surrogate models","authors":"Nicolau Andrés-Thió , Mario Andrés Muñoz , Kate Smith-Miles","doi":"10.1016/j.artint.2024.104207","DOIUrl":"10.1016/j.artint.2024.104207","url":null,"abstract":"<div><p>Surrogate modelling techniques have seen growing attention in recent years when applied to both modelling and optimisation of industrial design problems. These techniques are highly relevant when assessing the performance of a particular design carries a high cost, as the overall cost can be mitigated by constructing a model to be queried in lieu of the available high-cost source. The construction of these models can sometimes employ other sources of information that are both cheaper and less accurate. The existence of such sources, however, poses the question of which of them should be used when constructing a model. Recent studies have attempted to characterise harmful data sources to guide practitioners in choosing when to ignore a certain source. These studies have done so in a synthetic setting, characterising sources using a large amount of data that is not available in practice. Some of these studies have also been shown to potentially suffer from bias in the benchmarks used in the analysis. In this study, we approach the characterisation of harmful low-fidelity sources as an algorithm selection problem. We employ recently developed benchmark filtering techniques to conduct a bias-free assessment, providing objectively varied benchmark suites of different sizes for future research. Analysing one of these benchmark suites with the technique known as Instance Space Analysis, we provide an intuitive visualisation of when a low-fidelity source should be used. By performing this analysis using only the limited data available to train a surrogate model, we are able to provide guidelines that can be directly used in an applied industrial setting.</p></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"336 ","pages":"Article 104207"},"PeriodicalIF":5.1,"publicationDate":"2024-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0004370224001437/pdfft?md5=63ca7126b7bf14477005c50a202f2c7d&pid=1-s2.0-S0004370224001437-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142083422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Is it possible to find the single nearest neighbor of a query in high dimensions?","authors":"Kai Ming Ting , Takashi Washio , Ye Zhu , Yang Xu , Kaifeng Zhang","doi":"10.1016/j.artint.2024.104206","DOIUrl":"10.1016/j.artint.2024.104206","url":null,"abstract":"<div><p>We investigate an open question in the study of the curse of dimensionality: is it possible to find the single nearest neighbor of a query in high dimensions? Using the notion of (in)distinguishability to examine whether the feature map of a kernel can distinguish two distinct points in high dimensions, we analyze this ability for a metric-based Lipschitz continuous kernel as well as for the recently introduced Isolation Kernel. Of the two kernels, we show that only the Isolation Kernel has distinguishability, and that it performs consistently well in four tasks: indexed search for exact nearest neighbor search, anomaly detection using kernel density estimation, t-SNE visualization, and SVM classification, in both low and high dimensions, compared with distance, Gaussian and three other existing kernels.</p></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"336 ","pages":"Article 104206"},"PeriodicalIF":5.1,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0004370224001425/pdfft?md5=a9c748954d0721f2e62c5fa4e574bf6e&pid=1-s2.0-S0004370224001425-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142083423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
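The difficulty investigated in the abstract above stems from distance concentration: in high dimensions, the nearest and farthest neighbors of a query become nearly indistinguishable under a metric. A minimal self-contained sketch can illustrate the effect (a generic demonstration, not code from the paper; `relative_contrast` is a hypothetical helper):

```python
import math
import random

def relative_contrast(dim, n_points=200, seed=1):
    """Ratio (d_max - d_min) / d_min of Euclidean distances from one random
    query to `n_points` uniform random points in [0, 1]^dim.
    As `dim` grows, distances concentrate and this ratio shrinks toward 0,
    so the 'nearest' neighbor barely differs from the farthest one."""
    rng = random.Random(seed)
    query = [rng.random() for _ in range(dim)]
    dists = [
        math.dist(query, [rng.random() for _ in range(dim)])
        for _ in range(n_points)
    ]
    return (max(dists) - min(dists)) / min(dists)
```

With a handful of dimensions the contrast is large (the nearest point is clearly nearer than the rest); with hundreds of dimensions it collapses to a small fraction, which is the indistinguishability problem that metric-based Lipschitz continuous kernels inherit.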