Artificial Intelligence: Latest Articles

Estimating possible causal effects with latent variables via adjustment and novel rule orientation
IF 5.1 · Q2 · Computer Science
Artificial Intelligence, Volume 347, Article 104387 · Pub Date: 2025-06-13 · DOI: 10.1016/j.artint.2025.104387
Tian-Zuo Wang, Lue Tao, Tian Qin, Zhi-Hua Zhou
Abstract: Causal effect estimation from observational data is a fundamental task in artificial intelligence and has been widely studied given known causal relations. However, in the presence of latent confounders, only part of the causal relations can be identified from observational data; these are characterized by a partial ancestral graph (PAG), in which some causal relations remain indeterminate. In such cases, the causal effect is often unidentifiable, as there can be a super-exponential number of potential causal graphs consistent with the identified PAG but associated with different causal effects. In this paper, we target set determination within a PAG, i.e., determining the set of possible causal effects of a specified variable X on another variable Y via covariate adjustment. We develop the first set determination method that does not require enumerating any causal graphs. Furthermore, we present two novel orientation rules for incorporating structural background knowledge (BK) into a PAG, which facilitate the identification of additional causal relations given BK. Notably, we show that these rules can further enhance the efficiency of our set determination method, as certain edges transformed during the procedure can be interpreted as BK and enable the rules to reveal further causal information. Theoretically and empirically, we demonstrate that our set determination methods yield the same results as the enumeration-based method with super-exponentially less computational complexity.
Citations: 0
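To make the covariate-adjustment building block concrete (this is background for the abstract above, not the paper's set-determination method), the sketch below uses a single assumed-valid adjustment set {Z} and reads the adjusted effect of X on Y off an OLS fit. The linear data-generating process, the variable names, and the true effect of 2.0 are illustrative assumptions; the paper's contribution is computing the whole set of such adjusted effects consistent with a PAG without enumerating causal graphs.

```python
# Minimal sketch of causal effect estimation via covariate adjustment.
# NOT the paper's algorithm: it only shows how, for ONE valid adjustment
# set {Z}, the effect of X on Y is estimated by regression.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic linear model with one observed confounder Z:
# Z -> X, Z -> Y, and X -> Y with true causal effect 2.0 (all illustrative).
Z = rng.normal(size=n)
X = 0.8 * Z + rng.normal(size=n)
Y = 2.0 * X + 1.5 * Z + rng.normal(size=n)

def effect_via_adjustment(X, Y, Z=None):
    """OLS estimate of the effect of X on Y, adjusting for covariates Z."""
    cols = [np.ones_like(X), X] + ([Z] if Z is not None else [])
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), Y, rcond=None)
    return coef[1]  # coefficient on X

print("no adjustment :", effect_via_adjustment(X, Y))     # confounded, biased
print("adjust for Z  :", effect_via_adjustment(X, Y, Z))  # close to 2.0
```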
Adversarially robust unsupervised domain adaptation
IF 5.1 · Q2 · Computer Science
Artificial Intelligence, Volume 347, Article 104383 · Pub Date: 2025-06-13 · DOI: 10.1016/j.artint.2025.104383
Lianghe Shi, Weiwei Liu
Abstract: Unsupervised domain adaptation (UDA) has been successfully applied in many contexts involving domain shift. However, we find that existing UDA methods are vulnerable to adversarial attacks. A direct modification of existing UDA methods to improve adversarial robustness is to feed the algorithms with adversarial source examples. However, empirical results show that the traditional discrepancy fails to measure the distance between adversarial examples, leading to poor alignment between adversarial examples of the source and target domains and inefficient transfer of robustness from the source domain to the target domain. Moreover, traditional theoretical bounds do not always hold in adversarial scenarios. Accordingly, we first propose a novel adversarial discrepancy (AD) to narrow the gap between adversarial robustness and UDA. Based on AD, this paper provides a generalization error bound for adversarially robust unsupervised domain adaptation through the lens of Rademacher complexity, theoretically demonstrating that the expected adversarial target error can be bounded by the empirical adversarial source error and AD. We also present upper bounds on the Rademacher complexity, with a particular focus on linear models and multi-layer neural networks under ℓ_r attack (r ≥ 1). Inspired by this theory, we go on to develop an adversarially robust algorithm for UDA. We further conduct comprehensive experiments to support our theory and validate the robustness improvement of our proposed method on challenging domain adaptation tasks.
Citations: 0
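As a concrete illustration of the "adversarial source examples" the abstract refers to, here is a minimal sketch that perturbs a labeled source batch with FGSM (an ℓ∞ attack, a special case rather than the general ℓ_r attacks analyzed in the paper). The toy model, batch, and epsilon are illustrative assumptions; the paper's AD-based alignment and theory are not reproduced here.

```python
# Minimal FGSM sketch: craft adversarial source examples that a robust UDA
# objective would then try to classify and align across domains.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()

def fgsm(x, y, epsilon=0.1):
    """Return x perturbed in the loss-increasing direction (l_inf ball)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

x_src = torch.randn(32, 10)          # labeled source batch (illustrative)
y_src = torch.randint(0, 2, (32,))
x_src_adv = fgsm(x_src, y_src)       # adversarial source examples

# A robust UDA loss would then combine the classification loss on x_src_adv
# with a discrepancy term aligning adversarial source and target features.
print(x_src_adv.shape)
```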
Multi-agent pathfinding on strongly connected digraphs: Feasibility and solution algorithms
IF 5.1 · Q2 · Computer Science
Artificial Intelligence, Volume 347, Article 104372 · Pub Date: 2025-06-04 · DOI: 10.1016/j.artint.2025.104372
S. Ardizzoni, L. Consolini, M. Locatelli, B. Nebel, I. Saccani
Abstract: On an assigned graph, the problem of Multi-Agent Pathfinding (MAPF) consists in finding paths for multiple agents while avoiding collisions. Finding the minimum-length solution is known to be NP-hard, and computation time grows exponentially with the number of agents. However, in industrial applications, it is important to find feasible, suboptimal solutions in a time that grows polynomially with the number of agents. Such algorithms exist for undirected and biconnected directed graphs. Our main contribution is to generalize these algorithms to the more general case of strongly connected directed graphs. In particular, we describe a procedure that checks the feasibility of the problem in linear time with respect to the number of vertices n, and we find a necessary and sufficient condition for the feasibility of any MAPF instance. Moreover, we present an algorithm (diSC) that provides a feasible solution of length O(kn²c), where k is the number of agents and c is the maximum length of the corridors of the graph.
Citations: 0
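The linear-time feasibility procedure mentioned in the abstract operates on strongly connected digraphs. As background, the sketch below only shows the graph-level strong-connectivity check (O(V + E) via networkx), not the paper's instance-level feasibility condition, which also depends on the agents' start and goal configuration. The example graph is an illustrative assumption.

```python
# Minimal sketch: verify that the workspace digraph is strongly connected,
# a precondition of the setting studied in the paper.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    (0, 1), (1, 2), (2, 0),   # a directed cycle
    (2, 3), (3, 4), (4, 2),   # a second cycle attached to it
])

# Linear-time O(V + E) checks (Tarjan/Kosaraju under the hood).
print(nx.is_strongly_connected(G))                  # True
print(nx.number_strongly_connected_components(G))   # 1
```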
Factored-reward bandits with intermediate observations: Regret minimization and best arm identification
IF 5.1 · Q2 · Computer Science
Artificial Intelligence, Volume 347, Article 104362 · Pub Date: 2025-05-23 · DOI: 10.1016/j.artint.2025.104362
Marco Mussi, Simone Drago, Marcello Restelli, Alberto Maria Metelli
Abstract: In several real-world sequential decision problems, at every step the learner is required to select different actions. Every action affects a specific part of the system and generates an observable intermediate effect. In this paper, we introduce Factored-Reward Bandits (FRBs), a novel setting able to effectively capture and exploit the structure of this class of scenarios, where the reward is computed as the product of the actions' intermediate observations. We characterize the statistical complexity of the learning problem in FRBs by deriving worst-case and asymptotic instance-dependent regret lower bounds. Then, we devise and analyze two regret minimization algorithms. The former, F-UCB, is an anytime optimistic approach matching the worst-case lower bound (up to logarithmic factors) but failing to perform optimally from the instance-dependent perspective. The latter, F-Track, is a bound-tracking approach that enjoys optimal asymptotic instance-dependent regret guarantees. Finally, we study the problem of performing best arm identification in this setting. We derive an error probability lower bound, and we develop F-SR, a nearly optimal rejection-based algorithm for identifying the best action vector given a time budget.
Citations: 0
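A rough sketch of the optimism-in-each-factor idea behind a regret-minimization approach like F-UCB: every action dimension yields an intermediate observation, the reward is their product, and the learner selects, per dimension, the arm with the highest upper confidence bound. The Bernoulli observation model, the exploration constant, and the index form below are simplified assumptions, not the paper's exact algorithm or analysis.

```python
# Toy Factored-Reward Bandit with per-dimension UCB indices.
import numpy as np

rng = np.random.default_rng(0)
true_means = [np.array([0.2, 0.7, 0.5]),   # dimension 1: 3 arms (illustrative)
              np.array([0.6, 0.3])]        # dimension 2: 2 arms (illustrative)
T = 5000

counts = [np.zeros(len(m)) for m in true_means]
sums   = [np.zeros(len(m)) for m in true_means]

for t in range(1, T + 1):
    action = []
    for d in range(len(true_means)):
        n_d = np.maximum(counts[d], 1)
        ucb = np.where(counts[d] == 0, np.inf,
                       sums[d] / n_d + np.sqrt(2 * np.log(t) / n_d))
        action.append(int(np.argmax(ucb)))
    # One intermediate observation per dimension; the reward is their product.
    obs = [rng.binomial(1, m[a]) for m, a in zip(true_means, action)]
    reward = np.prod(obs)
    for d, (a, o) in enumerate(zip(action, obs)):
        counts[d][a] += 1
        sums[d][a] += o

print("estimated best arm per dimension:",
      [int(np.argmax(s / np.maximum(c, 1))) for s, c in zip(sums, counts)])
# Expected output: [1, 0] (the arms with means 0.7 and 0.6).
```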
NT-FAN: A simple yet effective noise-tolerant few-shot adaptation network
IF 5.1 · Q2 · Computer Science
Artificial Intelligence, Volume 346, Article 104363 · Pub Date: 2025-05-22 · DOI: 10.1016/j.artint.2025.104363
Wenjing Yang, Haoang Chi, Yibing Zhan, Bowen Hu, Xiaoguang Ren, Dapeng Tao, Long Lan
Abstract: Few-shot domain adaptation (FDA) aims to train a target model with clean labeled data from the source domain and few labeled data from the target domain. Given a limited annotation budget, source data may contain many noisy labels, which can detrimentally impact the performance of models in real-world applications. This problem setting is denoted as wildly few-shot domain adaptation (WFDA), which simultaneously deals with label noise and data shortage. While previous studies have achieved some success, they typically rely on multiple adaptation models to collaboratively filter noisy labels, resulting in substantial computational overhead. To address WFDA more simply and elegantly, we offer a theoretical analysis of this problem and propose a comprehensive upper bound for the excess risk on the target domain. Our theoretical result reveals that correct domain-invariant representations can be obtained even in the presence of source noise and limited target data without incurring additional costs. In response, we propose a simple yet effective WFDA method, referred to as the noise-tolerant few-shot adaptation network (NT-FAN). Experiments demonstrate that our method significantly outperforms all state-of-the-art competitors while maintaining a more lightweight architecture. Notably, NT-FAN consistently exhibits robust performance when dealing with more realistic and intractable source noise (e.g., instance-dependent label noise) and severe source noise (e.g., a 40% noise rate) in the source domain.
Citations: 0
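To make the "wildly few-shot" setting concrete, the sketch below only constructs the data condition the abstract describes: a large source set whose labels are corrupted at a 40% rate (here with simple symmetric noise; the paper also considers instance-dependent noise) plus a handful of labeled target examples. It is not the NT-FAN method; the sizes, class count, and noise rate are illustrative assumptions.

```python
# Construct a WFDA-style data condition: noisy source labels + few target labels.
import numpy as np

rng = np.random.default_rng(0)
num_classes = 5

# Source domain: plentiful but noisily labeled.
y_source_clean = rng.integers(0, num_classes, size=1000)
noise_rate = 0.4
flip = rng.random(len(y_source_clean)) < noise_rate
random_labels = rng.integers(0, num_classes, size=len(y_source_clean))
y_source_noisy = np.where(flip, random_labels, y_source_clean)

# Target domain: only a few labeled examples per class ("few-shot").
shots_per_class = 3
y_target_few = np.repeat(np.arange(num_classes), shots_per_class)

print("observed source noise:", float(np.mean(y_source_noisy != y_source_clean)))
print("labeled target examples:", len(y_target_few))
```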
A semantics for probabilistic hybrid knowledge bases with function symbols
IF 5.1 · Q2 · Computer Science
Artificial Intelligence, Volume 346, Article 104361 · Pub Date: 2025-05-20 · DOI: 10.1016/j.artint.2025.104361
Marco Alberti, Evelina Lamma, Fabrizio Riguzzi, Riccardo Zese
Abstract: Hybrid Knowledge Bases (HKBs) successfully integrate Logic Programming (LP) and Description Logics (DL) under the Minimal Knowledge with Negation as Failure semantics. Both world-closure assumptions (open and closed) can be used in the same HKB, a feature required in many domains, such as the legal and health-care ones. In previous work, we proposed (function-free) Probabilistic HKBs, whose semantics applied Sato's distribution semantics approach to the well-founded HKB semantics proposed by Knorr et al. and Lyu and You. This semantics relied on the fact that the grounding of a function-free Probabilistic HKB (PHKB) is finite. In this article, we extend the PHKB language to allow function symbols, obtaining PHKB^FS. Because the grounding of a PHKB^FS can be infinite, we propose a novel semantics that does not require the grounding of a PHKB^FS to be finite. We show that the proposed semantics extends the previously proposed one and that, for a large class of PHKB^FS, every query can be assigned a probability.
Citations: 0
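The semantics discussed above builds on Sato's distribution semantics, in which probabilistic facts induce a distribution over worlds and a query's probability is the total probability of the worlds entailing it. The sketch below computes a query probability by brute-force world enumeration for a two-fact toy program; the program is an illustrative assumption, and enumeration only works for finite groundings, which is precisely the limitation the paper's function-symbol extension must move beyond.

```python
# Brute-force distribution semantics for a toy probabilistic program:
#   0.6 :: heads(c1).   0.7 :: heads(c2).   win :- heads(c1), heads(c2).
from itertools import product

prob_facts = {"heads(c1)": 0.6, "heads(c2)": 0.7}

def entails_query(world):
    # Rule: win :- heads(c1), heads(c2).
    return world["heads(c1)"] and world["heads(c2)"]

query_prob = 0.0
for values in product([True, False], repeat=len(prob_facts)):
    world = dict(zip(prob_facts, values))
    world_prob = 1.0
    for fact, included in world.items():
        world_prob *= prob_facts[fact] if included else 1 - prob_facts[fact]
    if entails_query(world):
        query_prob += world_prob

print(query_prob)  # approx. 0.42 (= 0.6 * 0.7)
```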
Active legibility in multiagent reinforcement learning
IF 5.1 · Q2 · Computer Science
Artificial Intelligence, Volume 346, Article 104357 · Pub Date: 2025-05-19 · DOI: 10.1016/j.artint.2025.104357
Yanyu Liu, Yinghui Pan, Yifeng Zeng, Biyang Ma, Prashant Doshi
Abstract: Multiagent sequential decision problems arise in many critical applications, including urban transportation, autonomous driving, and military operations. Their widely known solution, multiagent reinforcement learning, has evolved tremendously in recent years. Among its solution paradigms, modeling other agents attracts our interest; it differs from traditional value decomposition or communication mechanisms, and it enables agents to understand and anticipate others' behaviors and facilitates their collaboration. Inspired by recent research on legibility, which allows agents to reveal their intentions through their behavior, we propose a multiagent active legibility framework to improve performance. The legibility-oriented framework drives agents to take legible actions so as to help others optimize their behaviors. In addition, we design a series of problem domains that emulate a common legibility-needed scenario and effectively characterize legibility in multiagent reinforcement learning. The experimental results demonstrate that the new framework is more efficient and requires less training time than several multiagent reinforcement learning algorithms.
Citations: 0
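One standard way to quantify legibility (not necessarily the paper's exact formulation) is through an observer that infers the acting agent's goal by Bayes' rule: an action is legible to the extent that it raises the observer's posterior on the true goal. The sketch below uses a Boltzmann observer over two goals with assumed Q-values; all numbers and names are illustrative.

```python
# Toy legibility score: how strongly does an action reveal the agent's true goal?
import numpy as np

goals = ["G1", "G2"]
true_goal = "G1"
prior = {"G1": 0.5, "G2": 0.5}

# Assumed Q-values of two candidate actions under each possible goal:
# "left" is good for both goals (ambiguous), "up" is only good for G1.
q_values = {
    "G1": {"up": 0.9, "left": 1.0},
    "G2": {"up": 0.2, "left": 0.9},
}

def action_likelihood(goal, action, beta=3.0):
    """Boltzmann observer model: P(action | goal) from that goal's Q-values."""
    exp_q = {a: np.exp(beta * q) for a, q in q_values[goal].items()}
    return exp_q[action] / sum(exp_q.values())

def observer_posterior(action):
    """P(true goal | action) under the observer's Bayes update."""
    joint = {g: prior[g] * action_likelihood(g, action) for g in goals}
    return joint[true_goal] / sum(joint.values())

for a in ["up", "left"]:
    print(a, "-> P(true goal | action) =", round(observer_posterior(a), 3))
# "left" is near-optimal under both goals and hence ambiguous; "up" is slightly
# worse under the true goal but far more revealing. A legibility-aware agent
# trades off exactly this.
```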
A theory of synaptic neural balance: From local to global order
IF 5.1 · Q2 · Computer Science
Artificial Intelligence, Volume 346, Article 104360 · Pub Date: 2025-05-16 · DOI: 10.1016/j.artint.2025.104360
Pierre Baldi, Antonios Alexos, Ian Domingo, Alireza Rahmansetayesh
Abstract: We develop a general theory of synaptic neural balance and how it can emerge or be enforced in neural networks. For a given additive cost function R (regularizer), a neuron is said to be in balance if the total cost of its input weights is equal to the total cost of its output weights. The basic example is provided by feedforward networks of ReLU units trained with L2 regularizers, which exhibit balance after proper training. The theory explains this phenomenon and extends it in several directions. The first direction is the extension to bilinear and other activation functions. The second direction is the extension to more general regularizers, including all L_p (p > 0) regularizers. The third direction is the extension to non-layered architectures, recurrent architectures, convolutional architectures, architectures with mixed activation functions, and different balancing algorithms. Gradient descent on the error function alone does not in general converge to a balanced state, where every neuron is in balance, even when starting from a balanced state. However, gradient descent on the regularized error function ought to converge to a balanced state, and thus network balance can be used to assess learning progress. The theory is based on two local neuronal operations: scaling, which is commutative, and balancing, which is not commutative. Finally, and most importantly, given any set of weights, when local balancing operations are applied to each neuron in a stochastic manner, global order always emerges through the convergence of the stochastic balancing algorithm to the same unique set of balanced weights. The reason for this convergence is the existence of an underlying strictly convex optimization problem where the relevant variables are constrained to a linear, only architecture-dependent, manifold. Simulations show that balancing neurons prior to learning, or during learning in alternation with gradient descent steps, can improve learning speed and performance, thereby expanding the arsenal of available training tools. Scaling and balancing operations are entirely local and thus physically plausible in biological and neuromorphic neural networks.
Citations: 0
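The local balancing operation described above can be illustrated on a single ReLU unit: scaling its incoming weights by a factor lambda and its outgoing weights by 1/lambda leaves the network function unchanged (ReLU is positively homogeneous), and for the L2 cost the balanced scale is lambda = (||w_out|| / ||w_in||)^(1/2). The tiny one-hidden-unit network below is an illustrative assumption, a sketch rather than the paper's general algorithm.

```python
# Balance a single ReLU unit under the L2 cost without changing the function.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)
w_in = rng.normal(size=4)     # incoming weights of the hidden ReLU unit
w_out = rng.normal(size=3)    # outgoing weights of the same unit

def relu(z):
    return np.maximum(z, 0.0)

def forward(w_in, w_out, x):
    return w_out * relu(w_in @ x)

# Balanced scale for the L2 cost: equalizes sum of squared input and output weights.
lam = np.sqrt(np.linalg.norm(w_out) / np.linalg.norm(w_in))
w_in_b, w_out_b = lam * w_in, w_out / lam

print(np.allclose(forward(w_in, w_out, x), forward(w_in_b, w_out_b, x)))  # True
print(np.sum(w_in_b ** 2), np.sum(w_out_b ** 2))  # equal L2 costs after balancing
```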
RelBERT: Embedding relations with language models
IF 5.1 · Q2 · Computer Science
Artificial Intelligence, Volume 347, Article 104359 · Pub Date: 2025-05-15 · DOI: 10.1016/j.artint.2025.104359
Asahi Ushio, Jose Camacho-Collados, Steven Schockaert
Abstract: Many applications need access to background knowledge about how different concepts and entities are related. Although Large Language Models (LLMs) can address this need to some extent, LLMs are inefficient and difficult to control. As an alternative, we propose to extract relation embeddings from relatively small language models. In particular, we show that masked language models such as RoBERTa can be straightforwardly fine-tuned for this purpose, using only a small amount of training data. The resulting model, which we call RelBERT, captures relational similarity in a surprisingly fine-grained way, allowing us to set a new state of the art in analogy benchmarks. Crucially, RelBERT is capable of modelling relations that go well beyond what the model has seen during training. For instance, we obtained strong results on relations between named entities with a model that was only trained on lexical relations between concepts, and we observed that RelBERT can recognise morphological analogies despite not being trained on such examples. Overall, we find that RelBERT significantly outperforms strategies based on prompting language models that are several orders of magnitude larger, including recent GPT-based models and open-source models.
Citations: 0
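A minimal sketch of the mechanics behind relation embeddings: encode a word pair inside a prompt with RoBERTa and pool the hidden states into a fixed-size vector. This uses an off-the-shelf, not fine-tuned roberta-base, whereas RelBERT's contribution is precisely the fine-tuning on relational data with specific prompt templates, so the similarities printed here are only a rough signal; the prompt wording and the mean pooling below are illustrative assumptions.

```python
# Pool a prompted word pair into one vector with a masked language model.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base").eval()

def relation_embedding(head, tail):
    prompt = f"The relation between {head} and {tail} is that"
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state   # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)             # mean-pooled relation vector

cos = torch.nn.functional.cosine_similarity
e_capital_1 = relation_embedding("Paris", "France")
e_capital_2 = relation_embedding("Tokyo", "Japan")
e_other = relation_embedding("Paris", "Tokyo")
# Without RelBERT's fine-tuning these similarities are only a weak signal.
print(float(cos(e_capital_1, e_capital_2, dim=0)),
      float(cos(e_capital_1, e_other, dim=0)))
```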
CBS-Budget (CBSB): A complete and bounded suboptimal search for multi-agent path finding
IF 5.1 · Q2 · Computer Science
Artificial Intelligence, Volume 346, Article 104349 · Pub Date: 2025-05-08 · DOI: 10.1016/j.artint.2025.104349
Jaein Lim, Panagiotis Tsiotras
Abstract: Multi-Agent Path Finding (MAPF) is the problem of finding a collection of conflict-free paths for a team of multiple agents while minimizing some global cost, such as the sum of the travel times of all agents or the travel time of the last agent. Conflict Based Search (CBS) is a leading complete and optimal MAPF algorithm that lazily explores the joint agent state space using an admissible heuristic joint plan. Such an admissible heuristic joint plan is computed by combining individual shortest paths computed without considering inter-agent conflicts, and it becomes gradually more informed as constraints are added to the individual agents' path-planning problems to avoid discovered conflicts. In this paper, we seek to speed up CBS by finding a more informed heuristic joint plan that is bounded. We first propose budgeted Class-Ordered A* (bCOA*), a novel algorithm that finds the least-cost path with the minimal number of conflicts, with the path length upper-bounded. Then, we propose a novel bounded-cost variant of CBS, called CBS-Budget (CBSB), by using bCOA* at the low-level search of CBS and a modified focal search at the high-level search of CBS. We prove that CBSB is complete and bounded-suboptimal. In our numerical experiments, CBSB finds a near-optimal solution for hundreds of agents within a fraction of a second. CBSB shows state-of-the-art performance, comparable to Explicit Estimation CBS (EECBS), an enhanced recent version of CBS. On the other hand, CBSB is much easier to implement than EECBS, since it needs only one priority queue at the low-level search, as in CBS, and only two priority queues at the high-level search, as in Enhanced CBS (ECBS).
Citations: 0
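A rough sketch of the idea behind the low-level bCOA* search: for one agent, find a path that prioritizes few conflicts with other agents' reserved vertices and low path cost while never exceeding a length budget. The prioritization below (conflicts first, then cost) is one simple choice and may differ from the paper's exact class ordering; the real bCOA* and CBSB also involve the CBS constraint tree and a focal search at the high level. The tiny graph, the reservation table, and the budget are illustrative assumptions.

```python
# Budgeted, conflict-aware single-agent search on a tiny directed graph.
import heapq

graph = {0: [1, 2], 1: [3], 2: [3], 3: []}   # directed adjacency lists
reserved = {(1, 1)}                           # (vertex, timestep) used by another agent
budget = 4                                    # maximum path length in timesteps

def budgeted_conflict_search(start, goal):
    # Priority: (conflicts so far, cost so far); state: (vertex, timestep).
    frontier = [(0, 0, start, 0, [start])]
    best = {}
    while frontier:
        conflicts, cost, v, t, path = heapq.heappop(frontier)
        if v == goal:
            return path, conflicts
        if t >= budget or best.get((v, t), (float("inf"), float("inf"))) <= (conflicts, cost):
            continue
        best[(v, t)] = (conflicts, cost)
        for u in graph[v]:
            extra = 1 if (u, t + 1) in reserved else 0
            heapq.heappush(frontier,
                           (conflicts + extra, cost + 1, u, t + 1, path + [u]))
    return None, None

print(budgeted_conflict_search(0, 3))  # prefers 0 -> 2 -> 3 with zero conflicts
```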