Abstracting situation calculus action theories
Bita Banihashemi, Giuseppe De Giacomo, Yves Lespérance
Artificial Intelligence, vol. 348, Article 104407 (September 2025). doi: 10.1016/j.artint.2025.104407

We develop a general framework for agent abstraction based on the situation calculus and the ConGolog agent programming language. We assume that we have a high-level specification and a low-level specification of the agent, both represented as basic action theories. A refinement mapping specifies how each high-level action is implemented by a low-level ConGolog program and how each high-level fluent can be translated into a low-level formula. We define a notion of sound abstraction between such action theories in terms of the existence of a suitable bisimulation between their respective models. Sound abstractions have many useful properties that ensure that we can reason about the agent's actions (e.g., executability, projection, and planning) at the abstract level, and refine and concretely execute them at the low level. We also characterize the notion of complete abstraction, where all actions (including exogenous ones) that the high level thinks can happen can in fact occur at the low level. To facilitate verifying that one has a sound/complete abstraction relative to a mapping, we provide a set of necessary and sufficient conditions. Finally, we identify a set of basic action theory constraints that ensure that, for any low-level action sequence, there is a unique high-level action sequence that it refines. This allows us to track/monitor what the low-level agent is doing and describe it in abstract terms (i.e., provide high-level explanations, for instance, to a client or manager).
On preference learning based on sequential Bayesian optimization with pairwise comparison
Tanya Ignatenko, Kirill Kondrashov, Marco Cox, Bert de Vries
Artificial Intelligence, vol. 348, Article 104400 (August 2025). doi: 10.1016/j.artint.2025.104400

User preference learning is generally a hard problem: individual preferences are typically unknown even to users themselves, while the space of choices is infinite. Here we study user preference learning from an information-theoretic perspective. We model preference learning as a system with two interacting sub-systems, one representing a user with his/her preferences and the other representing an agent that has to learn these preferences. The user's behavior is modeled by a parametric preference function. To learn the preferences efficiently and quickly reduce the search space, we propose an agent that interacts with the user to collect the most informative data for learning. The agent presents two proposals to the user for evaluation, and the user rates them based on his/her preference function. We show that the optimal agent strategy for data collection and preference learning results from maximin optimization of the normalized weighted Kullback-Leibler (KL) divergence between the true and agent-assigned predictive user response distributions. The resulting value of the KL divergence, which we call the remaining system uncertainty (RSU), provides an efficient performance metric in the absence of ground truth. This metric characterizes how well the agent can predict the user and, thus, the quality of the underlying learned user (preference) model. Our proposed agent comprises sequential mechanisms for user model inference and proposal generation. To infer the user model (preference function), the agent uses Bayesian approximate inference. Its data collection strategy is to generate the proposals whose responses best resolve the uncertainty in predicting the user's responses. The efficiency of our approach is validated by numerical simulations, and a real-life application of preference learning is also presented.
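The interaction loop the abstract describes can be sketched in a few lines: present two proposals, observe a pairwise preference, update a Bayesian user model, and choose the next pair to be maximally informative. The sketch below is a simplified one-dimensional illustration under assumed ingredients (a quadratic preference function, a grid posterior, a Bradley-Terry-style response model, and a least-predictable-pair heuristic standing in for the paper's KL-based maximin criterion); all names are illustrative.

```python
import math

GRID = [i / 50 for i in range(51)]            # candidate ideal points t in [0, 1]
posterior = {t: 1.0 / len(GRID) for t in GRID}

def utility(x, t):                            # assumed quadratic preference function
    return -(x - t) ** 2

def p_prefer_a(a, b, t, beta=100.0):          # Bradley-Terry-style response model
    return 1.0 / (1.0 + math.exp(-beta * (utility(a, t) - utility(b, t))))

def predictive(a, b):                         # agent's predicted prob. the user prefers a
    return sum(p * p_prefer_a(a, b, t) for t, p in posterior.items())

def most_informative_pair(candidates):        # heuristic: least predictable outcome
    pairs = [(a, b) for a in candidates for b in candidates if a < b]
    return min(pairs, key=lambda ab: abs(predictive(*ab) - 0.5))

def update(a, b, prefers_a):                  # Bayesian update of the user model
    for t in posterior:
        lik = p_prefer_a(a, b, t)
        posterior[t] *= lik if prefers_a else (1.0 - lik)
    z = sum(posterior.values())
    for t in posterior:
        posterior[t] /= z

true_t = 0.62                                 # hidden user preference (simulated)
for _ in range(40):
    a, b = most_informative_pair(GRID[::5])   # agent proposes two options
    update(a, b, utility(a, true_t) > utility(b, true_t))

estimate = max(posterior, key=posterior.get)  # posterior mode approaches true_t
```

Each query acts like a step of noisy bisection: a comparison between proposals a and b tells the agent which side of their midpoint the user's ideal point lies on, so the posterior concentrates near the true preference.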
Towards optimal subsidy bounds for envy-freeable allocations
Yasushi Kawase, Kazuhisa Makino, Hanna Sumita, Akihisa Tamura, Makoto Yokoo
Artificial Intelligence, vol. 348, Article 104406 (August 2025). doi: 10.1016/j.artint.2025.104406

We study the fair division of indivisible items with subsidies among n agents, where the absolute marginal valuation of each item is at most one. Under monotone nondecreasing valuations (where each item is a good), Brustle et al. [9] demonstrated that a maximum subsidy of 2(n−1) and a total subsidy of 2(n−1)² are sufficient to guarantee the existence of an envy-freeable allocation. In this paper, we improve upon these bounds, even in a wider model. Namely, we show that, given an EF1 allocation, we can compute in polynomial time an envy-free allocation with a subsidy of at most n−1 per agent and a total subsidy of at most n(n−1)/2. Moreover, when the valuations are monotone nondecreasing, we provide a polynomial-time algorithm that computes an envy-free allocation with a subsidy of at most n−1.5 per agent and a total subsidy of at most (n²−n−1)/2.
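For intuition on how such subsidy bounds arise: by a characterization due to Halpern and Shah, an allocation is envy-freeable exactly when its envy graph (edge weight w(i,j) = v_i(A_j) − v_i(A_i)) has no positive-weight cycle, and the minimum subsidy for agent i is the maximum weight of any path starting at i. The sketch below computes these quantities by Bellman-Ford-style relaxation; it illustrates the background characterization, not the paper's improved algorithms, and all names are illustrative.

```python
def min_subsidies(valuations, alloc):
    """Minimum subsidies making `alloc` envy-free, or None if it is not
    envy-freeable. valuations[i][g] is agent i's value for item g;
    alloc[i] is agent i's bundle (an iterable of items)."""
    n = len(alloc)
    val = lambda i, bundle: sum(valuations[i][g] for g in bundle)
    # w[i][j]: how much agent i envies agent j's bundle (may be negative)
    w = [[val(i, alloc[j]) - val(i, alloc[i]) for j in range(n)] for i in range(n)]
    # subsidy_i = max-weight path from i in the envy graph, found by relaxation;
    # failure to converge within n+1 rounds signals a positive-weight cycle
    dist = [0.0] * n
    for _ in range(n + 1):
        new = [max([0.0] + [w[i][j] + dist[j] for j in range(n) if j != i])
               for i in range(n)]
        if new == dist:
            return dist
        dist = new
    return None

# Two agents, two goods (hypothetical numbers). Giving both goods to agent 0
# is envy-freeable and needs a subsidy of 1.5 for agent 1; giving both to
# agent 1 creates a positive-weight envy cycle, so no subsidy can fix it.
valuations = [{"a": 1.0, "b": 0.8}, {"a": 0.5, "b": 1.0}]
subs = min_subsidies(valuations, [{"a", "b"}, set()])
```

Note that envy-freeability is a property of the allocation, not just the instance: the relaxation doubles as the cycle check, returning None exactly when some sequence of envies sums to a positive amount.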
Local-MIP: Efficient local search for mixed integer programming
Peng Lin, Shaowei Cai, Mengchuan Zou, Jinkun Lin
Artificial Intelligence, vol. 348, Article 104405 (August 2025). doi: 10.1016/j.artint.2025.104405

Mixed Integer Programming (MIP) is a fundamental model in operations research with broad industrial applications. Local search is a powerful methodology for solving complex optimization problems; however, local search algorithms for MIP remain underexplored. In this work, we propose Local-MIP, an efficient local search algorithm tailored for MIP that integrates novel operators and employs a two-mode architecture to adaptively apply them based on the current solution's feasibility. For the feasible mode, we propose the lift move operator and a corresponding lift process to improve the objective value while maintaining feasibility. For the infeasible mode, we propose the breakthrough move and mixed tight move operators, which respectively optimize the objective function and satisfy constraints. To apply operators intelligently, we develop a dynamic weighting scheme that balances the priorities of the objective function and the constraints. Furthermore, we propose a two-level scoring function structure that hierarchically selects operations, guiding the search toward high-quality feasible solutions. Experiments on public benchmarks compare Local-MIP with state-of-the-art MIP solvers at finding high-quality solutions. The results show that Local-MIP significantly outperforms CPLEX, HiGHS, SCIP, and Feasibility Jump, and remains competitive with the commercial solver Gurobi on challenging problems within short time limits. Moreover, Local-MIP establishes 10 new records on MIPLIB open instances.
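As a toy illustration of the general idea (not the paper's operators): a local search for a small integer program can alternate between reducing weighted constraint violations when infeasible and improving the objective when feasible, with the weights of violated constraints bumped dynamically to escape plateaus. Everything below (the problem, the scoring, the weighting scheme) is an assumed minimal sketch.

```python
# Toy MIP: minimize -x0 - 2*x1  s.t.  x0 + x1 <= 4,  2*x0 + x1 <= 6,  xi in {0,...,4}
c = [-1, -2]
A = [[1, 1], [2, 1]]
b = [4, 6]

def violations(x):
    return [max(0, sum(a * xi for a, xi in zip(row, x)) - bi)
            for row, bi in zip(A, b)]

def score(x, weights):
    # weighted infeasibility dominates; among feasible points the objective decides
    return 1000 * sum(w * v for w, v in zip(weights, violations(x))) \
        + sum(ci * xi for ci, xi in zip(c, x))

def local_search(steps=50):
    x, weights = [4, 4], [1.0, 1.0]          # start from an infeasible point
    best, best_obj = None, float("inf")
    for _ in range(steps):
        neighbors = [[xi + (d if j == i else 0) for j, xi in enumerate(x)]
                     for i in range(len(x)) for d in (-1, 1)
                     if 0 <= x[i] + d <= 4]
        x = min(neighbors, key=lambda y: score(y, weights))
        v = violations(x)
        if any(v):
            # dynamic weighting: bump the weight of each violated constraint
            weights = [w + (1.0 if vi > 0 else 0.0) for w, vi in zip(weights, v)]
        else:
            obj = sum(ci * xi for ci, xi in zip(c, x))
            if obj < best_obj:
                best, best_obj = list(x), obj
    return best, best_obj

best, best_obj = local_search()
```

On this instance the search first descends out of the infeasible region, then finds the optimum x = (0, 4) with objective −8 and keeps it as the incumbent even while later moves oscillate.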
Algebras of actions in an agent's representations of the world
Alexander Dean, Eduardo Alonso, Esther Mondragón
Artificial Intelligence, vol. 348, Article 104403 (August 2025). doi: 10.1016/j.artint.2025.104403

Learning efficient representations enables robust processing of data that can then be generalised across different tasks and domains, and it is thus paramount in many areas of Artificial Intelligence, including computer vision, natural language processing, and reinforcement learning. Within the context of reinforcement learning, we propose in this paper a mathematical framework for learning representations by extracting the algebra of the transformations of worlds from the perspective of an agent. As a starting point, we use our framework to reproduce representations from the symmetry-based disentangled representation learning (SBDRL) formalism proposed by [1] and prove that, although useful, they are restricted to transformations with the properties of algebraic groups. We then generalise two important results of SBDRL, the equivariance condition and the disentangling definition, from group-based symmetry representations to representations capturing the transformation properties of worlds for any algebra, using examples common in reinforcement learning and generated by an algorithm that computes their corresponding Cayley tables. Finally, we combine our generalised equivariance condition and our generalised disentangling definition to show, using category theory, that disentangled sub-algebras can each have their own individual equivariance conditions, which can be treated independently. In so doing, our framework offers a rich formal tool for representing different types of symmetry transformations in reinforcement learning, extending the scope of previous proposals and providing Artificial Intelligence developers with a sound foundation for implementing efficient applications.
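As a small concrete instance of the kind of object the abstract mentions, one can compute the Cayley table of the transformations that an agent's actions induce on a toy world. In this sketch (all names assumed), two move actions on a 4-cell cyclic world generate the cyclic group Z4; with irreversible actions the same closure procedure would instead yield a monoid, which is the more general algebraic setting the paper addresses.

```python
from itertools import product

N = 4                                    # a 1-D cyclic world with N positions

def as_table(f):                         # a transformation as its effect on every state
    return tuple(f(s) for s in range(N))

left = as_table(lambda s: (s - 1) % N)   # the agent's primitive actions
right = as_table(lambda s: (s + 1) % N)

def compose(t1, t2):                     # do t2 first, then t1
    return tuple(t1[s] for s in t2)

# close the generators under composition to obtain the algebra of actions
elements = {left, right}
while True:
    new = {compose(a, b) for a, b in product(elements, repeat=2)} - elements
    if not new:
        break
    elements |= new

cayley = {(a, b): compose(a, b) for a, b in product(elements, repeat=2)}
identity = tuple(range(N))
```

Here the closure contains exactly the four rotations of the ring, the identity appears as left followed by right, and every element has an inverse, so the Cayley table is that of a group.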
Choosing abstraction levels for model-based software debugging: A theoretical and empirical analysis for spreadsheet programs
Patrick Rodler, Birgit Hofer, Dietmar Jannach, Iulia Nica, Franz Wotawa
Artificial Intelligence, vol. 348, Article 104399 (August 2025). doi: 10.1016/j.artint.2025.104399

Model-based diagnosis is a generally applicable, principled approach to the systematic debugging of a wide range of system types such as circuits, knowledge bases, physical devices, or software. Based on a formal description of the system, it enables precise and deterministic reasoning about the potential faults responsible for observed misbehavior. In software, such a formal system description can often be extracted from the buggy program fully automatically. As logical reasoning is central to diagnosis, the performance of model-based debuggers is largely influenced by reasoning efficiency, which in turn depends on the complexity and expressivity of the system description. Since highly detailed models capturing exact semantics often exceed the capabilities of current reasoning tools, researchers have proposed more abstract representations.

In this work, we thoroughly analyze system modeling techniques with a focus on fault localization in spreadsheets, one of the most widely used end-user programming paradigms. Specifically, we present three constraint model types that characterize spreadsheets at different abstraction levels, show how to extract them automatically from faulty spreadsheets, and investigate, both theoretically and empirically, the impact of abstraction on diagnostic output and computational performance. Our main conclusions are that (i) across the model types there is a trade-off between the conciseness of the generated fault candidates and computation time, (ii) the exact model is often impractical, and (iii) a new model based on qualitative reasoning yields the same solutions as the exact one in more than half of the cases while being orders of magnitude faster.

Owing to their ability to restrict the solution space in a sound way, the explored model-based techniques are expected to realize their full potential not as standalone approaches but in combination with iterative sequential diagnosis or with nondeterministic yet more performant statistical debugging methods.
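The model-based diagnosis machinery underlying this line of work (in the tradition of Reiter's theory of diagnosis) derives fault candidates as minimal hitting sets of conflicts: minimal sets of components, here spreadsheet cells, that intersect every set of formulas that cannot all be correct given the observations. A brute-force sketch with made-up cell names:

```python
from itertools import chain, combinations

def diagnoses(conflicts):
    """Minimal hitting sets of the conflict sets: each diagnosis is a minimal
    set of components whose assumed faultiness explains every conflict."""
    components = sorted(set(chain.from_iterable(conflicts)))
    found = []
    for r in range(1, len(components) + 1):     # smallest candidates first
        for cand in combinations(components, r):
            s = set(cand)
            # keep s if it hits every conflict and no smaller diagnosis is inside it
            if all(s & c for c in conflicts) and not any(h <= s for h in found):
                found.append(s)
    return found

# Conflicts extracted from a hypothetical faulty spreadsheet: each set lists
# cells whose formulas cannot all be correct given the observed outputs
conflicts = [{"A1", "B2"}, {"B2", "C3"}, {"A1", "C3"}]
cands = diagnoses(conflicts)
```

With these three pairwise conflicts no single cell explains everything, so the minimal diagnoses are the three two-cell candidates; real implementations replace this enumeration with hitting-set trees or similar, but the notion of a fault candidate is the same.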
Enhancing cooperativity in controlled query evaluation over ontologies
Piero Bonatti, Gianluca Cima, Domenico Lembo, Francesco Magliocca, Lorenzo Marconi, Riccardo Rosati, Luigi Sauro, Domenico Fabio Savo
Artificial Intelligence, vol. 348, Article 104402 (August 2025). doi: 10.1016/j.artint.2025.104402

Controlled Query Evaluation (CQE) is a methodology designed to maintain confidentiality by either rejecting specific queries or adjusting responses to safeguard sensitive information. In this investigation, we focus on CQE over Description Logic ontologies, aiming to ensure that queries are answered truthfully for as long as possible before resorting to deceptive responses, a cooperativity property called the "longest honeymoon". Our work introduces new semantics for CQE, denoted MC-CQE, which enjoys the longest-honeymoon property and outperforms previous methodologies in terms of cooperativity.

We study the complexity of query answering in this new framework for ontologies expressed in the Description Logic DL-Lite_R. Specifically, we establish data complexity results under different maximally cooperative semantics and for different classes of queries, identifying both tractable and intractable cases. In particular, we show that the evaluation of Boolean unions of conjunctive queries coincides under all the above semantics and that its data complexity is in AC0. This result makes query answering amenable to SQL query rewriting. However, this favorable property does not extend to open queries, even with a restricted query language limited to conjunctions of atoms. While answering open queries in the MC-CQE framework is in general intractable, we identify a sub-family of semantics under which answering full conjunctive queries is tractable.
BATED: Learning fair representation for Pre-trained Language Models via biased teacher-guided disentanglement
Yingji Li, Mengnan Du, Rui Song, Mu Liu, Ying Wang
Artificial Intelligence, vol. 348, Article 104401 (August 2025). doi: 10.1016/j.artint.2025.104401

With the rapid development of Pre-trained Language Models (PLMs) and their widespread deployment in real-world applications, the social biases of PLMs have attracted increasing attention, especially regarding the fairness of downstream tasks, which potentially affects the development and stability of society. Among existing debiasing methods, intrinsic debiasing is not necessarily effective when applied to downstream tasks, and the downstream fine-tuning process may introduce new biases or catastrophic forgetting. Most extrinsic debiasing methods rely on sensitive attribute words as prior knowledge to supervise debiasing training. However, collecting sensitive attribute information for real data is difficult due to privacy concerns and regulation, and a limited set of sensitive attribute words may lead to inadequate debiasing training. To this end, this paper proposes BATED, a method that learns fair representations for PLMs via BiAsed TEacher-guided Disentanglement. For a given downstream task, BATED performs debiasing training under the guidance of a biased teacher model rather than relying on sensitive attribute information in the training data. First, we leverage causal contrastive learning to train a task-agnostic, generally biased teacher model. We then employ a Variational Auto-Encoder (VAE) to disentangle the PLM-encoded representation into a fair representation and a biased representation. The biased representation is further decoupled via biased teacher-guided disentanglement, while the fair representation learns the downstream task. BATED thus preserves performance on downstream tasks while improving fairness. Experimental results on seven PLMs across three downstream tasks demonstrate that BATED overall outperforms the state of the art in terms of both fairness and downstream-task performance.
Social behavior as a key to learning-based multi-agent pathfinding dilemmas
Chengyang He, Tanishq Duhan, Parth Tulsyan, Patrick Kim, Guillaume Sartoretti
Artificial Intelligence, vol. 348, Article 104397 (July 2025). doi: 10.1016/j.artint.2025.104397

The Multi-agent Path Finding (MAPF) problem involves finding collision-free paths for a team of agents in a known, static environment, with important applications in warehouse automation, logistics, and last-mile delivery. To meet the needs of these large-scale applications, current learning-based methods often deploy the same fully trained, decentralized network to all agents to improve scalability. However, such parameter sharing typically results in homogeneous behaviors among agents, which may prevent agents from breaking ties around symmetric conflicts (e.g., bottlenecks) and may lead to live-/deadlocks. In this paper, we propose SYLPH, a novel learning-based MAPF framework aimed at mitigating the adverse effects of homogeneity by allowing agents to learn and dynamically select different social behaviors (akin to individual, dynamic roles), without sacrificing the scalability offered by parameter sharing. Specifically, SYLPH introduces a novel hierarchical mechanism in which Social Value Orientation (SVO) acts as a temporally extended latent variable that plays a central role in both policy generation and reward assignment. To support this hierarchical decision-making process, we introduce Social-aware Multi-Policy PPO (SMP3O), a reinforcement learning method that ensures stable and effective training through a mechanism for the cross-utilization of advantages. Moreover, we design an SVO-based learned tie-breaking algorithm that allows agents to proactively avoid collisions rather than relying solely on post-processing techniques. As a result of this hierarchical decision-making and the exchange of social preferences, SYLPH endows agents with the ability to reason about the MAPF task through richer latent spaces and more nuanced contexts, leading to varied responses that help break ties around symmetric conflicts. Our comparative experiments show that SYLPH achieves state-of-the-art performance, surpassing other learning-based MAPF planners on random, room-like, and maze-like maps, while our ablation studies demonstrate the contribution of each component. We finally validate our trained policies on hardware in three types of maps, showing how SYLPH allows agents to find high-quality paths under real-life conditions. Our code and videos are available at marmotlab.github.io/mapf_sylph.
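The Social Value Orientation that SYLPH selects per agent is, in its standard ring formulation, just an angle trading off self-interest against the welfare of others. A minimal sketch of SVO-shaped reward (the function name and signature are assumed, not the paper's API):

```python
import math

def svo_reward(r_self, r_others_mean, theta_deg):
    """Shape an agent's reward by its Social Value Orientation angle:
    0 deg = egoistic (own reward only), 45 deg = prosocial (equal weight),
    90 deg = altruistic (others' reward only)."""
    theta = math.radians(theta_deg)
    return math.cos(theta) * r_self + math.sin(theta) * r_others_mean
```

In a SYLPH-like setup, an angle of this kind, itself chosen by a higher-level policy, is what differentiates agents' effective objectives and hence their behaviors even though they share network parameters.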
MATE: Masked optimal transport with dynamic selection for partial label graph learning
Yiyang Gu, Binqi Chen, Zihao Chen, Ziyue Qiao, Xiao Luo, Junyu Luo, Zhiping Xiao, Wei Ju, Ming Zhang
Artificial Intelligence, vol. 348, Article 104396 (July 2025). doi: 10.1016/j.artint.2025.104396

This paper investigates the problem of partial label graph learning, in which every graph is associated with a set of candidate labels. Previous methods for weakly supervised graph classification often produce pseudo-labels that are overconfident and biased towards the dominant classes, resulting in substantial error accumulation. We introduce a new framework named Masked Optimal Transport with Dynamic Selection (MATE) for partial label graph learning, which improves the quality of graph label assignments from the perspectives of class balancing and uncertainty mining. In particular, MATE masks probabilities outside the candidate sets and then adopts optimal transport to optimize the assignments without class biases. This design rests on the assumption that the true label distribution is class-balanced or nearly so, which is common in many training datasets and real-world scenarios. To further reduce potential noise, we propose a novel scoring metric, termed partial energy discrepancy (PED), to evaluate the uncertainty of assignments, and introduce a dynamic selection strategy that adjusts sample-specific thresholds via momentum updating. Finally, the samples are divided into three levels (confident, less confident, and unconfident), and each group is trained separately in our collaborative optimization framework. Extensive experiments on various benchmarks demonstrate the superiority of MATE over state-of-the-art baselines.
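The masking-plus-optimal-transport step can be pictured with a plain Sinkhorn iteration: scores outside a sample's candidate set are zeroed out, rows are normalized so each sample distributes one unit of mass, and columns are scaled toward the (assumed) balanced class marginal. This is a deliberately simplified stand-in for the paper's formulation, with no PED scoring or dynamic selection; all names are illustrative.

```python
import math

def masked_sinkhorn(scores, candidate_masks, iters=200):
    """Class-balanced pseudo-label assignment for partial labels: mask scores
    outside each sample's candidate set, then alternately normalize rows
    (one unit per sample) and columns (n/k units per class)."""
    n, k = len(scores), len(scores[0])
    P = [[math.exp(s) if m else 0.0 for s, m in zip(row, mask)]
         for row, mask in zip(scores, candidate_masks)]
    for _ in range(iters):
        for i in range(n):                          # row step
            z = sum(P[i]) or 1.0
            P[i] = [p / z for p in P[i]]
        col = [sum(P[i][j] for i in range(n)) or 1.0 for j in range(k)]
        for i in range(n):                          # column step (balanced classes)
            P[i] = [p * (n / k) / col[j] for j, p in enumerate(P[i])]
    for i in range(n):                              # final row normalization
        z = sum(P[i]) or 1.0
        P[i] = [p / z for p in P[i]]
    return P

# Four graphs, two classes; every score favors class 0, but sample 3's
# candidate set excludes class 0, and the balanced marginal forces the
# remaining samples to share the leftover class-0 mass
scores = [[2.0, 1.0]] * 4
masks = [[True, True]] * 3 + [[False, True]]
P = masked_sinkhorn(scores, masks)
```

Without the balanced marginal, all four samples would collapse onto class 0; with it, sample 3 commits fully to class 1 and the other three settle at roughly 2/3 mass on class 0, illustrating how the transport view fights class bias.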