{"title":"Contra2: A one-step active learning method for imbalanced graphs","authors":"Wenjie Yang , Shengzhong Zhang , Jiaxing Guo , Zengfeng Huang","doi":"10.1016/j.artint.2025.104439","DOIUrl":"10.1016/j.artint.2025.104439","url":null,"abstract":"<div><div>Graph active learning (GAL) is an important research direction in graph neural networks (GNNs) that aims to select the most valuable nodes for labeling to train GNNs. Previous works in GAL have primarily focused on the overall performance of GNNs, overlooking the balance among different classes. However, graphs in real-world applications are often imbalanced, which leads GAL methods to select class-imbalanced training sets, resulting in biased GNN models. Furthermore, due to the high cost of multi-turn queries, there is an increasing demand for one-step GAL methods, where the entire training set is queried at once. These realities prompt us to investigate the problem of one-step active learning on imbalanced graphs.</div><div>In this paper, we propose a theory-driven method called Contrast & Contract (Contra<sup>2</sup>) to tackle the above issues. The key idea of Contra<sup>2</sup> is that intra-class edges within the majority are dominant in the edge set, so contracting these edges will reduce the imbalance ratio. Specifically, Contra<sup>2</sup> first learns node representations by graph <strong>contrast</strong>ive learning (GCL), then stochastically <strong>contract</strong>s the edges that connect nodes with similar embeddings. We theoretically show that Contra<sup>2</sup> reduces the imbalance ratio with high probability. By leveraging a more evenly distributed graph, we can achieve a balanced selection of labeled nodes without requiring any seed labels. The effectiveness of Contra<sup>2</sup> is evaluated against various baselines on 11 datasets with different budgets. 
Contra<sup>2</sup> demonstrates strong performance, matching or exceeding the baselines while using only half of the annotation budget on some datasets.</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"349 ","pages":"Article 104439"},"PeriodicalIF":4.6,"publicationDate":"2025-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145263924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
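The "contract" step described in the abstract above can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the cosine-similarity threshold, the contraction probability, and the union-find merging are all illustrative assumptions.

```python
import random
from math import sqrt

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u)) or 1.0
    nv = sqrt(sum(b * b for b in v)) or 1.0
    return dot / (nu * nv)

def contract_similar_edges(edges, emb, threshold=0.9, p=0.5, seed=0):
    """Stochastically contract edges whose endpoint embeddings are similar.

    Returns a map from each node to the root of its merged super-node:
    nodes sharing a root have been contracted together. Since intra-class
    edges in the majority class tend to connect similar embeddings, the
    contracted graph has a lower imbalance ratio.
    """
    rng = random.Random(seed)
    parent = {v: v for e in edges for v in e}

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v in edges:
        if cosine(emb[u], emb[v]) >= threshold and rng.random() < p:
            parent[find(u)] = find(v)
    return {v: find(v) for v in parent}
```

For example, on a path graph where the first three nodes share one embedding, those three collapse into a single super-node while the dissimilar endpoint survives untouched.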
{"title":"Constraints and lifting-based (conditional) preferences in abstract argumentation","authors":"Gianvincenzo Alfano, Sergio Greco, Francesco Parisi, Irina Trubitsyna","doi":"10.1016/j.artint.2025.104437","DOIUrl":"10.1016/j.artint.2025.104437","url":null,"abstract":"<div><div>Dealing with controversial information is an important issue in several application contexts. Formal argumentation enables reasoning on arguments for and against a claim to decide on an outcome. Abstract Argumentation Framework (AF) has emerged as a central formalism in argument-based reasoning. In recent years there has been an increasing interest in extending AF to facilitate the knowledge representation and reasoning process. In this paper, we present an extension of AF that allows for the representation of labelled constraints and labelled preferences. A labelled argument is of the form <span><math><mrow><mi>in</mi></mrow><mo>(</mo><mi>a</mi><mo>)</mo></math></span>, <span><math><mrow><mi>out</mi></mrow><mo>(</mo><mi>a</mi><mo>)</mo></math></span>, or <span><math><mrow><mi>und</mi></mrow><mo>(</mo><mi>a</mi><mo>)</mo></math></span>, where <em>a</em> is an argument, whereas <strong>in</strong>, <strong>out</strong>, and <strong>und</strong> denote the acceptance status (i.e., accepted, rejected, undecided, respectively) of the specified argument. We start by considering an extension of AF with labelled constraints, namely <em>Labelled Constrained AF</em> (LCAF), then we focus on AF with labelled preferences (<em>Labelled Preference-based AF</em>, LPAF for short) and, finally, we introduce a general framework called <em>Labelled Preference-based Constrained AF</em> (LPCAF) that combines AF, labelled constraints, and labelled preferences. 
We also investigate an extension of AF with labelled conditional (or extended) preferences, namely <em>Labelled extended Preference-based AF</em> (LePAF), and its further combination with labelled constraints (<em>Labelled extended Preference-based Constrained AF</em>, LePCAF for short). Herein, conditional preferences are of the form <span><math><mi>a</mi><mo>></mo><mi>b</mi><mo>←</mo></math></span> <em>body</em>, where <strong>a</strong> and <strong>b</strong> are labelled arguments, whereas <em>body</em> is a propositional formula over labelled arguments. For each framework, we define its syntax and semantics, and investigate the computational complexity of four canonical argumentation problems: <em>existence</em>, <em>verification</em>, and <em>credulous</em> and <em>skeptical acceptance</em>, under the well-known <em>complete</em>, <em>stable</em>, <em>semi-stable</em>, and <em>preferred</em> semantics.</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"349 ","pages":"Article 104437"},"PeriodicalIF":4.6,"publicationDate":"2025-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145322151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
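The conditional preferences in the abstract above can be made concrete with a small evaluator: given a labelling that assigns each argument a status, decide whether a preference's body holds. The nested-tuple encoding of formulas is an illustrative assumption, not the paper's concrete syntax.

```python
def eval_body(formula, labelling):
    """Evaluate a propositional formula over labelled arguments.

    Formulas are nested tuples: ('and', f, g), ('or', f, g), ('not', f),
    or an atom ('in'|'out'|'und', argument). `labelling` maps each
    argument to 'in', 'out', or 'und'.
    """
    op = formula[0]
    if op == 'and':
        return eval_body(formula[1], labelling) and eval_body(formula[2], labelling)
    if op == 'or':
        return eval_body(formula[1], labelling) or eval_body(formula[2], labelling)
    if op == 'not':
        return not eval_body(formula[1], labelling)
    status, arg = formula          # atom case: a labelled argument
    return labelling[arg] == status

def preference_active(pref, labelling):
    """A conditional preference a > b <- body applies exactly when its
    body holds under the given labelling."""
    _a, _b, body = pref
    return eval_body(body, labelling)
```

For instance, the preference a > b <- in(a) AND NOT in(b) is active under a labelling that accepts a and rejects b.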
{"title":"Defending a city from multi-drone attacks: A sequential Stackelberg security games approach","authors":"Dolev Mutzari , Tonmoay Deb , Cristian Molinaro , Andrea Pugliese , V.S. Subrahmanian , Sarit Kraus","doi":"10.1016/j.artint.2025.104425","DOIUrl":"10.1016/j.artint.2025.104425","url":null,"abstract":"<div><div>To counter an imminent multi-drone attack on a city, defenders have deployed drones across the city. These drones must intercept/eliminate the threat, thus reducing potential damage from the attack. We model this as a Sequential Stackelberg Security Game, where the defender first commits to a mixed sequential defense strategy, and the attacker then best responds. We develop an efficient algorithm called S2D2, which outputs a defense strategy. We demonstrate the efficacy of S2D2 in extensive experiments on data from 80 real cities, improving the performance of the defender in comparison to greedy heuristics based on prior works. We prove that under some reasonable assumptions about the city structure, S2D2 outputs an approximate Strong Stackelberg Equilibrium (SSE) with a convenient structure.</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"349 ","pages":"Article 104425"},"PeriodicalIF":4.6,"publicationDate":"2025-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145263925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pandora's box problem with time constraints","authors":"Georgios Amanatidis , Ben Berger , Tomer Ezra , Michal Feldman , Federico Fusco , Rebecca Reiffenhäuser , Artem Tsikiridis","doi":"10.1016/j.artint.2025.104426","DOIUrl":"10.1016/j.artint.2025.104426","url":null,"abstract":"<div><div>The Pandora's Box problem models the search for the best alternative when evaluation is costly. In the simplest variant, a decision maker is presented with <em>n</em> boxes, each associated with a cost of inspection and a hidden random reward. The decision maker inspects a subset of these boxes one after the other, in a possibly adaptive order, and gains the difference between the largest revealed reward and the sum of the inspection costs. Although this classic version is well understood (Weitzman 1979), there is a flourishing recent literature on variants of the problem. Here we introduce a general framework—the Pandora's Box Over Time problem—that captures a wide range of variants where time plays a role, e.g., by constraining the schedules of exploration and influencing costs and rewards. In our framework, boxes have time-dependent rewards and costs, whereas inspection may require a box-specific processing time. Moreover, once a box is inspected, its reward may deteriorate over time. Our main result is an efficient constant-factor approximation to the optimal strategy for the Pandora's Box Over Time problem, which is generally NP-hard to compute. 
We further obtain improved results for the natural special cases where boxes have no processing time, boxes are available only in specific time slots, or when costs and reward distributions are time-independent (but rewards may still deteriorate after inspection).</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"349 ","pages":"Article 104426"},"PeriodicalIF":4.6,"publicationDate":"2025-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145263923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
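The classic time-free baseline that the abstract above generalizes is Weitzman's (1979) index rule: each box gets a reserve price sigma solving E[(V - sigma)^+] = cost, boxes are inspected in decreasing index order, and search stops once the best revealed reward beats every remaining index. A sketch of that baseline (not the paper's Over-Time algorithm), assuming discrete reward distributions:

```python
def reserve_price(values_probs, cost, lo=0.0, hi=1e6, iters=60):
    """Weitzman's index: the sigma solving E[(V - sigma)^+] = cost,
    found by bisection (the expected excess is decreasing in sigma)."""
    def excess(s):
        return sum(p * max(v - s, 0.0) for v, p in values_probs)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if excess(mid) > cost:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def pandora(boxes, draw):
    """boxes: list of (values_probs, cost); draw(i) reveals box i's reward.

    Inspect boxes in decreasing index order; stop when the best revealed
    reward beats the highest remaining index. Returns (net payoff, best
    revealed reward)."""
    order = sorted(range(len(boxes)),
                   key=lambda i: reserve_price(boxes[i][0], boxes[i][1]),
                   reverse=True)
    best, spent = 0.0, 0.0
    for i in order:
        if best >= reserve_price(boxes[i][0], boxes[i][1]):
            break
        spent += boxes[i][1]
        best = max(best, draw(i))
    return best - spent, best
```

For example, a box paying 10 with probability 1/2 at cost 1 has index 8, since 0.5 * (10 - 8) = 1.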
{"title":"Optimal bailouts and strategic debt forgiveness in financial networks","authors":"Panagiotis Kanellopoulos , Maria Kyropoulou , Hao Zhou","doi":"10.1016/j.artint.2025.104424","DOIUrl":"10.1016/j.artint.2025.104424","url":null,"abstract":"<div><div>A financial system is represented by a network, where nodes correspond to banks, and directed labeled edges correspond to debt contracts between banks. Once a payment schedule has been defined, the liquidity of the system is the sum of total payments made in the network. Maximizing systemic liquidity is a natural objective of any financial authority, so we study the setting where the financial authority offers bailout money to some bank(s) or forgives the debts of others in order to help them avoid costs related to default, and, hence, maximize liquidity. We investigate the approximation ratio provided by the greedy bailout policy compared to the optimal one, and we study the computational hardness of finding the optimal debt-removal and budget-constrained optimal bailout policy, respectively.</div><div>We also study financial systems from a game-theoretic standpoint. We observe that the removal of some incoming debt might be in the best interest of a bank, if that helps one of its borrowers remain solvent and avoid costs related to default. Assuming that a bank's well-being (i.e., utility) is aligned with the incoming payments it receives from the network, we define and analyze a game among banks that want to maximize their utility by strategically giving up some incoming payments. In addition, we extend the previous game by considering bailout payments. 
After formally defining the above games, we prove results about the existence and quality of pure Nash equilibria, as well as the computational complexity of finding such equilibria.</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"349 ","pages":"Article 104424"},"PeriodicalIF":4.6,"publicationDate":"2025-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145263920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
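The liquidity objective in the abstract above can be sketched with the standard Eisenberg-Noe clearing model: each bank pays the minimum of what it owes and what it has, creditors are paid pro rata, and a fixed point is reached by iteration. This is a textbook sketch under simplifying assumptions (no default costs, plain fixed-point iteration), not the paper's exact model.

```python
def clearing_payments(liabilities, external, iters=100):
    """liabilities[i][j]: debt bank i owes bank j; external[i]: outside
    assets of bank i. Iterates p_i = min(total owed by i, external_i +
    incoming payments to i), with creditors paid pro rata."""
    n = len(external)
    bar = [sum(liabilities[i]) for i in range(n)]   # total owed by each bank
    p = bar[:]                                      # start from full payment
    for _ in range(iters):
        incoming = [sum(p[j] * (liabilities[j][i] / bar[j]) if bar[j] else 0.0
                        for j in range(n)) for i in range(n)]
        p = [min(bar[i], external[i] + incoming[i]) for i in range(n)]
    return p

def liquidity(liabilities, external):
    """Systemic liquidity: total payments made under the clearing vector."""
    return sum(clearing_payments(liabilities, external))
```

For example, a bank with 5 in assets owing 10 can pay only 5, so system liquidity is 5; a bailout of 5 to that bank would raise liquidity to 10.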
{"title":"The topology of surprise","authors":"Alexandru Baltag , Nick Bezhanishvili , David Fernández-Duque","doi":"10.1016/j.artint.2025.104423","DOIUrl":"10.1016/j.artint.2025.104423","url":null,"abstract":"<div><div>In this paper we present a topological epistemic logic, with modalities for knowledge (modelled as the universal modality), knowability (represented by the topological interior operator), and unknowability of the actual world. The last notion has a non-self-referential reading (modelled by Cantor derivative: the set of limit points of a given set) and a self-referential one (modelled by Cantor's perfect core of a given set: its largest subset without isolated points, where <em>x</em> is isolated iff <span><math><mo>{</mo><mi>x</mi><mo>}</mo></math></span> is open). We completely axiomatize this logic, showing that it is decidable and <span>pspace</span>-complete, and we apply it to the analysis of a famous epistemic puzzle: the Surprise Exam Paradox.</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"349 ","pages":"Article 104423"},"PeriodicalIF":4.6,"publicationDate":"2025-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145189732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learngene: Inheritable “genes” in intelligent agents","authors":"Fu Feng , Jing Wang , Xu Yang , Xin Geng","doi":"10.1016/j.artint.2025.104421","DOIUrl":"10.1016/j.artint.2025.104421","url":null,"abstract":"<div><div>Biological intelligence has driven significant progress in artificial intelligence (AI), but a critical gap remains: biological systems inherit innate abilities from genes, with brains initialized by blueprints refined over 3.5 billion years of evolution, while machines rely heavily on inefficient, data-driven learning from scratch. This gap arises from the lack of a genetic mechanism in machines to transfer and accumulate inheritable knowledge across generations. To bridge this gap, we propose learngenes, network fragments that act as inheritable “genes” for machines. Unlike conventional knowledge transfer methods, learngenes enable efficient and universal knowledge transfer by selectively encapsulating task-agnostic knowledge. To facilitate the transfer and accumulation of task-agnostic knowledge across generations, we introduce Genetic Reinforcement Learning (GRL), a framework that simulates the learning and evolution of organisms in intelligent agents following Lamarckian principles. Through GRL, we identify learngenes as network fragments within agents' policy networks, equipping newborn agents with innate abilities for rapid adaptation to novel tasks. We demonstrate the advantages of learngene-based knowledge transfer over evolution-based search and traditional pre-trained models, and show how learngenes evolve through the accumulation of task-agnostic knowledge. 
Overall, this work establishes a novel paradigm for knowledge transfer and model initialization in AI, offering new possibilities for more adaptive, efficient, and scalable learning systems.</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"348 ","pages":"Article 104421"},"PeriodicalIF":4.6,"publicationDate":"2025-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145154766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
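The inheritance mechanism described in the abstract above can be sketched as copying a designated fragment of a trained policy network into a newborn agent while the remaining layers are re-initialized from scratch. The layer-name granularity and flat weight lists are illustrative simplifications, not the paper's actual learngene identification procedure.

```python
import random

def inherit_learngene(parent_weights, learngene_layers, init_scale=0.1, seed=0):
    """Initialize a child network from a parent.

    Layers named in `learngene_layers` are copied verbatim from the parent
    (the inheritable 'gene' carrying task-agnostic knowledge); all other
    layers are randomly re-initialized and must be learned from scratch.
    Weight dicts map layer name -> flat list of floats.
    """
    rng = random.Random(seed)
    child = {}
    for name, weights in parent_weights.items():
        if name in learngene_layers:
            child[name] = list(weights)                 # inherited fragment
        else:
            child[name] = [rng.uniform(-init_scale, init_scale)
                           for _ in weights]            # fresh initialization
    return child
```

The design intent is that the inherited fragment gives the newborn agent a useful inductive bias, so adaptation to a novel task starts from better-than-random behavior.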
{"title":"Unsupervised sentence selection for creating a representative corpus in Turkish: An active learning approach","authors":"Hayri Volkan Agun","doi":"10.1016/j.artint.2025.104422","DOIUrl":"10.1016/j.artint.2025.104422","url":null,"abstract":"<div><div>In this study, active learning methods adapted for Turkish sentence selection are evaluated through language learning with neural models. Turkish is an agglutinative language with a complex morphology, where the linguistic properties of words are encoded in suffixes. Active learning methods based on regression, clustering, language models, distance metrics, and neural networks are applied to unlabeled sentence selection. In this respect, a sentence corpus is selected from a larger corpus, with the same number of samples for each target word in intrinsic and extrinsic evaluation tasks. The selected sentences are used to train SkipGram, CBOW, and self-attention LSTM language models, and the extracted embeddings are evaluated on semantic analogy, POS tagging, and sentiment analysis tasks. The evaluation scores of the models trained on the samples selected by each active learning method are compared. The results for sentences selected by language models indicate an improvement over random selection based on a static vocabulary. These results also show that the selection affects the quality of unsupervised word embedding extraction even if the target vocabulary is kept the same. 
Beyond accuracy, selection based on language models is also shown to be more time-efficient than the other methods, especially those based on neural network models and distance metrics.</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"348 ","pages":"Article 104422"},"PeriodicalIF":4.6,"publicationDate":"2025-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145104222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
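A language-model-based selection criterion of the kind evaluated above can be sketched as greedy surprisal maximization: score each candidate sentence under a unigram model of the sentences selected so far and add the most surprising one. The unigram model and greedy loop are deliberately simple stand-ins, not the paper's actual methods.

```python
import math
from collections import Counter

def unigram_surprisal(sentence, counts, total, vocab):
    """Average negative log-probability of a sentence's tokens under an
    add-one-smoothed unigram model of the selected corpus so far."""
    toks = sentence.split()
    return sum(-math.log((counts[t] + 1) / (total + vocab)) for t in toks) / len(toks)

def select_sentences(pool, budget):
    """Greedily pick the sentence the current model finds most surprising,
    then fold it into the model, until the budget is exhausted."""
    counts, total = Counter(), 0
    vocab = len({t for s in pool for t in s.split()}) or 1
    selected, remaining = [], list(pool)
    for _ in range(min(budget, len(remaining))):
        best = max(remaining,
                   key=lambda s: unigram_surprisal(s, counts, total, vocab))
        selected.append(best)
        remaining.remove(best)
        counts.update(best.split())
        total += len(best.split())
    return selected
```

After a repetitive sentence is selected, its duplicates score low, so the next pick covers unseen vocabulary, which is the representativeness effect the study measures.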
{"title":"Bridging theory and practice in bidirectional heuristic search with front-to-end consistent heuristics","authors":"Lior Siag, Shahaf S. Shperberg","doi":"10.1016/j.artint.2025.104420","DOIUrl":"10.1016/j.artint.2025.104420","url":null,"abstract":"<div><div>Recent research on bidirectional heuristic search (BiHS) has been shaped by the <em>must-expand pairs</em> (MEP) theory, which identifies the pairs of nodes that must be expanded to ensure solution optimality. Another line of research has focused on algorithms utilizing lower bounds derived from consistent heuristics during the search. This paper bridges these two approaches, offering a unified framework that demonstrates how both existing and novel algorithms can be derived from MEP theory. We introduce an extended set of bounds, encompassing both previously known and newly formulated ones. Using these bounds, we develop a range of algorithms, each employing different criteria for termination, node selection, and search direction. Finally, we empirically evaluate how these bounds and algorithms impact search efficiency.</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"348 ","pages":"Article 104420"},"PeriodicalIF":4.6,"publicationDate":"2025-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145104221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
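The kind of lower bound the abstract above builds on can be illustrated with the well-known pair bound for front-to-end consistent heuristics: for a forward node u and backward node v, any solution through the pair costs at least max(gF + hF, gB + hB, gF + gB + eps), where eps is the minimum edge cost. This is the familiar NBS-style bound from prior BiHS work; the paper's extended bound set refines it further.

```python
def pair_lower_bound(gF, hF, gB, hB, eps=0.0):
    """Lower bound on any solution passing through a forward node with
    g-value gF / heuristic hF and a backward node with gB / hB, given
    front-to-end consistent heuristics: the max of the forward f-value,
    the backward f-value, and gF + gB plus the minimum edge cost eps."""
    return max(gF + hF, gB + hB, gF + gB + eps)

def can_terminate(lb_min, best_solution_cost):
    """The search may stop once the best solution found so far is no worse
    than the minimal lower bound over all open forward/backward pairs."""
    return best_solution_cost <= lb_min
```

Note how each of the three terms can dominate: with gF=3, hF=4, gB=2, hB=6, eps=1, the backward f-value 8 is the binding bound.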
{"title":"Minimax off-policy evaluation and learning with subgaussian and differentiable importance weighting","authors":"Alberto Maria Metelli, Alessio Russo, Marcello Restelli","doi":"10.1016/j.artint.2025.104419","DOIUrl":"10.1016/j.artint.2025.104419","url":null,"abstract":"<div><div>In this work, we study the statistical properties of the <em>off-policy estimation</em> problem, i.e., estimating expectations under a target policy using samples collected from a different policy. We begin by presenting a novel minimax concentration lower bound that highlights the fundamental limits of off-policy estimation. We then analyze two well-known <em>importance weighting</em> (IW) techniques: vanilla IW and self-normalized importance weighting (SN). For both methods, we derive concentration and anti-concentration results, showing that their concentration rates are provably suboptimal compared to our lower bound. Observing that this undesired behavior arises from the <em>heavy-tailed</em> nature of the IW and SN estimators, we propose a new class of parametric estimators based on a transformation using the <em>power mean</em> (PM), which is no longer heavy-tailed. We study the theoretical properties of the PM estimator in terms of bias and variance. We show that, with suitable (possibly data-driven) tuning of its parameters, the PM estimator satisfies two key properties under certain conditions: (<em>i</em>) it achieves a <em>subgaussian</em> concentration rate that matches our lower bound and (<em>ii</em>) it maintains differentiability with respect to the target policy. 
Finally, we validate our approach through numerical simulations on both synthetic datasets and contextual bandits, comparing it against standard off-policy evaluation and learning baselines.<span><span><sup>1</sup></span></span></div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"348 ","pages":"Article 104419"},"PeriodicalIF":4.6,"publicationDate":"2025-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145094875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
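The three estimators discussed in the abstract above can be sketched as follows. The vanilla IW and self-normalized (SN) estimators are standard; the power-mean weight transformation shown here, a power mean of the raw weight and 1 with mixing parameter lam and exponent s, is an illustrative assumption about the PM estimator's form, not the paper's exact parametrization.

```python
def vanilla_iw(samples, pi_target, pi_behavior):
    """Vanilla IW estimate of E_target[f]: samples is a list of (x, f(x))
    pairs drawn under the behavior policy."""
    return sum(f * pi_target(x) / pi_behavior(x) for x, f in samples) / len(samples)

def self_normalized_iw(samples, pi_target, pi_behavior):
    """SN variant: normalize by the sum of weights instead of n."""
    ws = [pi_target(x) / pi_behavior(x) for x, _ in samples]
    return sum(w * f for w, (_, f) in zip(ws, samples)) / sum(ws)

def power_mean_weight(w, lam, s):
    """Power mean of the raw weight w and 1: ((1-lam) * w**s + lam)**(1/s).
    lam = 0 recovers w; larger lam shrinks the weight toward 1, taming the
    heavy tail of the raw importance weights (illustrative form only)."""
    return ((1 - lam) * w ** s + lam) ** (1.0 / s)

def pm_iw(samples, pi_target, pi_behavior, lam=0.1, s=1.0):
    """PM-corrected estimate: plug transformed weights into vanilla IW."""
    return sum(f * power_mean_weight(pi_target(x) / pi_behavior(x), lam, s)
               for x, f in samples) / len(samples)
```

Because power_mean_weight is smooth in the target policy's probabilities, the PM estimate stays differentiable, which is what makes it usable inside gradient-based off-policy learning.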