{"title":"Algorithms for computing the set of acceptable arguments","authors":"Lars Bengel , Matthias Thimm , Federico Cerutti , Mauro Vallati","doi":"10.1016/j.ijar.2025.109478","DOIUrl":"10.1016/j.ijar.2025.109478","url":null,"abstract":"<div><div>We investigate the computational problem of determining the set of acceptable arguments in abstract argumentation wrt. credulous and skeptical reasoning under grounded, complete, stable, and preferred semantics. In particular, we investigate the computational complexity of that problem and its verification variant, and develop several algorithms for all problem variants, including two baseline approaches based on iterative acceptability queries and extension enumeration, and some optimised versions. We experimentally compare the runtime performance of these algorithms: our results show that our newly optimised algorithms significantly outperform the baseline algorithms in most cases.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"185 ","pages":"Article 109478"},"PeriodicalIF":3.2,"publicationDate":"2025-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144154440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Corrigendum to “Estimating bounds on causal effects in high-dimensional and possibly confounded systems” [Int. J. Approx. Reason. 88 (2017) 371–384]","authors":"Daniel Malinsky , Peter Spirtes","doi":"10.1016/j.ijar.2025.109475","DOIUrl":"10.1016/j.ijar.2025.109475","url":null,"abstract":"","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"185 ","pages":"Article 109475"},"PeriodicalIF":3.2,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144116699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On approximation of lattice-valued functions using lattice integral transforms","authors":"Viec Bui Quoc , Michal Holčapek","doi":"10.1016/j.ijar.2025.109476","DOIUrl":"10.1016/j.ijar.2025.109476","url":null,"abstract":"<div><div>This paper examines the approximation capabilities of lattice integral transforms and their compositions in reconstructing lattice-valued functions. By introducing an integral kernel <em>Q</em> on the function domain, we define the concept of a <em>Q</em>-inverse integral kernel, which generalizes the traditional inverse kernel defined as a transposed integral kernel. Leveraging these <em>Q</em>-inverses, we establish upper and lower bounds for a transformed version of the original function induced by the integral kernel <em>Q</em>. The quality of approximation is analyzed using a lattice-based modulus of continuity, specifically designed for functions valued in complete residuated lattices. Additionally, under specific conditions, we demonstrate that the approximation quality for extensional functions with respect to the kernel <em>Q</em> can be estimated through the integral of the square of <em>Q</em>, and in certain cases, these extensional functions can be perfectly reconstructed. The theoretical findings, illustrated through examples, provide a strong foundation for further theoretical advancement and practical applications.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"185 ","pages":"Article 109476"},"PeriodicalIF":3.2,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144169994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Soft learning probabilistic circuits","authors":"Soroush Ghandi , Benjamin Quost , Cassio P. de Campos","doi":"10.1016/j.ijar.2025.109467","DOIUrl":"10.1016/j.ijar.2025.109467","url":null,"abstract":"<div><div>Probabilistic Circuits (PCs) are prominent tractable probabilistic models, allowing for a wide range of exact inferences. This paper focuses on a main algorithm for training PCs, LearnSPN, arguably a gold standard due to its efficiency, performance, and ease of use, in particular for tabular data. We show that LearnSPN is a greedy likelihood maximizer under mild assumptions. While inferences in PCs may use the entire circuit structure for processing queries, LearnSPN applies a <em>hard</em> method for learning PCs, propagating at each sum node a data point through one and only one of the children/edges as in a hard clustering process. We propose a new learning procedure named SoftLearn, that induces a PC using a <em>soft</em> clustering process. We investigate the effect of this learning-inference compatibility in PCs. Our experiments show that SoftLearn outperforms LearnSPN in many situations, yielding better likelihoods and arguably better samples. We also analyze comparable tractable models to highlight the differences between soft/hard learning and model querying.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"185 ","pages":"Article 109467"},"PeriodicalIF":3.2,"publicationDate":"2025-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144124424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The diameter of a stochastic matrix: A new measure for sensitivity analysis in Bayesian networks","authors":"Manuele Leonelli , Jim Q. Smith","doi":"10.1016/j.ijar.2025.109470","DOIUrl":"10.1016/j.ijar.2025.109470","url":null,"abstract":"<div><div>Bayesian networks are one of the most widely used classes of probabilistic models for risk management and decision support because of their interpretability and flexibility in including heterogeneous pieces of information. In any applied modelling, it is critical to assess how robust the inferences on certain target variables are to changes in the model. In Bayesian networks, these analyses fall under the umbrella of sensitivity analysis, which is most commonly carried out by quantifying dissimilarities using Kullback-Leibler information measures. We argue that robustness methods based instead on the total variation distance provide simple and more valuable bounds on robustness to misspecification, which are both formally justifiable and transparent. We introduce a novel measure of dependence in conditional probability tables called the <em>diameter</em> to derive such bounds. This measure quantifies the strength of dependence between a variable and its parents. Furthermore, the diameter is a versatile measure that can be applied to a wide range of sensitivity analysis tasks. It is particularly useful for quantifying edge strength, assessing influence between pairs of variables, detecting asymmetric dependence, and amalgamating levels of variables. This flexibility makes the diameter an invaluable tool for enhancing the robustness and interpretability of Bayesian network models in applied risk management and decision support.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"185 ","pages":"Article 109470"},"PeriodicalIF":3.2,"publicationDate":"2025-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144089915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A comparative analysis of aggregation rules for coherent lower previsions","authors":"Enrique Miranda, Juan J. Salamanca, Ignacio Montes","doi":"10.1016/j.ijar.2025.109474","DOIUrl":"10.1016/j.ijar.2025.109474","url":null,"abstract":"<div><div>We consider the problem of aggregating belief models elicited by experts when these are expressed by means of coherent lower previsions. These constitute a framework general enough so as to include as particular cases not only probability measures but also the majority of models from imprecise probability theory. Although the aggregation problem has already been tackled in the literature, our contribution provides a unified view by gathering a number of rationality criteria and aggregation rules studied in different papers. Specifically, we consider six aggregation rules and twenty rationality criteria. We exhaustively analyse the relationships between the rules, the properties satisfied by each rule and the characterisations of the rules in terms of the criteria.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"185 ","pages":"Article 109474"},"PeriodicalIF":3.2,"publicationDate":"2025-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144099018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The structure of rough sets defined by reflexive relations","authors":"Jouni Järvinen , Sándor Radeleczki","doi":"10.1016/j.ijar.2025.109471","DOIUrl":"10.1016/j.ijar.2025.109471","url":null,"abstract":"<div><div>For several types of information relations, the induced rough sets system RS does not form a lattice but only a partially ordered set. However, by studying its Dedekind–MacNeille completion DM(RS), one may reveal new important properties of rough set structures. Building upon D. Umadevi's work on describing joins and meets in DM(RS), we previously investigated pseudo-Kleene algebras defined on DM(RS) for reflexive relations. This paper delves deeper into the order-theoretic properties of DM(RS) in the context of reflexive relations. We describe the completely join-irreducible elements of DM(RS) and characterize when DM(RS) is a spatial completely distributive lattice. We show that even in the case of a non-transitive reflexive relation, DM(RS) can form a Nelson algebra, a property generally associated with quasiorders. We introduce a novel concept, the core of a relational neighbourhood, and use it to provide a necessary and sufficient condition for DM(RS) to determine a Nelson algebra.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"185 ","pages":"Article 109471"},"PeriodicalIF":3.2,"publicationDate":"2025-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144089913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Information fusion based conflict analysis model for multi-source fuzzy data","authors":"Xinxin Tang , Mengyu Yan , Jinhai Li , Fei Hao","doi":"10.1016/j.ijar.2025.109472","DOIUrl":"10.1016/j.ijar.2025.109472","url":null,"abstract":"<div><div>Conflict is ubiquitous in life. Conflict analysis is a tool for understanding conflicts, whose aim is to analyze the conflict situations in data to help decision makers avoid risks. Existing conflict analysis methods mainly focus on single-source data. However, the emergence of big data era has generated more complex data, such as multi-source data obtained from different perspectives, which can capture details that single-source data is missing. Not only that, most data also exhibit characteristics of fuzziness. The above situations make it more challenging to construct a conflict analysis model in the environment of multi-source fuzzy data to acquire a compliant decision. Therefore, conflict analysis for multi-source fuzzy data is a worthy research topic. However, the existing few studies on multi-source fuzzy data either favor attribute values or ignore conflict resolution, which reduces the conflict resolution performance due to underutilizing attribute information. To solve the above problem, we divide the attribute values of multi-source fuzzy data into three attitude intervals to distinguish different attitudes of agents. Then, we propose a function to measure conflict and construct a conflict analysis model for a multi-source fuzzy formal context. Additionally, we put forward an information fusion method based on the minimum of fuzzy entropy, whose purpose is to achieve conflict resolution quickly. Finally, experiments conducted on 18 datasets demonstrate that our information fusion method can achieve conflict resolution effectively, and provide a useful reference for decision-makers.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"185 ","pages":"Article 109472"},"PeriodicalIF":3.2,"publicationDate":"2025-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144083735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Granular-ball fuzzy information-based outlier detector","authors":"Qilin Li , Zhong Yuan , Dezhong Peng , Xiaomin Song , Huiming Zheng , Xinyu Su","doi":"10.1016/j.ijar.2025.109473","DOIUrl":"10.1016/j.ijar.2025.109473","url":null,"abstract":"<div><div>Outlier detection is an important part of the process of carrying out data mining and analysis and has been applied to many fields. Existing methods are typically anchored in a single-sample processing paradigm, where the processing unit is each individual and single-granularity sample. This processing paradigm is inefficient and ignores the multi-granularity features inherent in data. In addition, these methods often overlook the uncertainty information present in the data. To remedy the above-mentioned shortcomings, we propose an unsupervised outlier detection method based on Granular-Ball Fuzzy Granules (GBFG). GBFG adopts a granular-ball-based computing paradigm, where the fundamental processing units are granular-balls. This shift from individual samples to granular-balls enables GBFG to capture the overall data structure from a multi-granularity perspective and improve the performance of outlier detection. Subsequently, we calculate the outlier factor based on the outlier degrees of the granular-ball fuzzy granules to which the sample belongs, serving as a measure of the outlier degrees of samples. The experimental results prove that GBFG has a remarkable performance compared with the existing excellent algorithms. The code of GBFG is publicly available on <span><span>https://github.com/Mxeron/GBFG</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"185 ","pages":"Article 109473"},"PeriodicalIF":3.2,"publicationDate":"2025-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144089914","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Latent Gaussian and Hüsler–Reiss graphical models with Golazo penalty","authors":"Ignacio Echave-Sustaeta Rodríguez, Frank Röttger","doi":"10.1016/j.ijar.2025.109468","DOIUrl":"10.1016/j.ijar.2025.109468","url":null,"abstract":"<div><div>The existence of latent variables in practical problems is common, for example when some variables are difficult or expensive to measure, or simply unknown. When latent variables are unaccounted for, structure learning for Gaussian graphical models can be blurred by additional correlation between the observed variables that is incurred by the latent variables. A standard approach for this problem is a latent version of the graphical lasso that splits the inverse covariance matrix into a sparse and a low-rank part that are penalized separately. This approach has recently been extended successfully to Hüsler–Reiss graphical models, which can be considered as an analogue of Gaussian graphical models in extreme value statistics. In this paper we propose a generalization of structure learning for Gaussian and Hüsler–Reiss graphical models via the flexible Golazo penalty. This allows us to introduce latent versions of for example the adaptive lasso, positive dependence constraints or predetermined sparsity patterns, and combinations of those. We develop algorithms for both latent graphical models with the Golazo penalty and demonstrate it on simulated and real data.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"185 ","pages":"Article 109468"},"PeriodicalIF":3.2,"publicationDate":"2025-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144089912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}