{"title":"Knowledge granularity reduction for fuzzy relation decision systems","authors":"Qing Wang , Xiuwei Gao , Guilong Liu","doi":"10.1016/j.ijar.2025.109494","DOIUrl":"10.1016/j.ijar.2025.109494","url":null,"abstract":"<div><div>Knowledge granularity measures how effectively an equivalence relation can classify data within a knowledge base. It can also be used to describe the relationship between condition and decision attributes in decision tables. Knowledge granularity reduction is a significant type of attribute reduction for decision tables and its reduction algorithm for identifying all reducts was obtained in the past. In this paper, we extend such a reduction to relation decision systems and fuzzy relation decision systems, and obtain the corresponding reduction algorithm to identify all reducts. This algorithm unifies knowledge granularity reduction for decision tables, relation decision systems, and fuzzy relation decision systems. Finally, we use UCI and KRBM datasets to check the practicality and efficacy of our proposed algorithms.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"186 ","pages":"Article 109494"},"PeriodicalIF":3.2,"publicationDate":"2025-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144221953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A hybrid machine learning and three-way soft clustering integrated decision-making method with incomplete multi-source heterogeneous attribute information","authors":"Xixuan Zhao , Bingzhen Sun , Xiaodong Chu , Jin Ye , Xiaoli Chu","doi":"10.1016/j.ijar.2025.109491","DOIUrl":"10.1016/j.ijar.2025.109491","url":null,"abstract":"<div><div>Decision-making under uncertainty in the era of big data and technological innovation can often lead to more objective and scientific results. However, data characterized by multi-source heterogeneity, nonlinearity, imbalance, and incompleteness pose a substantial challenge to traditional decision-making theories and methods. In view of this, this paper defines the concept of an incomplete multi-source heterogeneous attribute information system (IMHAS), introduces an attribute reduction method on IMHAS that integrates rough sets and machine learning, and combines three-way soft clustering and hybrid machine learning models. A novel theoretical framework is proposed to address uncertain decision-making problems involving data characterized by multi-source heterogeneity, nonlinearity, imbalance, and incompleteness is proposed. First, IMHAS is established and its attribute reduction method is defined using rough set, neighborhood rough set, bag-of-words, and random forest techniques. Second, to further resolve the correlation between objects in IMHAS, a three-way soft clustering method is introduced. Finally, to target decision-making for different object categories, two types of machine learning models are constructed for handling discrete and conditional decision attributes. The scientific superiority of the proposed method was verified using four distinct datasets from medical and healthcare domains. The results show that the proposed method outperforms all comparative methods, and it can effectively support uncertain decision-making in four different cases. In conclusion, this paper proposes a general theoretical framework for uncertain decision-making based on artificial intelligence techniques for incomplete multi-source heterogeneous data, thus offering realistic guidance for clinical applications involving such data.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"186 ","pages":"Article 109491"},"PeriodicalIF":3.2,"publicationDate":"2025-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144230546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bayesianize fuzziness in the statistical analysis of fuzzy data","authors":"Antonio Calcagnì , Przemysław Grzegorzewski , Maciej Romaniuk","doi":"10.1016/j.ijar.2025.109495","DOIUrl":"10.1016/j.ijar.2025.109495","url":null,"abstract":"<div><div>Fuzzy data, prevalent in social sciences and other fields, capture uncertainties arising from subjective evaluations and measurement imprecision. Despite significant advancements in fuzzy statistics, a unified inferential regression-based framework remains undeveloped. Hence, we propose a novel approach for analyzing bounded fuzzy variables within a regression framework. Building on the premise that fuzzy data result from a process analogous to statistical coarsening, we introduce a conditional probabilistic approach that links observed fuzzy statistics (e.g., mode, spread) to the underlying, unobserved statistical model, which depends on external covariates. The inferential problem is addressed using Approximate Bayesian methods, mainly through a Gibbs sampler incorporating a quadratic approximation of the posterior distribution. Simulation studies and applications involving external validations are employed to evaluate the effectiveness of the proposed approach for fuzzy data analysis. By reintegrating fuzzy data analysis into a more traditional statistical framework, this work provides a significant step toward enhancing the interpretability and applicability of fuzzy statistical methods in many applicative contexts.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"186 ","pages":"Article 109495"},"PeriodicalIF":3.2,"publicationDate":"2025-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144230545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Three-way conflict analysis model via the best-worst method: Balancing subjective preferences and objective data on incomplete and dispersed systems","authors":"Junjie Zhu , Qinghua Zhang , Nanfang Luo , Fan Liu , Longjun Yin","doi":"10.1016/j.ijar.2025.109490","DOIUrl":"10.1016/j.ijar.2025.109490","url":null,"abstract":"<div><div>In real-world environments, different issues contribute to conflicts with varying weights. However, current conflict analysis weighting models face significant limitations when dealing with incomplete data and dispersed knowledge. Objective methods are sensitive to missing values and struggle to accurately capture authentic preferences, while subjective approaches lack systematic evaluation criteria, leading to substantial randomness in weight assignments. Therefore, a three-way conflict analysis model via the best-worst method is proposed, which is combined with a correlation coefficient method to balance subjective preferences and objective data. First, the trisection of agent pairs is derived through Bayesian minimum risk. Subsequently, a new conflict distance function is defined on the incomplete information system to provide a more precise measurement of conflict degrees. Then, for incomplete and dispersed information systems, a maximal coalition-based agent partitioning algorithm is designed, along with a new weighted voting mechanism to aggregate dispersed knowledge. Finally, the scientific transparency of the weighting process, as well as the robustness and feasibility of the model, are demonstrated through experimental analysis.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"186 ","pages":"Article 109490"},"PeriodicalIF":3.2,"publicationDate":"2025-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144221952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Natural revision is contingently-conditionalized revision","authors":"Paolo Liberatore","doi":"10.1016/j.ijar.2025.109489","DOIUrl":"10.1016/j.ijar.2025.109489","url":null,"abstract":"<div><div>Natural revision seems so natural: it changes beliefs as little as possible to incorporate new information. Yet, some counterexamples show it wrong. It is so conservative that it never fully believes. It only believes in the current conditions. This is right in some cases and wrong in others. Which is which? The answer requires extending natural revision from simple formulae expressing universal truths (something holds) to conditionals expressing conditional truth (something holds in certain conditions). The extension is based on the basic principles natural revision follows, identified as minimal change and naivety: change mind as little as possible; believe what not contradicted. The extension says that natural revision restricts changes to the current conditions. A comparison with an unrestricting revision shows what exactly the current conditions are. It is not what currently considered true if it contradicts the new information. It includes something more and more unlikely until the new information is at least possible.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"186 ","pages":"Article 109489"},"PeriodicalIF":3.2,"publicationDate":"2025-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144230544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Predicting graphical models based on functional data","authors":"Qiying Wu , Huiwen Wang","doi":"10.1016/j.ijar.2025.109493","DOIUrl":"10.1016/j.ijar.2025.109493","url":null,"abstract":"<div><div>Graphical models are widely used to model complex relationships between variables in various fields. However, existing analysis methods focus primarily on scalar data and give little attention to addressing the challenges posed by nonscalar data, e.g., functional data, which are prevalent in many real-world applications. Additionally, most methods assume a static graphical model within the observed period, neglecting the dynamic changes that may occur over time. In this paper, we propose a novel method for predicting graphical models based on functional data. Our approach transforms functional data into finite-dimensional vectors via basis function expansion and cross-validation. We then establish a sequential prediction model for determining the correlation coefficient matrix of the decomposed data, thus accounting for the constraints imposed on the matrix via a transformation technique. Finally, we employ conditional independence tests to identify the edges of the predicted graphical model. We demonstrate the effectiveness of our method through extensive simulations and real data analyses. The results show that our method performs better than the competing methods in terms of prediction accuracy and provides valuable insights into the dynamic changes exhibited by a network. This work opens new possibilities for conducting graphical model analyses in various domains, particularly in terms of handling functional data and predicting dynamic relationships.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"186 ","pages":"Article 109493"},"PeriodicalIF":3.2,"publicationDate":"2025-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144212380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The language of Contextual Attribute Logics – Introduction and survey","authors":"Bernhard Ganter","doi":"10.1016/j.ijar.2025.109487","DOIUrl":"10.1016/j.ijar.2025.109487","url":null,"abstract":"<div><div>This article provides an introductory overview of Contextual Attribute Logic(s), a branch of Contextual Logic founded by Rudolf Wille. It presents a logic language that is based on mathematical logic but uses a different terminology to better serve the interpretative goal of Formal Concept Analysis.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"186 ","pages":"Article 109487"},"PeriodicalIF":3.2,"publicationDate":"2025-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144204717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Uncertainty quantification in ordinal classification: A comparison of measures","authors":"Stefan Haas , Eyke Hüllermeier","doi":"10.1016/j.ijar.2025.109479","DOIUrl":"10.1016/j.ijar.2025.109479","url":null,"abstract":"<div><div>Uncertainty quantification has received increasing attention in machine learning in the recent past, but the focus has mostly been on standard (nominal) classification and regression so far. In this paper, we address the question of how to quantify uncertainty in ordinal classification, where class labels have a natural (linear) order. We reckon that commonly used uncertainty measures such as Shannon entropy, confidence, or margin are not appropriate for the ordinal case. In our search for better measures, we draw inspiration from the social sciences literature, which offers various measures to assess so-called consensus or agreement in ordinal data. We argue that these measures, or, more specifically, the dual measures of dispersion or polarization, do have properties that qualify them as measures of uncertainty. Furthermore, inspired by binary decomposition techniques for multi-class classification in machine learning, we propose a new method that allows for turning any uncertainty measure into an ordinal uncertainty measure in a generic way. We evaluate all measures in an empirical study on twenty-three ordinal benchmark datasets, as well as in a real-world case study on automotive goodwill claim assessment. Our studies confirm that dispersion measures and our binary decomposition method surpass conventional (nominal) uncertainty measures.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"186 ","pages":"Article 109479"},"PeriodicalIF":3.2,"publicationDate":"2025-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144168827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Algorithms for computing the set of acceptable arguments","authors":"Lars Bengel , Matthias Thimm , Federico Cerutti , Mauro Vallati","doi":"10.1016/j.ijar.2025.109478","DOIUrl":"10.1016/j.ijar.2025.109478","url":null,"abstract":"<div><div>We investigate the computational problem of determining the set of acceptable arguments in abstract argumentation wrt. credulous and skeptical reasoning under grounded, complete, stable, and preferred semantics. In particular, we investigate the computational complexity of that problem and its verification variant, and develop several algorithms for all problem variants, including two baseline approaches based on iterative acceptability queries and extension enumeration, and some optimised versions. We experimentally compare the runtime performance of these algorithms: our results show that our newly optimised algorithms significantly outperform the baseline algorithms in most cases.</div></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"185 ","pages":"Article 109478"},"PeriodicalIF":3.2,"publicationDate":"2025-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144154440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Corrigendum to “Estimating bounds on causal effects in high-dimensional and possibly confounded systems” [Int. J. Approx. Reason. 88 (2017) 371–384]","authors":"Daniel Malinsky , Peter Spirtes","doi":"10.1016/j.ijar.2025.109475","DOIUrl":"10.1016/j.ijar.2025.109475","url":null,"abstract":"","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"185 ","pages":"Article 109475"},"PeriodicalIF":3.2,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144116699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}