{"title":"Dynamic collective argumentation: Constructing the revision and contraction operators","authors":"Weiwei Chen, Shier Ju","doi":"10.1016/j.ijar.2024.109234","DOIUrl":"https://doi.org/10.1016/j.ijar.2024.109234","url":null,"abstract":"<div><p>Collective argumentation has always focused on obtaining rational collective argumentative decisions. One approach that has been extensively studied in the literature is the aggregation of individual extensions of an argumentation framework. However, previous studies have only examined aggregation processes in static terms, focusing on preserving semantic properties at a given time. In contrast, this paper investigates whether decisions remain rational when the preservation process is dynamic, meaning that it can incorporate new information. To address the dynamic nature of collective argumentation, we introduce the revision and contraction operators. These operators reflect the idea that when an individual or a group learns something new by accepting or rejecting an argument, they have to update their collective decision accordingly. Our study examines whether the order of revising individual opinions and aggregating them affects the final outcome, i.e., whether aggregation and revision commute.</p></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"172 ","pages":"Article 109234"},"PeriodicalIF":3.9,"publicationDate":"2024-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141308610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Incremental reduction of imbalanced distributed mixed data based on k-nearest neighbor rough set","authors":"Weihua Xu, Changchun Liu","doi":"10.1016/j.ijar.2024.109218","DOIUrl":"https://doi.org/10.1016/j.ijar.2024.109218","url":null,"abstract":"<div><p>Incremental feature selection methods have garnered significant research attention in improving the efficiency of feature selection for dynamic datasets. However, there is currently a dearth of research on incremental feature selection methods specifically targeted for unbalanced mixed-type data. Furthermore, the widely used neighborhood rough set algorithm exhibits low classification efficiency for imbalanced data distribution and performs poorly in classifying mixed samples. Motivated by these two challenges, we investigate the use of an incremental feature reduction algorithm based on <em>k-</em>nearest neighbors and mutual information in this study. Firstly, we enhance the capabilities of the neighborhood rough set model by incorporating the concept of <em>k-</em>nearest neighbors, thereby improving its ability to handle samples with varying densities. Subsequently, we apply information entropy theory and combine neighborhood mutual information with the maximum relevance minimum redundancy criterion to construct a novel feature importance evaluation function. This function is utilized as the evaluation metric for feature selection. Finally, an incremental feature selection algorithm is designed based on the above static algorithm. Experiments were conducted on twelve public datasets to evaluate the robustness of the proposed feature metrics and the performance of the incremental feature selection algorithm. The experimental results validated the robustness of the proposed metrics and demonstrated that our incremental algorithm is effective and efficient in feature reduction for updating unbalanced mixed data.</p></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"172 ","pages":"Article 109218"},"PeriodicalIF":3.9,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141291726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Kernel multi-granularity double-quantitative rough set based on ensemble empirical mode decomposition: Application to stock price trends prediction","authors":"Lin Zhang , Juncheng Bai , Bingzhen Sun , Yuqi Guo , Xiangtang Chen","doi":"10.1016/j.ijar.2024.109217","DOIUrl":"10.1016/j.ijar.2024.109217","url":null,"abstract":"<div><p>As financial markets grow increasingly complex and dynamic, accurately predicting stock price trends becomes crucial for investors and financial analysts. Effectively identifying and selecting the most predictive attributes has become a challenge in stock trends prediction. To address this problem, this study proposes a new attribute reduction model. A rough set theory model is built by simplifying the prediction process and combining it with the long short-term memory network (LSTM) to enhance the accuracy of stock trends prediction. Firstly, the Ensemble Empirical Mode Decomposition (EEMD) is utilized to decompose the stock price data into a multi-granularity information system. Secondly, due to the numerical characteristics of stock data, a kernel function is applied to construct binary relationships. Thirdly, recognizing the noise inherent in stock data, the double-quantitative rough set theory is utilized to improve fault tolerance during the construction of decision attributes' lower and upper approximations. Moreover, calculate the correlation between conditional and decision attributes, and retain highly correlated conditional attributes for prediction. The kernel multi-granularity double-quantitative rough set based on the EEMD (EEMD-KMGDQRS) model proposed identifies the key factors behind stock data. Finally, the efficacy of the proposed model is validated by selecting 356 stocks from diverse industries in the Shanghai and Shenzhen stock markets as experimental samples. The results show that the proposed model improves the generalization of attribute reduction results through a fault tolerance mechanism by combining kernel function with multi-granularity double-quantitative rough set, thereby enhancing the accuracy of stock trends prediction in subsequent LSTM prediction processes.</p></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"172 ","pages":"Article 109217"},"PeriodicalIF":3.9,"publicationDate":"2024-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141131751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ClusterLP: A novel Cluster-aware Link Prediction model in undirected and directed graphs","authors":"Shanfan Zhang , Wenjiao Zhang , Zhan Bu , Xia Zhang","doi":"10.1016/j.ijar.2024.109216","DOIUrl":"10.1016/j.ijar.2024.109216","url":null,"abstract":"<div><p>Link prediction models endeavor to understand the distribution of links within graphs and forecast the presence of potential links. With the advancements in deep learning, prevailing methods typically strive to acquire low-dimensional representations of nodes in networks, aiming to capture and retain the structure and inherent characteristics of networks. However, the majority of these methods primarily focus on preserving the microscopic structure, such as the first- and second-order proximities of nodes, while largely disregarding the mesoscopic cluster structure, which stands out as one of the network's most prominent features. Following the homophily principle, nodes within the same cluster exhibit greater similarity to each other compared to those from different clusters, suggesting that they should possess analogous vertex representations and higher probabilities of linkage. In this study, we develop a straightforward yet efficient <strong><em>Cluster</em></strong>-aware <strong><em>L</em></strong>ink <strong><em>P</em></strong>rediction framework (<em>ClusterLP</em>), with the objective of directly leveraging cluster structures to predict links among nodes with maximum accuracy in both undirected and directed graphs. Specifically, we posit that establishing links between nodes with similar representation vectors and cluster tendencies is more feasible in undirected graphs, whereas nodes in directed graphs are inclined to point towards nodes with akin representation vectors and greater influence. We tailor the implementation of <em>ClusterLP</em> for undirected and directed graphs, respectively, and experimental findings using multiple real-world networks demonstrate the high competitiveness of our models in the realm of link prediction tasks. The code utilized in our implementation is accessible at <span>https://github.com/ZINUX1998/ClusterLP</span><svg><path></path></svg>.</p></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"172 ","pages":"Article 109216"},"PeriodicalIF":3.9,"publicationDate":"2024-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141136446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Testing the fit of data and external sets via an imprecise Sargan-Hansen test","authors":"Martin Jann","doi":"10.1016/j.ijar.2024.109214","DOIUrl":"https://doi.org/10.1016/j.ijar.2024.109214","url":null,"abstract":"<div><p>In empirical sciences such as psychology, the term cumulative science mostly refers to the integration of theories, while external (prior) information may also be used in statistical inference. This external information can be in the form of statistical moments and is subject to various types of uncertainty, e.g., because it is estimated, or because of qualitative uncertainty due to differences in study design or sampling. Before using it in statistical inference, it is therefore important to test whether the external information fits a new data set, taking into account its uncertainties. As a frequentist approach, the Sargan-Hansen test from the generalized method of moments framework is used in this paper. It tests, given a statistical model, whether data and point-wise external information are in conflict. A separability result is given that simplifies the Sargan-Hansen test statistic in most cases. The Sargan-Hansen test is then extended to the imprecise scenario with (estimated) external sets using stochastically ordered credal sets. Furthermore, an exact small sample version is derived for normally distributed variables. As a Bayesian approach, two prior-data conflict criteria are discussed as a test for the fit of external information to the data. Two simulation studies are performed to test and compare the power and type I error of the methods discussed. Different small sample scenarios are implemented, varying the moments used, the level of significance, and other aspects. The results show that both the Sargan-Hansen test and the Bayesian criteria control type I errors while having sufficient or even good power. To facilitate the use of the methods by applied scientists, easy-to-use R functions are provided in the R script in the supplementary materials.</p></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"170 ","pages":"Article 109214"},"PeriodicalIF":3.9,"publicationDate":"2024-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0888613X24001014/pdfft?md5=0b8a11d3dd0d2c29383d6a48f52f8f27&pid=1-s2.0-S0888613X24001014-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141090147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An attribute ranking method based on rough sets and interval-valued fuzzy sets","authors":"Bich Khue Vo , Hung Son Nguyen","doi":"10.1016/j.ijar.2024.109215","DOIUrl":"https://doi.org/10.1016/j.ijar.2024.109215","url":null,"abstract":"<div><p>Feature importance is a complex issue in machine learning, as determining a superior attribute is vague, uncertain, and dependent on the model. This study introduces a rough-fuzzy hybrid (RAFAR) method that merges various techniques from rough set theory and fuzzy set theory to tackle uncertainty in attribute importance and ranking. RAFAR utilizes an interval-valued fuzzy matrix to depict preference between attribute pairs. This research focuses on constructing these matrices from datasets and identifying suitable rankings based on these matrices. The concept of interval-valued weight vectors is introduced to represent attribute importance, and their additive and multiplicative compatibility is examined. The properties of these consistency types and the efficient algorithms for solving related problems are discussed. These new theoretical findings are valuable for creating effective optimization models and algorithms within the RAFAR framework. Additionally, novel approaches for constructing pairwise comparison matrices and enhancing the scalability of RAFAR are suggested. The study also includes experimental results on benchmark datasets to demonstrate the accuracy of the proposed solutions.</p></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"170 ","pages":"Article 109215"},"PeriodicalIF":3.9,"publicationDate":"2024-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141083329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A possible worlds semantics for trustworthy non-deterministic computations","authors":"Ekaterina Kubyshkina, Giuseppe Primiero","doi":"10.1016/j.ijar.2024.109212","DOIUrl":"10.1016/j.ijar.2024.109212","url":null,"abstract":"<div><p>The notion of trustworthiness, central to many fields of human inquiry, has recently attracted the attention of various researchers in logic, computer science, and artificial intelligence (AI). Both conceptual and formal approaches for modeling trustworthiness as a (desirable) property of AI systems are emerging in the literature. To develop logics fit for this aim means to analyze both the non-deterministic aspect of AI systems and to offer a formalization of the intended meaning of their trustworthiness. In this work we take a semantic perspective on representing such processes, and provide a measure on possible worlds for evaluating them as trustworthy. In particular, we intend trustworthiness as the correspondence within acceptable limits between a model in which the theoretical probability of a process to produce a given output is expressed and a model in which the frequency of showing such output as established during a relevant number of tests is measured. From a technical perspective, we show that our semantics characterizes the probabilistic typed natural deduction calculus introduced in D'Asaro and Primiero (2021)<span>[12]</span> and further extended in D'Asaro et al. (2023) <span>[13]</span>. This contribution connects those results on trustworthy probabilistic processes with the mainstream method in modal logic, thereby facilitating the understanding of this field of research for a larger audience of logicians, as well as setting the stage for an epistemic logic appropriate to the task.</p></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"172 ","pages":"Article 109212"},"PeriodicalIF":3.9,"publicationDate":"2024-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0888613X24000999/pdfft?md5=7a0c991c70c70e79ac2349285a1a28c0&pid=1-s2.0-S0888613X24000999-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141143080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Imprecision in martingale- and test-theoretic prequential randomness","authors":"Floris Persiau, Gert de Cooman","doi":"10.1016/j.ijar.2024.109213","DOIUrl":"10.1016/j.ijar.2024.109213","url":null,"abstract":"<div><p>In a prequential approach to algorithmic randomness, probabilities for the next outcome can be forecast ‘on the fly’ without the need for fully specifying a probability measure on all possible sequences of outcomes, as is the case in the more standard approach. We take the first steps in allowing for probability intervals instead of precise probabilities on this prequential approach, based on ideas borrowed from our earlier imprecise-probabilistic, standard account of algorithmic randomness. We define what it means for an infinite sequence <span><math><mo>(</mo><msub><mrow><mi>I</mi></mrow><mrow><mn>1</mn></mrow></msub><mo>,</mo><msub><mrow><mi>x</mi></mrow><mrow><mn>1</mn></mrow></msub><mo>,</mo><msub><mrow><mi>I</mi></mrow><mrow><mn>2</mn></mrow></msub><mo>,</mo><msub><mrow><mi>x</mi></mrow><mrow><mn>2</mn></mrow></msub><mo>,</mo><mo>…</mo><mo>)</mo></math></span> of successive interval forecasts <span><math><msub><mrow><mi>I</mi></mrow><mrow><mi>k</mi></mrow></msub></math></span> and subsequent binary outcomes <span><math><msub><mrow><mi>x</mi></mrow><mrow><mi>k</mi></mrow></msub></math></span> to be random, both in a martingale-theoretic and a test-theoretic sense. We prove that these two versions of prequential randomness coincide, we compare the resulting prequential randomness notions with the more standard ones, and we investigate where the prequential and standard randomness notions coincide.</p></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"170 ","pages":"Article 109213"},"PeriodicalIF":3.9,"publicationDate":"2024-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141023719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Distribution-free Inferential Models: Achieving finite-sample valid probabilistic inference, with emphasis on quantile regression","authors":"Leonardo Cella","doi":"10.1016/j.ijar.2024.109211","DOIUrl":"https://doi.org/10.1016/j.ijar.2024.109211","url":null,"abstract":"<div><p>This paper presents a novel distribution-free Inferential Model (IM) construction that provides valid probabilistic inference across a broad spectrum of distribution-free problems, even in finite sample settings. More specifically, the proposed IM has the capability to assign (imprecise) probabilities to assertions of interest about any feature of the unknown quantities under examination, and these probabilities are well-calibrated in a frequentist sense. It is also shown that finite-sample confidence regions can be derived from the IM for any such features. Particular emphasis is placed on quantile regression, a domain where uncertainty quantification often takes the form of set estimates for the regression coefficients in applications. Within this context, the IM facilitates the acquisition of these set estimates, ensuring they are finite-sample confidence regions. It also enables the provision of finite-sample valid probabilistic assignments for any assertions of interest about the regression coefficients. As a result, regardless of the type of uncertainty quantification desired, the proposed framework offers an appealing solution to quantile regression.</p></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"170 ","pages":"Article 109211"},"PeriodicalIF":3.9,"publicationDate":"2024-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140948806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Attribute reduction for heterogeneous data based on monotonic relative neighborhood granularity","authors":"Jianhua Dai , Zhilin Zhu , Min Li , Xiongtao Zou , Chucai Zhang","doi":"10.1016/j.ijar.2024.109210","DOIUrl":"https://doi.org/10.1016/j.ijar.2024.109210","url":null,"abstract":"<div><p>The neighborhood rough set model serves as an important tool for handling attribute reduction tasks involving heterogeneous attributes. However, measuring the relationship between conditional attributes and decision in the neighborhood rough set model is a crucial issue. Most studies have utilized neighborhood information entropy to measure the relationship between attributes. When using neighborhood conditional information entropy to measure the relationships between the decision and conditional attributes, it lacks monotonicity, consequently affecting the rationality of the final attribute reduction subset. In this paper, we introduce the concept of neighborhood granularity and propose a new form of relative neighborhood granularity to measure the relationship between the decision and conditional attributes, which exhibits monotonicity. Moreover, our approach for measuring neighborhood granularity avoids the logarithmic function computation involved in neighborhood information entropy. Finally, we conduct comparative experiments on 12 datasets using two classifiers to compare the results of attribute reduction with six other attribute reduction algorithms. The comparison demonstrates the advantages of our measurement approach.</p></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"170 ","pages":"Article 109210"},"PeriodicalIF":3.9,"publicationDate":"2024-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140924463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}