Title: Shortest-length and coarsest-granularity constructs vs. reducts: An experimental evaluation
Authors: Manuel S. Lazo-Cortés, Guillermo Sanchez-Diaz, Nelva N. Almanza Ortega
DOI: 10.1016/j.ijar.2024.109187
Published: International Journal of Approximate Reasoning, Vol. 170, Article 109187, 2024-04-03

Abstract: In rough set theory, super-reducts are subsets of attributes that possess the same discriminative power as the complete set of attributes for distinguishing objects across distinct classes in supervised classification problems. Within super-reducts, reducts hold particular significance: they are the subsets that are irreducible.

Constructs, by contrast, not only distinguish objects across different classes but also preserve certain shared characteristics among objects within the same class. In essence, constructs are a subtype of super-reducts that integrates both inter-class and intra-class information. Despite their potential, constructs have received considerably less attention than reducts.

Both reducts and constructs are used to reduce data dimensionality. This paper presents the key concepts related to constructs and reducts, providing insight into their roles. It also reports an experimental comparison between optimal reducts and optimal constructs under two criteria, shortest length and coarsest granularity, evaluating their performance with classical classifiers.

The outcomes from seven classifiers on sixteen datasets lead us to conclude that both coarsest-granularity reducts and coarsest-granularity constructs are effective choices for dimensionality reduction in supervised classification problems. Notably, under the shortest-length optimality criterion, constructs are clearly superior to reducts. Moreover, a comparative analysis between coarsest-granularity constructs and a technique from outside rough set theory, correlation-based feature selection, showed the former to be statistically superior, providing further evidence of its efficacy.
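The inter-class separation that reducts must preserve can be made concrete with a small sketch. The following is a minimal brute-force illustration, not the authors' algorithm: it enumerates attribute subsets of a toy decision table and keeps the minimal ones that distinguish every pair of objects from different classes that the full attribute set distinguishes. Constructs would additionally require preserving intra-class similarities, which is omitted here; the table and helper names are invented for illustration.

```python
from itertools import combinations

def discerns(table, labels, attrs):
    """True iff `attrs` separates every pair of objects with different
    labels that the full attribute set also separates."""
    n = len(table)
    for i in range(n):
        for j in range(i + 1, n):
            if labels[i] == labels[j]:
                continue
            if table[i] != table[j]:  # full set separates this pair
                if all(table[i][a] == table[j][a] for a in attrs):
                    return False
    return True

def reducts(table, labels):
    """All minimal attribute subsets with full discriminative power,
    found by enumerating subsets in order of increasing size."""
    m = len(table[0])
    found = []
    for k in range(1, m + 1):
        for attrs in combinations(range(m), k):
            # a strict superset of a known reduct is not irreducible
            if any(set(r) <= set(attrs) for r in found):
                continue
            if discerns(table, labels, attrs):
                found.append(attrs)
    return found
```

On the toy table below, attributes 0 and 2 each separate the two classes on their own, so each is a reduct, while attribute 1 does not.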
Title: Few-shot learning based on hierarchical feature fusion via relation networks
Authors: Xiao Jia, Yingchi Mao, Zhenxiang Pan, Zicheng Wang, Ping Ping
DOI: 10.1016/j.ijar.2024.109186
Published: International Journal of Approximate Reasoning, Vol. 170, Article 109186, 2024-03-29

Abstract: Few-shot learning, which aims to identify new classes from few samples, is an increasingly popular and crucial research topic in machine learning. The development of deep learning has deepened the network structures of few-shot models, yielding deeper features from the samples, and a growing number of few-shot models consequently pursue ever more complex structures and deeper features. However, discarding shallow features while blindly pursuing feature depth is not reasonable: features at different levels of a sample carry different information and characteristics. In this paper, we propose a few-shot image classification model based on deep and shallow feature fusion and a coarse-grained relation score network (HFFCR). First, we use networks of different depths as feature extractors and fuse the two kinds of sample features, so that the fused features collect sample information from different levels. Second, we condense the fused features into a coarse-grained prototype point, which better represents the information of its class and improves classification efficiency. Finally, we construct a relation score network: a prototype point and a query sample are concatenated into a feature map that is fed to the network to compute a relation score, a learnable classification criterion reflecting the information difference between the two samples. Experiments on three datasets show that HFFCR achieves strong performance.
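As a rough sketch of the prototype-and-scoring idea (not the HFFCR architecture itself, which uses trained extractors of different depths and a learned relation network), the fragment below condenses each support class into a mean prototype and scores queries with a fixed similarity standing in for the learned relation score; all names and data are illustrative.

```python
def prototype(features):
    """Coarse-grained prototype: element-wise mean of the support
    features of one class (features would be fused deep+shallow
    vectors in the full model)."""
    n = len(features)
    return [sum(f[d] for f in features) / n for d in range(len(features[0]))]

def relation_score(proto, query):
    """Stand-in for the learned relation network: negative squared
    Euclidean distance between prototype and query."""
    return -sum((p - q) ** 2 for p, q in zip(proto, query))

def classify(support, query):
    """support: {class: [feature vectors]}; returns the class whose
    prototype has the highest relation score with the query."""
    protos = {c: prototype(fs) for c, fs in support.items()}
    return max(protos, key=lambda c: relation_score(protos[c], query))
```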
{"title":"A statistical approach to learning constraints","authors":"Steven Prestwich, Nic Wilson","doi":"10.1016/j.ijar.2024.109184","DOIUrl":"10.1016/j.ijar.2024.109184","url":null,"abstract":"<div><p>A constraint-based model represents knowledge about a domain by a set of constraints, which must be satisfied by solutions in that domain. These models may be used for reasoning, decision making and optimisation. Unfortunately, modelling itself is a hard and error-prone task that requires expertise. The automation of this process is often referred to as <em>constraint acquisition</em> and has been pursued for over 20 years. Methods typically learn constraints by testing candidates against a dataset of solutions and non-solutions, and often use some form of machine learning to decide which should be learned. However, few methods are robust under errors in the data, some cannot handle large sets of candidates, and most are computationally expensive even for small problems. We describe a statistical approach based on sequential analysis that is robust, fast and scalable to large biases. 
Its correctness depends on an assumption that does not always hold but which is, we show using Bayesian analysis, reasonable in practice.</p></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"171 ","pages":"Article 109184"},"PeriodicalIF":3.9,"publicationDate":"2024-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0888613X24000719/pdfft?md5=7463d0a55072aa62d2359ac14f325d31&pid=1-s2.0-S0888613X24000719-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140407794","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
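The sequential flavour of the approach can be illustrated with Wald's classical sequential probability ratio test, deciding from a stream of examples whether a candidate constraint's violation rate is low enough to keep it. This is a generic SPRT sketch under assumed error rates, not the specific test developed in the paper.

```python
import math

def sprt(observations, p0=0.05, p1=0.3, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test on a stream of
    violation indicators (1 = example violates the candidate).
    Returns 'accept' (violation rate ~ p0: keep the constraint),
    'reject' (rate ~ p1: drop it), or 'undecided' if the stream
    ends before either threshold is crossed."""
    upper = math.log((1 - beta) / alpha)   # crossing => reject
    lower = math.log(beta / (1 - alpha))   # crossing => accept
    llr = 0.0
    for x in observations:
        if x:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "reject"
        if llr <= lower:
            return "accept"
    return "undecided"
```

The appeal of the sequential scheme is that clear-cut candidates are resolved after only a handful of examples, which is what makes large candidate sets (biases) tractable.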
Title: Explaining answers generated by knowledge graph embeddings
Authors: Andrey Ruschel, Arthur Colombini Gusmão, Fabio Gagliardi Cozman
DOI: 10.1016/j.ijar.2024.109183
Published: International Journal of Approximate Reasoning, Vol. 171, Article 109183, 2024-03-28

Abstract: Completion of large-scale knowledge bases, such as DBPedia or Freebase, often relies on embedding models that turn symbolic relations into vector-based representations. Such embedding models are rather opaque to the human user. Research in interpretability has emphasized non-relational classifiers, such as deep neural networks, and has devoted less effort to opaque models extracted from relational structures, such as knowledge graph embeddings. We introduce techniques that produce explanations, expressed as logical rules, for predictions based on the embeddings of knowledge graphs. Algorithms build explanations out of paths in an input knowledge graph, searched through contextual and heuristic cues.
Title: Feature selection for multi-label learning based on variable-degree multi-granulation decision-theoretic rough sets
Authors: Ying Yu, Ming Wan, Jin Qian, Duoqian Miao, Zhiqiang Zhang, Pengfei Zhao
DOI: 10.1016/j.ijar.2024.109181
Published: International Journal of Approximate Reasoning, Vol. 169, Article 109181, 2024-03-27

Abstract: Multi-label learning (MLL) suffers from high-dimensional feature spaces teeming with irrelevant and redundant features. To tackle this, several multi-label feature selection (MLFS) algorithms have emerged as vital preprocessing steps. Nonetheless, existing MLFS methods have shortcomings. First, while they excel at harnessing label-feature relationships, they often struggle to exploit inter-feature information effectively. Second, many MLFS approaches overlook the uncertainty in the boundary region, despite its critical role in identifying high-quality features. To address these issues, this paper introduces a novel MLFS algorithm named VMFS. It integrates multi-granulation rough sets with three-way decisions, leveraging variable-degree multi-granulation decision-theoretic rough sets (MGDRS) for optimal performance. First, we construct coarse (RDC), fine (RDF) and uncertain (RDU) decision functions for each object based on variable-degree MGDRS. These decision functions then quantify the dependence of attribute subsets, considering both deterministic and uncertain aspects. Finally, we use the dependency to assess attribute importance and rank attributes accordingly. Experimental results on a range of standard multi-label datasets consistently show that VMFS significantly outperforms other algorithms on most datasets, underscoring its effectiveness and reliability in multi-label learning tasks.
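The role of dependency in ranking attributes can be sketched with the classical single-granulation, crisp dependency degree; VMFS generalizes this with variable-degree multi-granulation and three-way decision functions, which are beyond this fragment. Data and names are illustrative.

```python
from collections import defaultdict

def dependency(table, labels, attrs):
    """Classical rough-set dependency degree: the fraction of objects
    whose indiscernibility class under `attrs` is label-pure."""
    blocks = defaultdict(list)
    for i, row in enumerate(table):
        blocks[tuple(row[a] for a in attrs)].append(i)
    pos = sum(len(b) for b in blocks.values()
              if len({labels[i] for i in b}) == 1)
    return pos / len(table)

def rank_attributes(table, labels):
    """Rank individual attributes by their dependency degree."""
    m = len(table[0])
    return sorted(range(m), key=lambda a: -dependency(table, labels, [a]))
```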
Title: Intensions and extensions of granules: A two-component treatment
Authors: Tamás Mihálydeák, Tamás Kádek, Dávid Nagy, Mihir K. Chakraborty
DOI: 10.1016/j.ijar.2024.109182
Published: International Journal of Approximate Reasoning, Vol. 169, Article 109182, 2024-03-25

Abstract: The concept of a granule (of knowledge) originated with Zadeh, for whom granules appeared as references to words (phrases) of a natural (or artificial) language. According to Zadeh's program, "a granule is a collection of objects drawn together by similarity or functionality and considered therefore as a whole". Pawlak's original theory of rough sets and its different generalizations share a common property: all such systems rely on given background knowledge represented by a system of base sets. Since the members of a base set have to be treated similarly, base sets can be considered granules. The background knowledge has a conceptual structure, and it contains information that does not appear at the level of base granules, so such information cannot be taken into account in approximations. This raises a new problem: is there any possibility of constructing a system that models the background knowledge better? A two-component treatment can be a solution. After giving a formal language of granules with the tools for approximation, a logical calculus containing approximation operators is introduced. A two-component semantics, treating intensions and extensions of granule expressions, is then defined, and the connection between the logical calculus and the two-component semantics is shown.
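The approximation operators that the granule language is built around can be stated compactly. The sketch below computes lower and upper approximations of a target set with respect to a system of base granules, which need not form a partition; this is only the classical set-level picture underlying the paper, not its two-component logical calculus.

```python
def approximations(base_sets, target):
    """Lower/upper approximation of `target` w.r.t. base granules.
    Lower: union of granules contained in the target.
    Upper: union of granules that meet the target."""
    t = set(target)
    lower = set().union(*(set(b) for b in base_sets if set(b) <= t))
    upper = set().union(*(set(b) for b in base_sets if set(b) & t))
    return lower, upper
```

Note that with overlapping granules (a covering rather than a partition), the lower approximation can exclude objects of the target that only occur in impure granules, exactly the kind of information loss the two-component treatment is meant to address.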
{"title":"Desirable gambles based on pairwise comparisons","authors":"Serafín Moral","doi":"10.1016/j.ijar.2024.109180","DOIUrl":"https://doi.org/10.1016/j.ijar.2024.109180","url":null,"abstract":"<div><p>This paper proposes a model for imprecise probability information based on bounds on probability ratios, instead of bounds on events. This model is studied in the language of coherent sets of desirable gambles, which provides an elegant mathematical formulation and a more expressive power. The paper provides methods to check avoiding sure loss and coherence, and to compute the natural extension. The relationships with other formalisms such as imprecise multiplicative preferences, the constant odd ratio model, or comparative probability are analyzed.</p></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"169 ","pages":"Article 109180"},"PeriodicalIF":3.9,"publicationDate":"2024-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0888613X24000677/pdfft?md5=1b16f3b4f516e5705b5d062c10b3a2be&pid=1-s2.0-S0888613X24000677-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140308615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Quantified neural Markov logic networks","authors":"Peter Jung , Giuseppe Marra , Ondřej Kuželka","doi":"10.1016/j.ijar.2024.109172","DOIUrl":"10.1016/j.ijar.2024.109172","url":null,"abstract":"<div><p>Markov Logic Networks (MLNs) are discrete generative models in the exponential family. However, specifying these rules requires considerable expertise and can pose a significant challenge. To overcome this limitation, Neural MLNs (NMLNs) have been introduced, enabling the specification of potential functions as neural networks. Thanks to the compact representation of their neural potential functions, NMLNs have shown impressive performance in modeling complex domains like molecular data. Despite the superior performance of NMLNs, their theoretical expressiveness is still equivalent to that of MLNs without quantifiers. In this paper, we propose a new class of NMLN, called Quantified NMLN, that extends the expressivity of NMLNs to the quantified setting. Furthermore, we demonstrate how to leverage the neural nature of NMLNs to employ learnable aggregation functions as quantifiers, increasing expressivity even further. We demonstrate the competitiveness of Quantified NMLNs over original NMLNs and state-of-the-art diffusion models in molecule generation experiments.</p></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"171 ","pages":"Article 109172"},"PeriodicalIF":3.9,"publicationDate":"2024-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140283143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multidimensional fuzzy sets: Negations and an algorithm for multi-attribute group decision making","authors":"Landerson Santiago , Benjamin Bedregal","doi":"10.1016/j.ijar.2024.109171","DOIUrl":"https://doi.org/10.1016/j.ijar.2024.109171","url":null,"abstract":"<div><p>Multidimensional fuzzy sets (MFS) is a new extension of fuzzy sets on which the membership values of an element in the universe of discourse are increasingly ordered vectors on the set of real numbers in the interval <span><math><mo>[</mo><mn>0</mn><mo>,</mo><mn>1</mn><mo>]</mo></math></span>. This paper aims to investigate fuzzy negations on the set of increasingly ordered vectors on <span><math><mo>[</mo><mn>0</mn><mo>,</mo><mn>1</mn><mo>]</mo></math></span>, i.e. on <span><math><msub><mrow><mi>L</mi></mrow><mrow><mo>∞</mo></mrow></msub><mo>(</mo><mo>[</mo><mn>0</mn><mo>,</mo><mn>1</mn><mo>]</mo><mo>)</mo></math></span>, MFN in short, with respect to some partial order. In this paper we study partial orders, giving special attention to admissible orders on <span><math><msub><mrow><mi>L</mi></mrow><mrow><mi>n</mi></mrow></msub><mo>(</mo><mo>[</mo><mn>0</mn><mo>,</mo><mn>1</mn><mo>]</mo><mo>)</mo></math></span> and <span><math><msub><mrow><mi>L</mi></mrow><mrow><mo>∞</mo></mrow></msub><mo>(</mo><mo>[</mo><mn>0</mn><mo>,</mo><mn>1</mn><mo>]</mo><mo>)</mo></math></span>. In addition, we study the possibility of existence of strong multidimensional fuzzy negations and some properties and methods to construct such operators. In particular, we define the ordinal sums of n-dimensional negations and ordinal sums of multidimensional fuzzy negations on a multidimensional product order. 
A multi-attribute group decision making algorithm is presented.</p></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"169 ","pages":"Article 109171"},"PeriodicalIF":3.9,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140180509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
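One concrete strong negation on increasingly ordered vectors, given here only to illustrate the objects the paper studies (the paper treats such negations with respect to general admissible orders), complements each coordinate and reverses the vector, which keeps the entries increasing and is involutive:

```python
def mfn(x):
    """Reverse-complement negation on increasingly ordered vectors:
    N(x1,...,xn) = (1-xn,...,1-x1). If x1 <= ... <= xn then
    1-xn <= ... <= 1-x1, so the result is again increasing, and
    applying N twice returns the original vector (a strong negation)."""
    assert all(a <= b for a, b in zip(x, x[1:])), "input must be increasing"
    return [1 - v for v in reversed(x)]
```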
{"title":"Application of tropical optimization for solving multicriteria problems of pairwise comparisons using log-Chebyshev approximation","authors":"Nikolai Krivulin","doi":"10.1016/j.ijar.2024.109168","DOIUrl":"10.1016/j.ijar.2024.109168","url":null,"abstract":"<div><p>We consider a decision-making problem to find absolute ratings of alternatives that are compared in pairs under multiple criteria, subject to constraints in the form of two-sided bounds on ratios between the ratings. Given matrices of pairwise comparisons made according to the criteria, the problem is formulated as the log-Chebyshev approximation of these matrices by a common consistent matrix (a symmetrically reciprocal matrix of unit rank) to minimize the approximation errors for all matrices simultaneously. We rearrange the approximation problem as a constrained multiobjective optimization problem of finding a vector that determines the approximating consistent matrix. The problem is then represented in the framework of tropical algebra, which deals with the theory and applications of idempotent semirings and provides a formal basis for fuzzy and interval arithmetic. We apply methods and results of tropical optimization to develop a new approach for handling the multiobjective optimization problem according to various principles of optimality. New complete solutions in the sense of the max-ordering, lexicographic ordering and lexicographic max-ordering optimality are obtained, which are given in a compact vector form ready for formal analysis and efficient computation. 
We present numerical examples of solving multicriteria problems of rating four alternatives from pairwise comparisons to illustrate the technique and compare it with others.</p></div>","PeriodicalId":13842,"journal":{"name":"International Journal of Approximate Reasoning","volume":"169 ","pages":"Article 109168"},"PeriodicalIF":3.9,"publicationDate":"2024-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140154085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
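For a single comparison matrix, the log-Chebyshev rating problem has a compact tropical solution: the minimal error equals the max-algebra spectral radius λ of the matrix, and the columns of the Kleene star of A/λ in max-times algebra give optimal rating vectors. The sketch below implements this single-criterion case for a small matrix; the multicriteria orderings of the paper are not covered, and the example matrix is invented.

```python
def max_prod(A, B):
    """Max-times matrix product: (A @ B)_ij = max_k A_ik * B_kj."""
    n = len(A)
    return [[max(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def spectral_radius(A):
    """Max-algebra spectral radius: max over k <= n of the largest
    diagonal entry of the k-th max-times power, to the power 1/k
    (i.e. the maximum cycle geometric mean)."""
    n = len(A)
    P, lam = A, 0.0
    for k in range(1, n + 1):
        lam = max(lam, max(P[i][i] for i in range(n)) ** (1.0 / k))
        P = max_prod(P, A)
    return lam

def ratings(A):
    """Log-Chebyshev optimal ratings for a reciprocal pairwise
    comparison matrix A: a column of the Kleene star of A / lambda."""
    n = len(A)
    lam = spectral_radius(A)
    B = [[A[i][j] / lam for j in range(n)] for i in range(n)]
    star = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    P = B
    for _ in range(n - 1):  # star = I (+) B (+) ... (+) B^(n-1)
        star = [[max(star[i][j], P[i][j]) for j in range(n)] for i in range(n)]
        P = max_prod(P, B)
    return [row[0] for row in star]  # first column of the Kleene star
```

On a fully consistent 3x3 matrix the spectral radius is 1 (zero approximation error) and the recovered ratings reproduce the generating weights up to a common factor.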