Learning decision catalogues for situated decision making: The case of scoring systems
Stefan Heid, Jonas Hanselle, Johannes Fürnkranz, Eyke Hüllermeier
International Journal of Approximate Reasoning 171 (2024), Article 109190. DOI: 10.1016/j.ijar.2024.109190. Published 2024-04-12.

Abstract: In this paper, we formalize the problem of learning coherent collections of decision models, which we call decision catalogues, and illustrate it for the case where the models are scoring systems. This problem is motivated by the recent rise of algorithmic decision-making and the idea of improving human decision-making through machine learning, in conjunction with the observation that decision models should be situated in terms of their complexity and resource requirements: instead of constructing a single decision model and using it in all cases, different models may be appropriate depending on the decision context. Decision catalogues are intended to support a seamless transition from very simple, resource-efficient models to more sophisticated but also more demanding ones. We present a general algorithmic framework for inducing such catalogues from training data, which treats the learning task as a problem of systematically searching the space of candidate catalogues and, to this end, makes use of heuristic search methods. We also present a concrete instantiation of this framework as well as empirical studies for performance evaluation, which, in a nutshell, show that greedy search is an efficient and hard-to-beat strategy for the construction of catalogues of scoring systems.
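To make the scoring-system setting concrete, here is a minimal, hypothetical sketch (not the authors' implementation): a scoring system adds small integer points for active binary features and thresholds the total, and a catalogue is grown greedily, one feature at a time, yielding one model per complexity level.

```python
# Illustrative sketch only: scoring systems over binary features, and a
# greedy construction of a catalogue of models of increasing complexity.

def score(points, x):
    """Total points for a feature vector x (dict: feature -> 0/1)."""
    return sum(p for f, p in points.items() if x.get(f, 0))

def predict(points, threshold, x):
    return int(score(points, x) >= threshold)

def accuracy(points, threshold, data):
    return sum(predict(points, threshold, x) == y for x, y in data) / len(data)

def greedy_catalogue(candidates, data, max_size):
    """Greedily grow a catalogue: each step adds the (feature, points) pair
    that most improves training accuracy, recording one model per size."""
    points, catalogue = {}, []
    for _ in range(max_size):
        best = max(
            ((f, p) for f, p in candidates if f not in points),
            key=lambda fp: accuracy({**points, fp[0]: fp[1]}, 1, data),
            default=None,
        )
        if best is None:
            break
        points = {**points, best[0]: best[1]}
        catalogue.append((dict(points), 1))  # (model, threshold)
    return catalogue
```

A caller can then pick the simplest catalogue entry whose accuracy suffices for the decision context at hand.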
Neighborhood-based argumental community support in the context of multi-topic debates
Irene M. Coronel, Melisa G. Escañuela Gonzalez, Diego C. Martinez, Gerardo I. Simari, Maximiliano C.D. Budán
International Journal of Approximate Reasoning 170 (2024), Article 109189. DOI: 10.1016/j.ijar.2024.109189. Published 2024-04-12.

Abstract: The formal characterization of abstract argumentation has allowed the study of many exciting characteristics of the argumentation process. Nevertheless, while helpful in many respects, abstraction diminishes the knowledge representation capabilities available to describe naturally occurring features of argumentative dialogues; one of these is the set of topics involved in a discussion. In studying dialogical processes, participants recognize that some topics are closely related to the original issue, while others are more distant from the central subject or refer to unrelated matters. Consequently, it is reasonable to study argumentation semantics that consider a discussion's focus when evaluating acceptability. In this work, we introduce the representational elements required to reflect the focus of a discussion. We propose a novel extension of the semantics for multi-topic abstract argumentation frameworks, acknowledging that every argument has its own zone of relevance in the argumentation framework, leading to the concepts of neighborhoods and communities of legitimate defenses. Furthermore, other semantic elaborations are defined and discussed around this structure.
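As background for the argumentation semantics discussed above, here is a minimal sketch of the classical grounded extension (the plain abstract case, without the paper's topic-aware zones of relevance): the least fixed point of the characteristic function F(S) = {a : every attacker of a is attacked by some member of S}.

```python
# Sketch of classical abstract argumentation semantics (grounded extension),
# computed by iterating the characteristic function to its least fixed point.

def grounded_extension(arguments, attacks):
    """arguments: set of names; attacks: set of (attacker, target) pairs."""
    attackers = {a: {b for (b, c) in attacks if c == a} for a in arguments}
    S = set()
    while True:
        defended = {
            a for a in arguments
            if all(any((d, b) in attacks for d in S) for b in attackers[a])
        }
        if defended == S:  # fixed point reached
            return S
        S = defended
```

In the multi-topic setting of the paper, the defense check would additionally be restricted to an argument's zone of relevance.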
Hierarchical variable clustering based on the predictive strength between random vectors
Sebastian Fuchs, Yuping Wang
International Journal of Approximate Reasoning 170 (2024), Article 109185. DOI: 10.1016/j.ijar.2024.109185. Published 2024-04-08.

Abstract: A rank-invariant clustering of variables is introduced that is based on the predictive strength between groups of variables, i.e., two groups are assigned a high similarity if the variables in the first group contain high predictive information about the behaviour of the variables in the other group and/or vice versa. The method presented here is model-free, dependence-based and does not require any distributional assumptions. Various general invariance and continuity properties are investigated, with special attention to those that are beneficial for the agglomerative hierarchical clustering procedure. A fully non-parametric estimator is considered whose excellent performance is demonstrated in several simulation studies and by means of real-data examples.
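To illustrate the general shape of such a procedure, here is a toy sketch of agglomerative clustering driven by a rank-based similarity. The absolute Spearman correlation used here is only a simple rank-invariant stand-in for the paper's predictive-strength measure, and the data is assumed tie-free for brevity.

```python
# Toy sketch: agglomerative variable clustering with a rank-invariant
# similarity (|Spearman|), average linkage between groups. Stand-in only.

def ranks(xs):
    """Ranks of observations (assumes no ties, for brevity)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def agglomerate(variables):
    """variables: dict name -> observations. Returns the merge history."""
    clusters = {frozenset([v]): None for v in variables}  # ordered set
    sim = lambda g, h: sum(
        abs(spearman(variables[a], variables[b])) for a in g for b in h
    ) / (len(g) * len(h))
    history = []
    while len(clusters) > 1:
        g, h = max(
            ((g, h) for g in clusters for h in clusters if g != h),
            key=lambda gh: sim(*gh),
        )
        del clusters[g], clusters[h]
        clusters[g | h] = None
        history.append((set(g), set(h)))
    return history
```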
Towards an effective practice of learning from data and knowledge
Yizuo Chen, Haiying Huang, Adnan Darwiche
International Journal of Approximate Reasoning 171 (2024), Article 109188. DOI: 10.1016/j.ijar.2024.109188. Published 2024-04-05.

Abstract: We discuss some recent advances on combining data and knowledge in the context of supervised learning using Bayesian networks. A first set of advances concerns the computational efficiency of learning and inference; it includes a software-level boost based on compiling Bayesian network structures into tractable circuits in the form of tensor graphs, and algorithmic improvements based on exploiting a type of knowledge called unknown functional dependencies. The tensor graphs capitalize on a highly optimized tensor operation (matrix multiplication), which brings orders-of-magnitude speedups in circuit training and evaluation. The exploitation of unknown functional dependencies yields exponential reductions in the size of tractable circuits and gives rise to the notion of causal treewidth, which offers a corresponding complexity bound. Beyond computational efficiency, we discuss empirical evidence showing the promise of learning from a combination of data and knowledge, in terms of data hungriness and robustness against noise perturbations. Sometimes, however, an accurate Bayesian network structure may not be available due to the incompleteness of human knowledge, leading to modeling errors in the form of missing dependencies or missing variable values. On this front, we discuss another set of advances for recovering from certain types of modeling errors. This is achieved using Testing Bayesian networks, which dynamically select parameters based on the input evidence and come with theoretical guarantees on full recovery under certain conditions.
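The matrix-multiplication point can be seen in miniature: for a chain A → B, marginalizing A out of P(A)·P(B|A) is exactly a vector-matrix product. This is an illustration only, not the paper's tensor-graph machinery.

```python
# Miniature illustration: Bayesian network inference as matrix arithmetic.
# For the chain A -> B, P(B) is the row vector P(A) times the CPT P(B|A).

def matvec(row, matrix):
    """Row vector times matrix, in pure Python."""
    return [sum(row[i] * matrix[i][j] for i in range(len(row)))
            for j in range(len(matrix[0]))]

p_a = [0.3, 0.7]                  # P(A)
p_b_given_a = [[0.9, 0.1],        # P(B | A=0)
               [0.2, 0.8]]        # P(B | A=1)
p_b = matvec(p_a, p_b_given_a)    # P(B), with A marginalized out
```

Tensor-graph compilation generalizes this observation to whole circuits, so evaluation and training reduce to batched matrix multiplications.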
Shortest-length and coarsest-granularity constructs vs. reducts: An experimental evaluation
Manuel S. Lazo-Cortés, Guillermo Sanchez-Diaz, Nelva N. Almanza Ortega
International Journal of Approximate Reasoning 170 (2024), Article 109187. DOI: 10.1016/j.ijar.2024.109187. Published 2024-04-03.

Abstract: In the domain of rough set theory, super-reducts are subsets of attributes with the same discriminative power as the complete set of attributes when it comes to distinguishing objects across distinct classes in supervised classification problems. Within the realm of super-reducts, the concept of reducts holds particular significance, denoting subsets that are irreducible.

Constructs, by contrast, while serving the purpose of distinguishing objects across different classes, also preserve certain shared characteristics among objects within the same class. In essence, constructs are a subtype of super-reducts that integrates both inter-class and intra-class information. Despite their potential, constructs have garnered considerably less attention than reducts.

Both reducts and constructs find application in the reduction of data dimensionality. This paper reviews key concepts related to constructs and reducts, providing insights into their roles. Additionally, it conducts an experimental comparative study between optimal reducts and constructs, considering specific criteria such as shortest length and coarsest granularity, and evaluates their performance using classical classifiers.

The outcomes derived from employing seven classifiers on sixteen datasets lead us to propose that both coarsest-granularity reducts and constructs are effective choices for dimensionality reduction in supervised classification problems. Notably, under the shortest-length optimality criterion, constructs exhibit clear superiority over reducts.

Moreover, a comparative analysis was conducted between the results obtained with the coarsest-granularity constructs and a technique from outside rough set theory, namely correlation-based feature selection. The former demonstrated statistically superior performance, providing further evidence of its efficacy.
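For readers unfamiliar with reducts, here is a toy sketch of the underlying idea (standard rough-set discernibility, not the paper's algorithms): an attribute subset is a super-reduct if it still distinguishes every pair of differently-labelled objects, and a reduct if no attribute can be dropped.

```python
# Toy sketch: testing discernibility and shrinking the attribute set
# greedily until no single attribute can be removed (a reduct).

def discerns(table, labels, attrs):
    """True if attrs distinguish every pair of objects with different labels."""
    for i in range(len(table)):
        for j in range(i + 1, len(table)):
            if labels[i] != labels[j] and all(
                table[i][a] == table[j][a] for a in attrs
            ):
                return False
    return True

def reduct(table, labels):
    """Drop attributes one by one whenever discernibility is preserved."""
    attrs = list(range(len(table[0])))
    for a in sorted(attrs):
        rest = [b for b in attrs if b != a]
        if discerns(table, labels, rest):
            attrs = rest
    return attrs
```

A construct would additionally impose an intra-class condition (preserving similarity within classes), which this sketch omits.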
Few-shot learning based on hierarchical feature fusion via relation networks
Xiao Jia, Yingchi Mao, Zhenxiang Pan, Zicheng Wang, Ping Ping
International Journal of Approximate Reasoning 170 (2024), Article 109186. DOI: 10.1016/j.ijar.2024.109186. Published 2024-03-29.

Abstract: Few-shot learning, which aims to identify new classes from few samples, is an increasingly popular and important research topic in machine learning. Recently, the development of deep learning has deepened the network structures of few-shot models, thereby extracting deeper features from the samples. This trend has led an increasing number of few-shot learning models to pursue more complex structures and deeper features. However, discarding shallow features and blindly pursuing deeper feature levels is not reasonable: features at different levels of a sample carry different information and characteristics. In this paper, we propose a few-shot image classification model based on deep and shallow feature fusion and a coarse-grained relationship score network (HFFCR). First, we use networks of different depths as feature extractors and fuse the two kinds of sample features; the fused features collect sample information at different levels. Second, we condense the fused features into a coarse-grained prototype point, which better represents the information of its class and improves classification efficiency. Finally, we construct a relationship score network: the prototype points and query samples are concatenated into a feature map, which is fed into the network to calculate a relationship score. This learnable relationship score reflects the information difference between the two samples. Experiments on three datasets show that HFFCR achieves strong performance.
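The prototype idea can be sketched in a few lines. Note that the fixed negative-distance score below is only a stand-in for the learned relation network described in the paper, and the "features" are plain vectors rather than fused deep/shallow feature maps.

```python
# Minimal sketch of prototype-based few-shot classification. The learned
# relation network is replaced by a fixed negative-distance score.

def prototype(support):
    """Mean of the support feature vectors of one class."""
    n = len(support)
    return [sum(v[i] for v in support) / n for i in range(len(support[0]))]

def relation_score(proto, query):
    # Stand-in for the learned relation network: negative squared distance.
    return -sum((p - q) ** 2 for p, q in zip(proto, query))

def classify(support_by_class, query):
    """Assign the query to the class whose prototype scores highest."""
    protos = {c: prototype(vs) for c, vs in support_by_class.items()}
    return max(protos, key=lambda c: relation_score(protos[c], query))
```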
A statistical approach to learning constraints
Steven Prestwich, Nic Wilson
International Journal of Approximate Reasoning 171 (2024), Article 109184. DOI: 10.1016/j.ijar.2024.109184. Published 2024-03-29.

Abstract: A constraint-based model represents knowledge about a domain as a set of constraints, which must be satisfied by solutions in that domain. Such models may be used for reasoning, decision making and optimisation. Unfortunately, modelling itself is a hard and error-prone task that requires expertise. The automation of this process is often referred to as constraint acquisition and has been pursued for over 20 years. Methods typically learn constraints by testing candidates against a dataset of solutions and non-solutions, and often use some form of machine learning to decide which should be learned. However, few methods are robust under errors in the data, some cannot handle large sets of candidates, and most are computationally expensive even for small problems. We describe a statistical approach based on sequential analysis that is robust, fast and scalable to large biases. Its correctness depends on an assumption that does not always hold but which is, as we show using Bayesian analysis, reasonable in practice.
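One way such a sequential test could look (with assumed details and thresholds, not necessarily the authors' exact procedure) is a sequential probability ratio test on a candidate constraint's violation rate over streamed solutions: accept the candidate once the evidence says its violation rate is near zero, reject once it looks too high.

```python
# Sketch of sequential analysis for constraint acquisition: a Wald-style
# SPRT deciding between "near-zero violation rate" (a true constraint)
# and "high violation rate" (not a constraint). Parameters are assumed.

import math

def sprt(violations_stream, p0=0.01, p1=0.2, alpha=0.05, beta=0.05):
    """Return 'accept', 'reject', or None if the stream ends undecided."""
    upper = math.log((1 - beta) / alpha)   # reject threshold
    lower = math.log(beta / (1 - alpha))   # accept threshold
    llr = 0.0
    for v in violations_stream:
        if v:  # this solution violates the candidate constraint
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "reject"
        if llr <= lower:
            return "accept"
    return None
```

Because each candidate is tested independently against a stream, the scheme scales to large candidate sets and tolerates occasional data errors.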
Explaining answers generated by knowledge graph embeddings
Andrey Ruschel, Arthur Colombini Gusmão, Fabio Gagliardi Cozman
International Journal of Approximate Reasoning 171 (2024), Article 109183. DOI: 10.1016/j.ijar.2024.109183. Published 2024-03-28.

Abstract: Completion of large-scale knowledge bases, such as DBPedia or Freebase, often relies on embedding models that turn symbolic relations into vector-based representations. Such embedding models are rather opaque to the human user. Research in interpretability has emphasized non-relational classifiers, such as deep neural networks, and has devoted less effort to opaque models extracted from relational structures, such as knowledge graph embeddings. We introduce techniques that produce explanations, expressed as logical rules, for predictions based on the embeddings of knowledge graphs. Algorithms build explanations out of paths in an input knowledge graph, searched through contextual and heuristic cues.
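As background, here is a toy sketch of the embedding-and-path idea in the TransE style: a triple (h, r, t) scores well when h + r lands near t, and a path-based explanation checks whether composing relations along a path mimics the predicted relation. The entity and relation names and vectors below are hypothetical; the paper's algorithms search real knowledge graphs using contextual and heuristic cues.

```python
# Toy TransE-style sketch: triple scoring and a path-composition check
# of the kind a path-based explanation would rely on. All values invented.

def add(u, v):
    return [a + b for a, b in zip(u, v)]

def dist(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def triple_score(ent, rel, h, r, t):
    """Higher (closer to 0) means the embedding supports triple (h, r, t)."""
    return -dist(add(ent[h], rel[r]), ent[t])

def path_explains(rel, path, target, tol=1e-6):
    """Does composing the relations along `path` approximate `target`?"""
    total = [0.0] * len(rel[target])
    for r in path:
        total = add(total, rel[r])
    return dist(total, rel[target]) <= tol
```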
Feature selection for multi-label learning based on variable-degree multi-granulation decision-theoretic rough sets
Ying Yu, Ming Wan, Jin Qian, Duoqian Miao, Zhiqiang Zhang, Pengfei Zhao
International Journal of Approximate Reasoning 169 (2024), Article 109181. DOI: 10.1016/j.ijar.2024.109181. Published 2024-03-27.

Abstract: Multi-label learning (MLL) suffers from a high-dimensional feature space teeming with irrelevant and redundant features. To tackle this, several multi-label feature selection (MLFS) algorithms have emerged as vital preprocessing steps. Nonetheless, existing MLFS methods have shortcomings. First, while they excel at harnessing label-feature relationships, they often struggle to leverage inter-feature information effectively. Second, many MLFS approaches overlook the uncertainty in the boundary domain, despite its critical role in identifying high-quality features. To address these issues, this paper introduces a novel MLFS algorithm, named VMFS, which integrates multi-granulation rough sets with three-way decision by leveraging multi-granulation decision-theoretic rough sets (MGDRS) with variable degrees. We first construct coarse decision (RDC), fine decision (RDF), and uncertainty decision (RDU) functions for each object based on MGDRS with variable degrees. These decision functions then quantify the dependence of attribute subsets, considering both deterministic and uncertain aspects. Finally, we use this dependence to assess attribute importance and rank attributes accordingly. The proposed method has been evaluated on a range of standard multi-label datasets. Experimental results consistently show that VMFS significantly outperforms other algorithms on most datasets, underscoring its effectiveness and reliability in multi-label learning tasks.
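The dependency-based ranking idea can be sketched with the standard (non-variable-degree) dependency degree from rough set theory: the fraction of objects whose indiscernibility class under an attribute subset is label-pure. This is generic rough-set machinery, not the paper's variable-degree variant.

```python
# Sketch: rough-set dependency degree of an attribute subset, and a
# single-attribute ranking built from it. Standard machinery only.

from collections import defaultdict

def dependency(table, labels, attrs):
    """Fraction of objects whose attrs-indiscernibility class is label-pure."""
    classes = defaultdict(list)
    for i, row in enumerate(table):
        classes[tuple(row[a] for a in attrs)].append(i)
    pure = sum(
        len(idxs) for idxs in classes.values()
        if len({labels[i] for i in idxs}) == 1
    )
    return pure / len(table)

def rank_features(table, labels):
    """Rank attributes by decreasing single-attribute dependency degree."""
    n = len(table[0])
    return sorted(range(n), key=lambda a: -dependency(table, labels, [a]))
```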
Intensions and extensions of granules: A two-component treatment
Tamás Mihálydeák, Tamás Kádek, Dávid Nagy, Mihir K. Chakraborty
International Journal of Approximate Reasoning 169 (2024), Article 109182. DOI: 10.1016/j.ijar.2024.109182. Published 2024-03-25.

Abstract: The concept of a granule (of knowledge) originated with Zadeh, where granules appeared as references to words (phrases) of a natural (or artificial) language. According to Zadeh's program, "a granule is a collection of objects drawn together by similarity or functionality and considered therefore as a whole". Pawlak's original theory of rough sets and its various generalizations share a common property: all such systems rely on given background knowledge represented by a system of base sets. Since the members of a base set have to be treated similarly, base sets can be considered granules. The background knowledge has a conceptual structure, and it contains information that does not appear at the level of base granules, so such information cannot be taken into account in approximations. A new problem thus arises: is it possible to construct a system that models the background knowledge better? A two-component treatment can be a solution. After giving a formal language of granules involving the tools for approximations, a logical calculus containing approximation operators is introduced. Then a two-component semantics (treating intensions and extensions of granule expressions) is defined. The authors show the connection between the logical calculus and the two-component semantics.
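For context, the approximation operators that such granule systems build on can be sketched directly: given base sets (granules), the lower approximation of a set X unions the granules contained in X, and the upper approximation unions those that meet X.

```python
# Sketch of the classical rough-set approximation operators over a
# system of base sets (granules), which the paper's calculus abstracts.

def lower(granules, X):
    """Union of granules entirely inside X."""
    out = set()
    for g in granules:
        if set(g) <= set(X):
            out |= set(g)
    return out

def upper(granules, X):
    """Union of granules that intersect X."""
    out = set()
    for g in granules:
        if set(g) & set(X):
            out |= set(g)
    return out
```

The gap between the two operators (the boundary) is exactly the information that the base granules cannot resolve.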