Information Fusion, Volume 120, Article 103107 | Pub Date: 2025-03-21 | DOI: 10.1016/j.inffus.2025.103107
Title: Social network group decision making: Characterization, taxonomy, challenges and future directions from an AI and LLMs perspective
Authors: Mingshuo Cao, Tiantian Gai, Jian Wu, Francisco Chiclana, Zhen Zhang, Yucheng Dong, Enrique Herrera-Viedma, Francisco Herrera

Abstract: In the past decade, social network group decision making (SNGDM) has advanced significantly. This progress is largely attributable to the rise of social networks, which provide crucial data support for SNGDM. As a result, SNGDM has emerged as a rapidly developing research field within the decision sciences, attracting extensive attention over the past ten years. SNGDM events involve complex decision-making processes with multiple interconnected stakeholders, where the evaluation of alternatives is influenced by network relationships. Because this research evolved from group decision making (GDM) scenarios, there is currently no clear definition of SNGDM problems. This article addresses that gap by first giving a clear definition of the SNGDM framework and describing its basic procedures, advantages, and challenges, thereby providing a foundational portrait of the field. It then offers a macro-level, bibliometric description of the SNGDM literature of the past decade. Solving SNGDM problems effectively is challenging: it requires careful consideration of the influence of the social network among decision-makers and of how consensus is facilitated between participants. We therefore propose a classification and overview of the key elements of SNGDM models in the existing literature: trust models, internal structure, and consensus mechanisms. Finally, the article identifies open research challenges in SNGDM and outlines future directions along two dimensions: key SNGDM methodologies, and opportunities offered by artificial intelligence technology, in particular the combination of large language models and multimodal fusion technologies. This outlook is analyzed from a dual perspective, covering both the decision problem and the technology.
Information Fusion, Volume 120, Article 103111 | Pub Date: 2025-03-20 | DOI: 10.1016/j.inffus.2025.103111
Title: CPIFuse: Toward realistic color and enhanced textures in color polarization image fusion
Authors: Yidong Luo, Junchao Zhang, Chenggong Li

Abstract: Conventional image fusion aims to integrate multiple sets of source images into a single image with more detail, i.e., it merges intensity information. In contrast, polarization image fusion seeks to enhance the texture of the intensity image S0 in the corresponding spectral bands by integrating the strong texture features reflected in DoLP (degree of linear polarization) images; it thus combines intensity and polarization, both physical properties of light. However, the three-dimensional information contained in DoLP is presented in highlighted form within the two-dimensional image, and fusing it directly can cause spectral discontinuities and obscure necessary details in the fused image. Existing polarization image fusion methods do not analyze this phenomenon and fail to examine the physical information represented by DoLP images; instead, they simply integrate this interfering information in the same manner as infrared images, producing fused results that suffer from information loss and significant color discrepancies. In this paper, we propose CPIFuse, a new color polarization image fusion strategy that accounts for the physical properties reflected in the S0 and DoLP images. CPIFuse optimizes a customized loss function within a lightweight Transformer-based image fusion framework and achieves color polarization image fusion with color fidelity, enhanced texture, and high efficiency. These advantages are demonstrated by the visual results, quantitative metrics, and car-detection tasks of our comparative experiments. Furthermore, we construct a new polarization dataset with a division-of-focal-plane polarimeter camera, addressing the scarcity of datasets in the field of polarization image fusion. The source code and CPIF-dataset will be available at https://github.com/roydon-luo/CPIFuse.
Information Fusion, Volume 120, Article 103118 | Pub Date: 2025-03-20 | DOI: 10.1016/j.inffus.2025.103118
Title: Object-Level and Scene-Level Feature Aggregation with CLIP for scene recognition
Authors: Qun Wang, Feng Zhu, Ge Wu, Pengfei Zhao, Jianyu Wang, Xiang Li

Abstract: Scene recognition is a fundamental task in computer vision, pivotal for applications like visual navigation and robotics. However, traditional methods struggle to effectively capture and aggregate scene-related features due to the inherent complexity and diversity of scenes, often leading to sub-optimal performance. To address this limitation, we propose a novel method, named OSFA (Object-level and Scene-level Feature Aggregation), that leverages CLIP's multimodal strengths to enhance scene feature representation through a two-stage aggregation strategy: Object-Level Feature Aggregation (OLFA) and Scene-Level Feature Aggregation (SLFA). In OLFA, we first generate an initial scene feature by integrating the average-pooled feature map of the base visual encoder and the CLIP visual feature. The initial scene feature is then used as a query in object-level cross-attention to extract the object-level details most relevant to the scene from the feature map, thereby enhancing the representation. In SLFA, we first use CLIP's textual encoder to provide category-level textual features for the scene, guiding the aggregation of corresponding visual features from the feature map. OLFA's enhanced scene feature then queries these category-aware features using scene-level cross-attention to further capture scene-level information and obtain the final scene representation. To strengthen training, we employ a multi-loss strategy inspired by contrastive learning, improving feature robustness and discriminative ability. We evaluate OSFA on three challenging datasets (i.e., Places365, MIT67, and SUN397), achieving substantial improvements in classification accuracy. These results highlight the effectiveness of our method in enhancing scene feature representation through CLIP-guided aggregation. This advancement significantly improves scene recognition performance. Our code is public at https://github.com/WangqunQAQ/OSFA.
Information Fusion, Volume 120, Article 103079 | Pub Date: 2025-03-20 | DOI: 10.1016/j.inffus.2025.103079
Title: TSCMamba: Mamba meets multi-view learning for time series classification
Authors: Md Atik Ahamed, Qiang Cheng

Abstract: Multivariate time series classification (TSC) is critical for various applications in fields such as healthcare and finance. While various approaches for TSC have been explored, important properties of time series, such as shift equivariance and inversion invariance, are largely underexplored by existing works. To fill this gap, we propose a novel multi-view approach to capture patterns with properties like shift equivariance. Our method integrates diverse features, including spectral, temporal, local, and global features, to obtain rich, complementary contexts for TSC. We use the continuous wavelet transform to capture time-frequency features that remain consistent even when the input is shifted in time. These features are fused with temporal convolutional or multilayer perceptron features to provide complex local and global contextual information. We utilize the Mamba state space model for efficient and scalable sequence modeling and for capturing long-range dependencies in time series. Moreover, we introduce a new scanning scheme for Mamba, called tango scanning, to effectively model sequence relationships and leverage inversion invariance, thereby enhancing our model's generalization and robustness. Experiments on two sets of benchmark datasets (10 + 20 datasets) demonstrate our approach's effectiveness, achieving average accuracy improvements of 4.01-6.45% and 7.93%, respectively, over leading TSC models such as TimesNet and TSLANet.
Information Fusion, Volume 120, Article 103114 | Pub Date: 2025-03-19 | DOI: 10.1016/j.inffus.2025.103114
Title: Rotation invariant dual-view 3D point cloud reconstruction with geometrical consistency based feature aggregation
Authors: Xin Jia, Jinglei Zhang, Lei Jia, Yunbo Wang, Shengyong Chen

Abstract: Multi-view 3D reconstruction usually aggregates features from an object seen in different views to recover its 3D shape. We argue that exploiting the rotation invariance of object regions, and further learning the geometrical consistency of regions across views, enables better feature aggregation; existing methods, however, fail to investigate this insight. Meanwhile, the self-occlusion inherent in the input views can also compromise consistency learning. This paper presents Rotation invariant dual-view 3D point cloud reconstruction with Geometrical consistency based Feature aggregation (R3GF), which reconstructs a 3D point cloud from two RGB images with arbitrary views. In encoding, a point cloud initialization network produces a rough point cloud for each view. To exploit the rotation invariance of object regions, a regional feature extraction network is proposed: it uses Euclidean distance and angle-based cues to capture rotation-invariant features that characterize the geometrical information of different regions of the rough point clouds. In decoding, to perform consistency learning even when self-occlusion is present in the input views, a dual-stage cross-attention mechanism is devised. It enhances the captured regional features with the global shapes of the rough point clouds, enriching the information of occluded regions. The enhanced regional features from the rough point clouds of different views are then aligned to model the geometrical consistency among regions, achieving accurate feature aggregation. Furthermore, a point cloud refinement module produces a refined point cloud from the aggregated feature. Extensive experiments on the ShapeNet and Pix3D datasets show that R3GF outperforms state-of-the-art methods.
Information Fusion, Volume 120, Article 103109 | Pub Date: 2025-03-19 | DOI: 10.1016/j.inffus.2025.103109
Title: HDRT: A large-scale dataset for infrared-guided HDR imaging
Authors: Jingchao Peng, Thomas Bashford-Rogers, Francesco Banterle, Haitao Zhao, Kurt Debattista

Abstract: Capturing images with enough detail to solve imaging tasks is a long-standing challenge in imaging, particularly due to the limitations of standard dynamic range (SDR) images, which often lose detail in underexposed or overexposed regions. Traditional high dynamic range (HDR) methods, like multi-exposure fusion or inverse tone mapping, struggle with ghosting and incomplete data reconstruction. Infrared (IR) imaging offers a unique advantage by being less affected by lighting conditions, providing consistent detail capture regardless of visible light intensity. In this paper, we introduce the HDRT dataset, the first comprehensive dataset that consists of HDR and thermal IR images. The HDRT dataset comprises 50,000 images captured across three seasons over six months in eight cities, providing a diverse range of lighting conditions and environmental contexts. Leveraging this dataset, we propose HDRTNet, a novel deep neural method that fuses IR and SDR content to generate HDR images. Extensive experiments validate HDRTNet against the state of the art, showing substantial quantitative and qualitative quality improvements. The HDRT dataset not only advances IR-guided HDR imaging but also offers significant potential for broader research in HDR imaging, multi-modal fusion, domain transfer, and beyond. The dataset is available at https://huggingface.co/datasets/jingchao-peng/HDRTDataset.
Information Fusion, Volume 120, Article 103104 | Pub Date: 2025-03-19 | DOI: 10.1016/j.inffus.2025.103104
Title: Towards haze removal with derived pseudo-label supervision from real-world non-aligned training data
Authors: Weichao Yi, Liquan Dong, Ming Liu, Lingqin Kong, Yue Yang, Xuhong Chu, Yuejin Zhao

Abstract: Single-image dehazing seeks to restore clear images by addressing degradations caused by hazy conditions, such as detail loss and color distortion. However, since collecting large-scale and precisely aligned hazy/clear image pairs is unrealistic in real-world scenarios, existing data-driven dehazing algorithms are often limited by data authenticity and by the domain gap between synthetic and real-world scenes, resulting in unsatisfactory performance. To this end, we propose a novel haze removal framework built on real-world, non-aligned training data. Our framework comprises two components: a pseudo-label supervision generation stage and an image dehazing stage. The former explores clean-related style information in the haze-free image and transfers it to the corresponding hazy counterpart, generating finely aligned training image pairs. Concretely, we mitigate domain divergence and pixel misalignment with a well-designed Cross-Modulation Align Network (CMA-Net), which includes a Domain Transfer Module (DTM) and a Feature Alignment Module (FAM). The latter stage focuses on constructing an effective dehazing architecture trained with a pseudo-label, pixel-wise supervision paradigm. To this end, we propose USD-Net, a standard U-shape dehazing network with a Physics-related Feature Unit (PFU) and Gate Attentive Fusion (GAF). Furthermore, we capture a new non-aligned hazy/clear dataset, named Hazy-JXBIT, with our own camera devices to further evaluate the proposed framework. Extensive experimental results demonstrate that the finely aligned pseudo-label training pairs generated by CMA-Net are beneficial for building a steady dehazing network, USD-Net, and lead to superior performance over existing state-of-the-art methods.
Information Fusion, Volume 120, Article 103110 | Pub Date: 2025-03-18 | DOI: 10.1016/j.inffus.2025.103110
Title: MDGF-CD: Land-cover change detection with multi-level DiffFormer feature grouping fusion for VHR remote sensing images
Authors: Jamin Liu, Rui Xu, Yule Duan, Tan Guo, Guangyao Shi, Fulin Luo

Abstract: Recently, the Transformer has become a popular tool for change detection (CD) in remote sensing images due to its ability to model global information. However, existing Transformer models lack designs tailored to change information and do not adequately consider the fusion of features at different levels, which impedes their ability to distinguish changes of objects of interest in complex backgrounds. To tackle these issues, we propose a multi-level DiffFormer feature grouping fusion network (MDGF-CD). Specifically, MDGF-CD develops a DiffFormer structure as the backbone for extracting multi-level features from the original input image pairs; DiffFormer better matches the CD task by considering change and global information in the attention mechanism. We then define a novel multi-level feature grouping fusion method to effectively integrate features across different levels, adopting a grouping pattern to fully fuse low-level spatial details and high-level abstract semantics. In addition, a spatial change-aware module is designed to preserve low-level spatial details for better detection of subtle changes. Experimental results on the LEVIR-CD, WHU-CD and CLCD datasets demonstrate that the proposed MDGF-CD outperforms existing state-of-the-art methods.
Information Fusion, Volume 120, Article 103126 | Pub Date: 2025-03-17 | DOI: 10.1016/j.inffus.2025.103126
Title: Confidence ensembles: Tabular data classifiers on steroids
Authors: Tommaso Zoppi, Peter Popov

Abstract: The astounding amount of research conducted in recent decades has provided plenty of Machine Learning (ML) algorithms and models for solving a wide variety of tasks on tabular data. However, classifiers are not always fast, accurate, and robust to unknown inputs, calling for further research in the domain. This paper proposes two classifiers based on confidence ensembles: Confidence Bagging (ConfBag) and Confidence Boosting (ConfBoost). Confidence ensembles build upon a base estimator and create base learners relying on the concept of "confidence" in predictions. They apply to any classification problem, binary or multi-class, supervised or unsupervised, without requiring additional data beyond what the base estimator already needs. Our experimental evaluation on a range of tabular datasets shows that confidence ensembles, and especially ConfBoost, (i) build more accurate classifiers than the base estimators alone, even with a limited number of base learners, (ii) are relatively easy to tune as they rely on few hyper-parameters, and (iii) are significantly more robust than other tabular data classifiers when dealing with unknown, unexpected input data. Among other results, confidence ensembles showed potential to go beyond the performance of de facto standard classifiers for tabular data such as Random Forest and eXtreme Gradient Boosting. ConfBag and ConfBoost are publicly available as a PyPI package, compliant with widely used Python frameworks such as scikit-learn and pyod, and require little to no tuning to be applied to tabular classification tasks.
Information Fusion, Volume 120, Article 103113 | Pub Date: 2025-03-17 | DOI: 10.1016/j.inffus.2025.103113
Title: Multi-view evidential K-NN classification
Authors: Chaoyu Gong, Zhi-gang Su, Thierry Denoeux

Abstract: Multi-view classification, which aims to classify samples represented by multiple feature vectors, has become a hot topic in pattern recognition. Although many methods with promising performance have been proposed, their practicality is still limited by a lack of interpretability in some situations. Moreover, an appropriate description of the soft labels of multi-view samples is missing, which can degrade classification performance, especially for samples located in highly overlapping areas of the multiple vector spaces. To address these issues, we extend the K-nearest neighbor (K-NN) classification algorithm to multi-view learning under the theoretical framework of evidence theory. The learning process is first formalized as an optimization problem in which the weights of the different views, an adaptive K value for every sample, and the distance matrix are determined jointly from the training error. The final classification result is then derived following the philosophy of the evidential K-NN classification algorithm. Detailed ablation studies demonstrate the benefits of jointly learning adaptive neighborhoods and view weights in a supervised way. Comparative experiments on real-world datasets show that our algorithm performs better than other state-of-the-art methods. A real-world industrial application to condition monitoring, presented in Appendix F, illustrates in detail the need for evidence theory and the benefits of the unique interpretability of K-NN.