Information Fusion: Latest Articles

Deep multi-view clustering: A comprehensive survey of the contemporary techniques
IF 14.7 | CAS Q1 | Computer Science
Information Fusion Pub Date: 2025-02-20 DOI: 10.1016/j.inffus.2025.103012
Anal Roy Chowdhury, Avisek Gupta, Swagatam Das
Abstract: Data can be represented by multiple sets of features, where each semantically coherent set of features is called a view. For example, an image can be represented by multiple sets of features that measure textures, shapes, edge features, etc. Collecting multiple views of data is generally easier than annotating it with the help of experts. Thus, the unsupervised exploration of data in consultation with all collected views is essential to identify naturally occurring clusters of data instances. In deep multi-view clustering, deep neural networks are used to obtain non-linear latent representations of data instances that agree with the multiple views, and these representations are then used to identify clusters of data instances. A wide variety of such deep multi-view clustering approaches exist, which we systematically study and categorize into a novel taxonomy that provides structure to the existing literature and can also guide future researchers. We provide a pedagogical discussion of preliminary concepts to help readers understand topics relevant to the studied deep clustering methods. Various multi-view problems under study are summarized, and future research directions are noted.
Citations: 0
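The basic recipe the survey organizes (per-view encoders that learn non-linear latent representations, a fusion step, and a clustering step) can be illustrated with a minimal PyTorch/scikit-learn sketch. This is a generic baseline under assumed dimensions and a simple concatenation fusion, not any specific method from the survey.

```python
# A minimal, generic sketch of one family of deep multi-view clustering methods:
# per-view autoencoders whose latent codes are fused and then clustered.
# All module and variable names here are illustrative, not taken from the paper.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class ViewAutoencoder(nn.Module):
    def __init__(self, in_dim, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def train_and_cluster(views, n_clusters=5, epochs=100, lr=1e-3):
    """views: list of (n_samples, dim_v) float tensors, one per view."""
    models = [ViewAutoencoder(v.shape[1]) for v in views]
    params = [p for m in models for p in m.parameters()]
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = 0.0
        for m, v in zip(models, views):
            _, recon = m(v)
            loss = loss + nn.functional.mse_loss(recon, v)   # per-view reconstruction loss
        loss.backward()
        opt.step()
    with torch.no_grad():
        # Fuse views by concatenating latent codes; surveyed methods use richer fusion schemes.
        z = torch.cat([m(v)[0] for m, v in zip(models, views)], dim=1)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(z.numpy())
```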
A review of medical text analysis: Theory and practice
IF 14.7 | CAS Q1 | Computer Science
Information Fusion Pub Date: 2025-02-19 DOI: 10.1016/j.inffus.2025.103024
Yani Chen, Chunwu Zhang, Ruibin Bai, Tengfang Sun, Weiping Ding, Ruili Wang
Abstract: Medical data analysis has emerged as an important driving force for smart healthcare, with applications ranging from disease analysis to triage, diagnosis, and treatment. Text data plays a crucial role in providing contexts and details that other data types cannot capture alone, making its analysis an indispensable resource in medical research. Natural language processing, a key technology for analyzing and interpreting text, is essential for extracting meaningful insights from medical text data. This systematic review explores the analysis of text data in medicine, focusing on applications of natural language processing methods. We retrieved a total of 4,784 publications from four databases. After applying rigorous exclusion criteria, 192 relevant publications were selected for in-depth analysis. These studies are evaluated from five critical perspectives: emerging trends in medical text analysis, commonly employed methodologies, major data sources, research topics, and applications to real-world problem-solving. Our analysis provides a comprehensive overview of the current state of medical text analysis, highlighting its advantages, limitations, and future potential. Finally, we identify key challenges and outline future research directions for advancing medical text analysis.
Citations: 0
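As a point of reference for the "commonly employed methodologies" such reviews examine, below is a toy example of a classical medical-text pipeline (TF-IDF features plus a linear classifier) in scikit-learn. The notes and labels are invented for illustration and do not come from the paper.

```python
# Illustrative only: a classical medical-text pipeline of the kind such reviews
# contrast with modern transformer-based NLP. Toy data, not from the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = ["patient reports chest pain and shortness of breath",
         "routine follow-up, no acute complaints",
         "severe headache with photophobia and nausea",
         "annual physical, labs within normal limits"]
labels = [1, 0, 1, 0]  # 1 = symptomatic note, 0 = routine note (toy labels)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(notes, labels)
print(clf.predict(["patient complains of chest pain"]))  # expected: [1]
```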
Enhancing cross-domain generalization by fusing language-guided feature remapping
IF 14.7 | CAS Q1 | Computer Science
Information Fusion Pub Date: 2025-02-19 DOI: 10.1016/j.inffus.2025.103029
Ziteng Qiao, Dianxi Shi, Songchang Jin, Yanyan Shi, Luoxi Jing, Chunping Qiu
Abstract: Domain generalization refers to training a model with annotated source-domain data and making it generalize to various unseen target domains. It has been extensively studied in classification but remains challenging in object detection. Existing domain-generalization object detection methods mainly rely on generative or adversarial data augmentation, which increases the complexity of training. Recently, vision-language models (VLMs), such as CLIP, have demonstrated strong cross-modal alignment capabilities, showing potential for enhancing domain generalization. On this basis, the paper proposes a language-guided feature remapping method, which leverages VLMs to augment sample features and improve the generalization performance of regular models. In detail, we first construct a teacher-student network structure. Then, we introduce a feature remapping module that remaps sample features in both local and global spatial dimensions to improve the distribution of feature representations. Concurrently, we design domain prompts and class prompts to guide the sample features to remap into a more generalized and universal feature space. Finally, we establish a knowledge distillation structure to facilitate knowledge transfer between the teacher and student networks, enhancing the domain generalization ability of the student network. Multiple experimental results demonstrate the superiority of our proposed method.
Citations: 0
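To make the idea of language-guided feature alignment concrete, here is a rough sketch that pulls detector-style visual features toward CLIP text embeddings built from domain/class prompts, using the Hugging Face transformers CLIP API. The prompts, projection head, and loss are assumptions for illustration; they are not the paper's remapping or distillation modules.

```python
# A rough sketch of language-guided feature alignment: pull visual features toward
# CLIP text embeddings built from prompts. Prompts and loss below are illustrative.
import torch
import torch.nn.functional as F
from transformers import CLIPModel, CLIPTokenizer

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
tok = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

prompts = ["a photo of a car in foggy weather", "a photo of a car at night"]  # hypothetical domain prompts
with torch.no_grad():
    text_emb = clip.get_text_features(**tok(prompts, padding=True, return_tensors="pt"))
    text_emb = F.normalize(text_emb, dim=-1)          # (num_prompts, 512)

proj = torch.nn.Linear(256, 512)                       # maps detector features into CLIP space
feats = torch.randn(8, 256)                            # stand-in for detector region features
vis_emb = F.normalize(proj(feats), dim=-1)

# Encourage each visual feature to land near some prompt embedding, a simple stand-in
# for "remapping into a more generalized feature space".
sim = vis_emb @ text_emb.t()                           # (8, num_prompts)
loss = -sim.max(dim=1).values.mean()
loss.backward()
```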
MSF-Net: Multi-stage fusion network for emotion recognition from multimodal signals in scalable healthcare
IF 14.7 | CAS Q1 | Computer Science
Information Fusion Pub Date: 2025-02-19 DOI: 10.1016/j.inffus.2025.103028
Md. Milon Islam, Fakhri Karray, Ghulam Muhammad
Abstract: Automatic emotion recognition has attracted significant interest in healthcare, thanks to remarkable recent developments in smart and innovative technologies. A real-time emotion recognition system allows for continuous monitoring, comprehension, and enhancement of the physical entity's capacities, along with continuing advice for enhancing quality of life and well-being in the context of personalized healthcare. Multimodal emotion recognition presents a significant challenge in efficiently using the diverse modalities present in the data. In this article, we introduce a Multi-Stage Fusion Network (MSF-Net) for emotion recognition capable of extracting multimodal information and achieving significant performance. We propose utilizing a transformer-based structure to extract deep features from facial expressions. We exploit two visual descriptors, local binary pattern and Oriented FAST and Rotated BRIEF, to retrieve computer-vision-based features from the facial videos. A feature-level fusion network integrates the features extracted by these modules and directs the output into a triplet attention technique. This module employs a three-branch architecture to compute attention weights that capture cross-dimensional interactions efficiently. The temporal dependencies in physiological signals are modeled by a Bi-directional Gated Recurrent Unit (Bi-GRU) in forward and backward directions at each time step. Lastly, the output feature representations from the triplet attention module and the high-level patterns extracted by the Bi-GRU are fused and fed into the classification module to recognize emotion. Extensive experimental evaluations reveal that the proposed MSF-Net outperforms state-of-the-art approaches on two popular datasets, BioVid Emo DB and MGEED. Finally, we test the proposed MSF-Net in an Internet of Things environment to facilitate real-world scalable smart healthcare applications.
Citations: 0
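Two of the ingredients named in the abstract, a Bi-directional GRU over physiological signals and a feature-level fusion with visual features before classification, can be sketched as follows. The dimensions, the concatenation-based fusion, and the classifier head are assumptions; this is not MSF-Net itself.

```python
# A minimal sketch: Bi-GRU over physiological signals, fused with pooled facial
# features and classified. Shapes and the fusion operator are assumed values.
import torch
import torch.nn as nn

class ToyFusionClassifier(nn.Module):
    def __init__(self, vis_dim=128, sig_dim=8, hidden=64, n_emotions=5):
        super().__init__()
        self.bigru = nn.GRU(sig_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Sequential(nn.Linear(vis_dim + 2 * hidden, 128), nn.ReLU(),
                                  nn.Linear(128, n_emotions))

    def forward(self, vis_feat, signals):
        # vis_feat: (B, vis_dim) pooled facial features; signals: (B, T, sig_dim)
        _, h = self.bigru(signals)                 # h: (2, B, hidden), last state of each direction
        sig_feat = torch.cat([h[0], h[1]], dim=1)  # (B, 2*hidden)
        return self.head(torch.cat([vis_feat, sig_feat], dim=1))

model = ToyFusionClassifier()
logits = model(torch.randn(4, 128), torch.randn(4, 100, 8))  # batch of 4, 100 time steps
print(logits.shape)  # torch.Size([4, 5])
```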
SSEFusion: Salient semantic enhancement for multimodal medical image fusion with Mamba and dynamic spiking neural networks
IF 14.7 | CAS Q1 | Computer Science
Information Fusion Pub Date: 2025-02-19 DOI: 10.1016/j.inffus.2025.103031
Shiqiang Liu, Weisheng Li, Dan He, Guofen Wang, Yuping Huang
Abstract: Multimodal medical image fusion technology enhances medical representations and plays a vital role in clinical diagnosis. However, fusing medical images remains a challenge due to the stochastic nature of lesions and the complex structures of organs. Although many fusion methods have been proposed recently, most struggle to effectively establish global context dependency while preserving salient semantic features, leading to the loss of crucial medical information. Therefore, we propose a novel salient semantic enhancement fusion (SSEFusion) framework, whose key components include a dual-branch encoder that combines Mamba and spiking neural network (SNN) models (Mamba-SNN encoder), feature interaction attention (FIA) blocks, and a decoder equipped with detail enhancement (DE) blocks. In the encoder, the Mamba-based branch introduces visual state space (VSS) blocks to efficiently establish global dependencies and extract global features for effective identification of the lesion area. Meanwhile, the SNN-based branch dynamically extracts fine-grained salient features to enhance the retention of medical semantic information in complex structures. Global features and fine-grained salient features interact semantically to achieve feature complementarity through the FIA blocks. Benefiting from the DE block, SSEFusion generates fused images with prominent edge textures. Furthermore, we propose a salient semantic loss based on leaky-integrate-and-fire (LIF) neurons to strengthen the guidance for extracting key features. Extensive fusion experiments show that SSEFusion outperforms state-of-the-art fusion methods in terms of salient medical semantic information retention. The code is available at https://github.com/Shiqiang-Liu/SSEFusion.
Citations: 0
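For readers unfamiliar with the spiking side, the leaky-integrate-and-fire (LIF) dynamics that the salient semantic loss builds on can be written in a few lines. The decay constant, threshold, and hard reset below are generic textbook choices, not the values used in SSEFusion (whose official code is linked above).

```python
# Only an illustration of LIF neuron dynamics; not the SSEFusion implementation.
import torch

def lif_forward(inputs, tau=2.0, v_threshold=1.0):
    """inputs: (T, B, C) input currents over T time steps; returns spike trains (T, B, C)."""
    v = torch.zeros_like(inputs[0])               # membrane potential
    spikes = []
    for x_t in inputs:
        v = v + (x_t - v) / tau                   # leaky integration toward the input
        spike = (v >= v_threshold).float()        # fire when the threshold is crossed
        v = v * (1.0 - spike)                     # hard reset after a spike
        spikes.append(spike)
    return torch.stack(spikes)

spike_train = lif_forward(torch.rand(8, 2, 16))   # 8 time steps, batch 2, 16 channels
print(spike_train.mean())                         # average firing rate
```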
DGFD: A dual-graph convolutional network for image fusion and low-light object detection
IF 14.7 | CAS Q1 | Computer Science
Information Fusion Pub Date: 2025-02-18 DOI: 10.1016/j.inffus.2025.103025
Xiaoxuan Chen, Shuwen Xu, Shaohai Hu, Xiaole Ma
Abstract: Traditional convolutional operations primarily concentrate on local feature extraction, which can result in the loss of global features. However, current fusion methods for extracting global features exhibit high time complexity and have difficulty capturing long-range dependencies. In this paper, a dual-graph convolutional neural network is constructed to perform cross-modal graph inference based on the modal structures of infrared and visible images. Specifically, to aggregate the global and local features of different modal images, a contextual graph convolutional module is proposed that allows the network structure to adapt to the features of different modal images and facilitates the extraction of features at various levels. A content graph convolutional module is also proposed to construct correlation relationships between infrared and visible images, achieving feature fusion without manual intervention. Furthermore, the fused features are fed into a unified framework that integrates the fusion and detection tasks. Extensive qualitative and quantitative experiments demonstrate that the proposed unified network significantly reduces time complexity and improves detection performance under low-light conditions.
Citations: 0
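A single graph-convolution step of the kind dual-graph methods build on (propagate node features through a normalized adjacency, then transform them) looks roughly like this. The similarity-based adjacency over pooled feature nodes is an assumption for illustration; it is not the paper's contextual or content graph module.

```python
# A generic graph-convolution step: X' = relu(D^-1/2 A D^-1/2 X W).
# The adjacency construction below (feature similarity) is assumed for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

def gcn_layer(x, adj, weight):
    """x: (N, C) node features, adj: (N, N) with self-loops, weight: (C, C_out)."""
    deg = adj.sum(dim=1)
    d_inv_sqrt = deg.pow(-0.5)
    norm_adj = d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)  # symmetric normalization
    return F.relu(norm_adj @ x @ weight)

nodes = torch.randn(16, 64)                               # e.g. pooled regions from IR/visible features
sim = F.normalize(nodes, dim=1) @ F.normalize(nodes, dim=1).t()
adj = (sim > 0.5).float() + torch.eye(16)                 # similarity graph plus self-loops
w = nn.Parameter(torch.randn(64, 64) * 0.05)
out = gcn_layer(nodes, adj, w)
print(out.shape)  # torch.Size([16, 64])
```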
From patches to WSIs: A systematic review of deep Multiple Instance Learning in computational pathology
IF 14.7 | CAS Q1 | Computer Science
Information Fusion Pub Date: 2025-02-18 DOI: 10.1016/j.inffus.2025.103027
Yuchen Zhang, Zeyu Gao, Kai He, Chen Li, Rui Mao
Abstract: Clinical decision support systems for pathology, particularly those utilizing computational pathology (CPATH) for whole slide image (WSI) analysis, face significant challenges due to the need for high-quality annotated datasets. Given the vast amount of information contained in WSIs, creating such datasets is often prohibitively expensive and time-consuming. Multiple Instance Learning (MIL) has emerged as a promising alternative, enabling training that relies solely on coarse-grained supervision by fusing extensive localized information from large-scale wholes, thereby reducing the dependency on costly pixel-level labeling. As a result, MIL has become a pivotal technique in CPATH, driving a surge in related research, particularly over the past five years. This expanding body of work has catalyzed technological innovation, introduced transformative advancements in the field, and been further accelerated by progress in deep learning architectures, large-scale pretraining strategies, and Large Language Models (LLMs). This paper provides a systematic review of recent developments in deep MIL methods, analyzing technological advancements from multiple perspectives, including encoder backbone architectures, encoder pretraining strategies, and MIL aggregation techniques. We present a comprehensive overview of progress in each domain, catalog specific application scenarios, and highlight pivotal contributions that have shaped the field. Finally, we explore emerging research directions and potential future challenges for MIL-based CPATH.
Citations: 0
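The MIL aggregation step the review surveys can be illustrated with the widely used attention-based pooling (in the spirit of Ilse et al., 2018): patch embeddings are weighted by learned attention and fused into one slide-level vector. The embedding sizes and the two-layer attention network below are placeholder choices, not any specific surveyed model.

```python
# A compact sketch of attention-based MIL aggregation over patch embeddings.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim=512, attn_dim=128, n_classes=2):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(feat_dim, attn_dim), nn.Tanh(),
                                  nn.Linear(attn_dim, 1))
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, patch_feats):
        # patch_feats: (num_patches, feat_dim) embeddings from a pretrained patch encoder
        a = torch.softmax(self.attn(patch_feats), dim=0)   # (num_patches, 1) attention weights
        slide_feat = (a * patch_feats).sum(dim=0)          # weighted fusion into one WSI vector
        return self.classifier(slide_feat), a

model = AttentionMIL()
logits, weights = model(torch.randn(1000, 512))            # one WSI with 1000 patches
print(logits.shape, weights.shape)                         # torch.Size([2]) torch.Size([1000, 1])
```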
MapFusion: A novel BEV feature fusion network for multi-modal map construction
IF 14.7 | CAS Q1 | Computer Science
Information Fusion Pub Date: 2025-02-18 DOI: 10.1016/j.inffus.2025.103018
Xiaoshuai Hao, Yunfeng Diao, Mengchuan Wei, Yifan Yang, Peng Hao, Rong Yin, Hui Zhang, Weiming Li, Shu Zhao, Yu Liu
Abstract: The map construction task plays a vital role in providing precise and comprehensive static environmental information essential for autonomous driving systems. Primary sensors include cameras and LiDAR, with configurations varying between camera-only, LiDAR-only, or camera-LiDAR fusion, based on cost-performance considerations. While fusion-based methods typically perform best, existing approaches often neglect modality interaction and rely on simple fusion strategies, which suffer from misalignment and information loss. To address these issues, we propose MapFusion, a novel multi-modal Bird's-Eye View (BEV) feature fusion method for map construction. Specifically, to solve the semantic misalignment problem between camera and LiDAR BEV features, we introduce the Cross-modal Interaction Transform (CIT) module, enabling interaction between the two BEV feature spaces and enhancing feature representation through a self-attention mechanism. Additionally, we propose an effective Dual Dynamic Fusion (DDF) module to adaptively select valuable information from the different modalities, taking full advantage of the information inherent in each. Moreover, MapFusion is designed to be simple and plug-and-play, easily integrated into existing pipelines. We evaluate MapFusion on two map construction tasks, High-definition (HD) map construction and BEV map segmentation, to show its versatility and effectiveness. Compared with state-of-the-art methods, MapFusion achieves 3.6% and 6.2% absolute improvements on the HD map construction and BEV map segmentation tasks on the nuScenes dataset, respectively, demonstrating the superiority of our approach.
Citations: 0
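A much-simplified sketch of cross-modal interaction between camera and LiDAR BEV features, using standard multi-head attention plus a naive per-location gate, is given below. The shapes, the single attention call, and the gating are assumptions for illustration; MapFusion's actual CIT and DDF modules are more elaborate.

```python
# Simplified cross-modal BEV interaction and gated fusion; not MapFusion's modules.
import torch
import torch.nn as nn

B, C, H, W = 2, 64, 50, 50
cam_bev = torch.randn(B, C, H, W)
lidar_bev = torch.randn(B, C, H, W)

def to_tokens(x):                      # (B, C, H, W) -> (B, H*W, C)
    return x.flatten(2).transpose(1, 2)

attn = nn.MultiheadAttention(embed_dim=C, num_heads=4, batch_first=True)
# Camera tokens attend to LiDAR tokens (one direction of a cross-modal interaction).
cam_enhanced, _ = attn(query=to_tokens(cam_bev), key=to_tokens(lidar_bev),
                       value=to_tokens(lidar_bev))

# A naive dynamic fusion: per-location gating between the two modalities.
gate = nn.Sequential(nn.Linear(2 * C, C), nn.Sigmoid())
tokens = torch.cat([cam_enhanced, to_tokens(lidar_bev)], dim=-1)
g = gate(tokens)
fused = g * cam_enhanced + (1 - g) * to_tokens(lidar_bev)
fused_bev = fused.transpose(1, 2).reshape(B, C, H, W)
print(fused_bev.shape)                 # torch.Size([2, 64, 50, 50])
```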
A comprehensive survey of visible and infrared imaging in complex environments: Principle, degradation and enhancement
IF 14.7 | CAS Q1 | Computer Science
Information Fusion Pub Date: 2025-02-17 DOI: 10.1016/j.inffus.2025.103036
Yuanbo Li, Ping Zhou, Gongbo Zhou, Haozhe Wang, Yunqi Lu, Yuxing Peng
Abstract: Images captured in extreme environments, including deep-earth, deep-sea, and deep-space exploration sites, often suffer from significant degradation due to complex visual factors, which adversely impact visual quality and complicate perceptual tasks. This survey systematically synthesizes recent advancements in visual perception and understanding within these challenging contexts. It focuses on the imaging principles and degradation mechanisms affecting both visible-light and infrared images, as well as the image enhancement techniques developed to mitigate various degradation factors. The survey begins by examining key degradation mechanisms, such as low light, high water vapor, and heavy dust in visible light images (VLI), along with atmospheric radiation attenuation and turbulence distortion in infrared images (IRI). Next, both traditional and deep learning-based image enhancement algorithms are categorized and critically evaluated, with particular emphasis placed on their applications to VLI and IRI. Additionally, we summarize the application of image enhancement algorithms in complex environments, using deep underground coal mine scenes as a case study, and analyze current trends by tracking the evolution of these algorithms. Finally, the survey highlights the challenges of image enhancement under complex and harsh conditions, offering a critical assessment of existing limitations and suggesting future research directions. By consolidating key insights and identifying emerging trends and challenges, this survey aims to serve as a comprehensive resource for researchers working on image enhancement techniques in extreme environmental conditions, such as those found in deep-earth, deep-sea, and deep-space environments.
Citations: 0
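As a small, self-contained illustration of the traditional enhancement operations this kind of survey catalogs, the snippet below applies gamma correction and CLAHE to a synthetic under-exposed image with OpenCV. The parameter values are arbitrary and are not drawn from the paper.

```python
# Illustrative only: two classical low-light enhancement operations (gamma correction
# and contrast-limited adaptive histogram equalization) on a synthetic dark image.
import cv2
import numpy as np

dark = (np.random.rand(240, 320) * 60).astype(np.uint8)        # synthetic under-exposed image

# Gamma correction: brighten by raising normalized intensities to a power < 1.
gamma = 0.5
table = ((np.arange(256) / 255.0) ** gamma * 255).astype(np.uint8)
brightened = cv2.LUT(dark, table)

# CLAHE: equalize contrast on local tiles, with clipping to limit noise amplification.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(brightened)
print(enhanced.dtype, enhanced.shape)                           # uint8 (240, 320)
```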
Weighted-digraph-guided multi-kernelized learning for outlier explanation
IF 14.7 | CAS Q1 | Computer Science
Information Fusion Pub Date: 2025-02-17 DOI: 10.1016/j.inffus.2025.103026
Lili Guan, Lei Duan, Xinye Wang, Haiying Wang, Rui Lin
Abstract: Outlier explanation methods based on outlying-subspace mining have been widely used in various applications due to their effectiveness and explainability. These existing methods aim to find an outlying subspace of the original space (a set of features) that can clearly distinguish a query outlier from all inliers. However, when the query outlier is linearly inseparable from the inliers in the original space, these methods may fail to identify an outlying subspace that effectively distinguishes the query outlier from all inliers. Moreover, these methods ignore differences between the query outlier and other outliers. In this paper, we propose a novel method named WANDER (Weighted-digrAph-guided multi-kerNelizeD lEaRning) for outlier explanation, aiming to learn an optimal outlying subspace that can separate the query outlier from other outliers and the inliers simultaneously. Specifically, we first design a quadruplet sampling module that transforms the original dataset into a set of quadruplets to mitigate extreme data imbalance and help the explainer better capture the differences among the query outlier, other outliers, and inliers. Then we design a weighted digraph generation module to capture the geometric structure of each quadruplet within the original space. To handle quadruplets that are linearly inseparable in the original space, we further construct a feature embedding module to map the set of quadruplets from the original space to a kernelized embedding space. To find the optimal kernelized embedding space, we design an outlying measure module that iteratively updates the parameters of the feature embedding module via the weighted-digraph-based quadruplet loss. Finally, WANDER outputs an outlying subspace used to interpret the query outlier through an outlying subspace extraction module. Extensive experiments show that WANDER outperforms state-of-the-art methods, achieving improvements in AUPRC, AUROC, Jaccard Index, and F1 scores of up to 25.3%, 16.5%, 37.4%, and 28.4%, respectively, across seven real-world datasets. Our datasets and source code are publicly available at https://github.com/KDDElab/WANDER1.
Citations: 0
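The flavor of a quadruplet-style margin loss (push the query outlier away from both inliers and other outliers in a learned embedding, while keeping inliers compact) can be sketched as below. This is only a guess at the general shape of such a loss, not WANDER's weighted-digraph-based quadruplet loss; the authors' code is linked above.

```python
# A hedged sketch of a quadruplet-style margin loss in an embedding space.
import torch
import torch.nn.functional as F

def quadruplet_loss(query, other_outlier, inlier_a, inlier_b, margin=1.0):
    """Each argument: (B, D) embeddings of the four roles in a sampled quadruplet."""
    d_qi = F.pairwise_distance(query, inlier_a)          # query outlier vs. an inlier
    d_qo = F.pairwise_distance(query, other_outlier)     # query outlier vs. another outlier
    d_ii = F.pairwise_distance(inlier_a, inlier_b)       # inlier vs. inlier (should stay small)
    push = F.relu(margin + d_ii - d_qi) + F.relu(margin + d_ii - d_qo)
    return push.mean()

emb = torch.nn.Linear(10, 16)                             # toy stand-in for a feature-embedding module
q, o, i1, i2 = (emb(torch.randn(32, 10)) for _ in range(4))
loss = quadruplet_loss(q, o, i1, i2)
loss.backward()
print(float(loss))
```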