Information FusionPub Date : 2025-05-21DOI: 10.1016/j.inffus.2025.103292
Bin Pu , Zhizhi Liu , Liwen Wu , Kai Xu , Bocheng Liang , Ziyang He , Benteng Ma , Lei Zhao
{"title":"CGGL: A client-side generative gradient leakage attack with double diffusion prior","authors":"Bin Pu , Zhizhi Liu , Liwen Wu , Kai Xu , Bocheng Liang , Ziyang He , Benteng Ma , Lei Zhao","doi":"10.1016/j.inffus.2025.103292","DOIUrl":"10.1016/j.inffus.2025.103292","url":null,"abstract":"<div><div>Federated learning (FL) has emerged as a widely adopted privacy-preserving distributed framework that facilitates information fusion and model training across multiple clients without requiring direct data sharing with a central server. Despite its advantages, recent studies have revealed that FL is vulnerable to gradient inversion attacks, wherein adversaries can reconstruct clients’ private training data from shared gradients. These existing attacks often assumed typically unrealistic in practical FL deployments. In real-world scenarios, malicious clients are more likely to initiate such attacks. In this paper, we propose a novel <strong><em><u>C</u></em></strong>lient-side <strong><em><u>G</u></em></strong>enerative <strong><em><u>G</u></em></strong>radient <strong><em><u>L</u></em></strong>eakage (<strong>CGGL</strong>) attack tailored for FL-based information fusion scenarios. Our approach targets gradient inversion attacks originating from clients and introduces an adaptive poisoning strategy. By utilizing poisoned gradients in the local updates, a malicious client can stealthily embed the target gradients into the aggregated global model updates, enabling the reconstruction of private data from the aggregated gradients. To enhance the effectiveness of the attack, we further develop a reconstruction framework based on a conditional diffusion model incorporating dual diffusion priors. This design significantly improves image reconstruction fidelity, particularly under larger batch sizes and on high-resolution datasets. We validate the proposed CGGL method through extensive experiments on both natural and medical imaging datasets. 
Results demonstrate that CGGL consistently outperforms existing client-side gradient inversion attacks, achieving pixel-level data reconstruction and revealing substantial privacy risks in FL-enabled information fusion systems—even in the presence of various defense mechanisms.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"123 ","pages":"Article 103292"},"PeriodicalIF":14.7,"publicationDate":"2025-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144139765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
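The paper's CGGL reconstruction relies on a conditional diffusion model, but the underlying leakage principle can be illustrated with a much simpler, well-known observation: for a linear layer with bias trained on a single sample, the input is exactly recoverable from the shared gradients. A minimal numpy sketch of that classic trick (the variable names and dimensions are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# A single linear layer y = W x + b with MSE loss on one private sample.
W = rng.normal(size=(4, 6))
b = rng.normal(size=4)
x_private = rng.normal(size=6)          # the client's private input
y_target = rng.normal(size=4)

# Gradients the client would share in FL.
residual = W @ x_private + b - y_target            # dL/d(pre-activation), up to a constant factor
grad_W = np.outer(residual, x_private)             # dL/dW
grad_b = residual                                  # dL/db

# Attack: row i of grad_W equals grad_b[i] * x, so one division leaks x exactly.
i = int(np.argmax(np.abs(grad_b)))                 # pick a numerically safe row
x_reconstructed = grad_W[i] / grad_b[i]

print(np.allclose(x_reconstructed, x_private))     # True: pixel-perfect leakage
```

Larger batches break this closed-form trick, which is exactly the regime where optimization- and generative-prior-based reconstructions such as CGGL come in.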
Information FusionPub Date : 2025-05-21DOI: 10.1016/j.inffus.2025.103301
Bin Cao , Huanyu Deng , Yiming Hao , Xiao Luo
{"title":"Multi-view information fusion based on federated multi-objective neural architecture search for MRI semantic segmentation","authors":"Bin Cao , Huanyu Deng , Yiming Hao , Xiao Luo","doi":"10.1016/j.inffus.2025.103301","DOIUrl":"10.1016/j.inffus.2025.103301","url":null,"abstract":"<div><div>With the rapid development of artificial intelligence, medical image semantic segmentation is being used more widely. However, centralized training can lead to privacy risks. At the same time, MRI provides multiple views that together describe the anatomical structure of a lesion, but a single view may not fully capture all features. Therefore, integrating multi-view information in a federated learning setting is a key challenge for improving model generalization. This study combines federated learning, neural architecture search (NAS) and data fusion techniques to address issues related to data privacy, cross-institutional data distribution differences and multi-view information fusion in medical imaging. To achieve this, we propose the FL-MONAS framework, which leverages the advantages of NAS and federated learning. It uses a Pareto-frontier-based multi-objective optimization strategy to effectively combine 2D MRI with 3D anatomical structures, improving model performance while ensuring data privacy. 
Experimental results show that FL-MONAS maintains strong segmentation performance even in non-IID scenarios, providing an efficient and privacy-friendly solution for medical image analysis.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"123 ","pages":"Article 103301"},"PeriodicalIF":14.7,"publicationDate":"2025-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144166561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
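FL-MONAS selects architectures on a Pareto frontier over competing objectives. Pareto-frontier filtering itself is a small, self-contained computation; a sketch under the assumption of two minimized objectives (the candidate objective values below are invented for illustration):

```python
def pareto_front(points):
    """Return the non-dominated subset of (obj1, obj2) pairs, both minimized.

    A point p is dominated if some other point q is <= p in every objective.
    """
    front = []
    for p in points:
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical (validation error, FLOPs in millions) per candidate architecture.
candidates = [(0.12, 90.0), (0.10, 120.0), (0.15, 60.0), (0.12, 150.0), (0.09, 200.0)]
print(sorted(pareto_front(candidates)))
# (0.12, 150.0) drops out: (0.12, 90.0) matches its error at lower cost.
```

In a federated NAS setting the objective values would themselves be aggregated across clients before this filtering step.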
Information FusionPub Date : 2025-05-20DOI: 10.1016/j.inffus.2025.103343
Guowei Dai , Chaoyu Wang , Qingfeng Tang , Yi Zhang , Duwei Dai , Lang Qiao , Jiaojun Yan , Hu Chen
{"title":"Interpretable breast cancer diagnosis using histopathology and lesion mask as domain concepts conditional simulation ultrasonography","authors":"Guowei Dai , Chaoyu Wang , Qingfeng Tang , Yi Zhang , Duwei Dai , Lang Qiao , Jiaojun Yan , Hu Chen","doi":"10.1016/j.inffus.2025.103343","DOIUrl":"10.1016/j.inffus.2025.103343","url":null,"abstract":"<div><div>Breast cancer diagnosis using ultrasound imaging presents challenges due to inherent limitations in image quality and the complex nature of lesion interpretation. We propose SgmaFuse, a novel interpretable multimodal framework that integrates histopathological concepts and lesion masks information , treated as domain concepts, with ultrasound imaging for accurate and explainable breast cancer diagnosis. At its core, SgmaFuse employs a Spatially Guided Multi-Level Alignment Mechanism (SGMLAM) that orchestrates global–local feature interactions across modalities. This is achieved through a sophisticated hierarchical strategy incorporating cross-modal fusion and attention-based feature correspondence at four distinct levels: global image-report alignment, local mask-guided attention report alignment, local image diagnostic report alignment, and concept-level diagnostic report alignment. Concurrently, a Histological Semantic Activation Vector Learning (HSAVL) module, leveraging kernel Support Vector Machines, learns discriminative semantic concepts directly from histopathological data, thereby bridging the gap between ultrasound imaging features and established pathological patterns via robust concept-level alignment. 
The framework's ability to provide transparent, structured diagnostic explanations through interpretable visual attention maps and semantic concept contributions demonstrates its potential as a reliable clinical decision support tool, particularly in the challenging domain of breast ultrasound diagnosis.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"123 ","pages":"Article 103343"},"PeriodicalIF":14.7,"publicationDate":"2025-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144116560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
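SGMLAM's global-local alignment builds on cross-modal attention between image and report features. The paper's mechanism is multi-level; the basic cross-attention primitive it builds on can be sketched in a few lines of numpy (dimensions and names are illustrative assumptions, not from the paper):

```python
import numpy as np

def cross_attention(queries, keys, values):
    """Scaled dot-product attention: image tokens attend to report tokens."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)          # (n_img, n_txt) similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over report tokens
    return weights @ values, weights

rng = np.random.default_rng(1)
img_tokens = rng.normal(size=(5, 8))    # e.g. ultrasound patch embeddings
txt_tokens = rng.normal(size=(7, 8))    # e.g. diagnostic report token embeddings

fused, attn = cross_attention(img_tokens, txt_tokens, txt_tokens)
print(fused.shape, attn.shape)          # (5, 8) (5, 7)
```

Stacking this primitive at image, mask, and concept granularities is what gives the four alignment levels described above their shared mathematical form.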
Information FusionPub Date : 2025-05-20DOI: 10.1016/j.inffus.2025.103307
Carlos Fernandez-Basso , David Díaz-Jimenez , Jose L. López , Macarena Espinilla
{"title":"Fuzzy processing applied to improve multimodal sensor data fusion to discover frequent behavioral patterns for smart healthcare","authors":"Carlos Fernandez-Basso , David Díaz-Jimenez , Jose L. López , Macarena Espinilla","doi":"10.1016/j.inffus.2025.103307","DOIUrl":"10.1016/j.inffus.2025.103307","url":null,"abstract":"<div><div>The extraction and utilization of latent information from sensor data is gaining increasing prominence due to its potential for transforming decision-making processes across various sectors. Data mining techniques provide robust tools for analyzing large-scale data generated by advanced network management systems, offering actionable insights that drive operational efficiency and strategic improvements. However, the sheer volume of sensor data, combined with challenges related to real-world sensor deployment and user interaction, necessitates the development of advanced data fusion and processing frameworks. This paper presents an innovative automatic fusion and fuzzification methodology designed to integrate multi-source sensor data into coherent, high-quality intelligent outputs. By applying fuzzy logic, the proposed system enhances the interpretability and interoperability of complex sensor datasets. The approach has been validated in a real-world scenario within sensorized homes of Type II diabetic patients in Cabra (Córdoba, Spain), where it aids healthcare professionals in monitoring and optimizing patient routines. 
Experimental results demonstrate the system’s effectiveness in identifying and analyzing behavioral patterns, highlighting its potential to improve patient care through advanced sensor data fusion techniques.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"123 ","pages":"Article 103307"},"PeriodicalIF":14.7,"publicationDate":"2025-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144123262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
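Fuzzification of raw sensor readings typically uses simple membership functions. A minimal sketch with triangular memberships for an indoor-temperature sensor (the linguistic terms and breakpoints are invented for illustration; the paper's actual fuzzy partitions are not specified here):

```python
def triangular(x, a, b, c):
    """Triangular membership: rises from a to peak at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical fuzzy partition of indoor temperature in degrees Celsius.
terms = {
    "cold":        (5.0, 12.0, 18.0),
    "comfortable": (16.0, 21.0, 26.0),
    "hot":         (24.0, 30.0, 38.0),
}

def fuzzify(reading):
    """Map a crisp reading to a degree of membership in each linguistic term."""
    return {term: round(triangular(reading, *abc), 3) for term, abc in terms.items()}

print(fuzzify(17.0))   # partially "cold" and partially "comfortable"
```

Overlapping memberships like these are what let downstream pattern mining treat borderline readings gracefully instead of forcing hard thresholds.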
Information FusionPub Date : 2025-05-20DOI: 10.1016/j.inffus.2025.103293
Hongchao Zhou , Shiyu Liu , Shunbo Hu
{"title":"Multi-scale dual-attention frequency fusion for joint segmentation and deformable medical image registration","authors":"Hongchao Zhou , Shiyu Liu , Shunbo Hu","doi":"10.1016/j.inffus.2025.103293","DOIUrl":"10.1016/j.inffus.2025.103293","url":null,"abstract":"<div><div>Deformable medical image registration is a crucial aspect of medical image analysis. Improving the accuracy and plausibility of registration by information fusion is still a problem that needs to be addressed. To solve this problem, we propose DAFF-Net, a novel framework that systematically unifies three kind of information fusion (low-level fusion, high-level fusion, and loss fusion) to enhance registration precision and plausibility: (i) low-level fusion: DAFF-Net employs a shared global encoder to extract common anatomical features from both moving and fixed images in two tasks, reducing redundancy and ensuring foundational consistency across tasks; (ii) high-level fusion: through the dual attention frequency fusion (DAFF) module, DAFF-Net dynamically combines multi-scale registration and segmentation features, leverages features of low-frequency structural coherences and high-frequency boundary details, and adaptively reweighting them to enhance registration via global and local attention mechanisms; (iii) loss fusion: a unified loss function enforces bidirectional consistency, i.e., segmentation supervises registration through anatomical constraints, while registration refines segmentation via deformation-correct anatomical consistency. Extensive experiments on three public 3D brain magnetic resonance imaging (MRI) datasets demonstrate that the proposed DAFF-Net and its unsupervised variant outperform state-of-the-art registration methods across several evaluation metrics, demonstrating the effectiveness of our approach in deformable medical image registration. 
The proposed framework holds promise for practical clinical applications such as preoperative planning, longitudinal disease tracking, and structural analysis in neurological disorders.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"123 ","pages":"Article 103293"},"PeriodicalIF":14.7,"publicationDate":"2025-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144107801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
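The DAFF module's core idea, separating low-frequency structural content from high-frequency detail and reweighting each band before fusion, can be illustrated independently of the full network. A toy numpy sketch using a box blur as the low-pass filter (the real module uses learned attention; the fixed weights here are placeholders):

```python
import numpy as np

def box_blur(img, k=3):
    """Separable k x k mean filter with edge padding (a crude low-pass)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def frequency_fuse(feat_a, feat_b, w_low=0.5, w_high=0.8):
    """Split each feature map into low/high bands, reweight, and merge."""
    low_a, low_b = box_blur(feat_a), box_blur(feat_b)
    high_a, high_b = feat_a - low_a, feat_b - low_b        # residual = high band
    low = w_low * low_a + (1 - w_low) * low_b              # blend structure
    high = w_high * np.maximum(high_a, high_b)             # emphasize sharper detail
    return low + high

rng = np.random.default_rng(2)
a, b = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
fused = frequency_fuse(a, b)
print(fused.shape)   # (8, 8)
```

A sanity check on the decomposition: fusing a feature map with itself at `w_high=1.0` returns it unchanged, since low band plus residual reconstructs the input exactly.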
Information FusionPub Date : 2025-05-20DOI: 10.1016/j.inffus.2025.103295
Qingke Zou , Jie Zhou , Mingjie Luo
{"title":"Hyperspectral super-resolution via nonlinear unmixing","authors":"Qingke Zou , Jie Zhou , Mingjie Luo","doi":"10.1016/j.inffus.2025.103295","DOIUrl":"10.1016/j.inffus.2025.103295","url":null,"abstract":"<div><div>Fusing a hyperspectral image (HSI) with a multispectral image (MSI) to produce a super-resolution image (SRI) that possesses both fine spatial and spectral resolutions is a widely adopted technique in hyperspectral super-resolution (HSR). Most existing HSR methods accomplish this task within the framework of linear mixing model (LMM). However, a severe challenge lies in the inherent linear constraint of LMM, which hinders the adaptability of these HSR methods to complex real-world scenarios. In this work, the LMM is extended to the generalized bilinear model (GBM), and a novel HSR method based on nonnegative tensor factorization is proposed in the framework of nonlinear unmixing. Apart from the linear part, it additionally considers the main nonlinear interactions, that is, the bilinear interactions between the endmembers. Crucially, each potential decomposition factor possesses a physical interpretation, enabling the incorporation of prior information to enhance reconstruction performance. Furthermore, an HSR algorithm has been devised specifically for scenarios where the spatial degradation operators from SRI to HSI are unknown, which undoubtedly enhances its practical applicability. The proposed methods overcome the inherent linear limitations of the LMM framework while avoiding the information loss associated with matrixizing HSI and MSI. 
The effectiveness of the proposed methods is showcased through simulated and real data.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"123 ","pages":"Article 103295"},"PeriodicalIF":14.7,"publicationDate":"2025-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144139767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
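The generalized bilinear model augments the linear mixture with pairwise endmember interaction terms. A sketch of the GBM forward model for a single pixel in numpy (symbols follow the usual unmixing convention: endmember matrix E, abundances a, interaction coefficients gamma; the specific numbers are illustrative):

```python
import numpy as np

def gbm_pixel(E, a, gamma):
    """Generalized bilinear model for one pixel.

    E     : (n_bands, n_end) endmember signatures
    a     : (n_end,) abundances, nonnegative and summing to 1
    gamma : dict mapping endmember pairs (i, j), i < j, to coefficients in [0, 1]
    """
    linear = E @ a
    bilinear = np.zeros(E.shape[0])
    for (i, j), g in gamma.items():
        # Interaction term: elementwise product of the two signatures,
        # scaled by both abundances and the nonlinearity coefficient.
        bilinear += g * a[i] * a[j] * (E[:, i] * E[:, j])
    return linear + bilinear

rng = np.random.default_rng(3)
E = rng.uniform(0.0, 1.0, size=(10, 3))       # 10 bands, 3 endmembers
a = np.array([0.5, 0.3, 0.2])
gamma = {(0, 1): 0.4, (0, 2): 0.0, (1, 2): 0.9}

pixel = gbm_pixel(E, a, gamma)
print(pixel.shape)   # (10,)
# With all gamma = 0 the model reduces exactly to the linear mixing model.
print(np.allclose(gbm_pixel(E, a, {k: 0.0 for k in gamma}), E @ a))   # True
```

That reduction to the LMM at gamma = 0 is what makes the GBM a strict generalization rather than a replacement of the linear framework.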
Information FusionPub Date : 2025-05-20DOI: 10.1016/j.inffus.2025.103306
Mingwei Wen, Xuming Zhang
{"title":"LPM-Net: Lightweight pixel-level modeling network based on CNN and Mamba for 3D medical image fusion","authors":"Mingwei Wen, Xuming Zhang","doi":"10.1016/j.inffus.2025.103306","DOIUrl":"10.1016/j.inffus.2025.103306","url":null,"abstract":"<div><div>Deep learning-based medical image fusion has become a prevalent approach to facilitate computer-aided diagnosis and treatment. The mainstream image fusion methods predominantly rely on encoder–decoder architectures and utilize unsupervised loss functions for training, resulting in the blurring or loss of fused image details and limited inference speed. To resolve these problems, this paper presents a pixel-level modeling network for effective fusion of 3D medical images. The network comprises three structurally identical branches: an unsupervised fusion branch and two supervised reconstruction branches. In the fusion branch, the feature extraction modules utilize the dense convolutional neural network and Mamba to extract image features based on axis decomposition. The base and detail components are then predicted from these extracted features and fused to generate the fused image pixel by pixel. Notably, two reconstruction branches share the parameters of feature extraction modules with the fusion branch and provide the supervised loss, which is integrated with the unsupervised loss to enhance the fusion performance. The experiments on six datasets of multiple modalities and organs demonstrates that our method achieves effective medical image fusion by preserving image details effectively, minimizing image blurring and reducing the number of parameters. Meanwhile, our method has significant advantages in eight fusion metrics over the compared mainstream methods, and it provides relatively fast inference speed (e.g., 90 volumes/s on the BraTS2020 dataset). Indeed, our method will provide valuable means to improve the accuracy and efficiency of image fusion-based diagnosis and treatment systems. 
The source code is available on GitHub at <span><span>https://github.com/coolllcat/LPM-Net</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"123 ","pages":"Article 103306"},"PeriodicalIF":14.7,"publicationDate":"2025-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144106510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
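LPM-Net learns per-pixel base and detail components and fuses them. A common rule-based stand-in for that idea is to average the base (smooth) components and keep the stronger detail component per pixel; a simplified 2D numpy sketch (the actual network learns its decomposition, whereas the box filter and fusion rules below are conventional placeholders):

```python
import numpy as np

def box_blur(img, k=3):
    """k x k mean filter with edge padding; stands in for the learned base predictor."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def base_detail_fuse(img_a, img_b):
    """Average the base components; per pixel, keep the stronger detail component."""
    base_a, base_b = box_blur(img_a), box_blur(img_b)
    detail_a, detail_b = img_a - base_a, img_b - base_b
    fused_base = 0.5 * (base_a + base_b)
    fused_detail = np.where(np.abs(detail_a) >= np.abs(detail_b), detail_a, detail_b)
    return fused_base + fused_detail

rng = np.random.default_rng(4)
mri, ct = rng.uniform(size=(16, 16)), rng.uniform(size=(16, 16))
fused = base_detail_fuse(mri, ct)
print(fused.shape)   # (16, 16)
# Fusing an image with itself returns it unchanged: base + detail reconstructs exactly.
print(np.allclose(base_detail_fuse(mri, mri), mri))   # True
```

The released implementation linked above applies this base/detail idea per pixel in 3D with learned extractors rather than a fixed blur.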
Information FusionPub Date : 2025-05-20DOI: 10.1016/j.inffus.2025.103299
Yan Liu, Yan Yang, Yongquan Jiang, Xiaole Zhao, Zhuyang Xie
{"title":"FABRF-Net: A frequency-aware boundary and region fusion network for breast ultrasound image segmentation","authors":"Yan Liu, Yan Yang, Yongquan Jiang, Xiaole Zhao, Zhuyang Xie","doi":"10.1016/j.inffus.2025.103299","DOIUrl":"10.1016/j.inffus.2025.103299","url":null,"abstract":"<div><div>Breast ultrasound (BUS) image segmentation is crucial for tumor analysis and cancer diagnosis. However, the challenges of lesion segmentation in BUS images arise from inter-class indistinction caused by low contrast, high speckle noise, artifacts, and blurred boundaries, as well as intra-class inconsistency due to variations in lesion size, shape, and location. To address these challenges, we propose a novel frequency-aware boundary and region fusion network (FABRF-Net). The core of our FABRF-Net is the frequency domain-based Haar wavelet decomposition module (HWDM), which effectively captures multi-scale frequency feature information from global spatial contexts. This allows our network to integrate the advantages of CNNs and Transformers for more comprehensive frequency and spatial feature modeling, effectively addressing intra-class inconsistency. Moreover, the frequency awareness based on HWDM is used to separate features into boundary and region streams, enhancing detailed edges in boundary features and reducing the impact of noise on lesion region features. We further develop a boundary-region fusion module (BRFM) to enable adaptive fusion and mutual guidance of frequency-aware region and boundary features, effectively mitigating inter-class indistinction and achieving accurate breast lesion segmentation. 
Quantitative and qualitative experimental results demonstrate that FABRF-Net achieves state-of-the-art segmentation accuracy on six cross-domain ultrasound datasets and has obvious advantages in segmenting small breast tumors.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"123 ","pages":"Article 103299"},"PeriodicalIF":14.7,"publicationDate":"2025-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144139766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
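A single level of the 2D Haar decomposition that HWDM builds on is compact enough to write directly: pairwise averages give the low-frequency band and pairwise differences give the three detail bands. A numpy sketch for even-sized single-channel inputs (this is the standard unnormalized Haar transform, not the paper's exact module):

```python
import numpy as np

def haar2d(img):
    """One level of 2D Haar decomposition. img must have even height and width."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0        # low-low: coarse structure
    lh = (a + b - c - d) / 4.0        # detail along the vertical axis
    hl = (a - b + c - d) / 4.0        # detail along the horizontal axis
    hh = (a - b - c + d) / 4.0        # diagonal detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    h, w = ll.shape
    img = np.empty((2 * h, 2 * w))
    img[0::2, 0::2] = ll + lh + hl + hh
    img[0::2, 1::2] = ll + lh - hl - hh
    img[1::2, 0::2] = ll - lh + hl - hh
    img[1::2, 1::2] = ll - lh - hl + hh
    return img

rng = np.random.default_rng(5)
x = rng.normal(size=(8, 8))
print(np.allclose(ihaar2d(*haar2d(x)), x))   # True: the decomposition is lossless
```

Losslessness is the point: routing the low band to the region stream and the high bands to the boundary stream discards nothing, so the two streams jointly retain the full input.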
Information FusionPub Date : 2025-05-18DOI: 10.1016/j.inffus.2025.103352
Vyacheslav L. Kalmykov , Lev V. Kalmykov
{"title":"Towards eXplicitly eXplainable Artificial Intelligence","authors":"Vyacheslav L. Kalmykov , Lev V. Kalmykov","doi":"10.1016/j.inffus.2025.103352","DOIUrl":"10.1016/j.inffus.2025.103352","url":null,"abstract":"<div><div>Artificial Intelligence (AI) plays a leading role in Industry 4.0 and future Industry 5.0. Concerns about the opacity of today's neural network AI solutions have led to the Explainable AI (XAI) project, which attempts to open the black box of neural networks. While XAI can help to partially interpret and explain the workings of neural networks, it has not changed their original subsymbolic nature and the opaque statistical nature of their workings. Significant uncertainties remain about the safety, reliability, and accountability of modern neural network AI solutions. Here we present a novel AI method that has a fully transparent white-box nature - eXplicitly eXplainable Artificial Intelligence (XXAI). XXAI is implemented on deterministic cellular automata whose rules are based on first principles of the problem domain. XXAI overcomes the limitations for a broader application of symbolic AI. The practical value of XXAI lies in its ability to make autonomous, fully transparent decisions due to its multi-component, multi-level, networked, hyper-logical nature. Looking ahead, XXAI has the potential to become a leading strategic partner in the field of neuro-symbolic hybrid AI systems. XXAI is able to systematically validate neural network solutions, ensuring that the required standards of reliability, security and ethics are met throughout the AI lifecycle, from training to deployment. By creating a clear cognitive framework, XXAI will enable the development of advanced autonomous solutions to achieve the human-centric values of the future Industry 5.0. 
A comprehensive program for the further development of the proposed approach is presented.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"123 ","pages":"Article 103352"},"PeriodicalIF":14.7,"publicationDate":"2025-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144116561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
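The transparency claim rests on deterministic cellular automata whose update rules are explicit. An elementary one-dimensional CA is the simplest concrete instance: every state transition is a readable table lookup, so any output can be audited step by step. A minimal sketch (Rule 90, a standard textbook example, not the authors' domain-specific rules):

```python
def step(cells, rule=90):
    """One synchronous update of an elementary CA with periodic boundaries.

    The rule number's binary expansion IS the full, inspectable transition table.
    """
    n = len(cells)
    table = {i: (rule >> i) & 1 for i in range(8)}   # neighborhood pattern -> next state
    return [
        table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
        for i in range(n)
    ]

# A single live cell under Rule 90 unfolds into a Sierpinski-triangle pattern.
cells = [0, 0, 0, 1, 0, 0, 0]
for _ in range(3):
    print(cells)
    cells = step(cells)
print(cells)
```

Unlike a neural network, every cell's next value here is justified by one explicit table entry, which is the white-box property the abstract emphasizes.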
Information FusionPub Date : 2025-05-17DOI: 10.1016/j.inffus.2025.103283
Jialun Wu , Kai He , Rui Mao , Xuequn Shang , Erik Cambria
{"title":"Harnessing the potential of multimodal EHR data: A comprehensive survey of clinical predictive modeling for intelligent healthcare","authors":"Jialun Wu , Kai He , Rui Mao , Xuequn Shang , Erik Cambria","doi":"10.1016/j.inffus.2025.103283","DOIUrl":"10.1016/j.inffus.2025.103283","url":null,"abstract":"<div><div>The digitization of healthcare has led to the accumulation of vast amounts of patient data through Electronic Health Records (EHRs) systems, creating significant opportunities for advancing intelligent healthcare. Recent breakthroughs in deep learning and information fusion techniques have enabled the seamless integration of diverse data sources, providing richer insights for clinical decision-making. This review offers a comprehensive analysis of predictive modeling approaches that leverage multimodal EHR data, focusing on the latest methodologies and their practical applications. We classify the current advancements from both task-driven and method-driven perspectives, while distilling key challenges and motivations that have fueled these innovations. This exploration examines the real-world impact of advanced technologies in healthcare, addressing issues from data integration to task formulation, challenges, and method refinement. The role of information fusion in enhancing model performance is also emphasized. 
Building on the discussions and findings, we highlight promising future research directions critical for advancing multimodal fusion technologies in clinical predictive modeling, addressing the complex challenges of real-world clinical environments, and moving toward universal intelligence in healthcare.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"123 ","pages":"Article 103283"},"PeriodicalIF":14.7,"publicationDate":"2025-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144099201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}