Information Fusion — Latest Articles

Class label fusion guided correlation learning for incomplete multi-label classification
IF 14.7 · CAS Q1 · Computer Science
Information Fusion · Pub Date: 2025-03-07 · DOI: 10.1016/j.inffus.2025.103072
Qingwei Jia, Tingquan Deng, Ming Yang, Yan Wang, Changzhong Wang

Abstract: Label correlation learning is a challenging and extensively studied issue in multi-label classification. Typically, second-order label correlation is obtained by fusing information from pairwise labels, while high-order correlation arises from integrating global information from the entire label matrix under regularization constraints. However, few studies learn label correlations collaboratively through local and global label fusion, and when labels are missing, neither second-order nor high-order label correlations can be accurately measured and characterized. To address these two issues, a novel approach for incomplete multi-label classification, class label fusion guided correlation learning (CLFCL), is proposed. Pointwise fuzzy mutual information is introduced for prior fusion of paired labels; specifically, the second-order label correlation is obtained by relaxing the pointwise mutual information. Simultaneously, an adaptive low-rank regularization technique is developed to fuse globally relevant labels and extract high-order correlations. By integrating second-order and high-order label correlations, the label distribution of instances is learned. To recover missing labels, a multi-label classifier is trained by regressing features to the label distribution space rather than the original logical label space. An efficient algorithm is designed to solve the resulting nonconvex optimization problem. Extensive experimental results validate the superior performance of the proposed model against state-of-the-art missing multi-label classification methods.

Citations: 0
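The paper relaxes pointwise (fuzzy) mutual information over label pairs to obtain second-order correlations. As a minimal illustration of the underlying quantity, the sketch below computes classical pointwise mutual information between every pair of labels from an observed 0/1 label matrix; the function name and toy data are hypothetical, not from the paper.

```python
import numpy as np

def pairwise_pmi(Y, eps=1e-12):
    """Pointwise mutual information between every pair of labels.

    Y: (n_samples, n_labels) 0/1 matrix of observed labels.
    Entry (j, k) of the result is
    log p(y_j=1, y_k=1) - log p(y_j=1) - log p(y_k=1).
    """
    n = Y.shape[0]
    p = Y.mean(axis=0)            # marginal p(y_j = 1)
    joint = (Y.T @ Y) / n         # joint p(y_j = 1, y_k = 1)
    return np.log(joint + eps) - np.log(np.outer(p, p) + eps)

# Labels 0 and 1 co-occur often; label 2 mostly appears alone.
Y = np.array([[1, 1, 0],
              [1, 1, 0],
              [0, 0, 1],
              [1, 1, 1]])
S = pairwise_pmi(Y)
```

Positive entries indicate labels that co-occur more often than independence would predict, which is the prior signal CLFCL fuses before relaxation.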
Privacy-preserving heterogeneous multi-modal sensor data fusion via federated learning for smart healthcare
IF 14.7 · CAS Q1 · Computer Science
Information Fusion · Pub Date: 2025-03-07 · DOI: 10.1016/j.inffus.2025.103084
Jing Wang, Mohammad Tabrez Quasim, Bo Yi

Abstract: The widespread availability of medical Internet of Things devices and smart healthcare monitoring systems has led to an unprecedented generation of heterogeneous sensor data across decentralized healthcare institutions. Although this data has significant potential to enhance patient care, handling multi-modal sensor data while preserving patient privacy and complying with regulations is very difficult with traditional centralized processing. We propose PHMS-Fed, a novel privacy-preserving heterogeneous multi-modal sensor fusion framework based on federated learning for smart healthcare applications. Our framework enables healthcare institutions to train shared diagnostic models collaboratively without exchanging raw sensor data, while effectively capturing complex interactions between different sensor modalities. To preserve privacy, PHMS-Fed automatically matches different combinations of sensor modalities across institutions through adaptive tensor decomposition and secure parameter aggregation. Extensive experiments on real-world healthcare datasets demonstrate the framework's effectiveness: PHMS-Fed surpasses selected state-of-the-art methods by 25.6% in privacy preservation and by 23.4% in cross-institutional monitoring accuracy. The results show that the framework handles multiple sensor modalities efficiently while delivering strong results in physiological monitoring (accuracy: 0.9386), privacy preservation (protection score: 0.9845), and sensor fusion (fusion accuracy: 0.9591).

Citations: 0
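PHMS-Fed's own aggregation uses adaptive tensor decomposition, which the abstract does not specify in detail. The core federated idea it builds on, however, is standard: clients share only model parameters, never raw records, and a server averages them weighted by local dataset size. A minimal FedAvg-style sketch (names and toy values hypothetical):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg aggregation: average client parameter vectors weighted by
    local dataset size; raw patient data never leaves a client."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three hypothetical institutions contribute only parameter vectors.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 100, 200]
global_w = fedavg(clients, sizes)  # the shared diagnostic model's parameters
```

In a real deployment the server would see only masked sums (secure aggregation), but the weighted average computed is the same.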
QMLSC: A quantum multimodal learning model for sentiment classification
IF 14.7 · CAS Q1 · Computer Science
Information Fusion · Pub Date: 2025-03-06 · DOI: 10.1016/j.inffus.2025.103049
YaoChong Li, Yi Qu, Ri-Gui Zhou, Jing Zhang

Abstract: Sentiment classification research is gaining prominence for enhancing user experience, facilitating targeted marketing, and supporting mental health assessment, while driving technological innovation. Given the complexity and diversity of emotional expression, this study proposes quantum multimodal learning for sentiment classification (QMLSC), a novel quantum–classical hybrid model that integrates text and speech data to capture emotional signals more effectively. To address the limitations of the noisy intermediate-scale quantum (NISQ) era, we design advanced variational quantum circuit (VQC) architectures that process high-dimensional data efficiently, maximizing feature retention and minimizing information loss. Our approach employs a residual structure that fuses quantum and classical components, combining the benefits of quantum features and conventional machine learning attributes. By using randomized expressive circuits, we improve flexibility, accuracy, and robustness in sentiment classification tasks. Integrating VQCs significantly reduces the number of parameters compared to fully connected layers, improving both accuracy and computational efficiency. Empirical findings validate the superior performance of our fusion approach in mitigating the noise and error impacts associated with quantum computing, and demonstrate strong potential for future applications in complex emotional information processing. This study provides new insights and methodologies for advancing sentiment classification technology and highlights the broad application potential of quantum computing in information processing.

Citations: 0
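The abstract does not specify the VQC architectures, but the basic building block of any such hybrid layer is a parameterized rotation followed by an observable expectation value that classical layers consume. A minimal one-qubit state-vector simulation (pure NumPy, names hypothetical) shows that block:

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation gate as a 2x2 real matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def vqc_expectation(theta):
    """Apply RY(theta) to |0> and return the Pauli-Z expectation value,
    the scalar a hybrid model would feed into its classical layers."""
    state = ry(theta) @ np.array([1.0, 0.0])   # |psi> = RY(theta)|0>
    z = np.array([[1.0, 0.0], [0.0, -1.0]])    # Pauli-Z observable
    return float(state @ z @ state)

# <Z> sweeps smoothly from +1 at theta = 0 to -1 at theta = pi,
# so theta is a trainable parameter like any classical weight.
```

A multi-qubit VQC stacks such parameterized rotations with entangling gates; one circuit parameter per rotation is what drives the parameter savings versus fully connected layers.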
Weakly Supervised RGBT salient object detection via SAM-Guided Label Optimization and Progressive Cross-modal Cross-scale Fusion
IF 14.7 · CAS Q1 · Computer Science
Information Fusion · Pub Date: 2025-03-06 · DOI: 10.1016/j.inffus.2025.103048
Sulan Zhai, Chengzhuang Liu, Zhengzheng Tu, Chenglong Li, Liuxuanqi Gao

Abstract: Current fully supervised RGB-Thermal salient object detection (RGBT SOD) methods rely on labor-intensive pixel-wise annotations. This work explores weakly supervised RGBT SOD using scribble annotations to reduce the annotation cost. Existing scribble-supervised methods mainly rely on pseudo-labels, which often suffer from information redundancy, incomplete objects, or inaccurate boundaries. Inspired by the Segment Anything Model (SAM), we propose a two-stage SAM-Guided Label Optimization method to obtain accurate pseudo-labels. The first stage uses SAM's prompt-based segmentation to generate initial masks from RGB and thermal images; the second stage refines these masks using SAM's zero-shot segmentation capability and complementary information from the RGB and thermal modalities. Moreover, existing multi-modal fusion methods may not fully synergize interactions between the channel and spatial dimensions, and often neglect effective cross-scale feature collaboration during fusion. To address this, we propose the Progressive Cross-modal Cross-scale Fusion Unit (PCCFU), which fuses same-level multi-modal features while progressively integrating higher-level features. PCCFU consists of a Dual Cross-attention Fusion Module, which enables synergistic interactions across both the channel and spatial dimensions, and a Cross-scale Aggregation Module, which interacts individually with each higher-level feature during multi-modal feature fusion. Extensive experiments indicate that our method outperforms most fully supervised RGBT SOD approaches and surpasses the previous state-of-the-art weakly supervised method, achieving average improvements of 3.4% in mean F-measure (F_β^avg) and 21.5% in mean absolute error (MAE) across three benchmark datasets. The code is available at: https://github.com/tzz-ahu

Citations: 0
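The two metrics the abstract reports, MAE and the F-measure, are standard in SOD evaluation. A minimal sketch of both on a toy saliency map (function names and the β² = 0.3 convention are the common benchmark choice, assumed here rather than taken from the paper):

```python
import numpy as np

def mae(pred, gt):
    """Mean absolute error between a [0,1] saliency map and binary ground truth."""
    return float(np.abs(pred - gt).mean())

def f_measure(pred, gt, thresh=0.5, beta2=0.3):
    """F-beta score with beta^2 = 0.3, the usual SOD benchmark convention."""
    b = pred >= thresh
    tp = float(np.logical_and(b, gt == 1).sum())
    precision = tp / max(b.sum(), 1)
    recall = tp / max((gt == 1).sum(), 1)
    if precision + recall == 0:
        return 0.0
    return (1 + beta2) * precision * recall / (beta2 * precision + recall)

# Toy 2x2 example: a near-perfect prediction.
gt = np.array([[1, 1], [0, 0]])
pred = np.array([[0.9, 0.8], [0.1, 0.0]])
```

The mean F-measure averages this score over thresholds or images; lower MAE and higher F-measure are better, so the reported 21.5% MAE "improvement" is a reduction.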
Fine-grained knowledge fusion for retrieval-augmented medical visual question answering
IF 14.7 · CAS Q1 · Computer Science
Information Fusion · Pub Date: 2025-03-06 · DOI: 10.1016/j.inffus.2025.103059
Xiao Liang, Di Wang, Bin Jing, Zhicheng Jiao, Ronghan Li, Ruyi Liu, Qiguang Miao, Quan Wang

Abstract: Given that medical image analysis often requires experts to recall typical symptoms from diagnostic archives or their own experience, introducing retrieval augmentation into multi-modal tasks such as Medical Visual Question Answering (MedVQA) is a logical step toward accessing and using diverse case data. However, applying existing retrieval augmentation methods to MedVQA faces two limitations: (1) due to privacy concerns, direct access to original medical data is typically restricted; and (2) the symptoms distinguishing various diseases are often subtle and fine-grained, making it difficult to ensure that retrieved information precisely matches the query. To address these challenges, we propose a retrieval augmentation framework with a Fine-Grained Re-Weighting (FGRW) strategy, which employs fine-grained encoding of retrieved multi-source knowledge, avoiding direct access to original image–text data. It then computes re-weighted relevance scores between queries and knowledge, and uses these scores as supervised priors to guide the fusion of queries and knowledge, reducing interference from redundant information when answering questions. Experimental results on the PathVQA, VQA-RAD, and SLAKE public benchmarks demonstrate FGRW's state-of-the-art performance. Code is available at the public repository.

Citations: 0
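FGRW's re-weighted relevance scores guide how retrieved knowledge is fused with the query. A minimal sketch of that pattern, scoring each retrieved knowledge embedding against the query with cosine similarity, softmaxing the scores, and returning a relevance-weighted sum; the function, temperature, and toy vectors are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def reweighted_fusion(query, knowledge, tau=0.1):
    """Cosine-score each knowledge vector against the query, softmax the
    scores, and return (weights, relevance-weighted fused vector)."""
    sims = knowledge @ query / (
        np.linalg.norm(knowledge, axis=1) * np.linalg.norm(query) + 1e-12)
    w = np.exp(sims / tau)
    w /= w.sum()
    return w, w @ knowledge

query = np.array([1.0, 0.0])
knowledge = np.array([[1.0, 0.0],    # closely matches the query
                      [0.0, 1.0]])   # redundant / off-topic retrieval
w, fused = reweighted_fusion(query, knowledge)
```

The off-topic entry receives near-zero weight, which is exactly the "reduce interference from redundant information" effect the abstract describes.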
Adaptive symmetry-based adversarial perturbation augmentation for molecular graph representations with dual-fusion attention information
IF 14.7 · CAS Q1 · Computer Science
Information Fusion · Pub Date: 2025-03-06 · DOI: 10.1016/j.inffus.2025.103062
Shuting Jin, Xiangrong Liu, Junlin Xu, Sisi Yuan, Hongxing Xiang, Lian Shen, Chunyan Li, Zhangming Niu, Yinhui Jiang

Abstract: High-quality molecular representation is essential for AI-driven drug discovery. Despite recent progress in graph neural networks (GNNs) for this purpose, challenges such as data imbalance and overfitting persist due to the limited availability of labeled molecules. Augmentation techniques have become a popular solution, yet strategies that modify the topological structure of molecular graphs can lose critical chemical information, and adversarial augmentation approaches, given the sparsity and complexity of molecular data, tend to amplify the risk of introducing noise. This paper introduces GapCL, a novel plug-and-play architecture that employs a symmetric perturbation mechanism during gradient-based adversarial augmentation to ensure that perturbed graphs retain potentially essential chemical-space information. GapCL also incorporates a dual-fusion attention module to amplify key information and leverages contrastive learning constraints to enable an adaptive perturbation strategy tailored to various benchmark models. The proposed method is evaluated across 12 molecular property prediction tasks, demonstrating GapCL's potential to comprehensively enhance the robustness and generalization of molecular graph representation models. Further experiments indicate that equipping models with GapCL improves representation capability and achieves state-of-the-art performance in most cases. The source data and code are available at https://github.com/stjin-XMU/GapCL.git

Citations: 0
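GapCL's symmetric perturbation mechanism is not spelled out in the abstract. As a rough sketch of the general idea, gradient-based adversarial augmentation producing a symmetric pair of views, the toy below applies an FGSM-style signed-gradient step in both directions to a feature vector; everything here (names, the ± pairing, the toy gradient) is an illustrative assumption, not GapCL's actual mechanism:

```python
import numpy as np

def symmetric_perturb(x, grad, eps=0.1):
    """FGSM-style augmentation: perturb features along the sign of the loss
    gradient and, symmetrically, against it, yielding a pair of views
    centered on the original (so no net drift away from the input)."""
    delta = eps * np.sign(grad)
    return x + delta, x - delta

x = np.array([0.5, -0.2, 0.0])
grad = np.array([1.0, -2.0, 0.5])   # hypothetical loss gradient w.r.t. x
x_pos, x_neg = symmetric_perturb(x, grad)
```

A contrastive loss would then pull the two perturbed views of the same molecule together, which is the constraint GapCL uses to keep perturbations adaptive rather than destructive.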
Emotion inference of text based on counterfactual behavior knowledge
IF 14.7 · CAS Q1 · Computer Science
Information Fusion · Pub Date: 2025-03-06 · DOI: 10.1016/j.inffus.2025.103060
Xinzhi Wang, Jiayan Qian, Yudong Chang, Hang Yu, Hui Zhang

Abstract: A reader's emotion is triggered by the writer's expression and the content of a text. Inferring readers' emotions can help industries and companies discover readers' preferences and needs and customize appealing content. Currently, most scholars focus on mining the emotion expressed by writers, while neglecting the easily shifted reader's emotion, which is simultaneously influenced by objective event content, the writer's subjective affect, and individual cognition. To address this, we propose a reader's emotion inference method based on counterfactual behavior knowledge. Multi-granularity elements are extracted, including event indicators, behavior drivers, and subjective words hidden in the text. The method comprises three steps. First, counterfactual behavior knowledge is constructed by replacing A–F–B knowledge in the original text, helping models process and understand the relationships between events and emotions through the fusion of facts and counterfactuals. Second, a knowledge prompt method is proposed, which splices the A–F–B knowledge onto the text as a new feature to supplement and reinforce established facts. Third, a decision enhancement method is proposed, which employs a self-reflection mechanism to fuse the original decision and an improved decision based on emotion transfer preferences. Experiments and a questionnaire survey are conducted on social news data with readers' emotion votes. The results show that models equipped with the proposed method outperform baseline models.

Citations: 0
Discriminative approximate regression projection for feature extraction
IF 14.7 · CAS Q1 · Computer Science
Information Fusion · Pub Date: 2025-03-05 · DOI: 10.1016/j.inffus.2025.103088
Zhonghua Liu, Fa Zhu, Athanasios V. Vasilakos, Xingchi Chen, Qiang Zhao, David Camacho

Abstract: Dimension reduction has attracted much attention in pattern recognition and computer vision as a way to mitigate the "curse of dimensionality". As a classical dimension reduction method, least squares regression (LSR) shows powerful ability in fitting data and extracting discriminative features. However, LSR still faces several issues. First, the retained feature dimension is strictly equal to the number of classes, which lacks flexibility; in particular, when there are few categories, LSR cannot extract enough features. Second, the local structure relation is neglected: the manifold term in LSR only preserves the neighborhood relationships of high-dimensional data but cannot cluster the data of each class to a single point in the low-dimensional space. Third, LSR considers dimensionality reduction but neglects data reconstruction; if the extracted features can reconstruct the original data well, they retain most of the information in the original data, which benefits subsequent classification. To this end, this paper proposes Discriminative Approximate Regression Projection (DARP) to address these issues. In DARP, two matrices are introduced, one for feature extraction and one for label fitting, to ensure flexibility in the extracted feature dimension. Using the label matrix as a weight matrix, the samples of each class are projected onto their corresponding class centroid, so that the samples of each class gather in the low-dimensional space. Furthermore, a data reconstruction regularization term is introduced so that the feature extraction matrix can reconstruct the original data and thus preserve most of its information. Comprehensive experimental results on widely tested benchmarks demonstrate the competitive performance of DARP.

Citations: 0
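The rigidity DARP addresses, that plain LSR maps features into a space whose dimension equals the number of classes, is visible in the classical closed form W = (XᵀX + λI)⁻¹XᵀY with a one-hot label matrix Y. A minimal ridge-regularized sketch of that baseline (toy data and names assumed for illustration; this is LSR, not DARP itself):

```python
import numpy as np

def lsr_projection(X, Y, lam=1e-3):
    """Classical least squares regression to a one-hot label matrix:
    W = (X^T X + lam I)^{-1} X^T Y. The columns of W span a subspace
    whose dimension equals the number of classes -- the rigidity DARP
    relaxes by using separate extraction and label-fitting matrices."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# Two well-separated classes in 2-D with one-hot labels.
X = np.array([[1.0, 0.0], [1.1, 0.1], [0.0, 1.0], [0.1, 1.1]])
Y = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)
W = lsr_projection(X, Y)
pred = X @ W          # each row lands near its class indicator vector
```

With only two classes the projected space is two-dimensional regardless of how many features would be useful, which is exactly the first issue the abstract raises.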
Rethinking information fusion: Achieving adaptive information throughput and interaction pattern in graph convolutional networks for collaborative filtering
IF 14.7 · CAS Q1 · Computer Science
Information Fusion · Pub Date: 2025-03-04 · DOI: 10.1016/j.inffus.2025.103050
JiaXin Wu, Chenglong Pang, Guangxiong Chen, Jihong Wan, Xiaocao Ouyang, Jie Zhao

Abstract: Graph convolutional networks (GCNs) are popular in collaborative filtering because of their robust information fusion mechanisms. However, existing GCN-based models generally treat all nodes in a bipartite graph uniformly, as in the classic e-commerce scenario with user nodes and item nodes, overlooking variations in node type and in the amount of information each node carries. Furthermore, these models rarely consider how well different information fusion methods adapt to the various user interaction patterns found across scenarios. These oversights can cause loss of node information and inadequate representation learning, leading to sub-optimal performance and efficiency. To address these problems, we propose AdaptGCN, an adaptive GCN-based model that handles both node and layer heterogeneity. AdaptGCN introduces a new node information fusion method that achieves adaptive information throughput according to node type and node information amount, and provides different layer information fusion methods to handle the interaction patterns of different scenarios. For the node information amount, we give a definition and explore properties for quantifying such information. Based on these quantification properties, we design a unified quantification function and develop it into an adaptive quantification method according to the characteristics of different node types, which AdaptGCN uses for node information fusion to achieve adaptive information throughput for each node. Building on this, we further propose a unified layer information fusion method to address the varying interaction patterns across scenarios. Extensive experimental results show that AdaptGCN outperforms state-of-the-art GCN-based models in both performance and efficiency.

Citations: 0
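The layer information fusion that AdaptGCN makes adaptive is, in standard GCN-based collaborative filtering, a weighted combination of embeddings propagated over the normalized adjacency. A minimal LightGCN-style sketch with fixed per-layer weights (where AdaptGCN would learn them; names and toy graph are illustrative):

```python
import numpy as np

def gcn_propagate(A, X, layer_weights):
    """Propagate features over the symmetrically normalized adjacency
    D^{-1/2} A D^{-1/2} and fuse the layer outputs with per-layer weights
    (fixed here; adaptive in AdaptGCN)."""
    deg = A.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    A_hat = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    out, h = layer_weights[0] * X, X
    for w in layer_weights[1:]:
        h = A_hat @ h                # one hop of neighborhood fusion
        out = out + w * h            # accumulate this layer's contribution
    return out

# Tiny two-node graph (one user, one item, a single interaction edge).
A = np.array([[0.0, 1.0], [1.0, 0.0]])
X = np.array([[1.0], [0.0]])
Z = gcn_propagate(A, X, layer_weights=[0.5, 0.5])
```

Node heterogeneity enters where this sketch treats rows uniformly: AdaptGCN would let each node's throughput through `A_hat` depend on its type and information amount.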
Distributed estimation for uncertain systems subject to measurement quantization and adversarial attacks
IF 14.7 · CAS Q1 · Computer Science
Information Fusion · Pub Date: 2025-03-03 · DOI: 10.1016/j.inffus.2025.103044
Raquel Caballero-Águila, Jun Hu, Josefa Linares-Pérez

Abstract: This study presents recursive algorithms for distributed estimation over a sensor network with a fixed topology, where each sensor node performs estimation using its own data as well as information from neighboring nodes. The algorithms are developed under the assumption that sensor measurements are quantized and subject to random parameter variations, in addition to time-correlated additive noises. The network is assumed to be exposed to adversarial disruptions, specifically random deception attacks and denial-of-service (DoS) attacks. To address data loss caused by DoS attacks, we introduce a compensation strategy that uses predicted values to preserve estimation reliability. In the proposed distributed estimation framework, each sensor's local processor produces least-squares linear estimators based on both its own and neighboring sensors' measurements. These initial estimators are termed early estimators, as those within each node's neighborhood are subsequently fused in a second stage to yield the final distributed estimators. The algorithms rely on a covariance-based estimation approach that requires no specific structural assumptions about the dynamics of the signal process. A numerical experiment illustrates the applicability and effectiveness of the proposed algorithms and evaluates the effect of adversarial attacks on estimation accuracy.

Citations: 0
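Two ingredients of the setup, measurement quantization and prediction-based compensation for DoS losses, can be sketched in a scalar toy. This is a generic illustration of those mechanisms with assumed names and gains, not the paper's covariance-based algorithm:

```python
import numpy as np

def quantize(y, step=0.5):
    """Uniform mid-tread quantizer: what the estimator receives instead of y."""
    return step * np.round(y / step)

def compensated_update(x_prev, a, y_q, gain, dos):
    """One recursive step for a scalar signal x_k = a * x_{k-1} + noise.
    If a DoS attack drops the measurement, fall back to the prediction
    (the compensation strategy); otherwise correct the prediction with
    the quantized innovation."""
    x_pred = a * x_prev
    if dos:                  # measurement lost: use the prediction only
        return x_pred
    return x_pred + gain * (y_q - x_pred)

x = compensated_update(x_prev=1.0, a=0.9, y_q=quantize(1.3), gain=0.5, dos=False)
x_dos = compensated_update(x_prev=1.0, a=0.9, y_q=None, gain=0.5, dos=True)
```

In the distributed framework each node would run such a recursion on its own and its neighbors' quantized measurements, then fuse the resulting early estimators in a second stage.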