Information Fusion | Pub Date: 2025-09-25 | DOI: 10.1016/j.inffus.2025.103787
Dihua Wu, Yi Lu, Donger Yang, Di Cui, Mingchuan Zhou, Jinming Pan, Yibin Ying
{"title":"A low-stress dual-modal imaging system and dead chicken detection method for commercial layer farms","authors":"Dihua Wu , Yi Lu , Donger Yang , Di Cui , Mingchuan Zhou , Jinming Pan , Yibin Ying","doi":"10.1016/j.inffus.2025.103787","DOIUrl":"10.1016/j.inffus.2025.103787","url":null,"abstract":"<div><div>Conventional methods for detecting dead chickens in commercial poultry farming rely heavily on labor-intensive manual inspections, which are prone to inefficiency, biosecurity risks, and human error. While sensor-based and computer vision techniques have improved automated detection, single-modality methods still face significant limitations: visible-light imaging requires stressful supplemental lighting, while thermal imaging lacks critical textural details. Although RGB-thermal (RGB-T) fusion alleviates some of these challenges, current systems often struggle with spatiotemporal misalignment and simplistic fusion techniques, resulting in redundancy and performance bottlenecks. This study introduces a low-stress, spatiotemporally synchronized RGB-T dual-modal imaging system combined with an end-to-end Dual-Stream Dead Chicken Detection Network (DS-DCDNet). By employing spectral beam splitting and multi-source synchronization, the hardware enables real-time, aligned RGB-T data acquisition. DS-DCDNet leverages adaptive feature self-fusion and dual-stream interactions, overcoming the limitations of manual parameter dependencies and improving detection accuracy by robustly integrating features at the representation level. Experimental results demonstrate that DS-DCDNet outperforms existing weighted and layer fusion methods, offering superior accuracy and stress-free detection capabilities. This research provides a scalable solution for high-precision automated dead chicken detection, meeting the growing demands of modern poultry farming. 
Related demonstration videos are available on YouTube (<span><span>https://youtu.be/Pr1GjgX6kuw?si=kKRLe3PEDBlPQrSq</span></span>) and YouKu (<span><span>https://v.youku.com/video?vid=XNjQ3NTMwNjM2NA==</span></span>) for reference.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"127 ","pages":"Article 103787"},"PeriodicalIF":15.5,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145181280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
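The "adaptive feature self-fusion" described above can be illustrated with a minimal, framework-free sketch in which RGB and thermal features are blended with softmax weights derived from per-modality quality scores. The function name and the explicit scalar scores are illustrative assumptions, not DS-DCDNet's actual layers.

```python
import math

def adaptive_fusion(rgb_feat, thermal_feat, score_rgb, score_thermal):
    """Blend RGB and thermal feature vectors with softmax weights.

    `score_rgb` / `score_thermal` stand in for learned per-modality
    quality scores; in a real network they would be predicted from the
    features themselves. A hypothetical sketch, not the paper's model.
    """
    e_r, e_t = math.exp(score_rgb), math.exp(score_thermal)
    w_r = e_r / (e_r + e_t)   # softmax weight for the RGB stream
    w_t = 1.0 - w_r           # complementary weight for thermal
    return [w_r * r + w_t * t for r, t in zip(rgb_feat, thermal_feat)]
```

With equal scores the blend is a plain average; as one modality's score dominates (e.g. thermal at night, when supplemental visible lighting would stress the birds), its features dominate the fused representation.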
Information Fusion | Pub Date: 2025-09-25 | DOI: 10.1016/j.inffus.2025.103769
Zhe Liu, Sukumar Letchmunan, Muhammet Deveci, Dragan Pamucar, Patrick Siarry
{"title":"New symmetric belief α-divergence and belief entropy via belief-plausibility transformation for multi-source information fusion","authors":"Zhe Liu , Sukumar Letchmunan , Muhammet Deveci , Dragan Pamucar , Patrick Siarry","doi":"10.1016/j.inffus.2025.103769","DOIUrl":"10.1016/j.inffus.2025.103769","url":null,"abstract":"<div><div>Dempster-Shafer evidence theory, a powerful tool for managing imperfect information, has been extensively used in various fields of multi-source information fusion. However, how to effectively quantify the difference between pieces of evidence and the uncertainty within each piece of evidence remains a challenge. In this paper, we introduce two new symmetric belief <span><math><mi>α</mi></math></span>-divergences based on belief-plausibility transformation to measure the difference between pieces of evidence. These divergences exhibit key properties such as nonnegativity, nondegeneracy and symmetry. We also show that they reduce to well-known divergences such as <span><math><msup><mi>χ</mi><mn>2</mn></msup></math></span>, Jeffreys, Hellinger, Jensen-Shannon and arithmetic-geometric in specific cases. Additionally, we propose a new belief entropy, derived from the belief-plausibility transformation, to quantify the uncertainty inherent in evidence. Leveraging both the divergences and the entropy, we develop a new multi-source information fusion method that assesses the credibility and informational volume of each piece of evidence, providing deeper insight into its importance. 
To demonstrate the effectiveness of our method, we apply it to plant disease detection and fault diagnosis, where it outperforms existing techniques.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"127 ","pages":"Article 103769"},"PeriodicalIF":15.5,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145269040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
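For intuition, here is a small self-contained sketch of the two ingredients the abstract combines: a belief-plausibility transformation mapping a mass function to a probability distribution, followed by a symmetric divergence (Jensen-Shannon, one of the special cases the paper's α-divergence reduces to). The particular transformation p(x) ∝ (Bel({x}) + Pl({x}))/2 is an assumed form, not necessarily the paper's exact one.

```python
import math

def bel_pl_probability(mass):
    """Map a mass function (dict: frozenset -> mass) to a probability
    distribution over singletons via an assumed belief-plausibility
    transformation p(x) proportional to (Bel({x}) + Pl({x})) / 2."""
    singletons = {x for subset in mass for x in subset}
    scores = {}
    for x in singletons:
        bel = sum(m for s, m in mass.items() if s == frozenset([x]))
        pl = sum(m for s, m in mass.items() if x in s)
        scores[x] = (bel + pl) / 2.0
    total = sum(scores.values())
    return {x: v / total for x, v in scores.items()}

def jensen_shannon(p, q):
    """Symmetric, nonnegative Jensen-Shannon divergence (base-2 logs)
    between two distributions given as dicts."""
    keys = set(p) | set(q)
    m = {k: (p.get(k, 0.0) + q.get(k, 0.0)) / 2 for k in keys}
    def kl(a, b):
        return sum(a.get(k, 0.0) * math.log2(a.get(k, 0.0) / b[k])
                   for k in keys if a.get(k, 0.0) > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Two conflicting mass functions on frame {a, b} then yield a strictly positive, symmetric divergence, while identical evidence yields zero, matching the nonnegativity, nondegeneracy and symmetry properties listed in the abstract.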
Information Fusion | Pub Date: 2025-09-25 | DOI: 10.1016/j.inffus.2025.103786
Yue Ni, Donglin Xue, Weijian Chi, Ji Luan, Jiahang Liu
{"title":"CSFAFormer: Category-selective feature aggregation transformer for multimodal remote sensing image semantic segmentation","authors":"Yue Ni , Donglin Xue , Weijian Chi , Ji Luan , Jiahang Liu","doi":"10.1016/j.inffus.2025.103786","DOIUrl":"10.1016/j.inffus.2025.103786","url":null,"abstract":"<div><div>Feature fusion is one of the keys to multimodal data segmentation. Different fusion mechanisms vary significantly in how effectively they utilize inter-modal features, exploit complementary information, and enhance representations, while also greatly affecting model parameters and computational complexity. The cross-attention fusion mechanism (CAFM) is the most widely used feature fusion mechanism in current multimodal fusion classification tasks, but owing to its inherent limitations it cannot adapt to the differentiated feature requirements of different classes, which blurs inter-class boundaries and disperses intra-class features. To address these challenges, a novel Category-Selective Feature Aggregation Transformer (CSFAFormer) is proposed to dynamically adjust the interaction weights between modalities along the class dimension, thereby fully leveraging the complementary advantages of different modalities. To accommodate the differentiated needs of different categories, a Category Cross-Calibration Mechanism (C<sup>3</sup>M) is designed to compress multi-channel features, estimate pixel-level class distributions, and employ a confidence-based cross-calibration strategy to dynamically adjust interaction weights along the class dimension. To further improve semantic consistency and inter-class separability, a Category-Selective Transformer Module is proposed to leverage the class information calibrated by C<sup>3</sup>M for adaptive weighted fusion along the class dimension, thereby optimizing the representation of category-specific features. 
Experimental results indicate that CSFAFormer delivers significantly superior segmentation performance. Compared to the CAFM, CSFAFormer reduces the parameter count by 38.5 % and the computational cost by 72.3 %, while maintaining superior performance. The code is available at: <span><span>https://github.com/NUAALISILab/CSFAFormer</span></span>.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"127 ","pages":"Article 103786"},"PeriodicalIF":15.5,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145221678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Information Fusion | Pub Date: 2025-09-24 | DOI: 10.1016/j.inffus.2025.103771
Minghao Wang, Shaoyi Du, Juan Wang, Hongcheng Han, Huanhuan Huo, Dong Zhang, Shanshan Yu, Jue Jiang
{"title":"A segment anything model for transesophageal echocardiography based on bidirectional spatiotemporal context fusion","authors":"Minghao Wang, Shaoyi Du, Juan Wang, Hongcheng Han, Huanhuan Huo, Dong Zhang, Shanshan Yu, Jue Jiang","doi":"10.1016/j.inffus.2025.103771","DOIUrl":"10.1016/j.inffus.2025.103771","url":null,"abstract":"<div><div>Accurate segmentation of the left atrial appendage (LAA) in transesophageal echocardiography is the foundation for clinical evaluation. However, the ambiguous boundaries of the LAA, together with ultrasound noise and complex cardiac motion, make it challenging to obtain temporally consistent and spatially reliable segmentation results. Furthermore, existing works often process spatial and temporal features in isolation, without effectively leveraging spatiotemporal context fusion to enhance segmentation performance. To address these challenges, we propose a Segment Anything Model Based on Bidirectional Spatiotemporal Context Fusion (BiSTC-SAM). First, we design a spatiotemporal context network that encodes effective pixels associated with target changes, thereby mining temporal cues from spatial features. Building on this, we further develop a multi-scale context memory network that performs dynamic feature alignment, thereby integrating temporal representations to refine spatial features. We evaluate the segmentation and generalization performance of our method on a self-constructed transesophageal echocardiography dataset, and further assess its adaptability to different modalities on two publicly available transthoracic echocardiography datasets. 
Experimental results demonstrate that our method outperforms competing methods in terms of boundary segmentation accuracy and temporal consistency.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"127 ","pages":"Article 103771"},"PeriodicalIF":15.5,"publicationDate":"2025-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145221708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Information Fusion | Pub Date: 2025-09-23 | DOI: 10.1016/j.inffus.2025.103637
Slawek Smyl, Boris N. Oreshkin, Paweł Pełka, Grzegorz Dudek
{"title":"Any-quantile probabilistic forecasting of short-term electricity demand: Fusing uncertainties from diverse sources","authors":"Slawek Smyl , Boris N. Oreshkin , Paweł Pełka , Grzegorz Dudek","doi":"10.1016/j.inffus.2025.103637","DOIUrl":"10.1016/j.inffus.2025.103637","url":null,"abstract":"<div><div>Power systems operate under significant uncertainty arising from diverse and dynamic factors such as fluctuating renewable energy generation, evolving consumption patterns, and complex market dynamics. Accurately forecasting electricity demand necessitates advanced methodologies capable of capturing these multifaceted uncertainties. Our work develops an any-quantile probabilistic forecasting framework that enables the generation of forecasts for arbitrary quantile levels at inference time using a single trained model. This constitutes a substantial methodological advancement over traditional quantile regression techniques, which typically require training a separate model for each quantile or limiting predictions to a fixed set of predefined quantile levels. We show that integrating this framework into state-of-the-art neural architectures, specifically ESRNN and N-BEATS, yields superior distributional forecasting performance in the context of short-term electricity demand. 
Additionally, we develop the general Bayesian theory of cross-learning and link its latent objects with the elements of our architectures, providing a fusion-theoretic foundation for cross-learning from multiple power systems.</div><div>Empirical validation utilizing a comprehensive dataset of hourly electricity demand from 35 European countries showcases the efficacy of our approach, demonstrating superior predictive performance and enhanced quantile forecasting accuracy.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"127 ","pages":"Article 103637"},"PeriodicalIF":15.5,"publicationDate":"2025-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145159145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
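As a rough illustration of the any-quantile idea: a single model conditioned on the quantile level τ can be trained with the pinball (quantile) loss, whose expected minimizer is the τ-quantile. This is a generic sketch of the standard loss, not the paper's exact ESRNN/N-BEATS training objective.

```python
def pinball_loss(y_true, y_pred, tau):
    """Pinball loss for quantile level tau in (0, 1).

    Under-prediction is penalized by tau and over-prediction by
    (1 - tau), so minimizing the expected loss drives y_pred toward
    the tau-quantile of y_true. Feeding tau as a model input is what
    allows one network to emit forecasts for arbitrary quantile
    levels at inference time.
    """
    diff = y_true - y_pred
    return tau * max(diff, 0.0) + (1 - tau) * max(-diff, 0.0)
```

For example, at τ = 0.9 an under-prediction by 2 costs 1.8 while an over-prediction by 2 costs only 0.2, which is why high-τ forecasts sit near the top of the demand distribution.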
Information Fusion | Pub Date: 2025-09-23 | DOI: 10.1016/j.inffus.2025.103760
Yuanyuan Zhou, Ligang Zhou
{"title":"Dynamic adaptive consensus reaching process with risk attitude for multi-criteria group decision making","authors":"Yuanyuan Zhou , Ligang Zhou","doi":"10.1016/j.inffus.2025.103760","DOIUrl":"10.1016/j.inffus.2025.103760","url":null,"abstract":"<div><div>To address conflicts among decision-makers (DMs) in multi-criteria group decision making (MCGDM), this study proposes a dynamic adaptive consensus-based conflict detection and reaching model that integrates decision risk and risk attitudes into the MCGDM process. First, criterion weights are determined based on the significance levels and risk attitudes of the DMs. Then, a consensus measure is developed by incorporating both cognitive and interest conflicts. A consensus reaching process (CRP) with a dynamic adaptive feedback mechanism is subsequently applied to enhance the consensus level among DMs. Furthermore, internal and external decision risks are combined to determine the weights of DMs. Finally, a case study, sensitivity analysis, and comparative assessment are conducted to validate the rationale and effectiveness of the proposed approach.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"127 ","pages":"Article 103760"},"PeriodicalIF":15.5,"publicationDate":"2025-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145221711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
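A consensus measure of the kind a CRP iterates on can be sketched as one minus the average pairwise distance between the DMs' evaluation matrices; feedback then targets the DM pairs driving the distance up. This generic form is illustrative only and omits the paper's cognitive/interest-conflict decomposition and risk attitudes.

```python
def consensus_level(matrices):
    """Group consensus as 1 minus the average pairwise normalized
    L1 distance between DMs' evaluation matrices (entries assumed in
    [0, 1]). Returns 1.0 for full agreement, lower as conflict grows.
    A generic CRP-style measure, not the paper's exact formulation.
    """
    n = len(matrices)
    rows, cols = len(matrices[0]), len(matrices[0][0])
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d = sum(abs(matrices[i][r][c] - matrices[j][r][c])
                    for r in range(rows) for c in range(cols)) / (rows * cols)
            total += d
            pairs += 1
    return 1.0 - total / pairs
```

A CRP would compare this value against a threshold (say 0.9) and, while below it, feed adjustment suggestions back to the most conflicting DMs before re-measuring.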
Information Fusion | Pub Date: 2025-09-23 | DOI: 10.1016/j.inffus.2025.103763
Fu-Quan Zhang, Kai-Hong Chen, Tsu-Yang Wu, Yang Hong, Jia-Jun Zhu, Chao Chen, Lin-Juan Ma, Jia-Xin Xu
{"title":"MOGAR: Multi-view optical and geometry adaptive refinement for high-fidelity 3D asset generation","authors":"Fu-Quan Zhang , Kai-Hong Chen , Tsu-Yang Wu , Yang Hong , Jia-Jun Zhu , Chao Chen , Lin-Juan Ma , Jia-Xin Xu","doi":"10.1016/j.inffus.2025.103763","DOIUrl":"10.1016/j.inffus.2025.103763","url":null,"abstract":"<div><div>Multi-modal 3D asset generation from sparse-view inputs remains a core challenge in both computer vision and graphics due to the inherent difficulties in modelling multi-view geometric consistency and recovering high-frequency appearance details. While Convolutional Neural Networks (CNNs) and Transformers have demonstrated impressive capabilities in 3D generation, they suffer from significant limitations: CNNs struggle with capturing long-range dependencies and global multi-view coherence, whereas Transformers incur quadratic computational complexity and often yield view-inconsistent or structurally ambiguous outputs. Fortunately, recent advancements in state space models, particularly the Mamba architecture, have shown remarkable potential by combining long-range dependency modelling with linear computational efficiency. However, the original Mamba is inherently constrained to unidirectional causal sequence modelling, making it suboptimal for high-dimensional visual scenarios. To address this, we propose MOGAR (Multi-View Optical and Geometry Adaptive Refinement), a novel and efficient multi-modal framework for 3D asset generation. MOGAR introduces the Multi-view Guided Selective Mamba (MvGSM) module as its core, enabling cross-directional and cross-scale alignment and integration of geometric and optical features. By synergistically combining feed-forward coarse asset generation, multi-view structural optimisation, optical attribute prediction, and cross-modal detail refinement via a UNet architecture, MOGAR achieves a tightly coupled reasoning pipeline from global structure to fine-grained details. 
We conduct extensive evaluations on several standard benchmarks, demonstrating that MOGAR consistently outperforms existing approaches in terms of geometric accuracy, rendering fidelity, and cross-view consistency, establishing a new paradigm for efficient and high-quality 3D asset generation under sparse input settings.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"127 ","pages":"Article 103763"},"PeriodicalIF":15.5,"publicationDate":"2025-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145159740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Information Fusion | Pub Date: 2025-09-23 | DOI: 10.1016/j.inffus.2025.103768
Pratibha Sharma, Ankit Kumar, Subit K. Jain
{"title":"Ultrasound image segmentation: A systematic review of deformable models from classical techniques to intelligent advancements","authors":"Pratibha Sharma , Ankit Kumar , Subit K. Jain","doi":"10.1016/j.inffus.2025.103768","DOIUrl":"10.1016/j.inffus.2025.103768","url":null,"abstract":"<div><div>Ultrasound imaging is a widely used diagnostic modality in modern medicine due to its affordability, safety, and real-time functionality, which eliminates the need for radiation exposure. However, low contrast, speckle noise, and imaging artifacts often limit its effectiveness, making accurate interpretation and analysis challenging. This highlights the need for advanced segmentation techniques to extract clinically meaningful information. Deformable models have emerged as reliable solutions for ultrasound image segmentation, as they effectively capture complex anatomical structures with mathematical stability and adaptability. This review systematically explores the development and application of deformable models and hybrid approaches that integrate edge-region-based methods, statistical techniques, and deep learning strategies. We critically analyze recent advances, compare various models across multiple datasets and clinical contexts, and discuss their strengths and limitations. The review highlights that synergistic edge-region hybrid models tend to offer higher segmentation accuracy, while deep learning-based hybrid models provide the advantage of automation. 
Despite these advancements, most models still struggle with noisy and low-contrast images, indicating the need for more robust, adaptive, and computationally efficient solutions for real-world clinical use.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"127 ","pages":"Article 103768"},"PeriodicalIF":15.5,"publicationDate":"2025-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145221667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Information Fusion | Pub Date: 2025-09-23 | DOI: 10.1016/j.inffus.2025.103756
Zecong Ye, Hexiang Hao, Yueping Peng, Wei Tang, Xuekai Zhang, Baixuan Han, Haolong Zhai
{"title":"MBUDet: Misaligned bimodal UAV target detection via target offset label generation","authors":"Zecong Ye , Hexiang Hao , Yueping Peng , Wei Tang , Xuekai Zhang , Baixuan Han , Haolong Zhai","doi":"10.1016/j.inffus.2025.103756","DOIUrl":"10.1016/j.inffus.2025.103756","url":null,"abstract":"<div><div>The widespread use of unmanned aerial vehicles (UAVs) has increased the demand for airborne target detection technologies in security and surveillance. Using only infrared or visible detection technology is often limited by environmental factors and target characteristics. Consequently, the utilization of RGB-Infrared fusion techniques in detection has emerged as a key area of research. However, the alignment operation for multimodal images is quite time-consuming in practical UAV target detection missions. To address this challenge, we propose Misaligned Bimodal UAV Target Detection (MBUDet), which integrates the two stages of target alignment and RGB-Infrared object detection into a single process, thereby improving detection speed. It primarily comprises four modules: size alignment, target alignment, modal weight calculation, and modal feature fusion. The size alignment module unifies the visible and infrared image sizes; the target alignment module uses existing bimodal target labels to generate target offset labels, which supervise the network to learn target feature alignment and overcome the effect of mosaic augmentation; the modal weight calculation module addresses targets that appear in only one modality, which the network would otherwise fail to learn effectively; and the modal feature fusion module enhances the feature representations using spatial attention. In experiments on our proposed Misaligned Bimodal UAV target dataset (MBU), MBUDet outperforms the baseline by 4.8 % in F1 and 4.1 % in AP50. 
The experimental results also show that the method performs better than existing algorithms. The code associated with this study will be made publicly available soon at the following GitHub repository: <span><span>http://github.com/Yipzcc/MBUDet</span></span>.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"127 ","pages":"Article 103756"},"PeriodicalIF":15.5,"publicationDate":"2025-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145221677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
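The "target offset labels" could, for illustration, be as simple as the displacement between a target's RGB and infrared box centers after size alignment. The encoding below is a hypothetical sketch, since the abstract does not give the paper's exact label format.

```python
def target_offset_label(rgb_box, ir_box):
    """Offset label for one target: the (dx, dy) between the centers
    of its RGB and infrared boxes, each given as (x1, y1, x2, y2) in
    a shared, size-aligned coordinate frame. A network supervised
    with such labels can learn to shift one modality's features onto
    the other without a separate image-registration stage.
    Hypothetical encoding, not necessarily MBUDet's.
    """
    def center(box):
        return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)
    (cx_r, cy_r), (cx_i, cy_i) = center(rgb_box), center(ir_box)
    return (cx_i - cx_r, cy_i - cy_r)
```
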
Information Fusion | Pub Date: 2025-09-22 | DOI: 10.1016/j.inffus.2025.103761
Dezhi Sun, Jiwei Qin, Weilin Tang, Xizhong Qin, Fei Shi, Minrui Wang, Zhenliang Liao
{"title":"DS-HBI: Dual-stream fusion forecasting model with historical backfilling imputation","authors":"Dezhi Sun , Jiwei Qin , Weilin Tang , Xizhong Qin , Fei Shi , Minrui Wang , Zhenliang Liao","doi":"10.1016/j.inffus.2025.103761","DOIUrl":"10.1016/j.inffus.2025.103761","url":null,"abstract":"<div><div>Deep learning models demonstrate significant potential for atmospheric carbon concentration forecasting, yet confront dual challenges of pervasive data missingness in real-world monitoring scenarios and intricate multivariate dynamic interactions. This paper proposes a <strong>Dual-Stream fusion forecasting model with Historical Backfilling Imputation (DS-HBI)</strong>, a parallel architectural framework that resolves these challenges through dual-modal complementary pathways. The first pathway processes raw incomplete sequences via masked self-attention to capture intrinsic patterns without imputation bias, while the second integrates dynamic time warping (DTW) and probabilistic imputation to reconstruct temporally consistent data. A gated attention mechanism dynamically fuses both streams, adaptively balancing their contributions to jointly capture multi-scale temporal features, including long-term trends and abrupt changes, while ensuring robustness under severe data missingness. Evaluated on multi-site Total Carbon Column Observing Network (TCCON) data, DS-HBI demonstrates superior performance in predicting <span><math><mrow><msub><mtext>XCO</mtext><mn>2</mn></msub></mrow></math></span> and <span><math><mrow><msub><mtext>XCH</mtext><mn>4</mn></msub></mrow></math></span>, significantly reducing prediction errors compared to baseline methods. 
The model particularly excels in high missing-rate scenarios, with ablation studies confirming the necessity of its dual-stream design and hybrid imputation strategy.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"127 ","pages":"Article 103761"},"PeriodicalIF":15.5,"publicationDate":"2025-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145159142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
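To make the DTW-plus-imputation pathway concrete, here is a toy sketch: pick the historical window closest (under classic DTW) to the observed part of a gappy sequence and backfill the gaps from it. DS-HBI's actual pathway adds probabilistic imputation and learned gated fusion; the helper names below are invented for illustration.

```python
def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two sequences,
    computed by the standard O(len(a) * len(b)) dynamic program."""
    inf = float('inf')
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def backfill(seq, history):
    """Fill None gaps in `seq` from the historical window whose values
    at the observed positions are closest under DTW. A simplified
    stand-in for DS-HBI's historical-backfilling pathway."""
    observed = [(i, v) for i, v in enumerate(seq) if v is not None]
    obs_vals = [v for _, v in observed]
    best = min(history,
               key=lambda h: dtw_distance(obs_vals, [h[i] for i, _ in observed]))
    return [best[i] if v is None else v for i, v in enumerate(seq)]
```

The masked-attention stream would instead consume `seq` with its gaps intact, and a gate would weigh the two views against each other at fusion time.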