Information Fusion | Pub Date: 2025-09-15 | DOI: 10.1016/j.inffus.2025.103743
{"title":"MedAide: Information fusion and anatomy of medical intents via LLM-based agent collaboration","authors":"Dingkang Yang , Jinjie Wei , Mingcheng Li , Jiyao Liu , Lihao Liu , Ming Hu , Junjun He , Yakun Ju , Wei Zhou , Yang Liu , Lihua Zhang","doi":"10.1016/j.inffus.2025.103743","DOIUrl":"10.1016/j.inffus.2025.103743","url":null,"abstract":"<div><div>In healthcare intelligence, the ability to fuse heterogeneous, multi-intent information from diverse clinical sources is fundamental to building reliable decision-making systems. Large Language Model (LLM)-driven information interaction systems are currently showing promise in the healthcare domain. Nevertheless, they often suffer from information redundancy and coupling when dealing with complex medical intents, leading to severe hallucinations and performance bottlenecks. To this end, we propose <span>MedAide</span>, an LLM-based medical multi-agent collaboration framework designed to enable intent-aware information fusion and coordinated reasoning across specialized healthcare domains. Specifically, we introduce a regularization-guided module that combines syntactic constraints with retrieval-augmented generation to decompose complex queries into structured representations, facilitating fine-grained clinical information fusion and intent resolution. Additionally, a dynamic intent prototype matching module is proposed, utilizing dynamic prototype representations with a semantic similarity matching mechanism to achieve adaptive recognition and updating of the agent’s intent in multi-round healthcare dialogues. Ultimately, we design a rotation agent collaboration mechanism that introduces dynamic role rotation and decision-level information fusion across specialized medical agents. Extensive experiments are conducted on four medical benchmarks with composite intents.
Experimental results from automated metrics and expert doctor evaluations show that <span>MedAide</span> outperforms current LLMs and improves their medical proficiency and strategic reasoning.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"127 ","pages":"Article 103743"},"PeriodicalIF":15.5,"publicationDate":"2025-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145159742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
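The dynamic intent prototype matching the MedAide abstract describes can be sketched as cosine-similarity matching against per-intent prototype vectors that are updated with an exponential moving average across dialogue rounds. This is an illustrative reconstruction, not the paper's code; the intent names, momentum value, and toy embeddings below are invented for the example:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def match_and_update(prototypes, query_vec, momentum=0.9):
    """Pick the closest intent prototype, then move it toward the query
    with an exponential moving average so prototypes track the dialogue."""
    best = max(prototypes, key=lambda name: cosine(prototypes[name], query_vec))
    prototypes[best] = [momentum * p + (1 - momentum) * q
                        for p, q in zip(prototypes[best], query_vec)]
    return best

# hypothetical intents and embeddings
prototypes = {
    "diagnosis":    [1.0, 0.1, 0.0],
    "prescription": [0.0, 1.0, 0.2],
}
intent = match_and_update(prototypes, [0.9, 0.2, 0.1])
```

A multi-round dialogue would call `match_and_update` once per user turn, so the matched prototype drifts toward the user's evolving intent.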
Information Fusion | Pub Date: 2025-09-14 | DOI: 10.1016/j.inffus.2025.103744
{"title":"HyIE: An internal-external induced embedding for knowledge hypergraph link prediction","authors":"Linlin Ding , Yinghao Gu , Mo Li , Yishan Pan , Xiaoyang Wang , Ningning Cui , Xin Wang , Yongxin Tong","doi":"10.1016/j.inffus.2025.103744","DOIUrl":"10.1016/j.inffus.2025.103744","url":null,"abstract":"<div><div>Knowledge hypergraphs are widely available due to the ubiquity of <span><math><mi>n</mi></math></span>-ary relational facts in the real world. Link prediction over knowledge hypergraphs has emerged as a promising fundamental task in various domains, such as biology and social networks. However, existing approaches fail to consider the external information of <span><math><mi>n</mi></math></span>-ary tuples and to extract the sequential information of entities within <span><math><mi>n</mi></math></span>-ary tuples, which leads to a performance bottleneck. To address this challenge, in this paper, we propose a novel knowledge hypergraph link prediction model, called <strong>HyIE</strong>. Specifically, by introducing virtual nodes, we design a hypergraph convolutional neural network, called <strong>V-HGCN</strong>, to capture external structural information. To extract the sequential information of entities within <span><math><mi>n</mi></math></span>-ary tuples, a relation-aware model equipped with Mamba and tailored for knowledge hypergraphs is proposed, named <strong>HyMamba</strong>. Furthermore, to enhance the performance, we develop three negative sampling methods, namely adversarial learning negative sampling, intra-loop negative sampling and degree-based negative sampling. Extensive experiments on real-world datasets have demonstrated that our HyIE outperforms the state-of-the-art models.
Code for HyIE is available at <span><span>https://github.com/nldmz/maincode</span></span>.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"127 ","pages":"Article 103744"},"PeriodicalIF":15.5,"publicationDate":"2025-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145119845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
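Of the three negative sampling methods the HyIE abstract lists, degree-based sampling is the simplest to sketch: corrupt one position of an n-ary tuple and draw the replacement entity with probability proportional to its degree, so frequent entities yield harder negatives. The toy facts and the exact corruption scheme below are illustrative assumptions, not taken from the paper:

```python
import random
from collections import Counter

def degree_based_negatives(tuples, k=3, rng=random.Random(0)):
    """For each tuple, corrupt one position k times, drawing replacements
    with probability proportional to entity degree (frequency across all
    tuples); true facts are filtered out so no false negatives are emitted."""
    degree = Counter(e for t in tuples for e in t)
    entities = list(degree)
    weights = [degree[e] for e in entities]
    known = set(tuples)
    negatives = []
    for t in tuples:
        for _ in range(k):
            pos = rng.randrange(len(t))
            cand = list(t)
            cand[pos] = rng.choices(entities, weights=weights)[0]
            cand = tuple(cand)
            if cand not in known:  # never emit a true fact as a negative
                negatives.append(cand)
    return negatives

facts = [("aspirin", "treats", "headache"),
         ("aspirin", "interacts", "warfarin")]
negs = degree_based_negatives(facts)
```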
Information Fusion | Pub Date: 2025-09-14 | DOI: 10.1016/j.inffus.2025.103734
{"title":"HSCFA: Hierarchical sparse and collaborative fusion attention with large foundation models for diagnosing Alzheimer’s disease","authors":"Zhaoxu Xing , Da-Fang Zhang , Kun Xie , Jinxiong Fang , Xia-An Bi","doi":"10.1016/j.inffus.2025.103734","DOIUrl":"10.1016/j.inffus.2025.103734","url":null,"abstract":"<div><div>Integrating macro-level neuroimaging data with micro-level genetic data offers mechanistic insight into Alzheimer’s Disease (AD). However, existing methods fail to fully exploit multi-level features and their collaborative patterns. To address this limitation, this paper proposes a unified framework incorporating large foundation models and attention mechanisms to construct, extract, and fuse hierarchical features. We first construct a Hierarchical Sparse and Collaborative Fusion Attention (HSCFA) model to characterize AD pathogenesis, where two sparse attention mechanisms are utilized to extract hierarchical features and co-attention is used to achieve feature fusion. Subsequently, we implement an HSCFA algorithm based on the model, leveraging biomedical large foundation models to construct high-quality features and applying attention mechanisms to capture characteristic AD-specific association patterns. Finally, experiments on public datasets validate the superiority of HSCFA in sample classification and pathogeny extraction, achieving a 3-class classification accuracy of 88.41 %. This work provides an effective algorithm for the early diagnosis of AD and identifies AD-related risk genes and abnormal brain regions, offering novel insights for pathological research on AD.
The code of HSCFA can be accessed at the following link: <span><span>https://github.com/fmri123456/HSCFA</span></span>.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"127 ","pages":"Article 103734"},"PeriodicalIF":15.5,"publicationDate":"2025-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145107584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
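A sparse attention mechanism of the kind the HSCFA abstract mentions can be illustrated by keeping only the top-k key scores per query and zeroing the rest before normalization. The vectors and the value of k below are toy inputs, and HSCFA's actual sparsification scheme may differ:

```python
import math

def sparse_attention(query, keys, values, k=2):
    """Scaled dot-product attention that keeps only the top-k key scores:
    all other weights are zeroed before normalization, so each query
    attends to at most k positions."""
    d = len(query)
    scores = [sum(q * kk for q, kk in zip(query, key)) / math.sqrt(d)
              for key in keys]
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    exp = [math.exp(scores[i]) if i in top else 0.0 for i in range(len(scores))]
    z = sum(exp)
    weights = [e / z for e in exp]
    dim_v = len(values[0])
    out = [sum(weights[i] * values[i][j] for i in range(len(values)))
           for j in range(dim_v)]
    return out, weights

out, w = sparse_attention([1.0, 0.0],
                          [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]],
                          [[1.0], [2.0], [3.0]], k=2)
```

With k=2, the third (least relevant) key receives exactly zero weight instead of a small positive one, which is the point of sparsification.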
Information Fusion | Pub Date: 2025-09-14 | DOI: 10.1016/j.inffus.2025.103728
{"title":"FII-DETR: Few-shot object detection with fully information interaction","authors":"Kun Ren , Zhengzhen Li , Yongping Du , Honggui Han , Yufeng Wu","doi":"10.1016/j.inffus.2025.103728","DOIUrl":"10.1016/j.inffus.2025.103728","url":null,"abstract":"<div><div>Few-shot object detection (FSOD) aims to effectively classify and localize objects in images with only a few annotated samples. Recent meta-learning-based DETR approaches achieve promising performance in FSOD tasks. However, classifying confusing categories remains a critical challenge, particularly in scenarios involving occluded or small objects. To tackle this problem, we propose a meta-learning FSOD model built upon Deformable DETR, focusing on full information interaction, named FII-DETR. Firstly, an Adaptive Foreground Enhancement (AFE) module is designed to adaptively enhance important information and edge-aware representations in support images, enabling the model to capture discriminative features more effectively. Secondly, a Multiscale Local Information Fusion (MLIF) module and a Global Symmetric Aggregation (GSA) module are proposed to enhance local information interaction and aggregate support and query features from local and global perspectives. In addition, we introduce self-supervised pretraining (SSP) into the meta-learning FSOD framework to further enhance FII-DETR’s generalization capability by maximizing the mutual information of prior knowledge. We comprehensively evaluate the performance of FII-DETR on PASCAL VOC and MS COCO benchmarks. FII-DETR outperforms state-of-the-art FM-FSOD by 3 %, Meta-DeDETR by 2.6 %, and Meta-DETR by 6.5 %, averaged over the three splits of PASCAL VOC. On the COCO dataset, FII-DETR outperforms Meta-DETR and Meta-DeDETR and is also superior to FM-FSOD in the 1-shot and 3-shot settings.
This work demonstrates that full information interaction and aggregation can provide effective and robust support for improving the performance of FSOD built upon DETR.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"127 ","pages":"Article 103728"},"PeriodicalIF":15.5,"publicationDate":"2025-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145119851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
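The idea behind aggregating support and query features from a global perspective, as in the GSA module above, can be caricatured as mean-pooling each feature set and adding the pooled context to the other set. This is a hypothetical sketch of the symmetric-aggregation pattern, not the FII-DETR module itself:

```python
def global_symmetric_aggregation(support, query):
    """Symmetric cross-aggregation: each feature set is augmented with the
    global (mean-pooled) context of the other, so support and query inform
    each other in both directions."""
    mean = lambda rows: [sum(col) / len(rows) for col in zip(*rows)]
    g_s, g_q = mean(support), mean(query)
    new_support = [[v + g for v, g in zip(row, g_q)] for row in support]
    new_query = [[v + g for v, g in zip(row, g_s)] for row in query]
    return new_support, new_query

# two support feature vectors, one query feature vector (toy values)
s, q = global_symmetric_aggregation([[1.0, 2.0], [3.0, 4.0]], [[0.0, 0.0]])
```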
Information Fusion | Pub Date: 2025-09-13 | DOI: 10.1016/j.inffus.2025.103644
{"title":"Dual alignment: Partial negative and soft-label alignment for text-to-image person retrieval","authors":"Xulin Song , Xing Jin , Jin Qi , Jun Liu","doi":"10.1016/j.inffus.2025.103644","DOIUrl":"10.1016/j.inffus.2025.103644","url":null,"abstract":"<div><div>Text-to-image person retrieval is the task of retrieving correctly matched images based on a given textual description of the person of interest. The main challenge lies in the inherent modal difference between texts and images. Most existing works narrow the modality gap by aligning the feature representations of text and image in a latent embedding space. However, these methods usually leverage hard labels and mine insufficient or incorrect hard negatives to achieve cross-modal alignment, generating incorrect hard negative pairs and thus suboptimal performance. To tackle the above problems, we propose a dual alignment framework, Partial negative and Soft-label Alignment (PASA), which includes the partial negative alignment (PA) strategy and the Soft-label Alignment (SA) strategy. Specifically, PA pushes the hard negatives far away in the triplet loss by considering a certain amount of negatives within each mini-batch as hard negatives, preventing distraction from the positive text–image pairs. Based on PA, SA further aligns the similarity distributions over these hard negatives in a soft-label manner, as well as the inter-modal and intra-modal distributions.
Extensive experiments on three public datasets, CUHK-PEDES, ICFG-PEDES and RSTPReid, demonstrate that our proposed PASA method can consistently improve the performance of text-to-image person retrieval, and achieve new state-of-the-art results on the above three datasets.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"127 ","pages":"Article 103644"},"PeriodicalIF":15.5,"publicationDate":"2025-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145107552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
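The partial negative strategy the PASA abstract describes can be sketched as a triplet-style loss that only penalizes the hardest fraction of in-batch negatives, so likely false negatives further down the ranking cannot distract the positive pair. The hard ratio, margin, and similarity values below are illustrative assumptions, not the paper's settings:

```python
def partial_negative_triplet(negative_sims, positive_sim, hard_ratio=0.5, margin=0.2):
    """Triplet-style loss over in-batch negatives that treats only the
    most similar `hard_ratio` fraction as hard negatives; the rest are
    ignored rather than pushed away."""
    ranked = sorted(negative_sims, reverse=True)      # most similar first
    n_hard = max(1, int(len(ranked) * hard_ratio))
    hard = ranked[:n_hard]
    # standard margin hinge, averaged over the retained hard negatives
    losses = [max(0.0, margin + s - positive_sim) for s in hard]
    return sum(losses) / len(losses)

# positive pair similarity 0.8; four in-batch negatives, two kept as hard
loss = partial_negative_triplet([0.7, 0.6, 0.2, 0.1], 0.8, hard_ratio=0.5)
```

Only the 0.7 negative violates the margin here; the 0.2 and 0.1 negatives never enter the loss at all.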
{"title":"AAE-CycleWGAN fusion framework for generating fused strain data from sparse to dense domains in bridge monitoring systems","authors":"Sahar Hassani , Ulrike Dackermann , Mohsen Mousavi , Samir Mustapha , Jianchun Li","doi":"10.1016/j.inffus.2025.103736","DOIUrl":"10.1016/j.inffus.2025.103736","url":null,"abstract":"<div><div>Effective monitoring of pedestrian bridges is challenged by noisy measurements, missing data, and time-varying crowd excitation, critical issues for infrastructure and crowd management where timely safety monitoring is paramount. In this work, we quantify human–structure interaction to enable a monitoring system that detects abnormal loads and supports bridge safety management. We also model structural responses under dense-occupancy conditions to predict behavior in busy scenarios, providing decision-ready insights that reduce the risk of serviceability loss. We propose a fusion-centred framework that (i) derives informative, noise-robust features from raw noisy structural strain responses via an optimized adversarial autoencoder (AAE) for denoising and dimensionality reduction, (ii) mitigates data scarcity by synthesizing missing fused modalities using a Cycle-Consistent Wasserstein GAN with Gradient Penalty (CycleWGAN-GP), and (iii) performs downstream condition assessment and crowd–structure interaction analysis with an optimized 2D-CNN. Using only structural sensors, the system infers aspects of crowd movement (e.g., speed and weight proxies), supports real-time safety decisions, and generalizes from unpaired training conditions to predict responses under future or unseen regimes. Validation is conducted on a laboratory-scale pedestrian timber bridge instrumented at midspan with three Fiber Bragg Grating (FBG) strain sensors under multiple scenarios that mimic moving human-induced loads. We evaluate generation quality with standard criteria and assess classification performance on both multiclass and binary tasks.
Comparative studies include standard ML baselines and an ablation without CycleWGAN-GP. Results show improved missing-data generation with CycleWGAN-GP and robust condition monitoring on held-out, unseen real data (avoiding leakage): binary classification accuracy improved from 85.21 % to 95.40 %, while multiclass accuracy increased from 85.50 % to 96.65 %. The proposed framework enhances the predictive capability and reliability of bridge monitoring systems by jointly addressing noise, missingness, and crowd-induced variability within a unified SHM pipeline.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"127 ","pages":"Article 103736"},"PeriodicalIF":15.5,"publicationDate":"2025-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145221718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
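The cycle-consistency term at the heart of CycleWGAN-GP-style training (here, sparse-to-dense strain generation) can be written down compactly: mapping sparse → dense → sparse should reproduce the original signal, and likewise in the other direction. The toy generators and the λ = 10 weighting below are stand-ins for the paper's networks, used only to show the shape of the loss:

```python
def l1(u, v):
    """Mean absolute error between two equal-length signals."""
    return sum(abs(a - b) for a, b in zip(u, v)) / len(u)

def cycle_consistency_loss(G, F, sparse_batch, dense_batch, lam=10.0):
    """Cycle term used alongside the Wasserstein adversarial losses:
    F(G(x)) should recover x and G(F(y)) should recover y."""
    forward = sum(l1(F(G(x)), x) for x in sparse_batch) / len(sparse_batch)
    backward = sum(l1(G(F(y)), y) for y in dense_batch) / len(dense_batch)
    return lam * (forward + backward)

# toy stand-in generators: scale a strain trace up / back down
G = lambda x: [2.0 * v for v in x]   # sparse -> dense (stand-in)
F = lambda y: [0.5 * v for v in y]   # dense -> sparse (stand-in)
loss = cycle_consistency_loss(G, F, [[1.0, 2.0]], [[2.0, 4.0]])
```

Because these toy generators are exact inverses, the cycle loss is zero; real networks only approximate this, and the term penalizes the residual.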
Information Fusion | Pub Date: 2025-09-13 | DOI: 10.1016/j.inffus.2025.103726
{"title":"Affine modulation-based audiogram fusion network for joint noise reduction and hearing loss compensation","authors":"Ye Ni , Ruiyu Liang , Xiaoshuai Hao , Jiaming Cheng , Qingyun Wang , Chengwei Huang , Cairong Zou , Wei Zhou , Weiping Ding , Björn W. Schuller","doi":"10.1016/j.inffus.2025.103726","DOIUrl":"10.1016/j.inffus.2025.103726","url":null,"abstract":"<div><div>Hearing aids (HAs) are widely used to provide personalized speech enhancement (PSE) services, improving the quality of life for individuals with hearing loss. However, HA performance declines significantly in noisy environments because noise reduction (NR) and hearing loss compensation (HLC) are treated as separate tasks. This separation leads to a lack of systematic optimization, overlooks the interactions between these two critical tasks, and increases system complexity. To address these challenges, we propose a novel audiogram fusion network, named AFN-HearNet, which simultaneously tackles the NR and HLC tasks by fusing cross-domain audiogram and spectrum features. We propose an audiogram-specific encoder that transforms the sparse audiogram profile into a deep representation, addressing the alignment problem of cross-domain features prior to fusion. To incorporate the interactions between the NR and HLC tasks, we propose the affine modulation-based audiogram fusion frequency-temporal Conformer that adaptively fuses these two features into a unified deep representation for speech reconstruction. Furthermore, we introduce a voice activity detection auxiliary training task to embed speech and non-speech patterns into the unified deep representation implicitly. We conduct comprehensive experiments across multiple datasets to validate the effectiveness of each proposed module.
The results indicate that AFN-HearNet significantly outperforms state-of-the-art in-context fusion joint models on key metrics such as HASQI and PESQ, achieving a favorable trade-off between performance and efficiency. The source code and data will be released at <span><span>https://github.com/deepnetni/AFN-HearNet</span></span>.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"127 ","pages":"Article 103726"},"PeriodicalIF":15.5,"publicationDate":"2025-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145107893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
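Affine modulation of spectral features by an audiogram embedding follows the well-known FiLM pattern: predict a per-channel scale (gamma) and shift (beta) from the embedding, then apply them to the features. The weight matrices and dimensions below are toy assumptions, not AFN-HearNet's learned parameters:

```python
def affine_modulate(features, audiogram_embed, W_gamma, W_beta):
    """FiLM-style affine modulation: per-channel gamma and beta are
    predicted from the audiogram embedding via linear maps, then applied
    to the spectral features, fusing the hearing profile into the
    enhancement path."""
    def linear(W, x):
        return [sum(w * xi for w, xi in zip(row, x)) for row in W]
    gamma = linear(W_gamma, audiogram_embed)
    beta = linear(W_beta, audiogram_embed)
    return [g * f + b for g, f, b in zip(gamma, features, beta)]

# 2-channel feature, 2-dim audiogram embedding, hand-picked toy weights
out = affine_modulate([1.0, -1.0], [1.0, 0.0],
                      W_gamma=[[2.0, 0.0], [1.0, 0.0]],
                      W_beta=[[0.5, 0.0], [0.0, 0.0]])
```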
Information Fusion | Pub Date: 2025-09-13 | DOI: 10.1016/j.inffus.2025.103738
{"title":"Dual-level principal component fusion networks for synthetic speech detection","authors":"Kaijun Mai, Chen Chen, Yuhongxu Feng, Ao Li, Liang Xi","doi":"10.1016/j.inffus.2025.103738","DOIUrl":"10.1016/j.inffus.2025.103738","url":null,"abstract":"<div><div>Synthetic speech detection has emerged as a critical defense against misinformation and security threats. Currently, the most effective detection methods leverage neural network models. For the same speech input, these models can learn diverse semantic representations. Crucially, the correspondence of these distinct representations to a single input implies the existence of latent shared variables that govern their intrinsic relationship. These shared variables reveal inherent redundancy across semantic representations, manifesting as different expressions of the same underlying essence. Therefore, it is necessary to model this underlying essence. To this end, we propose the dual-level principal component fusion networks (DPCFN) to fuse representations derived from different models. The DPCFN comprises dual-level networks and principal component fusion networks (PCFN). The dual-level networks are employed to learn diverse semantic representations. The PCFN is designed to first extract the latent structures of individual semantic representations and represent them as principal components. Then the principal components of diverse semantic representations are fused into an underlying essence feature. The resulting DPCFN-processed fused feature exhibits robust discriminative capability for distinguishing synthetic from genuine speech. The proposed DPCFN method is evaluated on the 2019 and 2021 versions of the Automatic Speaker Verification Spoofing and Countermeasures (ASVspoof) database. The experimental results show that the DPCFN method achieves competitive performance on the logical access sub-challenge of these two databases.
The code is available at: <span><span>https://github.com/splab-HRBUST/DPCFN</span></span>.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"127 ","pages":"Article 103738"},"PeriodicalIF":15.5,"publicationDate":"2025-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145107894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
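Principal component fusion in the spirit of the DPCFN abstract can be sketched without any ML library: extract the leading principal component of each model's representation via power iteration on its covariance matrix, project each sample onto it, and concatenate the per-representation scores into one fused feature. This is a minimal pure-Python stand-in, not the DPCFN architecture:

```python
import math

def leading_component(rows, iters=100):
    """First principal component of mean-centred data via power iteration
    on the covariance matrix (stand-in for a PCA library call)."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    X = [[r[j] - means[j] for j in range(d)] for r in rows]
    cov = [[sum(X[i][a] * X[i][b] for i in range(n)) / n for b in range(d)]
           for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        v = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = math.sqrt(sum(x * x for x in v))
        v = [x / norm for x in v]
    return v

def fuse(rep_a, rep_b):
    """Project each representation onto its own leading component, then
    concatenate the two scores into one fused feature per sample."""
    va, vb = leading_component(rep_a), leading_component(rep_b)
    proj = lambda rows, v: [sum(r[j] * v[j] for j in range(len(v))) for r in rows]
    return list(zip(proj(rep_a, va), proj(rep_b, vb)))

# two toy 2-D representations of the same three utterances
fused = fuse([[1.0, 2.0], [2.0, 4.1], [3.0, 6.0]],
             [[0.5, 0.1], [1.0, 0.2], [1.5, 0.35]])
```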
Information Fusion | Pub Date: 2025-09-13 | DOI: 10.1016/j.inffus.2025.103713
{"title":"DeTinyLLM: Efficient detection of machine-generated text via compact paraphrase transformation","authors":"Shilei Tan , Yongcheng Zhou , Haoxiang Liu , Xuesong Wang , Si Chen , Wei Gong","doi":"10.1016/j.inffus.2025.103713","DOIUrl":"10.1016/j.inffus.2025.103713","url":null,"abstract":"<div><div>The growing fusion of human-written and machine-generated text poses significant challenges in distinguishing their origins, as advanced large language models (LLMs) increasingly mimic human linguistic patterns. Existing detection methods, such as SimLLM, rely on querying proprietary LLMs for proofreading to measure similarity, which incurs high computational costs and instability due to dependency on fluctuating model updates. To address these limitations, we propose DeTinyLLM, a novel framework that leverages fusion-driven compact paraphrase models for efficient and stable detection. First, we train a lightweight transformation model (e.g., fine-tuned T5-large) to rewrite machine-generated text into human-like text, effectively “de-AI-ifying” it through iterative fusion of syntactic and semantic features. For detection, the input text and its rewritten version are fused and classified via a hybrid neural network, capitalizing on divergence patterns between human and machine text. Experiments across diverse datasets demonstrate that DeTinyLLM achieves state-of-the-art accuracy (surpassing SimLLM by 4.3 % in ROC-AUC) while reducing inference latency by 77.2 %. 
By eliminating reliance on proprietary LLMs and integrating multi-level fusion of linguistic signals, this work advances scalable, cost-effective solutions for real-world deployment in AI-generated text detection systems.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"127 ","pages":"Article 103713"},"PeriodicalIF":15.5,"publicationDate":"2025-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145107586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
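Classifying a text by its divergence from a "de-AI-ified" rewrite, as DeTinyLLM does, can be illustrated with simple similarity features; a real system feeds such features into a trained classifier over the fused pair rather than thresholding them directly, and the feature set below is a hypothetical simplification:

```python
import difflib

def divergence_features(text, rewritten):
    """Similarity-based features between a text and its paraphrase:
    character-level match ratio plus relative length change. A downstream
    classifier would consume these, not a fixed threshold."""
    ratio = difflib.SequenceMatcher(None, text, rewritten).ratio()
    len_delta = abs(len(text) - len(rewritten)) / max(len(text), 1)
    return {"similarity": ratio, "length_delta": len_delta}

feats = divergence_features(
    "The model delivers robust, scalable, state-of-the-art performance.",
    "The model works well and scales nicely.")
```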
Information Fusion | Pub Date: 2025-09-13 | DOI: 10.1016/j.inffus.2025.103731
{"title":"PTFusion: LLM-driven context-aware knowledge fusion for web penetration testing","authors":"Wenhao Wang , Hao Gu , Zhixuan Wu , Hao Chen , Xingguo Chen , Fan Shi","doi":"10.1016/j.inffus.2025.103731","DOIUrl":"10.1016/j.inffus.2025.103731","url":null,"abstract":"<div><div>This paper presents PTFusion, an LLM-driven web penetration testing framework that addresses the challenges of inefficient task guidance and imprecise command execution in web penetration testing. Employing a semi-decentralized multi-agent collaborative architecture, PTFusion maintains strategic coherence while enabling autonomous tactical execution, and uses the Model Context Protocol to call different types of penetration testing tools more conveniently. To effectively guide task execution, PTFusion designs a context-aware knowledge fusion mechanism to plan tasks based on the dynamic knowledge graph and executed actions, and uses preference-based chain-of-thought prompting to address the issue of redundant and difficult-to-align outputs from different types of penetration testing tools. Compared to methods like PentestGPT, PTFusion demonstrates significantly superior performance in both task completion effectiveness and stability.
The context-aware knowledge fusion mechanism enables PTFusion to conduct more precise strategic planning and execute penetration testing commands with greater accuracy, ensuring reliable completion of web penetration testing tasks across various scenarios.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"127 ","pages":"Article 103731"},"PeriodicalIF":15.5,"publicationDate":"2025-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145119832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
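Knowledge-guided task planning of the sort PTFusion performs can be caricatured as choosing the next task whose prerequisites are already present in the knowledge store, skipping tasks already executed. The task names and prerequisite sets below are invented for illustration and bear no relation to PTFusion's actual task graph:

```python
def next_task(knowledge, executed):
    """Pick the first pentest task whose prerequisites are all satisfied
    by facts in the dynamic knowledge store, skipping executed tasks."""
    tasks = {
        "port_scan":  set(),                        # no prerequisites
        "dir_enum":   {"http_open"},
        "sqli_probe": {"http_open", "login_form"},
    }
    for task, prereqs in tasks.items():
        if task not in executed and prereqs <= knowledge:
            return task
    return None

step1 = next_task(set(), set())                      # nothing known yet
step2 = next_task({"http_open"}, {"port_scan"})      # scan done, HTTP found
```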