Neural Networks: Latest Publications

Enhancing image restoration through learning context-rich and detail-accurate features
IF 6.3 | CAS Q1 | Computer Science
Neural Networks | Pub Date: 2025-09-11 | DOI: 10.1016/j.neunet.2025.108096
Hu Gao, Xiaoning Lei, Depeng Dang
{"title":"Enhancing image restoration through learning context-rich and detail-accurate features","authors":"Hu Gao ,&nbsp;Xiaoning Lei ,&nbsp;Depeng Dang","doi":"10.1016/j.neunet.2025.108096","DOIUrl":"10.1016/j.neunet.2025.108096","url":null,"abstract":"<div><div>Image restoration aims to recover high-quality images from their degraded counterparts, necessitating a delicate balance between preserving spatial details and capturing contextual information. Although some methods attempt to address this trade-off, they tend to focus primarily on spatial features while overlooking the importance of understanding frequency variations. Moreover, these approaches commonly utilize skip connections–implemented via addition or concatenation–to fuse encoder and decoder features for improved restoration. However, since encoder features may still carry degradation artifacts, such direct fusion strategies risk introducing implicit noise, ultimately hindering restoration performance. In this paper, we present a multi-scale design that optimally balances these competing objectives, seamlessly integrating spatial and frequency domain knowledge to selectively recover the most informative information. Specifically, we develop a hybrid scale frequency selection block (HSFSBlock), which not only captures multi-scale information from the spatial domain, but also selects the most informative components for image restoration in the frequency domain. Furthermore, to mitigate the inherent noise introduced by skip connections employing only addition or concatenation, we introduce a skip connection attention mechanism (SCAM) to selectively determines the information that should propagate through skip connections. The resulting tightly interlinked architecture, named as LCDNet. Extensive experiments conducted across diverse image restoration tasks showcase that our model attains performance levels that are either superior or comparable to those of state-of-the-art algorithms. The code and the pre-trained models are released at <span><span>https://github.com/Tombs98/LCDNet</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"194 ","pages":"Article 108096"},"PeriodicalIF":6.3,"publicationDate":"2025-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145092988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Design, analysis and verification of noise-tolerant and overshoot-free recurrent neural network
IF 6.3 | CAS Q1 | Computer Science
Neural Networks | Pub Date: 2025-09-10 | DOI: 10.1016/j.neunet.2025.108075
Lei Jia, Tiandong Zheng, Yujie Wu, Yiwei Li
{"title":"Design, analysis and verification of noise-tolerant and overshoot-free recurrent neural network","authors":"Lei Jia ,&nbsp;Tiandong Zheng ,&nbsp;Yujie Wu ,&nbsp;Yiwei Li","doi":"10.1016/j.neunet.2025.108075","DOIUrl":"10.1016/j.neunet.2025.108075","url":null,"abstract":"<div><div>A kind of recurrent neural network (RNN) specialized in solving time-varying problems has wide applications in various fields, where the RNN with integral terms (RNN-IT) as a state-of-art method plays an important role in rejecting noise. However, the RNN-IT always experiences overshoot phenomenon when suppressing noise, which greatly affects the convergence time. In order to overcome the above disadvantage of the RNN-IT, this paper proposes a noise-tolerant and overshoot-free recurrent neural network (NORNN) by designing a time-varying additional term, which can flexibly compensate errors and avoid accumulation, thereby resisting noise and eliminating overshoot. Furthermore, the convergence time of the NORNN is obviously improved, which means that the NORNN can effectively and quickly address time-varying problems even when the noise disturbed. Two theorems and a corollary analyze the convergence, noise-tolerance, and overshoot-free properties of the proposed NORNN. Meanwhile, simulation experiments on solving the time-varying matrix inversion problem and the trajectory tracking of the RPRR manipulator also verify its excellent performance.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"194 ","pages":"Article 108075"},"PeriodicalIF":6.3,"publicationDate":"2025-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145207986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Weakly supervised multi-modal imitation learning from incompletely labeled demonstrations
IF 6.3 | CAS Q1 | Computer Science
Neural Networks | Pub Date: 2025-09-10 | DOI: 10.1016/j.neunet.2025.108098
Sijia Gu, Fei Zhu
{"title":"Weakly supervised multi-modal imitation learning from incompletely labeled demonstrations","authors":"Sijia Gu,&nbsp;Fei Zhu","doi":"10.1016/j.neunet.2025.108098","DOIUrl":"10.1016/j.neunet.2025.108098","url":null,"abstract":"<div><div>Multi-modal imitation learning enables the agent to learn demonstrations of multiple modes at the same time. However, as expert demonstrations in practice tend to have incomplete labels for behavior modes, most methods are inefficient. To address this issue, an approach capable of imitation learning from incompletely labeled expert demonstrations, referred to as Weakly Supervised Multi-modal Imitation Learning (WSMIL), is proposed. WSMIL incorporates weakly supervised learning into multi-modal imitation learning by adding a behavior mode classifier to the adversarial network, thus forming adversaries among three players (generator, classifier and discriminator). Both labeled and unlabeled data are fully utilized in this adversarial process where fake state-action-label pairs generated by the generator and the classifier try to deceive the discriminator that tries to identify them and limited labeled expert demonstrations. Additionally, in order to ensure the data distribution of classifier and generator individually to converge to the expert’s real distribution, three extra losses are employed, where simulated annealing behavioral cloning is also added to the generator network to improve the generalization of policy. Experiments show that WSMIL accurately distinguishes modes with incomplete modal labels in demonstrations, learns close to the expert standard for each mode, and is more stable than other multi-modal methods.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"194 ","pages":"Article 108098"},"PeriodicalIF":6.3,"publicationDate":"2025-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145087973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
La-LoRA: Parameter-efficient fine-tuning with layer-wise adaptive low-rank adaptation
IF 6.3 | CAS Q1 | Computer Science
Neural Networks | Pub Date: 2025-09-10 | DOI: 10.1016/j.neunet.2025.108095
Jiancheng Gu, Jiabin Yuan, Jiyuan Cai, Xianfa Zhou, Lili Fan
{"title":"La-LoRA: Parameter-efficient fine-tuning with layer-wise adaptive low-rank adaptation","authors":"Jiancheng Gu ,&nbsp;Jiabin Yuan ,&nbsp;Jiyuan Cai ,&nbsp;Xianfa Zhou ,&nbsp;Lili Fan","doi":"10.1016/j.neunet.2025.108095","DOIUrl":"10.1016/j.neunet.2025.108095","url":null,"abstract":"<div><div>Parameter-efficient fine-tuning (PEFT) has emerged as a critical paradigm for adapting large pre-trained models to downstream tasks, offering a balance between computational efficiency and model performance. Among these methods, Low-Rank Adaptation (LoRA) has gained significant popularity due to its efficiency; it freezes the pre-trained weights and decomposes the incremental matrices into two trainable low-rank matrices. However, a critical limitation of LoRA lies in its uniform rank assignment across all layers, which fails to account for the heterogeneous importance of different layers in contributing to task performance, potentially resulting in suboptimal adaptation. To address this limitation, we propose Layer-wise Adaptive Low-Rank Adaptation (La-LoRA), a novel approach that dynamically allocates rank to each layer based on Dynamic Contribution-Driven Parameter Budget (DCDPB) and Truncated Norm Weighted Dynamic Rank Allocation (TNW-DRA) during training. By treating each layer as an independent unit and progressively adjusting its rank allocation, La-LoRA ensures optimal model performance while maintaining computational efficiency and adapting to the complexity of diverse tasks. We conducted extensive experiments across multiple tasks and models to evaluate the effectiveness of La-LoRA. The results demonstrate that La-LoRA consistently outperforms existing benchmarks, validating its effectiveness in diverse scenarios.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"194 ","pages":"Article 108095"},"PeriodicalIF":6.3,"publicationDate":"2025-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145057396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Hybrid aggregation strategy with double inverted residual blocks for lightweight salient object detection
IF 6.3 | CAS Q1 | Computer Science
Neural Networks | Pub Date: 2025-09-10 | DOI: 10.1016/j.neunet.2025.108097
Jianhua Ma, Mingfeng Jiang, Xian Fang, Jiatong Chen, Yaming Wang, Guang Yang
{"title":"Hybrid aggregation strategy with double inverted residual blocks for lightweight salient object detection","authors":"Jianhua Ma ,&nbsp;Mingfeng Jiang ,&nbsp;Xian Fang ,&nbsp;Jiatong Chen ,&nbsp;Yaming Wang ,&nbsp;Guang Yang","doi":"10.1016/j.neunet.2025.108097","DOIUrl":"10.1016/j.neunet.2025.108097","url":null,"abstract":"<div><div>Lightweight salient object detection (SOD) is widely used in various downstream applications due to its low resource requirements and fast inference speed. The use of hybrid encoders offers the potential to achieve a better balance between efficiency and accuracy for SOD task. However, the aggregation of features from convolutional neural networks (CNNs) and transformers remains challenging, and most existing lightweight SOD models rarely explore the efficient aggregation of cross-architecture features derived from hybrid encoders. In this paper, we propose a hybrid aggregation strategy network (HASNet) that balances accuracy and efficiency for lightweight SOD by grouping and aggregating features to leverage salient information across different architectures. Specifically, the features obtained after hybrid encoder processing are divided into convolutional and transformer features for shallow and deep aggregation respectively. Deep aggregation uses the global inverted residual block (GIRB) to facilitate the transfer of salient information encoded within transformer features across various levels. Meanwhile, shallow aggregation uses the lightweight inverted residual block (LIRB) to efficiently integrate the spatial information inherent in convolutional features. The GIRB incorporates an efficient global operation to extract channel semantic information from the high-dimensional transformer features. The LIRB fuses low-level features by efficiently exploiting the spatial information in features at extremely low computational cost. Comprehensive experiments conducted across five datasets demonstrate that our HASNet significantly outperform existing methods in a thorough evaluation encompassing parameter sizes, inference speed, and accuracy. The source code will be publicly available at <span><span>https://github.com/LitterMa-820/HASNet</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"194 ","pages":"Article 108097"},"PeriodicalIF":6.3,"publicationDate":"2025-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145076415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-view learning meets state-space model: A dynamical system perspective
IF 6.3 | CAS Q1 | Computer Science
Neural Networks | Pub Date: 2025-09-09 | DOI: 10.1016/j.neunet.2025.108088
Weibin Chen, Ying Zou, Zhiyong Xu, Li Xu, Shiping Wang
{"title":"Multi-view learning meets state-space model: A dynamical system perspective","authors":"Weibin Chen ,&nbsp;Ying Zou ,&nbsp;Zhiyong Xu ,&nbsp;Li Xu ,&nbsp;Shiping Wang","doi":"10.1016/j.neunet.2025.108088","DOIUrl":"10.1016/j.neunet.2025.108088","url":null,"abstract":"<div><div>Multi-view learning exploits the complementary nature of multiple modalities to enhance performance across diverse tasks. While deep learning has significantly advanced these fields by enabling sophisticated modeling of intra-view and cross-view interactions, many existing approaches still rely on heuristic architectures and lack a principled framework to capture the dynamic evolution of feature representations. This limitation hampers interpretability and theoretical understanding. To address these challenges, this paper introduces the Multi-view State-Space Model (MvSSM), which formulates multi-view representation learning as a continuous-time dynamical system inspired by control theory. In this framework, view-specific features are treated as external inputs, and a shared latent representation evolves as the internal system state, driven by learnable dynamics. This formulation unifies feature integration and label prediction within a single interpretable model, enabling theoretical analysis of system stability and representational transitions. Two variants, MvSSM-Lap and MvSSM-iLap, are further developed using Laplace and inverse Laplace transformations to derive system dynamics representations. These solutions exhibit structural similarities to graph convolution operations in deep networks, supporting efficient feature propagation and theoretical interpretability. Experiments on benchmark datasets such as IAPR-TC12, and ESP demonstrate the effectiveness of the proposed method, achieving up to 4.31 % improvement in accuracy and 4.27 % in F1-score over existing state-of-the-art approaches.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"194 ","pages":"Article 108088"},"PeriodicalIF":6.3,"publicationDate":"2025-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145087832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Dual-level dynamic heterogeneous graph network for video question answering
IF 6.3 | CAS Q1 | Computer Science
Neural Networks | Pub Date: 2025-09-09 | DOI: 10.1016/j.neunet.2025.108094
Zefan Zhang, Yanhui Li, Weiqi Zhang, Tian Bai
{"title":"Dual-level dynamic heterogeneous graph network for video question answering","authors":"Zefan Zhang,&nbsp;Yanhui Li,&nbsp;Weiqi Zhang,&nbsp;Tian Bai","doi":"10.1016/j.neunet.2025.108094","DOIUrl":"10.1016/j.neunet.2025.108094","url":null,"abstract":"<div><div>Recently, Video Question Answering (VideoQA) has garnered considerable research interest as a pivotal task within the realm of vision-language understanding. However, existing Video Question Answering datasets often lack sufficient entity and event information. Thus, the Vision Language Models (VLMs) struggle to complete intricate grounding and reasoning among multi-modal entities or events and heavily rely on language short-cut or irrelevant visual context. To address these challenges, we make improvements from both data and model perspectives. In terms of VideoQA data, we focus on supplementing the missing specific entities and events with the proposed event and entity augmentation strategies. Based on the augmented data, we propose a Dual-Level Dynamic Heterogeneous Graph Network (DDHG) for Video Question Answering. DDHG incorporates transformer layers to capture the dynamic temporal-spatial changes of visual entities. Then, DDHG establishes multi-modal semantic grounding ability between vision and text with entity-level and event-level heterogeneous graphs. Finally, the Dual-level Cross-modal Interaction Module integrates the dual-level features to predict correct answers. Our method not only significantly outperforms existing VideoQA models on two complex event-based benchmark datasets (Causal-VidQA and NExT-QA) but also demonstrates superior event content prediction ability over several state-of-the-art approaches.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"194 ","pages":"Article 108094"},"PeriodicalIF":6.3,"publicationDate":"2025-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145057302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
ClickAttention: Click region similarity guided interactive segmentation
IF 6.3 | CAS Q1 | Computer Science
Neural Networks | Pub Date: 2025-09-09 | DOI: 10.1016/j.neunet.2025.108090
Long Xu, Yongquan Chen, Shanghong Li, Junkang Chen, Ziyuan Tang
{"title":"ClickAttention: Click region similarity guided interactive segmentation","authors":"Long Xu ,&nbsp;Yongquan Chen ,&nbsp;Shanghong Li ,&nbsp;Junkang Chen ,&nbsp;Ziyuan Tang","doi":"10.1016/j.neunet.2025.108090","DOIUrl":"10.1016/j.neunet.2025.108090","url":null,"abstract":"<div><div>Interactive segmentation algorithms based on click points have attracted significant attention from researchers in recent years. However, most existing methods rely on sparse click maps as model inputs to segment specific target objects. These clicks primarily affect local regions, limiting the model’s ability to focus on the entire target object and often resulting in a higher number of required clicks. Additionally, many current algorithms struggle to balance performance and efficiency effectively. To address these challenges, we propose a click attention algorithm that expands the influence of positive clicks by leveraging the similarity between positively-clicked regions and the entire input. We further introduce a discriminative affinity loss to reduce attention coupling between positive and negative click regions, minimizing accuracy degradation caused by mutual interference. On the DAVIS dataset, our method achieves a 2 % performance gain (NoC@90) over the state-of-the-art SimpleClick-ViT-L, while using only 15.6 % of its parameters. Extensive experiments demonstrate that our approach outperforms existing methods and achieves state-of-the-art performance with fewer parameters. <span><span>Data and code</span><svg><path></path></svg></span> are published.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"194 ","pages":"Article 108090"},"PeriodicalIF":6.3,"publicationDate":"2025-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145088042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-modal orthogonal fusion network via cross-layer guidance for Alzheimer's disease diagnosis
IF 6.3 | CAS Q1 | Computer Science
Neural Networks | Pub Date: 2025-09-08 | DOI: 10.1016/j.neunet.2025.108091
Yumiao Zhao, Bo Jiang, Yuan Chen, Ye Luo, Jin Tang
{"title":"Multi-modal orthogonal fusion network via cross-layer guidance for Alzheimer’s disease diagnosis","authors":"Yumiao Zhao ,&nbsp;Bo Jiang ,&nbsp;Yuan Chen ,&nbsp;Ye Luo ,&nbsp;Jin Tang","doi":"10.1016/j.neunet.2025.108091","DOIUrl":"10.1016/j.neunet.2025.108091","url":null,"abstract":"<div><div>Multi-modal neuroimaging techniques are widely employed for the accurate diagnosis of Alzheimer’s Disease (AD). Existing fusion methods typically focus on capturing semantic correlations between modalities through feature-level interactions. However, they fail to suppress redundant cross-modal information, resulting in sub-optimal multi-modal representation. Moreover, these methods ignore subject-specific differences in modality contributions. To address these challenges, we propose a novel Multi-modal Orthogonal Fusion Network via cross-layer guidance (MOFNet) to effectively fuse multi-modal information for AD diagnosis. We first design a Cross-layer Guidance Interaction module (CGI), leveraging high-level features to guide the learning of low-level features, thereby enhancing the fine-grained representations on disease-relevant regions. Then, we introduce a Multi-modal Orthogonal Compensation module (MOC) to realize bidirectional interaction between modalities. MOC encourages each modality to compensate for its limitations by learning orthogonal components from other modalities. Finally, a Feature Enhancement Fusion module (FEF) is developed to adaptively fuse multi-modal features based on the contributions of different modalities. Extensive experiments on the ADNI dataset demonstrate that MOFNet achieves superior performance in AD classification tasks.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"194 ","pages":"Article 108091"},"PeriodicalIF":6.3,"publicationDate":"2025-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145050657","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
HMT-DTI: Hierarchical meta-path learning with transformer for drug–target interaction prediction
IF 6.3 | CAS Q1 | Computer Science
Neural Networks | Pub Date: 2025-09-08 | DOI: 10.1016/j.neunet.2025.108093
Dianlei Gao, Fei Zhu
{"title":"HMT-DTI: Hierarchical meta-path learning with transformer for drug–target interaction prediction","authors":"Dianlei Gao,&nbsp;Fei Zhu","doi":"10.1016/j.neunet.2025.108093","DOIUrl":"10.1016/j.neunet.2025.108093","url":null,"abstract":"<div><div>Drug–target interaction (DTI) prediction plays a crucial role in drug discovery and repurposing by efficiently and accurately identifying potential therapeutic targets. Existing methods face challenges in capturing high-order semantic relationships in heterogeneous graphs and effectively integrating multi-meta-path information while also suffering from low computational efficiency. To address these challenges, a pre-computation-style hierarchical meta-path learning framework named HMT-DTI is proposed. HMT-DTI can effectively capture rich semantic information about drugs and targets while ensuring high computational efficiency. Specifically, during the pre-collection stage, HMT-DTI employs a Transformer-based message passing mechanism to evaluate neighbors’ importance and adaptively collect meta-path information. The incorporation of even-relation propagation reduces redundant iterations and improves efficiency. During training, HMT-DTI adopts a hierarchical knowledge extraction strategy to evaluate the importance of multi-hop neighbors and different meta-path patterns, capturing fine-grained semantic representations of drugs and targets. HMT-DTI is evaluated on three heterogeneous biological datasets and compared with several state-of-the-art methods. The results demonstrate the superiority of HMT-DTI in DTI prediction.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"194 ","pages":"Article 108093"},"PeriodicalIF":6.3,"publicationDate":"2025-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145050675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0