Neural Networks - Latest Publications

Information-theoretic complementary prompts for improved continual text classification
IF 6.0 · CAS Q1 · Computer Science
Neural Networks · Pub Date: 2025-06-09 · DOI: 10.1016/j.neunet.2025.107676
Duzhen Zhang, Yong Ren, Chenxing Li, Dong Yu, Tielin Zhang
Abstract: Continual Text Classification (CTC) aims to continuously classify new text data over time while minimizing catastrophic forgetting of previously acquired knowledge. However, existing methods often focus on task-specific knowledge, overlooking the importance of shared, task-agnostic knowledge. Inspired by the complementary learning systems theory, which posits that humans learn continually through the interaction of two systems — the hippocampus, responsible for forming distinct representations of specific experiences, and the neocortex, which extracts more general and transferable representations from past experiences — we introduce Information-Theoretic Complementary Prompts (InfoComp), a novel approach for CTC. InfoComp explicitly learns two distinct prompt spaces, P(rivate)-Prompt and S(hared)-Prompt, which respectively encode task-specific and task-invariant knowledge, enabling models to sequentially learn classification tasks without relying on data replay. To promote more informative prompt learning, InfoComp uses an information-theoretic framework that maximizes mutual information between different parameters (or encoded representations). Within this framework, we design two novel loss functions: (1) to strengthen the accumulation of task-specific knowledge in P-Prompt, effectively mitigating catastrophic forgetting, and (2) to enhance the retention of task-invariant knowledge in S-Prompt, improving forward knowledge transfer. Extensive experiments on diverse CTC benchmarks show that our approach outperforms previous state-of-the-art methods.
Volume 190, Article 107676.
Citations: 0
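The abstract describes two prompt spaces trained with mutual-information objectives but gives no implementation detail. The sketch below is a minimal, generic dual-prompt classifier with an InfoNCE-style mutual-information surrogate: the class and function names (DualPromptClassifier, info_nce), the choice of InfoNCE, and the assumption that the frozen encoder accepts pre-computed token embeddings are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of a dual-prompt setup in the spirit of InfoComp, NOT the
# authors' implementation. Names and the use of InfoNCE as the mutual-
# information surrogate are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualPromptClassifier(nn.Module):
    def __init__(self, encoder, dim, prompt_len, num_tasks, num_classes):
        super().__init__()
        self.encoder = encoder  # frozen pre-trained encoder (assumed to take embeddings)
        # One private prompt per task (task-specific knowledge).
        self.p_prompts = nn.Parameter(torch.randn(num_tasks, prompt_len, dim) * 0.02)
        # One shared prompt reused by every task (task-invariant knowledge).
        self.s_prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, token_embeds, task_id):
        # Prepend [shared; private] prompts to the token embeddings.
        b = token_embeds.size(0)
        p = self.p_prompts[task_id].unsqueeze(0).expand(b, -1, -1)
        s = self.s_prompt.unsqueeze(0).expand(b, -1, -1)
        h = self.encoder(torch.cat([s, p, token_embeds], dim=1))  # (b, L, dim)
        pooled = h.mean(dim=1)
        return self.head(pooled), pooled

def info_nce(anchor, positive, temperature=0.1):
    """InfoNCE lower bound on the mutual information between two views."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.t() / temperature          # (b, b) similarity matrix
    labels = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, labels)
```

In training on task t, the task cross-entropy loss would be combined with InfoNCE terms tying the pooled representation to that task's private prompt and to the shared prompt, roughly approximating the two mutual-information losses described in the abstract.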
Relation prediction in knowledge graphs: A self-organizing neural network approach
IF 6.0 · CAS Q1 · Computer Science
Neural Networks · Pub Date: 2025-06-09 · DOI: 10.1016/j.neunet.2025.107679
Budhitama Subagdja, D. Shanthoshigaa, Ah-Hwee Tan
Abstract: Knowledge graphs (KGs) in specialized domains frequently suffer from incomplete information. While current relation prediction methods for KG completion typically rely on neural network-based representation learning, we present KG2ART, a novel self-organizing neural network that employs a fundamentally different approach. KG2ART performs parallel inference over the graph structure through bidirectional interactions between bottom-up activations and top-down pattern matching, conducting relation prediction without representation learning. Our comprehensive evaluation across five diverse KGs (Nations, UMLS, Kinship, CoDEx-M, and a jet engine technical KG) demonstrates that KG2ART consistently outperforms state-of-the-art baselines (TuckER, ComplEX, RESCAL, ConvE, CompGCN) in prediction accuracy. The model achieves particularly strong results on standard benchmarks, with Hits@1 scores exceeding 90% for Nations and 60% for CoDEx-M. Remarkably, KG2ART attains these superior accuracy results while also being among the fastest models for both training and prediction across all datasets.
Volume 190, Article 107679.
Citations: 0
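KG2ART builds on the Adaptive Resonance Theory (ART) family, where prediction combines a bottom-up choice function with a top-down vigilance (match) test rather than gradient-based representation learning. The following is a generic fuzzy-ART-style illustration of that mechanism applied to a (head, tail) pair; the feature encoding, parameter values, and function names are assumptions, and this is not the authors' KG2ART implementation.

```python
# Illustrative sketch of fuzzy-ART-style bottom-up choice and top-down match,
# the mechanism family KG2ART builds on; generic fuzzy ART over complement-
# coded (head, tail) entity features, not the authors' code.
import numpy as np

ALPHA, RHO = 0.001, 0.75     # choice parameter and vigilance (assumed values)

def complement_code(x):
    return np.concatenate([x, 1.0 - x])

def predict_relation(head_vec, tail_vec, templates, relations):
    """templates: list of learned weight vectors; relations: label per template."""
    x = complement_code(np.concatenate([head_vec, tail_vec]))
    best, best_T = None, -1.0
    for j, w in enumerate(templates):
        overlap = np.minimum(x, w)                 # fuzzy AND of input and template
        T = overlap.sum() / (ALPHA + w.sum())      # bottom-up choice (activation)
        if T > best_T and overlap.sum() / x.sum() >= RHO:  # top-down match (resonance)
            best, best_T = j, T
    return relations[best] if best is not None else None
```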
A hyper-heuristic enhanced neuro-evolutionary algorithm with self-adaptive operators and various activation functions for classification problems
IF 6.0 · CAS Q1 · Computer Science
Neural Networks · Pub Date: 2025-06-09 · DOI: 10.1016/j.neunet.2025.107751
Fehmi Burcin Ozsoydan, İlker Gölcük, Esra Duygu Durmaz
Abstract: Due to their remarkable generalization capabilities, Artificial Neural Networks (ANNs) have attracted the attention of researchers and practitioners. ANNs have two main stages, namely training and testing, and the training stage aims to find optimum synapse values. Traditional gradient-descent-based approaches offer notable advantages in training ANNs; nevertheless, they are subject to limitations such as convergence and local-minima issues, so stochastic search algorithms are commonly employed. In this regard, the present study adopts and further extends two evolutionary search strategies, the Differential Evolution (DE) algorithm and the Self-Adaptive Differential Evolution (SaDE) algorithm, for training ANNs. First, the self-adaptation procedures are modified to perform a more effective search and to avoid local optima. Second, the mutation operations in these algorithms are reinforced by a hyper-heuristic framework, which selects low-level heuristics out of a heuristics pool based on their achievements throughout the search. Thus, promising mechanisms are invoked more frequently by the proposed approach, while naïve operators are invoked less frequently; this implicitly avoids greedy behavior in selecting low-level heuristics and attempts to overcome loss-of-diversity and local-optima issues. Moreover, due to possible complexity and nonlinearity in the mapping between inputs and outputs, the proposed method is also tested with various activation functions. Thus, a hyper-heuristic enhanced neuro-evolutionary algorithm with self-adaptive operators is introduced. Finally, the performances of all methods with various activation functions are tested on well-known classification problems. Statistically verified computational results point out significant differences among the algorithms and the activation functions used.
Volume 190, Article 107751.
Citations: 0
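The abstract uses Differential Evolution as the base trainer for ANN weights. As background, the sketch below shows plain DE/rand/1/bin optimizing the flattened weights of a one-hidden-layer MLP against classification error; the network size, operator parameters (F, CR), and function names are illustrative assumptions. The paper's actual contribution, self-adaptive operators chosen by a hyper-heuristic layer, sits on top of this loop and is not reproduced here.

```python
# Minimal sketch of DE/rand/1/bin applied to the weights of a tiny MLP, the
# basic neuro-evolutionary setup this paper extends.
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(w, X, n_in, n_hid, n_out):
    """Unpack a flat weight vector and run a one-hidden-layer MLP."""
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = w[i:i + n_out]
    h = np.tanh(X @ W1 + b1)      # the activation function is one design choice studied
    return h @ W2 + b2

def fitness(w, X, y, shapes):
    logits = mlp_forward(w, X, *shapes)
    return np.mean(logits.argmax(axis=1) != y)    # classification error to minimize

def de_train(X, y, shapes, pop_size=30, gens=200, F=0.5, CR=0.9):
    n_in, n_hid, n_out = shapes
    dim = n_in * n_hid + n_hid + n_hid * n_out + n_out
    pop = rng.normal(0, 1, (pop_size, dim))
    fit = np.array([fitness(ind, X, y, shapes) for ind in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = a + F * (b - c)                    # DE/rand/1 mutation
            cross = rng.random(dim) < CR
            trial = np.where(cross, mutant, pop[i])     # binomial crossover
            f_trial = fitness(trial, X, y, shapes)
            if f_trial <= fit[i]:                       # greedy selection
                pop[i], fit[i] = trial, f_trial
    return pop[fit.argmin()]
```

In the full method, F, CR, and the mutation strategy itself would be adapted online, and the hyper-heuristic would pick among several low-level mutation operators according to their recent success.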
INN/ENNS/JNNS - Membership Applic. Form
IF 6.0 · CAS Q1 · Computer Science
Neural Networks · Pub Date: 2025-06-07 · DOI: 10.1016/S0893-6080(25)00602-1
Volume 189, Article 107722.
Citations: 0
Node transfer with graph contrastive learning for class-imbalanced node classification
IF 6.0 · CAS Q1 · Computer Science
Neural Networks · Pub Date: 2025-06-07 · DOI: 10.1016/j.neunet.2025.107674
Yangding Li, Xiangchao Zhao, Yangyang Zeng, Hao Feng, Jiawei Chai, Hao Xie, Shaobin Fu, Shichao Zhang
Abstract: In graph representation learning, the class imbalance problem is a significant challenge that has received much attention from academics. Although current approaches have shown promising results, they have not adequately addressed the problems of node quantity imbalance and feature space imbalance in datasets. This research presents a node transfer with graph contrastive learning framework (NT-GCL) that aims to improve the representation capabilities of graph neural networks for minority-class nodes by balancing node quantity and feature space distributions. First, the proposed node transfer algorithm redistributes misclassified nodes from majority classes to achieve a balanced distribution of node quantity and feature space. This approach effectively prevents the feature space of minority classes from being compressed by majority classes during information propagation, further mitigating potential imbalance issues. Subsequently, a self-supervised contrastive learning strategy is employed to train the model without relying on labels, reducing the bias introduced by labeled data. Experiments conducted with various encoders on six public datasets demonstrate that NT-GCL exhibits strong competitiveness in class-imbalanced node classification.
Volume 190, Article 107674.
Citations: 0
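The self-supervised part of NT-GCL relies on a graph contrastive objective between two augmented views of each node. Below is a minimal sketch of a standard NT-Xent loss for that purpose; the function name, temperature, and the assumption that both views index the same nodes are illustrative, and the node-transfer step that rebalances classes is not shown.

```python
# Minimal sketch of a graph contrastive (NT-Xent) objective between two
# augmented views of the same nodes, a standard ingredient of frameworks
# like NT-GCL; not the authors' implementation.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """z1, z2: (N, d) embeddings of the same N nodes under two graph augmentations."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    n = z1.size(0)
    z = torch.cat([z1, z2], dim=0)              # (2N, d)
    sim = z @ z.t() / temperature               # pairwise similarities as logits
    sim.fill_diagonal_(float('-inf'))           # exclude self-similarity
    # The positive of node i in view 1 is node i in view 2, and vice versa.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)
```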
Finite time dynamic analysis of memristor-based fuzzy NNs with inertial term: Nonreduced-order approach
IF 6.0 · CAS Q1 · Computer Science
Neural Networks · Pub Date: 2025-06-07 · DOI: 10.1016/j.neunet.2025.107672
Yuxin Jiang, Song Zhu, Mouquan Shen, Shiping Wen, Chaoxu Mu
Abstract: The finite-time synchronization (FTS) of memristor-based fuzzy neural networks with inertial term (MFINNs) is studied in this paper. In order to enhance the performance, efficiency, and adaptability of the system to complex application scenarios, the memristor and inertial term are considered in the fuzzy neural network (FNN). Different from corresponding research on exponential/asymptotic synchronization, the FTS of MFINNs is investigated for the first time. This work directly analyzes the second-order system via a nonreduced-order approach, which better reflects the second-order system because no important kinetic information is lost. Subsequently, fuzzy state-feedback and adaptive control schemes are constructed to guarantee the FTS of MFINNs. Algebraic conditions on the FTS of MFINNs are obtained by selecting a suitable Lyapunov–Krasovskii functional. Finally, a numerical simulation is presented to substantiate the advantages of the proposed results, and comparisons with the latest methods are demonstrated.
Volume 190, Article 107672.
Citations: 0
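For orientation, a generic memristor-based fuzzy inertial neural network in this line of work takes a form such as the one below; the exact system, delays, and assumptions used in the paper may differ.

```latex
\ddot{x}_i(t) = -a_i\,\dot{x}_i(t) - b_i\,x_i(t)
  + \sum_{j=1}^{n} c_{ij}\bigl(x_i(t)\bigr)\, f_j\bigl(x_j(t)\bigr)
  + \bigwedge_{j=1}^{n} \alpha_{ij}\, f_j\bigl(x_j(t-\tau(t))\bigr)
  + \bigvee_{j=1}^{n} \beta_{ij}\, f_j\bigl(x_j(t-\tau(t))\bigr) + I_i,
  \qquad i = 1,\dots,n.
```

Here the second derivative is the inertial term, the c_{ij}(·) are memristive (state-dependent) connection weights, and the big wedge and vee denote the fuzzy AND and OR template operations. The nonreduced-order approach analyzes this second-order system directly instead of converting it to a first-order system through variable substitution.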
Private image synthesis of latent diffusion model with the ciphertext of prompt
IF 6.0 · CAS Q1 · Computer Science
Neural Networks · Pub Date: 2025-06-07 · DOI: 10.1016/j.neunet.2025.107678
Guanghui He, Yanli Ren, Gaojian Li, Guorui Feng, Xinpeng Zhang
Abstract: Artificial intelligence generated content (AIGC) has made significant strides in enabling users to create various kinds of realistic visual content, such as images, videos, and audio. Diffusion models have shown promise in generating higher-quality images from user-supplied prompts. However, the prompts and the generated images may contain private information. To address these concerns, this paper proposes a privacy-preserving image synthesis protocol that operates on the ciphertext of the prompt. Specifically, we employ differentially private stochastic gradient descent (DP-SGD) to update the parameters within the UNet rather than the entire set of model parameters, protecting the privacy of the generated image without decreasing its quality. To ensure prompt confidentiality, we utilize functional encryption and design a secure cross-attention for subsequent propagation. Furthermore, theoretical analysis and experimental results show that images generated under ciphertext prompts achieve differential privacy and are almost identical to those generated under plaintext prompts.
Volume 190, Article 107678.
Citations: 0
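The privacy of the generated images rests on DP-SGD, i.e., per-sample gradient clipping followed by calibrated Gaussian noise during UNet fine-tuning. Below is a minimal, framework-agnostic sketch of one such update step; the function name, the way per-sample gradients are supplied, and the hyperparameter values are illustrative assumptions, and the functional-encryption and secure cross-attention components are not shown.

```python
# Minimal sketch of one DP-SGD step: clip each sample's gradient, sum,
# add Gaussian noise, then apply the averaged noisy update.
import torch

def dp_sgd_step(params, per_sample_grads, lr=1e-4, clip_norm=1.0, noise_mult=1.0):
    """per_sample_grads: list over samples, each a list of tensors matching params."""
    batch = len(per_sample_grads)
    summed = [torch.zeros_like(p) for p in params]
    for grads in per_sample_grads:
        # Clip the sample's full gradient to L2 norm <= clip_norm.
        total = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (total + 1e-6)).clamp(max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)
    for p, s in zip(params, summed):
        # Noise standard deviation is noise_mult * clip_norm, as in standard DP-SGD.
        noise = torch.randn_like(p) * noise_mult * clip_norm
        p.data.add_(-(lr / batch) * (s + noise))
```

The privacy accounting (epsilon, delta) over the whole fine-tuning run is a separate step and depends on the noise multiplier, batch size, and number of iterations.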
Collaborate large and small language models for multi-modal emergency rumor detection
IF 6.0 · CAS Q1 · Computer Science
Neural Networks · Pub Date: 2025-06-07 · DOI: 10.1016/j.neunet.2025.107625
Youcheng Yan, Jinshuo Liu, Juan Deng, Junyan Li, Lina Wang, Jeff Z. Pan
Abstract: Multi-modal emergency rumors are spreading in the current digital era, causing significant disruptions and negative impacts. Most existing methods explore rumor detection with either small language models (SLMs) or large language models (LLMs) alone, achieving a certain degree of success but with underlying issues. Approaches based on SLMs have reached a bottleneck due to their limited knowledge and capacity. In contrast, LLMs have unique strengths in deep analysis that compensate for the weaknesses of SLMs; however, they struggle to select and integrate analyses to draw appropriate conclusions. Furthermore, recent work on multi-modal feature fusion remains superficial, limiting the ability of these models to fully comprehend and identify rumors. In this work, we propose Collaborate Large and Small Language Models for Multi-Modal Emergency Rumor Detection (M2ERD), which consists of two main components. First, LLMs generate multi-dimensional rationales based on multi-perspective prompts, from which SLMs selectively derive insights for rumor detection. Second, a multi-source cross-modal penetration fusion network not only accomplishes unidirectional fusion of auxiliary information such as the multi-dimensional rationales but also achieves complete mutual complementation between text and image. Comprehensive experiments demonstrate the effectiveness of M2ERD for rumor detection on the Weibo, RumorEval, and Pheme datasets, achieving a 2.6% improvement in accuracy and a 1.9% improvement in F1-score over all baselines. The code and data are released at https://github.com/youchengyan/M2ERD.
Volume 190, Article 107625.
Citations: 0
CURRENT EVENTS
IF 6.0 · CAS Q1 · Computer Science
Neural Networks · Pub Date: 2025-06-07 · DOI: 10.1016/S0893-6080(25)00601-X
Volume 189, Article 107721.
Citations: 0
Data prioritization aware resource allocation in internet of vehicles using multi-agent deep reinforcement learning
IF 6.0 · CAS Q1 · Computer Science
Neural Networks · Pub Date: 2025-06-06 · DOI: 10.1016/j.neunet.2025.107671
Cong Wang, Yingshan Guan, Sancheng Peng, Hao Chen, Guorui Li
Abstract: Intelligent transportation systems (ITS) face limited spectral resources and stringent real-time communication requirements. How to effectively allocate system resources to maximize performance in the Internet of Vehicles (IoV) remains a substantial challenge, particularly when the priority and urgency of different types of data must be taken into account. To improve the allocation of spectrum resources and optimize transmission power while accounting for the dynamic characteristics of vehicles and data priorities, we design a time-series-based multi-agent deep reinforcement learning framework (NL-MAPPO for short) in this paper. First, we formulate the joint optimization problem as a multi-agent Markov decision process that minimizes transmission delays and energy consumption while the total vehicle-to-vehicle (V2V) link capacity is maximized. Here, V2V link capacity refers to the maximum achievable data rate for direct communication between vehicles, which depends on factors such as signal strength, interference, and available bandwidth. Then, we design a multi-agent resource allocation algorithm based on a shared-critic mechanism to realize global sharing of channel information and solve the optimization problem. Finally, to improve efficiency, we introduce a time-series-based channel information extraction mechanism to capture the temporal characteristics of the channel information. Simulation experiments were conducted, and the results demonstrate that the proposed NL-MAPPO is superior across multiple metrics.
Volume 190, Article 107671.
Citations: 0
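The V2V link capacity defined in the abstract corresponds to the Shannon capacity of an interference-limited link. A small illustration is given below, with arbitrary example numbers rather than values from the paper.

```python
# Illustration of the V2V link capacity notion used in the abstract: the
# Shannon capacity of a link given its bandwidth and SINR.
import math

def v2v_capacity(bandwidth_hz, signal_w, interference_w, noise_w):
    """Maximum achievable data rate in bits per second for one V2V link."""
    sinr = signal_w / (interference_w + noise_w)
    return bandwidth_hz * math.log2(1.0 + sinr)

# Example: a 1 MHz channel at 10 dB SINR gives roughly 3.46 Mbit/s.
print(v2v_capacity(1e6, 10.0, 0.5, 0.5))
```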