Neural Networks: Latest Publications

Inference of hidden common driver dynamics by anisotropic self-organizing neural networks
IF 6.3, CAS Tier 1, Computer Science
Neural Networks Pub Date : 2025-09-13 DOI: 10.1016/j.neunet.2025.108113
Zsigmond Benkő , Marcell Stippinger , Attila Bencze , Fülöp Bazsó , András Telcs , Zoltán Somogyvári
{"title":"Inference of hidden common driver dynamics by anisotropic self-organizing neural networks","authors":"Zsigmond Benkő ,&nbsp;Marcell Stippinger ,&nbsp;Attila Bencze ,&nbsp;Fülöp Bazsó ,&nbsp;András Telcs ,&nbsp;Zoltán Somogyvári","doi":"10.1016/j.neunet.2025.108113","DOIUrl":"10.1016/j.neunet.2025.108113","url":null,"abstract":"<div><div>We introduce the Anisotropic Self-Organizing Map (ASOM), a novel neural network-based approach for inferring hidden common drivers in nonlinear dynamical systems from observed time series. Grounded in topological theorems, our method integrates time-delay embedding, intrinsic dimension estimation, and a new anisotropic training scheme for Kohonen’s self-organizing map, enabling the precise decomposition of attractor manifolds into autonomous and shared components of the dynamics. We validated ASOM through simulations involving chaotic maps, where two driven systems were influenced by a hidden nonlinear driver. The inferred time series showed a strong correlation with the actual hidden common driver, unlike the observed systems. We further compared our reconstruction performance against several established methods for identifying shared features in time series, including PCA, kernel PCA, ICA, dynamical component analysis, canonical correlation analysis, deep canonical correlation analysis, traditional self-organizing map, and recent recurrence-based approaches. Our results demonstrate ASOM’s superior accuracy and robustness in recovering latent dynamics, providing a powerful tool for unsupervised learning of hidden causal structures in complex systems.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"194 ","pages":"Article 108113"},"PeriodicalIF":6.3,"publicationDate":"2025-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145119934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
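The time-delay embedding step named in the abstract above is a standard construction (Takens-style delay coordinates). The snippet below is a minimal NumPy sketch of that single step, not the authors' ASOM pipeline; the embedding dimension, delay, and the synthetic series are illustrative assumptions.

```python
import numpy as np

def delay_embed(x, dim=3, tau=2):
    """Stack delayed copies of a 1-D series into delay-coordinate vectors."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau : i * tau + n] for i in range(dim)], axis=1)

# Toy observable standing in for one of the driven systems.
t = np.arange(2000)
series = np.sin(0.07 * t) + 0.3 * np.sin(0.011 * t) + 0.05 * np.random.randn(len(t))
points = delay_embed(series, dim=3, tau=5)
print(points.shape)  # (1990, 3): delay vectors ready for manifold / SOM analysis
```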
MINIGE-MNER: A multi-stage interaction network inspired by gene editing for multimodal named entity recognition
IF 6.3, CAS Tier 1, Computer Science
Neural Networks Pub Date : 2025-09-12 DOI: 10.1016/j.neunet.2025.108106
Bo Kong , Shengquan Liu , Liruizhi Jia , Yi Liang , Dongfang Han , Xu Zhang
{"title":"MINIGE-MNER: A multi-stage interaction network inspired by gene editing for multimodal named entity recognition","authors":"Bo Kong ,&nbsp;Shengquan Liu ,&nbsp;Liruizhi Jia ,&nbsp;Yi Liang ,&nbsp;Dongfang Han ,&nbsp;Xu Zhang","doi":"10.1016/j.neunet.2025.108106","DOIUrl":"10.1016/j.neunet.2025.108106","url":null,"abstract":"<div><div>Multimodal Named Entity Recognition (MNER) integrates complementary information from both text and images to identify named entities within text. However, existing methods face three key issues: imbalanced handling of modality noise, the cascading effect of semantic mismatch, and information loss resulting from the lack of text dominance. To address these issues, this paper proposes a <strong>M</strong>ulti-stage <strong>I</strong>nteraction <strong>N</strong>etwork <strong>I</strong>nspired by <strong>G</strong>ene <strong>E</strong>diting for <strong>MNER</strong>(MINIGE-MNER). The core innovations of this method include: A gene knockout module based on the variational information bottleneck, which removes inferior genes (modality noise) from the text, raw image, and generated image features. This approach retains the superior genes, achieving balanced filtering of modality noise. A determination of gene recombination sites module that maximizes the mutual information between superior genes across modalities, reducing the spatial distance between them and ensuring precise, fine-grained semantic alignment. This helps to prevent the cascading effect of semantic mismatch. A text-guided gene recombination module that implements a “text-dominant, vision-supplementary” cross-modal fusion paradigm. This module dynamically filters out visual noise unrelated to the text while avoiding excessive reliance on visual information that could obscure the unique contextual information of the text, effectively mitigating information loss. Experimental results show that MINIGE-MNER achieves F1 scores of 76.45 % and 88.67 % on the Twitter-2015 and Twitter-2017 datasets, respectively, outperforming existing state-of-the-art methods by 0.83 % and 0.42 %. In addition, this paper presents comprehensive experiments that demonstrate the superiority of MINIGE-MNER and the effectiveness of its individual modules.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"194 ","pages":"Article 108106"},"PeriodicalIF":6.3,"publicationDate":"2025-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145087873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
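The "gene knockout" filtering above is described as being based on the variational information bottleneck. The sketch below shows a generic VIB layer in PyTorch (Gaussian encoding, reparameterization, KL penalty toward a standard normal); the layer sizes, batch size, and KL weight are illustrative assumptions, and this is not the paper's module.

```python
import torch
import torch.nn as nn

class VIBFilter(nn.Module):
    """Generic variational information bottleneck over a feature vector."""
    def __init__(self, in_dim=768, bottleneck=128):
        super().__init__()
        self.mu = nn.Linear(in_dim, bottleneck)
        self.logvar = nn.Linear(in_dim, bottleneck)

    def forward(self, x):
        mu, logvar = self.mu(x), self.logvar(x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        # KL(q(z|x) || N(0, I)) is the compression term that discards modality noise.
        kl = 0.5 * (logvar.exp() + mu.pow(2) - 1.0 - logvar).sum(dim=-1).mean()
        return z, kl

vib = VIBFilter()
features = torch.randn(4, 768)        # e.g. pooled text features for a batch of 4
z, kl = vib(features)
ner_loss = torch.tensor(0.0)          # stand-in for the downstream NER loss
total = ner_loss + 1e-3 * kl          # beta-weighted bottleneck penalty
print(z.shape, kl.item())
```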
A vision-language model for multitask classification of memes
IF 6.3, CAS Tier 1, Computer Science
Neural Networks Pub Date : 2025-09-12 DOI: 10.1016/j.neunet.2025.108089
Md. Mithun Hossain , Md. Shakil Hossain , M.F. Mridha , Nilanjan Dey
{"title":"A vision-language model for multitask classification of memes","authors":"Md. Mithun Hossain ,&nbsp;Md. Shakil Hossain ,&nbsp;M.F. Mridha ,&nbsp;Nilanjan Dey","doi":"10.1016/j.neunet.2025.108089","DOIUrl":"10.1016/j.neunet.2025.108089","url":null,"abstract":"<div><div>The emergence of social media and online memes has led to an increasing demand for automated systems that can analyse and classify multimodal data, particularly in online forums. Memes blend text and graphics to express complicated ideas, sometimes containing emotions, satire, or inappropriate material. Memes often represent cultural prejudices such as objectification, sexism, and bigotry, making it difficult for artificial intelligence to classify these components. Our solution is the vision-language model ViT-BERT CAMT (cross-attention multitask), which is intended for multitask meme categorization. Our model uses a linear self-attentive fusion mechanism to combine vision transformer (ViT) features for image analysis and bidirectional encoder representations from transformers (BERT) for text interpretation. In this way, we can see how text and images relate to space and meaning. We tested the ViT-BERT CAMT on two difficult datasets: the SemEval 2020 Memotion dataset, which contains a multilabel classification of sentiment, sarcasm, and offensiveness in memes, and the MIMIC dataset, which focuses on detecting sexism, objectification, and prejudice. The findings show that the ViT-BERT CAMT achieves good accuracy on both datasets and outperforms many current baselines in multitask settings. These results highlight the importance of combined image-text modelling for correctly deciphering nuanced meanings in memes, particularly when spotting abusive and discriminatory content. By improving multimodal categorization algorithms, this study helps better monitor and comprehend online conversation.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"194 ","pages":"Article 108089"},"PeriodicalIF":6.3,"publicationDate":"2025-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145082307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
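The model's core fusion is cross-attention between ViT image tokens and BERT text tokens. The sketch below is a minimal PyTorch cross-attention block in which text queries attend over image keys/values; the dimensions, head count, and residual/norm arrangement are illustrative assumptions, and the paper's linear self-attentive fusion and multitask heads are omitted.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Text tokens (queries) attend over image patch tokens (keys/values)."""
    def __init__(self, dim=768, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text_tokens, image_tokens):
        fused, _ = self.attn(query=text_tokens, key=image_tokens, value=image_tokens)
        return self.norm(text_tokens + fused)   # residual + norm, transformer-style

fusion = CrossAttentionFusion()
text = torch.randn(2, 32, 768)     # e.g. BERT token embeddings (batch, seq, dim)
image = torch.randn(2, 197, 768)   # e.g. ViT patch embeddings (batch, patches+CLS, dim)
out = fusion(text, image)
print(out.shape)                   # (2, 32, 768): text tokens enriched with visual context
```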
SPC: Self-supervised point cloud completion
IF 6.3, CAS Tier 1, Computer Science
Neural Networks Pub Date : 2025-09-12 DOI: 10.1016/j.neunet.2025.108107
Jie Song , Xing Wu , Junfeng Yao , Qi Zhang , Chenhao Shang , Quan Qian , Jun Song
{"title":"SPC: Self-supervised point cloud completion","authors":"Jie Song ,&nbsp;Xing Wu ,&nbsp;Junfeng Yao ,&nbsp;Qi Zhang ,&nbsp;Chenhao Shang ,&nbsp;Quan Qian ,&nbsp;Jun Song","doi":"10.1016/j.neunet.2025.108107","DOIUrl":"10.1016/j.neunet.2025.108107","url":null,"abstract":"<div><div>Shape incompleteness is a common issue in point clouds acquired by depth sensors. Point cloud completion aims to restore partial point clouds to their complete form. However, most existing point cloud completion methods rely on complete point clouds or multi-view information of the same object during training, which is not practical for real-world scenarios with high information acquisition costs. To overcome the above limitation, a self-supervised point cloud completion (SPC) method is proposed, which uses the training set consisting of only a single partial point cloud for each object. Specifically, an autoencoder-like network architecture that includes a two-step strategy is developed. First, a compression-reconstruction strategy is proposed to enable the network to learn the representation of complete point clouds from existing knowledge. Then, considering the potential problem of overfitting in self-supervised training, a global enhancement strategy is further designed to maintain the positional coherence of predicted points. Comprehensive experiments are conducted on the ScanNet, MatterPort3D, KITTI, and ShapeNet datasets. On real-world datasets, the unidirectional Chamfer distance (UCD) and the unidirectional Hausdorff distance (UHD) of the method are reduced by an average of 2.3 and 2.4, respectively, compared to the state-of-the-art method. In addition to its excellent completion capabilities, the proposed method has a positive impact on downstream tasks. In point cloud classification, applying the proposed method improves classification accuracy by an average of 14 %. Extensive experimental results demonstrate that the proposed SPC has a high practical value.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"194 ","pages":"Article 108107"},"PeriodicalIF":6.3,"publicationDate":"2025-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145087835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
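The reported metrics are the unidirectional Chamfer distance (UCD) and unidirectional Hausdorff distance (UHD) from the partial input to the completed cloud. The sketch below gives plain-PyTorch reference implementations under the usual definitions (brute-force pairwise distances); the paper's exact scaling conventions may differ.

```python
import torch

def ucd(partial, pred):
    """Mean over partial points of the distance to their nearest predicted point."""
    d = torch.cdist(partial, pred)          # (N, M) pairwise Euclidean distances
    return d.min(dim=1).values.mean()

def uhd(partial, pred):
    """Worst-case (max) nearest-neighbor distance from partial to prediction."""
    d = torch.cdist(partial, pred)
    return d.min(dim=1).values.max()

partial = torch.rand(2048, 3)               # observed incomplete scan
pred = torch.rand(16384, 3)                 # completed point cloud from the network
print(ucd(partial, pred).item(), uhd(partial, pred).item())
```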
Discriminative representation learning via attention-enhanced contrastive learning for short text clustering
IF 6.3, CAS Tier 1, Computer Science
Neural Networks Pub Date : 2025-09-12 DOI: 10.1016/j.neunet.2025.108101
Zhihao Yao, Bo Li, Yufei Liao
{"title":"Discriminative representation learning via attention-enhanced contrastive learning for short text clustering","authors":"Zhihao Yao,&nbsp;Bo Li,&nbsp;Yufei Liao","doi":"10.1016/j.neunet.2025.108101","DOIUrl":"10.1016/j.neunet.2025.108101","url":null,"abstract":"<div><div>Contrastive learning has gained significant attention in short text clustering, yet it has an inherent drawback of mistakenly identifying samples from the same category as negatives and separating them in the feature space (i.e., the false negative separation problem). To generate discriminative representations for short text clustering, we propose a novel clustering method, called Discriminative Representation learning via <strong>A</strong>ttention-<strong>E</strong>nhanced <strong>C</strong>ontrastive <strong>L</strong>earning for Short Text Clustering (<strong>AECL</strong>). The <strong>AECL</strong> consists of two modules which are the contrastive learning module and the pseudo-label assisting module. Both modules utilize a sample-level attention mechanism to extract similarities between samples, based on which cross-sample features are aggregated to form a consistent representation for each sample. The contrastive learning module explores the similarity relationships and the consistent representations to form positive samples, effectively addressing the false negative separation issue, and the pseudo-label assisting module utilizes the consistent representations to produce reliable supervision information to assist the clustering task. Experimental results demonstrate that <strong>AECL</strong> outperforms state-of-the-art methods. The code is available at <span><span>https://github.com/YZH0905/AECL-STC</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"194 ","pages":"Article 108101"},"PeriodicalIF":6.3,"publicationDate":"2025-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145087809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
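AECL is built on top of a contrastive objective over two views of each text. The sketch below is a standard InfoNCE-style loss, included only to make concrete the baseline objective in which the false-negative problem arises; AECL's sample-level attention and pseudo-label module are not shown.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.5):
    """InfoNCE-style contrastive loss between two views of the same batch.

    z1[i] and z2[i] form a positive pair; every other sample in the batch is
    treated as a negative, which is exactly where false negatives can appear
    when several samples share the same true cluster.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature          # (B, B) cosine-similarity logits
    labels = torch.arange(z1.size(0))           # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

z1 = torch.randn(64, 128)   # embeddings of augmentation view 1
z2 = torch.randn(64, 128)   # embeddings of augmentation view 2
print(info_nce(z1, z2).item())
```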
A unified gradient regularization method for heterogeneous graph neural networks
IF 6.3, CAS Tier 1, Computer Science
Neural Networks Pub Date : 2025-09-11 DOI: 10.1016/j.neunet.2025.108104
Xiao Yang , Xuejiao Zhao , Zhiqi Shen
{"title":"A unified gradient regularization method for heterogeneous graph neural networks","authors":"Xiao Yang ,&nbsp;Xuejiao Zhao ,&nbsp;Zhiqi Shen","doi":"10.1016/j.neunet.2025.108104","DOIUrl":"10.1016/j.neunet.2025.108104","url":null,"abstract":"<div><div>Heterogeneous Graph Neural Networks (HGNNs) are advanced deep learning methods widely applied for learning representations of heterogeneous graphs. However, they face challenges such as over-smoothing and non-robustness. Existing methods can mitigate these issues by applying gradient regularization to one of the three information dimensions: node, edge, or propagation message. However, these methods have problems such as unstable training, difficulty in parameter convergence, and inadequate utilization of heterogeneous information. We propose a novel gradient regularization method called Grug, which iteratively applies regularization to the gradients derived from both node type and message matrix during the message-passing process. A detailed theoretical analysis demonstrates its advantages in Stability and Diversity. Notably, Grug potentially exceeds the theoretical upper bounds set by DropMessage. In addition, Grug offers a unified gradient regularization framework that integrates the existing dropping and adversarial training methods, and provides theoretical guidance for their further optimization in different data and tasks. We validate Grug through extensive experiments on six public datasets, showing significant improvements in performance and effectiveness.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"194 ","pages":"Article 108104"},"PeriodicalIF":6.3,"publicationDate":"2025-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145097838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
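Grug applies regularization to gradients computed during message passing. As a generic illustration of gradient regularization, not Grug itself, the sketch below penalizes the norm of the gradient of a task loss with respect to input features, using create_graph=True so the penalty itself can be backpropagated into the model weights.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 3))
x = torch.randn(8, 16, requires_grad=True)      # stand-in for node/message features
y = torch.randint(0, 3, (8,))

logits = model(x)
task_loss = nn.functional.cross_entropy(logits, y)

# Gradient of the task loss w.r.t. the features; create_graph=True keeps the
# result differentiable so the penalty can influence the weight updates.
(grad_x,) = torch.autograd.grad(task_loss, x, create_graph=True)
grad_penalty = grad_x.pow(2).sum(dim=1).mean()

total = task_loss + 0.1 * grad_penalty          # 0.1 is an arbitrary example weight
total.backward()
```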
Stability of large-scale probabilistic Boolean networks via network aggregation
IF 6.3, CAS Tier 1, Computer Science
Neural Networks Pub Date : 2025-09-11 DOI: 10.1016/j.neunet.2025.108108
Wen Liu , Shihua Fu , Jianjun Wang , Renato De Leone , Jianwei Xia
{"title":"Stability of large-scale probabilistic Boolean networks via network aggregation","authors":"Wen Liu ,&nbsp;Shihua Fu ,&nbsp;Jianjun Wang ,&nbsp;Renato De Leone ,&nbsp;Jianwei Xia","doi":"10.1016/j.neunet.2025.108108","DOIUrl":"10.1016/j.neunet.2025.108108","url":null,"abstract":"<div><div>Large-scale probabilistic Boolean networks (LSPBNs) are a modeling tool used to simulate and analyze the dynamics of complex systems with uncertainty. However, due to its high computational complexity, previous research methods cannot be directly applied to study such systems. Inspired by network aggregation, this paper conducts network aggregation on LSPBNs to investigate its global stability with probability 1. It is worth mentioning that the stability conclusion proposed in this article holds for any form of network aggregation. First, the entire network is partitioned and the algebraic expressions for each subnetwork are given through the semi-tensor product of matrices. And then, a set of iterative formulas is constructed to describe and reflect the input-output coordination relationship among the subnetworks, and based on which, a sufficient condition for the global stability of LSPBNs is derived, greatly reducing computational complexity. The feasibilities of the proposed method and results are verified through examples.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"194 ","pages":"Article 108108"},"PeriodicalIF":6.3,"publicationDate":"2025-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145087823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
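The algebraic form of each subnetwork is obtained via the semi-tensor product (STP) of matrices. The sketch below implements the standard STP definition with NumPy Kronecker products; it is a generic utility under the textbook definition, not code from the paper.

```python
import numpy as np
from math import lcm

def stp(A, B):
    """Semi-tensor product of A (m x n) and B (p x q).

    With t = lcm(n, p):  A |x| B = (A kron I_{t/n}) @ (B kron I_{t/p}).
    It reduces to the ordinary matrix product when n == p.
    """
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

# Sanity check: conformal shapes give the ordinary product.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
assert np.allclose(stp(A, B), A @ B)

# Boolean states as canonical column vectors: delta_2^1 = [1, 0]^T, delta_2^2 = [0, 1]^T.
x1 = np.array([[1.0], [0.0]])
x2 = np.array([[0.0], [1.0]])
print(stp(x1, x2).T)   # [[0. 1. 0. 0.]] -> delta_4^2, the joint state of (x1, x2)
```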
Radiology report generation via visual-semantic ambivalence-aware network and focal self-critical sequence training
IF 6.3, CAS Tier 1, Computer Science
Neural Networks Pub Date : 2025-09-11 DOI: 10.1016/j.neunet.2025.108102
Xiulong Yi , You Fu , Enxu Bi , Jianguo Liang , Hao Zhang , Jianzhi Yu , Qianqian Li , Rong Hua , Rui Wang
{"title":"Radiology report generation via visual-semantic ambivalence-aware network and focal self-critical sequence training","authors":"Xiulong Yi ,&nbsp;You Fu ,&nbsp;Enxu Bi ,&nbsp;Jianguo Liang ,&nbsp;Hao Zhang ,&nbsp;Jianzhi Yu ,&nbsp;Qianqian Li ,&nbsp;Rong Hua ,&nbsp;Rui Wang","doi":"10.1016/j.neunet.2025.108102","DOIUrl":"10.1016/j.neunet.2025.108102","url":null,"abstract":"<div><div>Radiology report generation, which aims to provide accurate descriptions of both normal and abnormal regions, has been attracting growing research attention. Recently, despite considerable progress, data-driven deep-learning based models still face challenges in capturing and describing the abnormalities, due to the data bias problem. To address this problem, we propose to generate radiology reports via the Visual-Semantic Ambivalence-Aware Network (VSANet) and the Focal Self-Critical Sequence Training (FSCST). In detail, our VSANet follows the encoder-decoder framework. In the encoder part, we first deploy a multi-grained abnormality extractor and a visual extractor to capture both semantic and visual features from given images, and then introduce a Parameter Shared Dual-way Encoder (PSDwE) to delve into the inter- and intra-relationships among these features. In the decoder part, we propose the Visual-Semantic Ambivalence-Aware (VSA) module to generate the abnormality-aware visual features to mitigate the data bias problem. In implementation, our VSA introduces three sub-modules: Dual-way Attention (DwA), introduced to generate both the word-related visual and semantic features; Dual-way Attention on Attention (DwAoA), designed to mitigate redundant information; Score-based Feature Fusion (SFF), constructed to fuse the visual and semantic features in an ambivalence way. We further introduce the FSCST to enhance the overall performance of our VSANet by allocating more attention toward difficult samples. Experimental results demonstrate that our proposal achieves superior performance on various evaluation metrics. Source code have released at <span><span>https://github.com/SKD-HPC/VSANet</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"194 ","pages":"Article 108102"},"PeriodicalIF":6.3,"publicationDate":"2025-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145092976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
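The decoder's Score-based Feature Fusion combines word-related visual and semantic features. The sketch below is a generic learned-gate fusion of two feature streams, offered only to illustrate the idea of score-based mixing; the GatedFusion name, dimensions, and gating form are assumptions and do not reproduce the paper's SFF design.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Mix visual and semantic features with a learned per-dimension score in [0, 1]."""
    def __init__(self, dim=512):
        super().__init__()
        self.score = nn.Linear(2 * dim, dim)

    def forward(self, visual, semantic):
        g = torch.sigmoid(self.score(torch.cat([visual, semantic], dim=-1)))
        return g * visual + (1.0 - g) * semantic   # convex combination per dimension

fuse = GatedFusion()
visual = torch.randn(4, 20, 512)    # word-aligned visual features (batch, words, dim)
semantic = torch.randn(4, 20, 512)  # word-aligned abnormality/semantic features
print(fuse(visual, semantic).shape) # (4, 20, 512)
```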
Cross-level graph contrastive learning for community value prediction
IF 6.3, CAS Tier 1, Computer Science
Neural Networks Pub Date : 2025-09-11 DOI: 10.1016/j.neunet.2025.108103
Wenjie Yang , Shengzhong Zhang , Zengfeng Huang
{"title":"Cross-level graph contrastive learning for community value prediction","authors":"Wenjie Yang ,&nbsp;Shengzhong Zhang ,&nbsp;Zengfeng Huang","doi":"10.1016/j.neunet.2025.108103","DOIUrl":"10.1016/j.neunet.2025.108103","url":null,"abstract":"<div><div>Community Value Prediction (CVP) is an important emerging task in the field of social commerce, which aims to predict the community values. However, due to the complex structure of communities and individuals, previous graph machine learning methods have struggled to adequately address this task. This study endeavors to bridge this gap by introducing a cross-level graph contrastive learning method called <em>Cross-level Community Contrastive Learning</em> (CCCL) to handle such subgraph-level tasks. Specifically, we generate two views that describe different levels of social connections, the augmented node-level graph and the community-level graph that is produced by graph coarsening. Subsequently, CCCL captures the mutual information between the two views through a cross-view contrastive loss. The learned embeddings utilize community and node information at various levels, making them capable of handling subgraph-level regression problems. To the best of our knowledge, CCCL is the first graph contrastive learning method that addresses the CVP problem. We theoretically show that CCCL maximizes a lower bound of the mutual information shared between node-view and community-view representations. Experimental results demonstrate that our proposed approach is highly effective for the CVP task, outperforming both end-to-end and self-supervised baselines. Furthermore, our model also exhibits robust resistance to edge perturbation attacks.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"194 ","pages":"Article 108103"},"PeriodicalIF":6.3,"publicationDate":"2025-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145087687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
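The community-level view comes from graph coarsening, i.e. pooling node embeddings into one vector per community. The sketch below is a simple mean-pool coarsening by community assignment in PyTorch; the assignment tensor and sizes are illustrative, and the paper's coarsening operator may differ.

```python
import torch

def coarsen_mean(node_emb, community_id, num_communities):
    """Average node embeddings within each community -> one row per community."""
    dim = node_emb.size(1)
    sums = torch.zeros(num_communities, dim).index_add_(0, community_id, node_emb)
    counts = torch.zeros(num_communities).index_add_(
        0, community_id, torch.ones(node_emb.size(0))
    )
    return sums / counts.clamp(min=1).unsqueeze(1)

node_emb = torch.randn(100, 64)               # node-level embeddings
community_id = torch.randint(0, 8, (100,))    # community assignment of each node
community_emb = coarsen_mean(node_emb, community_id, 8)
print(community_emb.shape)                    # (8, 64), ready for cross-view contrast
```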
Fixed/prescribed-time synchronization of state-dependent switching neural networks with stochastic disturbance and impulsive effects
IF 6.3, CAS Tier 1, Computer Science
Neural Networks Pub Date : 2025-09-11 DOI: 10.1016/j.neunet.2025.108100
Guici Chen , Houxuan Zhang , Shiping Wen , Junhao Hu , Leimin Wang
{"title":"Fixed/prescribed-time synchronization of state-dependent switching neural networks with stochastic disturbance and impulsive effects","authors":"Guici Chen ,&nbsp;Houxuan Zhang ,&nbsp;Shiping Wen ,&nbsp;Junhao Hu ,&nbsp;Leimin Wang","doi":"10.1016/j.neunet.2025.108100","DOIUrl":"10.1016/j.neunet.2025.108100","url":null,"abstract":"<div><div>This paper investigates the fixed-time synchronization (FXTS) and prescribed-time synchronization (PSTS) problems of state-dependent switching neural networks (SDSNNs) with stochastic disturbances and impulsive effects. By leveraging the average impulsive interval, comparison principle, and interval matrix methodology, this study advances a novel analytical framework. Departing from conventional approaches, we reformulate stochastic disturbed and impulsive SDSNNs as interval-parameter systems through rigorous interval matrix transformation. Consequently, we derive some sufficient conditions in the form of linear matrix inequalities (LMIs) to ensure the realization of FXTS and PSTS. Since impulsive effects can potentially compromise synchronization stability, careful controller design becomes critical. To address this challenge, we develop a unified proportional integral (PI) control framework. Through proper adjustment of its control parameters, this framework enables the system to achieve both FXTS and PSTS. Moreover, by reasonably configuring the relationship between the impulsive intensity and the prescribed time, the synchronization performance can be balanced. Finally, we demonstrate the effectiveness of the theoretical results through two examples.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"194 ","pages":"Article 108100"},"PeriodicalIF":6.3,"publicationDate":"2025-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145097839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
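The stability conditions above are given as linear matrix inequalities (LMIs). As a generic illustration of how an LMI-type condition can be checked numerically, and not the paper's specific conditions, the sketch below verifies the classical Lyapunov inequality A^T P + P A < 0 (negative definite) together with P > 0 (positive definite) by examining eigenvalues.

```python
import numpy as np

def is_pos_def(M, tol=1e-9):
    """Check symmetric positive definiteness via eigenvalues of the symmetric part."""
    return np.all(np.linalg.eigvalsh((M + M.T) / 2) > tol)

# A stable example system x' = A x and a candidate Lyapunov matrix P.
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
P = np.eye(2)

lyapunov_lhs = A.T @ P + P @ A
feasible = is_pos_def(P) and is_pos_def(-lyapunov_lhs)
print("LMI satisfied:", feasible)   # True: P > 0 and A^T P + P A < 0
```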