IEEE Transactions on Neural Networks and Learning Systems: Latest Articles

Vision Mamba: A Comprehensive Survey and Taxonomy
IF 10.4 · CAS Region 1 · Computer Science
IEEE Transactions on Neural Networks and Learning Systems · Pub Date: 2025-09-22 · DOI: 10.1109/tnnls.2025.3610435
Xiao Liu, Chenxu Zhang, Fuxiang Huang, Shuyin Xia, Guoyin Wang, Lei Zhang
Citations: 0
Toward an Effective Action-Region Tracking Framework for Fine-Grained Video Action Recognition
IF 10.4 · CAS Region 1 · Computer Science
IEEE Transactions on Neural Networks and Learning Systems · Pub Date: 2025-09-19 · DOI: 10.1109/tnnls.2025.3602089
Baoli Sun, Yihan Wang, Xinzhu Ma, Zhihui Wang, Kun Lu, Zhiyong Wang
Citations: 0
Leveraging Semi-Supervised Learning and Meta-Learning for Re-Identification in Few-Shot Spatiotemporal Anomaly Detection
IF 10.4 · CAS Region 1 · Computer Science
IEEE Transactions on Neural Networks and Learning Systems · Pub Date: 2025-09-19 · DOI: 10.1109/tnnls.2025.3578642
Zhen Zhou, Ziyuan Gu, Pan Liu, Wenwu Yu, Zhiyuan Liu
Citations: 0
Peak-Padding: Clustering by Padding Density Peaks With the Minimum Padding Cost
IF 10.4 · CAS Region 1 · Computer Science
IEEE Transactions on Neural Networks and Learning Systems · Pub Date: 2025-09-19 · DOI: 10.1109/tnnls.2025.3606527
Junyi Guan, Bingbing Jiang, Weiguo Sheng, Yangyang Zhao, Sheng Li, Xiongxiong He
Citations: 0
Restoring Noisy Demonstration for Imitation Learning With Diffusion Models
IF 10.4 · CAS Region 1 · Computer Science
IEEE Transactions on Neural Networks and Learning Systems · Pub Date: 2025-09-17 · DOI: 10.1109/tnnls.2025.3607111
Shang-Fu Chen, Co Yong, Shao-Hua Sun
Citations: 0
Continual Diffuser (CoD): Mastering Continual Offline RL With Experience Rehearsal
IF 10.4 · CAS Region 1 · Computer Science
IEEE Transactions on Neural Networks and Learning Systems · Pub Date: 2025-09-16 · DOI: 10.1109/tnnls.2025.3598928
Jifeng Hu, Li Shen, Sili Huang, Zhejian Yang, Hechang Chen, Lichao Sun, Yi Chang, Dacheng Tao
Abstract: Artificial neural networks, especially recent diffusion-based models, have shown remarkable superiority in gaming, control, and QA systems, where the training tasks' datasets are usually static. However, in real-world applications such as robotic control with reinforcement learning (RL), tasks change and new tasks arrive in sequential order. This poses the challenge of a plasticity-stability tradeoff: training an agent that can adapt to task changes while retaining acquired knowledge. In view of this, we propose a rehearsal-based continual diffusion model, called the continual diffuser (CoD), to endow the diffuser with the capabilities of quick adaptation (plasticity) and lasting retention (stability). Specifically, we first construct an offline benchmark containing 90 tasks from multiple domains. Then, we train CoD on each task with sequential modeling and conditional generation for decision-making. Next, we preserve a small portion of previous datasets as a rehearsal buffer and replay it to retain the acquired knowledge. Extensive experiments on a series of tasks show that CoD achieves a promising plasticity-stability tradeoff and outperforms existing diffusion-based methods and other representative baselines on most tasks. The source code is available at https://github.com/JF-Hu/Continual_Diffuser.
Citations: 0
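The rehearsal mechanism the CoD abstract describes, keeping a small portion of each finished task's data and replaying it during later training, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the class name `RehearsalBuffer`, the fractions, and the mixing strategy are all assumptions for the sketch.

```python
import random

class RehearsalBuffer:
    """Keeps a small fraction of each finished task's samples so that
    later training batches can replay old-task experience (stability)
    while the model trains on the new task (plasticity)."""

    def __init__(self, keep_fraction=0.1, seed=0):
        self.keep_fraction = keep_fraction
        self.rng = random.Random(seed)
        self.stored = []  # samples retained from all previous tasks

    def retain(self, task_dataset):
        # After finishing a task, preserve a random subset of its data.
        k = max(1, int(len(task_dataset) * self.keep_fraction))
        self.stored.extend(self.rng.sample(task_dataset, k))

    def mixed_batch(self, current_batch, replay_ratio=0.5):
        # Blend current-task samples with replayed old-task samples.
        n_replay = min(len(self.stored), int(len(current_batch) * replay_ratio))
        return current_batch + self.rng.sample(self.stored, n_replay)

buf = RehearsalBuffer(keep_fraction=0.1)
buf.retain([("s", "a", "r")] * 100)            # pretend task-1 dataset
batch = buf.mixed_batch([("s2", "a2", "r2")] * 8)  # task-2 batch + replay
```

In a full continual-RL loop, `retain` would be called once per completed task and `mixed_batch` once per gradient step, so every update sees both fresh and rehearsed data.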
Spatial-Temporal Diffusion Model for Matrix Factorization
IF 10.4 · CAS Region 1 · Computer Science
IEEE Transactions on Neural Networks and Learning Systems · Pub Date: 2025-09-16 · DOI: 10.1109/tnnls.2025.3605215
Chenxi Tian, Wenming Wu, Lingling Li, Xu Liu, Fang Liu, Wenping Ma, Licheng Jiao, Shuyuan Yang
Abstract: Matrix factorization (MF) is a fundamental problem in machine learning and is widely used as a feature learning method across many fields. For complex data involving spatiotemporal interactions, MF that only handles 2-D data disrupts spatial dependence or temporal dynamics and fails to effectively couple spatial information with temporal factors. By the Markov property, the spatial information at the present time is related to the spatial state at the previous time. We propose a spatial-temporal diffusion model for MF (STDMF), which uses graph diffusion to couple spatial-temporal information; MF is then used to learn the joint features of the data and the spatial-temporal diffusion graph. Specifically, STDMF utilizes graph diffusion governed by physical laws to generate spatial-temporal structure information, obtaining the underlying core structure of complex systems from a global perspective and enhancing the generalization ability of MF on noisy time-series data. To learn the lowest-rank subspace of MF in time-series data, STDMF uses structural learning to constrain the rank of the learned features. Finally, STDMF is applied to clustering and anomaly detection on dynamic graphs. The effectiveness of the method is verified by extensive experiments, especially on noisy data.
Citations: 0
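The abstract does not specify which diffusion operator STDMF uses, but "graph diffusion governed by physical laws" commonly refers to the heat kernel exp(-tL) of the graph Laplacian. As a hedged illustration of that generic building block (not the paper's method), the following computes the heat-kernel diffusion matrix that propagates node states along edges:

```python
import numpy as np

def heat_diffusion(A, t=1.0):
    """Graph heat-kernel diffusion exp(-tL) for a symmetric adjacency
    matrix A: a physics-derived operator that smooths node states along
    edges, often used to inject spatial structure before factorization."""
    deg = A.sum(axis=1)
    L = np.diag(deg) - A                       # combinatorial Laplacian
    w, U = np.linalg.eigh(L)                   # L is symmetric
    return U @ np.diag(np.exp(-t * w)) @ U.T   # matrix exponential of -tL

# Path graph on three nodes: 0 - 1 - 2.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
S = heat_diffusion(A, t=0.5)
```

Because L annihilates the constant vector, each row of `S` sums to one, so diffusing a signal conserves its total mass; a downstream factorization could then be applied to the diffused features `S @ X`.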
A Unified Framework for Matrix Backpropagation
IF 10.4 · CAS Region 1 · Computer Science
IEEE Transactions on Neural Networks and Learning Systems · Pub Date: 2025-09-16 · DOI: 10.1109/tnnls.2025.3607405
Gatien Darley, Stephane Bonnet
Abstract: Computing matrix gradients has become a key aspect of modern signal processing and machine learning, with the recent use of matrix neural networks requiring matrix backpropagation. In this field, two main methods exist to calculate the gradient of matrix functions of symmetric positive definite (SPD) matrices, namely the Daleckiǐ-Kreǐn/Bhatia formula and the Ionescu method; however, a few errors appear in the literature. This brief demonstrates each of these formulas in a self-contained and unified framework, proves their equivalence theoretically, and clarifies inaccurate results in the literature. A numerical comparison of both methods in terms of computational speed and numerical stability shows the superiority of the Daleckiǐ-Kreǐn/Bhatia approach. We also extend the matrix gradient to the general case of diagonalizable matrices. Convincing results with the two backpropagation methods are shown on the EEG-based BCI competition dataset with an SPDNet implementation, yielding around 80% accuracy for one subject. The Daleckiǐ-Kreǐn/Bhatia formula achieves an 8% time gain during training and handles degenerate cases.
Citations: 0
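The Daleckiǐ-Kreǐn formula the brief builds on is standard: for symmetric A = UΛUᵀ and B = f(A), the loss gradient is dL/dA = U((Uᵀ(dL/dB)U) ∘ K)Uᵀ, where K is the Loewner matrix of divided differences of f at the eigenvalues. A minimal NumPy sketch of that textbook formula (not the authors' code; the function name and confluence test are choices made here):

```python
import numpy as np

def matrix_function_backprop(A, G, f, fprime):
    """Daleckii-Krein backprop through B = f(A) for symmetric A.
    A: symmetric matrix; G: upstream gradient dL/dB;
    f, fprime: the scalar function and its derivative. Returns dL/dA."""
    w, U = np.linalg.eigh(A)
    fw = f(w)
    n = len(w)
    # Loewner matrix: divided differences of f at the eigenvalues,
    # with the derivative on (near-)confluent pairs.
    K = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            if np.isclose(w[i], w[j]):
                K[i, j] = fprime(w[i])
            else:
                K[i, j] = (fw[i] - fw[j]) / (w[i] - w[j])
    return U @ ((U.T @ G @ U) * K) @ U.T

# Sanity check with f(x) = x^2: for L = trace(G f(A)) the gradient
# is GA + AG, which equals 2A when G is the identity.
A = np.array([[2., 1.],
              [1., 3.]])
grad = matrix_function_backprop(A, np.eye(2), lambda x: x**2, lambda x: 2 * x)
```

The `np.isclose` branch is exactly the degenerate (repeated-eigenvalue) case the brief highlights: the divided difference collapses to f′(λ) in the limit, which keeps the formula numerically stable.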
Theoretical Advances on Stochastic Configuration Networks
IF 10.4 · CAS Region 1 · Computer Science
IEEE Transactions on Neural Networks and Learning Systems · Pub Date: 2025-09-16 · DOI: 10.1109/tnnls.2025.3608555
Xiufeng Yan, Dianhui Wang, Ivan Y Tyukin
Abstract: This article advances the theoretical foundations of stochastic configuration networks (SCNs) by rigorously analyzing their convergence properties, approximation guarantees, and the limitations of nonadaptive randomized methods. We introduce a principled objective function that aligns incremental training with orthogonal projection, ensuring maximal residual reduction at each iteration without recomputing output weights. Under this formulation, we derive a novel necessary and sufficient condition for strong convergence in Hilbert spaces and establish sufficient conditions for uniform geometric convergence, offering the first theoretical justification of the SCN residual constraint. To assess the feasibility of unguided random initialization, we present a probabilistic analysis showing that even small support shifts markedly reduce the likelihood of sampling effective nodes in high-dimensional settings, thereby highlighting the necessity of adaptive refinement of the sampling distribution. Motivated by these insights, we propose greedy SCNs (GSCNs) and two optimized variants, Newton-Raphson GSCN (NR-GSCN) and particle swarm optimization GSCN (PSO-GSCN), which incorporate Newton-Raphson refinement and particle-swarm-based exploration to improve node selection. Empirical results on synthetic and real-world datasets demonstrate that the proposed methods achieve faster convergence, better approximation accuracy, and more compact architectures than existing SCN training schemes. Collectively, this work establishes a rigorous theoretical and algorithmic framework for SCNs, laying a principled foundation for subsequent developments in randomized neural network (NN) training.
Citations: 0
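The greedy node selection the abstract motivates can be illustrated with a toy single-output regressor: at each step, sample several random hidden nodes, score each by how much of the current residual it could absorb, keep the best, and refit the output weights by least squares. This is a simplified sketch of the general SCN/GSCN idea under assumptions made here (tanh nodes, uniform sampling, the `greedy_scn_fit` name), not the paper's algorithm:

```python
import numpy as np

def greedy_scn_fit(X, y, n_nodes=20, n_candidates=50, seed=0):
    """Greedy incremental fit: per step, sample random candidate hidden
    nodes, keep the one whose output best aligns with the residual,
    then refit all output weights by least squares."""
    rng = np.random.default_rng(seed)
    H = []            # hidden-node outputs (one column per node)
    e = y.copy()      # current residual
    for _ in range(n_nodes):
        best, best_score = None, -np.inf
        for _ in range(n_candidates):
            w = rng.uniform(-1, 1, X.shape[1])
            b = rng.uniform(-1, 1)
            g = np.tanh(X @ w + b)
            # Squared projection of e onto g: the residual-norm
            # reduction if g alone were added with its optimal weight.
            score = (e @ g) ** 2 / (g @ g)
            if score > best_score:
                best, best_score = g, score
        H.append(best)
        Hm = np.column_stack(H)
        beta, *_ = np.linalg.lstsq(Hm, y, rcond=None)
        e = y - Hm @ beta
    return Hm, beta, e

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * X[:, 0])
_, _, resid = greedy_scn_fit(X, y)
```

The projection score is the greedy analog of the SCN residual constraint: a candidate is only useful insofar as its output correlates with the remaining error, which is precisely why unguided sampling degrades in high dimensions.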
FEU-Diff: A Diffusion Model With Fuzzy Evidence-Driven Dynamic Uncertainty Fusion for Medical Image Segmentation
IF 10.4 · CAS Region 1 · Computer Science
IEEE Transactions on Neural Networks and Learning Systems · Pub Date: 2025-09-16 · DOI: 10.1109/tnnls.2025.3609085
Sheng Geng, Shu Jiang, Tao Hou, Hongcheng Yao, Jiashuang Huang, Weiping Ding
Abstract: Diffusion models, as a class of generative frameworks based on step-wise denoising, have recently attracted significant attention in medical image segmentation. However, existing diffusion-based methods typically rely on static fusion strategies to integrate conditional priors with denoised features, making it difficult to adaptively balance their respective contributions at different denoising stages. Moreover, these methods often lack explicit modeling of pixel-level uncertainty in ambiguous regions, which may lose structural details during the iterative denoising process and ultimately compromise the accuracy and completeness of the final segmentation. To this end, we propose FEU-Diff, a diffusion-based segmentation framework that integrates fuzzy evidence modeling and uncertainty fusion (UF) mechanisms. Specifically, a fuzzy semantic enhancement (FSE) module models pixel-level uncertainty through Gaussian membership functions and fuzzy logic rules, enhancing the model's ability to identify and represent ambiguous boundaries. An evidence dynamic fusion (EDF) module estimates feature confidence via a Dirichlet-based distribution and adaptively guides the fusion of conditional information and denoised features across denoising stages. Furthermore, the UF module quantifies discrepancies among multisource predictions to compensate for structural detail loss during iterative denoising. Extensive experiments on four public datasets show that FEU-Diff consistently outperforms state-of-the-art (SOTA) methods, achieving average gains of 1.42% in Dice similarity coefficient (DSC) and 1.47% in intersection over union (IoU), and a 2.26 mm reduction in the 95th percentile Hausdorff distance (HD95). In addition, our method generates uncertainty maps that enhance clinical interpretability.
Citations: 0
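The Gaussian membership idea behind FEU-Diff's fuzzy modeling can be shown with a one-line example: map each pixel's predicted foreground probability through a Gaussian centered on the decision boundary, so ambiguous pixels (probability near 0.5) receive high membership and confident pixels receive low membership. The function name and the center/width values below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def fuzzy_boundary_uncertainty(prob, mu=0.5, sigma=0.15):
    """Gaussian membership of a predicted foreground probability:
    ~1 for pixels near the decision boundary (prob ~ mu), ~0 for
    confident pixels -- a simple per-pixel ambiguity map."""
    return np.exp(-((prob - mu) ** 2) / (2 * sigma ** 2))

# Confident background, maximally ambiguous, confident foreground.
probs = np.array([0.02, 0.5, 0.95])
u = fuzzy_boundary_uncertainty(probs)
```

A segmentation model could weight its boundary-refinement loss by such a map, spending capacity on ambiguous regions rather than on pixels it already classifies confidently.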