IEEE Transactions on Neural Networks and Learning Systems: Latest Articles

Domain Information Mining and State-Guided Adaptation Network for Multispectral Image Segmentation
IF 10.4 | Q1 | Computer Science
IEEE Transactions on Neural Networks and Learning Systems · Pub Date: 2025-07-22 · DOI: 10.1109/tnnls.2025.3589574
Authors: Boyu Zhao, Mengmeng Zhang, Wei Li, Yunhao Gao, Junjie Wang
Abstract: The segment anything model (SAM), a prompt-based image segmentation foundation model, demonstrates strong task versatility and domain generalization (DG) capabilities, providing a new direction for cross-scene segmentation tasks. However, SAM still has limitations in multispectral cross-domain segmentation, mainly: 1) insufficient information utilization, namely the neglect of nonvisible spectral information and of the shift information contained in source-domain (SD) and target-domain (TD) samples; and 2) a lack of cross-domain strategies, which leads to insufficient cross-domain adaptation (DA) ability in downstream tasks. To address these challenges, we combine the respective advantages of masked autoencoders (MAE) and cross-domain strategies and propose an improved SAM DA network, the domain information mining and state-guided adaptation network (DSAnet), aiming to enhance SAM's performance in multispectral cross-domain segmentation at both the data and task levels. At the data level, DSAnet incorporates a style-masking learning component, which randomly masks image features and replaces them with domain-specific learnable tokens; integrated with an image reconstruction task, it mines the style information and domain invariance of the image itself. At the task level, DSAnet introduces domain state learning and style-guided segmentation. Domain state learning uses a state sequence modeling approach to design specific state representations for the SD and TD, capturing interdomain differences and thereby reducing task shift; the learned domain state information can be applied directly at inference. Style prompt segmentation guides the segmentation training of SD images with TD style prompts, improving SAM's adaptability in cross-domain multispectral segmentation downstream tasks. Extensive experiments on three multitemporal multispectral image (MSI) datasets demonstrate the superiority of the proposed method over state-of-the-art cross-domain strategies and SAM variants.
Citations: 0
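
Illustrative sketch: the style-masking component described above (randomly masking image features and replacing them with domain-specific learnable tokens, paired with a reconstruction objective) could look roughly like the following. This is a minimal sketch under assumptions, not the authors' code; the module name StyleMasking and the parameters mask_ratio and n_domains are hypothetical.

```python
# Minimal sketch (not the authors' code) of the style-masking idea: patch features
# are randomly masked and replaced with a domain-specific learnable token, and a
# light decoder reconstructs the original features from the masked sequence.
import torch
import torch.nn as nn


class StyleMasking(nn.Module):
    def __init__(self, dim: int, n_domains: int = 2, mask_ratio: float = 0.3):
        super().__init__()
        # One learnable token per domain (e.g., source / target).
        self.domain_tokens = nn.Parameter(torch.randn(n_domains, dim) * 0.02)
        self.mask_ratio = mask_ratio
        self.decoder = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, feats: torch.Tensor, domain_id: int):
        # feats: (B, N, D) patch features from the image encoder.
        B, N, D = feats.shape
        mask = torch.rand(B, N, device=feats.device) < self.mask_ratio   # True = masked
        token = self.domain_tokens[domain_id].expand(B, N, D)
        masked_feats = torch.where(mask.unsqueeze(-1), token, feats)
        recon = self.decoder(masked_feats)
        # Reconstruction loss only on the masked positions.
        loss = ((recon - feats.detach()) ** 2)[mask].mean()
        return masked_feats, loss


# Usage: feats from a SAM-style encoder, domain_id = 0 for source, 1 for target.
feats = torch.randn(4, 256, 768)
sm = StyleMasking(dim=768)
masked, recon_loss = sm(feats, domain_id=0)
```
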
Long Short-Term Financial Time Series Forecasting Based on Residual Multiscale TCN Sparse Expert Network and Informer
IF 10.4 | Q1 | Computer Science
IEEE Transactions on Neural Networks and Learning Systems · Pub Date: 2025-07-22 · DOI: 10.1109/tnnls.2025.3584369
Authors: Wuzhida Bao, Yuting Cao, Yin Yang, Shiping Wen
Abstract: Due to the inherent high volatility and complexity of financial markets, traditional time series forecasting models face numerous challenges in handling both short- and long-term predictions in the stock market. Most traditional neural network-based financial prediction models are limited to short-term forecasting and struggle to fully capture long-term trends and global dependencies in the market. To address this, we propose a novel network architecture called ResMMoT-Informer. This model combines the strengths of a residual multiscale temporal convolutional network (TCN) sparse expert network (ResMMoT) and the Informer, enabling it to effectively capture multiscale local features and global dependencies in the stock market. ResMMoT achieves stable training through a residual structure and a sparse multiscale TCN expert network, allowing it to flexibly model complex temporal features and learn trends across different time-step scales, while the Informer optimizes long-sequence forecasting performance through an improved self-attention mechanism. Additionally, we introduce a wavelet noise reduction (WNR) method, further enhancing the model's robustness and prediction accuracy. In the experimental section, ablation experiments first validate the effectiveness and necessity of the proposed strategies and network structure. Subsequent comparison experiments on the NASDAQ100 dataset demonstrate that ResMMoT-Informer excels in both long- and short-term time series forecasting in the stock market, with significantly better prediction accuracy and generalization ability than existing models. Compared to other popular neural network-based financial forecasting models, ResMMoT-Informer leads in prediction accuracy, temporal robustness, and interpretability, showcasing its cutting-edge advantage in contemporary research.
Citations: 0
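
Illustrative sketch: one way to realize the residual multiscale TCN sparse expert idea described above is a mixture of dilated causal 1-D convolutions with a top-k gate and a residual connection. This is a minimal sketch under assumptions, not the paper's implementation; the expert layout, gating rule, and names (MultiscaleTCNExperts, top_k) are illustrative.

```python
# Minimal sketch (assumptions, not the paper's code) of a multiscale TCN
# sparse-expert block: each expert is a dilated causal 1-D convolution over a
# different time-step scale, and a softmax gate keeps only the top-k experts
# per sample. A residual connection stabilizes training.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiscaleTCNExperts(nn.Module):
    def __init__(self, channels: int, dilations=(1, 2, 4, 8), top_k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Conv1d(channels, channels, kernel_size=3, dilation=d, padding=2 * d)
            for d in dilations
        )
        self.gate = nn.Linear(channels, len(dilations))
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, T) multivariate time series.
        scores = self.gate(x.mean(dim=-1))                    # gate on time-pooled features
        topv, topi = scores.topk(self.top_k, dim=-1)
        weights = torch.zeros_like(scores).scatter_(-1, topi, F.softmax(topv, dim=-1))
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            y = expert(x)[..., : x.size(-1)]                  # causal crop back to length T
            out = out + weights[:, e, None, None] * y
        return x + out                                        # residual connection


block = MultiscaleTCNExperts(channels=16)
print(block(torch.randn(8, 16, 128)).shape)                   # torch.Size([8, 16, 128])
```
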
D2Fed: Federated Semi-Supervised Learning With Dual-Role Additive Local Training and Dual-Perspective Global Aggregation
IF 10.4 | Q1 | Computer Science
IEEE Transactions on Neural Networks and Learning Systems · Pub Date: 2025-07-22 · DOI: 10.1109/tnnls.2025.3587942
Authors: Jingxin Mao, Yu Yang, Zhiwei Wei, Yanlong Bi, Rongqing Zhang
Abstract: Federated semi-supervised learning (FSSL) has recently emerged as a promising approach for enhancing the performance of federated learning (FL) using ubiquitous unlabeled data. However, this approach encounters challenges when learning a global model from both fully labeled and fully unlabeled clients. Previous works overlook the dissimilarities between labeled and unlabeled clients, predominantly using shared parameters for local training across these two types of clients, thereby inducing intertask interference during local training. Moreover, these works typically adopt a single-perspective aggregation strategy, primarily data-volume-aware aggregation (i.e., FedAvg), leading to a lack of comprehensive consideration in model aggregation. In this article, we propose a novel FSSL method termed D2Fed, which addresses these issues by rethinking the roles of labeled and unlabeled clients to mitigate intertask interference during local training, and by integrating client-type awareness with data-volume awareness to provide a more comprehensive perspective for model aggregation. Specifically, in local training, D2Fed distinguishes between the primary role of labeled clients and the accessory role of unlabeled clients, performing dual-role additive local training (DALT) accordingly. In global aggregation, D2Fed uses a dual-perspective global aggregation (DGA) strategy, transitioning from data-volume-aware aggregation to client-type-aware aggregation. The proposed method simultaneously improves both local training and global model aggregation for FSSL without compromising privacy. We demonstrate the effectiveness and robustness of the proposed method through extensive experiments and elaborate ablation studies conducted on the CIFAR-10/100, SVHN, FMNIST, and STL-10 datasets. Experimental results show that D2Fed outperforms state-of-the-art methods on five datasets under diverse data settings.
Citations: 0
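
Illustrative sketch: the dual-perspective global aggregation described above can be approximated by blending FedAvg-style data-volume weights with client-type-aware weights that rebalance labeled and unlabeled clients. This is a minimal sketch under assumptions, not the authors' method; the blending factor alpha and the equal mass split between client types are illustrative choices, not values from the paper.

```python
# Minimal sketch (assumptions, not the authors' code) of a dual-perspective
# aggregation: FedAvg-style data-volume weights are blended with client-type-aware
# weights that give labeled and unlabeled clients balanced influence.
from collections import OrderedDict
import torch


def dual_perspective_aggregate(client_states, n_samples, is_labeled, alpha=0.5):
    """client_states: list of state_dicts; n_samples: list of int; is_labeled: list of bool."""
    vol = torch.tensor(n_samples, dtype=torch.float)
    vol_w = vol / vol.sum()                                   # data-volume-aware (FedAvg)

    lab = torch.tensor(is_labeled, dtype=torch.float)
    n_lab, n_unl = lab.sum(), (1 - lab).sum()
    # Give each client type (labeled / unlabeled) half of the total mass.
    type_w = lab * (0.5 / n_lab.clamp(min=1)) + (1 - lab) * (0.5 / n_unl.clamp(min=1))

    w = alpha * vol_w + (1 - alpha) * type_w
    w = w / w.sum()

    global_state = OrderedDict()
    for key in client_states[0]:
        global_state[key] = sum(w[i] * client_states[i][key].float()
                                for i in range(len(client_states)))
    return global_state
```
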
Video Prediction of Dynamic Physical Simulations With Pixel-Space Spatiotemporal Transformers
IF 10.4 | Q1 | Computer Science
IEEE Transactions on Neural Networks and Learning Systems · Pub Date: 2025-07-22 · DOI: 10.1109/tnnls.2025.3585949
Authors: Dean L Slack, G Thomas Hudson, Thomas Winterbottom, Noura Al Moubayed
Abstract: Inspired by the performance and scalability of autoregressive large language models (LLMs), transformer-based models have seen recent success in the visual domain. This study investigates a transformer adaptation for video prediction with a simple end-to-end approach, comparing various spatiotemporal self-attention layouts. Focusing on causal modeling of physical simulations over time, a common shortcoming of existing video-generative approaches, we attempt to isolate spatiotemporal reasoning via physical object-tracking metrics and unsupervised training on physical simulation datasets. We introduce a simple yet effective pure transformer model for autoregressive video prediction that uses continuous pixel-space representations. Without the need for complex training strategies or latent feature-learning components, our approach extends the time horizon for physically accurate predictions by up to 50% compared with existing latent-space approaches, while maintaining comparable performance on common video quality metrics. In addition, we conduct interpretability experiments with probing models to identify network regions that encode information useful for accurately estimating PDE simulation parameters, and find that this generalizes to the estimation of out-of-distribution simulation parameters. This work serves as a platform for further attention-based spatiotemporal modeling of videos via a simple, parameter-efficient, and interpretable approach.
Citations: 0
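
Illustrative sketch: an autoregressive pixel-space video predictor of the kind described above can be approximated with continuous patch tokens and a frame-level causal attention mask. This is a minimal sketch under assumptions, not the paper's model; the patch size, depth, and regression head are placeholder choices.

```python
# Minimal sketch (an assumption-laden illustration, not the paper's model) of
# autoregressive, pixel-space video prediction: frames are split into patches,
# patch pixels are used directly as continuous tokens, and a transformer with a
# frame-level causal mask regresses the patches of the next frame.
import torch
import torch.nn as nn


class PixelSpaceVideoPredictor(nn.Module):
    def __init__(self, patch: int = 8, dim: int = 256, n_patches: int = 64):
        super().__init__()
        self.patch_dim = patch * patch
        self.embed = nn.Linear(self.patch_dim, dim)
        self.pos = nn.Parameter(torch.zeros(1, 1, n_patches, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(dim, self.patch_dim)            # back to pixel space

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (B, T, N, patch*patch) continuous pixel patches per frame.
        B, T, N, _ = patches.shape
        x = self.embed(patches) + self.pos                    # (B, T, N, D)
        x = x.reshape(B, T * N, -1)
        # Causal mask at frame granularity: token i may attend to token j
        # only if j belongs to the same or an earlier frame.
        frame_id = torch.arange(T * N, device=x.device) // N
        mask = frame_id[None, :] > frame_id[:, None]          # True = blocked
        x = self.encoder(x, mask=mask)
        pred = self.head(x).reshape(B, T, N, -1)
        return pred                                           # pred[:, t] targets frame t+1


model = PixelSpaceVideoPredictor()
video_patches = torch.randn(2, 6, 64, 64)                     # B=2, T=6, N=64, 8x8 patches
next_frame_preds = model(video_patches)
```
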
Copula Density Neural Estimation
IF 10.4 | Q1 | Computer Science
IEEE Transactions on Neural Networks and Learning Systems · Pub Date: 2025-07-22 · DOI: 10.1109/tnnls.2025.3585755
Authors: Nunzio A. Letizia, Nicola Novello, Andrea M. Tonello
Citations: 0
An Adaptive Neighborhood-Resonated Graph Convolution Network for Undirected Weighted Graph Representation
IF 10.4 | Q1 | Computer Science
IEEE Transactions on Neural Networks and Learning Systems · Pub Date: 2025-07-22 · DOI: 10.1109/tnnls.2025.3589224
Authors: Jiufang Chen, Ye Yuan, Xin Luo, Xinbo Gao
Abstract: An undirected weighted graph (UWG) is a fundamental data representation in various real applications, and graph convolutional networks (GCNs) are frequently utilized for representation learning on a UWG. Nevertheless, existing GCNs consider only a node's neighborhood during embedding propagation, which regrettably decreases their representation learning capability due to information loss in the modeling phase. Motivated by this discovery, this study proposes an adaptive neighborhood-resonated graph convolution network (ANR-GCN) with the following ideas: 1) establishing weighted embedding propagation that accounts for link weights in a UWG, thereby incorporating the interaction strength of each node pair into the ANR-GCN model; 2) building a neighborhood regularization (NR) that makes each node resonate with its neighborhood, thus reinforcing informative neighborhood information and improving ANR-GCN's ability to represent the complex topology of the target UWG; and 3) diversifying the NR effects following the attention principle to guarantee ANR-GCN's learning capacity. ANR-GCN's representation learning ability on a UWG is theoretically guaranteed from the perspectives of bounded generalization error and uniform stability. Extensive experiments on four UWG datasets illustrate that the proposed ANR-GCN significantly outperforms state-of-the-art GCNs in missing-edge detection in a UWG, which clearly demonstrates its superior performance.
Citations: 0
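
Illustrative sketch: the weighted embedding propagation and neighborhood regularization described above can be approximated as follows. This is a minimal sketch under assumptions, not the paper's code; the normalization scheme and the name WeightedGCNLayer are illustrative, and the attention-based diversification of the NR effects is omitted.

```python
# Minimal sketch (hypothetical names, not the paper's code) of two ideas from the
# abstract: (1) propagation weighted by link strength via a symmetrically
# normalized weighted adjacency, and (2) a neighborhood regularizer that keeps
# each node's embedding close to the weighted mean of its neighbors' embeddings.
import torch
import torch.nn as nn


class WeightedGCNLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, w_adj: torch.Tensor):
        # w_adj: (N, N) symmetric weighted adjacency of the UWG (zeros = no edge).
        a_hat = w_adj + torch.eye(w_adj.size(0), device=w_adj.device)   # self-loops
        d_inv_sqrt = a_hat.sum(dim=-1).clamp(min=1e-8).pow(-0.5)
        norm_adj = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]    # D^-1/2 A D^-1/2
        h = torch.relu(norm_adj @ self.lin(x))

        # Neighborhood regularization: distance to the weighted neighbor mean.
        deg = w_adj.sum(dim=-1, keepdim=True).clamp(min=1e-8)
        neigh_mean = (w_adj @ h) / deg
        nr_loss = ((h - neigh_mean) ** 2).sum(dim=-1).mean()
        return h, nr_loss


layer = WeightedGCNLayer(16, 32)
x = torch.randn(100, 16)
w_adj = torch.rand(100, 100)
w_adj = (w_adj + w_adj.T) / 2                                  # toy symmetric weights
h, nr_loss = layer(x, w_adj)
```
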
PolicyMamba: Localized Policy Attention With State Space Model for Land Cover Classification
IF 10.4 | Q1 | Computer Science
IEEE Transactions on Neural Networks and Learning Systems · Pub Date: 2025-07-22 · DOI: 10.1109/tnnls.2025.3586836
Authors: Muhammad Ahmad, Manuel Mazzara, Salvatore Distefano, Adil Mehmood Khan, Muhammad Hassaan Farooq Butt, Danfeng Hong
Abstract: Multihead self-attention and cross-attention mechanisms often suffer from computational inefficiency, limited scalability, and suboptimal contextual understanding, particularly in hyperspectral image (HSI) classification. These mechanisms struggle to capture long-range dependencies effectively while remaining computationally feasible, owing to the quadratic complexity of self-attention. To address these challenges, this work proposes PolicyMamba, a spectral-spatial Mamba model enhanced with a localized policy attention mechanism. This mechanism reduces computational overhead by restricting attention to nonoverlapping localized regions and enforcing sparsity constraints, ensuring that only the most informative interactions are retained. A hierarchical aggregation strategy further integrates patchwise attention outputs, preserving spectral-spatial correlations across scales. In addition, a sliding-window patch process enhances local feature continuity while mitigating information loss. The PolicyMamba framework integrates spectral-spatial token generation, token enhancement, localized attention, and state transition modules, significantly improving HSI feature representation. Extensive experiments demonstrate that PolicyMamba achieves superior classification accuracy, outperforming conventional and state-of-the-art methods in land cover classification (LCC) by efficiently modeling intricate dependencies in HSI data.
Citations: 0
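
Illustrative sketch: the localized attention with sparsity constraints described above (attention restricted to nonoverlapping regions, keeping only the most informative interactions) can be approximated as follows. This is a minimal sketch under assumptions, not the authors' implementation; the window size, top-k rule, and name LocalizedSparseAttention are illustrative, and the state-space (Mamba) modules are omitted.

```python
# Minimal sketch (illustrative, not the authors' code) of localized attention with
# a sparsity constraint: tokens are split into nonoverlapping windows, attention is
# computed only within each window, and per query only the top-k attention weights
# are kept before normalization.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class LocalizedSparseAttention(nn.Module):
    def __init__(self, dim: int, window: int = 16, top_k: int = 4):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.window, self.top_k = window, top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, L, D) spectral-spatial tokens; L must be a multiple of `window`.
        B, L, D = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Reshape into nonoverlapping windows: (B, n_windows, window, D).
        shape = (B, L // self.window, self.window, D)
        q, k, v = (t.reshape(shape) for t in (q, k, v))
        attn = (q @ k.transpose(-2, -1)) / math.sqrt(D)        # (B, W, window, window)
        # Keep only the top-k scores per query; mask the rest before softmax.
        kth = attn.topk(self.top_k, dim=-1).values[..., -1:]
        attn = attn.masked_fill(attn < kth, float("-inf"))
        out = F.softmax(attn, dim=-1) @ v
        return out.reshape(B, L, D)


attn = LocalizedSparseAttention(dim=64)
tokens = torch.randn(2, 128, 64)                               # e.g., HSI patch tokens
print(attn(tokens).shape)                                      # torch.Size([2, 128, 64])
```
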
ChebMixer: Efficient Graph Representation Learning With MLP Mixer
IF 10.4 | Q1 | Computer Science
IEEE Transactions on Neural Networks and Learning Systems · Pub Date: 2025-07-22 · DOI: 10.1109/tnnls.2025.3589316
Authors: Xiaoyan Kui, Haonan Yan, Qinsong Li, Min Zhang, Liming Chen, Beiji Zou
Citations: 0
ODMTCNet: An Interpretable Multiview Deep Neural Network Architecture for Feature Representation
IF 10.4 | Q1 | Computer Science
IEEE Transactions on Neural Networks and Learning Systems · Pub Date: 2025-07-21 · DOI: 10.1109/tnnls.2025.3588327
Authors: Lei Gao, Zheng Guo, Ling Guan
Citations: 0
NACHOS: Neural Architecture Search for Hardware-Constrained Early-Exit Neural Networks
IF 10.4 | Q1 | Computer Science
IEEE Transactions on Neural Networks and Learning Systems · Pub Date: 2025-07-21 · DOI: 10.1109/tnnls.2025.3588558
Authors: Matteo Gambella, Jary Pomponi, Simone Scardapane, Manuel Roveri
Citations: 0