Latest Articles: IEEE Transactions on Neural Networks and Learning Systems

Nearest Neighbor Multivariate Time Series Forecasting
IF 10.4 | CAS Q1 | Computer Science
IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2024-11-14 | DOI: 10.1109/tnnls.2024.3490603
Huiliang Zhang, Ping Nie, Lijun Sun, Benoit Boulet
Citations: 0
Debiasing Graph Representation Learning Based on Information Bottleneck
IF 10.4 | CAS Q1 | Computer Science
IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2024-11-14 | DOI: 10.1109/tnnls.2024.3492055
Ziyi Zhang, Mingxuan Ouyang, Wanyu Lin, Hao Lan, Lei Yang
Citations: 0
Advancing Causal Intervention in Image Captioning With Causal Prompt
IF 10.4 | CAS Q1 | Computer Science
IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2024-11-14 | DOI: 10.1109/tnnls.2024.3487200
Youngjoon Yu, Yeonju Kim, Yong Man Ro
Citations: 0
Enhancing Distributed Neural Network Training Through Node-Based Communications
IF 10.4 | CAS Q1 | Computer Science
IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2023-09-18 | DOI: 10.1109/TNNLS.2023.3309735
Sergio Moreno-Alvarez, Mercedes E Paoletti, Gabriele Cavallaro, Juan M Haut
Abstract: The amount of data needed to effectively train modern deep neural architectures has grown significantly, leading to increased computational requirements. These intensive computations are tackled by combining last-generation computing resources, such as accelerators, with classic processing units. Nevertheless, gradient communication remains the major bottleneck, hindering efficiency despite the runtime improvements obtained through data parallelism strategies. Data parallelism involves all processes in a global exchange of a potentially large amount of data, which may impede the desired speedup and lead to noticeable delays or bottlenecks. As a result, communication latency poses a significant challenge that profoundly impacts performance on distributed platforms. This research presents node-based optimization steps to significantly reduce the gradient exchange between model replicas while ensuring model convergence. The proposal serves as a versatile communication scheme, suitable for integration into a wide range of general-purpose deep neural network (DNN) algorithms. The optimization takes into consideration the specific location of each replica within the platform. To demonstrate the effectiveness, different neural network approaches and datasets with disjoint properties are used. In addition, multiple types of applications are considered to demonstrate the robustness and versatility of our proposal. The experimental results show a global training time reduction while slightly improving accuracy. Code: https://github.com/mhaut/eDNNcomm.
Citations: 0
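The paper's full communication scheme is available at https://github.com/mhaut/eDNNcomm. As a loose, hypothetical sketch of the general idea the abstract describes — aggregating gradients inside each node before a much smaller inter-node exchange — the following NumPy simulation uses made-up node/worker counts and stands in for real collective operations:

```python
import numpy as np

# Hypothetical setup: 2 nodes, 4 worker replicas per node, a flat gradient of size 8.
# Instead of one global all-reduce over all 8 replicas, gradients are first reduced
# inside each node (cheap, local links) and only one aggregate per node crosses the
# network (expensive, inter-node links).
rng = np.random.default_rng(0)
n_nodes, workers_per_node, grad_dim = 2, 4, 8
grads = rng.normal(size=(n_nodes, workers_per_node, grad_dim))

# Step 1: intra-node reduction (sum over the replicas that share a physical node).
node_sums = grads.sum(axis=1)                      # shape: (n_nodes, grad_dim)

# Step 2: inter-node exchange of the per-node aggregates only.
global_sum = node_sums.sum(axis=0)                 # shape: (grad_dim,)
averaged_grad = global_sum / (n_nodes * workers_per_node)

# Every replica then applies the same averaged gradient, so the model replicas
# stay synchronized while far less data crosses the inter-node network.
print(averaged_grad)
```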
Concurrent Learning-Based Adaptive Control of Underactuated Robotic Systems With Guaranteed Transient Performance for Both Actuated and Unactuated Motions
IF 10.4 | CAS Q1 | Computer Science
IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2023-09-18 | DOI: 10.1109/TNNLS.2023.3311927
Tong Yang, Ning Sun, Zhuoqing Liu, Yongchun Fang
Abstract: With the wide application of underactuated robotic systems, more complex tasks and higher safety demands are being put forward. However, it remains an open issue to use "fewer" control inputs to satisfy control accuracy and transient performance with theoretical and practical guarantees, especially for unactuated variables. To this end, for underactuated robotic systems, this article designs an adaptive tracking controller that realizes exponential convergence, rather than only asymptotic stability or boundedness; meanwhile, unactuated states exponentially converge to a sufficiently small bound that is adjustable through the control gains. The maximum motion ranges and convergence speed of all variables exhibit satisfactory performance with higher safety and efficiency. A data-driven concurrent learning (CL) method is proposed to compensate for unknown dynamics/disturbances and improve the estimation accuracy of parameters/weights, without requiring persistency of excitation or linear parametrization (LP) conditions. A disturbance judgment mechanism is then utilized to eliminate the detrimental impact of external disturbances. To the best of our knowledge, for general underactuated systems with uncertainties/disturbances, this is the first work to theoretically and practically ensure transient performance and exponential convergence speed for unactuated states while simultaneously obtaining exponential tracking of actuated motions. Both theoretical analysis and hardware experiment results illustrate the effectiveness of the designed controller.
Citations: 0
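Concurrent learning in general (not this paper's specific controller) augments the instantaneous adaptation signal with a stack of recorded regressor/output pairs so that parameter estimates can converge without persistently exciting inputs. A minimal sketch of such an update law, with all gains, signals, and the history-stack rule chosen hypothetically:

```python
import numpy as np

# Generic concurrent-learning (CL) parameter estimation for y = phi(x) @ theta,
# sketched independently of the paper's controller; gains and data are made up.
theta_true = np.array([2.0, -1.0, 0.5])
phi = lambda x: np.array([1.0, x, x**2])           # regressor vector

theta_hat = np.zeros(3)
gamma, dt = 0.5, 0.01
memory = []                                        # recorded (phi_j, y_j) pairs

for step in range(5000):
    x = np.sin(0.01 * step)                        # slowly varying input
    y = phi(x) @ theta_true
    p = phi(x)
    # Occasionally store data points in the history stack.
    if step % 200 == 0:
        memory.append((p.copy(), y))
    # Instantaneous gradient term plus a summed term over the recorded data:
    update = p * (y - p @ theta_hat)
    for p_j, y_j in memory:
        update += p_j * (y_j - p_j @ theta_hat)
    theta_hat += dt * gamma * update

print(theta_hat)   # approaches theta_true thanks to the stored data, not excitation
```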
Benchmarking the Robustness of Instance Segmentation Models
IF 10.4 | CAS Q1 | Computer Science
IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2023-09-18 | DOI: 10.1109/TNNLS.2023.3310985
Yusuf Dalva, Hamza Pehlivan, Said Fahri Altindis, Aysegul Dundar
Abstract: This article presents a comprehensive evaluation of instance segmentation models with respect to real-world image corruptions as well as out-of-domain image collections, e.g., images captured by a different setup than the training dataset. The out-of-domain evaluation shows the generalization capability of models, an essential aspect of real-world applications and an extensively studied topic of domain adaptation. These robustness and generalization evaluations are important when designing instance segmentation models for real-world applications and when picking an off-the-shelf pretrained model for the task at hand. Specifically, this benchmark study includes state-of-the-art network architectures, network backbones, normalization layers, models trained from scratch versus pretrained networks, and the effect of multitask training on robustness and generalization. Through this study, we gain several insights. For example, we find that group normalization (GN) enhances the robustness of networks across corruptions where the image contents stay the same but corruptions are added on top. On the other hand, batch normalization (BN) improves the generalization of the models across different datasets where the statistics of image features change. We also find that single-stage detectors do not generalize well to image resolutions larger than their training size, whereas multistage detectors can easily be used on images of different sizes. We hope that our comprehensive study will motivate the development of more robust and reliable instance segmentation models.
Citations: 0
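The GN-versus-BN observation in this abstract is easy to illustrate: the two normalization layers are drop-in replacements inside a convolutional stage. The PyTorch toy block below is only a sketch of that swap (the group count and layer sizes are arbitrary), not the benchmark models used in the paper:

```python
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int, norm: str = "bn") -> nn.Sequential:
    """A toy convolutional stage with a configurable normalization layer.
    'bn' uses BatchNorm2d (statistics depend on the batch/dataset),
    'gn' uses GroupNorm (statistics computed per sample over channel groups)."""
    if norm == "bn":
        norm_layer = nn.BatchNorm2d(out_ch)
    else:
        norm_layer = nn.GroupNorm(num_groups=8, num_channels=out_ch)
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        norm_layer,
        nn.ReLU(inplace=True),
    )

x = torch.randn(2, 3, 64, 64)          # dummy batch
print(conv_block(3, 32, "bn")(x).shape)
print(conv_block(3, 32, "gn")(x).shape)
```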
Deep Learning and Symbolic Regression for Discovering Parametric Equations
IF 10.2 | CAS Q1 | Computer Science
IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2023-09-18 | DOI: 10.1109/TNNLS.2023.3297978 | Vol. 35, No. 11, pp. 16775-16787
Michael Zhang, Samuel Kim, Peter Y. Lu, Marin Soljačić
Abstract: Symbolic regression is a machine learning technique that can learn the equations governing data and thus has the potential to transform scientific discovery. However, symbolic regression is still limited in the complexity and dimensionality of the systems that it can analyze. Deep learning, on the other hand, has transformed machine learning through its ability to analyze extremely complex and high-dimensional datasets. We propose a neural network architecture that extends symbolic regression to parametric systems where some coefficients may vary but the structure of the underlying governing equation remains constant. We demonstrate our method on various analytic expressions and partial differential equations (PDEs) with varying coefficients and show that it extrapolates well outside of the training domain. The proposed neural-network-based architecture can also be combined with other deep learning architectures so that it can analyze high-dimensional data while being trained end-to-end. To this end, we demonstrate the scalability of our architecture by incorporating a convolutional encoder to analyze 1-D images of varying spring systems.
Citations: 0
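As a generic, hypothetical illustration of the underlying idea — a differentiable "symbolic" layer whose output is a sparse linear combination of candidate basis functions, trained by gradient descent — one might write something like the following; it is not the architecture proposed in the paper:

```python
import torch
import torch.nn as nn

class SymbolicLayer(nn.Module):
    """Toy symbolic-regression layer: the output is a linear combination of a fixed
    dictionary of candidate functions of the input, with learnable coefficients."""
    def __init__(self):
        super().__init__()
        self.basis = [torch.sin, torch.cos, lambda z: z, lambda z: z**2]
        self.coeffs = nn.Parameter(torch.zeros(len(self.basis)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.stack([f(x) for f in self.basis], dim=-1)
        return feats @ self.coeffs

# Fit y = 1.5*sin(x) + 0.5*x**2 on synthetic data; an L1 penalty encourages a
# sparse, interpretable set of surviving terms.
x = torch.linspace(-3, 3, 200)
y = 1.5 * torch.sin(x) + 0.5 * x**2
model = SymbolicLayer()
opt = torch.optim.Adam(model.parameters(), lr=0.05)
for _ in range(500):
    opt.zero_grad()
    loss = ((model(x) - y) ** 2).mean() + 1e-3 * model.coeffs.abs().sum()
    loss.backward()
    opt.step()
print(model.coeffs.detach())   # coefficients for [sin, cos, identity, square]
```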
Hybrid Quantum-Classical Convolutional Neural Network Model for Image Classification
IF 10.4 | CAS Q1 | Computer Science
IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2023-09-18 | DOI: 10.1109/TNNLS.2023.3312170
Fan Fan, Yilei Shi, Tobias Guggemos, Xiao Xiang Zhu
Abstract: Image classification plays an important role in remote sensing. Earth observation (EO) has inevitably entered the big data era, but the high demand for computational power has become a bottleneck for analyzing large amounts of remote sensing data with sophisticated machine learning models. Exploiting quantum computing may contribute to a solution to this challenge by leveraging quantum properties. This article introduces a hybrid quantum-classical convolutional neural network (QC-CNN) that applies quantum computing to effectively extract high-level critical features from EO data for classification purposes. In addition, the adoption of the amplitude encoding technique reduces the required quantum bit resources. The complexity analysis indicates that the proposed model can accelerate the convolutional operation in comparison with its classical counterpart. The model's performance is evaluated on different EO benchmarks, including Overhead-MNIST, So2Sat LCZ42, PatternNet, RSI-CB256, and NaSC-TG2, through the TensorFlow Quantum platform; it achieves better performance than its classical counterpart and higher generalizability, which verifies the validity of the QC-CNN model on EO data classification tasks.
Citations: 0
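Amplitude encoding itself (independent of this paper's circuit design) maps a length-2^n classical vector onto the amplitudes of an n-qubit state by L2-normalizing it, which is why it reduces the required qubit resources. A plain NumPy sketch of that encoding step, using a made-up 4x4 patch:

```python
import numpy as np

def amplitude_encode(patch: np.ndarray) -> np.ndarray:
    """Map a classical vector of length 2**n onto the amplitudes of an n-qubit state.
    The state is simply the L2-normalized vector, so 2**n values need only n qubits.
    Generic illustration; the paper's circuit details are not reproduced here."""
    flat = patch.astype(float).ravel()
    n_qubits = int(np.log2(flat.size))
    if 2 ** n_qubits != flat.size:
        raise ValueError("vector length must be a power of two")
    norm = np.linalg.norm(flat)
    if norm == 0:
        raise ValueError("cannot encode the zero vector")
    return flat / norm            # amplitudes of the n-qubit state

patch = np.arange(1, 17).reshape(4, 4)     # a 4x4 image patch -> 16 = 2**4 amplitudes
state = amplitude_encode(patch)
print(state.size, np.isclose(np.sum(state ** 2), 1.0))   # 16 amplitudes, unit norm
```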
Semisupervised Subspace Learning With Adaptive Pairwise Graph Embedding
IF 10.4 | CAS Q1 | Computer Science
IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2023-09-18 | DOI: 10.1109/TNNLS.2023.3311789
Hebing Nie, Qi Li, Zheng Wang, Haifeng Zhao, Feiping Nie
Abstract: Graph-based semisupervised learning can exploit the graph topology information behind the samples and has become one of the most attractive research areas in machine learning in recent years. Nevertheless, existing graph-based methods suffer from two shortcomings. On the one hand, they generate graphs in the original high-dimensional space, where noisy and redundant features easily disturb the construction, resulting in low-quality graphs that cannot accurately portray the relationships between data. On the other hand, most existing models are based on the Gaussian assumption, which cannot capture the local submanifold structure of the data, thus reducing the discriminativeness of the learned low-dimensional representations. This article proposes semisupervised subspace learning with adaptive pairwise graph embedding (APGE), which first builds a k1-nearest-neighbor graph on the labeled data to learn local discriminant embeddings that explore the intrinsic submanifold structure of the non-Gaussian labeled data. A k2-nearest-neighbor graph is then constructed on all samples and mapped into graph-embedding learning to adaptively explore the global structure of all samples. Clustering unlabeled data and their labeled neighbors into the same submanifold, sharing the same label information, improves the discriminative ability of the embedded data. An adaptive neighborhood learning method is used to learn the graph structure in the continuously optimized subspace, ensuring that the optimal graph matrix and projection matrix are ultimately learned, which provides strong robustness. Meanwhile, a rank constraint is imposed on the Laplacian matrix of the similarity matrix of all samples so that the number of connected components in the obtained similarity matrix exactly equals the number of classes, which makes the graph structure clearer and the relationships between neighboring sample points more explicit. Finally, multiple experiments on several synthetic and real-world datasets show that the method performs well in exploring local structure and in classification tasks.
Citations: 0
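The k-nearest-neighbor affinity graph and its Laplacian that the abstract builds on can be sketched with standard machinery; the snippet below shows only that generic construction (Gaussian weights, fixed k and sigma chosen arbitrarily), not the paper's adaptive-neighborhood optimization or rank constraint:

```python
import numpy as np

def knn_graph_laplacian(X: np.ndarray, k: int = 5, sigma: float = 1.0):
    """Build a symmetric k-NN affinity matrix W with Gaussian weights and return
    (W, L), where L = D - W is the unnormalized graph Laplacian. Standard
    construction only; the paper additionally adapts the graph during learning."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(d2[i])[1:k + 1]                   # skip the point itself
        W[i, idx] = np.exp(-d2[i, idx] / (2 * sigma ** 2))
    W = np.maximum(W, W.T)                                 # symmetrize
    L = np.diag(W.sum(axis=1)) - W
    return W, L

X = np.random.default_rng(2).normal(size=(30, 4))
W, L = knn_graph_laplacian(X, k=5)
print(W.shape, np.allclose(L, L.T))
```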
Confusion Region Mining for Crowd Counting
IF 10.4 | CAS Q1 | Computer Science
IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2023-09-15 | DOI: 10.1109/TNNLS.2023.3311020
Jiawen Zhu, Wenda Zhao, Libo Yao, You He, Maodi Hu, Xiaoxing Zhang, Shuo Wang, Tao Li, Huchuan Lu
Abstract: Existing works mainly focus on the crowd and ignore confusion regions in the background whose appearance is extremely similar to the crowd, whereas crowd counting must handle both at the same time. To address this issue, we propose a novel end-to-end trainable confusion region discriminating and erasing network called CDENet. Specifically, CDENet is composed of two modules: a confusion region mining module (CRM) and a guided erasing module (GEM). CRM consists of a basic density estimation (BDE) network, a confusion region aware bridge, and a confusion region discriminating network. The BDE network first generates a primary density map, and the confusion region aware bridge then excavates confusion regions by comparing the primary prediction with the ground-truth density map. Finally, the confusion region discriminating network learns the difference between feature representations in confusion regions and in crowds. Furthermore, GEM produces the refined density map by erasing the confusion regions. We evaluate the proposed method on four crowd counting benchmarks, including ShanghaiTech Part_A, ShanghaiTech Part_B, UCF_CC_50, and UCF-QNRF, and our CDENet achieves superior performance compared with the state of the art.
Citations: 0
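As a loose approximation of what the confusion region aware bridge does — comparing the primary density prediction with the ground truth to flag background areas mistaken for crowd — one could threshold the two maps as below; the actual bridge and discriminating network in CDENet are learned, not a fixed rule:

```python
import numpy as np

def confusion_region_mask(pred_density: np.ndarray,
                          gt_density: np.ndarray,
                          thresh: float = 0.05) -> np.ndarray:
    """Loose illustration of mining confusion regions: pixels where the predicted
    density is clearly positive while the ground-truth density is near zero are
    background areas the counter mistook for crowd."""
    false_crowd = (pred_density > thresh) & (gt_density <= thresh)
    return false_crowd.astype(np.uint8)

rng = np.random.default_rng(3)
gt = np.zeros((64, 64))
gt[20:30, 20:30] = 0.2                                     # true crowd blob
pred = gt + 0.1 * (rng.random((64, 64)) > 0.97)            # spurious background responses
mask = confusion_region_mask(pred, gt)
print(mask.sum(), "pixels flagged as confusion regions")
```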