Bi-PIL: Bidirectional Gradient-Free Learning Scheme for Multilayer Neural Networks

Ke Wang, Binghong Liu, Pandi Liu, Yungao Shi, Ping Guo, Yafei Li, Mingliang Xu

IEEE Transactions on Neural Networks and Learning Systems (Q1, Computer Science, Artificial Intelligence; IF 10.2)
Journal Article · Published 2025-05-15 · DOI: 10.1109/tnnls.2025.3564654
Citations: 0
Abstract
Training deep neural networks typically relies on gradient descent learning schemes, which are usually time-consuming, and the design of complex network architectures is often intractable. In this article, we explore building multilayer neural networks with an efficient gradient-free learning scheme, offering a potential solution to the architecture design problem. The proposed learning scheme encompasses both forward and backward training (BT) processes. In the forward process, the pseudoinverse learning (PIL) algorithm is employed to train a multilayer neural network: the network is dynamically constructed with a layer-by-layer greedy strategy, enabling automatic, data-driven determination of the architecture across different hierarchies. The network architecture and connection weights determined in the forward training (FT) process are shared with the backward process, which also conducts gradient-free learning to update the connection weights. After the bidirectional learning, a neural network comprising two twin subnetworks is obtained, and the fused features of the subnetworks are used as inputs for downstream tasks. Comprehensive experiments and detailed analyses demonstrate the effectiveness and superiority of the proposed learning scheme.
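The forward process described above follows the classic pseudoinverse learning idea: output weights are solved in closed form via the Moore-Penrose pseudoinverse, and a new hidden layer is appended greedily whenever the fit is still too poor. The sketch below illustrates that idea only; it is a minimal, assumed reconstruction of generic PIL (function name, tolerance, and the tanh activation are illustrative choices), not the paper's Bi-PIL implementation, which additionally includes the backward training pass and feature fusion.

```python
import numpy as np

def pil_forward_train(X, Y, max_layers=5, tol=1e-3):
    """Gradient-free, layer-by-layer greedy training sketch (generic PIL).

    Each step solves the output weights in closed form with the
    Moore-Penrose pseudoinverse; if the fit is not good enough,
    a hidden layer is appended and the process repeats.
    """
    H = X                     # current layer's feature matrix, (n_samples, d)
    hidden_weights = []
    for _ in range(max_layers):
        # Closed-form least-squares output mapping: W_out = H^+ Y
        W_out = np.linalg.pinv(H) @ Y
        err = np.mean((H @ W_out - Y) ** 2)
        if err < tol:
            break             # greedy stopping: architecture depth fixed here
        # Grow the network. Reusing the pseudoinverse of H as the new
        # hidden-layer weights follows the classic PIL construction;
        # tanh supplies the nonlinearity.
        W_h = np.linalg.pinv(H)
        hidden_weights.append(W_h)
        H = np.tanh(H @ W_h)
    return hidden_weights, W_out
```

Because every weight matrix comes from a pseudoinverse rather than iterative gradient steps, training cost is a handful of linear-algebra solves, and the depth of the network falls out of the data via the stopping criterion.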
About the Journal
The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.