{"title":"基于反向传播和协同进化的神经网络","authors":"Yuelin Gao , Yuming Zhang , Xiaofeng Xie","doi":"10.1016/j.asoc.2025.113453","DOIUrl":null,"url":null,"abstract":"<div><div>Deep neural networks (DNNs) have a powerful feature extraction capability, which allows them to be employed in various fields. However, as the number of layers and neurons in the network increases, the search space for parameter learning becomes complex. Currently, the most commonly used parameter training method is backpropagation (BP) based on gradient descent, but this method is sensitive to the initialization of the parameters and tends to get stuck in local optima in a complex search space. Therefore, a new training method for DNNs has been proposed that combines cooperative co-evolution (CC) with BP-based gradient descent, called BPCC. In the BPCC method, BP performs multiple training periods intermittently, and the CC algorithm is executed when the difference between the current loss function value and the previous loss function value is less than a given threshold (called a condition met). We found that the algorithm easily enters into CC iterations, which reduces the computational effectiveness of the algorithm. A tolerance parameter is designed to curb this phenomenon, and the CC is executed when the cumulative number of times the condition is met reaches the given value of the tolerance parameter, and the improved gray wolf optimizer (GWO) algorithm is used as the solver for the CC. In addition, in the CC iteration stage, the Chebyshev chaotic map series based on the current optimal point is used to initialize the population of GWO to ensure the diversity of the initial population. Experimental comparisons are made with modern network training methods in 7 network models, and the experimental results show that the improved algorithm in this study is competitive.</div></div>","PeriodicalId":50737,"journal":{"name":"Applied Soft Computing","volume":"181 ","pages":"Article 113453"},"PeriodicalIF":6.6000,"publicationDate":"2025-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A neural network based on back-propagation and cooperative co-evolution\",\"authors\":\"Yuelin Gao , Yuming Zhang , Xiaofeng Xie\",\"doi\":\"10.1016/j.asoc.2025.113453\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Deep neural networks (DNNs) have a powerful feature extraction capability, which allows them to be employed in various fields. However, as the number of layers and neurons in the network increases, the search space for parameter learning becomes complex. Currently, the most commonly used parameter training method is backpropagation (BP) based on gradient descent, but this method is sensitive to the initialization of the parameters and tends to get stuck in local optima in a complex search space. Therefore, a new training method for DNNs has been proposed that combines cooperative co-evolution (CC) with BP-based gradient descent, called BPCC. In the BPCC method, BP performs multiple training periods intermittently, and the CC algorithm is executed when the difference between the current loss function value and the previous loss function value is less than a given threshold (called a condition met). We found that the algorithm easily enters into CC iterations, which reduces the computational effectiveness of the algorithm. 
A tolerance parameter is designed to curb this phenomenon, and the CC is executed when the cumulative number of times the condition is met reaches the given value of the tolerance parameter, and the improved gray wolf optimizer (GWO) algorithm is used as the solver for the CC. In addition, in the CC iteration stage, the Chebyshev chaotic map series based on the current optimal point is used to initialize the population of GWO to ensure the diversity of the initial population. Experimental comparisons are made with modern network training methods in 7 network models, and the experimental results show that the improved algorithm in this study is competitive.</div></div>\",\"PeriodicalId\":50737,\"journal\":{\"name\":\"Applied Soft Computing\",\"volume\":\"181 \",\"pages\":\"Article 113453\"},\"PeriodicalIF\":6.6000,\"publicationDate\":\"2025-06-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Applied Soft Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1568494625007641\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Soft Computing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1568494625007641","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
A neural network based on back-propagation and cooperative co-evolution
Deep neural networks (DNNs) have a powerful feature extraction capability, which allows them to be employed in various fields. However, as the number of layers and neurons in a network increases, the search space for parameter learning becomes complex. Currently, the most commonly used parameter training method is backpropagation (BP) based on gradient descent, but this method is sensitive to parameter initialization and tends to become stuck in local optima in a complex search space. Therefore, a new training method for DNNs, called BPCC, has been proposed that combines cooperative co-evolution (CC) with BP-based gradient descent. In the BPCC method, BP is performed intermittently over multiple training periods, and the CC algorithm is executed when the difference between the current and previous loss function values is less than a given threshold (referred to as the condition being met). We found that the algorithm enters CC iterations too easily, which reduces its computational effectiveness. A tolerance parameter is therefore designed to curb this phenomenon: CC is executed only when the cumulative number of times the condition is met reaches the given value of the tolerance parameter, and an improved gray wolf optimizer (GWO) is used as the solver for the CC. In addition, in the CC iteration stage, a Chebyshev chaotic map series based on the current optimal point is used to initialize the GWO population, ensuring the diversity of the initial population. Experimental comparisons with modern network training methods on 7 network models show that the improved algorithm in this study is competitive.
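The abstract describes a scheduling rule (trigger CC only after the loss-improvement condition has accumulated to the tolerance count) and a Chebyshev chaotic initialization around the current best point. The following Python sketch illustrates one plausible reading of that schedule; it is not the authors' implementation. The helpers `run_bp_epoch`, `gwo_optimize`, `get_flat_params`, and `set_flat_params`, as well as the Chebyshev map order and perturbation scale, are illustrative assumptions.

```python
import numpy as np

def chebyshev_init(best_params, pop_size, order=4, scale=0.1):
    """Build a GWO population from a Chebyshev chaotic series
    perturbing the current best point (assumed formulation)."""
    dim = best_params.size
    pop = np.empty((pop_size, dim))
    # seed the chaotic sequence inside (-1, 1), away from fixed points
    x = np.random.uniform(-0.9, 0.9, size=dim)
    for i in range(pop_size):
        # Chebyshev map: x_{k+1} = cos(order * arccos(x_k)), x in [-1, 1]
        x = np.cos(order * np.arccos(np.clip(x, -1.0, 1.0)))
        pop[i] = best_params + scale * x
    return pop

def bpcc_train(model, run_bp_epoch, gwo_optimize,
               max_epochs=100, threshold=1e-3, tolerance=5):
    """Alternate BP training periods with CC/GWO refinement.
    CC runs only after the loss-improvement condition has been met
    `tolerance` times in total (the tolerance parameter)."""
    prev_loss = float("inf")
    met_count = 0
    for _ in range(max_epochs):
        loss = run_bp_epoch(model)            # one BP training period
        if abs(prev_loss - loss) < threshold:
            met_count += 1                    # condition met: accumulate
        prev_loss = loss
        if met_count >= tolerance:
            met_count = 0
            best = model.get_flat_params()
            init_pop = chebyshev_init(best, pop_size=30)
            # hand the parameter vector to the (improved) GWO solver
            new_params = gwo_optimize(model.loss_from_flat_params, init_pop)
            model.set_flat_params(new_params)
    return model
```

Under this reading, BP does the bulk of the optimization, while the chaotic restart around the current optimum gives the GWO population enough diversity to escape the local basin BP has settled into.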
Journal introduction:
Applied Soft Computing is an international journal promoting an integrated view of soft computing to solve real-life problems. The focus is to publish the highest-quality research on the application and convergence of Fuzzy Logic, Neural Networks, Evolutionary Computing, Rough Sets, and other similar techniques to address real-world complexities.
Applied Soft Computing is a rolling publication: articles are published as soon as the editor-in-chief has accepted them. The website is therefore updated continuously with new articles, and publication times are short.