Mohamed A. Taha, Mahmoud M. Saafan, Sarah M. Ayyad
DOI: 10.1007/s10462-025-11382-9
Journal: Artificial Intelligence Review, Vol. 58 (11)
Impact Factor: 13.9 (JCR Q1, Computer Science, Artificial Intelligence; CAS Region 2, Computer Science)
Publication date: 2025-09-16
Publication type: Journal Article
Full-text PDF: https://link.springer.com/content/pdf/10.1007/s10462-025-11382-9.pdf
Article page: https://link.springer.com/article/10.1007/s10462-025-11382-9
Revisiting natural selection: evolving dynamic neural networks using genetic algorithms for complex control tasks
Reinforcement learning (RL) and genetic algorithms (GAs) are widely used for decision-making and control tasks, but both often suffer from prolonged training times and inefficiency. This paper addresses the need for a faster, more precise method of training neural networks for RL tasks without sacrificing performance. The proposed approach enhances GAs with mechanisms that optimize network architectures dynamically, minimizing unnecessary complexity while maintaining accuracy. The methodology includes a dynamic architecture adaptation technique that trims the neural network to its most compact effective configuration. A blending mechanism improves the propagation of essential features across network layers, deferring the use of non-linearities until they are needed. An experience replay buffer avoids redundant fitness evaluations, significantly reducing computational overhead. Additionally, a novel approach combines back-propagation with GAs, using it as a mutation operator to fine-tune the model in supervised or RL tasks. Experimental results demonstrate convergence within seconds for simple tasks with well-defined rewards and within minutes for more complex tasks. Training time is reduced by nearly 70%, and the minimal architectures yield faster inference, making the approach suitable for mobile and edge devices. Because the evolved networks have very few parameters, computation is reduced by over 90%, especially during inference. At the end of training, the performance metrics are comparable to those of conventional approaches. The proposed method is scalable and resource-efficient, outperforming existing neural network optimization techniques in both simulated environments and real-world applications.
The developed framework is publicly available under the MIT license at https://github.com/AhmedBoin/atgen, offering an open-source solution for the broader research community.
About the journal:
Artificial Intelligence Review, a fully open access journal, publishes cutting-edge research in artificial intelligence and cognitive science. It features critical evaluations of applications, techniques, and algorithms, providing a platform for both researchers and application developers. The journal includes refereed survey and tutorial articles, along with reviews and commentary on significant developments in the field.