A leading adaptive activation function for deep reinforcement learning
Chongjiexin Jia, Tuanjie Li, Hangjia Dong, Chao Xie, Wenxuan Peng, Yuming Ning
Journal of Computational Science, Volume 88, Article 102608. DOI: 10.1016/j.jocs.2025.102608
Published: 2025-04-24
Cited by: 0
Abstract
The activation function gives deep reinforcement learning the capability to solve nonlinear problems. However, traditional activation functions have fixed parameters that cannot be adapted to constantly changing environmental conditions. This limitation frequently leads to slow convergence and inadequate performance of trained agents when they confront highly complex nonlinear problems. This paper proposes a new method to enhance the ability of reinforcement learning to handle nonlinear problems. The method has two parts: first, an activation function parameter initialization strategy based on environmental characteristics; second, dynamic updating of the activation function parameters with the Adam algorithm. The proposed activation function is compared with both traditional and state-of-the-art activation functions in two experiments. Compared to ReLU, Tanh, APA, and EReLU, its convergence speed in DQN tasks improves by factors of 3.89, 1.29, 0.981, and 2.173, respectively, and in SAC tasks by 1.504, 1.013, 1.017, and 1.131. The results demonstrate that an agent using LaTanH as the activation function shows significant advantages in convergence speed and performance and alleviates the problems of bilateral saturation and vanishing gradients.
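The core mechanism described above can be illustrated with a small sketch. The abstract does not give the exact functional form of LaTanH, so the code below uses a generic parameterized tanh, f(x) = a·tanh(b·x) with learnable scale a and slope b, purely as an illustrative stand-in; the class name, parameterization, and hyperparameters are assumptions, not the authors' method. What it does show faithfully is the second part of the pipeline: treating the activation's parameters as trainable quantities and updating them with Adam alongside gradient signals from the loss.

```python
import numpy as np

class AdaptiveTanh:
    """Illustrative adaptive activation: f(x) = a * tanh(b * x).

    'a' (output scale) and 'b' (input slope) are learnable and updated
    with a standard Adam step. This is a generic sketch, NOT the paper's
    LaTanH, whose exact form is not given in the abstract.
    """

    def __init__(self, a=1.0, b=1.0, lr=1e-2, beta1=0.9, beta2=0.999, eps=1e-8):
        self.p = np.array([a, b], dtype=float)   # parameters [a, b]
        self.m = np.zeros(2)                     # Adam first moment
        self.v = np.zeros(2)                     # Adam second moment
        self.t = 0                               # Adam timestep
        self.lr, self.b1, self.b2, self.eps = lr, beta1, beta2, eps

    def forward(self, x):
        a, b = self.p
        return a * np.tanh(b * x)

    def grads(self, x, upstream):
        """Gradients of the loss w.r.t. [a, b], given dL/df (upstream)."""
        a, b = self.p
        t = np.tanh(b * x)
        ga = np.sum(upstream * t)                    # dL/da
        gb = np.sum(upstream * a * x * (1 - t**2))   # dL/db
        return np.array([ga, gb])

    def adam_step(self, g):
        """One Adam update of the activation parameters."""
        self.t += 1
        self.m = self.b1 * self.m + (1 - self.b1) * g
        self.v = self.b2 * self.v + (1 - self.b2) * g**2
        mhat = self.m / (1 - self.b1**self.t)
        vhat = self.v / (1 - self.b2**self.t)
        self.p -= self.lr * mhat / (np.sqrt(vhat) + self.eps)
```

As a toy usage, fitting the activation to a target curve 1.5·tanh(2x) by minimizing mean squared error drives (a, b) from their initial values toward (1.5, 2.0), showing how a shape-mismatched activation can adapt its nonlinearity rather than staying fixed the way ReLU or plain tanh do.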
About the journal
Computational Science is a rapidly growing multi- and interdisciplinary field that uses advanced computing and data analysis to understand and solve complex problems. It has reached a level of predictive capability that now firmly complements the traditional pillars of experimentation and theory.
The recent advances in experimental techniques such as detectors, on-line sensor networks and high-resolution imaging techniques, have opened up new windows into physical and biological processes at many levels of detail. The resulting data explosion allows for detailed data driven modeling and simulation.
This new discipline in science combines computational thinking, modern computational methods, devices and collateral technologies to address problems far beyond the scope of traditional numerical methods.
Computational science typically unifies three distinct elements:
• Modeling, Algorithms and Simulations (e.g. numerical and non-numerical, discrete and continuous);
• Software developed to solve problems in science (e.g., biological, physical, and social), engineering, medicine, and the humanities;
• Computer and information science that develops and optimizes the advanced system hardware, software, networking, and data management components (e.g. problem solving environments).