{"title":"Safe reinforcement learning for discrete-time nonlinear zero-sum games with unknown state constraints and asymmetric input constraints","authors":"Shihan Liu , Zhi Chen , Dongxu Gao","doi":"10.1016/j.neucom.2025.131026","DOIUrl":null,"url":null,"abstract":"<div><div>In this paper, we propose a novel safe reinforcement learning (RL) algorithm for discrete-time nonlinear zero-sum games with unknown state constraints and asymmetric input constraints. To address this constrained optimal problem, we adopt a value iteration framework based on neural networks, incorporating a critic-only structure. Given the unknown safety constraints, we tackle the state constraint issue by introducing a neural network-based control barrier function (CBF) using collected data to augment the reward function. Furthermore, by leveraging the non-monotonic increasing property of the value function, we ensure the system’s safety. Additionally, we construct a non-quadratic function to further augment the reward function, thereby satisfying the asymmetric input constraints. This paper also includes a series of theoretical proofs that rigorously demonstrate the convergence and safety of the proposed algorithm. Finally, experiments conducted under different scenarios and parameter settings, compared with existing algorithms, validate the algorithm’s effectiveness and safety.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"651 ","pages":"Article 131026"},"PeriodicalIF":6.5000,"publicationDate":"2025-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231225016984","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
In this paper, we propose a novel safe reinforcement learning (RL) algorithm for discrete-time nonlinear zero-sum games with unknown state constraints and asymmetric input constraints. To address this constrained optimal control problem, we adopt a neural-network-based value iteration framework with a critic-only structure. Because the safety constraints are unknown, we handle the state constraints by learning a neural network-based control barrier function (CBF) from collected data and using it to augment the reward function. Furthermore, we exploit the non-monotonically increasing property of the value function to guarantee the system's safety. In addition, we construct a non-quadratic function that further augments the reward function so that the asymmetric input constraints are satisfied. We also provide a series of theoretical proofs that rigorously establish the convergence and safety of the proposed algorithm. Finally, experiments under different scenarios and parameter settings, together with comparisons against existing algorithms, validate the effectiveness and safety of the proposed algorithm.
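To make the scheme sketched in the abstract more concrete, the toy example below combines its three ingredients: a barrier-augmented stage cost for state safety, a non-quadratic penalty encoding asymmetric input bounds, and a min-max (zero-sum) value-iteration backup. It is a minimal sketch under stated assumptions only: the scalar dynamics, the hand-written barrier term standing in for the learned neural-network CBF, the arctanh-based input penalty, the discount factor added for convergence, and the lookup table used in place of the critic network are all illustrative choices, not the paper's implementation.

```python
# Hypothetical sketch of barrier-augmented, input-constrained zero-sum value
# iteration on a toy scalar system. All models and parameters are assumptions.
import numpy as np

def barrier(x):
    # Stand-in for the learned CBF: zero at the origin and growing without
    # bound as x approaches the (assumed) unknown safe-set boundary |x| = 1.
    return -np.log(1.0 - np.clip(x, -0.999, 0.999) ** 2)

def input_penalty(u, u_min=-0.5, u_max=1.0):
    # Non-quadratic penalty for the asymmetric bound u_min <= u <= u_max,
    # built (as an assumption) from the integral of a scaled, shifted arctanh;
    # it is zero at the interval midpoint and blows up toward the bounds.
    c, r = (u_max + u_min) / 2.0, (u_max - u_min) / 2.0
    z = np.clip((u - c) / r, -0.999, 0.999)
    return 2.0 * r ** 2 * (z * np.arctanh(z) + 0.5 * np.log(1.0 - z ** 2))

def stage_cost(x, u, w):
    # Stage cost augmented with the barrier and input-penalty terms; the
    # disturbance w is the maximizing player, penalized by its attenuation
    # level gamma_w, as in an H-infinity style zero-sum formulation.
    gamma_w = 2.0
    return x ** 2 + input_penalty(u) + barrier(x) - gamma_w ** 2 * w ** 2

def step(x, u, w):
    # Hypothetical scalar discrete-time dynamics used only for this demo.
    return 0.8 * x + 0.5 * np.sin(x) + u + 0.2 * w

# Critic-only value iteration on a state grid (the table plays the critic's
# role); each sweep takes min over u and max over w of the Bellman target.
xs = np.linspace(-0.95, 0.95, 41)
us = np.linspace(-0.5, 1.0, 31)
ws = np.linspace(-0.3, 0.3, 13)
gamma = 0.95          # discount factor (an assumption, for convergence)
V = np.zeros_like(xs)

for _ in range(60):
    V_new = np.empty_like(V)
    for i, x in enumerate(xs):
        # min_u max_w { c(x,u,w) + gamma * V(x') }, V(x') read by interpolation
        q = np.array([[stage_cost(x, u, w) + gamma * np.interp(step(x, u, w), xs, V)
                       for w in ws] for u in us])
        V_new[i] = q.max(axis=1).min()
    V = V_new

print("approximate value at x = 0.5:", np.interp(0.5, xs, V))
```

The min over u and max over w in each sweep reflects the zero-sum structure: the controller minimizes the augmented cost while the disturbance maximizes it, and the barrier term keeps low-cost solutions away from the (unknown) state-constraint boundary.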
Journal description:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice, and applications are the essential topics covered.