{"title":"Modified Triplet-Average Deep Deterministic Policy Gradient for interpretable neuro-fuzzy deep reinforcement learning","authors":"Tuan-Linh Nguyen , Nguyen Van Thin , Sangmoon Lee","doi":"10.1016/j.jfranklin.2025.107653","DOIUrl":null,"url":null,"abstract":"<div><div>In order to find the control rules of the nonlinear system from the learned data, it is necessary to interpret the learned policy in Deep Reinforcement Learning (DRL). This paper presents a novel interpretable Neuro-Fuzzy (NF) inference system based on Modified Triplet-Average Deep Deterministic Policy Gradient (MTADD) reinforcement learning algorithm with a two-phased training method. The first phase involves exploring and initiating the T-S fuzzy system rule and premise parameter. The second step is the deep reinforcement learning of the NF policy network, which uses a Modified Triplet-Average Deep Deterministic policy gradient algorithm. The experiment results demonstrate that the proposed approach decreases the training time, enhances the control performance, and increases the interpretability of NF DRL.</div></div>","PeriodicalId":17283,"journal":{"name":"Journal of The Franklin Institute-engineering and Applied Mathematics","volume":"362 7","pages":"Article 107653"},"PeriodicalIF":3.7000,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of The Franklin Institute-engineering and Applied Mathematics","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0016003225001474","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
To extract the control rules of a nonlinear system from learned data, the policy learned by Deep Reinforcement Learning (DRL) must be interpretable. This paper presents a novel interpretable Neuro-Fuzzy (NF) inference system based on a Modified Triplet-Average Deep Deterministic policy gradient (MTADD) reinforcement learning algorithm with a two-phase training method. The first phase explores the environment and initializes the rules and premise parameters of the T-S fuzzy system. The second phase performs deep reinforcement learning of the NF policy network using the MTADD algorithm. The experimental results demonstrate that the proposed approach decreases training time, enhances control performance, and increases the interpretability of NF DRL.
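As a rough illustration of the two-phase idea described in the abstract (not the authors' implementation), the sketch below assumes a Takagi-Sugeno policy with Gaussian premise memberships seeded from exploration data in phase one, and a critic target in phase two that averages three target critics, which is one plausible reading of "triplet-average". All class names, parameters, and initialization choices here are hypothetical.

```python
# Hypothetical sketch, not the paper's code: a T-S fuzzy policy network plus a
# triplet-averaged critic target, illustrating the two training phases at a high level.
import torch
import torch.nn as nn


class TSFuzzyPolicy(nn.Module):
    """T-S fuzzy policy: Gaussian antecedents, affine consequents (assumed form)."""

    def __init__(self, state_dim: int, action_dim: int, n_rules: int):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(n_rules, state_dim))       # premise centers
        self.log_widths = nn.Parameter(torch.zeros(n_rules, state_dim))    # premise widths
        self.consequents = nn.Parameter(0.1 * torch.randn(n_rules, state_dim + 1, action_dim))

    @torch.no_grad()
    def init_premises_from_data(self, states: torch.Tensor):
        """Phase 1 (assumed): seed rule premises from exploration data."""
        idx = torch.randperm(states.shape[0])[: self.centers.shape[0]]
        self.centers.copy_(states[idx])                                    # centers from sampled states
        self.log_widths.fill_(states.std().clamp_min(1e-3).log())          # common width from data spread

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        d = (s.unsqueeze(1) - self.centers) / self.log_widths.exp()        # (batch, rules, state_dim)
        w = torch.exp(-0.5 * (d ** 2).sum(-1))                             # Gaussian firing strengths
        w = w / (w.sum(-1, keepdim=True) + 1e-8)                           # normalize over rules
        s1 = torch.cat([s, torch.ones_like(s[:, :1])], dim=-1)             # affine consequent input
        rule_actions = torch.einsum('bi,rio->bro', s1, self.consequents)   # per-rule local actions
        return torch.tanh((w.unsqueeze(-1) * rule_actions).sum(1))         # rule-weighted action


def triplet_average_target(target_critics, s_next, a_next, r, gamma, done):
    """Phase 2 (assumed reading of 'triplet-average'): average three target critics."""
    q = torch.stack([c(s_next, a_next) for c in target_critics], dim=0).mean(0)
    return r + gamma * (1.0 - done) * q
```

Because each rule's action is a simple affine function of the state weighted by an interpretable membership degree, the learned policy can be read back as a set of fuzzy IF-THEN control rules, which is the interpretability property the abstract emphasizes.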
About the journal
The Journal of The Franklin Institute has an established reputation for publishing high-quality papers in engineering and applied mathematics. Its current focus is on control systems, complex networks and dynamic systems, signal processing and communications, and their applications. All submitted papers are peer-reviewed. The Journal publishes original research papers and research review papers of substance. Papers and special focus issues are judged on their potential lasting value, which has been and continues to be the strength of the Journal of The Franklin Institute.