Deep Reinforcement Learning or Lyapunov Analysis? A Preliminary Comparative Study on Event-Triggered Optimal Control

IF 15.3 · CAS Tier 1 (Computer Science) · JCR Q1 (Automation & Control Systems)
Jingwei Lu; Lefei Li; Qinglai Wei; Fei-Yue Wang
DOI: 10.1109/JAS.2024.124434
Journal: IEEE/CAA Journal of Automatica Sinica, vol. 11, no. 7, pp. 1702-1704
Published: 2024-06-12 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10555241/
Citations: 0

Abstract

Dear Editor, This letter develops a novel method to implement event-triggered optimal control (ETOC) for discrete-time nonlinear systems using parallel control and deep reinforcement learning (DRL), referred to as Deep-ETOC. The developed Deep-ETOC method introduces the communication cost into the performance index through parallel control, so that control systems can learn ETOC policies directly, without explicit triggering conditions. A dueling double deep Q-network (D3QN) is then utilized to implement the method. In simulations, we present a preliminary comparative study of DRL and Lyapunov analysis for ETOC.
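The letter's core idea, as stated in the abstract, is to charge the communication cost inside the performance index so that the controller itself learns when to transmit, rather than checking a hand-derived triggering condition. A minimal toy sketch of that augmented cost is below; the plant dynamics, feedback gain, penalty weight, and threshold rule are all illustrative assumptions, not the authors' Deep-ETOC or their D3QN learner:

```python
import math

# Hypothetical scalar discrete-time nonlinear plant (illustrative only):
#   x_{k+1} = 0.8*sin(x_k) + u_k
def step(x, u):
    return 0.8 * math.sin(x) + u

# Augmented stage cost: quadratic state/control cost plus a communication
# penalty lam charged only on steps where the control is re-computed and
# transmitted. This mirrors folding the communication cost into the
# performance index so a learner can trade regulation accuracy for triggers.
def stage_cost(x, u, triggered, lam=0.05):
    return x * x + 0.1 * u * u + (lam if triggered else 0.0)

def rollout(policy, x0=1.0, steps=50):
    x, u = x0, 0.0
    total, triggers = 0.0, 0
    for _ in range(steps):
        triggered = policy(x, u)
        if triggered:
            u = -0.8 * math.sin(x)  # recompute feedback, paying lam
            triggers += 1           # otherwise hold the last control (ZOH)
        total += stage_cost(x, u, triggered)
        x = step(x, u)
    return total, triggers

# Time-triggered baseline: transmit every step.
always = lambda x, u: True
# Hand-made event rule: transmit only when the state is large.
event = lambda x, u: abs(x) > 0.1

cost_tt, trig_tt = rollout(always)
cost_et, trig_et = rollout(event)
print(f"time-triggered: cost={cost_tt:.3f}, triggers={trig_tt}")
print(f"event-triggered: cost={cost_et:.3f}, triggers={trig_et}")
```

Under this cost, a DRL agent such as the D3QN used in the letter would treat "transmit" vs. "hold" as part of its action and minimize the augmented return directly, so the event rule above would be learned rather than hand-designed.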
Source journal: IEEE/CAA Journal of Automatica Sinica (Engineering: Control and Systems Engineering)
CiteScore: 23.50 · Self-citation rate: 11.00% · Annual articles: 880
Journal introduction: The IEEE/CAA Journal of Automatica Sinica is a reputable journal that publishes high-quality papers in English on original theoretical and experimental research and development in the field of automation. The journal covers a wide range of topics, including automatic control; artificial intelligence and intelligent control; systems theory and engineering; pattern recognition and intelligent systems; automation engineering and applications; information processing and information systems; network-based automation; robotics; sensing and measurement; and navigation, guidance, and control. The journal is abstracted/indexed in several prominent databases, including SCIE (Science Citation Index Expanded), EI (Engineering Index), Inspec, Scopus, SCImago, DBLP, CNKI (China National Knowledge Infrastructure), CSCD (Chinese Science Citation Database), and IEEE Xplore.