Implementation of deep reinforcement learning in permanent magnet synchronous motors control: A review

IF 10.7 | CAS Tier 2 (Computer Science) | JCR Q1 (AUTOMATION & CONTROL SYSTEMS)
Larbi Assem Moulai, Fardila M. Zaihidee, Saad Mekhilef, Jing Rui Tang, Marizan Mubin
DOI: 10.1016/j.arcontrol.2025.101014
Annual Reviews in Control, Volume 60, Article 101014. Published 2025-01-01.
Citations: 0

Abstract

Permanent Magnet Synchronous Motors (PMSMs) are recognized for their high efficiency, high torque-to-inertia ratio, and robustness, making them well suited to the rapidly developing fields of electric vehicles, robotics, and aerospace. Recently, Deep Reinforcement Learning (DRL) algorithms have gained significant attention in the control domain because they are independent of the plant model and offer advanced decision-making capabilities. These features make DRL highly suitable for addressing challenges in PMSM control such as load disturbances, speed tracking, and parameter variations. This review explores recent DRL techniques applied to PMSM speed, current, and torque control. Discrete- and continuous-action algorithms, including Deep Q-Network (DQN), Deep Deterministic Policy Gradient (DDPG), and Twin Delayed DDPG (TD3), are examined in terms of their basic principles, practical implementations, and the benefits they provide in overcoming challenges in PMSM control. In addition, to demonstrate the efficiency of DRL, the review summarizes and compares DRL methods used to optimize classical control schemes within various PMSM control strategies. Comparisons of DRL implementations in PMSM control are highlighted to validate their real-time applicability in experiments, and potential areas for future research and improvement are outlined.
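To make the discrete-action idea behind DQN concrete, the following is a minimal, self-contained sketch: a tabular Q-learning agent (the tabular ancestor of DQN, with the Q-table standing in for the neural network) regulating the speed of a toy first-order motor model. Everything here is illustrative and not from the paper: the plant constants (`DT`, `TAU`, `GAIN`), the 50 rad/s reference, and the three voltage-increment actions are invented assumptions, and the first-order plant is a stand-in for a real PMSM dq-frame model.

```python
import random

DT, TAU, GAIN = 0.01, 0.05, 10.0   # control period [s], mech. time constant [s], rad/s per V
TARGET = 50.0                      # speed reference [rad/s] (arbitrary for this toy)
ACTIONS = [-1.0, 0.0, 1.0]         # discrete voltage increments [V]

def plant(speed, voltage, action):
    """Toy first-order stand-in for the motor: NOT a full PMSM dq model."""
    voltage = max(0.0, min(10.0, voltage + action))
    speed += (DT / TAU) * (GAIN * voltage - speed)
    return speed, voltage

def state(speed, voltage):
    """Discretise (speed error, voltage) so a Q-table can index it."""
    err = max(-10, min(10, int(TARGET - speed)))
    return (err, int(voltage))

def train(episodes=300, steps=200, alpha=0.2, gamma=0.95, eps=0.1, seed=0):
    """Epsilon-greedy tabular Q-learning on the speed-tracking task."""
    rng = random.Random(seed)
    q = {}                                      # (state, action index) -> value
    for _ in range(episodes):
        speed = voltage = 0.0
        for _ in range(steps):
            s = state(speed, voltage)
            if rng.random() < eps:              # explore
                a = rng.randrange(len(ACTIONS))
            else:                               # exploit current estimates
                a = max(range(len(ACTIONS)), key=lambda i: q.get((s, i), 0.0))
            speed, voltage = plant(speed, voltage, ACTIONS[a])
            r = -(TARGET - speed) ** 2          # reward: penalise tracking error
            s2 = state(speed, voltage)
            best = max(q.get((s2, i), 0.0) for i in range(len(ACTIONS)))
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + alpha * (r + gamma * best - old)
    return q

def rollout(q, steps=300):
    """Run the greedy learned policy from standstill; return the final speed."""
    speed = voltage = 0.0
    for _ in range(steps):
        s = state(speed, voltage)
        a = max(range(len(ACTIONS)), key=lambda i: q.get((s, i), 0.0))
        speed, voltage = plant(speed, voltage, ACTIONS[a])
    return speed
```

A DQN replaces the dictionary `q` with a neural network over a continuous state, and DDPG/TD3 further replace the discrete action set with a continuous actor, which is why the review treats them as the natural fit for continuous voltage or current commands.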
Source journal: Annual Reviews in Control (Engineering Technology - Automation & Control Systems)
CiteScore: 19.00
Self-citation rate: 2.10%
Articles per year: 53
Review time: 36 days
About the journal: The field of Control is changing rapidly, driven by technology-led "societal grand challenges" and the deployment of new digital technologies. The aim of Annual Reviews in Control is to provide comprehensive and visionary views of the field of Control by publishing the following types of review articles. Survey Article: review papers on major methodologies or technical advances that add considerable technical value to the state of the art; papers that rely purely on mechanistic searches and lack a comprehensive analysis providing a clear contribution to the field will be rejected. Vision Article: cutting-edge and emerging topics with a visionary perspective on the future of the field or on how it will bridge multiple disciplines. Tutorial Research Article: fundamental guides for future studies.