{"title":"深度强化学习在永磁同步电机电流跟踪和速度控制中的综合评价","authors":"Yiming Zhang , Jingxiang Li , Hao Zhou , Chin-Boon Chng , Chee-Kong Chui , Shengdun Zhao","doi":"10.1016/j.engappai.2025.110551","DOIUrl":null,"url":null,"abstract":"<div><div>Permanent Magnet Synchronous Motors (PMSMs) are indispensable in industrial applications, requiring precise control to ensure optimal performance. Traditional model-based methods, such as Proportional-Integral (PI) control and Model Predictive Control (MPC), face inherent limitations in robustness and adaptability under complex conditions. Deep Reinforcement Learning (DRL), as a model-free, data-driven approach, offers a transformative solution for PMSM control. This study proposes a DRL-based current control strategy and systematically evaluates the performance of three representative DRL algorithms: Deep Q-Network (DQN), Proximal Policy Optimization (PPO), and Advantage Actor-Critic (A2C) in PMSM control tasks. Key contributions include hyperparameter sensitivity analysis, transfer learning for improved training efficiency, and the application of DRL to multi-objective speed control under varying operational scenarios. Experimental results reveal the hyperparameter sensitivities of different DRL algorithms and provide theoretical insights. The findings demonstrate that transfer learning significantly improves DRL training efficiency and control performance. DRL outperforms traditional controllers in current and speed control, achieving superior dynamic response, tracking accuracy, and adaptability to complex conditions. 
This study offers new insights into the application of DRL in industrial PMSM control and serves as a reference for its further optimization and practical deployment.</div></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":"149 ","pages":"Article 110551"},"PeriodicalIF":8.0000,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Comprehensive evaluation of deep reinforcement learning for permanent magnet synchronous motor current tracking and speed control applications\",\"authors\":\"Yiming Zhang , Jingxiang Li , Hao Zhou , Chin-Boon Chng , Chee-Kong Chui , Shengdun Zhao\",\"doi\":\"10.1016/j.engappai.2025.110551\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Permanent Magnet Synchronous Motors (PMSMs) are indispensable in industrial applications, requiring precise control to ensure optimal performance. Traditional model-based methods, such as Proportional-Integral (PI) control and Model Predictive Control (MPC), face inherent limitations in robustness and adaptability under complex conditions. Deep Reinforcement Learning (DRL), as a model-free, data-driven approach, offers a transformative solution for PMSM control. This study proposes a DRL-based current control strategy and systematically evaluates the performance of three representative DRL algorithms: Deep Q-Network (DQN), Proximal Policy Optimization (PPO), and Advantage Actor-Critic (A2C) in PMSM control tasks. Key contributions include hyperparameter sensitivity analysis, transfer learning for improved training efficiency, and the application of DRL to multi-objective speed control under varying operational scenarios. Experimental results reveal the hyperparameter sensitivities of different DRL algorithms and provide theoretical insights. 
The findings demonstrate that transfer learning significantly improves DRL training efficiency and control performance. DRL outperforms traditional controllers in current and speed control, achieving superior dynamic response, tracking accuracy, and adaptability to complex conditions. This study offers new insights into the application of DRL in industrial PMSM control and serves as a reference for its further optimization and practical deployment.</div></div>\",\"PeriodicalId\":50523,\"journal\":{\"name\":\"Engineering Applications of Artificial Intelligence\",\"volume\":\"149 \",\"pages\":\"Article 110551\"},\"PeriodicalIF\":8.0000,\"publicationDate\":\"2025-03-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Engineering Applications of Artificial Intelligence\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0952197625005512\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Engineering Applications of Artificial Intelligence","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0952197625005512","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Comprehensive evaluation of deep reinforcement learning for permanent magnet synchronous motor current tracking and speed control applications
Permanent Magnet Synchronous Motors (PMSMs) are indispensable in industrial applications, requiring precise control to ensure optimal performance. Traditional model-based methods, such as Proportional-Integral (PI) control and Model Predictive Control (MPC), face inherent limitations in robustness and adaptability under complex conditions. Deep Reinforcement Learning (DRL), as a model-free, data-driven approach, offers a transformative solution for PMSM control. This study proposes a DRL-based current control strategy and systematically evaluates the performance of three representative DRL algorithms: Deep Q-Network (DQN), Proximal Policy Optimization (PPO), and Advantage Actor-Critic (A2C) in PMSM control tasks. Key contributions include hyperparameter sensitivity analysis, transfer learning for improved training efficiency, and the application of DRL to multi-objective speed control under varying operational scenarios. Experimental results reveal the hyperparameter sensitivities of different DRL algorithms and provide theoretical insights. The findings demonstrate that transfer learning significantly improves DRL training efficiency and control performance. DRL outperforms traditional controllers in current and speed control, achieving superior dynamic response, tracking accuracy, and adaptability to complex conditions. This study offers new insights into the application of DRL in industrial PMSM control and serves as a reference for its further optimization and practical deployment.
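To make the current-tracking task concrete, the sketch below shows a minimal dq-frame PMSM current-control environment of the kind a DQN-style agent (with a discrete voltage action set) could be trained on. All motor parameters, the voltage grid, and the quadratic tracking reward are illustrative assumptions, not the setup reported in the paper.

```python
import numpy as np

class PMSMCurrentEnv:
    """Minimal dq-frame PMSM current-tracking environment (Euler-discretized).

    Illustrative sketch only: parameter values and the reward shaping are
    assumptions for demonstration, not the paper's experimental setup.
    """

    def __init__(self, dt=1e-4, R=0.5, Ld=1e-3, Lq=1.2e-3,
                 psi_f=0.1, omega=100.0):
        self.dt, self.R, self.Ld, self.Lq = dt, R, Ld, Lq
        self.psi_f, self.omega = psi_f, omega
        # Discrete action set: a coarse 3x3 grid of (v_d, v_q) voltage pairs,
        # the kind of discretization a DQN agent requires.
        levels = np.array([-10.0, 0.0, 10.0])
        self.actions = [(vd, vq) for vd in levels for vq in levels]
        self.reset()

    def reset(self, id_ref=0.0, iq_ref=5.0):
        self.i = np.zeros(2)                   # state: [i_d, i_q]
        self.ref = np.array([id_ref, iq_ref])  # current references
        return self._obs()

    def _obs(self):
        # Observation: currents plus tracking errors.
        return np.concatenate([self.i, self.ref - self.i])

    def step(self, a):
        vd, vq = self.actions[a]
        i_d, i_q = self.i
        # dq-axis current dynamics with cross-coupling and back-EMF terms.
        did = (vd - self.R * i_d + self.omega * self.Lq * i_q) / self.Ld
        diq = (vq - self.R * i_q - self.omega * self.Ld * i_d
               - self.omega * self.psi_f) / self.Lq
        self.i = self.i + self.dt * np.array([did, diq])
        err = self.ref - self.i
        reward = -float(err @ err)  # penalize squared tracking error
        return self._obs(), reward

env = PMSMCurrentEnv()
obs = env.reset()
obs, r = env.step(8)  # apply the (+10 V, +10 V) action
```

A learned policy would map the 4-dimensional observation to one of the 9 voltage actions so as to maximize the cumulative (negative) tracking error, which is the model-free counterpart of tuning a PI current loop.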
Journal introduction:
Artificial Intelligence (AI) is pivotal in driving the fourth industrial revolution, witnessing remarkable advancements across various machine learning methodologies. AI techniques have become indispensable tools for practicing engineers, enabling them to tackle previously insurmountable challenges. Engineering Applications of Artificial Intelligence serves as a global platform for the swift dissemination of research elucidating the practical application of AI methods across all engineering disciplines. Submitted papers are expected to present novel aspects of AI utilized in real-world engineering applications, validated using publicly available datasets to ensure the replicability of research outcomes. Join us in exploring the transformative potential of AI in engineering.