Precision Cylinder Gluing With Uncertainty-Aware MPC-Enhanced DDPG
Liangshun Wu; Junsuo Qu
IEEE Open Journal of Control Systems, vol. 4, pp. 130-143, March 2025
DOI: 10.1109/OJCSYS.2025.3566323
https://ieeexplore.ieee.org/document/10981597/
Citations: 0
Abstract
This paper presents an uncertainty-aware optimization method for high-precision servo control in automotive dosing-cylinder gluing. A comprehensive system model captures the interdependent dynamics of the mechanical, hydraulic, and servo-motor subsystems, and the control problem is formulated as a Markov Decision Process (MDP). By coupling Deep Deterministic Policy Gradient (DDPG) reinforcement learning with Model Predictive Control (MPC), the approach combines MPC's constrained optimization with DDPG's adaptive learning, improving resilience to uncertainties. The DDPG actor refines the MPC baseline action, while an uncertainty analysis embedded in the MPC objective anticipates future variations; the critic evaluates Q-values with uncertainty feedback. Simulations and real-world tests confirm the method's stability, precision, and reliability for high-precision industrial gluing.
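The control structure summarized in the abstract — a DDPG actor that refines an MPC baseline action, and a critic that scores the result with an uncertainty penalty — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the one-step toy plant, the grid-search "MPC", the linear actor, and the uncertainty-penalty weight `beta` are all assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mpc_baseline(state):
    """Toy stand-in for the MPC layer: pick the action minimizing a
    one-step quadratic tracking cost over a coarse action grid
    (the paper's MPC optimizes over a full prediction horizon)."""
    candidates = np.linspace(-1.0, 1.0, 21)
    costs = (state + candidates) ** 2  # cost after one step of x' = x + a
    return candidates[np.argmin(costs)]

class Actor:
    """DDPG-style actor producing a bounded residual correction that is
    added to the MPC baseline action (weights untrained here)."""
    def __init__(self, dim=1):
        self.w = rng.normal(scale=0.1, size=dim)

    def correction(self, state):
        return float(np.tanh(self.w @ np.atleast_1d(state)))

class Critic:
    """Critic scoring (state, action) pairs; an uncertainty estimate is
    subtracted as a pessimism penalty (beta is an assumed weight)."""
    def __init__(self, beta=0.5):
        self.beta = beta

    def q_value(self, state, action, uncertainty):
        reward_proxy = -(state + action) ** 2  # negative tracking error
        return reward_proxy - self.beta * uncertainty

state = 0.8
actor, critic = Actor(), Critic()

a_mpc = mpc_baseline(state)                                  # baseline
a = float(np.clip(a_mpc + actor.correction(state), -1.0, 1.0))  # refined
q = critic.q_value(state, a, uncertainty=0.1)                # pessimistic Q
print(a_mpc, a, q)
```

The design choice mirrored here is the residual structure: the learned policy only perturbs a model-based baseline, so even an untrained actor inherits the MPC action's rough correctness, while the critic's uncertainty term biases evaluation toward conservative actions.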