{"title":"Regularizing Model Predictive Control for pixel-based long-horizon tasks","authors":"Yao-Hui Li, Feng Zhang, Qiang Hua, Chun-Ru Dong","doi":"10.1016/j.asoc.2025.113377","DOIUrl":null,"url":null,"abstract":"<div><div>Planning has been proven to be an effective strategy for dealing with complex tasks in environments. However, due to the constraints of computational budget and the accumulated model biases, planning for pixel-based long horizon tasks with limited samples remains a great challenge. To address this issue, a <strong>R</strong>egularized <strong>M</strong>odel <strong>P</strong>redictive <strong>C</strong>ontrol (<strong>RMPC</strong>) was proposed in this study. RMPC performs trajectory optimization using short-term reward estimates and long-term return estimates, which avoids the high burden of long-horizon planning. Additionally, an implicit regularization mechanism is employed to improve the robustness of the generated environment model and reliability of the value function estimation, which helps to reduce the risk of accumulated model biases. Extensive comparison experiments and ablation studies are performed on the benchmark datasets for evaluating the proposed RMPC. And empirical results show that RMPC outperforms the previous SOTA algorithms in terms of sample-efficiency (20.88% performance improvement) and model stability (56.39% standard deviation reduction) on pixel-based continuous control tasks from DMControl-100k benchmark. 
Our code is available at: <span>https://github.com/Arya87/RMPC</span>.</div></div>","PeriodicalId":50737,"journal":{"name":"Applied Soft Computing","volume":"181 ","pages":"Article 113377"},"PeriodicalIF":7.2000,"publicationDate":"2025-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Soft Computing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S156849462500688X","RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Planning has proven to be an effective strategy for dealing with complex control tasks. However, due to constraints on the computational budget and accumulated model biases, planning for pixel-based long-horizon tasks with limited samples remains a great challenge. To address this issue, this study proposes Regularized Model Predictive Control (RMPC). RMPC performs trajectory optimization using short-term reward estimates combined with long-term return estimates, which avoids the high burden of long-horizon planning. Additionally, an implicit regularization mechanism is employed to improve the robustness of the learned environment model and the reliability of the value function estimation, which helps to reduce the risk of accumulated model biases. Extensive comparison experiments and ablation studies were performed on benchmark tasks to evaluate the proposed RMPC. Empirical results show that RMPC outperforms previous state-of-the-art algorithms in terms of sample efficiency (20.88% performance improvement) and model stability (56.39% standard deviation reduction) on pixel-based continuous control tasks from the DMControl-100k benchmark. Our code is available at: https://github.com/Arya87/RMPC.
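The planning scheme the abstract describes — short-horizon trajectory optimization whose objective sums near-term reward estimates and a discounted long-term value estimate at the final state — can be sketched as follows. This is a minimal random-shooting illustration on a toy 1-D environment under stated assumptions: the function names, the shooting-style optimizer, the toy dynamics/reward/value models, and all hyperparameters are hypothetical, not the paper's actual pixel-based implementation.

```python
import numpy as np

def plan_short_horizon(state, dynamics, reward, value,
                       horizon=5, n_candidates=256, gamma=0.99, seed=0):
    """Score random candidate action sequences by short-term rewards plus a
    discounted terminal value estimate; return the first action of the best
    sequence (the MPC step)."""
    rng = np.random.default_rng(seed)
    candidates = rng.uniform(-1.0, 1.0, size=(n_candidates, horizon))
    best_return, best_action = -np.inf, 0.0
    for seq in candidates:
        s, ret = state, 0.0
        for t, a in enumerate(seq):
            ret += gamma ** t * reward(s, a)      # short-term reward estimates
            s = dynamics(s, a)                    # roll the model forward
        ret += gamma ** horizon * value(s)        # long-term return estimate
        if ret > best_return:
            best_return, best_action = ret, seq[0]
    return best_action

# Toy 1-D setting: drive the state toward 0; the "learned" value function is a
# crude infinite-horizon guess consistent with the quadratic reward.
dynamics = lambda s, a: s + 0.1 * a
reward = lambda s, a: -s * s
value = lambda s: -s * s / (1.0 - 0.99)

a0 = plan_short_horizon(1.0, dynamics, reward, value)
```

Because the terminal value term stands in for the truncated remainder of the return, the planner only needs to search over a short horizon, which is the cost-saving the abstract refers to.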
About the journal:
Applied Soft Computing is an international journal promoting an integrated view of soft computing to solve real-life problems. The focus is on publishing the highest-quality research on the application and convergence of Fuzzy Logic, Neural Networks, Evolutionary Computing, Rough Sets, and other similar techniques to address real-world complexities.
Applied Soft Computing is a rolling publication: articles are published as soon as the editor-in-chief has accepted them. The website is therefore updated continuously with new articles, and publication times are short.