Shengrong Gong, Yi Wang, Xin Du, Yuya Sun, Lifan Zhou, Shan Zhong
{"title":"耦合流作为基于模型的策略优化的指导","authors":"Shengrong Gong , Yi Wang , Xin Du , Yuya Sun , Lifan Zhou , Shan Zhong","doi":"10.1016/j.engappai.2025.111528","DOIUrl":null,"url":null,"abstract":"<div><div>Model-based reinforcement learning (MBRL) offers high sample efficiency but suffers from cumulative multi-step prediction errors that degrade long-term performance. To address this, we propose a coupled flows-guided policy optimization framework, where two coupled flows quantify and minimize the discrepancy between the true and learned state–action distributions. By reducing this divergence, the loss functions serve as both a discriminator, selecting more accurate rollouts for policy learning, and a reward signal, refining the dynamics model to mitigate multi-step errors. Theoretical analysis establishes a bound on the expected return discrepancy. Empirical evaluations demonstrate that our method achieves higher cumulative rewards than the representative model-based approaches across diverse control tasks. This highlights its applicability in data-scarce domains such as robotics, recommendation systems, and autonomous driving.</div></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":"159 ","pages":"Article 111528"},"PeriodicalIF":8.0000,"publicationDate":"2025-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Coupled flows as guidance for model-based policy optimization\",\"authors\":\"Shengrong Gong , Yi Wang , Xin Du , Yuya Sun , Lifan Zhou , Shan Zhong\",\"doi\":\"10.1016/j.engappai.2025.111528\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Model-based reinforcement learning (MBRL) offers high sample efficiency but suffers from cumulative multi-step prediction errors that degrade long-term performance. To address this, we propose a coupled flows-guided policy optimization framework, where two coupled flows quantify and minimize the discrepancy between the true and learned state–action distributions. By reducing this divergence, the loss functions serve as both a discriminator, selecting more accurate rollouts for policy learning, and a reward signal, refining the dynamics model to mitigate multi-step errors. Theoretical analysis establishes a bound on the expected return discrepancy. Empirical evaluations demonstrate that our method achieves higher cumulative rewards than the representative model-based approaches across diverse control tasks. 
This highlights its applicability in data-scarce domains such as robotics, recommendation systems, and autonomous driving.</div></div>\",\"PeriodicalId\":50523,\"journal\":{\"name\":\"Engineering Applications of Artificial Intelligence\",\"volume\":\"159 \",\"pages\":\"Article 111528\"},\"PeriodicalIF\":8.0000,\"publicationDate\":\"2025-07-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Engineering Applications of Artificial Intelligence\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0952197625015301\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Engineering Applications of Artificial Intelligence","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0952197625015301","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Coupled flows as guidance for model-based policy optimization
Model-based reinforcement learning (MBRL) offers high sample efficiency but suffers from cumulative multi-step prediction errors that degrade long-term performance. To address this, we propose a coupled flows-guided policy optimization framework, where two coupled flows quantify and minimize the discrepancy between the true and learned state–action distributions. By reducing this divergence, the loss functions serve as both a discriminator, selecting more accurate rollouts for policy learning, and a reward signal, refining the dynamics model to mitigate multi-step errors. Theoretical analysis establishes a bound on the expected return discrepancy. Empirical evaluations demonstrate that our method achieves higher cumulative rewards than representative model-based approaches across diverse control tasks. This highlights its applicability to data-scarce domains such as robotics, recommendation systems, and autonomous driving.
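The abstract gives no implementation details, but the two roles it describes (a discriminator that filters model rollouts and a reward signal that refines the dynamics model, both derived from a log-density gap between a flow fit to real data and a flow fit to model data) can be sketched concretely. The PyTorch sketch below is a hypothetical illustration under stated assumptions, not the authors' method: the RealNVP-style coupling layers, the even-dimensional state–action vectors, the keep_frac heuristic, and all names (AffineCoupling, Flow, discrepancy, select_rollouts, model_reward, fit_flow) are assumptions introduced for illustration.

```python
import torch
import torch.nn as nn


class AffineCoupling(nn.Module):
    """RealNVP-style coupling layer: one half of the (even-dimensional)
    input conditions an affine transform of the other half."""

    def __init__(self, dim: int, hidden: int = 64, flip: bool = False):
        super().__init__()
        self.flip = flip
        self.net = nn.Sequential(
            nn.Linear(dim // 2, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),  # scale and shift, dim // 2 each
        )

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)          # assumes dim is even
        if self.flip:
            x1, x2 = x2, x1                  # alternate which half is transformed
        s, t = self.net(x1).chunk(2, dim=-1)
        s = torch.tanh(s)                    # keep scales numerically tame
        y2 = x2 * torch.exp(s) + t
        y = torch.cat((y2, x1) if self.flip else (x1, y2), dim=-1)
        return y, s.sum(-1)                  # transformed input, log|det J|


class Flow(nn.Module):
    """Stack of coupling layers with a standard-normal base density."""

    def __init__(self, dim: int, n_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList(
            [AffineCoupling(dim, flip=i % 2 == 1) for i in range(n_layers)])

    def log_prob(self, x):
        logdet = torch.zeros(x.shape[0])
        for layer in self.layers:
            x, ld = layer(x)
            logdet = logdet + ld
        base = torch.distributions.Normal(0.0, 1.0).log_prob(x).sum(-1)
        return base + logdet                 # change-of-variables formula


def fit_flow(flow, data, steps=200, lr=1e-3):
    """Maximum-likelihood training of one flow on a batch of (s, a) rows."""
    opt = torch.optim.Adam(flow.parameters(), lr=lr)
    for _ in range(steps):
        loss = -flow.log_prob(data).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()


def discrepancy(flow_true, flow_model, sa):
    """Pointwise log-density gap on state-action pairs; its expectation
    approximates a KL-style divergence between the two distributions."""
    return flow_true.log_prob(sa) - flow_model.log_prob(sa)


def select_rollouts(rollouts, flow_true, flow_model, keep_frac=0.5):
    """Discriminator role: keep the model rollouts whose state-action
    pairs look most like real data (smallest mean absolute gap)."""
    scores = torch.stack([
        discrepancy(flow_true, flow_model, r).abs().mean() for r in rollouts])
    k = max(1, int(keep_frac * len(rollouts)))
    return [rollouts[int(i)] for i in torch.argsort(scores)[:k]]


def model_reward(flow_true, flow_model, sa):
    """Reward-signal role: reward the dynamics model for generating
    pairs the true-data flow rates as more likely than its own."""
    return discrepancy(flow_true, flow_model, sa).detach()
```

In an MBRL loop, flow_true would be refit periodically on the replay buffer of real transitions and flow_model on model-generated ones; select_rollouts then gates which imagined trajectories reach the policy learner, while model_reward supplies an auxiliary objective when updating the dynamics model.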
Journal introduction:
Artificial Intelligence (AI) is pivotal in driving the fourth industrial revolution, witnessing remarkable advancements across various machine learning methodologies. AI techniques have become indispensable tools for practicing engineers, enabling them to tackle previously insurmountable challenges. Engineering Applications of Artificial Intelligence serves as a global platform for the swift dissemination of research elucidating the practical application of AI methods across all engineering disciplines. Submitted papers are expected to present novel aspects of AI utilized in real-world engineering applications, validated using publicly available datasets to ensure the replicability of research outcomes. Join us in exploring the transformative potential of AI in engineering.