{"title":"采用动态编程和过渡可行性的机器人控制稳定扩散模型","authors":"Haoran Li;Yaocheng Zhang;Haowei Wen;Yuanheng Zhu;Dongbin Zhao","doi":"10.1109/TAI.2024.3387401","DOIUrl":null,"url":null,"abstract":"Due to its strong ability in distribution representation, the diffusion model has been incorporated into offline reinforcement learning (RL) to cover diverse trajectories of the complex behavior policy. However, this also causes several challenges. Training the diffusion model to imitate behavior from the collected trajectories suffers from limited stitching capability which derives better policies from suboptimal trajectories. Furthermore, the inherent randomness of the diffusion model can lead to unpredictable control and dangerous behavior for the robot. To address these concerns, we propose the value-learning-based decision diffuser (V-DD), which consists of the trajectory diffusion module (TDM) and the trajectory evaluation module (TEM). During the training process, the TDM combines the state-value and classifier-free guidance to bolster the ability to stitch suboptimal trajectories. During the inference process, we design the TEM to select a feasible trajectory generated by the diffusion model. Empirical results demonstrate that our method delivers competitive results on the D4RL benchmark and substantially outperforms current diffusion model-based methods on the real-world robot task.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"5 9","pages":"4585-4594"},"PeriodicalIF":0.0000,"publicationDate":"2024-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Stabilizing Diffusion Model for Robotic Control With Dynamic Programming and Transition Feasibility\",\"authors\":\"Haoran Li;Yaocheng Zhang;Haowei Wen;Yuanheng Zhu;Dongbin Zhao\",\"doi\":\"10.1109/TAI.2024.3387401\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Due to its strong ability in distribution representation, the diffusion model has been incorporated into offline reinforcement learning (RL) to cover diverse trajectories of the complex behavior policy. However, this also causes several challenges. Training the diffusion model to imitate behavior from the collected trajectories suffers from limited stitching capability which derives better policies from suboptimal trajectories. Furthermore, the inherent randomness of the diffusion model can lead to unpredictable control and dangerous behavior for the robot. To address these concerns, we propose the value-learning-based decision diffuser (V-DD), which consists of the trajectory diffusion module (TDM) and the trajectory evaluation module (TEM). During the training process, the TDM combines the state-value and classifier-free guidance to bolster the ability to stitch suboptimal trajectories. During the inference process, we design the TEM to select a feasible trajectory generated by the diffusion model. 
Empirical results demonstrate that our method delivers competitive results on the D4RL benchmark and substantially outperforms current diffusion model-based methods on the real-world robot task.\",\"PeriodicalId\":73305,\"journal\":{\"name\":\"IEEE transactions on artificial intelligence\",\"volume\":\"5 9\",\"pages\":\"4585-4594\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-04-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on artificial intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10496464/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on artificial intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10496464/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Owing to its strong ability to represent complex distributions, the diffusion model has been incorporated into offline reinforcement learning (RL) to cover the diverse trajectories of complex behavior policies. However, this also introduces several challenges. Training the diffusion model to imitate behavior from the collected trajectories offers limited stitching capability, i.e., the ability to derive better policies from suboptimal trajectories. Furthermore, the inherent randomness of the diffusion model can lead to unpredictable control and dangerous behavior on a robot. To address these concerns, we propose the value-learning-based decision diffuser (V-DD), which consists of a trajectory diffusion module (TDM) and a trajectory evaluation module (TEM). During training, the TDM combines state-value learning with classifier-free guidance to strengthen the ability to stitch suboptimal trajectories. During inference, the TEM selects a feasible trajectory from those generated by the diffusion model. Empirical results demonstrate that our method delivers competitive results on the D4RL benchmark and substantially outperforms current diffusion-model-based methods on a real-world robot task.
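The abstract describes two mechanisms: value-conditioned trajectory generation with classifier-free guidance (the role of the TDM) and inference-time selection of the most feasible candidate trajectory (the role of the TEM). The sketch below illustrates these two general ideas only; it is not the authors' V-DD implementation, and the network stubs, the simplified denoising update, and the dynamics-error feasibility score are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of (1) classifier-free-guided sampling with a
# value condition and (2) picking the candidate trajectory whose transitions best match
# a learned dynamics model. Shapes, modules, and hyperparameters are placeholders.
import torch
import torch.nn as nn

class DenoiserStub(nn.Module):
    """Toy noise-prediction network eps_theta(x_t, t, cond)."""
    def __init__(self, traj_dim, cond_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(traj_dim + 1 + cond_dim, 256), nn.ReLU(),
            nn.Linear(256, traj_dim),
        )
    def forward(self, x_t, t, cond):
        t_feat = t.float().unsqueeze(-1) / 1000.0
        return self.net(torch.cat([x_t, t_feat, cond], dim=-1))

@torch.no_grad()
def cfg_sample(denoiser, cond, traj_dim, n_steps=50, guidance_w=2.0, n_samples=8):
    """Reverse diffusion with classifier-free guidance:
    eps = eps_uncond + w * (eps_cond - eps_uncond).
    The update rule is a crude stand-in; a real sampler uses a proper noise schedule."""
    x = torch.randn(n_samples, traj_dim)
    null_cond = torch.zeros_like(cond).expand(n_samples, -1)   # "unconditional" token
    cond = cond.expand(n_samples, -1)
    for step in reversed(range(n_steps)):
        t = torch.full((n_samples,), step)
        eps_c = denoiser(x, t, cond)
        eps_u = denoiser(x, t, null_cond)
        eps = eps_u + guidance_w * (eps_c - eps_u)
        x = x - eps / n_steps
        if step > 0:
            x = x + 0.01 * torch.randn_like(x)
    return x

@torch.no_grad()
def select_feasible(trajs, dynamics, state_dim, act_dim):
    """Score each candidate by how well consecutive transitions agree with a learned
    dynamics model and return the most feasible one (the role the abstract gives the TEM)."""
    n = trajs.shape[0]
    horizon = trajs.shape[1] // (state_dim + act_dim)
    trajs = trajs.view(n, horizon, state_dim + act_dim)
    s, a = trajs[..., :state_dim], trajs[..., state_dim:]
    pred_next = dynamics(torch.cat([s[:, :-1], a[:, :-1]], dim=-1))
    err = ((pred_next - s[:, 1:]) ** 2).mean(dim=(1, 2))   # per-trajectory infeasibility
    return trajs[err.argmin()]

if __name__ == "__main__":
    state_dim, act_dim, horizon = 4, 2, 10
    traj_dim = horizon * (state_dim + act_dim)
    denoiser = DenoiserStub(traj_dim, cond_dim=1)
    dynamics = nn.Linear(state_dim + act_dim, state_dim)   # stand-in for a learned model
    target_value = torch.tensor([[1.0]])                   # desired (normalized) value/return
    candidates = cfg_sample(denoiser, target_value, traj_dim)
    best = select_feasible(candidates, dynamics, state_dim, act_dim)
    print(best.shape)   # one (horizon, state_dim + act_dim) trajectory
```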