{"title":"曼巴政策:利用混合选择性状态模型实现高效的 3D 扩散策略","authors":"Jiahang Cao, Qiang Zhang, Jingkai Sun, Jiaxu Wang, Hao Cheng, Yulin Li, Jun Ma, Yecheng Shao, Wen Zhao, Gang Han, Yijie Guo, Renjing Xu","doi":"arxiv-2409.07163","DOIUrl":null,"url":null,"abstract":"Diffusion models have been widely employed in the field of 3D manipulation\ndue to their efficient capability to learn distributions, allowing for precise\nprediction of action trajectories. However, diffusion models typically rely on\nlarge parameter UNet backbones as policy networks, which can be challenging to\ndeploy on resource-constrained devices. Recently, the Mamba model has emerged\nas a promising solution for efficient modeling, offering low computational\ncomplexity and strong performance in sequence modeling. In this work, we\npropose the Mamba Policy, a lighter but stronger policy that reduces the\nparameter count by over 80% compared to the original policy network while\nachieving superior performance. Specifically, we introduce the XMamba Block,\nwhich effectively integrates input information with conditional features and\nleverages a combination of Mamba and Attention mechanisms for deep feature\nextraction. Extensive experiments demonstrate that the Mamba Policy excels on\nthe Adroit, Dexart, and MetaWorld datasets, requiring significantly fewer\ncomputational resources. Additionally, we highlight the Mamba Policy's enhanced\nrobustness in long-horizon scenarios compared to baseline methods and explore\nthe performance of various Mamba variants within the Mamba Policy framework.\nOur project page is in https://andycao1125.github.io/mamba_policy/.","PeriodicalId":501031,"journal":{"name":"arXiv - CS - Robotics","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Mamba Policy: Towards Efficient 3D Diffusion Policy with Hybrid Selective State Models\",\"authors\":\"Jiahang Cao, Qiang Zhang, Jingkai Sun, Jiaxu Wang, Hao Cheng, Yulin Li, Jun Ma, Yecheng Shao, Wen Zhao, Gang Han, Yijie Guo, Renjing Xu\",\"doi\":\"arxiv-2409.07163\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Diffusion models have been widely employed in the field of 3D manipulation\\ndue to their efficient capability to learn distributions, allowing for precise\\nprediction of action trajectories. However, diffusion models typically rely on\\nlarge parameter UNet backbones as policy networks, which can be challenging to\\ndeploy on resource-constrained devices. Recently, the Mamba model has emerged\\nas a promising solution for efficient modeling, offering low computational\\ncomplexity and strong performance in sequence modeling. In this work, we\\npropose the Mamba Policy, a lighter but stronger policy that reduces the\\nparameter count by over 80% compared to the original policy network while\\nachieving superior performance. Specifically, we introduce the XMamba Block,\\nwhich effectively integrates input information with conditional features and\\nleverages a combination of Mamba and Attention mechanisms for deep feature\\nextraction. Extensive experiments demonstrate that the Mamba Policy excels on\\nthe Adroit, Dexart, and MetaWorld datasets, requiring significantly fewer\\ncomputational resources. 
Additionally, we highlight the Mamba Policy's enhanced\\nrobustness in long-horizon scenarios compared to baseline methods and explore\\nthe performance of various Mamba variants within the Mamba Policy framework.\\nOur project page is in https://andycao1125.github.io/mamba_policy/.\",\"PeriodicalId\":501031,\"journal\":{\"name\":\"arXiv - CS - Robotics\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Robotics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.07163\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Robotics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.07163","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Mamba Policy: Towards Efficient 3D Diffusion Policy with Hybrid Selective State Models
Diffusion models have been widely employed in the field of 3D manipulation
due to their ability to efficiently learn distributions, allowing for precise
prediction of action trajectories. However, diffusion models typically rely on
large-parameter UNet backbones as policy networks, which can be challenging to
deploy on resource-constrained devices. Recently, the Mamba model has emerged
as a promising solution for efficient modeling, offering low computational
complexity and strong performance in sequence modeling. In this work, we
propose the Mamba Policy, a lighter but stronger policy that reduces the
parameter count by over 80% compared to the original policy network while
achieving superior performance. Specifically, we introduce the XMamba Block,
which effectively integrates input information with conditional features and
leverages a combination of Mamba and Attention mechanisms for deep feature
extraction. Extensive experiments demonstrate that the Mamba Policy excels on
the Adroit, Dexart, and MetaWorld datasets, requiring significantly fewer
computational resources. Additionally, we highlight the Mamba Policy's enhanced
robustness in long-horizon scenarios compared to baseline methods and explore
the performance of various Mamba variants within the Mamba Policy framework.
Our project page is available at https://andycao1125.github.io/mamba_policy/.
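
To make the abstract's description of the XMamba Block more concrete, the sketch below shows one plausible way such a block could integrate conditional features with the input and combine a Mamba (selective state-space) layer with attention. This is a minimal illustration inferred from the abstract alone, not the authors' released implementation: the class name, the FiLM-style conditioning, the residual layout, and all hyperparameters are assumptions, and the Mamba layer is assumed to come from the open-source mamba_ssm package.

```python
# Illustrative sketch of an "XMamba-style" block; structure and names are
# assumptions based on the abstract, not the official Mamba Policy code.
import torch
import torch.nn as nn
from mamba_ssm import Mamba  # pip install mamba-ssm (requires CUDA)


class XMambaBlockSketch(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        # Project the conditioning vector (e.g. diffusion timestep plus a 3D
        # observation embedding) into per-channel scale and shift terms.
        self.cond_proj = nn.Linear(d_model, 2 * d_model)
        self.norm1 = nn.LayerNorm(d_model)
        # Selective state-space layer for linear-time sequence mixing.
        self.mamba = Mamba(d_model=d_model, d_state=16, d_conv=4, expand=2)
        self.norm2 = nn.LayerNorm(d_model)
        # Attention layer for additional feature refinement.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # x:    (batch, horizon, d_model)  noisy action-trajectory tokens
        # cond: (batch, d_model)           conditional features
        scale, shift = self.cond_proj(cond).unsqueeze(1).chunk(2, dim=-1)
        h = self.norm1(x) * (1 + scale) + shift   # inject the condition (FiLM-style)
        h = x + self.mamba(h)                     # state-space mixing, residual
        n = self.norm2(h)
        a, _ = self.attn(n, n, n)                 # self-attention refinement
        return h + a
```

In this sketch the Mamba layer handles sequence mixing over the action horizon at low computational cost, while a single attention layer refines the features; the actual Mamba Policy may order, condition, or parameterize these components differently.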