GPUDrive: Data-driven, multi-agent driving simulation at 1 million FPS

Saman Kazemkhani, Aarav Pandya, Daphne Cornelisse, Brennan Shacklett, Eugene Vinitsky

arXiv:2408.01584 · arXiv - CS - Performance · 2024-08-02
{"title":"GPUDrive:每秒 100 万次的数据驱动多代理驾驶模拟","authors":"Saman Kazemkhani, Aarav Pandya, Daphne Cornelisse, Brennan Shacklett, Eugene Vinitsky","doi":"arxiv-2408.01584","DOIUrl":null,"url":null,"abstract":"Multi-agent learning algorithms have been successful at generating superhuman\nplanning in a wide variety of games but have had little impact on the design of\ndeployed multi-agent planners. A key bottleneck in applying these techniques to\nmulti-agent planning is that they require billions of steps of experience. To\nenable the study of multi-agent planning at this scale, we present GPUDrive, a\nGPU-accelerated, multi-agent simulator built on top of the Madrona Game Engine\nthat can generate over a million steps of experience per second. Observation,\nreward, and dynamics functions are written directly in C++, allowing users to\ndefine complex, heterogeneous agent behaviors that are lowered to\nhigh-performance CUDA. We show that using GPUDrive we are able to effectively\ntrain reinforcement learning agents over many scenes in the Waymo Motion\ndataset, yielding highly effective goal-reaching agents in minutes for\nindividual scenes and generally capable agents in a few hours. We ship these\ntrained agents as part of the code base at\nhttps://github.com/Emerge-Lab/gpudrive.","PeriodicalId":501291,"journal":{"name":"arXiv - CS - Performance","volume":"173 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"GPUDrive: Data-driven, multi-agent driving simulation at 1 million FPS\",\"authors\":\"Saman Kazemkhani, Aarav Pandya, Daphne Cornelisse, Brennan Shacklett, Eugene Vinitsky\",\"doi\":\"arxiv-2408.01584\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Multi-agent learning algorithms have been successful at generating superhuman\\nplanning in a wide variety of games but have had little impact on the design of\\ndeployed multi-agent planners. A key bottleneck in applying these techniques to\\nmulti-agent planning is that they require billions of steps of experience. To\\nenable the study of multi-agent planning at this scale, we present GPUDrive, a\\nGPU-accelerated, multi-agent simulator built on top of the Madrona Game Engine\\nthat can generate over a million steps of experience per second. Observation,\\nreward, and dynamics functions are written directly in C++, allowing users to\\ndefine complex, heterogeneous agent behaviors that are lowered to\\nhigh-performance CUDA. We show that using GPUDrive we are able to effectively\\ntrain reinforcement learning agents over many scenes in the Waymo Motion\\ndataset, yielding highly effective goal-reaching agents in minutes for\\nindividual scenes and generally capable agents in a few hours. 
We ship these\\ntrained agents as part of the code base at\\nhttps://github.com/Emerge-Lab/gpudrive.\",\"PeriodicalId\":501291,\"journal\":{\"name\":\"arXiv - CS - Performance\",\"volume\":\"173 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Performance\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2408.01584\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Performance","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.01584","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0
Abstract
Multi-agent learning algorithms have been successful at generating superhuman planning in a wide variety of games but have had little impact on the design of deployed multi-agent planners. A key bottleneck in applying these techniques to multi-agent planning is that they require billions of steps of experience. To enable the study of multi-agent planning at this scale, we present GPUDrive, a GPU-accelerated, multi-agent simulator built on top of the Madrona Game Engine that can generate over a million steps of experience per second.
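For intuition about where that throughput comes from, the sketch below (hypothetical C++, not GPUDrive's actual code) shows the batch structure: thousands of scenes advance in lockstep, so one step of the batch yields worlds-times-agents transitions. In a Madrona-style engine, a per-agent update like stepAgent is lowered to a single CUDA kernel over every (world, agent) pair rather than run serially as written here.

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Hypothetical per-agent state and action; GPUDrive's real components
    // live in the Madrona ECS and will differ.
    struct AgentState { float x, y, heading, speed; };
    struct Action     { float accel, steer; };

    // Kinematic bicycle step for one agent (an illustrative dynamics model,
    // not necessarily the one GPUDrive ships).
    inline void stepAgent(AgentState &s, const Action &a,
                          float dt, float wheelbase) {
        s.speed   += a.accel * dt;
        s.heading += (s.speed / wheelbase) * std::tan(a.steer) * dt;
        s.x       += s.speed * std::cos(s.heading) * dt;
        s.y       += s.speed * std::sin(s.heading) * dt;
    }

    // One logical step of the whole batch. On the GPU these two loops become
    // a single kernel launch; serial C++ is shown only to make the batch
    // structure explicit.
    void stepBatch(std::vector<std::vector<AgentState>> &worlds,
                   const std::vector<std::vector<Action>> &actions,
                   float dt = 0.1f, float wheelbase = 2.8f) {
        for (std::size_t w = 0; w < worlds.size(); ++w)
            for (std::size_t i = 0; i < worlds[w].size(); ++i)
                stepAgent(worlds[w][i], actions[w][i], dt, wheelbase);
    }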
Observation, reward, and dynamics functions are written directly in C++, allowing users to define complex, heterogeneous agent behaviors that are lowered to high-performance CUDA.
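Concretely, "written directly in C++" means users edit ordinary compiled functions rather than Python callbacks. Below is a minimal sketch of what a goal-conditioned observation and a sparse goal-reaching reward might look like; the names, types, and thresholds are hypothetical, not the repo's actual code.

    #include <array>
    #include <cmath>

    // Hypothetical types, as in the sketch above; the real ECS components differ.
    struct AgentState { float x, y, heading, speed; };
    struct Goal       { float x, y; };

    // Ego-frame view of the goal: range plus the bearing encoded as cos/sin.
    // Because this is plain C++, the engine can lower it into the same CUDA
    // kernel as the dynamics instead of crossing back to Python each step.
    inline std::array<float, 3> observeGoal(const AgentState &s, const Goal &g) {
        float dx = g.x - s.x, dy = g.y - s.y;
        float range   = std::sqrt(dx * dx + dy * dy);
        float bearing = std::atan2(dy, dx) - s.heading;
        return {range, std::cos(bearing), std::sin(bearing)};
    }

    // Sparse reward: 1 when the agent is inside an (illustrative) goal radius.
    inline float goalReward(const AgentState &s, const Goal &g,
                            float goalRadius = 2.0f) {
        float dx = g.x - s.x, dy = g.y - s.y;
        return (dx * dx + dy * dy < goalRadius * goalRadius) ? 1.0f : 0.0f;
    }

Keeping observation, reward, and dynamics in one compiled kernel avoids the per-step CPU-GPU round trips that dominate the cost of conventional Python-based driving simulators.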
We show that using GPUDrive we are able to effectively train reinforcement learning agents over many scenes in the Waymo Motion dataset, yielding highly effective goal-reaching agents in minutes for individual scenes and generally capable agents in a few hours. We ship these trained agents as part of the code base at https://github.com/Emerge-Lab/gpudrive.