arXiv - CS - Robotics: Latest Publications

Hypergraph-based Motion Generation with Multi-modal Interaction Relational Reasoning
arXiv - CS - Robotics Pub Date : 2024-09-18 DOI: arxiv-2409.11676
Keshu Wu, Yang Zhou, Haotian Shi, Dominique Lord, Bin Ran, Xinyue Ye
{"title":"Hypergraph-based Motion Generation with Multi-modal Interaction Relational Reasoning","authors":"Keshu Wu, Yang Zhou, Haotian Shi, Dominique Lord, Bin Ran, Xinyue Ye","doi":"arxiv-2409.11676","DOIUrl":"https://doi.org/arxiv-2409.11676","url":null,"abstract":"The intricate nature of real-world driving environments, characterized by\u0000dynamic and diverse interactions among multiple vehicles and their possible\u0000future states, presents considerable challenges in accurately predicting the\u0000motion states of vehicles and handling the uncertainty inherent in the\u0000predictions. Addressing these challenges requires comprehensive modeling and\u0000reasoning to capture the implicit relations among vehicles and the\u0000corresponding diverse behaviors. This research introduces an integrated\u0000framework for autonomous vehicles (AVs) motion prediction to address these\u0000complexities, utilizing a novel Relational Hypergraph Interaction-informed\u0000Neural mOtion generator (RHINO). RHINO leverages hypergraph-based relational\u0000reasoning by integrating a multi-scale hypergraph neural network to model\u0000group-wise interactions among multiple vehicles and their multi-modal driving\u0000behaviors, thereby enhancing motion prediction accuracy and reliability.\u0000Experimental validation using real-world datasets demonstrates the superior\u0000performance of this framework in improving predictive accuracy and fostering\u0000socially aware automated driving in dynamic traffic scenarios.","PeriodicalId":501031,"journal":{"name":"arXiv - CS - Robotics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142266861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
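The abstract above describes a multi-scale hypergraph neural network for group-wise interaction reasoning but gives no architectural details here, so the following is only a minimal sketch of a generic hypergraph convolution layer (HGNN-style node-to-hyperedge-to-node message passing), not the authors' RHINO architecture. The incidence matrix, feature sizes, and weight names are illustrative assumptions.

```python
import numpy as np

def hypergraph_conv(X, H, W_edge, Theta):
    """One generic hypergraph convolution step (illustrative, not RHINO itself).

    X:       (N, F)  node (vehicle) features
    H:       (N, E)  incidence matrix, H[i, e] = 1 if node i belongs to hyperedge e
    W_edge:  (E,)    hyperedge weights (e.g., importance of each interaction group)
    Theta:   (F, F') learnable projection
    """
    Dv = H @ W_edge                      # node degrees, shape (N,)
    De = H.sum(axis=0)                   # hyperedge degrees, shape (E,)
    Dv_inv = np.diag(1.0 / np.maximum(Dv, 1e-8))
    De_inv = np.diag(1.0 / np.maximum(De, 1e-8))
    W = np.diag(W_edge)
    # Aggregate node -> hyperedge -> node, then project and apply a nonlinearity.
    X_out = Dv_inv @ H @ W @ De_inv @ H.T @ X @ Theta
    return np.maximum(X_out, 0.0)        # ReLU

# Toy example: 4 vehicles, 2 interaction groups (hyperedges).
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                               # per-vehicle features
H = np.array([[1, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
out = hypergraph_conv(X, H, np.ones(2), rng.normal(size=(8, 16)))
print(out.shape)                                          # (4, 16)
```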
GauTOAO: Gaussian-based Task-Oriented Affordance of Objects
arXiv - CS - Robotics Pub Date : 2024-09-18 DOI: arxiv-2409.11941
Jiawen Wang, Dingsheng Luo
{"title":"GauTOAO: Gaussian-based Task-Oriented Affordance of Objects","authors":"Jiawen Wang, Dingsheng Luo","doi":"arxiv-2409.11941","DOIUrl":"https://doi.org/arxiv-2409.11941","url":null,"abstract":"When your robot grasps an object using dexterous hands or grippers, it should\u0000understand the Task-Oriented Affordances of the Object(TOAO), as different\u0000tasks often require attention to specific parts of the object. To address this\u0000challenge, we propose GauTOAO, a Gaussian-based framework for Task-Oriented\u0000Affordance of Objects, which leverages vision-language models in a zero-shot\u0000manner to predict affordance-relevant regions of an object, given a natural\u0000language query. Our approach introduces a new paradigm: \"static camera, moving\u0000object,\" allowing the robot to better observe and understand the object in hand\u0000during manipulation. GauTOAO addresses the limitations of existing methods,\u0000which often lack effective spatial grouping, by extracting a comprehensive 3D\u0000object mask using DINO features. This mask is then used to conditionally query\u0000gaussians, producing a refined semantic distribution over the object for the\u0000specified task. This approach results in more accurate TOAO extraction,\u0000enhancing the robot's understanding of the object and improving task\u0000performance. We validate the effectiveness of GauTOAO through real-world\u0000experiments, demonstrating its capability to generalize across various tasks.","PeriodicalId":501031,"journal":{"name":"arXiv - CS - Robotics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142266823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
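As a rough illustration of the "mask-conditioned query over Gaussians" idea in the abstract, the snippet below scores a set of 3D Gaussians against a text-query embedding and restricts the resulting distribution to Gaussians inside an object mask. The per-Gaussian language features, the text embedding, and the mask are stand-ins; the actual GauTOAO pipeline (DINO-based masking, VLM features) is not reproduced here.

```python
import numpy as np

def task_affordance_distribution(gauss_feats, in_object_mask, text_embedding, temp=0.07):
    """Relevance distribution over Gaussians for a language query (illustrative sketch).

    gauss_feats:    (G, D) language-aligned feature per 3D Gaussian (assumed given)
    in_object_mask: (G,)   boolean, True if the Gaussian lies inside the 3D object mask
    text_embedding: (D,)   embedding of the natural-language task query
    """
    feats = gauss_feats / np.linalg.norm(gauss_feats, axis=1, keepdims=True)
    query = text_embedding / np.linalg.norm(text_embedding)
    sim = feats @ query                                # cosine similarity per Gaussian
    sim = np.where(in_object_mask, sim, -np.inf)       # condition on the object mask
    logits = sim / temp
    logits -= logits[in_object_mask].max()             # stabilize the softmax
    probs = np.exp(logits)
    probs[~in_object_mask] = 0.0
    return probs / probs.sum()                         # affordance-relevance distribution

# Toy usage with random features standing in for VLM/DINO embeddings.
rng = np.random.default_rng(1)
p = task_affordance_distribution(rng.normal(size=(100, 32)),
                                 rng.random(100) > 0.5,
                                 rng.normal(size=32))
print(p.sum(), p.argmax())
```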
Reinforcement Learning with Lie Group Orientations for Robotics
arXiv - CS - Robotics Pub Date : 2024-09-18 DOI: arxiv-2409.11935
Martin Schuck, Jan Brüdigam, Sandra Hirche, Angela Schoellig
{"title":"Reinforcement Learning with Lie Group Orientations for Robotics","authors":"Martin Schuck, Jan Brüdigam, Sandra Hirche, Angela Schoellig","doi":"arxiv-2409.11935","DOIUrl":"https://doi.org/arxiv-2409.11935","url":null,"abstract":"Handling orientations of robots and objects is a crucial aspect of many\u0000applications. Yet, ever so often, there is a lack of mathematical correctness\u0000when dealing with orientations, especially in learning pipelines involving, for\u0000example, artificial neural networks. In this paper, we investigate\u0000reinforcement learning with orientations and propose a simple modification of\u0000the network's input and output that adheres to the Lie group structure of\u0000orientations. As a result, we obtain an easy and efficient implementation that\u0000is directly usable with existing learning libraries and achieves significantly\u0000better performance than other common orientation representations. We briefly\u0000introduce Lie theory specifically for orientations in robotics to motivate and\u0000outline our approach. Subsequently, a thorough empirical evaluation of\u0000different combinations of orientation representations for states and actions\u0000demonstrates the superior performance of our proposed approach in different\u0000scenarios, including: direct orientation control, end effector orientation\u0000control, and pick-and-place tasks.","PeriodicalId":501031,"journal":{"name":"arXiv - CS - Robotics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142266824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
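The abstract only states that the policy network's input and output are modified to respect the Lie group structure of orientations, without spelling out the parameterization here. One common way to realize this, shown below purely as an assumed sketch, is to feed orientations to the policy as (flattened) rotation matrices and to interpret the output as a tangent-space increment in so(3) applied through the exponential map; whether this matches the authors' exact choice is not claimed.

```python
import numpy as np

def hat(w):
    """Map a 3-vector to the corresponding so(3) skew-symmetric matrix."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(w):
    """Exponential map so(3) -> SO(3) via Rodrigues' formula."""
    theta = np.linalg.norm(w)
    if theta < 1e-8:
        return np.eye(3) + hat(w)            # first-order approximation near identity
    K = hat(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def orientation_to_obs(R):
    """Policy input: the rotation matrix itself (smooth, no double cover issues)."""
    return R.reshape(-1)                     # 9-D observation block

def apply_policy_action(R, delta_w):
    """Policy output: a tangent-space increment delta_w in R^3, applied on the group."""
    return R @ exp_so3(delta_w)              # stays on SO(3) by construction

# Toy rollout step: current orientation plus a small commanded rotation.
R = np.eye(3)
obs = orientation_to_obs(R)                                   # what the network sees
R_next = apply_policy_action(R, np.array([0.0, 0.0, 0.1]))    # ~5.7 deg about z
print(np.allclose(R_next.T @ R_next, np.eye(3)))              # True: still a rotation
```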
IMRL: Integrating Visual, Physical, Temporal, and Geometric Representations for Enhanced Food Acquisition
arXiv - CS - Robotics Pub Date : 2024-09-18 DOI: arxiv-2409.12092
Rui Liu, Zahiruddin Mahammad, Amisha Bhaskar, Pratap Tokekar
{"title":"IMRL: Integrating Visual, Physical, Temporal, and Geometric Representations for Enhanced Food Acquisition","authors":"Rui Liu, Zahiruddin Mahammad, Amisha Bhaskar, Pratap Tokekar","doi":"arxiv-2409.12092","DOIUrl":"https://doi.org/arxiv-2409.12092","url":null,"abstract":"Robotic assistive feeding holds significant promise for improving the quality\u0000of life for individuals with eating disabilities. However, acquiring diverse\u0000food items under varying conditions and generalizing to unseen food presents\u0000unique challenges. Existing methods that rely on surface-level geometric\u0000information (e.g., bounding box and pose) derived from visual cues (e.g.,\u0000color, shape, and texture) often lacks adaptability and robustness, especially\u0000when foods share similar physical properties but differ in visual appearance.\u0000We employ imitation learning (IL) to learn a policy for food acquisition.\u0000Existing methods employ IL or Reinforcement Learning (RL) to learn a policy\u0000based on off-the-shelf image encoders such as ResNet-50. However, such\u0000representations are not robust and struggle to generalize across diverse\u0000acquisition scenarios. To address these limitations, we propose a novel\u0000approach, IMRL (Integrated Multi-Dimensional Representation Learning), which\u0000integrates visual, physical, temporal, and geometric representations to enhance\u0000the robustness and generalizability of IL for food acquisition. Our approach\u0000captures food types and physical properties (e.g., solid, semi-solid, granular,\u0000liquid, and mixture), models temporal dynamics of acquisition actions, and\u0000introduces geometric information to determine optimal scooping points and\u0000assess bowl fullness. IMRL enables IL to adaptively adjust scooping strategies\u0000based on context, improving the robot's capability to handle diverse food\u0000acquisition scenarios. Experiments on a real robot demonstrate our approach's\u0000robustness and adaptability across various foods and bowl configurations,\u0000including zero-shot generalization to unseen settings. Our approach achieves\u0000improvement up to $35%$ in success rate compared with the best-performing\u0000baseline.","PeriodicalId":501031,"journal":{"name":"arXiv - CS - Robotics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142266819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
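The abstract names four representation streams (visual, physical, temporal, geometric) fused for imitation learning but does not spell out the architecture here; below is only a schematic fusion module with made-up dimensions and encoder names, illustrating the general "encode each stream and concatenate before the policy head" pattern rather than IMRL's actual design.

```python
import torch
import torch.nn as nn

class MultiRepPolicy(nn.Module):
    """Schematic multi-representation policy head (all dimensions are illustrative)."""

    def __init__(self, act_dim=7):
        super().__init__()
        self.visual_enc = nn.Sequential(nn.Linear(512, 128), nn.ReLU())    # image features
        self.physical_enc = nn.Sequential(nn.Linear(8, 32), nn.ReLU())     # food-property logits
        self.temporal_enc = nn.GRU(input_size=128, hidden_size=64, batch_first=True)
        self.geometric_enc = nn.Sequential(nn.Linear(6, 32), nn.ReLU())    # scoop point / fullness
        self.policy = nn.Sequential(nn.Linear(128 + 32 + 64 + 32, 128), nn.ReLU(),
                                    nn.Linear(128, act_dim))

    def forward(self, visual, physical, feature_history, geometric):
        v = self.visual_enc(visual)
        p = self.physical_enc(physical)
        _, h = self.temporal_enc(feature_history)       # temporal dynamics of past frames
        g = self.geometric_enc(geometric)
        fused = torch.cat([v, p, h[-1], g], dim=-1)
        return self.policy(fused)                       # action for the scooping primitive

# Toy forward pass: batch of 2, history of 5 pre-encoded frame features.
policy = MultiRepPolicy()
a = policy(torch.randn(2, 512), torch.randn(2, 8),
           torch.randn(2, 5, 128), torch.randn(2, 6))
print(a.shape)  # torch.Size([2, 7])
```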
From Words to Wheels: Automated Style-Customized Policy Generation for Autonomous Driving
arXiv - CS - Robotics Pub Date : 2024-09-18 DOI: arxiv-2409.11694
Xu Han, Xianda Chen, Zhenghan Cai, Pinlong Cai, Meixin Zhu, Xiaowen Chu
{"title":"From Words to Wheels: Automated Style-Customized Policy Generation for Autonomous Driving","authors":"Xu Han, Xianda Chen, Zhenghan Cai, Pinlong Cai, Meixin Zhu, Xiaowen Chu","doi":"arxiv-2409.11694","DOIUrl":"https://doi.org/arxiv-2409.11694","url":null,"abstract":"Autonomous driving technology has witnessed rapid advancements, with\u0000foundation models improving interactivity and user experiences. However,\u0000current autonomous vehicles (AVs) face significant limitations in delivering\u0000command-based driving styles. Most existing methods either rely on predefined\u0000driving styles that require expert input or use data-driven techniques like\u0000Inverse Reinforcement Learning to extract styles from driving data. These\u0000approaches, though effective in some cases, face challenges: difficulty\u0000obtaining specific driving data for style matching (e.g., in Robotaxis),\u0000inability to align driving style metrics with user preferences, and limitations\u0000to pre-existing styles, restricting customization and generalization to new\u0000commands. This paper introduces Words2Wheels, a framework that automatically\u0000generates customized driving policies based on natural language user commands.\u0000Words2Wheels employs a Style-Customized Reward Function to generate a\u0000Style-Customized Driving Policy without relying on prior driving data. By\u0000leveraging large language models and a Driving Style Database, the framework\u0000efficiently retrieves, adapts, and generalizes driving styles. A Statistical\u0000Evaluation module ensures alignment with user preferences. Experimental results\u0000demonstrate that Words2Wheels outperforms existing methods in accuracy,\u0000generalization, and adaptability, offering a novel solution for customized AV\u0000driving behavior. Code and demo available at\u0000https://yokhon.github.io/Words2Wheels/.","PeriodicalId":501031,"journal":{"name":"arXiv - CS - Robotics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142266859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
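The framework's components (Style-Customized Reward Function, Driving Style Database, LLM-based retrieval) are only named in the abstract, so the snippet below is a loose stand-in: it maps a user command to weights over a few hand-picked reward terms via a toy keyword "database", where the real system would query a large language model. All reward terms, weights, and the `DrivingState` fields are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class DrivingState:
    speed: float          # m/s
    accel: float          # m/s^2
    gap_to_lead: float    # s, time headway to the lead vehicle

# Toy "driving style database": command keywords -> weights over reward terms.
# A real pipeline would retrieve and adapt these with an LLM instead of keyword matching.
STYLE_DB = {
    "aggressive":  {"progress": 1.0, "comfort": 0.1, "headway": 0.2},
    "comfortable": {"progress": 0.4, "comfort": 1.0, "headway": 0.6},
    "cautious":    {"progress": 0.3, "comfort": 0.6, "headway": 1.0},
}

def style_customized_reward(command: str):
    """Return a reward function whose weights reflect the natural-language command."""
    weights = next((w for k, w in STYLE_DB.items() if k in command.lower()),
                   STYLE_DB["comfortable"])          # fall back to a neutral style

    def reward(s: DrivingState) -> float:
        progress = s.speed / 30.0                    # normalized forward progress
        comfort = -abs(s.accel) / 3.0                # penalize harsh accel/braking
        headway = min(s.gap_to_lead, 3.0) / 3.0      # reward keeping a safe gap
        return (weights["progress"] * progress
                + weights["comfort"] * comfort
                + weights["headway"] * headway)

    return reward

r = style_customized_reward("drive in a cautious style, please")
print(r(DrivingState(speed=20.0, accel=-1.5, gap_to_lead=1.2)))
```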
One Map to Find Them All: Real-time Open-Vocabulary Mapping for Zero-shot Multi-Object Navigation
arXiv - CS - Robotics Pub Date : 2024-09-18 DOI: arxiv-2409.11764
Finn Lukas Busch, Timon Homberger, Jesús Ortega-Peimbert, Quantao Yang, Olov Andersson
{"title":"One Map to Find Them All: Real-time Open-Vocabulary Mapping for Zero-shot Multi-Object Navigation","authors":"Finn Lukas Busch, Timon Homberger, Jesús Ortega-Peimbert, Quantao Yang, Olov Andersson","doi":"arxiv-2409.11764","DOIUrl":"https://doi.org/arxiv-2409.11764","url":null,"abstract":"The capability to efficiently search for objects in complex environments is\u0000fundamental for many real-world robot applications. Recent advances in\u0000open-vocabulary vision models have resulted in semantically-informed object\u0000navigation methods that allow a robot to search for an arbitrary object without\u0000prior training. However, these zero-shot methods have so far treated the\u0000environment as unknown for each consecutive query. In this paper we introduce a\u0000new benchmark for zero-shot multi-object navigation, allowing the robot to\u0000leverage information gathered from previous searches to more efficiently find\u0000new objects. To address this problem we build a reusable open-vocabulary\u0000feature map tailored for real-time object search. We further propose a\u0000probabilistic-semantic map update that mitigates common sources of errors in\u0000semantic feature extraction and leverage this semantic uncertainty for informed\u0000multi-object exploration. We evaluate our method on a set of object navigation\u0000tasks in both simulation as well as with a real robot, running in real-time on\u0000a Jetson Orin AGX. We demonstrate that it outperforms existing state-of-the-art\u0000approaches both on single and multi-object navigation tasks. Additional videos,\u0000code and the multi-object navigation benchmark will be available on\u0000https://finnbsch.github.io/OneMap.","PeriodicalId":501031,"journal":{"name":"arXiv - CS - Robotics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142266856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
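The abstract mentions a probabilistic-semantic map update that tracks uncertainty in the extracted features, without detailing it here. One simple way to realize that idea, sketched below under an assumed Gaussian noise model, is a per-cell precision-weighted fusion of open-vocabulary feature vectors; this is a generic Bayesian update, not necessarily the one used in the paper.

```python
import numpy as np

class SemanticCell:
    """Per-cell open-vocabulary feature with a scalar variance (illustrative model)."""

    def __init__(self, dim):
        self.mean = np.zeros(dim)
        self.var = 1e6                     # effectively uninformed prior

    def update(self, feat, obs_var):
        """Precision-weighted fusion of a new feature observation."""
        k = self.var / (self.var + obs_var)          # Kalman-style gain
        self.mean = self.mean + k * (feat - self.mean)
        self.var = (1.0 - k) * self.var              # uncertainty shrinks with evidence

    def similarity(self, query):
        """Cosine similarity to a text-query embedding, usable for exploration scoring."""
        denom = np.linalg.norm(self.mean) * np.linalg.norm(query) + 1e-8
        return float(self.mean @ query / denom)

# Two noisy observations of the same cell: confident detections get a low obs_var.
cell = SemanticCell(dim=16)
rng = np.random.default_rng(2)
true_feat = rng.normal(size=16)
cell.update(true_feat + 0.5 * rng.normal(size=16), obs_var=0.25)
cell.update(true_feat + 0.1 * rng.normal(size=16), obs_var=0.01)
print(cell.var, cell.similarity(true_feat))
```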
Robots that Learn to Safely Influence via Prediction-Informed Reach-Avoid Dynamic Games
arXiv - CS - Robotics Pub Date : 2024-09-18 DOI: arxiv-2409.12153
Ravi Pandya, Changliu Liu, Andrea Bajcsy
{"title":"Robots that Learn to Safely Influence via Prediction-Informed Reach-Avoid Dynamic Games","authors":"Ravi Pandya, Changliu Liu, Andrea Bajcsy","doi":"arxiv-2409.12153","DOIUrl":"https://doi.org/arxiv-2409.12153","url":null,"abstract":"Robots can influence people to accomplish their tasks more efficiently:\u0000autonomous cars can inch forward at an intersection to pass through, and\u0000tabletop manipulators can go for an object on the table first. However, a\u0000robot's ability to influence can also compromise the safety of nearby people if\u0000naively executed. In this work, we pose and solve a novel robust reach-avoid\u0000dynamic game which enables robots to be maximally influential, but only when a\u0000safety backup control exists. On the human side, we model the human's behavior\u0000as goal-driven but conditioned on the robot's plan, enabling us to capture\u0000influence. On the robot side, we solve the dynamic game in the joint physical\u0000and belief space, enabling the robot to reason about how its uncertainty in\u0000human behavior will evolve over time. We instantiate our method, called SLIDE\u0000(Safely Leveraging Influence in Dynamic Environments), in a high-dimensional\u0000(39-D) simulated human-robot collaborative manipulation task solved via offline\u0000game-theoretic reinforcement learning. We compare our approach to a robust\u0000baseline that treats the human as a worst-case adversary, a safety controller\u0000that does not explicitly reason about influence, and an energy-function-based\u0000safety shield. We find that SLIDE consistently enables the robot to leverage\u0000the influence it has on the human when it is safe to do so, ultimately allowing\u0000the robot to be less conservative while still ensuring a high safety rate\u0000during task execution.","PeriodicalId":501031,"journal":{"name":"arXiv - CS - Robotics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142267027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
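SLIDE itself solves the game with offline game-theoretic reinforcement learning in a joint physical-belief space, which is well beyond a snippet; as background only, the code below runs the standard discrete reach-avoid game backup on a tiny tabular problem, with the robot as maximizer and the human treated as a worst-case minimizer. The toy dynamics, margins, and grid are invented for illustration and do not come from the paper.

```python
import numpy as np

# Tiny 1-D corridor: the robot must reach the right-hand target cells while never
# entering the failure cell, despite worst-case human-induced displacement.
N, FAIL = 11, 4
TARGET = set(range(8, N))
l = np.array([1.0 if x in TARGET else -1.0 for x in range(N)])  # >0 inside the target set
g = np.array([-1.0 if x == FAIL else 1.0 for x in range(N)])    # <0 inside the failure set

ROBOT_ACTIONS = [-2, -1, 0, 1, 2]   # maximizer: robot displacement per step
HUMAN_ACTIONS = [-1, 0, 1]          # minimizer: worst-case human-induced displacement

def step(x, u, d):
    return int(np.clip(x + u + d, 0, N - 1))

V = np.minimum(g, l)                # terminal reach-avoid outcome
for _ in range(50):                 # value iteration to the fixed point
    V_new = np.empty(N)
    for x in range(N):
        best_u = max(min(V[step(x, u, d)] for d in HUMAN_ACTIONS) for u in ROBOT_ACTIONS)
        V_new[x] = min(g[x], max(l[x], best_u))
    if np.allclose(V_new, V):
        break
    V = V_new

# V[x] > 0 means the robot can guarantee reaching the target without ever hitting the
# failure cell; here that holds only on the target side of the failure cell, because the
# human's +-1 disturbance can always force a crossing robot into cell FAIL.
print(np.round(V, 1))
```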
SpotLight: Robotic Scene Understanding through Interaction and Affordance Detection
arXiv - CS - Robotics Pub Date : 2024-09-18 DOI: arxiv-2409.11870
Tim Engelbracht, René Zurbrügg, Marc Pollefeys, Hermann Blum, Zuria Bauer
{"title":"SpotLight: Robotic Scene Understanding through Interaction and Affordance Detection","authors":"Tim Engelbracht, René Zurbrügg, Marc Pollefeys, Hermann Blum, Zuria Bauer","doi":"arxiv-2409.11870","DOIUrl":"https://doi.org/arxiv-2409.11870","url":null,"abstract":"Despite increasing research efforts on household robotics, robots intended\u0000for deployment in domestic settings still struggle with more complex tasks such\u0000as interacting with functional elements like drawers or light switches, largely\u0000due to limited task-specific understanding and interaction capabilities. These\u0000tasks require not only detection and pose estimation but also an understanding\u0000of the affordances these elements provide. To address these challenges and\u0000enhance robotic scene understanding, we introduce SpotLight: A comprehensive\u0000framework for robotic interaction with functional elements, specifically light\u0000switches. Furthermore, this framework enables robots to improve their\u0000environmental understanding through interaction. Leveraging VLM-based\u0000affordance prediction to estimate motion primitives for light switch\u0000interaction, we achieve up to 84% operation success in real world experiments.\u0000We further introduce a specialized dataset containing 715 images as well as a\u0000custom detection model for light switch detection. We demonstrate how the\u0000framework can facilitate robot learning through physical interaction by having\u0000the robot explore the environment and discover previously unknown relationships\u0000in a scene graph representation. Lastly, we propose an extension to the\u0000framework to accommodate other functional interactions such as swing doors,\u0000showcasing its flexibility. Videos and Code:\u0000timengelbracht.github.io/SpotLight/","PeriodicalId":501031,"journal":{"name":"arXiv - CS - Robotics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142266828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
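The abstract describes predicting affordances and deriving motion primitives for light-switch interaction, but gives no interface details here. The snippet below is a hypothetical geometric helper only: given an estimated switch pose (position plus surface normal) and a predicted affordance label, it produces a pre-contact pose and a straight-line press motion. None of the names correspond to the released SpotLight code.

```python
import numpy as np

def press_primitive(switch_pos, surface_normal, affordance="push",
                    standoff=0.10, press_depth=0.01):
    """Build a simple approach-and-press motion for a detected light switch.

    switch_pos:     (3,) estimated switch position in the robot/world frame [m]
    surface_normal: (3,) outward wall normal at the switch
    affordance:     predicted interaction type; only 'push' is handled in this sketch
    """
    if affordance != "push":
        raise NotImplementedError("sketch only covers push-type switches")
    n = surface_normal / np.linalg.norm(surface_normal)
    pre_contact = switch_pos + standoff * n          # hover in front of the switch
    contact = switch_pos
    pressed = switch_pos - press_depth * n           # push slightly into the surface
    # Waypoints for a Cartesian controller: approach, press, retreat.
    return [pre_contact, contact, pressed, contact, pre_contact]

waypoints = press_primitive(np.array([2.0, 0.5, 1.1]), np.array([0.0, -1.0, 0.0]))
for w in waypoints:
    print(np.round(w, 3))
```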
An Efficient Projection-Based Next-best-view Planning Framework for Reconstruction of Unknown Objects
arXiv - CS - Robotics Pub Date : 2024-09-18 DOI: arxiv-2409.12096
Zhizhou Jia, Shaohui Zhang, Qun Hao
{"title":"An Efficient Projection-Based Next-best-view Planning Framework for Reconstruction of Unknown Objects","authors":"Zhizhou Jia, Shaohui Zhang, Qun Hao","doi":"arxiv-2409.12096","DOIUrl":"https://doi.org/arxiv-2409.12096","url":null,"abstract":"Efficiently and completely capturing the three-dimensional data of an object\u0000is a fundamental problem in industrial and robotic applications. The task of\u0000next-best-view (NBV) planning is to infer the pose of the next viewpoint based\u0000on the current data, and gradually realize the complete three-dimensional\u0000reconstruction. Many existing algorithms, however, suffer a large computational\u0000burden due to the use of ray-casting. To address this, this paper proposes a\u0000projection-based NBV planning framework. It can select the next best view at an\u0000extremely fast speed while ensuring the complete scanning of the object.\u0000Specifically, this framework refits different types of voxel clusters into\u0000ellipsoids based on the voxel structure.Then, the next best view is selected\u0000from the candidate views using a projection-based viewpoint quality evaluation\u0000function in conjunction with a global partitioning strategy. This process\u0000replaces the ray-casting in voxel structures, significantly improving the\u0000computational efficiency. Comparative experiments with other algorithms in a\u0000simulation environment show that the framework proposed in this paper can\u0000achieve 10 times efficiency improvement on the basis of capturing roughly the\u0000same coverage. The real-world experimental results also prove the efficiency\u0000and feasibility of the framework.","PeriodicalId":501031,"journal":{"name":"arXiv - CS - Robotics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142267029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
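The paper's viewpoint quality function and global partitioning strategy are only summarized in the abstract; the snippet below shows just the geometric core of a projection-based score under simplifying assumptions: each unscanned voxel cluster is represented by an ellipsoid (center plus shape matrix), and a candidate view is scored by the total area these ellipsoids project onto the image plane, with no ray-casting involved. Occlusion handling, cluster typing, and the exact scoring used in the paper are omitted.

```python
import numpy as np

def view_basis(view_dir):
    """Orthonormal basis of the image plane perpendicular to the viewing direction."""
    d = view_dir / np.linalg.norm(view_dir)
    helper = np.array([0.0, 0.0, 1.0]) if abs(d[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
    u = np.cross(d, helper); u /= np.linalg.norm(u)
    v = np.cross(d, u)
    return np.stack([u, v], axis=1)          # shape (3, 2)

def projected_area(Sigma, B):
    """Area of the orthographic projection of the ellipsoid x^T Sigma^{-1} x <= 1."""
    return np.pi * np.sqrt(np.linalg.det(B.T @ Sigma @ B))

def viewpoint_score(ellipsoids, view_dir):
    """Sum of projected areas of unscanned-region ellipsoids for one candidate view."""
    B = view_basis(view_dir)
    return sum(projected_area(Sigma, B) for _, Sigma in ellipsoids)

# Two toy ellipsoids (center, shape matrix); centers are unused in this simplified score.
ellipsoids = [(np.zeros(3), np.diag([0.04, 0.01, 0.01])),
              (np.ones(3), np.diag([0.01, 0.09, 0.01]))]
candidates = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])]
best = max(candidates, key=lambda d: viewpoint_score(ellipsoids, d))
print(best, [round(viewpoint_score(ellipsoids, d), 4) for d in candidates])
```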
XP-MARL: Auxiliary Prioritization in Multi-Agent Reinforcement Learning to Address Non-Stationarity
arXiv - CS - Robotics Pub Date : 2024-09-18 DOI: arxiv-2409.11852
Jianye Xu, Omar Sobhy, Bassam Alrifaee
{"title":"XP-MARL: Auxiliary Prioritization in Multi-Agent Reinforcement Learning to Address Non-Stationarity","authors":"Jianye Xu, Omar Sobhy, Bassam Alrifaee","doi":"arxiv-2409.11852","DOIUrl":"https://doi.org/arxiv-2409.11852","url":null,"abstract":"Non-stationarity poses a fundamental challenge in Multi-Agent Reinforcement\u0000Learning (MARL), arising from agents simultaneously learning and altering their\u0000policies. This creates a non-stationary environment from the perspective of\u0000each individual agent, often leading to suboptimal or even unconverged learning\u0000outcomes. We propose an open-source framework named XP-MARL, which augments\u0000MARL with auxiliary prioritization to address this challenge in cooperative\u0000settings. XP-MARL is 1) founded upon our hypothesis that prioritizing agents\u0000and letting higher-priority agents establish their actions first would\u0000stabilize the learning process and thus mitigate non-stationarity and 2)\u0000enabled by our proposed mechanism called action propagation, where\u0000higher-priority agents act first and communicate their actions, providing a\u0000more stationary environment for others. Moreover, instead of using a predefined\u0000or heuristic priority assignment, XP-MARL learns priority-assignment policies\u0000with an auxiliary MARL problem, leading to a joint learning scheme. Experiments\u0000in a motion-planning scenario involving Connected and Automated Vehicles (CAVs)\u0000demonstrate that XP-MARL improves the safety of a baseline model by 84.4% and\u0000outperforms a state-of-the-art approach, which improves the baseline by only\u000012.8%. Code: github.com/cas-lab-munich/sigmarl","PeriodicalId":501031,"journal":{"name":"arXiv - CS - Robotics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142266830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
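Action propagation is described in the abstract as higher-priority agents acting first and communicating their actions; the loop below sketches that mechanism with placeholder policies whose signatures are invented for illustration, and it leaves out the auxiliary MARL problem that learns the priorities in XP-MARL itself.

```python
import numpy as np

def propagate_actions(observations, priorities, policies):
    """Compute actions in priority order; later agents condition on earlier actions.

    observations: dict agent_id -> observation vector
    priorities:   dict agent_id -> scalar priority (higher acts first)
    policies:     dict agent_id -> callable(obs, higher_priority_actions) -> action
    """
    order = sorted(priorities, key=priorities.get, reverse=True)
    actions = {}
    for agent in order:
        # Each agent sees the already-committed actions of all higher-priority agents,
        # which is what makes its own learning problem closer to stationary.
        actions[agent] = policies[agent](observations[agent], dict(actions))
    return actions

# Toy setup: 3 agents with hand-set priorities and a dummy policy.
rng = np.random.default_rng(3)
obs = {f"cav_{i}": rng.normal(size=4) for i in range(3)}
prio = {"cav_0": 0.2, "cav_1": 0.9, "cav_2": 0.5}

def dummy_policy(o, higher_actions):
    # Placeholder: steer slightly away from the mean of already-committed actions.
    bias = np.mean(list(higher_actions.values()), axis=0) if higher_actions else 0.0
    return np.tanh(o[:2]) - 0.1 * bias

acts = propagate_actions(obs, prio, {a: dummy_policy for a in obs})
print(sorted(prio, key=prio.get, reverse=True), {k: np.round(v, 2) for k, v in acts.items()})
```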