arXiv - CS - Robotics Latest Publications

WeHelp: A Shared Autonomy System for Wheelchair Users
arXiv - CS - Robotics Pub Date : 2024-09-18 DOI: arxiv-2409.12159
Abulikemu Abuduweili, Alice Wu, Tianhao Wei, Weiye Zhao
Abstract: There is a large population of wheelchair users, most of whom need help with daily tasks. However, according to recent reports, their needs are not properly met due to a shortage of caregivers. In this project, we therefore develop WeHelp, a shared autonomy system for wheelchair users. A robot with the WeHelp system has three modes: following mode, remote control mode, and teleoperation mode. In following mode, the robot follows the wheelchair user automatically via visual tracking; the user can ask the robot to follow from behind, on the left, or on the right. When the wheelchair user asks for help, the robot recognizes the command via speech recognition and switches to teleoperation or remote control mode. In teleoperation mode, the wheelchair user takes over the robot with a joystick and controls it to complete complex tasks, such as opening doors, moving obstacles out of the way, or reaching objects on a high shelf or low on the ground. In remote control mode, a remote assistant takes over the robot and helps the wheelchair user complete such tasks. Our evaluation shows that the pipeline is useful and practical for wheelchair users. Source code and a demo are available at https://github.com/Walleclipse/WeHelp.
Citations: 0
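The three-mode switching described in the abstract can be pictured as a small state machine driven by recognized speech commands. The command phrases below are hypothetical stand-ins for whatever vocabulary the system's speech recognizer actually uses:

```python
from enum import Enum, auto

class Mode(Enum):
    FOLLOWING = auto()
    TELEOPERATION = auto()
    REMOTE_CONTROL = auto()

# Hypothetical mapping from recognized utterances to mode switches;
# the actual command set is not specified in the abstract.
COMMANDS = {
    "follow me": Mode.FOLLOWING,
    "help me": Mode.TELEOPERATION,
    "call assistant": Mode.REMOTE_CONTROL,
}

def switch_mode(current: Mode, utterance: str) -> Mode:
    """Return the next mode; keep the current mode for unknown commands."""
    return COMMANDS.get(utterance.strip().lower(), current)
```

The dispatcher stays in its current mode on unrecognized input, which is the conservative default for a safety-relevant assistive robot.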
Metric-Semantic Factor Graph Generation based on Graph Neural Networks
arXiv - CS - Robotics Pub Date : 2024-09-18 DOI: arxiv-2409.11972
Jose Andres Millan-Romera, Hriday Bavle, Muhammad Shaheer, Holger Voos, Jose Luis Sanchez-Lopez
Abstract: Understanding the relationships between geometric structures and semantic concepts is crucial for building accurate models of complex environments. Indoors, certain spatial constraints, such as the relative positioning of planes, remain consistent despite variations in layout. This paper explores how these invariant relationships can be captured in a graph SLAM framework by representing high-level concepts like rooms and walls and linking them to geometric elements like planes through an optimizable factor graph. Previous efforts have tackled this issue with ad-hoc solutions for each concept generation and with manually defined factors. This paper proposes a novel method for metric-semantic factor graph generation that includes defining a semantic scene graph, integrating geometric information, and learning the interconnecting factors, all based on Graph Neural Networks (GNNs). An edge classification network (G-GNN) sorts the edges between planes into same-room, same-wall, or none types. The resulting relations are clustered, generating a room or wall for each cluster. A second family of networks (F-GNN) infers the geometric origin of the new nodes. The definition of the factors employs the same F-GNN used for the metric attributes of the generated nodes. Furthermore, the new factor graph is shared with the S-Graphs+ algorithm, extending its graph expressiveness and scene representation with the ultimate goal of improving SLAM performance. The complexity of the environments is increased to N-plane rooms by training the networks on L-shaped rooms. The framework is evaluated in synthetic and simulated scenarios, as no real datasets of the required complex layouts are available.
Citations: 0
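The clustering step — grouping planes whose connecting edges the G-GNN labeled "same room" (or "same wall") and spawning one higher-level node per cluster — amounts to finding connected components. A minimal union-find sketch, independent of the paper's actual implementation:

```python
def cluster_planes(n_planes, same_room_edges):
    """Group plane indices connected by 'same room' edges into clusters;
    one room node would be generated per returned cluster."""
    parent = list(range(n_planes))

    def find(x):
        # Path-halving find.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for a, b in same_room_edges:
        union(a, b)

    clusters = {}
    for i in range(n_planes):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())
```

The same routine applies unchanged to "same wall" edges; only the edge set differs.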
Residual Descent Differential Dynamic Game (RD3G) -- A Fast Newton Solver for Constrained General Sum Games
arXiv - CS - Robotics Pub Date : 2024-09-18 DOI: arxiv-2409.12152
Zhiyuan Zhang, Panagiotis Tsiotras
Abstract: We present Residual Descent Differential Dynamic Game (RD3G), a Newton-based solver for constrained multi-agent game-control problems. The proposed solver seeks a local Nash equilibrium for problems where agents are coupled through their rewards and state constraints. We compare the proposed method against competing state-of-the-art techniques and showcase the computational benefits of the RD3G algorithm on several example problems.
Citations: 0
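While RD3G specializes to game-theoretic first-order conditions, the core idea of a Newton method driving a stacked residual to zero can be sketched generically. This is an illustration with a finite-difference Jacobian, not the paper's algorithm:

```python
import numpy as np

def newton_residual_descent(residual, x0, tol=1e-8, max_iter=50, eps=1e-6):
    """Generic Newton iteration driving a stacked residual R(x) to zero.
    The Jacobian is approximated column-by-column with finite differences."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        J = np.empty((r.size, x.size))
        for j in range(x.size):
            dx = np.zeros_like(x)
            dx[j] = eps
            J[:, j] = (residual(x + dx) - r) / eps
        # Newton step: solve J * delta = r, then descend the residual.
        x = x - np.linalg.solve(J, r)
    return x
```

In a two-player setting, `residual` would stack each player's stationarity and constraint conditions; a joint root is a candidate local Nash equilibrium.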
Secure Control Systems for Autonomous Quadrotors against Cyber-Attacks
arXiv - CS - Robotics Pub Date : 2024-09-18 DOI: arxiv-2409.11897
Samuel Belkadi
Abstract: The problem of safety for robotic systems has been extensively studied. However, little attention has been given to security issues for three-dimensional systems such as quadrotors. Malicious adversaries can compromise robot sensors and communication networks, causing incidents, achieving illegal objectives, or even injuring people. This study first designs an intelligent control system for autonomous quadrotors. Then, it investigates the problems of optimal false data injection attack scheduling and countermeasure design for unmanned aerial vehicles. Using a state-of-the-art deep learning-based approach, an optimal false data injection attack scheme is proposed to degrade a quadrotor's tracking performance with limited attack energy. Subsequently, an optimal tracking control strategy is learned to mitigate attacks and recover the quadrotor's tracking performance. We base our work on Agilicious, a state-of-the-art quadrotor recently deployed in autonomous settings. This paper is the first in the United Kingdom to deploy this quadrotor and implement reinforcement learning on its platform. Therefore, to promote easy reproducibility with minimal engineering overhead, we further provide (1) a comprehensive breakdown of this quadrotor, including software stacks and hardware alternatives; (2) a detailed reinforcement learning framework to train autonomous controllers on Agilicious agents; and (3) a new open-source environment that builds upon PyFlyt for future reinforcement learning research on Agilicious platforms. Both simulated and real-world experiments are conducted to show the effectiveness of the proposed frameworks in Section 5.2.
Citations: 0
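The "limited attack energy" constraint can be illustrated with a simple projection: any candidate injection signal is scaled back onto the energy budget before being added to the sensor stream. This is a toy constraint handler, not the paper's learned attack scheduler:

```python
import numpy as np

def project_attack(a, energy_budget):
    """Scale an injection signal so its total energy (sum of squares)
    stays within the attacker's budget; leave it unchanged otherwise."""
    e = float(np.sum(a ** 2))
    if e <= energy_budget:
        return a
    return a * np.sqrt(energy_budget / e)
```

A learned attack scheme would choose *where* in the trajectory to spend this budget; the projection only enforces *how much* can be spent in total.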
Representing Positional Information in Generative World Models for Object Manipulation
arXiv - CS - Robotics Pub Date : 2024-09-18 DOI: arxiv-2409.12005
Stefano Ferraro, Pietro Mazzaglia, Tim Verbelen, Bart Dhoedt, Sai Rajeswar
Abstract: Object manipulation capabilities are essential skills that set apart embodied agents engaging with the world, especially in robotics. The ability to predict the outcomes of interactions with objects is paramount in this setting. While model-based control methods have started to be employed for manipulation tasks, they have faced challenges in accurately manipulating objects. Analyzing the causes of this limitation, we identify the underperformance as stemming from the way current world models represent crucial positional information, especially the target's goal specification for object-positioning tasks. We introduce a general approach that empowers world-model-based agents to solve object-positioning tasks effectively. We propose two variants of this approach for generative world models: position-conditioned (PCP) and latent-conditioned (LCP) policy learning. In particular, LCP employs object-centric latent representations that explicitly capture object positional information for goal specification. This naturally leads to the emergence of multimodal capabilities, enabling goals to be specified through spatial coordinates or a visual goal. Our methods are rigorously evaluated across several manipulation environments, showing favorable performance compared to current model-based control approaches.
Citations: 0
Particle-based Instance-aware Semantic Occupancy Mapping in Dynamic Environments
arXiv - CS - Robotics Pub Date : 2024-09-18 DOI: arxiv-2409.11975
Gang Chen, Zhaoying Wang, Wei Dong, Javier Alonso-Mora
Abstract: Representing the 3D environment with instance-aware semantic and geometric information is crucial for interaction-aware robots in dynamic environments. Nonetheless, creating such a representation poses challenges due to sensor noise, instance segmentation and tracking errors, and the dynamic motion of objects. This paper introduces a novel particle-based instance-aware semantic occupancy map to tackle these challenges. Particles with an augmented instance state are used to estimate the Probability Hypothesis Density (PHD) of the objects and implicitly model the environment. Using a State-augmented Sequential Monte Carlo PHD (S$^2$MC-PHD) filter, these particles are updated to jointly estimate occupancy status, semantics, and instance IDs, mitigating noise. Additionally, a memory module is adopted to enhance the map's responsiveness to previously observed objects. Experimental results on the Virtual KITTI 2 dataset demonstrate that the proposed approach surpasses state-of-the-art methods across multiple metrics under different noise conditions. Subsequent tests on real-world data further validate the effectiveness of the proposed approach.
Citations: 0
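At the heart of any Sequential Monte Carlo filter, including PHD variants, is a reweight-and-resample step. A stripped-down single-update sketch, without the PHD intensity machinery or the augmented instance states the paper adds:

```python
import numpy as np

def reweight_and_resample(particles, weights, likelihoods, rng):
    """One SMC measurement update: reweight particles by their measurement
    likelihood, normalize, then systematic-resample back to equal weights."""
    w = weights * likelihoods
    w /= w.sum()
    n = len(particles)
    # Systematic resampling: one random offset, n evenly spaced positions.
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(w), positions)
    return particles[idx], np.full(n, 1.0 / n)
```

In the paper's setting each particle would additionally carry semantic and instance-ID state, and the update would target the PHD rather than a single posterior.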
Repeatable Energy-Efficient Perching for Flapping-Wing Robots Using Soft Grippers
arXiv - CS - Robotics Pub Date : 2024-09-18 DOI: arxiv-2409.11921
Krispin C. V. Broers, Sophie F. Armanini
Abstract: With the emergence of new flapping-wing micro aerial vehicle (FWMAV) designs, a need for extensive and advanced mission capabilities arises. FWMAVs try to adapt and emulate the flight features of birds and flying insects. While current designs already achieve high manoeuvrability, they still almost entirely lack perching and take-off abilities. These capabilities could, for instance, enable long-term monitoring and surveillance missions, as well as operations in cluttered environments or in proximity to humans and animals. We present the development and testing of a framework that enables repeatable perching and take-off for small to medium-sized FWMAVs, utilising soft, non-damaging grippers. Thanks to its novel active-passive actuation system, an energy-conserving state can be achieved and maintained indefinitely while the vehicle is perched. A prototype of the proposed system weighing under 39 g was manufactured and extensively tested on a 110 g flapping-wing robot. Successful free-flight tests demonstrated the full mission cycle of landing, perching, and subsequent take-off. The telemetry data recorded during the flights yields extensive insight into the system's behaviour and is a valuable step towards full automation and optimisation of the entire take-off and landing cycle.
Citations: 0
Multi-robot connection towards collective obstacle field traversal
arXiv - CS - Robotics Pub Date : 2024-09-18 DOI: arxiv-2409.11709
Haodi Hu, Xingjue Liao, Wuhao Du, Feifei Qian
Abstract: Environments with large terrain height variations present great challenges for legged robot locomotion. Drawing inspiration from fire ants' collective assembly behavior, we study strategies that can enable two "connectable" robots to collectively navigate over bumpy terrain with height variations larger than the robot leg length. Each robot was designed to be extremely simple, with a cubical body and one rotary motor actuating four vertical peg legs that move in pairs. Two or more robots can physically connect to one another to enhance collective mobility. We performed locomotion experiments with a two-robot group across an obstacle field filled with uniformly distributed semi-spherical "boulders". Experimentally measured robot speed suggested that the connection length between the robots has a significant effect on collective mobility: connection lengths C in [0.86, 0.9] robot unit body lengths (UBL) produced sustainable movement across the obstacle field, whereas connection lengths C in [0.63, 0.84] and [0.92, 1.1] UBL resulted in low traversability. An energy-landscape-based model revealed the underlying mechanism by which connection length modulates collective mobility through the system's potential energy landscape, and informed adaptation strategies for the two-robot system to adjust its connection length for traversing obstacle fields with varying spatial frequencies. Our results demonstrate that by varying the connection configuration, the two-robot system can leverage mechanical intelligence to better utilize obstacle interaction forces and produce improved locomotion. Going forward, we envision that generalized principles of robot-environment coupling can inform design and control strategies for large groups of small robots to achieve ant-like collective environment negotiation.
Citations: 0
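The energy-landscape argument — that connection length tunes how much the pair's potential energy varies as it slides over a periodic boulder field — can be probed with a toy 1-D model. The sinusoidal terrain and mean-height potential below are illustrative assumptions, not the paper's model:

```python
import numpy as np

def energy_barrier(terrain, connection, xs):
    """Toy 1-D energy-landscape probe: two contact points separated by
    `connection` slide over `terrain`; the pair's potential is taken
    proportional to the mean contact height, and the barrier is the
    peak-to-trough variation of that potential along the slide."""
    heights = [(terrain(x) + terrain(x + connection)) / 2 for x in xs]
    return max(heights) - min(heights)
```

With a sinusoidal terrain of period 1, a connection of half a period puts the two contacts in antiphase and flattens the landscape, while a full-period connection puts them in phase and maximizes the barrier — a cartoon of how connection length modulates traversability.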
Generalized Robot Learning Framework
arXiv - CS - Robotics Pub Date : 2024-09-18 DOI: arxiv-2409.12061
Jiahuan Yan, Zhouyang Hong, Yu Zhao, Yu Tian, Yunxin Liu, Travis Davies, Luhui Hu
Abstract: Imitation-based robot learning has recently gained significant attention in the robotics field due to its theoretical potential for transferability and generalizability. However, it remains notoriously costly in terms of both hardware and data collection, and deploying it in real-world environments demands meticulous setup of robots and precise experimental conditions. In this paper, we present a low-cost robot learning framework that is both easily reproducible and transferable to various robots and environments. We demonstrate that deployable imitation learning can be successfully applied even to industrial-grade robots, not just expensive collaborative robotic arms. Furthermore, our results show that multi-task robot learning is achievable with simple network architectures and fewer demonstrations than previously thought necessary. As current evaluation methods are almost subjective for real-world manipulation tasks, we propose the Voting Positive Rate (VPR), a novel evaluation strategy that provides a more objective assessment of performance. We conduct an extensive comparison of success rates across various self-designed tasks to validate our approach. To foster collaboration and support the robot learning community, we have open-sourced all relevant datasets and model checkpoints, available at huggingface.co/ZhiChengAI.
Citations: 0
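The abstract does not define VPR precisely. One plausible reading — the fraction of positive evaluator votes judging a rollout successful — is sketched below as an assumption about the metric, not its published definition:

```python
def voting_positive_rate(votes):
    """Hypothetical VPR: fraction of evaluator votes that judge a
    manipulation rollout successful (truthy = positive vote)."""
    votes = list(votes)
    if not votes:
        raise ValueError("VPR is undefined with no votes")
    return sum(bool(v) for v in votes) / len(votes)
```

Aggregating multiple evaluators' binary judgments this way smooths out individual bias, which matches the abstract's stated goal of a more objective assessment.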
RMP-YOLO: A Robust Motion Predictor for Partially Observable Scenarios even if You Only Look Once
arXiv - CS - Robotics Pub Date : 2024-09-18 DOI: arxiv-2409.11696
Jiawei Sun, Jiahui Li, Tingchen Liu, Chengran Yuan, Shuo Sun, Zefan Huang, Anthony Wong, Keng Peng Tee, Marcelo H. Ang Jr
Abstract: We introduce RMP-YOLO, a unified framework designed to provide robust motion predictions even with incomplete input data. Our key insight stems from the observation that complete and reliable historical trajectory data plays a pivotal role in ensuring accurate motion prediction. We therefore propose a new paradigm that prioritizes the reconstruction of intact historical trajectories before feeding them into the prediction modules. Our approach introduces a novel scene tokenization module to enhance the extraction and fusion of spatial and temporal features. Our proposed recovery module then reconstructs agents' incomplete historical trajectories by leveraging local map topology and interactions with nearby agents. The reconstructed, clean historical data is then integrated into the downstream prediction modules. Our framework effectively handles missing data of varying lengths, remains robust against observation noise, and maintains high prediction accuracy. Furthermore, our recovery module is compatible with existing prediction models, ensuring seamless integration. Extensive experiments validate the effectiveness of our approach, and deployment in real-world autonomous vehicles confirms its practical utility. In the 2024 Waymo Motion Prediction Competition, RMP-YOLO achieved state-of-the-art performance, securing third place.
Citations: 0
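The recovery module itself is learned, but its role — filling gaps in an agent's history before prediction — can be mimicked with plain interpolation as a naive baseline stand-in:

```python
import numpy as np

def fill_missing(traj):
    """Stand-in for a learned recovery module: linearly interpolate NaN
    gaps in a 1-D trajectory history before handing it to a predictor.
    Assumes the first and last samples are observed."""
    traj = np.asarray(traj, dtype=float)
    idx = np.arange(len(traj))
    valid = ~np.isnan(traj)
    return np.interp(idx, idx[valid], traj[valid])
```

A learned recovery module improves on this by exploiting map topology and neighboring agents rather than assuming straight-line motion between observations.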