arXiv - CS - Robotics: Latest Articles

Multi-Robot Coordination Induced in Hazardous Environments through an Adversarial Graph-Traversal Game
arXiv - CS - Robotics Pub Date: 2024-09-12 DOI: arxiv-2409.08222
James Berneburg, Xuan Wang, Xuesu Xiao, Daigo Shishika
This paper presents a game theoretic formulation of a graph traversal problem, with applications to robots moving through hazardous environments in the presence of an adversary, as in military and security applications. The blue team of robots moves in an environment modeled by a time-varying graph, attempting to reach some goal with minimum cost, while the red team controls how the graph changes to maximize the cost. The problem is formulated as a stochastic game, so that Nash equilibrium strategies can be computed numerically. Bounds are provided for the game value, with a guarantee that it solves the original problem. Numerical simulations demonstrate the results and the effectiveness of this method, particularly showing the benefit of mixing actions for both players, as well as beneficial coordinated behavior, where blue robots split up and/or synchronize to traverse risky edges.
Citations: 0
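The abstract above stresses the benefit of mixed (randomized) actions for both players. As a textbook-level illustration of that idea only — not the paper's stochastic-game solver, whose graph model and value bounds are not reproduced here — a 2x2 zero-sum "risky edge" game can be solved in closed form, and its equilibrium forces both sides to randomize. The function name and the example payoffs are hypothetical:

```python
def solve_2x2_zero_sum(A):
    """Mixed-strategy Nash equilibrium of a 2x2 zero-sum game with no
    saddle point. A[i][j] is the cost paid by the row (blue) player,
    which the column (red) player wants to maximize. Uses the standard
    closed-form solution for 2x2 games."""
    (a, b), (c, d) = A
    denom = a - b - c + d
    p = (d - c) / denom          # P(blue picks row 0)
    q = (d - b) / denom          # P(red picks column 0)
    v = (a * d - b * c) / denom  # game value (expected cost to blue)
    return p, q, v

# Hypothetical scenario: blue must traverse one of two edges, red can
# ambush exactly one of them (cost 1 if caught, 0 otherwise).
p, q, v = solve_2x2_zero_sum([[1.0, 0.0], [0.0, 1.0]])
```

In this matching-pennies-like example both players mix 50/50 and the expected cost is 0.5; committing deterministically to either edge would let the opponent exploit the choice, which is the intuition behind the paper's observed benefit of mixing.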
Relevance for Human Robot Collaboration
arXiv - CS - Robotics Pub Date: 2024-09-12 DOI: arxiv-2409.07753
Xiaotong Zhang, Dingcheng Huang, Kamal Youcef-Toumi
Effective human-robot collaboration (HRC) requires the robots to possess human-like intelligence. Inspired by the human's cognitive ability to selectively process and filter elements in complex environments, this paper introduces a novel concept and scene-understanding approach termed 'relevance.' It identifies relevant components in a scene. To accurately and efficiently quantify relevance, we developed an event-based framework that selectively triggers relevance determination, along with a probabilistic methodology built on a structured scene representation. Simulation results demonstrate that the relevance framework and methodology accurately predict the relevance of a general HRC setup, achieving a precision of 0.99 and a recall of 0.94. Relevance can be broadly applied to several areas in HRC to improve task planning time by 79.56% compared with pure planning for a cereal task, reduce perception latency by up to 26.53% for an object detector, improve HRC safety by up to 13.50%, and reduce the number of inquiries for HRC by 75.36%. A real-world demonstration showcases the relevance framework's ability to intelligently assist humans in everyday tasks.
Citations: 0
iKalibr-RGBD: Partially-Specialized Target-Free Visual-Inertial Spatiotemporal Calibration For RGBDs via Continuous-Time Velocity Estimation
arXiv - CS - Robotics Pub Date: 2024-09-11 DOI: arxiv-2409.07116
Shuolong Chen, Xingxing Li, Shengyu Li, Yuxuan Zhou
Visual-inertial systems have been widely studied and applied in the last two decades, mainly due to their low cost and power consumption, small footprint, and high availability. Such a trend simultaneously leads to a large number of visual-inertial calibration methods being presented, as accurate spatiotemporal parameters between sensors are a prerequisite for visual-inertial fusion. In our previous work, i.e., iKalibr, a continuous-time-based visual-inertial calibration method was proposed as a part of one-shot multi-sensor resilient spatiotemporal calibration. While requiring no artificial target brings considerable convenience, computationally expensive pose estimation is demanded in initialization and batch optimization, limiting its availability. Fortunately, this could be vastly improved for RGBDs with additional depth information, by employing mapping-free ego-velocity estimation instead of mapping-based pose estimation. In this paper, we present the continuous-time ego-velocity estimation-based RGBD-inertial spatiotemporal calibration, termed as iKalibr-RGBD, which is also targetless but computationally efficient. The general pipeline of iKalibr-RGBD is inherited from iKalibr, composed of a rigorous initialization procedure and several continuous-time batch optimizations. The implementation of iKalibr-RGBD is open-sourced at https://github.com/Unsigned-Long/iKalibr to benefit the research community.
Citations: 0
Perceptive Pedipulation with Local Obstacle Avoidance
arXiv - CS - Robotics Pub Date: 2024-09-11 DOI: arxiv-2409.07195
Jonas Stolle, Philip Arm, Mayank Mittal, Marco Hutter
Pedipulation leverages the feet of legged robots for mobile manipulation, eliminating the need for dedicated robotic arms. While previous works have showcased blind and task-specific pedipulation skills, they fail to account for static and dynamic obstacles in the environment. To address this limitation, we introduce a reinforcement learning-based approach to train a whole-body obstacle-aware policy that tracks foot position commands while simultaneously avoiding obstacles. Despite training the policy in only five different static scenarios in simulation, we show that it generalizes to unknown environments with different numbers and types of obstacles. We analyze the performance of our method through a set of simulation experiments and successfully deploy the learned policy on the ANYmal quadruped, demonstrating its capability to follow foot commands while navigating around static and dynamic obstacles.
Citations: 0
FaVoR: Features via Voxel Rendering for Camera Relocalization
arXiv - CS - Robotics Pub Date: 2024-09-11 DOI: arxiv-2409.07571
Vincenzo Polizzi, Marco Cannici, Davide Scaramuzza, Jonathan Kelly
Camera relocalization methods range from dense image alignment to direct camera pose regression from a query image. Among these, sparse feature matching stands out as an efficient, versatile, and generally lightweight approach with numerous applications. However, feature-based methods often struggle with significant viewpoint and appearance changes, leading to matching failures and inaccurate pose estimates. To overcome this limitation, we propose a novel approach that leverages a globally sparse yet locally dense 3D representation of 2D features. By tracking and triangulating landmarks over a sequence of frames, we construct a sparse voxel map optimized to render image patch descriptors observed during tracking. Given an initial pose estimate, we first synthesize descriptors from the voxels using volumetric rendering and then perform feature matching to estimate the camera pose. This methodology enables the generation of descriptors for unseen views, enhancing robustness to view changes. We extensively evaluate our method on the 7-Scenes and Cambridge Landmarks datasets. Our results show that our method significantly outperforms existing state-of-the-art feature representation techniques in indoor environments, achieving up to a 39% improvement in median translation error. Additionally, our approach yields comparable results to other methods for outdoor scenarios while maintaining lower memory and computational costs.
Citations: 0
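The FaVoR pipeline above ultimately relies on sparse descriptor matching between rendered and observed features. As a generic, minimal sketch of that building block — standard nearest-neighbor matching with a Lowe-style ratio test, not FaVoR's voxel renderer, and with hypothetical function and parameter names — the ambiguous-match filter can be written as:

```python
import math

def match_descriptors(query, reference, ratio=0.8):
    """Nearest-neighbor descriptor matching with a Lowe-style ratio
    test: a query descriptor is matched to its closest reference
    descriptor only if that match is clearly better (by `ratio`) than
    the second closest, rejecting ambiguous correspondences.
    Descriptors are plain tuples of floats; `reference` needs >= 2
    entries for the test to apply."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

    matches = []
    for qi, q in enumerate(query):
        order = sorted(range(len(reference)), key=lambda ri: dist(q, reference[ri]))
        best, second = order[0], order[1]
        if dist(q, reference[best]) < ratio * dist(q, reference[second]):
            matches.append((qi, best))
    return matches
```

The surviving correspondences would then feed a pose solver (e.g. PnP with RANSAC in a typical relocalization pipeline); that stage is omitted here.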
Enabling Shared-Control for A Riding Ballbot System
arXiv - CS - Robotics Pub Date: 2024-09-11 DOI: arxiv-2409.07013
Yu Chen, Mahshid Mansouri, Chenzhang Xiao, Ze Wang, Elizabeth T. Hsiao-Wecksler, William R. Norris
This study introduces a shared-control approach for collision avoidance in a self-balancing riding ballbot, called PURE, marked by its dynamic stability, omnidirectional movement, and hands-free interface. Integrated with a sensor array and a novel Passive Artificial Potential Field (PAPF) method, PURE provides intuitive navigation with deceleration assistance and haptic/audio feedback, effectively mitigating collision risks. This approach addresses the limitations of traditional APF methods, such as control oscillations and unnecessary speed reduction in challenging scenarios. A human-robot interaction experiment, with 20 manual wheelchair users and able-bodied individuals, was conducted to evaluate the performance of indoor navigation and obstacle avoidance with the proposed shared-control algorithm. Results indicated that shared-control significantly reduced collisions and cognitive load without affecting travel speed, offering intuitive and safe operation. These findings highlight the shared-control system's suitability for enhancing collision avoidance in self-balancing mobility devices, a relatively unexplored area in assistive mobility research.
Citations: 0
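For context on what the paper's PAPF improves upon: the traditional (Khatib-style) artificial potential field generates a repulsive force that grows as the robot nears an obstacle and vanishes beyond an influence radius. A minimal sketch of that classical repulsive term follows — this is the baseline the abstract criticizes, not the proposed PAPF, and the parameter names `eta` and `d0` are conventional rather than taken from the paper:

```python
import math

def repulsive_force(robot, obstacle, d0=1.0, eta=1.0):
    """Classical APF repulsive force in 2D: the negative gradient of
    U(d) = 0.5 * eta * (1/d - 1/d0)**2 for d < d0, and zero beyond
    the influence distance d0. The returned vector pushes the robot
    away from the obstacle."""
    dx = robot[0] - obstacle[0]
    dy = robot[1] - obstacle[1]
    d = math.hypot(dx, dy)
    if d >= d0 or d == 0.0:
        return (0.0, 0.0)
    mag = eta * (1.0 / d - 1.0 / d0) / d ** 2
    return (mag * dx / d, mag * dy / d)
```

Because this force depends only on distance, it can oscillate in narrow passages and brake the vehicle even when the obstacle is not on the path — exactly the limitations (control oscillations, unnecessary speed reduction) the PAPF method is designed to address.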
Invariant filtering for wheeled vehicle localization with unknown wheel radius and unknown GNSS lever arm
arXiv - CS - Robotics Pub Date: 2024-09-11 DOI: arxiv-2409.07050
Paul Chauchat (AMU SCI, AMU, LIS, DIAPRO), Silvère Bonnabel (CAOR), Axel Barrau
We consider the problem of observer design for a nonholonomic car (more generally a wheeled robot) equipped with wheel speeds with unknown wheel radius, and whose position is measured via a GNSS antenna placed at an unknown position in the car. In a tutorial and unified exposition, we recall the recent theory of two-frame systems within the field of invariant Kalman filtering. We then show how to adapt it geometrically to address the considered problem, although it seems at first sight out of its scope. This yields an invariant extended Kalman filter having autonomous error equations, and state-independent Jacobians, which is shown to work remarkably well in simulations. The proposed novel construction thus extends the application scope of invariant filtering.
Citations: 0
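The core observability idea — that wheel-speed measurements plus GNSS positions jointly reveal the wheel radius — can be illustrated far more simply than the paper's invariant EKF, which estimates the radius online together with the pose and lever arm. Under the rolling-without-slip model d_k = r * theta_k, a batch least-squares estimate of r from encoder increments and GNSS-derived distances is a one-liner; the function name and data are hypothetical:

```python
def estimate_wheel_radius(rotations, distances):
    """Least-squares estimate of the wheel radius r from wheel
    rotation increments theta_k (radians, from encoders) and the
    distances d_k travelled over the same intervals (e.g. from GNSS),
    using the rolling model d_k = r * theta_k. Minimizing
    sum_k (d_k - r*theta_k)**2 gives r = sum(theta*d) / sum(theta**2)."""
    num = sum(t * d for t, d in zip(rotations, distances))
    den = sum(t * t for t in rotations)
    return num / den
```

This batch estimate ignores heading, the lever arm, and noise correlations, all of which the invariant EKF in the paper handles jointly and recursively; the sketch only shows why the radius is identifiable at all.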
Dynamic Fairness Perceptions in Human-Robot Interaction
arXiv - CS - Robotics Pub Date: 2024-09-11 DOI: arxiv-2409.07560
Houston Claure, Kate Candon, Inyoung Shin, Marynel Vázquez
People deeply care about how fairly they are treated by robots. The established paradigm for probing fairness in Human-Robot Interaction (HRI) involves measuring the perception of the fairness of a robot at the conclusion of an interaction. However, such an approach is limited as interactions vary over time, potentially causing changes in fairness perceptions as well. To validate this idea, we conducted a 2x2 user study with a mixed design (N=40) where we investigated two factors: the timing of unfair robot actions (early or late in an interaction) and the beneficiary of those actions (either another robot or the participant). Our results show that fairness judgments are not static. They can shift based on the timing of unfair robot actions. Further, we explored using perceptions of three key factors (reduced welfare, conduct, and moral transgression) proposed by a Fairness Theory from Organizational Justice to predict momentary perceptions of fairness in our study. Interestingly, we found that the reduced welfare and moral transgression factors were better predictors than all factors together. Our findings reinforce the idea that unfair robot behavior can shape perceptions of group dynamics and trust towards a robot and pave the path to future research directions on moment-to-moment fairness perceptions.
Citations: 0
SIS: Seam-Informed Strategy for T-shirt Unfolding
arXiv - CS - Robotics Pub Date: 2024-09-11 DOI: arxiv-2409.06990
Xuzhao Huang, Akira Seino, Fuyuki Tokuda, Akinari Kobayashi, Dayuan Chen, Yasuhisa Hirata, Norman C. Tien, Kazuhiro Kosuge
Seams are information-rich components of garments. The presence of different types of seams and their combinations helps to select grasping points for garment handling. In this paper, we propose a new Seam-Informed Strategy (SIS) for finding actions for handling a garment, such as grasping and unfolding a T-shirt. Candidates for a pair of grasping points for a dual-arm manipulator system are extracted using the proposed Seam Feature Extraction Method (SFEM). A pair of grasping points for the robot system is selected by the proposed Decision Matrix Iteration Method (DMIM). The decision matrix is first computed by multiple human demonstrations and updated by the robot execution results to improve the grasping and unfolding performance of the robot. Note that the proposed scheme is trained on real data without relying on simulation. Experimental results demonstrate the effectiveness of the proposed strategy. The project video is available at https://github.com/lancexz/sis.
Citations: 0
Single-View 3D Reconstruction via SO(2)-Equivariant Gaussian Sculpting Networks
arXiv - CS - Robotics Pub Date: 2024-09-11 DOI: arxiv-2409.07245
Ruihan Xu, Anthony Opipari, Joshua Mah, Stanley Lewis, Haoran Zhang, Hanzhe Guo, Odest Chadwicke Jenkins
This paper introduces SO(2)-Equivariant Gaussian Sculpting Networks (GSNs) as an approach for SO(2)-equivariant 3D object reconstruction from single-view image observations. GSNs take a single observation as input to generate a Gaussian splat representation describing the observed object's geometry and texture. By using a shared feature extractor before decoding Gaussian colors, covariances, positions, and opacities, GSNs achieve extremely high throughput (>150 FPS). Experiments demonstrate that GSNs can be trained efficiently using a multi-view rendering loss and are competitive, in quality, with expensive diffusion-based reconstruction algorithms. The GSN model is validated on multiple benchmark experiments. Moreover, we demonstrate the potential for GSNs to be used within a robotic manipulation pipeline for object-centric grasping.
Citations: 0