Robotics and Autonomous Systems — Latest Articles

A cooperative approach to Range-Only SLAM with undelayed initialization
IF 5.2 · CAS Region 2 · Computer Science
Robotics and Autonomous Systems Pub Date : 2025-10-17 DOI: 10.1016/j.robot.2025.105230
Lorenzo Bianchi, Francesco Martinelli
Abstract: A 2D cooperative Range-Only SLAM problem is considered in this paper. In addition to odometry, available through noisy encoder readings on the actuated wheels, the robots measure the distances to a set of landmarks in unknown positions within the environment, as well as to other robots. Inter-landmark distances are not assumed to be available. The robots start at unknown locations, with their relative positions also assumed unknown. A Multi-Hypotheses Extended Kalman Filter, endowed with a Federated Information Sharing mechanism, is proposed to solve the problem in a computationally efficient way, without any delay in the initialization of landmark and robot position estimates. Simulation and experimental results are reported in the paper to demonstrate the effectiveness of the proposed approach, showing significant improvements in both steady-state and transient performance compared to the single-robot scenario. (Vol. 196, Article 105230)
Citations: 0
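The paper's multi-hypothesis federated filter is not reproduced here, but its basic building block — an EKF update from a single robot-to-landmark range measurement — can be sketched as follows (the flat 2-D state layout, function name, and values are illustrative assumptions, not the authors' implementation):

```python
import math

def ekf_range_update(x, P, z, R, li):
    """EKF update for one range measurement z to a landmark.
    x: state [rx, ry, l1x, l1y, ...]; P: covariance (list of lists);
    R: range-noise variance; li: index of the landmark's x-coordinate in x."""
    n = len(x)
    dx, dy = x[li] - x[0], x[li + 1] - x[1]
    r = math.hypot(dx, dy)                   # predicted range h(x)
    # Jacobian H of h(x): nonzero only for robot and landmark entries
    H = [0.0] * n
    H[0], H[1] = -dx / r, -dy / r
    H[li], H[li + 1] = dx / r, dy / r
    # S = H P H^T + R and Kalman gain K = P H^T / S (scalar measurement)
    PHt = [sum(P[i][j] * H[j] for j in range(n)) for i in range(n)]
    S = sum(H[i] * PHt[i] for i in range(n)) + R
    K = [phi / S for phi in PHt]
    nu = z - r                               # innovation
    x_new = [x[i] + K[i] * nu for i in range(n)]
    P_new = [[P[i][j] - K[i] * PHt[j] for j in range(n)] for i in range(n)]
    return x_new, P_new
```

A single update with a range longer than the predicted one pulls the landmark estimate outward and shrinks its variance, which is the mechanism the multi-robot filter applies hypothesis-by-hypothesis.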
A comprehensive survey of lower limb assistive exoskeleton robots: Models, dynamics, mechanics, and control
Robotics and Autonomous Systems Pub Date : 2025-10-17 DOI: 10.1016/j.robot.2025.105232
Ali Foroutannia, Masoud Mohammadian
Abstract: Movement disorders of the lower limbs caused by stroke lead to abnormal gait and reduced quality of life, so patient rehabilitation is required to improve treatment outcomes. Lower limb exoskeleton robots are used to reduce the burden on therapists during rehabilitation. This comprehensive review addresses the multidisciplinary realm of lower limb assistive exoskeleton robots and extensively explores their models, dynamics, mechanics, sensors, actuators, and control. The survey examines various modeling approaches, including musculoskeletal dynamics, human–robot interaction, and the complex interactions between the mechanical structure of the exoskeleton and the human body. It also covers the dynamics and mechanics underlying design, including kinematics, energy transfer mechanisms, and the innovative application of adaptive mechanisms. The control systems built into these exoskeletons are discussed in detail, including complex control architectures, real-time adaptation, and smooth coordination with the user's movements. The study further examines human factors and user experience, illuminating the important link between technology and human interaction. This survey is intended as a resource for researchers, clinicians, and engineers to help them make progress and generate new ideas in assistive robotics. (Vol. 195, Article 105232)
Citations: 0
Reinforcement learning-driven heuristic path planning method for automated special vehicles in unstructured environment
Robotics and Autonomous Systems Pub Date : 2025-10-14 DOI: 10.1016/j.robot.2025.105231
Fei-xiang Xu, Yan-chen Wang, De-qiang Cheng, Wei-guang An, Chen Zhou, Qi-qi Kou
Abstract: To improve the adaptability of global path planning for Automated Special Vehicles (ASVs) across a variety of unstructured environments, a reinforcement learning (RL)-driven heuristic path planning method is proposed. Introducing a traditional heuristic algorithm avoids the inefficiency of RL in the early learning phase and provides a preliminary planned path for RL to adjust. Furthermore, a reward function is designed based on vehicle dynamics to generate a smooth, stable, and efficient path. The simulation environments are established from real terrain data, and the algorithm's performance is evaluated by testing various starting and ending points across different terrains. This paper also examines how obstacle distributions and ground conditions affect ASV path planning. Results demonstrate that the proposed method generates collision-free, efficient paths while maintaining excellent adaptability to diverse complex terrains. (Vol. 195, Article 105231)
Citations: 0
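The abstract does not specify which traditional heuristic algorithm seeds the RL stage; a minimal grid-based A* planner of the kind commonly used to produce such a preliminary path might look like this (the function name and the 4-connected occupancy grid are assumptions):

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = obstacle).
    Returns a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), 0, start)]
    g = {start: 0}
    parent = {start: None}
    while open_heap:
        _, gc, cur = heapq.heappop(open_heap)
        if cur == goal:                       # reconstruct path
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        if gc > g[cur]:
            continue                          # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (cur[0] + dr, cur[1] + dc)
            if 0 <= nb[0] < rows and 0 <= nb[1] < cols and grid[nb[0]][nb[1]] == 0:
                ng = gc + 1
                if ng < g.get(nb, float("inf")):
                    g[nb] = ng
                    parent[nb] = cur
                    heapq.heappush(open_heap, (ng + h(nb), ng, nb))
    return None
```

A path like this would then serve as the initial solution that the RL policy refines with the dynamics-based reward.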
Finger flow: Reactive reach-while-grasp generation for robotic arms with multi-fingered hands
Robotics and Autonomous Systems Pub Date : 2025-10-14 DOI: 10.1016/j.robot.2025.105222
Xuming Meng, Henry Maurenbrecher, Alin Albu-Schäffer, Manuel Keppler
Abstract: Humans effortlessly grasp both stationary and moving objects in one-shot motions, fluidly adapting to disturbances and automatically recovering from failed attempts. In contrast, robots with multi-fingered hands often rely on pre-planned, sequential "reach-then-grasp" strategies, which result in slow, unnatural motions and restrict the robot's ability to react dynamically to changes in the object's location. Moreover, open-loop execution often leads to grasp failures. To address these challenges, we introduce Finger Flow (FF), a reactive motion generator that uses visual feedback from an onboard camera and position feedback from the fingers and arm to robustly reach and grasp stationary and moving objects with unpredictable behavior. During reaching, FF continuously guides the hand to avoid finger–object collisions and adjusts the hand's reactive opening and closing based on its position relative to the object. This state-dependent behavior yields automatic recovery from failed grasp attempts. We also provide formal guarantees of convergence and collision avoidance for stationary spherical objects. We evaluate FF on the DLR humanoid robot neoDavid, equipped with a multi-fingered hand, and quantitatively assess its performance in a series of grasping experiments involving fast and reactive grasping of a stationary or unpredictably moving object. Running in a closed loop at 3 kHz, FF achieves an 87% grasp success rate on a stationary object placed at random positions over 130 attempts. Interactive and adversarial human-to-robot handover experiments further demonstrate the robustness and effectiveness of FF. (Vol. 195, Article 105222)
Citations: 0
MMSeg: A multimodal multi-scale point cloud segmentation model for navigable areas in complex field environments
Robotics and Autonomous Systems Pub Date : 2025-10-13 DOI: 10.1016/j.robot.2025.105229
Yifang Huang, Hongdou He, Peng Shi, Xiaobing Hao, Haitao He, Pei Miao
Abstract: This study focuses on navigable-area perception for autonomous ground agents in complex field environments, which are unstructured, diverse, and intricate. Current methods predominantly target structured environments, neglecting the unique challenges of unstructured terrain that is critical for applications such as battlefield support and emergency rescue missions. To address this gap, we propose a Multimodal Multi-scale point cloud Segmentation (MMSeg) model with three key contributions. First, we introduce a multimodal ground feature fusion technique that integrates geometric information from LiDAR point clouds with visual texture features from images, enhancing the recognition of heterogeneous ground surfaces. Second, we propose a local–global terrain geometry enhancement method that uses a dual-attention mechanism to capture and analyze both local and global geometric features in complex terrain. Third, we design a multi-scale classifier framework that processes the fused multimodal information on ground materials and terrain structure, enabling precise segmentation of navigable areas. Experiments on a dedicated platform demonstrate that the MMSeg model achieves an mIoU 6% higher than commonly used point cloud segmentation models. These findings suggest that MMSeg significantly enhances the perception capabilities of autonomous ground agents in challenging environments, providing a promising solution for improving their operational effectiveness in complex field conditions. (Vol. 195, Article 105229)
Citations: 0
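The reported metric, mean intersection-over-union (mIoU), is computed per class from a confusion matrix and averaged; a minimal sketch of the standard formula (function name assumed):

```python
def mean_iou(conf):
    """mIoU from a square confusion matrix conf[true][pred].
    IoU_c = TP_c / (TP_c + FP_c + FN_c); classes absent from both
    prediction and ground truth are skipped."""
    n = len(conf)
    ious = []
    for c in range(n):
        tp = conf[c][c]
        fn = sum(conf[c]) - tp                        # missed points of class c
        fp = sum(conf[r][c] for r in range(n)) - tp   # points wrongly labelled c
        denom = tp + fp + fn
        if denom:
            ious.append(tp / denom)
    return sum(ious) / len(ious)
```

Because each class contributes equally regardless of its point count, a 6% mIoU gain typically reflects better handling of the rarer terrain classes.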
Effect of efficiently computed explanations for robot motion planning failures in human–robot interaction
Robotics and Autonomous Systems Pub Date : 2025-10-13 DOI: 10.1016/j.robot.2025.105224
Matthias Eder, Bettina Kubicek, Gerald Steinbauer-Wagner
Abstract: Transparent interaction between an operator and a robotic system is essential for successful task completion. This requires a mutual understanding of decisions and processes in order to provide accurate diagnoses and troubleshooting suggestions in the event of an error. In the motion planning domain, a deep understanding of the decisions made by the system is essential for the successful navigation of a robot in dynamic environments. Due to inaccuracies in environment perception or in the configuration of the motion planner, motion planning incidents can occur that are difficult for the operator to understand. In this work, we present a method that quickly provides explanations for motion planning failures. For optimization-based planners, failures related to planning constraints can be identified using an adaptation of the diagnosis algorithm FastDiag, which provides a preferred minimal diagnosis in logarithmic time, even for large sets of constraints. To evaluate the potential of the proposed method, we conduct a user study investigating the impact of the provided explanations on the operator's performance, trust, and workload. The results show that quickly providing additional explanations for failed motion planning improves task completion time and overall trust in the system, and reduces the number of interactions required. However, no effect was found on perceived workload. (Vol. 195, Article 105224)
Citations: 0
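FastDiag itself (Felfernig et al.) is a published divide-and-conquer diagnosis algorithm; a compact sketch follows, with constraints modeled as numeric intervals and consistency as nonempty intersection — a toy checker standing in for the motion planner's real constraint check, which the paper does not detail here:

```python
def consistent(constraints):
    """Toy checker: each constraint is an interval (lo, hi);
    the set is consistent iff all intervals share a common point."""
    if not constraints:
        return True
    return max(c[0] for c in constraints) <= min(c[1] for c in constraints)

def fastdiag(ac):
    """Preferred minimal diagnosis: a minimal set D (preferring to keep
    later constraints in ac) such that ac minus D is consistent."""
    if consistent(ac):
        return []                       # nothing to diagnose
    def fd(d, c, rest):
        # d: constraints just removed; rest: remaining active constraints
        if d and consistent(rest):
            return []                   # removing d already restored consistency
        if len(c) == 1:
            return list(c)
        k = len(c) // 2
        c1, c2 = c[:k], c[k:]
        d1 = fd(c1, c2, [x for x in rest if x not in c1])
        d2 = fd(d1, c1, [x for x in rest if x not in d1])
        return d1 + d2
    return fd([], ac, ac)
```

The divide-and-conquer structure is what gives the logarithmic number of consistency checks for a single diagnosis, which is why it suits the interactive setting studied in the paper.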
A behavior architecture for fast humanoid robot door traversals
Robotics and Autonomous Systems Pub Date : 2025-10-11 DOI: 10.1016/j.robot.2025.105217
Duncan Calvert, Luigi Penco, Dexton Anderson, Tomasz Bialek, Arghya Chatterjee, Bhavyansh Mishra, Geoffrey Clark, Sylvain Bertrand, Robert Griffin
Abstract: Toward the role of humanoid robots as squad mates in urban operations and other domains, we identified doors as a major gap in capability development. In this paper, we focus on the ability of humanoid robots to navigate and deal with doors. Human-sized doors are ubiquitous in many environments, and the humanoid form factor is uniquely suited to operate and traverse them. We present an architecture that combines GPU-accelerated perception and a tree-based interactive behavior coordination system with a whole-body motion and walking controller. Our system can perform traversals on a variety of door types, supports rapid authoring of behaviors for unseen door types, and provides techniques for reusing those authored behaviors. The behaviors are modeled as trees featuring logical reactivity and action sequences that can be executed with layered concurrency to increase speed. Primitive actions are built on top of our existing whole-body controller, which supports manipulation while walking. The perception system uses both neural networks and classical computer vision for door mechanism detection outside the lab environment. We present operator–robot interdependence analysis charts to explore how human cognition is combined with artificial intelligence to produce complex robot behavior. Finally, we present and discuss real-robot performance of fast door traversals on our Nadia humanoid robot. Videos are online at https://www.youtube.com/playlist?list=PLXuyT8w3JVgMPaB5nWNRNHtqzRK8i68dy. (Vol. 195, Article 105217)
Citations: 0
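The tree-based behavior coordination described above follows the general behavior-tree pattern of reactive sequences and fallbacks; a minimal core is sketched below (the node classes and the door fragment are hypothetical illustrations, not the authors' framework):

```python
SUCCESS, FAILURE, RUNNING = "success", "failure", "running"

class Action:
    """Leaf node wrapping a callable that reads/writes a blackboard dict."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def tick(self, bb):
        return self.fn(bb)

class Sequence:
    """Succeeds only if every child succeeds; re-ticked from the first
    child each cycle, which is what makes the tree reactive."""
    def __init__(self, *children):
        self.children = children
    def tick(self, bb):
        for ch in self.children:
            s = ch.tick(bb)
            if s != SUCCESS:
                return s
        return SUCCESS

class Fallback:
    """Tries children in order until one does not fail."""
    def __init__(self, *children):
        self.children = children
    def tick(self, bb):
        for ch in self.children:
            s = ch.tick(bb)
            if s != FAILURE:
                return s
        return FAILURE

# Hypothetical door-traversal fragment: open the door only if it is
# closed, then walk through.
tree = Sequence(
    Fallback(
        Action("door_open?", lambda bb: SUCCESS if bb["door_open"] else FAILURE),
        Action("open_door", lambda bb: (bb.__setitem__("door_open", True), SUCCESS)[1]),
    ),
    Action("walk_through", lambda bb: SUCCESS),
)
```

Because the whole tree is re-ticked every cycle, a door that swings shut mid-traversal would simply re-trigger the open_door branch — the reactivity property the paper relies on.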
Trajectory control for a quadrotor unmanned aerial vehicle: Adaptive super-twisting terminal sliding mode with adjustable recurrent neural network
Robotics and Autonomous Systems Pub Date : 2025-10-11 DOI: 10.1016/j.robot.2025.105228
Peike Huang, Zhanshan Zhao, Xinghao Qin, Hua Wang
Abstract: This paper explores trajectory control of a quadrotor UAV (Unmanned Aerial Vehicle) in unpredictable environments with external disturbances. We address the challenge of designing a controller when the UAV's mass and inertia are unknown, which makes real-time modeling difficult. To solve this problem, we develop an adjustable recurrent neural network (ARNN) that more accurately approximates the necessary control actions. A conventional RNN poses several problems in UAV control design: its control approximations are insufficiently accurate, it generalizes poorly across tasks and vehicles, and its gradients vanish during training. To improve performance, the ARNN uses a flexible activation function governed by an adjustable parameter, which improves adaptability to different data types and mitigates training problems, together with a refined self-feedback mechanism that increases the accuracy of the control approximation. The overall system combines a super-twisting sliding mode control algorithm with the ARNN. We introduce a new super-twisting algorithm that accelerates convergence and reduces the chattering common to sliding mode controllers through an exponential nonlinear term. Using Lyapunov functions and LaSalle's invariance principle, we show that the method ensures global convergence in finite time. Simulation results confirm the effectiveness and advantages of our approach for UAV trajectory tracking. (Vol. 195, Article 105228)
Citations: 0
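The standard (non-adaptive) super-twisting algorithm that the paper builds on drives a sliding variable s to zero in finite time despite a disturbance with bounded derivative; a minimal simulation sketch, with gains, disturbance, and step size chosen for illustration only:

```python
import math

def sign(x):
    return (x > 0) - (x < 0)

def simulate_super_twisting(s0, k1=2.0, k2=1.1, dt=1e-3, steps=20000):
    """Integrate s' = u + d(t) under the super-twisting control law
        u = -k1 * |s|**0.5 * sign(s) + v,   v' = -k2 * sign(s),
    where d(t) is a disturbance with bounded derivative (|d'| <= 0.2 < k2,
    satisfying the standard gain condition). Returns the final s."""
    s, v = s0, 0.0
    for i in range(steps):
        t = i * dt
        d = 0.4 * math.sin(0.5 * t)                  # bounded disturbance
        u = -k1 * math.sqrt(abs(s)) * sign(s) + v    # continuous term
        v -= k2 * sign(s) * dt                       # integral (twisting) term
        s += (u + d) * dt
    return s
```

The continuous square-root term is what keeps chattering low compared to first-order sliding mode; the paper's variant further shapes this term with an exponential nonlinearity and adapts the gains via the ARNN.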
Smooth motion planning method for mobile robots in dynamic environments
Robotics and Autonomous Systems Pub Date : 2025-10-10 DOI: 10.1016/j.robot.2025.105220
Ana Šelek, Marija Seder
Abstract: Smooth motion planning methods are essential for efficient, safe, real-time navigation of nonholonomic mobile robots in dynamic and complex environments while avoiding obstacles. This paper presents a novel smooth motion planning method combining two main parts: a path planner integrating the Two Way D* algorithm and clothoids, and a trajectory planner. The proposed method generates continuous, collision-free, and near time-optimal trajectories for nonholonomic mobile robots. It runs in real time thanks to its algorithmic simplicity and its ability to rapidly replan paths in the presence of unknown obstacles. Simulation results demonstrate that, compared to similar methods, our method achieves shorter trajectory length, shorter traveling time, and shorter path planning time. We validated its efficiency through experiments on a Husky mobile robot. (Vol. 195, Article 105220)
Citations: 0
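Clothoids are used here because their curvature varies linearly with arc length, so joined segments remain curvature-continuous — exactly what a nonholonomic robot with bounded steering rate needs. A sketch that integrates such a curve numerically (function name and parameters are illustrative):

```python
import math

def clothoid_points(kappa0, c, length, n=1000):
    """Integrate a clothoid starting at the origin with heading 0:
    curvature kappa(s) = kappa0 + c*s, so the heading theta(s) is
    quadratic in arc length and the path is curvature-continuous."""
    ds = length / n
    x = y = theta = 0.0
    pts = [(0.0, 0.0)]
    for i in range(n):
        s = i * ds
        theta += (kappa0 + c * s) * ds   # heading from curvature
        x += math.cos(theta) * ds        # forward Euler on position
        y += math.sin(theta) * ds
        pts.append((x, y))
    return pts
```

With c = 0 this degenerates to a line (kappa0 = 0) or a circular arc, which is how clothoid segments blend straight grid-planner edges into smooth turns.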
OVGrasp: Target-oriented open-vocabulary robotic grasping in clutter
Robotics and Autonomous Systems Pub Date : 2025-10-10 DOI: 10.1016/j.robot.2025.105210
Xiaomei Zhang, Hanyue Ling, Xiao Huang, Qiwen Jin, Jiwei Hu
Abstract: Recognizing and grasping novel-category objects in cluttered environments is a significant challenge for robots due to occlusions and complex object arrangements. In addition, the ability to follow language instructions is crucial for retrieving a target object. In this work, we propose OVGrasp, an open-vocabulary grasping framework that seamlessly integrates vision and language to enhance robotic manipulation. Our approach unifies pretrained vision-language and grasping models, incorporates cross-modality alignment modules to enhance visual-linguistic perception, and uses a multi-scale voxel-based point cloud representation for precise grasp-pose estimation in clutter. By jointly modeling vision, language, and action, OVGrasp eliminates the reliance on predefined object labels and handcrafted rules, enabling more adaptable and efficient grasping. Extensive experiments in both simulation and real-world settings demonstrate that our method achieves a higher task success rate with fewer motions, outperforming state-of-the-art methods under open-vocabulary language instructions in cluttered scenarios. (Vol. 195, Article 105210)
Citations: 0