IEEE Robotics and Automation Letters: Latest Articles

Modular Actuator for Multimodal Proprioceptive and Kinesthetic Feedback of Robotic Hands
IF 4.6 · CAS Zone 2 · Computer Science
IEEE Robotics and Automation Letters · Pub Date: 2025-07-03 · DOI: 10.1109/LRA.2025.3585714
Sungwoo Park; Myo-Taeg Lim; Donghyun Hwang
Abstract: This study addresses the challenge of implementing proprioceptive and kinesthetic (PK) feedback in robotic hands, essential for grasping and manipulation tasks in unstructured environments. We developed a compact modular actuator featuring a low-module, high-transmission-ratio multistage gear mechanism that measures 25 × 10 × 24 mm, weighs only 10 grams, and maintains moderate backdrivability. The actuator provides multimodal PK feedback, capturing position, velocity, current, and torque data, which are critical for performing various grasping and manipulation tasks. To enable precise motion and force control, we introduced a new adaptive velocity estimator and a simplified Reaction Torque Observer (RTOB). Comprehensive experiments demonstrated the actuator's ability to accurately detect the surface shape, roughness, and stiffness of target objects, eliminating the need for additional sensors or space. Experimental results confirmed the actuator's precision, achieving measurement errors of 5.8 mrad for position, 0.19 rad/s for velocity, and 0.011 N·m for torque. These findings highlight the actuator's ability to leverage proprioceptive information, significantly enhancing the functionality and adaptability of robotic hands in diverse and dynamic scenarios.
Vol. 10, No. 8, pp. 8467-8474 · Citations: 0
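The simplified Reaction Torque Observer mentioned in the abstract can be illustrated with a generic first-order disturbance-observer form that estimates reaction torque from motor current and velocity without numerical differentiation. This is a sketch of the general technique, not the authors' implementation; the constants `Kt`, `J`, and `g` are made-up values chosen only to match the paper's torque scale:

```python
class ReactionTorqueObserver:
    """First-order disturbance observer: tau_hat low-pass tracks
    (Kt*i - J*domega/dt), implemented without differentiating omega."""

    def __init__(self, Kt, J, g):
        self.Kt, self.J, self.g = Kt, J, g  # torque const, inertia, cutoff [rad/s]
        self.xi = 0.0                       # internal state: xi = tau_hat + g*J*omega

    def step(self, current, omega, dt):
        # xi_dot = g*(Kt*i + g*J*omega - xi)  =>  tau_hat_dot = g*(tau_dis - tau_hat)
        self.xi += dt * self.g * (self.Kt * current + self.g * self.J * omega - self.xi)
        return self.xi - self.g * self.J * omega


# Sanity check against a simulated motor carrying a constant external torque.
Kt, J, g, dt = 0.05, 1e-4, 200.0, 1e-4
tau_ext, i_cmd = 0.011, 0.5              # motor torque Kt*i_cmd = 0.025 N*m
rtob = ReactionTorqueObserver(Kt, J, g)
omega, tau_hat = 0.0, 0.0
for _ in range(2000):                    # 0.2 s of simulation
    omega += dt * (Kt * i_cmd - tau_ext) / J
    tau_hat = rtob.step(i_cmd, omega, dt)
```

In steady state the discrete fixed point is exactly `Kt*i - J*domega/dt`, so the estimate converges to the external torque without an accelerometer or torque sensor.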
SLIM: A Symmetric, Low-Inertia Manipulator for Constrained, Contact-Rich Spaces
IF 4.6 · CAS Zone 2 · Computer Science
IEEE Robotics and Automation Letters · Pub Date: 2025-07-03 · DOI: 10.1109/LRA.2025.3585712
Rachel Thomasson; Alessandra Bernardini; Hao Li; Chengyi Xing; Amar Hajj-Ahmad; Mark Cutkosky
Abstract: Operation in constrained and cluttered spaces poses a challenge for robotic manipulators, in part due to their bulky link geometry and kinematic limitations in comparison to human hands and arms. To address these limitations, we introduce SLIM, a custom end-effector consisting of a bidirectional hand and an integrated 2-axis wrist. With an opposing thumb that tucks alongside the palm and fingers that bend in both directions, the hand is shaped like an articulated paddle for reaching through gaps and maneuvering in clutter. Series elastic actuation decouples finger inertia from motor inertia, enabling use of small, highly-geared motors for forceful grasps while maintaining a low effective end-point mass. The thumb is mounted on a prismatic axis that adjusts grasp width for large or small objects. We illustrate advantages of the design over conventional solutions with a computed increase in grasp acquisition region, decrease in swept volume when reorienting objects, and reduced end-point mass. SLIM's thin form factor enables faster and more successful teleoperated task completion in constrained environments compared to a conventional parallel-jaw gripper. Additionally, its bidirectional fingers allow demonstrators to complete a sequential picking task more efficiently than with an anthropomorphic hand.
Vol. 10, No. 9, pp. 8682-8689 · Citations: 0
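The series-elastic argument above (small, highly geared motors without the inertia penalty) comes down to two simple relations. The stiffness, ratio, and inertia numbers below are illustrative, not taken from the paper:

```python
def reflected_inertia(J_motor, gear_ratio):
    """Motor inertia seen at the joint grows with the square of the gear ratio."""
    return J_motor * gear_ratio ** 2

def sea_torque(k_spring, q_motor, q_joint):
    """With a series elastic element, joint torque is just spring deflection
    times stiffness; the link never 'feels' the reflected motor inertia,
    which the spring mechanically decouples from the output."""
    return k_spring * (q_motor - q_joint)

# Illustrative numbers: a 1e-6 kg*m^2 rotor behind a 100:1 gearbox reflects
# 0.01 kg*m^2 to the joint -- 10,000x the rotor inertia -- which is why
# high gearing without series elasticity raises effective end-point mass.
J_ref = reflected_inertia(1e-6, 100)   # 0.01 kg*m^2
tau = sea_torque(50.0, 1.05, 1.0)      # 2.5 N*m from 0.05 rad of deflection
```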
A Tip-Flexible Endoscope With Reconfigurable Baseline for Enhanced 3D Perception
IF 4.6 · CAS Zone 2 · Computer Science
IEEE Robotics and Automation Letters · Pub Date: 2025-07-03 · DOI: 10.1109/LRA.2025.3585758
Zhikang Ma; Jianchang Zhao; Xinan Sun; Lizhi Pan; Shuxin Wang; Jinhua Li
Abstract: Stereoscopic endoscopes are widely used in minimally invasive cardiac surgery, providing 3D information of the thoracic cavity through small incisions. However, current high-precision 3D perception methods often reduce the flexibility of the endoscope, limiting its field of view. This study proposes a reconfigurable-baseline tip-flexible endoscope specifically designed for cardiac surgery, offering enhanced 3D perception capability. An anti-symmetric constraint architecture and a depth-driven baseline control method are adopted for high-precision 3D perception. Notably, it can adapt to multi-degree-of-freedom tip-flexible structures and constrained surgical environments without increasing the complexity of algorithms or sensors, thereby providing surgeons with greater operational space. In phantom-based experiments, the experimental group achieved a lower RMSE of 0.41 mm at 60-110 mm, compared to 0.58 mm in the control group. Similar results were observed in ex vivo tests, with RMSEs of 0.40 mm and 0.57 mm, respectively, reinforcing its clinical potential. External parameters remained within acceptable ranges, with the dominant error factor, Δθ, controlled to an RMSE of 0.01428°. These results validate the proposed method and offer a new approach for high-precision 3D perception in minimally invasive surgery.
Vol. 10, No. 8, pp. 8324-8331 · Citations: 0
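The motivation for a reconfigurable baseline follows from standard stereo geometry: depth uncertainty grows quadratically with depth and shrinks with baseline. This sketch uses the textbook first-order relation, not the paper's depth-driven control law, and the focal length and disparity-noise figures are hypothetical (only the depth range and error target echo the reported numbers):

```python
def depth_error(depth, baseline, focal_px, disparity_noise_px):
    """First-order stereo depth uncertainty from Z = f*b/d:
    sigma_Z ~= Z^2 * sigma_d / (f * b)."""
    return depth ** 2 * disparity_noise_px / (focal_px * baseline)

def required_baseline(depth, focal_px, disparity_noise_px, target_err):
    """Smallest baseline meeting a target depth error at a given depth."""
    return depth ** 2 * disparity_noise_px / (focal_px * target_err)

# Hypothetical numbers: at 100 mm working depth, with f = 500 px and 0.2 px
# disparity noise, hitting a 0.4 mm depth error requires a 10 mm baseline.
b = required_baseline(100.0, 500.0, 0.2, 0.4)
```

Because the required baseline scales with Z², a fixed baseline tuned for close range underperforms at the far end of the 60-110 mm range, which is what a reconfigurable baseline addresses.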
SICNav-Diffusion: Safe and Interactive Crowd Navigation With Diffusion Trajectory Predictions
IF 4.6 · CAS Zone 2 · Computer Science
IEEE Robotics and Automation Letters · Pub Date: 2025-07-03 · DOI: 10.1109/LRA.2025.3585713
Sepehr Samavi; Anthony Lem; Fumiaki Sato; Sirui Chen; Qiao Gu; Keijiro Yano; Angela P. Schoellig; Florian Shkurti
Abstract: To navigate crowds without collisions, robots must interact with humans by forecasting their future motion and reacting accordingly. While learning-based prediction models have shown success in generating likely human trajectory predictions, integrating these stochastic models into a robot controller presents several challenges. The controller needs to account for interactive coupling between planned robot motion and human predictions while ensuring both predictions and robot actions are safe (i.e., collision-free). To address these challenges, we present a receding horizon crowd navigation method for single-robot multi-human environments. We first propose a diffusion model to generate joint trajectory predictions for all humans in the scene. We then incorporate these multi-modal predictions into a SICNav Bilevel MPC problem that simultaneously solves for a robot plan (upper level) and acts as a safety filter to refine the predictions for non-collision (lower level). Combining planning and prediction refinement into one bilevel problem ensures that the robot plan and human predictions are coupled. We validate the open-loop trajectory prediction performance of our diffusion model on the commonly used ETH/UCY benchmark and evaluate the closed-loop performance of our robot navigation method in simulation and in extensive real-robot experiments, demonstrating safe, efficient, and reactive robot motion.
Vol. 10, No. 9, pp. 8738-8745 · Citations: 0
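The lower-level "safety filter" idea (refining sampled predictions for non-collision) can be caricatured with a simple geometric projection. The actual method solves a bilevel MPC; this is only a crude stand-in that pushes any predicted human position inside a minimum-separation radius of the planned robot position out to the margin. The radius and array shapes are made up:

```python
import numpy as np

def refine_predictions(preds, robot_plan, r_min=0.5):
    """Project sampled human trajectory predictions so no predicted position
    lies within r_min of the planned robot position at the same timestep,
    by pushing violating points radially out to the margin."""
    diff = preds - robot_plan[None]                      # (K, T, 2)
    dist = np.linalg.norm(diff, axis=-1, keepdims=True)  # (K, T, 1)
    unit = diff / np.maximum(dist, 1e-9)
    projected = robot_plan[None] + r_min * unit
    return np.where(dist < r_min, projected, preds)

# K = 8 sampled predictions over a T = 12 step horizon, robot held at origin.
rng = np.random.default_rng(0)
preds = rng.normal(0.0, 0.4, size=(8, 12, 2))
robot = np.zeros((12, 2))
refined = refine_predictions(preds, robot)
```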
SAGA-SLAM: Scale-Adaptive 3D Gaussian Splatting for Visual SLAM
IF 4.6 · CAS Zone 2 · Computer Science
IEEE Robotics and Automation Letters · Pub Date: 2025-07-03 · DOI: 10.1109/LRA.2025.3585756
Kun Park; Seung-Woo Seo
Abstract: 3D Gaussian Splatting (3DGS) has recently emerged as a powerful technique for representing 3D scenes. Its superior high-fidelity rendering quality and speed have driven its rapid adoption in many applications. Among them, Visual Simultaneous Localization and Mapping (VSLAM) is the most prominent application, as it requires real-time simultaneous mapping and position tracking of navigating objects. However, from our comprehensive study, we observed a fundamental hurdle in directly applying the current 3DGS technique to VSLAM, which we define as the scale adaptation problem. The scale adaptation problem refers to the inability of existing 3DGS-based SLAM methods to address varying scales, specifically the extent of camera pose difference from the perspective of tracking, and environmental size in terms of mapping and the addition of new 3D Gaussians. To overcome this limitation, we propose SAGA-SLAM, the first scale-adaptive RGB-D dense SLAM framework based on 3DGS. We optimize the tracking and mapping stages robustly over various scales by utilizing the Polyak step size and momentum. Additionally, we present a Gaussian fission method to address the scale problem during the addition of 3D Gaussians. Experiments show that our method achieves state-of-the-art results robustly at both large and small scales, on datasets such as KITTI, Replica, and TUM-RGBD. By adapting without the need for hyperparameter tuning, our method demonstrates both superior performance and practical applicability.
Vol. 10, No. 8, pp. 8268-8275 · Citations: 0
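The appeal of the Polyak step size for "adapting without hyperparameter tuning" is that the step length is derived from the current objective value rather than hand-picked. A minimal sketch on a toy quadratic (not the paper's photometric objective) shows the same code handling objectives whose scales differ by six orders of magnitude:

```python
import numpy as np

def polyak_gd(f, grad_f, x0, f_star=0.0, iters=100):
    """Gradient descent with the Polyak step size
    eta_t = (f(x_t) - f_star) / ||grad f(x_t)||^2,
    which self-scales to the local magnitude of the objective."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad_f(x)
        g2 = float(g @ g)
        if g2 < 1e-18:          # gradient vanished: done
            break
        x = x - (f(x) - f_star) / g2 * g
    return x

# Same solver, no step-size tuning, objectives scaled by 1e-3 vs 1e3:
sols = []
for c in (1e-3, 1e3):
    sols.append(polyak_gd(lambda x: 0.5 * c * (x @ x),
                          lambda x: c * x,
                          [3.0, -2.0]))
```

For the quadratic 0.5·c·‖x‖², the Polyak step works out to 1/(2c), so the iterate halves every step regardless of c, which is exactly the scale invariance the abstract appeals to.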
Controller Adaptation via Learning Solutions of Contextual Bayesian Optimization
IF 4.6 · CAS Zone 2 · Computer Science
IEEE Robotics and Automation Letters · Pub Date: 2025-07-03 · DOI: 10.1109/LRA.2025.3585716
Viet-Anh Le; Andreas A. Malikopoulos
Abstract: In this work, we propose a framework for adapting the controller's parameters based on learning optimal solutions from contextual black-box optimization problems. We consider a class of control design problems for dynamical systems operating in different environments or conditions represented by contextual parameters. The overarching goal is to identify the controller parameters that maximize the controlled system's performance, given different realizations of the contextual parameters. We formulate a contextual Bayesian optimization problem in which the solution is actively learned using Gaussian processes to approximate the controller adaptation strategy. We demonstrate the efficacy of the proposed framework with a sim-to-real example. We learn the optimal weighting strategy of a model predictive control for connected and automated vehicles interacting with human-driven vehicles from simulations and then deploy it in a real-time experiment.
Vol. 10, No. 8, pp. 8308-8315 · Citations: 0
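Contextual Bayesian optimization can be sketched with a tiny Gaussian process over joint (context, parameter) inputs and a UCB rule for choosing the parameter each round. Everything here is hypothetical: the performance function, kernel length scale, context set, and grid are invented for illustration and bear no relation to the paper's MPC weighting problem:

```python
import numpy as np

def rbf(A, B, ls=0.25):
    """Squared-exponential kernel over joint (context, parameter) rows."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_predict(X, y, Xq, noise=1e-4):
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xq, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = np.clip(1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T)),
                  1e-12, None)
    return mu, var

def perf(c, th):                        # hypothetical closed-loop score:
    return -(th - 0.5 * c) ** 2         # the best parameter depends on context

thetas = np.linspace(0.0, 1.0, 41)
contexts = [0.2, 0.5, 0.8]
X, y = [], []
for t in range(30):
    c = contexts[t % 3]                 # context revealed by the environment
    if t < 3:
        th = 0.5                        # one seed evaluation per context
    else:
        Z = np.column_stack([np.full_like(thetas, c), thetas])
        mu, var = gp_predict(np.array(X), np.array(y), Z)
        th = thetas[np.argmax(mu + 2.0 * np.sqrt(var))]   # UCB acquisition
    X.append([c, th]); y.append(perf(c, th))

# The adaptation strategy: for a query context, read the posterior-mean maximizer.
Zq = np.column_stack([np.full_like(thetas, 0.5), thetas])
mu, _ = gp_predict(np.array(X), np.array(y), Zq)
theta_rec = float(thetas[np.argmax(mu)])
```

The key point the abstract makes is that the GP is fit over contexts jointly, so evaluations under one context inform the recommendation for nearby contexts.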
Improving Trust Estimation in Human-Robot Collaboration Using Beta Reputation at Fine-Grained Timescales
IF 4.6 · CAS Zone 2 · Computer Science
IEEE Robotics and Automation Letters · Pub Date: 2025-07-03 · DOI: 10.1109/LRA.2025.3585653
Resul Dagdanov; Milan Andrejević; Dikai Liu; Chin-Teng Lin
Abstract: When interacting with each other, humans adjust their behavior based on perceived trust. To achieve similar adaptability, robots must accurately estimate human trust at sufficiently granular timescales while collaborating with humans. Beta reputation is a popular way to formalize a mathematical estimation of human trust. However, it relies on binary performance, which updates trust estimations only after each task concludes. Additionally, manually crafting a reward function is the usual method of building a performance indicator, which is labor-intensive and time-consuming. These limitations prevent efficient capture of continuous trust changes at more granular timescales throughout the collaboration task. Therefore, this letter presents a new framework for the estimation of human trust using beta reputation at fine-grained timescales. To achieve granularity in beta reputation, we utilize continuous reward values to update trust estimates at each timestep of a task. We construct a continuous reward function using maximum entropy optimization to eliminate the need for the laborious specification of a performance indicator. The proposed framework improves trust estimation by increasing accuracy and eliminating the need to manually craft a reward function, advancing toward the development of more intelligent robots.
Vol. 10, No. 8, pp. 8562-8569 · Citations: 0
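The core move, replacing the once-per-task binary Beta update with fractional per-timestep evidence, is easy to sketch. This is a generic beta-reputation update, not the paper's exact formulation, and the forgetting factor is a common extension added here for illustration:

```python
class BetaTrustEstimator:
    """Beta-reputation trust with continuous per-timestep rewards.

    Instead of adding 1 to alpha (success) or beta (failure) once per task,
    each timestep adds a fractional reward r in [0, 1], so the trust
    estimate alpha/(alpha+beta) moves continuously during a task.
    A forgetting factor (an assumption here) discounts stale evidence."""

    def __init__(self, alpha=1.0, beta=1.0, forgetting=0.99):
        self.alpha, self.beta, self.lam = alpha, beta, forgetting

    def update(self, reward):
        r = min(max(reward, 0.0), 1.0)
        self.alpha = self.lam * self.alpha + r
        self.beta = self.lam * self.beta + (1.0 - r)
        return self.trust()

    def trust(self):
        return self.alpha / (self.alpha + self.beta)


# Trust rises smoothly under sustained high rewards, then decays under low ones.
est = BetaTrustEstimator()
for _ in range(200):
    est.update(0.9)
high = est.trust()
for _ in range(200):
    est.update(0.1)
low = est.trust()
```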
A Proximity-Based Framework for Human-Robot Seamless Close Interactions
IF 4.6 · CAS Zone 2 · Computer Science
IEEE Robotics and Automation Letters · Pub Date: 2025-07-03 · DOI: 10.1109/LRA.2025.3585762
Liana Bertoni; Lorenzo Baccelliere; Luca Muratore; Nikos G. Tsagarakis
Abstract: The administration and monitoring of shared workspaces are crucial for seamlessly integrating robots to operate in close interaction with humans. Adaptive, versatile, and reliable robot movements are key to achieving effective and successful human-robot synergy. In situations involving unexpected or unintended collisions, robots must react appropriately to minimize risks to humans while still staying focused on their primary tasks or safely resuming them. Although collision detection and identification algorithms are well-established, more advanced robot reactions beyond basic stop-and-wait behaviors have not yet been widely adopted and understood. This limitation highlights the need for more sophisticated robot responses to better handle complex collision scenarios, ensuring both safety and task continuity. This letter introduces a complete robotic system that leverages on-board proximity sensing to seamlessly provide appropriate robot reactions during close interaction. With on-board distributed proximity sensors, the robot gains continuous awareness of its close workspace, facilitating a transparent negotiation of potential collisions while executing tasks. The proposed system and framework are validated in a collaborative industrial task scenario composed of sub-tasks allocated to the human and the robot and performed within shared regions of the workspace, demonstrating the efficacy of the approach.
Vol. 10, No. 8, pp. 8514-8521 · Citations: 0
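A minimal stand-in for mapping distributed proximity readings to a pre-collision reaction is a potential-field-style repulsive velocity: sensors that see an object inside a safety margin each contribute a push away from it. The paper's reaction strategy is richer; the margin and gain below are invented:

```python
import numpy as np

def repulsive_velocity(readings, d_safe=0.15, gain=0.5):
    """Map proximity readings to an avoidance velocity command.

    `readings` is a list of (distance, direction) pairs, where `direction`
    is the unit vector from the robot surface toward the detected object.
    Sensors closer than d_safe contribute a velocity pushing away, scaled
    by how far the object has intruded into the safety margin."""
    v = np.zeros(3)
    for dist, direction in readings:
        if dist < d_safe:
            v -= gain * (d_safe - dist) / d_safe * np.asarray(direction)
    return v

# One close obstacle along +x (0.05 m) and one outside the margin (0.5 m):
v = repulsive_velocity([(0.05, [1.0, 0.0, 0.0]),
                        (0.50, [0.0, 1.0, 0.0])])
```

Because contributions sum, multiple sensors firing at once yield a blended retreat direction rather than a hard stop, which is the kind of continuity the abstract contrasts with stop-and-wait reactions.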
TeTRA-VPR: A Ternary Transformer Approach for Compact Visual Place Recognition
IF 4.6 · CAS Zone 2 · Computer Science
IEEE Robotics and Automation Letters · Pub Date: 2025-07-03 · DOI: 10.1109/LRA.2025.3585715
Oliver Grainge; Michael J. Milford; Indu Bodala; Sarvapali D. Ramchurn; Shoaib Ehsan
Abstract: Visual Place Recognition (VPR) localizes a query image by matching it against a database of geo-tagged reference images, making it essential for navigation and mapping in robotics. Although Vision Transformer (ViT) solutions deliver high accuracy, their large models often exceed the memory and compute budgets of resource-constrained platforms such as drones and mobile robots. To address this issue, we propose TeTRA, a ternary transformer approach that progressively quantizes the ViT backbone to 2-bit precision and binarizes its final embedding layer, offering substantial reductions in model size and latency. A carefully designed progressive distillation strategy preserves the representational power of a full-precision teacher, allowing TeTRA to retain or even surpass the accuracy of uncompressed convolutional counterparts, despite using fewer resources. Experiments on standard VPR benchmarks demonstrate that TeTRA reduces memory consumption by up to 69% compared to efficient baselines, while lowering inference latency by 35%, with either no loss or a slight improvement in recall@1. These gains enable high-accuracy VPR on power-constrained, memory-limited robotic platforms, making TeTRA an appealing solution for real-world deployment.
Vol. 10, No. 8, pp. 8396-8403 · Citations: 0
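The ternarization step can be illustrated with the classic Ternary Weight Networks heuristic: threshold at a fraction of the mean absolute weight, then pick one scale per tensor to fit the surviving weights. Whether TeTRA uses this exact rule is not stated in the abstract; this is a generic sketch:

```python
import numpy as np

def ternarize(W, delta_ratio=0.7):
    """Ternary weight quantization in the style of Ternary Weight Networks:
    map each weight to {-1, 0, +1} * alpha, zeroing weights below a
    threshold delta and choosing alpha as the mean magnitude of the rest."""
    delta = delta_ratio * np.abs(W).mean()
    mask = np.abs(W) > delta
    alpha = np.abs(W[mask]).mean() if mask.any() else 0.0
    T = (np.sign(W) * mask).astype(np.int8)   # ternary codes in {-1, 0, +1}
    return T, float(alpha)

# Each weight now needs 2 bits (plus one float scale per tensor), and the
# dequantized tensor alpha*T still tracks the original weights.
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))
T, alpha = ternarize(W)
```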
DexMGNet: Multi-Mode Dexterous Grasping in Cluttered Scenes With Generative Models
IF 4.6 · CAS Zone 2 · Computer Science
IEEE Robotics and Automation Letters · Pub Date: 2025-07-03 · DOI: 10.1109/LRA.2025.3585761
Zongwu Xie; Guanghu Xie; Yang Liu; Yonglong Zhang; Baoshi Cao; Yiming Ji; Zhengpu Wang; Hong Liu
Abstract: Dexterous grasping is a crucial technique in humanoid robot manipulation. However, existing methods still fall short in effectively detecting dexterous grasps in cluttered environments. In this work, we propose DexMGNet, a novel multi-mode dexterous grasping framework designed for such challenging scenarios. We introduce the concept of pre-grasping and redefine dexterous grasping to enhance adaptability. We propose an effective pre-grasp and grasp data sampling strategy and develop a conditional generative model for grasp and pre-grasp generation. Additionally, we integrate pre-grasp collision detection within the hand's workspace, significantly improving grasping performance in cluttered environments. Our method supports multi-mode grasping, including two-finger, three-finger, and four-finger grasps, enabling greater flexibility across diverse grasping tasks. In real-world desktop grasping experiments, our approach achieves a 93.3% success rate in single-object scenes and a 78.3% success rate in multi-object scenes, demonstrating its effectiveness and superiority.
Vol. 10, No. 8, pp. 8483-8490 · Citations: 0
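A basic form of pre-grasp collision checking in clutter is a point-to-point clearance test between sampled hand-surface points and the scene point cloud. The paper's check operates within the hand's workspace and is surely more involved; this distance-threshold sketch, with an invented clearance value, only shows the shape of the computation:

```python
import numpy as np

def pregrasp_collides(finger_points, obstacle_points, clearance=0.01):
    """Flag a pre-grasp hand pose as colliding if any scene point lies
    within `clearance` of any sampled point on the hand surface."""
    d = np.linalg.norm(finger_points[:, None, :] - obstacle_points[None, :, :],
                       axis=-1)                 # pairwise distances (F, O)
    return bool((d < clearance).any())

# Two sample points on a finger, checked against a far and a near obstacle.
finger = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.05]])
clear = pregrasp_collides(finger, np.array([[1.0, 1.0, 1.0]]))
hit = pregrasp_collides(finger, np.array([[0.0, 0.0, 0.055]]))
```

Candidate pre-grasps from the generative model that fail this test would be rejected before execution, which is how the collision check improves success rates in clutter.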