IEEE Transactions on Robotics: Latest Publications

Double Oracle Algorithm for Game-Theoretic Robot Allocation on Graphs
IF 9.4 | Q1 | Computer Science
IEEE Transactions on Robotics | Pub Date: 2025-03-06 | DOI: 10.1109/TRO.2025.3567506
Zijian An;Lifeng Zhou
{"title":"Double Oracle Algorithm for Game-Theoretic Robot Allocation on Graphs","authors":"Zijian An;Lifeng Zhou","doi":"10.1109/TRO.2025.3567506","DOIUrl":"10.1109/TRO.2025.3567506","url":null,"abstract":"In this article, we study the problem of game-theoretic robot allocation where two players strategically allocate robots to compete for multiple sites of interest. Robots possess offensive or defensive capabilities to interfere and weaken their opponents to take over a competing site. This problem belongs to the conventional an acronym colonel blotto game (CBG). Considering the robots' heterogeneous capabilities and environmental factors, we generalize the conventional Blotto game by incorporating heterogeneous robot types and graph constraints that capture the robot transitions between sites. Then, we employ the double oracle algorithm (DOA) to solve for the Nash equilibrium of the generalized Blotto game. Particularly, for cyclic-dominance-heterogeneous (CDH) robots that inhibit each other, we define a new transformation rule between any two robot types. Building on the transformation, we design a novel utility function to measure the game's outcome quantitatively. Moreover, we rigorously prove the correctness of the designed utility function. Finally, we conduct extensive simulations to demonstrate the effectiveness of DOA on computing Nash equilibrium for homogeneous, linear heterogeneous, and CDH robot allocation on graphs.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"3244-3259"},"PeriodicalIF":9.4,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143915326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Autonomous Flights Inside Narrow Tunnels
IF 9.4 | Q1 | Computer Science
IEEE Transactions on Robotics | Pub Date: 2025-03-06 | DOI: 10.1109/TRO.2025.3548525
Luqi Wang;Yan Ning;Hongming Chen;Peize Liu;Yang Xu;Hao Xu;Ximin Lyu;Shaojie Shen
{"title":"Autonomous Flights Inside Narrow Tunnels","authors":"Luqi Wang;Yan Ning;Hongming Chen;Peize Liu;Yang Xu;Hao Xu;Ximin Lyu;Shaojie Shen","doi":"10.1109/TRO.2025.3548525","DOIUrl":"10.1109/TRO.2025.3548525","url":null,"abstract":"Multirotors are usually desired to enter confined narrow tunnels that are barely accessible to humans in various applications including inspection, search and rescue, and so on. This task is extremely challenging since the lack of geometric features and illuminations, together with the limited field of view, cause problems in perception; the restricted space and significant ego airflow disturbances induce control issues. This article introduces an autonomous aerial system designed for navigation through tunnels as narrow as 0.5 m in diameter. The real-time and online system includes a virtual omni-directional perception module tailored for the mission and a novel motion planner that incorporates perception and ego airflow disturbance factors modeled using camera projections and computational fluid dynamics analyses, respectively. Extensive flight experiments on a custom-designed quadrotor are conducted in multiple realistic narrow tunnels to validate the superior performance of the system, even over human pilots, proving its potential for real applications. In addition, a deployment pipeline on other multirotor platforms is outlined and open-source packages are provided for future developments.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"2230-2250"},"PeriodicalIF":9.4,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143570476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
FlowSight: Vision-Based Artificial Lateral Line Sensor for Water Flow Perception
IF 9.4 | Q1 | Computer Science
IEEE Transactions on Robotics | Pub Date: 2025-03-06 | DOI: 10.1109/TRO.2025.3567551
Tiandong Zhang;Rui Wang;Qiyuan Cao;Shaowei Cui;Gang Zheng;Shuo Wang
{"title":"FlowSight: Vision-Based Artificial Lateral Line Sensor for Water Flow Perception","authors":"Tiandong Zhang;Rui Wang;Qiyuan Cao;Shaowei Cui;Gang Zheng;Shuo Wang","doi":"10.1109/TRO.2025.3567551","DOIUrl":"10.1109/TRO.2025.3567551","url":null,"abstract":"This article presents a novel vision-based artificial lateral line (ALL) sensor, FlowSight, enhancing the perception capabilities of underwater robots. Through an autonomous vision system, FlowSight allows for simultaneous sensing the speed and direction of local water flow without relying on external auxiliary equipment. Inspired by the lateral line neuromast of fish, a flexible bionic tentacle is designed to sense water flow. Deformation and motion characteristics of the tentacle are modeled and analyzed using bidirectional fluid-structure interaction (FSI) simulation. Upon contact with water flow, the tentacle converts water flow information into elastic deformation information, which is captured and processed into an image sequence by the autonomous vision system. Subsequently, a water flow perception method based on deep neural networks is proposed to estimate the flow speed and direction from the captured image sequence. The perception network is trained and tested using data collected from practical experiments conducted in a controllable swim tunnel. Finally, the FlowSight sensor is integrated into the bionic underwater robot RoboDact, and a closed-loop motion control experiment based on water flow perception is conducted. Experiments conducted in the swim tunnel and water pool demonstrate the feasibility and effectiveness of FlowSight sensor and the water flow perception method.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"3260-3277"},"PeriodicalIF":9.4,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143915330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Safe Reinforcement Learning on the Constraint Manifold: Theory and Applications
IF 9.4 | Q1 | Computer Science
IEEE Transactions on Robotics | Pub Date: 2025-03-06 | DOI: 10.1109/TRO.2025.3567477
Puze Liu;Haitham Bou-Ammar;Jan Peters;Davide Tateo
{"title":"Safe Reinforcement Learning on the Constraint Manifold: Theory and Applications","authors":"Puze Liu;Haitham Bou-Ammar;Jan Peters;Davide Tateo","doi":"10.1109/TRO.2025.3567477","DOIUrl":"10.1109/TRO.2025.3567477","url":null,"abstract":"Integrating learning-based techniques, especially reinforcement learning, into robotics is promising for solving complex problems in unstructured environments. Most of the existing approaches rely on training in carefully calibrated simulators before being deployed on real robots, often without real-world fine-tuning. While effective in controlled settings, this framework falls short in applications where precise simulation is unavailable or the environment is too complex to model. Instead, on-robot learning, which learns by interacting directly with the real world, offers a promising alternative. One major problem for on-robot reinforcement learning is ensuring safety, as uncontrolled exploration can cause catastrophic damage to the robot or the environment. Indeed, safety specifications, often represented as constraints, can be complex and nonlinear, making safety challenging to guarantee in learning systems. In this article, we show how we can impose complex safety constraints on learning-based robotics systems in a principled manner, both from theoretical and practical points of view. Our approach is based on the concept of the constraint manifold, representing the set of safe robot configurations. Exploiting differential geometry techniques, i.e., the tangent space, we can construct a safe action space, allowing learning agents to sample arbitrary actions while ensuring safety. We demonstrate the method's effectiveness in a real-world robot air hockey task, showing that our method can handle high-dimensional tasks with complex constraints.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"3442-3461"},"PeriodicalIF":9.4,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143915323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CornerVINS: Accurate Localization and Layout Mapping for Structural Environments Leveraging Hierarchical Geometric Representations
IF 9.4 | Q1 | Computer Science
IEEE Transactions on Robotics | Pub Date: 2025-03-06 | DOI: 10.1109/TRO.2025.3567532
Yidi Zhang;Fulin Tang;Yihong Wu
{"title":"CornerVINS: Accurate Localization and Layout Mapping for Structural Environments Leveraging Hierarchical Geometric Representations","authors":"Yidi Zhang;Fulin Tang;Yihong Wu","doi":"10.1109/TRO.2025.3567532","DOIUrl":"10.1109/TRO.2025.3567532","url":null,"abstract":"A compact and consistent map of surroundings is critical for intelligent robots to understand their situations and realize robust navigation. Most existing techniques rely on infinite planes, which are sensitive to pose drift and may lead to confusing maps. Toward high-level perception in indoor environments, we propose CornerVINS, an innovative RGB-D inertial localization and layout mapping method leveraging hierarchical geometric features, i.e., points, planes, and box corners. Specifically, points are enhanced by fusing depth information, and planes are modeled as bounded patches using convex hulls to increase their discriminability. More importantly, box corners, lying at the intersection of three orthogonal planes, are parameterized with a 6-D vector and integrated into the extended Kalman filter for the first time. We introduce a hierarchical mechanism to effectively extract and associate planes and corners, which are considered as layout components of scenes and serve as long-term landmarks to correct camera poses. Extensive experiments prove that the proposed box corners bring significant improvements, enabling accurate localization and consistent layout mapping at low computational cost. Overall, the proposed CornerVINS outperforms state-of-the-art systems in both accuracy and efficiency.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"3500-3517"},"PeriodicalIF":9.4,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143915324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
From Extended Environment Perception Toward Real-Time Dynamic Modeling for Long-Range Underwater Robot
IF 9.4 | Q1 | Computer Science
IEEE Transactions on Robotics | Pub Date: 2025-03-06 | DOI: 10.1109/TRO.2025.3567531
Lei Lei;Yu Zhou;Jianxing Zhang
{"title":"From Extended Environment Perception Toward Real-Time Dynamic Modeling for Long-Range Underwater Robot","authors":"Lei Lei;Yu Zhou;Jianxing Zhang","doi":"10.1109/TRO.2025.3567531","DOIUrl":"10.1109/TRO.2025.3567531","url":null,"abstract":"Underwater robots are critical observation platforms for diverse ocean environments. However, existing robotic designs often lack long-range and deep-sea observation capabilities and overlook the effects of environmental uncertainties on robotic operations. This article presents a novel long-range underwater robot for extreme ocean environments, featuring a low-power dual-circuit buoyancy adjustment system, an efficient mass-based attitude adjustment system, flying wings, and an open sensor cabin. After that, an extended environment perception strategy with incremental updating is proposed to understand and predict full hydrological dynamics based on sparse observations. On this basis, a real-time dynamic modeling approach integrates multibody dynamics, perceived hydrological dynamics, and environment-robot interactions to provide accurate dynamics predictions and enhance motion efficiency. Extensive simulations and field experiments covering 600 km validated the reliability and autonomy of the robot in long-range ocean observations, highlighting the accuracy of the extended perception and real-time dynamics modeling methods.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"3423-3441"},"PeriodicalIF":9.4,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143915327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Autonomous Tomato Harvesting With Top–Down Fusion Network for Limited Data
IF 9.4 | Q1 | Computer Science
IEEE Transactions on Robotics | Pub Date: 2025-03-06 | DOI: 10.1109/TRO.2025.3567544
Xingxu Li;Yiheng Han;Nan Ma;Yongjin Liu;Jia Pan;Shun Yang;Siyi Zheng
{"title":"Autonomous Tomato Harvesting With Top–Down Fusion Network for Limited Data","authors":"Xingxu Li;Yiheng Han;Nan Ma;Yongjin Liu;Jia Pan;Shun Yang;Siyi Zheng","doi":"10.1109/TRO.2025.3567544","DOIUrl":"10.1109/TRO.2025.3567544","url":null,"abstract":"Using robots for tomato truss harvesting represents a promising approach to agricultural production. However, incomplete acquisition of perception information and clumsy operations often results in low harvest success rates or crop damage. To addressthis issue, we designed a new method for tomato truss perception, an autonomous harvesting method, and a novel circular rotary cutting end-effector. The robot performs object detection and keypoint detection on tomato trusses using the proposed top–down fusion network, making decisions on suitable targets for harvesting based on phenotyping and pose estimation. The designed end-effector moves gradually from the bottom up to wrap around the tomato truss, cutting the peduncle to complete the harvest. Experiments conducted in real-world scenarios for robotic perception and autonomous harvesting of tomato trusses show that the proposed method increases accuracy by up to 11.42% and 22.29% for complete and limited dataset conditions, compared to baseline models. Furthermore, we have implemented an automatic tomato harvesting system based on TDFNet, which reaches an average harvest success rate of 89.58% in the greenhouse.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"3609-3628"},"PeriodicalIF":9.4,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143915334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CURE: Simulation-Augmented Autotuning in Robotics
IF 9.4 | Q1 | Computer Science
IEEE Transactions on Robotics | Pub Date: 2025-03-06 | DOI: 10.1109/TRO.2025.3548546
Md Abir Hossen;Sonam Kharade;Jason M. O'Kane;Bradley Schmerl;David Garlan;Pooyan Jamshidi
{"title":"CURE: Simulation-Augmented Autotuning in Robotics","authors":"Md Abir Hossen;Sonam Kharade;Jason M. O'Kane;Bradley Schmerl;David Garlan;Pooyan Jamshidi","doi":"10.1109/TRO.2025.3548546","DOIUrl":"10.1109/TRO.2025.3548546","url":null,"abstract":"Robotic systems are typically composed of various subsystems, such as localization and navigation, each encompassing numerous configurable components (e.g., selecting different planning algorithms). Once an algorithm has been selected for a component, its associated configuration options must be set to the appropriate values. Configuration options across the system stack interact nontrivially. Finding optimal configurations for highly configurable robots to achieve desired performance poses a significant challenge due to the interactions between configuration options across software and hardware that result in an exponentially large and complex configuration space. These challenges are further compounded by the need for transferability between different environments and robotic platforms. Data efficient optimization algorithms (e.g., Bayesian optimization) have been increasingly employed to automate the tuning of configurable parameters in cyber-physical systems. However, such optimization algorithms converge at later stages, often after exhausting the allocated budget (e.g., optimization steps, allotted time) and lacking transferability. This article proposes causal understanding and remediation for enhancing robot performance (<monospace>CURE</monospace>)—a method that identifies causally relevant configuration options, enabling the optimization process to operate in a reduced search space, thereby enabling faster optimization of robot performance. <monospace>CURE</monospace> abstracts the causal relationships between various configuration options and the robot performance objectives by learning a causal model in the source (a low-cost environment such as the Gazebo simulator) and applying the learned knowledge to perform optimization in the target (e.g., <italic>Turtlebot 3</i> physical robot). We demonstrate the effectiveness and transferability of <monospace>CURE</monospace> by conducting experiments that involve varying degrees of deployment changes in both physical robots and simulation.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"2825-2842"},"PeriodicalIF":9.4,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143570478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Leveraging Geometric Modeling-Based Computer Vision for Context Aware Control in a Hip Exosuit
IF 9.4 | Q1 | Computer Science
IEEE Transactions on Robotics | Pub Date: 2025-03-06 | DOI: 10.1109/TRO.2025.3567489
Enrica Tricomi;Giuseppe Piccolo;Federica Russo;Xiaohui Zhang;Francesco Missiroli;Sandro Ferrari;Letizia Gionfrida;Fanny Ficuciello;Michele Xiloyannis;Lorenzo Masia
{"title":"Leveraging Geometric Modeling-Based Computer Vision for Context Aware Control in a Hip Exosuit","authors":"Enrica Tricomi;Giuseppe Piccolo;Federica Russo;Xiaohui Zhang;Francesco Missiroli;Sandro Ferrari;Letizia Gionfrida;Fanny Ficuciello;Michele Xiloyannis;Lorenzo Masia","doi":"10.1109/TRO.2025.3567489","DOIUrl":"10.1109/TRO.2025.3567489","url":null,"abstract":"Human beings adapt their motor patterns in response to their surroundings, utilizing sensory modalities such as visual inputs. This context-informed adaptive motor behavior has increased interest in integrating computer vision (CV) algorithms into robotic assistive technologies, marking a shift toward <italic>context aware control</i>. However, such integration has rarely been achieved so far, with current methods mostly relying on data-driven approaches. In this study, we introduce a novel control framework for a soft hip exosuit, employing instead a physics-informed CV method grounded on geometric modeling of the captured scene for assistance tuning during stairs and level walking. This approach promises to provide a viable solution that is more computationally efficient and does not depend on training examples. Evaluating the controller with six subjects on a path comprising level walking and stairs, we achieved an overall detection accuracy of <inline-formula><tex-math>$93.0pm 1.1%$</tex-math></inline-formula>. CV-based assistance provided significantly greater metabolic benefits compared to non-vision-based assistance, with larger energy reductions relative to being unassisted during stair ascent (<inline-formula><tex-math>$-18.9 pm 4.1%$</tex-math></inline-formula> versus <inline-formula><tex-math>$-5.2 pm 4.1%$</tex-math></inline-formula>) and descent (<inline-formula><tex-math>$-10.1 pm 3.6%$</tex-math></inline-formula> versus <inline-formula><tex-math>$-4.7 pm 4.8%$</tex-math></inline-formula>). Such a result is a consequence of the adaptive nature of the device, enabled by the context aware controller that allowed for more effective walking support, i.e., the assistive torque showed a significant increase while ascending stairs (<inline-formula><tex-math>$+33.9pm 8.8%$</tex-math></inline-formula>) and decrease while descending stairs (<inline-formula><tex-math>$-17.4pm 6.0%$</tex-math></inline-formula>) compared to a condition without assistance modulation enabled by vision. These results highlight the potential of the approach, promoting effective real-time embedded applications in assistive robotics.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"3462-3479"},"PeriodicalIF":9.4,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143915328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Environment-Centric Learning Approach for Gait Synthesis in Terrestrial Soft Robots
IF 9.4 | Q1 | Computer Science
IEEE Transactions on Robotics | Pub Date: 2025-03-05 | DOI: 10.1109/TRO.2025.3548543
Caitlin Freeman;Arun Niddish Mahendran;Vishesh Vikas
{"title":"Environment-Centric Learning Approach for Gait Synthesis in Terrestrial Soft Robots","authors":"Caitlin Freeman;Arun Niddish Mahendran;Vishesh Vikas","doi":"10.1109/TRO.2025.3548543","DOIUrl":"10.1109/TRO.2025.3548543","url":null,"abstract":"Locomotion gaits are fundamental for control of soft terrestrial robots. However, synthesis of these gaits is challenging due to modeling of robot-environment interaction and lack of a mathematical framework. This work presents an environment-centric, data-driven, and fault-tolerant probabilistic model-free control framework that allows for soft multilimb robots to learn from their environment and synthesize diverse sets of locomotion gaits for realizing open-loop control. Here, discretization of factors dominating robot-environment interactions enables an environment-specific graphical representation where the edges encode experimental locomotion data corresponding to the robot motion primitives. In this graph, locomotion gaits are defined as simple cycles that are transformation invariant, i.e., the locomotion is independent of the starting vertex of these periodic cycles. Gait synthesis, the problem of finding optimal locomotion gaits for a given substrate, is formulated as binary integer linear programming problems with a linearized cost function, linear constraints, and iterative simple cycle detection. Experimentally, gaits are synthesized for varying robot-environment interactions. Variables include robot morphology—three-limb and four-limb robots, TerreSoRo-III and TerreSoRo-IV; substrate—rubber mat, whiteboard and carpet; and actuator functionality—simulated loss of robot limb actuation. On an average, gait synthesis improves the translation and rotation speeds by 82% and 97%, respectively. The results highlight that data-driven methods are vital to soft robot locomotion control due to complex robot-environment interactions and simulation-to-reality gaps, particularly when biological analogues are unavailable.","PeriodicalId":50388,"journal":{"name":"IEEE Transactions on Robotics","volume":"41 ","pages":"2144-2163"},"PeriodicalIF":9.4,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143570353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0