Autonomous Robots: Latest Publications

System identification and adaptive input estimation on the Jaiabot micro autonomous underwater vehicle
IF 4.3, CAS Tier 3, Computer Science
Autonomous Robots, Pub Date: 2025-10-11, DOI: 10.1007/s10514-025-10220-9
Ioannis Faros, Herbert G. Tanner
{"title":"System identification and adaptive input estimation on the Jaiabot micro autonomous underwater vehicle","authors":"Ioannis Faros,&nbsp;Herbert G. Tanner","doi":"10.1007/s10514-025-10220-9","DOIUrl":"10.1007/s10514-025-10220-9","url":null,"abstract":"<div><p>This paper reports an attempt to model the system dynamics and estimate both the unknown internal control input and the state of a recently developed marine autonomous vehicle, the Jaiabot. Although the Jaiabot has shown promise in many applications, process and sensor noise necessitates state estimation and noise filtering. In this work, we present the first surge and heading linear dynamical model for Jaiabots derived from real data collected during field testing. An adaptive input estimation algorithm is implemented to accurately estimate the control input and hence the state. For validation, this approach is compared to the classical Kalman filter, highlighting its advantages in handling unknown control inputs.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"49 4","pages":""},"PeriodicalIF":4.3,"publicationDate":"2025-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145256806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
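The adaptive input estimation described above treats the thruster command as an unknown signal to be recovered alongside the state. A common, generic way to do this is to augment the state vector with the unknown input and run a standard Kalman filter over the augmented model. The sketch below illustrates that idea for a one-dimensional surge model; the coefficients, noise covariances, and random-walk input model are illustrative assumptions, not the identified Jaiabot dynamics or the authors' adaptive algorithm.

```python
import numpy as np

# Minimal input-augmented Kalman filter for a 1-D surge model
#   v[k+1] = a*v[k] + b*u[k] + w, with u treated as an unknown, slowly varying input.
# All coefficients and noise levels are illustrative assumptions, not identified values.
a, b = 0.95, 0.4

F = np.array([[a, b],       # state x = [surge speed v, unknown input u]
              [0.0, 1.0]])  # random-walk model for the unknown input
H = np.array([[1.0, 0.0]])  # only the surge speed is measured
Q = np.diag([1e-3, 1e-2])   # process noise (input allowed to drift)
R = np.array([[5e-2]])      # measurement noise

x = np.zeros(2)             # initial estimate [v, u]
P = np.eye(2)

def kf_step(x, P, z):
    """One predict/update cycle; z is the measured surge speed (shape (1,))."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    y = z - H @ x_pred                        # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + (K @ y).ravel()
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Simulate a vehicle driven by a hidden step input and recover it from noisy speed data.
rng = np.random.default_rng(0)
v_true = 0.0
for k in range(200):
    u_true = 1.0 if k > 50 else 0.0           # hidden thruster command
    v_true = a * v_true + b * u_true + rng.normal(0, 0.01)
    z = v_true + rng.normal(0, 0.2)
    x, P = kf_step(x, P, np.array([z]))

print(f"estimated speed {x[0]:.2f}, estimated hidden input {x[1]:.2f}")
```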
Autonomous robotic manipulation for grasping a target object in cluttered environments
IF 4.3, CAS Tier 3, Computer Science
Autonomous Robots, Pub Date: 2025-10-08, DOI: 10.1007/s10514-025-10214-7
Sanraj Lachhiramka, Pradeep J, Archanaa A. Chandaragi, Arjun Achar, Shikha Tripathi
{"title":"Autonomous robotic manipulation for grasping a target object in cluttered environments","authors":"Sanraj Lachhiramka,&nbsp; Pradeep J,&nbsp;Archanaa A. Chandaragi,&nbsp;Arjun Achar,&nbsp;Shikha Tripathi","doi":"10.1007/s10514-025-10214-7","DOIUrl":"10.1007/s10514-025-10214-7","url":null,"abstract":"<div><p>This work addresses the challenge of grasping a target object in cluttered environments, even when it is partially visible or fully occluded. The proposed approach enables the manipulator to learn a sequence of strategic pushing actions that rearrange the scene to make the target object graspable. Our pipeline integrates image morphological processing with deep reinforcement learning (DRL), using GR-ConvNet to predict grasp points for the target. When the object is considered ungraspable, a soft actor-critic (SAC) model guides optimal pushing actions. A novel clutter map is introduced, encoding environmental clutter into a quantitative score that informs the decision-making process. The system shows improved performance with a discount factor (<span>(gamma )</span>) of 0.9, demonstrated through comparative analysis with and without the clutter map. We also compare models trained in discrete versus continuous action spaces to evaluate the impact of action space on DRL effectiveness. The pipeline generalizes well to diverse objects and integrates directly with hardware, requiring no additional training for real-world deployment.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"49 4","pages":""},"PeriodicalIF":4.3,"publicationDate":"2025-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145256165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
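The clutter map in this pipeline condenses the occupancy around the target into a scalar score that drives the push-versus-grasp decision. One plausible way to compute such a score from a top-down occupancy mask is sketched below: dilate a ring around the target and measure how much of it is filled by other objects. The ring width, threshold, and score definition are assumptions for illustration, not the paper's formulation.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def clutter_score(occupancy, target_mask, margin_px=8, graspable_thresh=0.15):
    """Score in [0, 1]: fraction of a ring around the target occupied by clutter.

    occupancy   : bool HxW map of all objects (top-down view)
    target_mask : bool HxW map of the target object only
    The ring width and the threshold below which the target counts as
    'graspable' are illustrative choices.
    """
    ring = binary_dilation(target_mask, iterations=margin_px) & ~target_mask
    clutter = occupancy & ~target_mask
    score = (ring & clutter).sum() / max(ring.sum(), 1)
    return score, score < graspable_thresh

# Tiny example: a 5x5 target with one neighbouring object right next to it.
occ = np.zeros((64, 64), dtype=bool)
tgt = np.zeros_like(occ)
tgt[28:33, 28:33] = True
occ[28:33, 28:33] = True     # target itself
occ[28:33, 34:40] = True     # clutter beside the target
score, graspable = clutter_score(occ, tgt)
print(f"clutter score {score:.2f}, graspable: {graspable}")
```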
DMPC-Swarm: distributed model predictive control on nano UAV swarms
IF 4.3, CAS Tier 3, Computer Science
Autonomous Robots, Pub Date: 2025-09-27, DOI: 10.1007/s10514-025-10211-w
Alexander Gräfe, Joram Eickhoff, Marco Zimmerling, Sebastian Trimpe
{"title":"DMPC-Swarm: distributed model predictive control on nano UAV swarms","authors":"Alexander Gräfe,&nbsp;Joram Eickhoff,&nbsp;Marco Zimmerling,&nbsp;Sebastian Trimpe","doi":"10.1007/s10514-025-10211-w","DOIUrl":"10.1007/s10514-025-10211-w","url":null,"abstract":"<div><p>Swarms of unmanned aerial vehicles (UAVs) are increasingly becoming vital to our society, undertaking tasks such as search and rescue, surveillance and delivery. A special variant of Distributed Model Predictive Control (DMPC) has emerged as a promising approach for the safe management of these swarms by combining the scalability of distributed computation with dynamic swarm motion control. In this DMPC method, multiple agents solve local optimization problems with coupled anti-collision constraints, periodically exchanging their solutions. Despite its potential, existing methodologies using this DMPC variant have yet to be deployed on distributed hardware that fully utilize true distributed computation and wireless communication. This is primarily due to the lack of a communication system tailored to meet the unique requirements of mobile swarms and an architecture that supports distributed computation while adhering to the payload constraints of UAVs. We present DMPC-Swarm, a new swarm control methodology that integrates an efficient, stateless low-power wireless communication protocol with a novel DMPC algorithm that provably avoids UAV collisions even under message loss. By utilizing event-triggered and distributed off-board computing, DMPC-Swarm supports nano UAVs, allowing them to benefit from additional computational resources while retaining scalability and fault tolerance. In a detailed theoretical analysis, we prove that DMPC-Swarm guarantees collision avoidance under realistic conditions, including communication delays and message loss. Finally, we present DMPC-Swarm’s implementation on a swarm of up to 16 nano-quadcopters, demonstrating the first realization of these DMPC variants with computation distributed on multiple physical devices interconnected by a real wireless mesh networks. A video showcasing DMPC-Swarm is available at http://tiny.cc/DMPCSwarm.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"49 4","pages":""},"PeriodicalIF":4.3,"publicationDate":"2025-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10514-025-10211-w.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145170679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
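In the DMPC variant described above, each agent repeatedly solves a local trajectory optimization that couples to its neighbours only through anti-collision terms built from their last communicated plans. The toy 2-D sketch below mimics that local step with a soft collision penalty and a generic optimizer; the horizon, weights, and penalty form are arbitrary choices, and it does not reproduce the paper's provable collision-avoidance guarantees or its handling of message loss.

```python
import numpy as np
from scipy.optimize import minimize

H, d_min = 8, 0.5                    # horizon length and desired separation (assumed)
goal = np.array([3.0, 0.0])

def local_dmpc_step(p0, neighbor_traj):
    """Plan H waypoints for one agent, penalising proximity to a neighbour's
    last communicated predicted trajectory (shape (H, 2))."""
    def cost(flat):
        traj = flat.reshape(H, 2)
        steps = np.diff(np.vstack([p0, traj]), axis=0)
        c = np.sum((traj[-1] - goal) ** 2)                       # reach the goal
        c += 0.1 * np.sum(steps ** 2)                            # smooth, short steps
        gap = np.linalg.norm(traj - neighbor_traj, axis=1)
        c += 50.0 * np.sum(np.maximum(d_min - gap, 0.0) ** 2)    # soft anti-collision
        return c

    x0 = np.linspace(p0, goal, H).ravel()                        # straight-line guess
    res = minimize(cost, x0, method="L-BFGS-B")
    return res.x.reshape(H, 2)

# A neighbour hovering in the way; our agent plans around it.
p0 = np.array([0.0, 0.0])
neighbor = np.tile(np.array([1.5, 0.05]), (H, 1))
plan = local_dmpc_step(p0, neighbor)
print("closest approach to neighbour:",
      np.linalg.norm(plan - neighbor, axis=1).min().round(2))
```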
DynaLOAM: robust LiDAR odometry and mapping in dynamic environments
IF 4.3, CAS Tier 3, Computer Science
Autonomous Robots, Pub Date: 2025-09-27, DOI: 10.1007/s10514-025-10213-8
Yu Wang, Ruichen Lyu, Junyuan Ouyang, Zhihao Wang, Xiaochen Xie, Haoyao Chen
{"title":"DynaLOAM: robust LiDAR odometry and mapping in dynamic environments","authors":"Yu Wang,&nbsp;Ruichen Lyu,&nbsp;Junyuan Ouyang,&nbsp;Zhihao Wang,&nbsp;Xiaochen Xie,&nbsp;Haoyao Chen","doi":"10.1007/s10514-025-10213-8","DOIUrl":"10.1007/s10514-025-10213-8","url":null,"abstract":"<div><p>Simultaneous localization and mapping (SLAM) based on LiDAR in dynamic environments remains a challenging problem due to unreliable data association and residual ghost tracks in the map. In recent years, some related works have attempted to utilize semantic information or geometric constraints between consecutive frames to reject dynamic objects as outliers. However, challenges persist, including poor real-time performance, heavy reliance on meticulously annotated datasets, and susceptibility to misclassifying static points as dynamic. This paper presents a novel dynamic LiDAR SLAM framework called DynaLOAM, in which a complementary dynamic interference suppression scheme is exploited. For accurate relative pose estimation, a lightweight detector is proposed to rapidly respond to pre-defined dynamic object classes in the LiDAR FOV and eliminate correspondences from dynamic landmarks. Then, an online submap cleaning method based on visibility and clustering is proposed for real-time dynamic object removal in submap, which is further utilized for pose optimization and global static map construction. By integrating the complementary characteristics of prior appearance detection and online visibility check, DynaLOAM can finally achieve accurate pose estimation and static map construction in dynamic environments. Extensive experiments are conducted on the KITTI dataset and three real scenarios. The results show that our approach achieves promising performance compared to state-of-the-art methods. The code will be available at https://github.com/HITSZ-NRSL/DynaLOAM.git.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"49 4","pages":""},"PeriodicalIF":4.3,"publicationDate":"2025-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145171047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
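A visibility check of the kind used for submap cleaning asks whether the current scan measures a longer range than an old submap point in the same direction; if so, that direction is now free space and the old point was probably left by an object that has since moved. The sketch below shows this generic idea on a coarse spherical range image; the resolution, margin, and projection details are assumptions, and the clustering stage described in the paper is omitted.

```python
import numpy as np

def range_image(points, h_res=1.0, v_res=2.0):
    """Project Nx3 LiDAR points into an (elevation, azimuth) range image in metres."""
    r = np.linalg.norm(points, axis=1)
    az = np.degrees(np.arctan2(points[:, 1], points[:, 0]))          # [-180, 180)
    el = np.degrees(np.arcsin(points[:, 2] / np.maximum(r, 1e-6)))   # [-90, 90]
    cols = ((az + 180.0) / h_res).astype(int) % int(360 / h_res)
    rows = np.clip(((el + 90.0) / v_res).astype(int), 0, int(180 / v_res) - 1)
    img = np.full((int(180 / v_res), int(360 / h_res)), np.inf)
    np.minimum.at(img, (rows, cols), r)        # keep the closest return per pixel
    return img, rows, cols, r

def visibility_dynamic_mask(submap_pts, scan_pts, margin=0.5):
    """Flag submap points whose direction the current scan now sees *past*."""
    scan_img, _, _, _ = range_image(scan_pts)
    _, rows, cols, r_sub = range_image(submap_pts)
    r_scan = scan_img[rows, cols]
    # If the scan range is much larger, the old submap point sits in free space.
    return np.isfinite(r_scan) & (r_scan > r_sub + margin)

# Example: a submap point 5 m ahead, but the scan now measures 9 m in that direction.
submap = np.array([[5.0, 0.0, 0.0]])
scan = np.array([[9.0, 0.0, 0.0]])
print("dynamic?", visibility_dynamic_mask(submap, scan))
```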
Adaptive ergodic search with energy-aware scheduling for persistent multi-robot missions
IF 4.3, CAS Tier 3, Computer Science
Autonomous Robots, Pub Date: 2025-09-24, DOI: 10.1007/s10514-025-10215-6
Kaleb Ben Naveed, Devansh R. Agrawal, Rahul Kumar, Dimitra Panagou
{"title":"Adaptive ergodic search with energy-aware scheduling for persistent multi-robot missions","authors":"Kaleb Ben Naveed,&nbsp;Devansh R. Agrawal,&nbsp;Rahul Kumar,&nbsp;Dimitra Panagou","doi":"10.1007/s10514-025-10215-6","DOIUrl":"10.1007/s10514-025-10215-6","url":null,"abstract":"<div><p>Autonomous robots are increasingly deployed for long-term information-gathering tasks, which pose two key challenges: planning informative trajectories in environments that evolve across space and time, and ensuring persistent operation under energy constraints. This paper presents a unified framework, <span>mEclares</span>, that addresses both challenges through adaptive ergodic search and energy-aware scheduling in multi-robot systems. Our contributions are two-fold: (1) we model real-world variability using stochastic spatiotemporal environments, where the underlying information evolves continuously over space and time under process noise. To guide exploration, we construct a target information spatial distribution (TISD) based on clarity, a metric that captures the decay of information in the absence of observations and highlights regions of high uncertainty; and (2) we introduce <span>Robust-meSch</span> ( <span>RmeSch</span> ), an online scheduling method that enables persistent operation by coordinating rechargeable robots sharing a single mobile charging station. Unlike prior work, our approach avoids reliance on preplanned schedules, static or dedicated charging stations, and simplified robot dynamics. Instead, the scheduler supports general nonlinear models, accounts for uncertainty in the estimated position of the charging station, and handles central node failures. The proposed framework is validated through real-world hardware experiments, and feasibility guarantees are provided under specific assumptions. [Code: https://github.com/kalebbennaveed/mEclares-main.git][Experiment Video: https://www.youtube.com/watch?v=dmaZDvxJgF8]</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"49 4","pages":""},"PeriodicalIF":4.3,"publicationDate":"2025-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10514-025-10215-6.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145168745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
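The target information spatial distribution above is built from clarity, which decays where no robot is observing and recovers under sensing. The exact clarity dynamics come from the authors' prior work; the sketch below uses a simple stand-in (exponential decay without observation, first-order rise toward one when observed) only to show how such a map can be normalised into a distribution that concentrates effort on uncertain regions. The rates and the stand-in dynamics are assumptions, not the paper's model.

```python
import numpy as np

decay, gain, dt = 0.05, 2.0, 0.1          # illustrative rates, not the paper's values

def update_clarity(clarity, observed):
    """clarity: HxW array in [0,1]; observed: bool HxW mask of currently sensed cells."""
    clarity = clarity - decay * clarity * dt                      # information fades
    clarity = np.where(observed,
                       clarity + gain * (1.0 - clarity) * dt,     # sensed cells rise
                       clarity)
    return np.clip(clarity, 0.0, 1.0)

def target_distribution(clarity):
    """TISD-style weighting: put exploration effort where clarity is low."""
    w = 1.0 - clarity
    return w / w.sum()

clarity = np.zeros((20, 20))
observed = np.zeros_like(clarity, dtype=bool)
observed[5:10, 5:10] = True                # a robot keeps watching this patch
for _ in range(100):
    clarity = update_clarity(clarity, observed)
phi = target_distribution(clarity)
print("distribution mass on the watched patch:", phi[5:10, 5:10].sum().round(3))
```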
Persistent multi-resource coverage with heterogeneous multi-robot teams
IF 4.3, CAS Tier 3, Computer Science
Autonomous Robots, Pub Date: 2025-09-20, DOI: 10.1007/s10514-025-10207-6
Mela Coffey, Alyssa Pierson
{"title":"Persistent multi-resource coverage with heterogeneous multi-robot teams","authors":"Mela Coffey,&nbsp;Alyssa Pierson","doi":"10.1007/s10514-025-10207-6","DOIUrl":"10.1007/s10514-025-10207-6","url":null,"abstract":"<div><p>Multi-robot teams provide an effective solution for delivering multiple types of goods, such as food or medicine, to various locations of demand. This work presents a Voronoi-based coverage control approach to the multi-resource allocation problem, and considers a heterogeneous team comprising robots with different resource types and capacities. The team must supply resources to multiple demand locations. Demand of resources may change over time, and fluctuate in overall demand, which is represented over the environment as a time-varying density function. From the demand density, robots minimize their respective locational cost, adapting and moving to areas of higher demand. Robots must adhere to supply constraints and replenish resources over time to ensure persistent resource coverage. This paper therefore investigates how to enable persistent deployments, wherein robots must continually alternate between serving demand or replenishing resources. We explore four algorithms for resource replenishment, which vary in communication, forecasting, and information assumptions. Simulations and hardware experiments demonstrate a need-based auction algorithm, which aims to minimize service blackouts, produces the best performance for a heterogeneous team. We also present a discussion on acceptable alternatives for homogeneous teams without communication.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"49 4","pages":""},"PeriodicalIF":4.3,"publicationDate":"2025-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145090590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
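Voronoi-based coverage of a demand density is classically implemented as a weighted Lloyd iteration: assign sample points to the nearest robot and move each robot to the demand-weighted centroid of its cell. The sketch below shows only that textbook step; it ignores resource types, capacities, replenishment, and the auction scheme that are the contributions of this paper.

```python
import numpy as np

def lloyd_step(robots, grid, density):
    """One weighted-Lloyd update.
    robots: (R, 2) positions, grid: (N, 2) sample points, density: (N,) demand weights."""
    d = np.linalg.norm(grid[:, None, :] - robots[None, :, :], axis=2)  # N x R distances
    owner = d.argmin(axis=1)                                           # Voronoi assignment
    new_robots = robots.copy()
    for i in range(len(robots)):
        cell = owner == i
        w = density[cell]
        if w.sum() > 0:
            new_robots[i] = (grid[cell] * w[:, None]).sum(axis=0) / w.sum()
    return new_robots

# Demand concentrated around (0.7, 0.7); two robots drift toward it over 30 iterations.
xs, ys = np.meshgrid(np.linspace(0, 1, 40), np.linspace(0, 1, 40))
grid = np.column_stack([xs.ravel(), ys.ravel()])
density = np.exp(-20 * ((grid[:, 0] - 0.7) ** 2 + (grid[:, 1] - 0.7) ** 2))
robots = np.array([[0.1, 0.2], [0.2, 0.9]])
for _ in range(30):
    robots = lloyd_step(robots, grid, density)
print(robots.round(2))
```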
L2D2: Robot Learning from 2D drawings
IF 4.3, CAS Tier 3, Computer Science
Autonomous Robots, Pub Date: 2025-09-15, DOI: 10.1007/s10514-025-10210-x
Shaunak A. Mehta, Heramb Nemlekar, Hari Sumant, Dylan P. Losey
{"title":"L2D2: Robot Learning from 2D drawings","authors":"Shaunak A. Mehta,&nbsp;Heramb Nemlekar,&nbsp;Hari Sumant,&nbsp;Dylan P. Losey","doi":"10.1007/s10514-025-10210-x","DOIUrl":"10.1007/s10514-025-10210-x","url":null,"abstract":"<div><p>Robots should learn new tasks from humans. But how do humans convey what they want the robot to do? Existing methods largely rely on humans physically guiding the robot arm throughout their intended task. Unfortunately — as we scale up the amount of data — physical guidance becomes prohibitively burdensome. Not only do humans need to operate robot hardware but also modify the environment (e.g., moving and resetting objects) to provide multiple task examples. In this work we propose L2D2, a sketching interface and imitation learning algorithm where humans can provide demonstrations by <i>drawing</i> the task. L2D2 starts with a single image of the robot arm and its workspace. Using a tablet, users draw and label trajectories on this image to illustrate how the robot should act. To collect new and diverse demonstrations, we no longer need the human to physically reset the workspace; instead, L2D2 leverages vision-language segmentation to autonomously vary object locations and generate synthetic images for the human to draw upon. We recognize that drawing trajectories is not as information-rich as physically demonstrating the task. Drawings are 2-dimensional and do not capture how the robot’s actions affect its environment. To address these fundamental challenges the next stage of L2D2 grounds the human’s static, 2D drawings in our dynamic, 3D world by leveraging a small set of physical demonstrations. Our experiments and user study suggest that L2D2 enables humans to provide more demonstrations with less time and effort than traditional approaches, and users prefer drawings over physical manipulation. When compared to other drawing-based approaches, we find that L2D2 learns more performant robot policies, requires a smaller dataset, and can generalize to longer-horizon tasks. See our project website: https://collab.me.vt.edu/L2D2/</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"49 3","pages":""},"PeriodicalIF":4.3,"publicationDate":"2025-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10514-025-10210-x.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145062311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
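Grounding a 2-D drawing in the 3-D workspace is handled in L2D2 with a small set of physical demonstrations. As a much simpler stand-in for intuition, the sketch below back-projects drawn pixels through an assumed pinhole camera onto a flat table plane; the intrinsics, camera pose, and planar assumption are all illustrative and are not the paper's method.

```python
import numpy as np

# Assumed pinhole intrinsics and camera pose (camera looking straight down from 1 m).
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
R_wc = np.array([[1.0, 0.0, 0.0],        # rotation taking camera axes to world axes
                 [0.0, -1.0, 0.0],
                 [0.0, 0.0, -1.0]])
t_wc = np.array([0.0, 0.0, 1.0])         # camera centre in the world frame

def pixels_to_plane(pixels, plane_z=0.0):
    """Back-project Nx2 drawn pixels onto the horizontal table plane z = plane_z."""
    pts = []
    for u, v in pixels:
        ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing ray, camera frame
        ray_world = R_wc @ ray_cam
        s = (plane_z - t_wc[2]) / ray_world[2]                # intersect ray with plane
        pts.append(t_wc + s * ray_world)
    return np.array(pts)

# A short stroke drawn on the tablet image becomes a 3-D path on the table.
stroke = [(320, 240), (360, 240), (400, 260)]
print(pixels_to_plane(stroke).round(3))
```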
Optical communication-based identification for multi-UAV systems: theory and practice
IF 4.3, CAS Tier 3, Computer Science
Autonomous Robots, Pub Date: 2025-09-04, DOI: 10.1007/s10514-025-10208-5
Daniel Bonilla Licea, Viktor Walter, Mounir Ghogho, Martin Saska
{"title":"Optical communication-based identification for multi-UAV systems: theory and practice","authors":"Daniel Bonilla Licea,&nbsp;Viktor Walter,&nbsp;Mounir Ghogho,&nbsp;Martin Saska","doi":"10.1007/s10514-025-10208-5","DOIUrl":"10.1007/s10514-025-10208-5","url":null,"abstract":"<div><p>Mutual relative localization and identification are important features for multi-unmanned aerial vehicle (UAV) systems. Camera-based communications technology, also known as optical camera communications in the literature, is a novel technology that brings a valuable solution to this task. In such a system, the UAVs are equipped with LEDs acting as beacons, and with cameras to locate the LEDs of the other UAVs. Specific blinking sequences are assigned to the LEDs of each of the UAVs to uniquely identify them. This camera-based system is immune to radio frequency electromagnetic interference and operates in global navigation satellite-denied environments. In addition, the implementation of this system is inexpensive. In this article, we study in detail the capacity of this system and its limitations. Furthermore, we show how to construct blinking sequences for UAV LEDs to improve system performance. Finally, experimental results are presented to corroborate the analytical derivations.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"49 3","pages":""},"PeriodicalIF":4.3,"publicationDate":"2025-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144990553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
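Identification in such a system comes down to matching the blink pattern observed for an LED against a codebook of known sequences, allowing for an arbitrary time offset. The sketch below is a minimal correlation-style decoder over cyclic shifts of a hand-picked toy codebook; it ignores noise modelling, sampling effects, and the sequence-design analysis that the paper actually contributes.

```python
import numpy as np

# Toy codebook: one binary blinking sequence per UAV (hand-picked, not optimised).
codebook = {
    "uav_1": np.array([1, 0, 0, 1, 1, 0, 1, 0]),
    "uav_2": np.array([1, 1, 0, 0, 1, 0, 0, 1]),
    "uav_3": np.array([1, 0, 1, 1, 0, 0, 0, 1]),
}

def identify(observed):
    """Return the UAV whose sequence best matches the observed on/off pattern,
    checking every cyclic shift since the camera starts sampling at an arbitrary time."""
    best_id, best_score = None, -np.inf
    for uav, code in codebook.items():
        for shift in range(len(code)):
            score = np.sum(np.roll(code, shift) == observed)   # number of matching samples
            if score > best_score:
                best_id, best_score = uav, score
    return best_id, best_score

# The camera observes uav_2's pattern shifted by 3 frames, with one flipped sample.
obs = np.roll(codebook["uav_2"], 3).copy()
obs[0] ^= 1
print(identify(obs))
```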
COVER: cross-vehicle transition framework for quadrotor control in air-ground cooperation
IF 4.3, CAS Tier 3, Computer Science
Autonomous Robots, Pub Date: 2025-09-03, DOI: 10.1007/s10514-025-10209-4
Qiuyu Ren, Miao Xu, Mengke Zhang, Nanhe Chen, Mingwei Lai, Chao Xu, Fei Gao, Yanjun Cao
{"title":"COVER: cross-vehicle transition framework for quadrotor control in air-ground cooperation","authors":"Qiuyu Ren,&nbsp;Miao Xu,&nbsp;Mengke Zhang,&nbsp;Nanhe Chen,&nbsp;Mingwei Lai,&nbsp;Chao Xu,&nbsp;Fei Gao,&nbsp;Yanjun Cao","doi":"10.1007/s10514-025-10209-4","DOIUrl":"10.1007/s10514-025-10209-4","url":null,"abstract":"<div><p>UAV transitions across UGVs enable diverse air-ground cooperation (AGC) applications, such as cross-vehicle landing, delivery, and rescue. However, achieving precise and efficient transitions across multiple moving UGVs without prior knowledge of their trajectories remains highly challenging. This paper proposes COVER, a cross-vehicle transition framework for quadrotor control in AGC scenarios. In COVER, the UAV is directly controlled in UGVs’ body frames as non-inertial frames, thus eliminating all dependencies in the world frame. Each transition process is divided into three stages: the initial stage, transition stage, and final stage, with pre-set stage transition points and stage-varying system states. Then, an optimal reference trajectory is generated at each stage by solving a non-linear programming (NLP) problem. The effect of the target UGV’s rotation on the initial relative velocity is eliminated to obtain a dynamically feasible and smooth transition reference trajectory. Finally, we design a stage-adaptive model predictive control (SAMPC) method, proposing a novel MPC position reference mode to avoid indirect routes at the transition stage. The SAMPC method effectively mitigates the flight instability caused by reference frame transition and eliminates the effect of reference frame rotation at the transition stage. And it can flexibly adapt to accurate requirements at the final stage by switching position reference mode and adjusting cost weights. Simulation benchmarks and extensive real-world experiments validate that our approach can achieve smooth, short-distance, and accurate cross-vehicle operations.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"49 3","pages":""},"PeriodicalIF":4.3,"publicationDate":"2025-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144934659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
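Controlling the quadrotor in a UGV body frame requires expressing its state relative to a frame that both translates and rotates, so the relative velocity picks up a term from the UGV's yaw rate. The sketch below shows the standard planar (2-D) non-inertial kinematics for that conversion; the planar simplification and the variable names are assumptions for illustration, not the COVER framework itself.

```python
import numpy as np

def world_to_ugv_frame(p_uav, v_uav, p_ugv, v_ugv, psi, omega):
    """Express UAV position/velocity in a planar UGV body frame.
    psi: UGV yaw [rad], omega: UGV yaw rate [rad/s]; all vectors are 2-D, world frame."""
    R = np.array([[np.cos(psi), -np.sin(psi)],
                  [np.sin(psi), np.cos(psi)]])          # body -> world rotation
    r = p_uav - p_ugv                                   # relative position, world frame
    p_rel = R.T @ r                                     # relative position, body frame
    # Transport theorem: subtract the frame's translation and rotation contributions.
    omega_cross_r = omega * np.array([-r[1], r[0]])     # omega x r in 2-D
    v_rel = R.T @ (v_uav - v_ugv - omega_cross_r)       # relative velocity, body frame
    return p_rel, v_rel

# UAV hovering 2 m ahead of a UGV that drives at 1 m/s while turning at 0.3 rad/s.
p_rel, v_rel = world_to_ugv_frame(
    p_uav=np.array([2.0, 0.0]), v_uav=np.array([0.0, 0.0]),
    p_ugv=np.array([0.0, 0.0]), v_ugv=np.array([1.0, 0.0]),
    psi=0.0, omega=0.3)
print(p_rel, v_rel.round(2))
```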
Reconfiguration and locomotion with joint movements in the amoebot model
IF 4.3, CAS Tier 3, Computer Science
Autonomous Robots, Pub Date: 2025-08-23, DOI: 10.1007/s10514-025-10204-9
Andreas Padalkin, Manish Kumar, Christian Scheideler
{"title":"Reconfiguration and locomotion with joint movements in the amoebot model","authors":"Andreas Padalkin,&nbsp;Manish Kumar,&nbsp;Christian Scheideler","doi":"10.1007/s10514-025-10204-9","DOIUrl":"10.1007/s10514-025-10204-9","url":null,"abstract":"<div><p>We are considering the geometric amoebot model where a set of <i>n</i> <i>amoebots</i> is placed on the triangular grid. An amoebot is able to send information to its neighbors, and to move via expansions and contractions. Since amoebots and information can only travel node by node, most problems have a natural lower bound of <span>(Omega (D))</span> where <i>D</i> denotes the diameter of the structure. Inspired by the nervous and muscular system, Feldmann et al. (Computat Biol 29(4):317–343, 2022) have proposed the <i>reconfigurable circuit extension</i> and the <i>joint movement extension</i> of the amoebot model with the goal of breaking this lower bound. In the joint movement extension, the way amoebots move is altered. Amoebots become able to push and pull other amoebots. Feldmann et al. (Computat Biol 29(4):317–343, 2022) demonstrated the power of joint movements by transforming a line of amoebots into a rhombus within <span>(O(log n))</span> rounds. However, they left the details of the extension open. The goal of this paper is therefore to formalize and extend the joint movement extension. In order to provide a proof of concept for the extension, we develop centralized algorithms for two fundamental problems of modular robot systems: <i>reconfiguration</i> and <i>locomotion</i>. We approach these problems by defining meta-modules of rhombical and hexagonal shape, respectively. The meta-modules are capable of movement primitives like sliding, rotating, and tunneling. This allows us to simulate reconfiguration algorithms of various modular robot systems. Finally, we construct three amoebot structures capable of locomotion by rolling, crawling, and walking, respectively.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"49 3","pages":""},"PeriodicalIF":4.3,"publicationDate":"2025-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10514-025-10204-9.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144891407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
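In the geometric amoebot model, a particle occupies one node of the triangular grid when contracted, or two adjacent nodes when expanded, and it moves by expanding into a free neighbour and then contracting. The sketch below encodes the grid with axial coordinates and implements just these two primitives for a single amoebot; it is a data-structure illustration under assumed conventions, not the joint-movement extension, meta-modules, or algorithms of the paper.

```python
# Axial coordinates on the triangular grid: each node has six neighbours.
NEIGHBORS = [(1, 0), (0, 1), (-1, 1), (-1, 0), (0, -1), (1, -1)]

class Amoebot:
    def __init__(self, node):
        self.head = node          # occupied node
        self.tail = None          # second occupied node when expanded

    @property
    def expanded(self):
        return self.tail is not None

    def expand(self, direction, occupied):
        """Expand the head into an adjacent free node (direction indexes NEIGHBORS)."""
        dx, dy = NEIGHBORS[direction]
        target = (self.head[0] + dx, self.head[1] + dy)
        if self.expanded or target in occupied:
            return False
        self.tail, self.head = self.head, target
        occupied.add(target)
        return True

    def contract(self, occupied):
        """Contract into the head node, freeing the tail node."""
        if not self.expanded:
            return False
        occupied.discard(self.tail)
        self.tail = None
        return True

# One amoebot takes a single step east on an otherwise empty grid.
occupied = {(0, 0)}
bot = Amoebot((0, 0))
bot.expand(0, occupied) and bot.contract(occupied)
print(bot.head, occupied)   # (1, 0) {(1, 0)}
```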