A multi-robot collaborative manipulation framework for dynamic and obstacle-dense environments: integration of deep learning for real-time task execution.

IF 3.0 | Q2 (ROBOTICS)
Frontiers in Robotics and AI | Pub Date: 2025-07-30 | eCollection Date: 2025-01-01 | DOI: 10.3389/frobt.2025.1585544
Afnan Ahmed Adil, Saber Sakhrieh, Jinane Mounsef, Noel Maalouf
{"title":"面向动态和障碍物密集环境的多机器人协作操作框架:集成深度学习实现实时任务执行。","authors":"Afnan Ahmed Adil, Saber Sakhrieh, Jinane Mounsef, Noel Maalouf","doi":"10.3389/frobt.2025.1585544","DOIUrl":null,"url":null,"abstract":"<p><p>This paper presents a multi-robot collaborative manipulation framework, implemented in the Gazebo simulation environment, designed to enable the execution of autonomous tasks by mobile manipulators in dynamic environments and dense obstacles. The system consists of multiple mobile robot platforms, each equipped with a robotic manipulator, a simulated RGB-D camera, and a 2D LiDAR sensor on the mobile base, facilitating task coordination, object detection, and advanced collision avoidance within a simulated warehouse setting. A leader-follower architecture governs collaboration, allowing for the dynamic formation of teams to tackle tasks requiring combined effort, such as transporting heavy objects. Task allocation and control are achieved through a centralized control structure architecture in which the leader robot coordinates subordinate units based on high-level task assignments. The framework incorporates deep learning-based object detection (YOLOv2) to identify target objects using a simulated RGB-D camera mounted on the manipulator's end-effector. Path planning is achieved through a sampling-based algorithm that is integrated with the LiDAR data to facilitate precise obstacle avoidance and localization. It also provides real-time path rerouting for safe navigation when dynamically moving obstacles, such as humans or other entities, intersect planned paths. This functionality ensures uninterrupted task execution and enhances safety in human-robot shared spaces. High-level task scheduling and control transitions are managed using MATLAB and Stateflow logic, while ROS facilitates data communication between MATLAB, Simulink, and Gazebo. This multirobot architecture is adaptable, allowing configuration of team size for collaborative tasks based on load requirements and environmental complexity. By integrating computer vision and deep learning for visual processing, and YOLOv2 for object detection, the system efficiently identifies, picks, and transports objects to designated locations, demonstrating the scalability of multi-robot framework for future applications in logistics automation, collaborative manufacturing, and dynamic human-robot interaction scenarios.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1585544"},"PeriodicalIF":3.0000,"publicationDate":"2025-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12343253/pdf/","citationCount":"0","resultStr":"{\"title\":\"A multi-robot collaborative manipulation framework for dynamic and obstacle-dense environments: integration of deep learning for real-time task execution.\",\"authors\":\"Afnan Ahmed Adil, Saber Sakhrieh, Jinane Mounsef, Noel Maalouf\",\"doi\":\"10.3389/frobt.2025.1585544\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>This paper presents a multi-robot collaborative manipulation framework, implemented in the Gazebo simulation environment, designed to enable the execution of autonomous tasks by mobile manipulators in dynamic environments and dense obstacles. 
The system consists of multiple mobile robot platforms, each equipped with a robotic manipulator, a simulated RGB-D camera, and a 2D LiDAR sensor on the mobile base, facilitating task coordination, object detection, and advanced collision avoidance within a simulated warehouse setting. A leader-follower architecture governs collaboration, allowing for the dynamic formation of teams to tackle tasks requiring combined effort, such as transporting heavy objects. Task allocation and control are achieved through a centralized control structure architecture in which the leader robot coordinates subordinate units based on high-level task assignments. The framework incorporates deep learning-based object detection (YOLOv2) to identify target objects using a simulated RGB-D camera mounted on the manipulator's end-effector. Path planning is achieved through a sampling-based algorithm that is integrated with the LiDAR data to facilitate precise obstacle avoidance and localization. It also provides real-time path rerouting for safe navigation when dynamically moving obstacles, such as humans or other entities, intersect planned paths. This functionality ensures uninterrupted task execution and enhances safety in human-robot shared spaces. High-level task scheduling and control transitions are managed using MATLAB and Stateflow logic, while ROS facilitates data communication between MATLAB, Simulink, and Gazebo. This multirobot architecture is adaptable, allowing configuration of team size for collaborative tasks based on load requirements and environmental complexity. By integrating computer vision and deep learning for visual processing, and YOLOv2 for object detection, the system efficiently identifies, picks, and transports objects to designated locations, demonstrating the scalability of multi-robot framework for future applications in logistics automation, collaborative manufacturing, and dynamic human-robot interaction scenarios.</p>\",\"PeriodicalId\":47597,\"journal\":{\"name\":\"Frontiers in Robotics and AI\",\"volume\":\"12 \",\"pages\":\"1585544\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2025-07-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12343253/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Frontiers in Robotics and AI\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.3389/frobt.2025.1585544\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/1/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q2\",\"JCRName\":\"ROBOTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in Robotics and AI","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3389/frobt.2025.1585544","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"ROBOTICS","Score":null,"Total":0}
Citations: 0

Abstract


This paper presents a multi-robot collaborative manipulation framework, implemented in the Gazebo simulation environment, designed to enable the execution of autonomous tasks by mobile manipulators in dynamic, obstacle-dense environments. The system consists of multiple mobile robot platforms, each equipped with a robotic manipulator, a simulated RGB-D camera, and a 2D LiDAR sensor on the mobile base, facilitating task coordination, object detection, and advanced collision avoidance within a simulated warehouse setting. A leader-follower architecture governs collaboration, allowing teams to form dynamically to tackle tasks requiring combined effort, such as transporting heavy objects. Task allocation and control are achieved through a centralized control architecture in which the leader robot coordinates subordinate units based on high-level task assignments. The framework incorporates deep learning-based object detection (YOLOv2) to identify target objects using a simulated RGB-D camera mounted on the manipulator's end-effector. Path planning is achieved through a sampling-based algorithm integrated with the LiDAR data to facilitate precise obstacle avoidance and localization. It also provides real-time path rerouting for safe navigation when dynamically moving obstacles, such as humans or other entities, intersect planned paths. This functionality ensures uninterrupted task execution and enhances safety in human-robot shared spaces. High-level task scheduling and control transitions are managed using MATLAB and Stateflow logic, while ROS facilitates data communication between MATLAB, Simulink, and Gazebo. This multi-robot architecture is adaptable, allowing the team size for collaborative tasks to be configured based on load requirements and environmental complexity. By integrating computer vision and deep learning for visual processing, with YOLOv2 for object detection, the system efficiently identifies, picks, and transports objects to designated locations, demonstrating the scalability of the multi-robot framework for future applications in logistics automation, collaborative manufacturing, and dynamic human-robot interaction scenarios.
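
To make the leader-follower coordination concrete, the following is a minimal sketch of the leader side as a ROS node in Python. It is illustrative only: the paper runs its coordination logic in MATLAB/Stateflow over ROS, and the node name, topic layout, robot IDs, and task strings below are assumptions rather than details taken from the paper.

```python
#!/usr/bin/env python3
"""Leader-side task allocation sketch (illustrative; the paper's coordination
logic runs in MATLAB/Stateflow over ROS). Topic names, robot IDs, and task
descriptors are assumptions."""
import rospy
from std_msgs.msg import String


def allocate(tasks, followers):
    """Greedy one-to-one assignment of pending tasks to follower robots."""
    return dict(zip(followers, tasks))


def main():
    rospy.init_node("leader_task_allocator")
    followers = ["robot_2", "robot_3"]            # hypothetical team members
    publishers = {
        fid: rospy.Publisher("/%s/task_assignment" % fid, String, queue_size=1)
        for fid in followers
    }
    pending = ["pick:crate_A", "pick:crate_B"]    # hypothetical task descriptors

    rate = rospy.Rate(1)                          # re-broadcast assignments at 1 Hz
    while not rospy.is_shutdown():
        for fid, task in allocate(pending, followers).items():
            publishers[fid].publish(String(data=task))
        rate.sleep()


if __name__ == "__main__":
    main()
```

In such a setup, each follower would subscribe to its own /<robot_id>/task_assignment topic and report completion back to the leader, which then reallocates the remaining tasks.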
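
The YOLOv2 detection step can be sketched with OpenCV's DNN module, which loads standard Darknet configuration and weight files. This is a generic YOLOv2 inference example under that assumption, not the paper's implementation; the file names and the confidence threshold are placeholders.

```python
"""Generic YOLOv2 inference via OpenCV's DNN module (sketch; file names and
thresholds are placeholders, not the paper's configuration)."""
import cv2
import numpy as np

CONF_THRESHOLD = 0.5  # assumed minimum class confidence

# Load a standard Darknet YOLOv2 model (placeholder file names).
net = cv2.dnn.readNetFromDarknet("yolov2.cfg", "yolov2.weights")


def detect(bgr_image):
    """Return (class_id, confidence, box) tuples for one colour frame."""
    h, w = bgr_image.shape[:2]
    blob = cv2.dnn.blobFromImage(bgr_image, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    detections = []
    for out in net.forward(net.getUnconnectedOutLayersNames()):
        for row in out:                 # row: [cx, cy, bw, bh, objectness, class scores...]
            scores = row[5:]
            class_id = int(np.argmax(scores))
            confidence = float(scores[class_id])
            if confidence > CONF_THRESHOLD:
                cx, cy, bw, bh = row[0] * w, row[1] * h, row[2] * w, row[3] * h
                box = (int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh))
                detections.append((class_id, confidence, box))
    return detections
```

In the setup described in the abstract, such a detector would run on colour frames from the simulated RGB-D camera on the end-effector, and the depth value at a detection's centre would give the 3D position of the grasp target.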
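
The real-time rerouting behaviour can be illustrated with a simple geometric check: if any 2D LiDAR return falls inside a safety corridor around the currently planned path, the path is flagged as blocked and the sampling-based planner is asked for a new one. This is a minimal sketch of that idea under assumed conventions (both path and scan expressed in the robot frame, a fixed clearance radius); it is not the paper's algorithm.

```python
"""Path-invalidation check for real-time rerouting (sketch; the clearance
radius and frame conventions are assumptions, not the paper's values)."""
import math

SAFETY_RADIUS = 0.4  # metres; assumed clearance around the planned path


def scan_to_points(ranges, angle_min, angle_increment, max_range):
    """Convert raw 2D LiDAR ranges to (x, y) points in the robot frame."""
    points = []
    for i, r in enumerate(ranges):
        if 0.0 < r < max_range:
            a = angle_min + i * angle_increment
            points.append((r * math.cos(a), r * math.sin(a)))
    return points


def point_to_segment_dist(p, a, b):
    """Shortest distance from point p to the segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0.0 and dy == 0.0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))


def path_blocked(path, scan_points):
    """True if any LiDAR point intrudes into the corridor around the path."""
    return any(
        point_to_segment_dist(p, path[i], path[i + 1]) < SAFETY_RADIUS
        for p in scan_points
        for i in range(len(path) - 1)
    )
```

When path_blocked returns True, the robot would pause its motion and request a fresh path from the sampling-based planner, which is the rerouting behaviour the abstract describes for moving obstacles such as humans.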

Source journal
CiteScore: 6.50
Self-citation rate: 5.90%
Articles published: 355
Review time: 14 weeks
About the journal: Frontiers in Robotics and AI publishes rigorously peer-reviewed research covering all theory and applications of robotics, technology, and artificial intelligence, from biomedical to space robotics.