{"title":"Mixed-reality for quadruped-robotic guidance in SAR tasks","authors":"Christyan Cruz Ulloa, J. Cerro, A. Barrientos","doi":"10.1093/jcde/qwad061","DOIUrl":null,"url":null,"abstract":"\n In recent years, exploration tasks in disaster environments, victim localization and primary assistance have been the main focuses of Search and Rescue (SAR) Robotics. Developing new technologies in Mixed Reality (M-R) and legged robotics has taken a big step in developing robust field applications in the Robotics field. This article presents MR-RAS (Mixed-Reality for Robotic Assistance), which aims to assist rescuers and protect their integrity when exploring post-disaster areas (against collapse, electrical, and toxic risks) by facilitating the robot’s gesture guidance and allowing them to manage interest visual information of the environment. Thus, ARTU-R (A1 Rescue Tasks UPM Robot) quadruped robot has been equipped with a sensory system (lidar, thermal and RGB-D cameras) to validate this proof of concept. On the other hand, Human-Robot interaction is executed by using the Hololens glasses. This work’s main contribution is the implementation and evaluation of a Mixed-Reality system based on a ROS-Unity solution, capable of managing at a high level the guidance of a complex legged robot through different interest zones (defined by a Neural Network and a vision system) of a post-disaster environment. The robot’s main tasks at each point visited involve detecting victims through thermal, RGB imaging and neural networks and assisting victims with medical equipment. Tests have been carried out in scenarios that recreate the conditions of post-disaster environments (debris, simulation of victims, etc.). An average efficiency improvement of 48% has been obtained when using the immersive interface and a time optimization of 21.4% compared to conventional interfaces. The proposed method has proven to improve rescuers’ immersive experience of controlling a complex robotic system.","PeriodicalId":48611,"journal":{"name":"Journal of Computational Design and Engineering","volume":null,"pages":null},"PeriodicalIF":4.8000,"publicationDate":"2023-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Computational Design and Engineering","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1093/jcde/qwad061","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 1
Abstract
In recent years, exploration of disaster environments, victim localization, and primary assistance have been the main focuses of Search and Rescue (SAR) robotics. New developments in mixed reality (MR) and legged robotics have enabled a major step toward robust field applications. This article presents MR-RAS (Mixed-Reality for Robotic Assistance), which aims to assist rescuers and protect their physical safety when exploring post-disaster areas (against collapse, electrical, and toxic risks) by enabling gesture-based guidance of the robot and allowing rescuers to manage visual information of interest from the environment. To validate this proof of concept, the ARTU-R (A1 Rescue Tasks UPM Robot) quadruped robot has been equipped with a sensory system (lidar, thermal, and RGB-D cameras), while human-robot interaction is carried out through HoloLens glasses. The main contribution of this work is the implementation and evaluation of a mixed-reality system based on a ROS-Unity solution, capable of managing, at a high level, the guidance of a complex legged robot through different zones of interest (defined by a neural network and a vision system) in a post-disaster environment. The robot's main tasks at each visited point involve detecting victims through thermal and RGB imaging combined with neural networks, and assisting victims with medical equipment. Tests have been carried out in scenarios that recreate the conditions of post-disaster environments (debris, simulated victims, etc.). Compared with conventional interfaces, the immersive interface yielded an average efficiency improvement of 48% and a 21.4% reduction in task time. The proposed method has proven to improve rescuers' immersive experience when controlling a complex robotic system.
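To picture the ROS-Unity guidance loop described in the abstract, where zones of interest selected through the HoloLens interface drive the quadruped's high-level navigation, the following is a minimal sketch. It assumes a rosbridge-style link between the Unity/HoloLens front end and ROS; the topic name /mr_interface/selected_zone, the message types, and the direct relay to a move_base-style goal topic are illustrative assumptions, not the authors' actual implementation.

```python
#!/usr/bin/env python
# Illustrative sketch only: relays zone-of-interest points selected in a
# HoloLens/Unity interface (e.g. via rosbridge) to the robot's navigation
# stack as goal poses. Topic names and message types are assumptions.
import rospy
from geometry_msgs.msg import PointStamped, PoseStamped


class MRGuidanceRelay(object):
    def __init__(self):
        # Goal publisher for a typical move_base-style navigation stack.
        self.goal_pub = rospy.Publisher(
            "/move_base_simple/goal", PoseStamped, queue_size=1)
        # Hypothetical topic published by the Unity/HoloLens front end.
        rospy.Subscriber(
            "/mr_interface/selected_zone", PointStamped, self.on_zone)

    def on_zone(self, msg):
        # Convert the selected zone of interest into a navigation goal.
        goal = PoseStamped()
        goal.header = msg.header
        goal.pose.position = msg.point
        goal.pose.orientation.w = 1.0  # keep the current heading
        self.goal_pub.publish(goal)
        rospy.loginfo("Relayed MR-selected zone (%.2f, %.2f) as nav goal",
                      msg.point.x, msg.point.y)


if __name__ == "__main__":
    rospy.init_node("mr_guidance_relay")
    MRGuidanceRelay()
    rospy.spin()
```

In a setup of this kind, the Unity side only needs to publish a point in the robot's map frame; the victim-detection pipeline (thermal/RGB imaging plus neural networks) would run in separate nodes triggered once each goal is reached.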
Journal description:
The Journal of Computational Design and Engineering is an international journal that aims to provide academia and industry with a venue for the rapid publication of research papers reporting innovative computational methods and applications that achieve major breakthroughs, practical improvements, and bold new research directions within a wide range of design and engineering topics:
• Theory and its progress in computational advancement for design and engineering
• Development of computational frameworks to support large-scale design and engineering
• Interaction issues among human, designed artifacts, and systems
• Knowledge-intensive technologies for intelligent and sustainable systems
• Emerging technology and convergence of technology fields presented with convincing design examples
• Educational issues for academia, practitioners, and future generations
• Proposals on new research directions, as well as surveys and retrospectives on mature fields.