Firefighting Robot Extinguishment Decision-Making Based on Visual Guidance: A Novel Attention and Scale U-Net Model and Genetic Algorithm

Juxian Zhao, Wei Li, Jinsong Zhu, Zhigang Gao, Lu Pan, Zhongguan Liu

Journal of Field Robotics, vol. 42, no. 4, pp. 1103-1124. Published 2024-09-23. DOI: 10.1002/rob.22438 (https://onlinelibrary.wiley.com/doi/10.1002/rob.22438).
At present, rescue firefighting relies mainly on firefighting robots, and robots with perception and decision-making capabilities are key to achieving intelligent firefighting. However, traditional firefighting robots often lack autonomous perception and decision-making when extinguishing multiple fire sources, which lowers rescue efficiency and increases the risk to rescue personnel; making firefighting decisions in extreme fire scenes is particularly challenging. To handle firefighting tasks effectively and ensure operational efficiency, a robot firefighting decision-making method based on drone visual guidance is proposed. First, we introduce a novel Attention and Scale U-Net (ASUNet) model to accurately capture crucial target information in a fire scene, including fire location and size. The ASUNet model adopts an effective multiscale fusion strategy and an attention mechanism to enhance its performance. Subsequently, based on the ASUNet results, we obtain the robot's firefighting decisions through pixel-segmentation clustering and a genetic optimization algorithm, thereby guiding the robot to carry out firefighting operations systematically. Finally, numerical experiments verify that the proposed ASUNet model is superior and effective: it perceives and extracts important information in a fire scene well, and the improved genetic optimization further accelerates algorithm convergence. To our knowledge, this study is the first to use drone-based monocular vision guidance for firefighting decision-making, providing significant engineering value.
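The abstract does not detail ASUNet's layer configuration. As a rough illustration of the two ideas it names, the PyTorch sketch below shows a generic attention-gated skip connection and a multiscale fusion head as such components are commonly built; `AttentionGate` and `MultiScaleFusion` are hypothetical module names and channel choices, not the paper's actual architecture.

```python
# Hypothetical sketch of attention-gated skips and multiscale fusion,
# in the spirit of the ASUNet description; not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionGate(nn.Module):
    """Weights encoder skip features using the coarser decoder (gating) signal."""

    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)
        self.phi = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)

    def forward(self, skip, gate):
        # Bring the gating signal to the skip feature's spatial size.
        gate = F.interpolate(self.phi(gate), size=skip.shape[2:],
                             mode="bilinear", align_corners=False)
        attn = torch.sigmoid(self.psi(F.relu(self.theta(skip) + gate)))
        return skip * attn  # suppress background, keep fire-relevant responses


class MultiScaleFusion(nn.Module):
    """Fuses decoder features from several scales into one segmentation map."""

    def __init__(self, in_chs, out_ch=1):
        super().__init__()
        self.heads = nn.ModuleList(
            [nn.Conv2d(c, out_ch, kernel_size=1) for c in in_chs])

    def forward(self, feats, out_size):
        maps = [F.interpolate(h(f), size=out_size, mode="bilinear",
                              align_corners=False)
                for h, f in zip(self.heads, feats)]
        return torch.sigmoid(torch.stack(maps, dim=0).mean(dim=0))
```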
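The decision step is likewise only summarized in the abstract. Under the assumption that fire sources are taken as connected regions of the segmentation mask and ordered by travel cost, a minimal clustering-plus-GA sketch might look as follows; `fire_centroids` and `ga_order` are hypothetical helpers, standard order crossover and swap mutation stand in for the paper's improved operators, and the pixel-to-ground coordinate transform from the drone view is omitted.

```python
# Hypothetical sketch: extract fire-source centroids from a binary mask and
# order them with a simple genetic algorithm (TSP-style visiting order).
import numpy as np
from scipy import ndimage


def fire_centroids(mask):
    """Group fire pixels into connected regions and return centroids (row, col)."""
    labels, n = ndimage.label(mask > 0)
    return np.array(ndimage.center_of_mass(mask, labels, range(1, n + 1)))


def route_length(order, points, start):
    """Total path length from the start position through the ordered centroids."""
    pts = np.vstack([start, points[order]])
    return float(np.linalg.norm(np.diff(pts, axis=0), axis=1).sum())


def ga_order(points, start, pop_size=60, generations=200, seed=None):
    """Search for a low-cost visiting order over the fire sources."""
    rng = np.random.default_rng(seed)
    n = len(points)
    pop = [rng.permutation(n) for _ in range(pop_size)]
    for _ in range(generations):
        costs = np.array([route_length(p, points, start) for p in pop])
        ranked = [pop[i] for i in np.argsort(costs)]
        new_pop = ranked[:2]                              # elitism: keep the two best
        while len(new_pop) < pop_size:
            i1, i2 = rng.integers(0, pop_size // 2, size=2)
            a, b = ranked[i1], ranked[i2]                 # parents from the better half
            c0, c1 = sorted(rng.integers(0, n + 1, size=2))
            child = np.full(n, -1)
            child[c0:c1] = a[c0:c1]                       # order crossover segment
            child[child == -1] = [g for g in b if g not in a[c0:c1]]
            if rng.random() < 0.2:                        # swap mutation
                i, j = rng.integers(0, n, size=2)
                child[i], child[j] = child[j], child[i]
            new_pop.append(child)
        pop = new_pop
    costs = np.array([route_length(p, points, start) for p in pop])
    return pop[int(np.argmin(costs))]
```

Given a mask from the segmentation model and a robot start position in the same coordinate frame, `ga_order(fire_centroids(mask), start)` returns an index order over the detected fire sources that the robot could follow.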
Journal introduction:
The Journal of Field Robotics seeks to promote scholarly publications dealing with the fundamentals of robotics in unstructured and dynamic environments.
The Journal focuses on experimental robotics and encourages publication of work that has both theoretical and practical significance.