{"title":"基于计算机视觉的自主水面车辆在海上伤亡搜救过程中的受害者检测","authors":"Achmad Zidan Akbar, C. Fatichah, Rudy Dikairono","doi":"10.1109/CENIM56801.2022.10037319","DOIUrl":null,"url":null,"abstract":"An Autonomous Surface Vehicle (ASV) is proposed to help the search process of maritime accident victims. A catamaran-type hull is implemented on the ASV for its stability. The ASV fix-mounted the electric propulsion system T200 Thruster on the stern of the ASV. Robot Operating System (ROS) is implemented on the ASV main software architecture. The ASV uses sensors such as a Global Positioning System (GPS), compass, Inertial Measurement Unit (IMU), and gyroscope to detect the state of the ASV. The ASV also uses ultrasonic sensors for obstacle avoidance. To interface with the actuators, a microcontroller STM32F4 is used. You Only Look Once (YOLO)v4 as Convolutional Neural Network (CNN) Architecture was used for the victim detection that was running on Nvidia RTX 2060 Mobile. The navigation system of the ASV performs well despite the noise from the sensor. The ASV is also capable of avoiding obstacles when moving at low speed. Dataset annotation was done manually from the images taken in Danau 8 Institut Teknologi Sepuluh Nopember (ITS). YOLOv4 gives an accuracy of 0.840203. Optimizing the YOLOv4 model from the darknet model to TensorRT increases the inference speed from 27 FPS to 85 FPS.","PeriodicalId":118934,"journal":{"name":"2022 International Conference on Computer Engineering, Network, and Intelligent Multimedia (CENIM)","volume":"88 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Autonomous Surface Vehicle in Search and Rescue Process of Marine Casualty using Computer Vision Based Victims Detection\",\"authors\":\"Achmad Zidan Akbar, C. Fatichah, Rudy Dikairono\",\"doi\":\"10.1109/CENIM56801.2022.10037319\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"An Autonomous Surface Vehicle (ASV) is proposed to help the search process of maritime accident victims. A catamaran-type hull is implemented on the ASV for its stability. The ASV fix-mounted the electric propulsion system T200 Thruster on the stern of the ASV. Robot Operating System (ROS) is implemented on the ASV main software architecture. The ASV uses sensors such as a Global Positioning System (GPS), compass, Inertial Measurement Unit (IMU), and gyroscope to detect the state of the ASV. The ASV also uses ultrasonic sensors for obstacle avoidance. To interface with the actuators, a microcontroller STM32F4 is used. You Only Look Once (YOLO)v4 as Convolutional Neural Network (CNN) Architecture was used for the victim detection that was running on Nvidia RTX 2060 Mobile. The navigation system of the ASV performs well despite the noise from the sensor. The ASV is also capable of avoiding obstacles when moving at low speed. Dataset annotation was done manually from the images taken in Danau 8 Institut Teknologi Sepuluh Nopember (ITS). YOLOv4 gives an accuracy of 0.840203. 
Optimizing the YOLOv4 model from the darknet model to TensorRT increases the inference speed from 27 FPS to 85 FPS.\",\"PeriodicalId\":118934,\"journal\":{\"name\":\"2022 International Conference on Computer Engineering, Network, and Intelligent Multimedia (CENIM)\",\"volume\":\"88 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-11-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 International Conference on Computer Engineering, Network, and Intelligent Multimedia (CENIM)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CENIM56801.2022.10037319\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Conference on Computer Engineering, Network, and Intelligent Multimedia (CENIM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CENIM56801.2022.10037319","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
An Autonomous Surface Vehicle (ASV) is proposed to assist the search for victims of maritime accidents. The ASV uses a catamaran-type hull for stability, with a T200 thruster electric propulsion system fixed to the stern. The main software architecture is built on the Robot Operating System (ROS). The ASV estimates its own state with a Global Positioning System (GPS) receiver, a compass, an Inertial Measurement Unit (IMU), and a gyroscope, and uses ultrasonic sensors for obstacle avoidance. An STM32F4 microcontroller interfaces with the actuators. Victim detection uses the You Only Look Once (YOLO)v4 Convolutional Neural Network (CNN) architecture running on an Nvidia RTX 2060 Mobile GPU. The navigation system performs well despite sensor noise, and the ASV is able to avoid obstacles when moving at low speed. The dataset was annotated manually from images taken at Danau 8, Institut Teknologi Sepuluh Nopember (ITS). YOLOv4 achieves an accuracy of 0.840203, and optimizing the model from Darknet to TensorRT increases the inference speed from 27 FPS to 85 FPS.
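
As a rough illustration of the detection stage described in the abstract (not the authors' implementation), the minimal Python sketch below loads a YOLOv4 model in Darknet format with OpenCV's DNN module and runs it on a single camera frame. The file names yolov4-victim.cfg and yolov4-victim.weights, the 416x416 input size, and the 0.5/0.4 thresholds are placeholder assumptions; the 85 FPS figure reported in the paper comes from a separate TensorRT-optimized engine rather than this OpenCV path.

import cv2

# Placeholder paths for the trained victim-detection model (assumed names, not from the paper).
CFG_PATH = "yolov4-victim.cfg"
WEIGHTS_PATH = "yolov4-victim.weights"

# Load the Darknet-format YOLOv4 network with OpenCV's DNN module.
net = cv2.dnn.readNetFromDarknet(CFG_PATH, WEIGHTS_PATH)

# Prefer the CUDA backend when OpenCV is built with CUDA support
# (the paper runs detection on an Nvidia RTX 2060 Mobile GPU).
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

# Wrap the network in a detection model; 416x416 input and 1/255 scaling
# are common YOLOv4 defaults, assumed here rather than taken from the paper.
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1.0 / 255.0, swapRB=True)

# Grab one frame from the on-board camera and report victim candidates.
capture = cv2.VideoCapture(0)
ok, frame = capture.read()
if ok:
    class_ids, scores, boxes = model.detect(frame, confThreshold=0.5, nmsThreshold=0.4)
    for class_id, score, (x, y, w, h) in zip(class_ids, scores, boxes):
        print(f"victim candidate: class={int(class_id)}, score={float(score):.2f}, box=({x}, {y}, {w}, {h})")
capture.release()

For the speedup reported in the abstract, the Darknet weights would additionally be converted to a TensorRT engine (for example via an ONNX export), which is outside the scope of this sketch.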