{"title":"Underwater Object Detection Using Synthetic Data","authors":"Nitish Reddy Nandyala, Rakesh Kumar Sanodiya","doi":"10.1109/ESDC56251.2023.10149870","DOIUrl":null,"url":null,"abstract":"The ability to detect life in challenging underwater environments holds the potential to preserve many aquatic species and coral reefs. Recent object detection research has witnessed a remarkable upsurge in natural images but not in Underwater, due to the imbalanced lighting, inadequate contrast, frequent occlusions, and the mimicry displayed by aquatic life forms. The assessment of object recognition models utilized in various contexts has augmented the need for annotated datasets. Due to the labor-intensive nature of generating these datasets, we have opted to undertake training using synthetic images as an alternative. In this study, we train the cutting-edge YOLO object detection system on a synthetic underwater dataset, with the aim of achieving category-agnostic object detection and then evaluated through practical assessments conducted on real underwater images. In addition, we provide benchmarking results for different YOLO versions in this work, assessing their performance on both real-world and synthetic datasets. Our investigation reveal that YOLOv5 shines in its ability to perform on synthetic data, whereas the latest YOLOv8, excels in real data domains, outpacing other two models tested. These findings have far reaching implications for the design and development of object detection in underwater environments.","PeriodicalId":354855,"journal":{"name":"2023 11th International Symposium on Electronic Systems Devices and Computing (ESDC)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 11th International Symposium on Electronic Systems Devices and Computing (ESDC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ESDC56251.2023.10149870","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
The ability to detect life in challenging underwater environments holds the potential to preserve many aquatic species and coral reefs. Recent object detection research has witnessed a remarkable upsurge on natural images, but not on underwater imagery, owing to imbalanced lighting, inadequate contrast, frequent occlusions, and the mimicry displayed by aquatic life forms. The assessment of object recognition models used in various contexts has increased the need for annotated datasets. Because generating such datasets is labor-intensive, we opt to train on synthetic images instead. In this study, we train the state-of-the-art YOLO object detection system on a synthetic underwater dataset with the aim of achieving category-agnostic object detection, and then evaluate the trained models on real underwater images. In addition, we provide benchmarking results for different YOLO versions, assessing their performance on both real-world and synthetic datasets. Our investigation reveals that YOLOv5 performs best on synthetic data, whereas the latest YOLOv8 excels on real data, outpacing the other two models tested. These findings have far-reaching implications for the design and development of object detection in underwater environments.
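
As a rough illustration of the synthetic-to-real workflow the abstract describes (train on synthetic underwater images, evaluate on real ones, category-agnostic detection), the sketch below uses the Ultralytics YOLOv8 API. This is a minimal sketch under assumptions: the dataset YAML files named here are hypothetical placeholders, not the authors' actual data, and the hyperparameters are illustrative defaults rather than the paper's settings.

```python
# Sketch: train YOLOv8 on a synthetic underwater dataset, then validate on real images.
# Assumes the `ultralytics` package is installed; dataset YAML paths are hypothetical.
from ultralytics import YOLO

# Start from a pretrained checkpoint and fine-tune on synthetic data.
# single_cls=True collapses all annotations into one class, approximating
# the category-agnostic ("objectness only") detection described in the abstract.
model = YOLO("yolov8n.pt")
model.train(
    data="synthetic_underwater.yaml",  # hypothetical synthetic-dataset config
    epochs=100,
    imgsz=640,
    single_cls=True,
)

# Evaluate the synthetic-trained model on a real underwater image split.
metrics = model.val(data="real_underwater.yaml")  # hypothetical real-dataset config
print(metrics.box.map50)  # mAP@0.5 on the real images
```

A comparable benchmark of YOLOv5 would follow the same train-on-synthetic / validate-on-real pattern using that version's own training scripts.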