RAD: A Robust Algae Detection Solution to the IEEE UV 2022 "Vision Meets Algae" Object Detection Challenge
Ye Zheng, Bo Wang
2022 6th International Conference on Universal Village (UV), published 2022-10-22
DOI: 10.1109/UV56588.2022.10185496 (https://doi.org/10.1109/UV56588.2022.10185496)
Citations: 0
Abstract
This article introduces the solution of the "MicroalgaeDetector" team for the IEEE UV 2022 Vision Meets Algae Object Detection Challenge. The challenge focuses on developing computer vision algorithms that automatically detect marine microalgae in microscopy images. Localization and identification of microalgae are expected to be accomplished concurrently during image analysis, which simplifies downstream cell analysis and lays the groundwork for algae identification that combines image data with biomorphological traits. In this competition, we observe that the training dataset suffers from severe class imbalance, with some classes having very few samples, which greatly limits the performance of both single-stage and multi-stage detectors. The dataset also contains tiny objects in high-resolution images and serious bounding-box annotation inconsistencies. To address these challenges of few samples, unbalanced categories, noisy annotations, and small objects, we propose a robust and high-performance algae detection method (RAD) that precisely localizes and identifies marine microalgae in microscopy images. In RAD, we develop a class-specific copy-paste strategy that performs instance-level re-sampling to resolve the data imbalance. We also introduce several training/inference strategies and a bag of tricks, each of which brings a modest performance boost. To further increase robustness, we train multiple expert models and ensemble them. Our RAD wins the competition, achieving 58.192% mAP on the test dataset.
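The abstract names a class-specific copy-paste strategy for instance-level re-sampling but does not spell out its mechanics. The sketch below shows one plausible form of such augmentation, assuming a pre-built bank of cropped rare-class instances; the names `instance_bank` and `rare_classes` are hypothetical and not from the paper, and the naive overwrite paste stands in for whatever blending the authors actually use.

```python
import random
import numpy as np

def class_specific_copy_paste(image, boxes, labels, instance_bank, rare_classes,
                              max_paste=3, rng=None):
    """Instance-level re-sampling sketch: paste crops of rare-class
    instances onto a training image at random positions, appending the
    corresponding boxes and class labels.

    instance_bank: dict mapping class id -> list of HxWxC instance crops
    rare_classes:  class ids to over-sample (assumed chosen beforehand)
    """
    rng = rng or random.Random(0)
    h, w = image.shape[:2]
    boxes, labels = list(boxes), list(labels)
    candidates = [c for c in rare_classes if instance_bank.get(c)]
    for _ in range(max_paste):
        if not candidates:
            break
        cls = rng.choice(candidates)
        patch = rng.choice(instance_bank[cls])
        ph, pw = patch.shape[:2]
        if ph >= h or pw >= w:
            continue  # crop does not fit in this image
        y0 = rng.randrange(0, h - ph)
        x0 = rng.randrange(0, w - pw)
        image[y0:y0 + ph, x0:x0 + pw] = patch  # naive overwrite paste
        boxes.append([x0, y0, x0 + pw, y0 + ph])
        labels.append(cls)
    return image, np.array(boxes), np.array(labels)
```

In practice the paste would be combined with occlusion checks and photometric blending so pasted instances match the background; this minimal version only illustrates how re-sampling raises the effective count of rare-class instances.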
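The abstract also mentions ensembling multiple expert models without detailing how their outputs are merged. A common approach for detection ensembles is to pool all boxes and apply score-ordered, class-aware non-maximum suppression across models; the sketch below implements that generic scheme and is not the authors' exact fusion method.

```python
import numpy as np

def ensemble_detections(per_model_dets, iou_thresh=0.5):
    """Merge detections from several models for one image by greedy
    score-ordered NMS over the pooled boxes.

    per_model_dets: list (one entry per model) of detections, each
    detection being (x1, y1, x2, y2, score, class_id).
    """
    dets = np.array([d for dets in per_model_dets for d in dets], dtype=float)
    if dets.size == 0:
        return dets.reshape(0, 6)
    dets = dets[dets[:, 4].argsort()[::-1]]  # highest score first
    keep = []
    while len(dets):
        best, dets = dets[0], dets[1:]
        keep.append(best)
        if not len(dets):
            break
        # IoU of the kept box against the remaining ones
        xx1 = np.maximum(best[0], dets[:, 0])
        yy1 = np.maximum(best[1], dets[:, 1])
        xx2 = np.minimum(best[2], dets[:, 2])
        yy2 = np.minimum(best[3], dets[:, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_b = (best[2] - best[0]) * (best[3] - best[1])
        areas = (dets[:, 2] - dets[:, 0]) * (dets[:, 3] - dets[:, 1])
        iou = inter / (area_b + areas - inter + 1e-9)
        # suppress only same-class overlaps
        dets = dets[~((iou > iou_thresh) & (dets[:, 5] == best[5]))]
    return np.vstack(keep)
```

Alternatives such as weighted box fusion average the coordinates of overlapping boxes instead of discarding them, which often performs better for ensembles; the NMS variant is shown here only because it is the simplest well-known baseline.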