Adaptive Foreground Extraction for Deep Fish Classification
N. Seese, A. Myers, Kaleb E. Smith, Anthony O. Smith
2016 ICPR 2nd Workshop on Computer Vision for Analysis of Underwater Imagery (CVAUI), December 2016
DOI: 10.1109/CVAUI.2016.016
Citations: 7
Abstract
Despite recent advances in computer vision and the proliferation of applications for tracking, image classification, and video analysis, little applied work has been done to improve techniques for underwater video. Object detection and classification in underwater environments is critical in domains such as marine biology, where scientists study populations of underwater species. Most applications assume either a static background or motion that can be accounted for by a constant offset. Existing state-of-the-art algorithms perform well under controlled conditions, but when applied to underwater video of an unconstrained real-world environment they suffer substantial performance degradation. In this work, we implement a system that performs foreground extraction on streaming underwater video for fish classification with a convolutional neural network. Our goal is to detect and classify objects accurately in real time by exploiting the parallel computing capability of the graphics processing unit (GPU). GPU-accelerated computing is well suited to video analysis and provides a platform for real-time processing. We evaluate performance on standard benchmark video datasets, both for scene complexity and for detection and classification accuracy.
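To make the described pipeline concrete, the sketch below shows one way such a system could be wired together. It is not the authors' implementation: OpenCV's MOG2 adaptive background subtractor stands in for the paper's adaptive foreground extraction, and TinyFishCNN is a placeholder classifier with an assumed input size and class count; the video path and area threshold are illustrative parameters only.

```python
# Minimal sketch (not the paper's code): adaptive background subtraction
# followed by CNN classification of each foreground blob, run on the GPU
# when one is available. All names below are illustrative assumptions.
import cv2
import torch
import torch.nn as nn

DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")


class TinyFishCNN(nn.Module):
    """Placeholder CNN; the paper's actual architecture is not specified here."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))


def classify_foreground(video_path: str, min_area: int = 500) -> None:
    """Extract moving foreground blobs per frame and classify each crop."""
    model = TinyFishCNN().to(DEVICE).eval()
    # Adaptive per-pixel mixture model: slow scene changes (lighting shifts,
    # drifting vegetation) are absorbed into the background over time.
    subtractor = cv2.createBackgroundSubtractorMOG2(
        history=500, varThreshold=16, detectShadows=True)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) < min_area:
                continue  # drop small noise blobs
            x, y, w, h = cv2.boundingRect(c)
            crop = cv2.resize(frame[y:y + h, x:x + w], (64, 64))
            tensor = (torch.from_numpy(crop).permute(2, 0, 1)
                      .float().unsqueeze(0).to(DEVICE) / 255.0)
            with torch.no_grad():
                label = model(tensor).argmax(1).item()
            print(f"blob at ({x},{y}): class {label}")
    cap.release()
```

Moving both the model and each cropped tensor to `DEVICE` mirrors the paper's emphasis on GPU-accelerated, real-time processing, while the background model handles the gradual scene changes that a static-background assumption would miss.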