Latest Publications: 2016 ICPR 2nd Workshop on Computer Vision for Analysis of Underwater Imagery (CVAUI)

Live Tracking of Rail-Based Fish Catching on Wild Sea Surface
Tsung-Wei Huang, Jenq-Neng Hwang, S. Romain, Farron Wallace
{"title":"Live Tracking of Rail-Based Fish Catching on Wild Sea Surface","authors":"Tsung-Wei Huang, Jenq-Neng Hwang, S. Romain, Farron Wallace","doi":"10.1109/CVAUI.2016.017","DOIUrl":"https://doi.org/10.1109/CVAUI.2016.017","url":null,"abstract":"Automated video analysis in fishery has drawn increasing attention since it is more scalable and deployable in conducting survey, such as fish catch tracking and size measurement, than traditional human observers. However, there are challenges from the wild sea environment, such as the rapid motion of the tide and the white water foam on the surface, which can create large noise in video data. In this work, we present an innovative method for live tracking of rail-based fish catching by combining background subtraction and motion trajectories techniques in highly noisy sea surface environment. First, the foreground masks, which consist of both fish and tide-blob noise, are obtained using background subtraction. Then, the fish are tracked and separated from noise based on their trajectories, and their boundaries are further refined with histogram of optical flow. Finally, the segmentation is acquired with a dense conditional random field (CRF) in which the optical flow on trajectories are transformed and served as feature vectors for calculating the pairwise potential. Our experimental results demonstrate that the trajectories and the feature vectors from optical flow greatly improve the tracking performance.","PeriodicalId":169345,"journal":{"name":"2016 ICPR 2nd Workshop on Computer Vision for Analysis of Underwater Imagery (CVAUI)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124390182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
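As an illustration of the pipeline sketched in the abstract, the fragment below combines background subtraction with a trajectory test that separates fish from tide-blob noise. It is a minimal sketch, assuming OpenCV's MOG2 subtractor as a stand-in for the paper's background-subtraction stage; the input file name, association step, and thresholds are illustrative placeholders, and the histogram-of-optical-flow refinement and dense CRF segmentation are omitted.

```python
import cv2
import numpy as np

# Background subtraction stage: MOG2 is used here as a generic stand-in.
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

tracks = {}          # track_id -> list of (cx, cy) centroids
next_id = 0

def associate(centroid, tracks, max_dist=40.0):
    """Greedy nearest-neighbour association of a centroid to an existing track."""
    best_id, best_d = None, max_dist
    for tid, pts in tracks.items():
        d = np.hypot(centroid[0] - pts[-1][0], centroid[1] - pts[-1][1])
        if d < best_d:
            best_id, best_d = tid, d
    return best_id

cap = cv2.VideoCapture("rail_camera.mp4")   # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]   # drop shadow pixels
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 100:        # ignore tiny foreground specks
            continue
        x, y, w, h = cv2.boundingRect(c)
        centroid = (x + w / 2.0, y + h / 2.0)
        tid = associate(centroid, tracks)
        if tid is None:
            tid, next_id = next_id, next_id + 1
            tracks[tid] = []
        tracks[tid].append(centroid)

# Simplified fish/noise separation: fish slide along the rail, so their tracks
# are long and consistently directed, whereas tide-blob noise is short-lived.
def is_fish(pts, min_len=15, min_disp=80.0):
    if len(pts) < min_len:
        return False
    disp = np.hypot(pts[-1][0] - pts[0][0], pts[-1][1] - pts[0][1])
    return disp > min_disp

fish_tracks = {tid: pts for tid, pts in tracks.items() if is_fish(pts)}
```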
Closed-Loop Tracking-by-Detection for ROV-Based Multiple Fish Tracking
Gaoang Wang, Jenq-Neng Hwang, K. Williams, G. Cutter
{"title":"Closed-Loop Tracking-by-Detection for ROV-Based Multiple Fish Tracking","authors":"Gaoang Wang, Jenq-Neng Hwang, K. Williams, G. Cutter","doi":"10.1109/CVAUI.2016.014","DOIUrl":"https://doi.org/10.1109/CVAUI.2016.014","url":null,"abstract":"Fish abundance estimation with the aid of visual analysis has drawn increasing attention based on the underwater videos from a remotely-operated vehicle (ROV). We build a novel fish tracking and counting system followed by tracking-by-detection framework. Since fish may keep entering or leaving the field of view (FOV), an offline trained deformable part model (DPM) fish detector is adopted to detect live fish from video data. Besides that, a multiple kernel tracking approach is used to associate the same object across consecutive frames for fish counting purpose. However, due to the diversity of fish poses, the deformation of fish body shape and the color similarity between fish and background, the detection performance greatly decreases, resulting in a large error in tracking and counting. To deal with such issue, we propose a closed-loop mechanism between tracking and detection. First, we arrange detection results into tracklets and extract motion features from arranged tracklets. A Bayesian classifier is then applied to remove unreliable detections. Finally, the tracking results are modified based on the reliable detections. This proposed strategy effectively addresses the false detection problem and largely decreases the tracking error. Favorable performance is achieved by our proposed closed-loop between tracking and detection on the real-world ROV videos.","PeriodicalId":169345,"journal":{"name":"2016 ICPR 2nd Workshop on Computer Vision for Analysis of Underwater Imagery (CVAUI)","volume":"267 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114589550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 17
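The closed-loop idea, filtering detections with a Bayesian classifier applied to tracklet motion features, can be sketched as follows. This is a toy illustration assuming scikit-learn's GaussianNB as the Bayesian classifier and hand-picked motion features (step speed and direction spread); the synthetic training tracklets are placeholders, and the DPM detector and multiple-kernel tracker themselves are not reproduced.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def motion_features(tracklet):
    """Simple motion features for a tracklet given as an (N, 2) array of centroids:
    mean and spread of the step length, plus variance of the step direction."""
    steps = np.diff(tracklet, axis=0)
    speeds = np.linalg.norm(steps, axis=1)
    angles = np.arctan2(steps[:, 1], steps[:, 0])
    return np.array([speeds.mean(), speeds.std(), np.var(np.unwrap(angles))])

# Hypothetical training data: tracklets labelled 1 (reliable fish) or 0 (false detection).
rng = np.random.default_rng(0)
fish = [np.cumsum(rng.normal([3, 0], 0.5, (20, 2)), axis=0) for _ in range(50)]
noise = [np.cumsum(rng.normal(0.0, 3.0, (20, 2)), axis=0) for _ in range(50)]
X = np.vstack([motion_features(t) for t in fish + noise])
y = np.array([1] * 50 + [0] * 50)

clf = GaussianNB().fit(X, y)             # the Bayesian classifier of the closed loop

def filter_detections(tracklets, clf):
    """Keep only detections whose tracklet the classifier deems reliable;
    the tracker would then be re-run on the surviving detections."""
    return [t for t in tracklets
            if clf.predict(motion_features(t).reshape(1, -1))[0] == 1]
```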
Adaptive Foreground Extraction for Deep Fish Classification
N. Seese, A. Myers, Kaleb E. Smith, Anthony O. Smith
{"title":"Adaptive Foreground Extraction for Deep Fish Classification","authors":"N. Seese, A. Myers, Kaleb E. Smith, Anthony O. Smith","doi":"10.1109/CVAUI.2016.016","DOIUrl":"https://doi.org/10.1109/CVAUI.2016.016","url":null,"abstract":"Despite the recent advances in computer vision and the proliferation of applications for tracking, image classification, and video analysis, very little applied work has been done to improve techniques for underwater video. Object detection and classification for underwater environments is critical in domains like marine biology, where scientist study populations of underwater species. Most applications assume either a static background, or movement that can be accounted for by some constant offset. Existing state-of-the-art algorithms perform well under controlled conditions, but when applied to underwater video of an unconstrained real world environment, they suffer a substantial performance degradation. In this work, we implement a system that performs foreground extraction on streaming underwater video for fish classification using a convolutional neural network. Our goal is to accurately detect and classify objects in real-time utilizing graphics processing unit (GPU) parallel computing capability. GPU accelerated computing is the ideal hardware technology for video analysis that provides a platform for real-time processing. We evaluate our performance on standard benchmark video datasets, specifically for scene complexity, and for detection and classification accuracy.","PeriodicalId":169345,"journal":{"name":"2016 ICPR 2nd Workshop on Computer Vision for Analysis of Underwater Imagery (CVAUI)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133919199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
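A compressed sketch of the foreground-extraction-plus-CNN pipeline, assuming PyTorch for the classifier and OpenCV background subtraction for the foreground stage; the network layout, input size, and class count are placeholders, not the architecture used in the paper.

```python
import cv2
import torch
import torch.nn as nn

NUM_SPECIES = 10          # placeholder class count

# Small placeholder CNN; the paper's actual architecture is not reproduced here.
# Move the model and input tensors to "cuda" for the GPU throughput the paper targets.
classifier = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, NUM_SPECIES),
)
classifier.eval()

subtractor = cv2.createBackgroundSubtractorMOG2()

def classify_frame(frame):
    """Extract foreground blobs from one BGR frame and run each crop through the CNN."""
    mask = subtractor.apply(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    labels = []
    for c in contours:
        if cv2.contourArea(c) < 200:      # ignore small foreground noise
            continue
        x, y, w, h = cv2.boundingRect(c)
        crop = cv2.resize(frame[y:y + h, x:x + w], (64, 64))
        tensor = torch.from_numpy(crop).float().permute(2, 0, 1).unsqueeze(0) / 255.0
        with torch.no_grad():
            labels.append(int(classifier(tensor).argmax(dim=1)))
    return labels
```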
Polyp Activity Estimation and Monitoring for Cold Water Corals with a Deep Learning Approach
Jonas Osterloff, I. Nilssen, Johanna Jarnegren, P. Buhl-Mortensen, T. Nattkemper
{"title":"Polyp Activity Estimation and Monitoring for Cold Water Corals with a Deep Learning Approach","authors":"Jonas Osterloff, I. Nilssen, Johanna Jarnegren, P. Buhl-Mortensen, T. Nattkemper","doi":"10.1109/CVAUI.2016.013","DOIUrl":"https://doi.org/10.1109/CVAUI.2016.013","url":null,"abstract":"Fixed underwater observatories (FUOs) equipped with a variety of sensors including cameras, allow long-term monitoring with a high temporal resolution of a limited area of interest. FUOs equipped with HD cameras enable in situ monitoring of biological activity, such as live cold-water corals on a level of detail down to individual polyps. We present a workflow which allows monitoring the activity of cold water coral polyps automatically from photos recorded at the FUO LoVe (Lofoten - Vesterålen). The workflow consists of three steps: First the manual polyp activity-level identification, carried out by three observers on a region of interest in 13 images to generate a gold standard. Second, the training of a convolutional neural network (CNN) on the gold standard to automate the polyp activity classification. Third, the computational activity classification is integrated into an algorithmic estimation of polyp activity in a region of interest. We present results obtained for an image series from April to November 2015 that shows interesting temporal behavior patterns correlating with other posterior measurements.","PeriodicalId":169345,"journal":{"name":"2016 ICPR 2nd Workshop on Computer Vision for Analysis of Underwater Imagery (CVAUI)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129451605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
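The bookend steps of such a workflow, building a gold standard from several observers and aggregating per-polyp CNN decisions into an ROI-level activity estimate, might look like the sketch below. The majority-vote gold standard and the fraction-of-active-polyps score are simplified assumptions, not the authors' exact definitions.

```python
import numpy as np

def gold_standard(labels_by_observer):
    """Majority vote over the observers' activity labels (1 = active, 0 = inactive)
    for one polyp in one image."""
    return int(np.round(np.mean(labels_by_observer)))

def roi_activity(polyp_predictions):
    """Aggregate per-polyp CNN predictions into one activity estimate for the
    region of interest: the fraction of polyps predicted active."""
    polyp_predictions = np.asarray(polyp_predictions)
    return polyp_predictions.mean() if polyp_predictions.size else float("nan")

# Hypothetical image series: one list of per-polyp CNN predictions per time point.
series = {
    "2015-04-01": [1, 1, 0, 1, 0],
    "2015-04-02": [0, 0, 0, 1, 0],
    "2015-04-03": [1, 1, 1, 1, 0],
}
activity_curve = {t: roi_activity(p) for t, p in series.items()}
print(activity_curve)   # temporal activity pattern over the monitored ROI
```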
Plankton Image Classification Based on Multiple Segmentations
N. Hirata, M. A. Fernandez, R. Lopes
{"title":"Plankton Image Classification Based on Multiple Segmentations","authors":"N. Hirata, M. A. Fernandez, R. Lopes","doi":"10.1109/CVAUI.2016.022","DOIUrl":"https://doi.org/10.1109/CVAUI.2016.022","url":null,"abstract":"Due to image quality related issues, classification of plankton images, particularly of those collected in situ, strongly relies on shape features. Thus, image segmentation is a critical step in the classification pipeline. In general, the segmentation algorithm that leads to the best overall classification accuracy does not necessarily imply best classification accuracy with respect to each of the individual classes. In addition, in real time applications, changes in the environment or in the image acquisition devices require fast adjustments in the classification pipeline. Customizing segmentation algorithms for each situation may demand considerable effort. Motivated by these issues, we address the problem of using multiple segmentation algorithms and letting the classifier decide how to make best use of them. Some case studies and results are presented and discussed.","PeriodicalId":169345,"journal":{"name":"2016 ICPR 2nd Workshop on Computer Vision for Analysis of Underwater Imagery (CVAUI)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133899593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
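A minimal sketch of the multiple-segmentation idea, assuming Otsu and adaptive thresholding as two example segmentation algorithms and a random forest as the classifier that decides how to use the concatenated shape features; the paper's actual segmentation algorithms and feature set are not specified here.

```python
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def shape_features(binary_mask):
    """Basic shape descriptors (area, perimeter, extent, aspect ratio) of the
    largest connected component in a binary segmentation mask."""
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return np.zeros(4)
    c = max(contours, key=cv2.contourArea)
    area = cv2.contourArea(c)
    perim = cv2.arcLength(c, True)
    x, y, w, h = cv2.boundingRect(c)
    return np.array([area, perim, area / (w * h + 1e-6), w / (h + 1e-6)])

def multi_segmentation_features(gray):
    """Concatenate shape features from several segmentation algorithms and let
    the classifier decide which ones to rely on."""
    _, otsu = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    adaptive = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                     cv2.THRESH_BINARY, 31, 5)
    return np.concatenate([shape_features(otsu), shape_features(adaptive)])

# Hypothetical training step: `images` is a list of grayscale plankton images,
# `labels` the corresponding taxon labels.
# X = np.vstack([multi_segmentation_features(img) for img in images])
# clf = RandomForestClassifier(n_estimators=200).fit(X, labels)
```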
Shape Reconstruction of Objects in Participating Media by Combining Photometric Stereo and Optical Thickness
Yuki Fujimura, M. Iiyama, Atsushi Hashimoto, M. Minoh
{"title":"Shape Reconstruction of Objects in Participating Media by Combining Photometric Stereo and Optical Thickness","authors":"Yuki Fujimura, M. Iiyama, Atsushi Hashimoto, M. Minoh","doi":"10.1109/CVAUI.2016.021","DOIUrl":"https://doi.org/10.1109/CVAUI.2016.021","url":null,"abstract":"This paper proposes a method to reconstruct the 3D shape of objects in participating media. Shape reconstruction of objects in participating media, such as water, fog, and smoke, is difficult due to light scattering, which degrades image quality. While previous methods cope with this problem by removing the scattering components from images, the proposed method estimates optical thickness from images and uses it to recover the depth of the objects in participating media. With the proposed method, a detailed 3D shape is recovered using a photometric stereo technique that was designed to work in participating media. Three-dimensional global shapes that cannot be recovered by the photometric stereo technique, such as depth edges, are recovered from optical thickness. Experimental results with real images show that the proposed method correctly reconstructs the 3D shape of objects in participating media.","PeriodicalId":169345,"journal":{"name":"2016 ICPR 2nd Workshop on Computer Vision for Analysis of Underwater Imagery (CVAUI)","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122604777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
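The two ingredients can be illustrated with their textbook formulations: classical Lambertian photometric stereo solved per pixel by least squares, and a Beer-Lambert-style conversion of attenuation into depth. This is a hedged sketch of the standard versions, not the scattering-aware formulation developed in the paper.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Classical Lambertian photometric stereo: solve I = (rho * n) . L per pixel
    by least squares. `images` is (K, H, W), `light_dirs` is (K, 3) with unit rows."""
    K, H, W = images.shape
    I = images.reshape(K, -1)                              # (K, H*W)
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)     # (3, H*W), G = rho * n
    albedo = np.linalg.norm(G, axis=0)
    normals = G / (albedo + 1e-8)
    return normals.reshape(3, H, W), albedo.reshape(H, W)

def depth_from_optical_thickness(I_obs, I_src, sigma_t):
    """Beer-Lambert-style depth from attenuation: I_obs = I_src * exp(-sigma_t * d),
    so d = -ln(I_obs / I_src) / sigma_t, with sigma_t the medium's extinction
    coefficient (assumed known or calibrated)."""
    return -np.log(np.clip(I_obs / I_src, 1e-6, 1.0)) / sigma_t
```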
Surface Stereo for Shallow Underwater Scenes
Scott Sorensen, Wayne Treible, C. Kambhamettu
{"title":"Surface Stereo for Shallow Underwater Scenes","authors":"Scott Sorensen, Wayne Treible, C. Kambhamettu","doi":"10.1109/CVAUI.2016.019","DOIUrl":"https://doi.org/10.1109/CVAUI.2016.019","url":null,"abstract":"Imaging underwater scenes with a surface based stereo system allows for reconstruction where having an underwater system is unsafe or impractical. Refraction and other optical phenomena complicate the reconstruction process, and here we analyze these factors and propose techniques for mitigating problems. Existing techniques are of limited use in practical settings, or have ignored physical properties which complicate the reconstruction task. We demonstrate that physical properties can be used to aid in reconstruction and surface modeling using optical and thermal properties of water. In this work we analyze these properties and provide a grounded example with ecological and geophysical applications.","PeriodicalId":169345,"journal":{"name":"2016 ICPR 2nd Workshop on Computer Vision for Analysis of Underwater Imagery (CVAUI)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126635943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
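One of the physical effects involved, refraction at the air-water interface, can be modeled with the vector form of Snell's law. The sketch below is a generic illustration (refractive index of roughly 1.333 for clear water), not the authors' calibration or reconstruction procedure.

```python
import numpy as np

N_AIR, N_WATER = 1.0, 1.333      # approximate refractive indices

def refract(direction, normal, n1=N_AIR, n2=N_WATER):
    """Refract a ray direction through a surface with unit normal `normal`
    (pointing toward the incident side), using the vector form of Snell's law.
    Returns None for total internal reflection."""
    d = direction / np.linalg.norm(direction)
    n = normal / np.linalg.norm(normal)
    r = n1 / n2
    c1 = -np.dot(n, d)
    k = 1.0 - r * r * (1.0 - c1 * c1)
    if k < 0.0:
        return None                      # total internal reflection
    return r * d + (r * c1 - np.sqrt(k)) * n

# Example: a camera ray hitting a flat, horizontal water surface from above.
ray = np.array([0.3, 0.0, -1.0])
surface_normal = np.array([0.0, 0.0, 1.0])
print(refract(ray, surface_normal))      # the ray bends toward the normal in water
```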
Shrinking Encoding with Two-Level Codebook Learning for Fine-Grained Fish Recognition
Gaoang Wang, Jenq-Neng Hwang, K. Williams, Farron Wallace, Craig S. Rose
{"title":"Shrinking Encoding with Two-Level Codebook Learning for Fine-Grained Fish Recognition","authors":"Gaoang Wang, Jenq-Neng Hwang, K. Williams, Farron Wallace, Craig S. Rose","doi":"10.1109/CVAUI.2016.018","DOIUrl":"https://doi.org/10.1109/CVAUI.2016.018","url":null,"abstract":"Bag-of-features (BoF) shows a great power in representing images for image classification. Many codebook learning methods have been developed to find discriminative parts of images for fine-grained recognition. Built upon BoF framework, we propose a novel approach for finegrained fish recognition with two-level codebook learning by shrinking coding coefficients. In the framework, only the maximum-valued coefficient will be maintained in the local spatial region if followed by max pooling strategy. However, the maximum-valued coefficient may result from a local descriptor which is not discriminative among fine-grained classes, resulting in difficulty in classification. In this paper, a two-level codebook is learned to represent the importance between the local descriptor and each codeword in its corresponding k-nearest neighbors. A shrinkage function is also introduced to shrink unrelated coefficients after encoding. Our experimental results show that the proposed method achieves significant performance improvement for fine-grained fish recognition tasks.","PeriodicalId":169345,"journal":{"name":"2016 ICPR 2nd Workshop on Computer Vision for Analysis of Underwater Imagery (CVAUI)","volume":"52 20","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120911773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 17
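A simplified, single-level illustration of the encoding idea: soft assignment of each descriptor to its k nearest codewords, a soft-threshold shrinkage of small coefficients, and max pooling over an image. The two-level codebook and the exact shrinkage function of the paper are not reproduced; the codebook, descriptors, and thresholds below are synthetic placeholders.

```python
import numpy as np

def encode(descriptor, codebook, k=5, beta=1.0):
    """Soft-assign a local descriptor to its k nearest codewords (Gaussian
    weighting); all other coefficients stay zero."""
    dists = np.linalg.norm(codebook - descriptor, axis=1)
    coeffs = np.zeros(len(codebook))
    nearest = np.argsort(dists)[:k]
    coeffs[nearest] = np.exp(-beta * dists[nearest] ** 2)
    return coeffs / (coeffs.sum() + 1e-8)

def shrink(coeffs, threshold=0.1):
    """Soft-thresholding shrinkage: suppress small (likely non-discriminative)
    coefficients after encoding."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - threshold, 0.0)

def max_pool(all_coeffs):
    """Max pooling over the encoded descriptors of one image."""
    return np.max(all_coeffs, axis=0)

# Hypothetical usage with random data standing in for SIFT-like descriptors.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(256, 128))           # 256 codewords of dimension 128
descriptors = rng.normal(size=(500, 128))        # local descriptors from one image
image_vector = max_pool(np.vstack([shrink(encode(d, codebook)) for d in descriptors]))
```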
Data Enrichment in Fine-Grained Classification of Aquatic Macroinvertebrates
Jenni Raitoharju, Ekaterina Riabchenko, Kristian Meissner, I. Ahmad, Alexandros Iosifidis, M. Gabbouj, S. Kiranyaz
{"title":"Data Enrichment in Fine-Grained Classification of Aquatic Macroinvertebrates","authors":"Jenni Raitoharju, Ekaterina Riabchenko, Kristian Meissner, I. Ahmad, Alexandros Iosifidis, M. Gabbouj, S. Kiranyaz","doi":"10.1109/CVAUI.2016.020","DOIUrl":"https://doi.org/10.1109/CVAUI.2016.020","url":null,"abstract":"The types and numbers of benthic macroinvertebrates found in a water body reflect water quality. Therefore, macroinvertebrates are routinely monitored as a part of freshwater ecological quality assessment. The collected macroinvertebrate samples are identified by human experts, which is costly and time-consuming. Thus, developing automated identification methods that could partially replace the human effort is important. In our group, we have been working toward this goal and, in this paper, we improve our earlier results on automated macroinvertebrate classification obtained using deep Convolutional Neural Networks (CNNs). We apply simple data enrichment prior to CNN training. By rotations and mirroring, we create new images so as to increase the total size of the image database sixfold. We evaluate the effect of data enrichment on Caffe and MatConvNet CNN implementations. The networks are trained either fully on the macroinvertebrate data or first pretrained using ImageNet pictures and then fine-tuned using the macroinvertebrate data. The results show 3-6% improvement, when the enriched data are used. This is an encouraging result, because it significantly narrows the gap between automated techniques and human experts, while it leaves room for future improvements as even the size of the enriched data, about 60000 images, is small compared to data sizes typically required for efficient training of deep CNNs.","PeriodicalId":169345,"journal":{"name":"2016 ICPR 2nd Workshop on Computer Vision for Analysis of Underwater Imagery (CVAUI)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123418719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
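One scheme that yields the sixfold enrichment described (three rotations, each with and without mirroring) can be written in a few lines; the exact transform set used by the authors may differ.

```python
import numpy as np

def enrich(image):
    """Generate six variants of an image: 0, 90, and 180 degree rotations,
    each with and without horizontal mirroring."""
    variants = []
    for k in (0, 1, 2):                     # number of 90-degree rotations
        rotated = np.rot90(image, k)
        variants.append(rotated)
        variants.append(np.fliplr(rotated))
    return variants

# Hypothetical usage over a dataset of (image, label) pairs:
# enriched = [(v, label) for image, label in dataset for v in enrich(image)]
```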
Data-Driven Long Term Change Analysis in Marine Observatory Image Streams
Torben Möller, I. Nilssen, T. Nattkemper
{"title":"Data-Driven Long Term Change Analysis in Marine Observatory Image Streams","authors":"Torben Möller, I. Nilssen, T. Nattkemper","doi":"10.1109/CVAUI.2016.015","DOIUrl":"https://doi.org/10.1109/CVAUI.2016.015","url":null,"abstract":"In recent years, a number of fixed long-term underwater observatories (FUO) have been deployed to monitor marine habitats over time. HD cameras deployed on FUOs enable vision based studies of long-term processes in the monitored habitats. However, in many marine environments there is often only little a-priori knowledge about potential changes that can be expected or where such changes are likely to occur. Therefore, we propose a method to detect regions of potentially relevant changes and to group them into categories. Wavelet analysis is employed to extract features that describe the approximate progression of pixel values over time. Clustering the features using the recently proposed Bi-Domain Feature Clustering (BDFC) achieves feature grouping and a data-driven definition of change categories. Moreover, a relevance score is computed for each change category, to find regions with relevant changes and to illustrate different relevant change categories simultaneously in one image. Our experiments with images from the Lofoten Vesterålen (LoVe) ocean observatory demonstrate the effectiveness of the method to find relevant change patterns and associate them to different regions or biota.","PeriodicalId":169345,"journal":{"name":"2016 ICPR 2nd Workshop on Computer Vision for Analysis of Underwater Imagery (CVAUI)","volume":"187 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116657108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
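A rough sketch of the per-pixel analysis, assuming PyWavelets for the wavelet features and k-means as a generic stand-in for the Bi-Domain Feature Clustering (BDFC) used in the paper; the image stack, wavelet choice, and number of clusters are placeholders.

```python
import numpy as np
import pywt
from sklearn.cluster import KMeans

def wavelet_features(series, wavelet="db2", level=3):
    """Describe the approximate temporal progression of one pixel's values by the
    approximation (coarse-trend) coefficients of a discrete wavelet decomposition."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    return coeffs[0]

# `stack` is a hypothetical (T, H, W) array of co-registered grayscale frames;
# random data stands in for the observatory image stream here.
T, H, W = 256, 64, 64
rng = np.random.default_rng(0)
stack = rng.normal(size=(T, H, W))

features = np.array([wavelet_features(stack[:, i, j])
                     for i in range(H) for j in range(W)])

# Data-driven change categories: k-means here stands in for the BDFC clustering.
categories = KMeans(n_clusters=5, n_init=10).fit_predict(features)
category_map = categories.reshape(H, W)   # per-pixel change-category image
```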