Underwater biotope mapping: automatic processing of underwater video data

O. Iakushkin, Ekaterina Pavlova, Anastasia Lavrova, Eugene Pen, O. Sedova, Vyacheslav Polovkov, N. Shabalin, Terekhina Yana, Frih-Har Anna
{"title":"Underwater biotope mapping: automatic processing of underwater video data","authors":"O. Iakushkin, Ekaterina Pavlova, Anastasia Lavrova, Eugene Pen, O. Sedova, Vyacheslav Polovkov, N. Shabalin, Terekhina Yana, Frih-Har Anna","doi":"10.22323/1.429.0024","DOIUrl":null,"url":null,"abstract":"The task of analysing the inhabitants of the underwater world applies to a wide range of applied problems: construction, fishing, and mining. Currently, this task is applied on an industrial scale by a rigorous review done by human experts in underwater life. In this work, we present a tool that we have created that allows us to significantly reduce the time spent by a person on video analysis. Our technology offsets the painstaking video review task to AI, creating a shortcut that allows experts only to verify the accuracy of the results. To achieve this, we have developed an observation pipeline by dividing the video into frames; assessing their degree of noise and blurriness; performing corrections via resolution increase; analysing the number of animals on each frame; building a report on the content of the video, and displaying the obtained data of the biotope on the map. This dramatically reduces the time spent analysing underwater video data. Also, we considered the task of biotope mass calculation. We correlated the Few-shot learning segmentation model results with point cloud data to achieve that. That provided us with a biotope surface coverage area that allowed us to approximate its volume. Such estimation is helpful for precise area mapping and surveillance. Thus, this paper presents a system that allows detailed underwater biotope mapping using automatic processing of a single camera underwater video data. To achieve this, we combine into a single pipeline a set of deep neural networks that work in tandem.","PeriodicalId":262901,"journal":{"name":"Proceedings of The 6th International Workshop on Deep Learning in Computational Physics — PoS(DLCP2022)","volume":"130 3 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of The 6th International Workshop on Deep Learning in Computational Physics — PoS(DLCP2022)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.22323/1.429.0024","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The task of analysing the inhabitants of the underwater world is relevant to a wide range of applied problems: construction, fishing, and mining. Currently, this task is carried out on an industrial scale through rigorous review by human experts in underwater life. In this work, we present a tool we have created that significantly reduces the time a person spends on video analysis. Our technology offloads the painstaking video review task to AI, so that experts only need to verify the accuracy of the results. To achieve this, we have developed an observation pipeline that divides the video into frames; assesses their degree of noise and blur; applies corrections via resolution enhancement; counts the animals in each frame; builds a report on the content of the video; and displays the obtained biotope data on a map. This dramatically reduces the time spent analysing underwater video data. We also considered the task of biotope mass calculation. To achieve this, we correlated the results of a few-shot learning segmentation model with point cloud data, which gave us the biotope surface coverage area and allowed us to approximate its volume. Such estimation is helpful for precise area mapping and surveillance. Thus, this paper presents a system that allows detailed underwater biotope mapping through automatic processing of single-camera underwater video data. To achieve this, we combine a set of deep neural networks that work in tandem into a single pipeline.
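
The frame extraction and quality assessment step of the pipeline can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: the variance-of-Laplacian blur score, the threshold value, and the frame-sampling step are illustrative assumptions about how frames might be filtered before detection and counting.

```python
# Minimal sketch (not the authors' implementation): extract frames from an
# underwater video and flag blurred ones before further analysis.
# The variance-of-Laplacian blur score and the threshold are assumptions
# chosen for illustration only.
import cv2


def frame_quality(frame, blur_threshold=100.0):
    """Return (is_usable, blur_score) for a single video frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Variance of the Laplacian is a common sharpness proxy:
    # low variance -> few edges -> likely blurred frame.
    blur_score = cv2.Laplacian(gray, cv2.CV_64F).var()
    return blur_score >= blur_threshold, blur_score


def extract_usable_frames(video_path, step=10):
    """Yield every `step`-th frame that passes the blur check."""
    capture = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % step == 0:
            usable, _score = frame_quality(frame)
            if usable:
                yield index, frame
        index += 1
    capture.release()
```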
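
The coverage area and volume estimate can likewise be sketched under simplifying assumptions. The snippet below is not the authors' method: it assumes a binary segmentation mask aligned with a per-pixel depth map (for example, obtained by projecting the point cloud into the frame), a pinhole camera model for the per-pixel footprint, and a constant mean layer thickness as a crude volume proxy.

```python
# Minimal sketch (under stated assumptions, not the authors' method):
# approximate the biotope surface coverage area from a segmentation mask
# and an aligned per-pixel depth map, then derive a rough volume estimate.
import numpy as np


def biotope_area_and_volume(mask, depth, fx, fy, mean_thickness_m=0.05):
    """
    mask:  (H, W) boolean array, True where the biotope is segmented.
    depth: (H, W) float array of per-pixel distances in metres
           (e.g. sampled from a point cloud projected into the frame).
    fx, fy: camera focal lengths in pixels.
    Returns (area_m2, volume_m3).
    """
    z = depth[mask]
    # Physical footprint of one pixel at distance z for a pinhole camera:
    # width ~ z / fx, height ~ z / fy, so area ~ z**2 / (fx * fy).
    pixel_areas = (z ** 2) / (fx * fy)
    area_m2 = float(np.sum(pixel_areas))
    # Crude volume proxy: coverage area times an assumed mean thickness.
    volume_m3 = area_m2 * mean_thickness_m
    return area_m2, volume_m3
```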