Detection and tracking of belugas, kayaks and motorized boats in drone video using deep learning

IF 1.3 · Q3 Remote Sensing
Madison L. Harasyn, W. Chan, Emma L. Ausen, D. Barber
Journal of Unmanned Vehicle Systems · DOI: 10.1139/juvs-2021-0024 · Published: 2022-01-04 · Citations: 5

Abstract

Aerial imagery surveys are commonly used in marine mammal research to determine population size, distribution and habitat use. Analysis of aerial photos involves hours of manually identifying individuals present in each image and converting raw counts into usable biological statistics. Our research proposes the use of deep learning algorithms to increase the efficiency of the marine mammal research workflow. To test the feasibility of this proposal, the existing YOLOv4 convolutional neural network model was trained to detect belugas, kayaks and motorized boats in oblique drone imagery, collected from a stationary tethered system. Automated computer-based object detection achieved the following precision and recall, respectively, for each class: beluga = 74%/72%; boat = 97%/99%; and kayak = 96%/96%. We then tested the performance of computer vision tracking of belugas and manned watercraft in drone videos using the DeepSORT tracking algorithm, which achieved a multiple-object tracking accuracy (MOTA) ranging from 37%–88% and a multiple-object tracking precision (MOTP) between 63%–86%. Results from this research indicate that deep learning technology can detect and track features more consistently than human annotators, allowing for larger datasets to be processed within a fraction of the time while avoiding discrepancies introduced by labeling fatigue or multiple human annotators.
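For readers unfamiliar with the metrics reported above, the sketch below shows how per-class precision/recall and the CLEAR-MOT tracking accuracy (MOTA) are computed from detection counts. The counts in the example are hypothetical, chosen only so that the beluga-class result (74%/72%) is reproduced; they are not the paper's raw data.

```python
# Illustrative computation of the evaluation metrics used in the abstract.
# All counts here are hypothetical examples, not the authors' data.

def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

def mota(misses: int, false_positives: int, id_switches: int,
         gt_objects: int) -> float:
    """Multiple-object tracking accuracy (CLEAR MOT):
    MOTA = 1 - (FN + FP + ID switches) / total ground-truth objects,
    with all terms summed over every frame of the video."""
    return 1.0 - (misses + false_positives + id_switches) / gt_objects

# Hypothetical detector counts: 74 true positives, 26 false positives,
# 29 missed detections -> precision 0.74, recall ~0.72 (beluga-like).
p, r = precision_recall(74, 26, 29)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.74 recall=0.72

# Hypothetical tracking errors over a clip with 500 ground-truth objects.
print(f"MOTA={mota(misses=40, false_positives=20, id_switches=3, gt_objects=500):.2f}")
```

Note that MOTA can be negative when errors exceed the number of ground-truth objects, which is why the 37% lower bound reported for belugas still indicates the tracker is recovering a majority of tracks.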
Source journal metrics: CiteScore 5.30 · Self-citation rate 0.00% · Articles published: 2