Three-view cotton flower counting through multi-object tracking and RGB-D imagery

IF 4.4 | Tier 1 (Agricultural & Forestry Science) | Q1 AGRICULTURAL ENGINEERING
Chenjiao Tan, Jin Sun, Andrew H. Paterson, Huaibo Song, Changying Li
{"title":"通过多目标跟踪和 RGB-D 图像进行三视角棉花花朵计数","authors":"Chenjiao Tan ,&nbsp;Jin Sun ,&nbsp;Andrew H. Paterson ,&nbsp;Huaibo Song ,&nbsp;Changying Li","doi":"10.1016/j.biosystemseng.2024.08.010","DOIUrl":null,"url":null,"abstract":"<div><p>Monitoring the number of cotton flowers can provide important information for breeders to assess the flowering time and the productivity of genotypes because flowering marks the transition from vegetative growth to reproductive growth and impacts the final yield. Traditional manual counting methods are time-consuming and impractical for large-scale fields. To count cotton flowers efficiently and accurately, a multi-view multi-object tracking approach was proposed by using both RGB and depth images collected by three RGB-D cameras fixed on a ground robotic platform. The tracking-by-detection algorithm was employed to track flowers from three views simultaneously and remove duplicated counting from single views. Specifically, an object detection model (YOLOv8) was trained to detect flowers in RGB images and a deep learning-based optical flow model Recurrent All-pairs Field Transforms (RAFT) was used to estimate motion between two adjacent frames. The intersection over union and distance costs were employed to associate flowers in the tracking algorithm. Additionally, tracked flowers were segmented in RGB images and the depth of each flower was obtained from the corresponding depth image. Those flowers tracked with known depth from two side views were then projected onto the middle image coordinate using camera calibration parameters. Finally, a constrained hierarchy clustering algorithm clustered all flowers in the middle image coordinate to remove duplicated counting from three views. The results showed that the mean average precision of trained YOLOv8x was 96.4%. The counting results of the developed method were highly correlated with those counted manually with a coefficient of determination of 0.92. Besides, the mean absolute percentage error of all 25 testing videos was 6.22%. The predicted cumulative flower number of Pima cotton flowers is higher than that of Acala Maxxa, which is consistent with what breeders have observed. Furthermore, the developed method can also obtain the flower number distributions of different genotypes without laborious manual counting in the field. Overall, the three-view approach provides an efficient and effective approach to count cotton flowers from multiple views. By collecting the video data continuously, this method is beneficial for breeders to dissect genetic mechanisms of flowering time with unprecedented spatial and temporal resolution, also providing a means to discern genetic differences in fecundity, the number of flowers that result in harvestable bolls. The code and datasets used in this paper can be accessed on GitHub: <span><span>https://github.com/UGA-BSAIL/Multi-view_flower_counting</span><svg><path></path></svg></span>.</p></div>","PeriodicalId":9173,"journal":{"name":"Biosystems Engineering","volume":"246 ","pages":""},"PeriodicalIF":4.4000,"publicationDate":"2024-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Three-view cotton flower counting through multi-object tracking and RGB-D imagery\",\"authors\":\"Chenjiao Tan ,&nbsp;Jin Sun ,&nbsp;Andrew H. 
Paterson ,&nbsp;Huaibo Song ,&nbsp;Changying Li\",\"doi\":\"10.1016/j.biosystemseng.2024.08.010\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Monitoring the number of cotton flowers can provide important information for breeders to assess the flowering time and the productivity of genotypes because flowering marks the transition from vegetative growth to reproductive growth and impacts the final yield. Traditional manual counting methods are time-consuming and impractical for large-scale fields. To count cotton flowers efficiently and accurately, a multi-view multi-object tracking approach was proposed by using both RGB and depth images collected by three RGB-D cameras fixed on a ground robotic platform. The tracking-by-detection algorithm was employed to track flowers from three views simultaneously and remove duplicated counting from single views. Specifically, an object detection model (YOLOv8) was trained to detect flowers in RGB images and a deep learning-based optical flow model Recurrent All-pairs Field Transforms (RAFT) was used to estimate motion between two adjacent frames. The intersection over union and distance costs were employed to associate flowers in the tracking algorithm. Additionally, tracked flowers were segmented in RGB images and the depth of each flower was obtained from the corresponding depth image. Those flowers tracked with known depth from two side views were then projected onto the middle image coordinate using camera calibration parameters. Finally, a constrained hierarchy clustering algorithm clustered all flowers in the middle image coordinate to remove duplicated counting from three views. The results showed that the mean average precision of trained YOLOv8x was 96.4%. The counting results of the developed method were highly correlated with those counted manually with a coefficient of determination of 0.92. Besides, the mean absolute percentage error of all 25 testing videos was 6.22%. The predicted cumulative flower number of Pima cotton flowers is higher than that of Acala Maxxa, which is consistent with what breeders have observed. Furthermore, the developed method can also obtain the flower number distributions of different genotypes without laborious manual counting in the field. Overall, the three-view approach provides an efficient and effective approach to count cotton flowers from multiple views. By collecting the video data continuously, this method is beneficial for breeders to dissect genetic mechanisms of flowering time with unprecedented spatial and temporal resolution, also providing a means to discern genetic differences in fecundity, the number of flowers that result in harvestable bolls. 
The code and datasets used in this paper can be accessed on GitHub: <span><span>https://github.com/UGA-BSAIL/Multi-view_flower_counting</span><svg><path></path></svg></span>.</p></div>\",\"PeriodicalId\":9173,\"journal\":{\"name\":\"Biosystems Engineering\",\"volume\":\"246 \",\"pages\":\"\"},\"PeriodicalIF\":4.4000,\"publicationDate\":\"2024-08-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Biosystems Engineering\",\"FirstCategoryId\":\"97\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1537511024001880\",\"RegionNum\":1,\"RegionCategory\":\"农林科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AGRICULTURAL ENGINEERING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Biosystems Engineering","FirstCategoryId":"97","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1537511024001880","RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AGRICULTURAL ENGINEERING","Score":null,"Total":0}
Citations: 0

Abstract


Monitoring the number of cotton flowers provides important information for breeders to assess the flowering time and productivity of genotypes, because flowering marks the transition from vegetative to reproductive growth and affects the final yield. Traditional manual counting methods are time-consuming and impractical for large-scale fields. To count cotton flowers efficiently and accurately, a multi-view multi-object tracking approach was proposed that uses both RGB and depth images collected by three RGB-D cameras fixed on a ground robotic platform. A tracking-by-detection algorithm tracked flowers from the three views simultaneously and removed duplicated counts within each single view. Specifically, an object detection model (YOLOv8) was trained to detect flowers in RGB images, and a deep-learning-based optical flow model, Recurrent All-Pairs Field Transforms (RAFT), was used to estimate motion between adjacent frames. Intersection-over-union and distance costs were employed to associate flowers in the tracking algorithm. Additionally, tracked flowers were segmented in the RGB images, and the depth of each flower was obtained from the corresponding depth image. Flowers tracked with known depth in the two side views were then projected onto the middle image coordinate system using camera calibration parameters. Finally, a constrained hierarchical clustering algorithm clustered all flowers in the middle image coordinate system to remove duplicated counts across the three views. The results showed that the mean average precision of the trained YOLOv8x was 96.4%. The counts produced by the developed method were highly correlated with manual counts, with a coefficient of determination of 0.92, and the mean absolute percentage error over all 25 test videos was 6.22%. The predicted cumulative flower number of Pima cotton is higher than that of Acala Maxxa, which is consistent with breeders' observations. Furthermore, the developed method can obtain the flower number distributions of different genotypes without laborious manual counting in the field. Overall, the three-view approach provides an efficient and effective way to count cotton flowers from multiple views. By collecting video data continuously, this method helps breeders dissect the genetic mechanisms of flowering time with unprecedented spatial and temporal resolution, and it also provides a means to discern genetic differences in fecundity, the number of flowers that result in harvestable bolls. The code and datasets used in this paper can be accessed on GitHub: https://github.com/UGA-BSAIL/Multi-view_flower_counting.
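The pipeline begins with per-frame flower detection. The sketch below shows how YOLOv8 detections could be obtained with the ultralytics package; the weights file name and confidence threshold are illustrative placeholders, not the authors' released weights or settings.

```python
# Minimal per-frame detection sketch using the ultralytics YOLOv8 API.
# "yolov8x_flower.pt" and conf=0.5 are hypothetical placeholders.
from ultralytics import YOLO

model = YOLO("yolov8x_flower.pt")  # custom flower detector (hypothetical file)

def detect_flowers(frame_bgr, conf=0.5):
    """Return flower bounding boxes as an (N, 4) array of (x1, y1, x2, y2)."""
    results = model(frame_bgr, conf=conf, verbose=False)
    return results[0].boxes.xyxy.cpu().numpy()
```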
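In the tracking step, each existing track's box is propagated to the next frame (in the paper, by warping it with RAFT optical flow) and matched to the new detections with a cost that combines intersection over union and centre distance. The following is a minimal sketch of that association, assuming Hungarian assignment; the cost weights, normalising diagonal, and gating threshold are assumptions rather than the paper's values.

```python
# Minimal tracking-by-detection association sketch (not the authors' exact code).
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(pred_boxes, det_boxes, w_iou=0.5, w_dist=0.5, diag=1500.0, max_cost=0.8):
    """Match flow-predicted track boxes to detections; return (track_idx, det_idx) pairs."""
    cost = np.zeros((len(pred_boxes), len(det_boxes)))
    for i, p in enumerate(pred_boxes):
        pc = np.array([(p[0] + p[2]) / 2, (p[1] + p[3]) / 2])
        for j, d in enumerate(det_boxes):
            dc = np.array([(d[0] + d[2]) / 2, (d[1] + d[3]) / 2])
            dist = np.linalg.norm(pc - dc) / diag          # normalise by image diagonal
            cost[i, j] = w_iou * (1.0 - iou(p, d)) + w_dist * dist
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < max_cost]
```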
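Flowers tracked in the two side views, once their depth is read from the aligned depth image, are mapped into the middle camera's pixel coordinates through the calibration parameters. A minimal pinhole-camera sketch, assuming intrinsics K_side and K_mid and a rigid side-to-middle transform (R, t) from calibration; all variable names are illustrative, not the paper's.

```python
# Back-project a side-view pixel with known depth, transform it to the middle
# camera frame, and reproject it onto the middle image plane.
import numpy as np

def side_to_middle(u, v, depth, K_side, K_mid, R, t):
    """Return the (u, v) pixel in the middle image for a side-view flower centre."""
    xyz_side = depth * (np.linalg.inv(K_side) @ np.array([u, v, 1.0]))  # pixel -> 3D (side frame)
    xyz_mid = R @ xyz_side + t                                          # side frame -> middle frame
    uvw = K_mid @ xyz_mid                                               # 3D -> homogeneous pixel
    return uvw[:2] / uvw[2]
```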
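Finally, middle-view tracks and the projected side-view tracks are clustered so that a flower seen from several cameras is counted only once. The sketch below uses standard SciPy hierarchical clustering with a distance threshold and a simple cannot-link constraint as a stand-in for the paper's constrained clustering; the merge threshold is an assumption.

```python
# Cluster flower centres in the middle image coordinate system to remove
# duplicate counts across views. Points from the same camera receive a huge
# distance penalty; with complete linkage, a cluster containing two same-camera
# flowers exceeds the merge threshold and therefore never forms.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def count_unique_flowers(centres, cam_ids, merge_px=60.0):
    """centres: (N, 2) pixel coordinates; cam_ids: camera index of each flower."""
    d = pdist(centres)                                     # pairwise pixel distances
    same_cam = pdist(np.asarray(cam_ids, float)[:, None],
                     lambda a, b: float(a[0] == b[0]))     # 1 if same camera, else 0
    d = d + same_cam * 1e6                                 # cannot-link penalty
    labels = fcluster(linkage(d, method="complete"), t=merge_px, criterion="distance")
    return len(np.unique(labels))                          # each cluster = one physical flower
```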

Source journal
Biosystems Engineering (Agricultural & Forestry Science – Agricultural Engineering)
CiteScore: 10.60
Self-citation rate: 7.80%
Articles per year: 239
Review time: 53 days
Journal description: Biosystems Engineering publishes research in engineering and the physical sciences that represent advances in understanding or modelling of the performance of biological systems for sustainable developments in land use and the environment, agriculture and amenity, bioproduction processes and the food chain. The subject matter of the journal reflects the wide range and interdisciplinary nature of research in engineering for biological systems.