Plot-scale peanut yield estimation using a phenotyping robot and transformer-based image analysis

IF 5.7 Q1 AGRICULTURAL ENGINEERING
Zhengkun Li , Rui Xu , Nino Brown , Barry L. Tillman , Changying Li
Journal: Smart Agricultural Technology, Volume 12, Article 101154
DOI: 10.1016/j.atech.2025.101154
Publication date: 2025-07-01 (Journal Article)
Full text: https://www.sciencedirect.com/science/article/pii/S2772375525003867
Citations: 0

Abstract

Peanuts rank as the seventh-largest crop in the United States with a farm gate value exceeding $1 billion. Conventional peanut yield estimation methods involve digging, harvesting, transporting, and weighing, which are labor-intensive and inefficient for large-scale research operations. This inefficiency is particularly pronounced in peanut breeding, which requires precise pod yield estimates for each plot in order to compare genetic potential for yield and select new, high-performing breeding lines. To improve efficiency and throughput for accelerating genetic improvement, we proposed an automated robotic imaging system to predict peanut yields in the field after digging and inversion of plots. A workflow was developed to estimate yield accurately across different genotypes by counting the pods in stitched plot-scale images. After robotic scanning in the field, the sequential images of each peanut plot were stitched together using Local Feature Transformer (LoFTR)-based feature matching and estimated translations between adjacent images, which avoided duplicate pod counting in overlapping image regions. Additionally, the Real-Time Detection Transformer (RT-DETR) was customized for pod detection by integrating partial convolution into a lightweight ResNet-18 backbone and refining the up-sampling and down-sampling modules in cross-scale feature fusion. The customized detector achieved a mean Average Precision (mAP50) of 89.3% and a mAP95 of 55.0%, improving on the original RT-DETR model by 3.3% and 5.9% with lighter weights and less computation. To determine the number of pods within a stitched plot-scale image, a sliding window-based method was used to divide it into smaller patches, improving the accuracy of pod detection. In a case study of 68 plots across 19 genotypes in a peanut breeding yield trial, the results showed a correlation (R² = 0.47) between yield and predicted pod count, better than the structure-from-motion (SfM) method. The yield ranking among different genotypes using image prediction achieved an average consistency of 84.8% with manual measurement. When the yield difference between two genotypes exceeded 12%, the consistency surpassed 90%. Overall, our robotic plot-scale peanut yield estimation workflow showed promise to replace the manual measurement process, reducing the time and labor required for yield determination and improving the efficiency of peanut breeding.
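The sliding-window step described in the abstract can be illustrated as follows. This is a minimal sketch, not the authors' implementation; the patch size, overlap ratio, and function name are assumptions for illustration:

```python
import numpy as np

def sliding_window_patches(image, patch_size=640, overlap=0.2):
    """Yield (x, y, patch) tiles covering the image, with overlap between
    neighboring tiles and the final row/column clamped to the image border."""
    step = int(patch_size * (1 - overlap))
    h, w = image.shape[:2]
    ys = list(range(0, max(h - patch_size, 0) + 1, step))
    xs = list(range(0, max(w - patch_size, 0) + 1, step))
    # make sure the last windows reach the bottom/right edges exactly
    if ys[-1] != max(h - patch_size, 0):
        ys.append(max(h - patch_size, 0))
    if xs[-1] != max(w - patch_size, 0):
        xs.append(max(w - patch_size, 0))
    for y in ys:
        for x in xs:
            yield x, y, image[y:y + patch_size, x:x + patch_size]
```

Detections from each patch would then be mapped back to plot coordinates by adding the patch's (x, y) offset, with standard de-duplication (e.g. non-maximum suppression) in the overlap zones.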
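As a rough sketch of the stitching idea: LoFTR supplies point correspondences between consecutive images, from which an inter-image translation can be estimated robustly, e.g. as the median displacement of matched points. The function names and the median heuristic are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

def estimate_translation(kpts_a, kpts_b):
    """Robust 2-D translation between two images from matched keypoints.
    kpts_a, kpts_b: (N, 2) arrays of corresponding (x, y) coordinates.
    The per-match displacement median resists outlier correspondences."""
    d = np.asarray(kpts_b, float) - np.asarray(kpts_a, float)
    return np.median(d, axis=0)

def stitch_offsets(pairwise_shifts):
    """Cumulative plot-frame offset of each image in the sequence, given the
    (N-1, 2) array of translations between consecutive image pairs."""
    return np.concatenate([[np.zeros(2)], np.cumsum(pairwise_shifts, axis=0)])
```

With per-image offsets in a common plot frame, overlapping regions are known exactly, which is what allows duplicate pod detections there to be discarded.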
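The pairwise ranking-consistency metric — agreement between predicted and measured orderings of genotypes, optionally skipping pairs whose measured yields differ by less than a threshold such as the 12% mentioned above — could be computed along these lines (a hypothetical helper, not the authors' code):

```python
from itertools import combinations

def ranking_consistency(measured, predicted, min_diff=0.0):
    """Share of genotype pairs that prediction and measurement rank the same.
    Pairs whose measured relative difference is below `min_diff` are skipped."""
    agree = total = 0
    for i, j in combinations(range(len(measured)), 2):
        rel = abs(measured[i] - measured[j]) / max(measured[i], measured[j])
        if rel < min_diff:
            continue  # pair too close in measured yield to rank reliably
        total += 1
        # same sign of the two differences means the pair is ordered alike
        if (measured[i] - measured[j]) * (predicted[i] - predicted[j]) > 0:
            agree += 1
    return agree / total if total else float("nan")
```

Raising `min_diff` restricts the metric to clearly separated genotype pairs, which is consistent with the abstract's observation that agreement rises above 90% once measured yields differ by more than 12%.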
Source journal: Smart Agricultural Technology (CiteScore 4.20, self-citation rate 0.00%)