Plot-scale peanut yield estimation using a phenotyping robot and transformer-based image analysis

Zhengkun Li, Rui Xu, Nino Brown, Barry L. Tillman, Changying Li

Smart Agricultural Technology, Volume 12, Article 101154, July 2025
DOI: 10.1016/j.atech.2025.101154
https://www.sciencedirect.com/science/article/pii/S2772375525003867
Citations: 0
Abstract
Peanuts rank as the seventh-largest crop in the United States, with a farm gate value exceeding $1 billion. Conventional peanut yield estimation methods involve digging, harvesting, transporting, and weighing, which are labor-intensive and inefficient for large-scale research operations. This inefficiency is particularly pronounced in peanut breeding, which requires precise pod yield estimates for each plot in order to compare genetic yield potential and select new, high-performing breeding lines. To improve efficiency and throughput for accelerating genetic improvement, we proposed an automated robotic imaging system to predict peanut yields in the field after digging and inversion of plots. A workflow was developed to estimate yield accurately across different genotypes by counting pods in stitched plot-scale images. After robotic scanning in the field, the sequential images of each peanut plot were stitched together using Local Feature Transformer (LoFTR)-based feature matching and estimated translations between adjacent images, which avoided duplicate pod counting in overlapping image regions. Additionally, the Real-Time Detection Transformer (RT-DETR) was customized for pod detection by integrating partial convolution into a lightweight ResNet-18 backbone and refining the up-sampling and down-sampling modules in cross-scale feature fusion. The customized detector achieved a mean Average Precision (mAP50) of 89.3% and a mAP95 of 55.0%, an improvement of 3.3% and 5.9% over the original RT-DETR model, with fewer parameters and less computation. To determine the number of pods within the stitched plot-scale image, a sliding window-based method was used to divide it into smaller patches, improving the accuracy of pod detection. In a case study of 68 plots across 19 genotypes in a peanut breeding yield trial, the results showed a correlation (R² = 0.47) between yield and predicted pod count, outperforming the structure-from-motion (SfM) method.
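The sliding window-based patching step described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the window size, stride, and function names are assumptions, and the real pipeline would run the pod detector on each patch and merge detections across overlapping windows.

```python
def _starts(size, win, stride):
    """Window start offsets along one axis, ensuring the last window reaches the edge."""
    if size <= win:
        return [0]
    starts = list(range(0, size - win + 1, stride))
    if starts[-1] != size - win:
        starts.append(size - win)  # extra window aligned to the image edge
    return starts

def sliding_windows(width, height, win=640, stride=512):
    """Divide a width x height stitched plot image into overlapping patch boxes.

    Returns (x0, y0, x1, y1) boxes; the stride < win overlap lets a detector
    see pods that would otherwise be cut in half at patch borders.
    """
    return [(x, y, min(x + win, width), min(y + win, height))
            for y in _starts(height, win, stride)
            for x in _starts(width, win, stride)]
```

Because adjacent windows overlap, detections falling in the overlap region must be deduplicated (e.g., by non-maximum suppression in stitched-image coordinates) before counting.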
The yield ranking among genotypes derived from image prediction achieved an average consistency of 84.8% with manual measurement; when the yield difference between two genotypes exceeded 12%, the consistency surpassed 90%. Overall, our robotic plot-scale peanut yield estimation workflow showed promise for replacing the manual measurement process, reducing the time and labor required for yield determination and improving the efficiency of peanut breeding.
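The pairwise ranking-consistency metric implied above can be sketched as follows. This is a hypothetical reconstruction under stated assumptions: the abstract does not give the exact formula, so the pair-filtering rule (relative yield difference of the measured values) and all names here are illustrative.

```python
from itertools import combinations

def ranking_consistency(predicted, measured, min_rel_diff=0.0):
    """Fraction of genotype pairs ranked in the same order by prediction and measurement.

    Only pairs whose measured yields differ by more than `min_rel_diff`
    (relative to the smaller value) are counted, mirroring the abstract's
    observation that consistency rises when yield differences exceed 12%.
    """
    agree = total = 0
    for i, j in combinations(range(len(measured)), 2):
        lo, hi = sorted((measured[i], measured[j]))
        if lo <= 0 or (hi - lo) / lo <= min_rel_diff:
            continue  # skip ties and pairs below the difference threshold
        total += 1
        # same sign of the two differences means the pair is ranked consistently
        if (predicted[i] - predicted[j]) * (measured[i] - measured[j]) > 0:
            agree += 1
    return agree / total if total else float("nan")
```

Raising `min_rel_diff` from 0.0 toward 0.12 restricts the metric to well-separated genotype pairs, which is where the abstract reports consistency above 90%.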