Precision Agriculture: Latest Publications

Comparing machine learning algorithms for predicting and digitally mapping surface soil available phosphorous: a case study from southwestern Iran
IF 6.2, Q2, Agricultural and Forestry Sciences
Precision Agriculture, Pub Date: 2023-12-27, DOI: 10.1007/s11119-023-10099-5
Saeid Hojati, Asim Biswas, Mojtaba Norouzi Masir
{"title":"Comparing machine learning algorithms for predicting and digitally mapping surface soil available phosphorous: a case study from southwestern Iran","authors":"Saeid Hojati, Asim Biswas, Mojtaba Norouzi Masir","doi":"10.1007/s11119-023-10099-5","DOIUrl":"https://doi.org/10.1007/s11119-023-10099-5","url":null,"abstract":"<p>In developing countries like Iran, where information is scarce, understanding the spatial variability of soil available phosphorous (SAP), one of the three major nutrients, is crucial for effective agricultural ecosystem management. This study aimed to predict and digitally map the spatial distribution and related uncertainty of SAP while also assessing the impact of environmental factors on SAP variability in the topsoils. A study area from northern Khuzestan province, Iran was selected as case study area. Three machine learning (ML) models, namely, Random Forest (RF), Artificial Neural Network (ANN), and Support Vector Regression (SVR), were used to develop predictive relationship between surface soil (0–10 cm) SAP content and environmental covariates derived from a digital elevation model and Landsat 8 images. A total of 250 topsoil samples were collected following the conditioned Latin Hypercube Sampling (cLHS) approach and several soil properties were measured in the laboratory. Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and Lin’s Concordance Correlation Coefficient (LCCC) were used to determine the accuracy of models. The findings indicated that the RF algorithm demonstrated the most favorable performance, with a mean absolute error (MAE) of 0.85 mg SAP kg<sup>−1</sup>, the lowest root mean square error (RMSE) of 0.99 mg SAP kg<sup>−1</sup>, and the highest linear correlation coefficient (LCCC) values of 0.96. This suggests that the RF algorithm had the least tendency to overestimate or underestimate SAP contents compared to other methods. Consequently, the RF algorithm was selected as the optimal choice. Predictive ML models were employed to digitally map SAP contents within the region. Spatial patterns of SAP contents showed an increasing gradient from west to east. The spatial variability information provides a basis for developing sustainable production system in the area.</p>","PeriodicalId":20423,"journal":{"name":"Precision Agriculture","volume":"3 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2023-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139050777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
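As a companion to the study above, here is a minimal Python sketch of the kind of model comparison it describes: RF, ANN and SVR regressors scored with MAE, RMSE and Lin's concordance correlation coefficient. The synthetic covariate table, sample size and hyperparameters are placeholders, not the paper's data or settings.

```python
# Sketch: compare RF, ANN and SVR with MAE, RMSE and LCCC on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

def lccc(y_true, y_pred):
    """Lin's Concordance Correlation Coefficient."""
    mu_t, mu_p = np.mean(y_true), np.mean(y_pred)
    var_t, var_p = np.var(y_true), np.var(y_pred)
    cov = np.mean((y_true - mu_t) * (y_pred - mu_p))
    return 2 * cov / (var_t + var_p + (mu_t - mu_p) ** 2)

rng = np.random.default_rng(0)
X = rng.normal(size=(250, 12))           # 250 cLHS-like samples x 12 covariates (synthetic)
y = 2 * X[:, 0] + rng.normal(size=250)   # synthetic SAP values (mg kg-1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "RF": RandomForestRegressor(n_estimators=500, random_state=0),
    "ANN": MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
    "SVR": SVR(kernel="rbf", C=10.0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(f"{name}: MAE={mean_absolute_error(y_te, pred):.2f}, "
          f"RMSE={rmse:.2f}, LCCC={lccc(y_te, pred):.2f}")
```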
Enhancing direct-seeded rice yield prediction using UAV-derived features acquired during the reproductive phase
IF 6.2, Q2, Agricultural and Forestry Sciences
Precision Agriculture, Pub Date: 2023-12-21, DOI: 10.1007/s11119-023-10103-y
Guodong Yang, Yaxing Li, Shen Yuan, Changzai Zhou, Hongshun Xiang, Zhenqing Zhao, Qiaorong Wei, Qingshan Chen, Shaobing Peng, Le Xu
{"title":"Enhancing direct-seeded rice yield prediction using UAV-derived features acquired during the reproductive phase","authors":"Guodong Yang, Yaxing Li, Shen Yuan, Changzai Zhou, Hongshun Xiang, Zhenqing Zhao, Qiaorong Wei, Qingshan Chen, Shaobing Peng, Le Xu","doi":"10.1007/s11119-023-10103-y","DOIUrl":"https://doi.org/10.1007/s11119-023-10103-y","url":null,"abstract":"<p>Pre-harvest yield prediction of direct-seeded rice is critical for guiding crop interventions and food security assessment in precision agriculture. Technology advances in unmanned aerial vehicle (UAV)-based remote sensing has provided an unprecedented opportunity to efficiently retrieve crop growth parameters instead of labor-intensive ground measurements. This study is aiming to evaluate the feasibility of fusing multi-temporal UAV-derived features collected at critical phenological stages in forecasting direct-seeded rice yield across different cultivars and nitrogen (N) management. The results showed that RGB sensor-derived canopy volume, canopy coverage, and spectral features including RBRI, WI etc., were identified to be most sensitive to the differences in aboveground biomass and grain yield. Heading stage was the suitable time for estimating yield performance (R<sup>2</sup> = 0.75) for mono-temporal UAV observation. By contrast, multi-temporal features fusion could remarkably enhance the yield prediction accuracy. Moreover, the yield prediction accuracy can be further improved by integrating UAV features collected at panicle initiation and heading stages (i.e., rice reproductive phase) compared to multi-temporal features fusion (R<sup>2</sup> increased from 0.82 to 0.85 and RMSE decreased from 35.1 to 31.5 g m<sup>−2</sup>). This can be attributed to the fact that the biomass accumulation during the reproductive phase was closely associated to the total spikelets and final yield. By using this proposed approach, the predicted yield showed a good spatial consistency with the measured yield across different cultivars and N management, and yield prediction error in the most of the plots (114 of 128 plots) was less than 45 g m<sup>−2</sup>. In summary, this study highlights that the reproductive phase is the optimal time window for UAV observing, which provides an effective method for accurate pre-harvest yield prediction of direct-seeded rice in precision agriculture.</p>","PeriodicalId":20423,"journal":{"name":"Precision Agriculture","volume":"4 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2023-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138840319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
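The abstract above describes fusing UAV features from the panicle initiation and heading stages to predict yield. The sketch below illustrates that fusion pattern in Python with generic RGB-derived plot features; the index definitions (excess green, a simple red/blue ratio standing in for RBRI), the plot data and the random forest regressor are assumptions for illustration only.

```python
# Sketch: fuse per-plot features from two image dates and fit a regressor.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def plot_features(rgb):
    """Summarise one plot image (H x W x 3, reflectance-like values in [0, 1])."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2 * g - r - b                        # excess-green index
    coverage = float(np.mean(exg > 0.05))      # crude canopy-coverage proxy
    rb_ratio = float(np.mean(r / (b + 1e-6)))  # assumed red/blue ratio index
    return [coverage, rb_ratio, float(np.mean(g))]

rng = np.random.default_rng(1)
n_plots = 128
# One synthetic image per plot at panicle initiation (PI) and heading (HD).
X = np.array([
    plot_features(rng.random((64, 64, 3))) + plot_features(rng.random((64, 64, 3)))
    for _ in range(n_plots)
])                                             # fused PI + HD features
y = rng.normal(600, 80, n_plots)               # synthetic grain yield (g m-2)

scores = cross_val_score(RandomForestRegressor(random_state=0), X, y,
                         cv=5, scoring="r2")
print("cross-validated R2:", scores.mean())
```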
End-to-end 3D CNN for plot-scale soybean yield prediction using multitemporal UAV-based RGB images
IF 6.2, Q2, Agricultural and Forestry Sciences
Precision Agriculture, Pub Date: 2023-12-21, DOI: 10.1007/s11119-023-10096-8
Sourav Bhadra, Vasit Sagan, Juan Skobalski, Fernando Grignola, Supria Sarkar, Justin Vilbig
{"title":"End-to-end 3D CNN for plot-scale soybean yield prediction using multitemporal UAV-based RGB images","authors":"Sourav Bhadra, Vasit Sagan, Juan Skobalski, Fernando Grignola, Supria Sarkar, Justin Vilbig","doi":"10.1007/s11119-023-10096-8","DOIUrl":"https://doi.org/10.1007/s11119-023-10096-8","url":null,"abstract":"<p>Crop yield prediction from UAV images has significant potential in accelerating and revolutionizing crop breeding pipelines. Although convolutional neural networks (CNN) provide easy, accurate and efficient solutions over traditional machine learning models in computer vision applications, a CNN training requires large number of ground truth data, which is often difficult to collect in the agricultural context. The major objective of this study was to develope an end-to-end 3D CNN model for plot-scale soybean yield prediction using multitemporal UAV-based RGB images with approximately 30,000 sample plots. A low-cost UAV-RGB system was utilized and multitemporal images from 13 different experimental fields were collected at Argentina in 2021. Three commonly used 2D CNN architectures (i.e., VGG, ResNet and DenseNet) were transformed into 3D variants to incorporate the temporal data as the third dimension. Additionally, multiple spatiotemporal resolutions were considered as data input and the CNN architectures were trained with different combinations of input shapes. The results reveal that: (a) DenseNet provided the most efficient result (R<sup>2</sup> 0.69) in terms of accuracy and model complexity, followed by VGG (R<sup>2</sup> 0.70) and ResNet (R<sup>2</sup> 0.65); (b) Finer spatiotemporal resolution did not necessarily improve the model performance but increased the model complexity, while the coarser resolution achieved comparable results; and (c) DenseNet showed lower clustering patterns in its prediction maps compared to the other models. This study clearly identifies that multitemporal observation with UAV-based RGB images provides enough information for the 3D CNN architectures to accurately estimate soybean yield non-destructively and efficiently.</p>","PeriodicalId":20423,"journal":{"name":"Precision Agriculture","volume":"80 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2023-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138840409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
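To make the "3D variant" idea concrete, the following is a toy 3D CNN regressor that treats the image date as a third convolutional dimension, as the study does with VGG/ResNet/DenseNet. The tiny architecture, input size and layer widths are assumptions and are not the networks evaluated in the paper.

```python
# Sketch: a minimal 3D CNN over a (channels x time x height x width) stack.
import torch
import torch.nn as nn

class Tiny3DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),          # pool space, keep time
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                      # global spatiotemporal pooling
        )
        self.head = nn.Linear(32, 1)                      # plot-level yield

    def forward(self, x):                                 # x: (N, 3, T, H, W)
        z = self.features(x).flatten(1)
        return self.head(z).squeeze(1)

model = Tiny3DCNN()
stack = torch.rand(4, 3, 6, 64, 64)   # 4 plots, 6 image dates, 64x64 px crops
print(model(stack).shape)             # torch.Size([4])
```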
Retinanet_G2S: a multi-scale feature fusion-based network for fruit detection of punna navel oranges in complex field environments
IF 6.2, Q2, Agricultural and Forestry Sciences
Precision Agriculture, Pub Date: 2023-12-19, DOI: 10.1007/s11119-023-10098-6
Hongxing Peng, Hu Chen, Xin Zhang, Huanai Liu, Keyin Chen, Juntao Xiong
{"title":"Retinanet_G2S: a multi-scale feature fusion-based network for fruit detection of punna navel oranges in complex field environments","authors":"Hongxing Peng, Hu Chen, Xin Zhang, Huanai Liu, Keyin Chen, Juntao Xiong","doi":"10.1007/s11119-023-10098-6","DOIUrl":"https://doi.org/10.1007/s11119-023-10098-6","url":null,"abstract":"<p>In the natural environment, the detection and recognition process of Punna navel orange fruit using machine vision systems is affected by many factors, such as complex background, uneven light illumination, occlusions of branches and leaves and large variations in fruit size. To solve these problems of low accuracy in fruit detection and poor robustness of the detection algorithm in the field conditions, a new object detection algorithm, named Retinanet_G2S, was proposed in this paper based on the modified Retinanet network. The images of Punna navel orange were collected with Microsoft Kinect V2 in the uncontrolled environment. Firstly, a new Res2Net-GF network was designed to replace the section of feature extraction in the original Retinanet, which can potentially improve the learning ability of target features of the trunk network. Secondly, a multi-scale cross-regional feature fusion grids network was designed to replace the feature pyramid network module in the original Retinanet, which could enhance the ability of feature information fusion among different scales of the feature pyramid. Finally, the original border regression localization method in Retinanet network was optimized based on the accurate boundary box regression algorithm. The study results showed that, compared with the original Retinanet network, Retinanet_G2S improved mAP, mAP50, mAP75, mAP<sub>S</sub>, mAP<sub>M</sub> and mAP<sub>L</sub> by 3.8%, 1.7%, 5.8%, 2.4%, 2.1% and 5.5%, respectively. Moreover, compared with 7 types of classic object detection models, including SSD, YOLOv3, CenterNet, CornerNet, FCOS, Faster-RCNN and Retinanet, the average increase in mAP of Retinanet_G2S was 9.11%. Overall, Retinanet_G2S showed a promising optimization effect, particularly for the detection of small targets and overlapping fruits.</p>","PeriodicalId":20423,"journal":{"name":"Precision Agriculture","volume":"19 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138740515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
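The sketch below shows a generic top-down multi-scale feature fusion (FPN-style) in PyTorch, to illustrate what fusing feature information among different scales of a feature pyramid looks like in code. It is not the cross-regional fusion grids network of Retinanet_G2S; the channel counts and feature map sizes are arbitrary assumptions.

```python
# Sketch: FPN-style top-down fusion of multi-scale feature maps.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFusion(nn.Module):
    def __init__(self, in_channels=(64, 128, 256), out_channels=64):
        super().__init__()
        self.lateral = nn.ModuleList(nn.Conv2d(c, out_channels, 1) for c in in_channels)
        self.smooth = nn.ModuleList(nn.Conv2d(out_channels, out_channels, 3, padding=1)
                                    for _ in in_channels)

    def forward(self, feats):                 # feats: low -> high level, decreasing size
        laterals = [l(f) for l, f in zip(self.lateral, feats)]
        for i in range(len(laterals) - 2, -1, -1):     # top-down pathway
            up = F.interpolate(laterals[i + 1], size=laterals[i].shape[-2:],
                               mode="nearest")
            laterals[i] = laterals[i] + up             # merge coarse and fine scales
        return [s(p) for s, p in zip(self.smooth, laterals)]

feats = [torch.rand(1, 64, 80, 80), torch.rand(1, 128, 40, 40), torch.rand(1, 256, 20, 20)]
for p in TinyFusion()(feats):
    print(p.shape)
```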
Design, implementation and validation of a sensor-based precise airblast sprayer to improve pesticide applications in orchards
IF 6.2, Q2, Agricultural and Forestry Sciences
Precision Agriculture, Pub Date: 2023-12-18, DOI: 10.1007/s11119-023-10097-7
Bernat Salas, Ramón Salcedo, Francisco Garcia-Ruiz, Emilio Gil
{"title":"Design, implementation and validation of a sensor-based precise airblast sprayer to improve pesticide applications in orchards","authors":"Bernat Salas, Ramón Salcedo, Francisco Garcia-Ruiz, Emilio Gil","doi":"10.1007/s11119-023-10097-7","DOIUrl":"https://doi.org/10.1007/s11119-023-10097-7","url":null,"abstract":"<p>An orchard sprayer prototype running a variable-rate algorithm to adapt the spray volume to the canopy characteristics (dimensions, shape and leaf density) in real-time was designed and implemented. The developed machine was able to modify the application rate by using an algorithm based on the tree row volume, in combination with a newly coefficient defined as Density Factor (<i>Df</i>). Variations in the canopy characteristics along the row crop were electronically measured using six ultrasonic sensors (three per sprayer side). These differences in foliage structure were used to adjust the flow rate of the nozzles by merging the ultrasonic sensors data and the forward speed information received from the on-board GNSS. A set of motor-valves was used to regulate the final amount of sprayed liquid. Laboratory and field tests using artificial canopy were arranged to calibrate and select the optimal ultrasonic sensor configuration (width beam and signal pre-processing method) that best described the physical canopy properties. Results indicated that the sensor setup with a medium beam width offered the most appropriate characterization of trees in terms of width and <i>Df</i>. The experimental sprayer was also able to calculate the application rate automatically depending on changes on target trees. In general, the motor valves demonstrated adequate capability to supply and control the required liquid pressure at all times, mainly when spraying in a range between 4.0 and 14.0 MPa. Further work is required on the equipment, such as designing field efficiency tests for the sprayer or refining the accuracy of <i>Df</i>.</p>","PeriodicalId":20423,"journal":{"name":"Precision Agriculture","volume":"99 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2023-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138714144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
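A minimal sketch of a tree-row-volume style dose calculation is given below, assuming canopy width and height from the ultrasonic sensors, a leaf-density factor Df and the forward speed from GNSS. The formula structure, constants and the per-side split are illustrative assumptions, not the prototype's actual control algorithm.

```python
# Sketch: variable-rate nozzle flow from canopy size, density factor and speed.
def nozzle_flow_lpm(canopy_width_m: float, canopy_height_m: float,
                    density_factor: float, forward_speed_kmh: float,
                    dose_l_per_m3: float = 0.06) -> float:
    """Per-side nozzle flow rate (L/min) for the current canopy section."""
    # Canopy cross-section (m2) equals canopy volume per metre of row (m3/m).
    volume_l_per_m = canopy_width_m * canopy_height_m * dose_l_per_m3 * density_factor
    metres_per_min = forward_speed_kmh * 1000.0 / 60.0
    return volume_l_per_m * metres_per_min / 2.0   # split between the two sprayer sides

# Example: 1.8 m wide, 3.0 m tall canopy, Df = 0.8, travelling at 5 km/h.
print(round(nozzle_flow_lpm(1.8, 3.0, 0.8, 5.0), 2), "L/min per side")
```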
High-throughput phenotyping of individual plant height in an oilseed rape population based on Mask-RCNN and UAV images
IF 6.2, Q2, Agricultural and Forestry Sciences
Precision Agriculture, Pub Date: 2023-12-15, DOI: 10.1007/s11119-023-10095-9
Yutao Shen, Xuqi Lu, Mengqi Lyu, Hongyu Zhou, Wenxuan Guan, Lixi Jiang, Yuhong He, Haiyan Cen
{"title":"High-throughput phenotyping of individual plant height in an oilseed rape population based on Mask-RCNN and UAV images","authors":"Yutao Shen, Xuqi Lu, Mengqi Lyu, Hongyu Zhou, Wenxuan Guan, Lixi Jiang, Yuhong He, Haiyan Cen","doi":"10.1007/s11119-023-10095-9","DOIUrl":"https://doi.org/10.1007/s11119-023-10095-9","url":null,"abstract":"<p>Plant height, a key agronomic trait, affects crop structure, photosynthesis, and thus the final yield and seed quality. The combination of digital cameras on unmanned aerial vehicles (UAVs) and use of structure from motion have enabled high-throughput crop canopy height estimation. However, the focus of prior research has mainly been on plot-level height prediction, neglecting precise estimations for individual plants. This study aims to explore the potential of UAV RGB images with mask region-based convolutional neural network (Mask-RCNN) for high-throughput phenotyping of individual-level height (IH) in oilseed rape at different growth stages. Field-measured height (FH) of nine sampling plants in each subplot of the 150 subplots was obtained by manual measurement after the UAV flight. An instance segmentation model for oilseed rape with data augmentation based on the Mask-RCNN model was developed. The IHs were then used to obtain plot-level height based on individual-level height (PHIH). The results show that Mask-RCNN performed better than the conventional Otsu method with the F1 score increased by 60.8% and 26.6% under high and low weed pressure, respectively. The trained model with data augmentation achieved accurate crop height estimation based on overexposed and underexposed UAV images, indicating the model’s applicability in practical scenarios. The PHIH can be predicted with the determination coefficient (r<sup>2</sup>) of 0.992, root mean square error (RMSE) of 4.03 cm, relative root mean square error (rRMSE) of 7.68%, which outperformed the results in the reported studies, especially in the late bolting stage. The IHs of the whole growth stages of oilseed can be predicted by this method with an r<sup>2</sup> of 0.983, RMSE of 2.60 cm, and rRMSE of 7.14%. Furthermore, this method enabled a comprehensive Genome-wide association study (GWAS) in a 293-accession genetic population. The GWAS identified 200 and 65 statistically significant single nucleotide polymorphisms (SNPs), which were tightly associated with 28 and 11 candidate genes, at the late bolting and flowering stages, respectively. These findings demonstrated that the proposed method is promising for accurate estimations of IHs in oilseed rape as well as exploring the variations within the subplot, thus providing great potential for high-throughput plant phenotyping in crop breeding.</p>","PeriodicalId":20423,"journal":{"name":"Precision Agriculture","volume":"9 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2023-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138634978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
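The following sketch shows the two-step idea behind individual-plant height extraction: instance masks from a Mask R-CNN (here torchvision's COCO-pretrained model as a stand-in for the fine-tuned oilseed rape model), then a per-plant height statistic from a canopy height model. The synthetic CHM, the score threshold and the 95th-percentile height rule are assumptions.

```python
# Sketch: per-instance plant height from Mask R-CNN masks and a CHM raster.
import numpy as np
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()   # placeholder pretrained model

rgb = torch.rand(3, 512, 512)                 # UAV RGB tile, values in [0, 1]
chm = np.random.rand(512, 512) * 1.5          # synthetic canopy height model (m)

with torch.no_grad():
    out = model([rgb])[0]                     # dict with boxes, masks, scores

heights = []
for mask, score in zip(out["masks"], out["scores"]):
    if score < 0.5:                           # keep confident instances only
        continue
    m = mask[0].numpy() > 0.5                 # binary instance mask
    if m.any():
        heights.append(np.percentile(chm[m], 95))   # robust per-plant height
print(f"{len(heights)} plants, heights (m): {np.round(heights, 2)}")
```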
Extracting illuminated vegetation, shadowed vegetation and background for finer fractional vegetation cover with polarization information and a convolutional network
IF 6.2, Q2, Agricultural and Forestry Sciences
Precision Agriculture, Pub Date: 2023-12-13, DOI: 10.1007/s11119-023-10094-w
Hongru Bi, Wei Chen, Yi Yang
{"title":"Extracting illuminated vegetation, shadowed vegetation and background for finer fractional vegetation cover with polarization information and a convolutional network","authors":"Hongru Bi, Wei Chen, Yi Yang","doi":"10.1007/s11119-023-10094-w","DOIUrl":"https://doi.org/10.1007/s11119-023-10094-w","url":null,"abstract":"<p>Shadows are inevitable in vegetated remote sensing scenes due to variations in viewing and solar geometries, resulting in illuminated vegetation, shadowed vegetation, illuminated background and shadowed background. In RGB images, shadowed vegetation is difficult to separate from the shadowed background because their spectra are very similar in the visible light range. Furthermore, shadowed vegetation may provide different ecological functions than illuminated vegetation. Therefore, it is important to extract both illuminated and shadowed vegetation instead of combining them into one vegetation class. However, most previous studies focused on extracting total vegetation cover and neglected separating illuminated and shadowed vegetation, partly due to a lack of sufficient information. In this study, polarization information is introduced to extract illuminated vegetation, shadowed vegetation and background simultaneously with different deep learning algorithms. The experimental results show that the addition of polarization information can effectively improve the extraction accuracy of illuminated vegetation, shadowed vegetation and background, with a maximum accuracy improvement of 12.2%. The accuracy of shadow vegetation improved the most, with a rate of 21.8%. The results of this study suggest that by adding polarization information, illuminated and shadowed vegetation can be accurately extracted to provide a reliable vegetation cover product for remote sensing.</p>","PeriodicalId":20423,"journal":{"name":"Precision Agriculture","volume":"55 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2023-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138582519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
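To show how polarization can supply the extra information the abstract mentions, the sketch below computes the degree and angle of linear polarization from four polarizer-angle images via the Stokes parameters and stacks them as additional input channels for a classifier. The four-angle acquisition and the channel stacking are assumptions; the study's sensor and network are not reproduced.

```python
# Sketch: Stokes-based polarization channels appended to an RGB input.
import numpy as np

def polarization_channels(i0, i45, i90, i135):
    """Return DoLP and AoLP maps from intensity images at 0/45/90/135 degrees."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)                # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1**2 + s2**2) / (s0 + 1e-6)       # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)                   # angle of linear polarization
    return dolp, aolp

rng = np.random.default_rng(2)
imgs = [rng.random((128, 128)) for _ in range(4)]     # synthetic polarizer-angle images
dolp, aolp = polarization_channels(*imgs)
rgb = rng.random((128, 128, 3))
x = np.dstack([rgb, dolp, aolp])                      # 5-channel input for a CNN classifier
print(x.shape)                                        # (128, 128, 5)
```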
DAE-Mask: a novel deep-learning-based automatic detection model for in-field wheat diseases
IF 6.2, Q2, Agricultural and Forestry Sciences
Precision Agriculture, Pub Date: 2023-12-12, DOI: 10.1007/s11119-023-10093-x
Rui Mao, Yuchen Zhang, Zexi Wang, Xingan Hao, Tao Zhu, Shengchang Gao, Xiaoping Hu
{"title":"DAE-Mask: a novel deep-learning-based automatic detection model for in-field wheat diseases","authors":"Rui Mao, Yuchen Zhang, Zexi Wang, Xingan Hao, Tao Zhu, Shengchang Gao, Xiaoping Hu","doi":"10.1007/s11119-023-10093-x","DOIUrl":"https://doi.org/10.1007/s11119-023-10093-x","url":null,"abstract":"<p>Wheat diseases seriously restrict the safety of wheat production and food quality. For farmers and agriculture technicians, diagnosing the disease with the naked eye is not suitable for modern precision agriculture. Deep learning has shown promise in crop disease diagnosis, but accuracy and speed remain a significant challenge in natural field conditions. In this study, a novel DAE-Mask method based on <b>d</b>iversification-<b>a</b>ugmented features and <b>e</b>dge features was proposed for intelligent wheat disease detection. DAE-Mask used Densely Connected Convolutional Networks (DenseNet) for preliminary feature extraction, and a backbone feature extraction network combining Feature Pyramid Network (FPN) and attention mechanism was designed to extract diversification-augmented features. To accelerate DAE-Mask, an Edge Agreement Head module based on Sobel filters was designed to compare edge features during training, which improved the model’s mask generation efficiency. We also built a multi-scene wheat disease dataset, MSWDD2022, containing images of wheat stripe rust, wheat powdery mildew, wheat yellow dwarf, and wheat scab. Our model achieved detection speed of 0.08s/pic. On MSWDD2022, our model with mean average precision (<i>mAP</i>) of 96.02% outperformed YOLOv5s, YOLOv8x, SSD, EfficientDet, CenterNet, and RefineDet by 7.79, 1.32, 3.54, 4.79, 9.77, and 5.29 percentage points, respectively. On the public dataset PlantDoc, our model with <i>mAP</i> of 57.68% outperformed YOLOv5s, YOLOv8x, SSD, EfficientDet, CenterNet, and RefineDet by 27.76, 6.48, 14.43, 11.79, 19.40, and 13.40 percentage points, respectively. Finally, the DAE-Mask was deployed on WeChat Mini Program to realize the real-time detection of in-field wheat diseases. The <i>mAP</i> reached 92.78%, and the average return delay of each image was 1.43s.</p>","PeriodicalId":20423,"journal":{"name":"Precision Agriculture","volume":"15 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2023-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138571150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
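A minimal sketch of a Sobel-based edge-agreement loss is given below, comparing edge maps of a predicted mask and its ground truth, which is the general idea behind an Edge Agreement Head. This standalone PyTorch loss is an assumption and not DAE-Mask's exact module.

```python
# Sketch: compare Sobel edge maps of predicted and ground-truth masks.
import torch
import torch.nn.functional as F

SOBEL_X = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
SOBEL_Y = SOBEL_X.transpose(2, 3)

def edge_agreement_loss(pred_mask, gt_mask):
    """MSE between Sobel edge magnitudes of two (N, 1, H, W) masks in [0, 1]."""
    def edges(m):
        gx = F.conv2d(m, SOBEL_X, padding=1)
        gy = F.conv2d(m, SOBEL_Y, padding=1)
        return torch.sqrt(gx**2 + gy**2 + 1e-6)
    return F.mse_loss(edges(pred_mask), edges(gt_mask))

pred = torch.rand(2, 1, 28, 28, requires_grad=True)   # predicted mask logits-as-probs
gt = (torch.rand(2, 1, 28, 28) > 0.5).float()         # synthetic binary ground truth
loss = edge_agreement_loss(pred, gt)
loss.backward()                                       # usable as an auxiliary training loss
print(float(loss))
```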
Precision of grain yield monitors for use in on-farm research strip trials
IF 6.2, Q2, Agricultural and Forestry Sciences
Precision Agriculture, Pub Date: 2023-12-11, DOI: 10.1007/s11119-023-10092-y
A. A. Gauci, J. P. Fulton, A. Lindsey, S. A. Shearer, D. Barker, E. M. Hawkins
{"title":"Precision of grain yield monitors for use in on-farm research strip trials","authors":"A. A. Gauci, J. P. Fulton, A. Lindsey, S. A. Shearer, D. Barker, E. M. Hawkins","doi":"10.1007/s11119-023-10092-y","DOIUrl":"https://doi.org/10.1007/s11119-023-10092-y","url":null,"abstract":"<p>On-farm research (OFR) has become popular as a result of precision agriculture technology simplifying the process and farm software capabilities to summarize results collected through the technology. Different OFR designs exists with strip-trials being a simple approach to evaluate different treatments. Common in OFR is the use of yield monitors to collect crop performance data since yield represents a primary response variable in these type studies. The objective was to investigate the ability of grain yield monitoring technologies to accurately inform strip trials when frequent yield variability exists within an experimental unit. A combination of six sub-plot treatment resolutions (TR) that differed in length of imposed yield variation (7.6, 15.2, 30.5, 61.0, 121.9, and 243.8 m) were harvested at combine ground speeds of 3.2, 6.4, 7.2, and 8.1 kph, depending on study site (three study sites total). Intentional yield differences in maize (<i>Zea mays L.</i>) were created for each sub-plot by alternating the amount nitrogen (N) applied: 0 or 202 kg N/ha. Yield was measured by four commercially available yield monitoring (YM) technologies and a weigh wagon. Comparisons were made between the accumulated mass of the YM technology and weigh wagon through percent differences along with testing the significance of the plotted relationship between YM and weigh wagon. Results indicated that yield monitoring technology can be used to evaluate strip trial performance regardless of yield frequency and variability (error &lt; 3%) within an experimental unit when operating within the calibrated range of the mass flow sensor. Operating outside of the calibrated range of the mass flow sensor resulted in &gt; 15% error in estimating accumulated weight and overestimation of yield by 23%. Finally, no significant differences existed in estimating accumulated weight values between grain yield monitor technologies (all p-values ≥ 0.54).</p>","PeriodicalId":20423,"journal":{"name":"Precision Agriculture","volume":"149 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2023-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138565149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
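The core comparison in the study (yield-monitor accumulated mass versus weigh-wagon mass via percent differences and a fitted relationship) can be expressed in a few lines. The sketch below uses synthetic load data and SciPy's linear regression as an illustration only; the error level is made up.

```python
# Sketch: percent difference and linear fit of monitor mass vs. weigh-wagon mass.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
wagon_kg = rng.uniform(2000, 9000, 30)                    # weigh-wagon reference loads
monitor_kg = wagon_kg * rng.normal(1.0, 0.02, 30)         # monitor estimates (~2% noise)

pct_diff = 100.0 * (monitor_kg - wagon_kg) / wagon_kg
slope, intercept, r, p, se = stats.linregress(wagon_kg, monitor_kg)

print(f"mean |percent difference|: {np.mean(np.abs(pct_diff)):.2f} %")
print(f"monitor = {slope:.3f} * wagon + {intercept:.1f} kg  (r^2 = {r**2:.3f}, p = {p:.3g})")
```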
Within-season vegetation indices and yield stability as a predictor of spatial patterns of Maize (Zea mays L) yields
IF 6.2, Q2, Agricultural and Forestry Sciences
Precision Agriculture, Pub Date: 2023-12-07, DOI: 10.1007/s11119-023-10101-0
Guanyuan Shuai, Ames Fowler, Bruno Basso
{"title":"Within-season vegetation indices and yield stability as a predictor of spatial patterns of Maize (Zea mays L) yields","authors":"Guanyuan Shuai, Ames Fowler, Bruno Basso","doi":"10.1007/s11119-023-10101-0","DOIUrl":"https://doi.org/10.1007/s11119-023-10101-0","url":null,"abstract":"&lt;p&gt;Accurate evaluation of crop performance and yield prediction at a sub-field scale is essential for achieving high yields while minimizing environmental impacts. Two important approaches for improving agronomic management and predicting future crop yields are the spatial stability of historic crop yields and in-season remote sensing imagery. However, the relative accuracies of these approaches have not been well characterized. In this study, we aim to first, assess the accuracies of yield stability and in-season remote sensing for predicting yield patterns at a sub-field resolution across multiple fields, second, investigate the optimal satellite image date for yield prediction, and third, relate bi-weekly changes in GCVI through the season to yield levels. We hypothesize that historical yield stability zones provide high accuracies in identifying yield patterns compared to within-season remote sensing images.&lt;/p&gt;&lt;p&gt;To conduct this evaluation, we utilized biweekly Planet images with visible and near-infrared bands from June through September (2018–2020), along with observed historical yield maps from 115 maize fields located in Indiana, Iowa, Michigan, and Minnesota, USA. We compared the yield stability zones (YSZ) with the in-season remote sensing data, specifically focusing on the green chlorophyll vegetative index (GCVI). Our analysis revealed that yield stability maps provided more accurate estimates of yield within both high stable (HS) and low stable (LS) yield zones within fields compared to any single-image in-season remote sensing model.&lt;/p&gt;&lt;p&gt;For the in-season remote sensing predictions, we used linear models for a single image date, as well as multi-linear and random forest models incorporating multiple image dates. Results indicated that the optimal image date for yield prediction varied between and within fields, highlighting the instability of this approach. However, the multi-image models, incorporating multiple image dates, showed improved prediction accuracy, achieving R&lt;sup&gt;2&lt;/sup&gt; values of 0.66 and 0.86 by September 1st for the multi-linear and random forest models, respectively. Our analysis revealed that most low or high GCVI values of a pixel were consistent across the season (77%), with the greatest instability observed at the beginning and end of the growing season. Interestingly, the historical yield stability zones provided better predictions of yield compared to the bi-weekly dynamics of GCVI. The historically high-yielding areas started with low GCVI early in the season but caught up, while the low-yielding areas with high initial GCVI faltered.&lt;/p&gt;&lt;p&gt;In conclusion, the historical yield stability zones in the US Midwest demonstrated robust predictive capacity for in-field heterogeneity in stable zones. Multi-image models showed promise for assessing unstable zones during the season, but it is crucial to link these two approaches to fully capture both stable and unstable zones of crop yield. 
This study provides oppor","PeriodicalId":20423,"journal":{"name":"Precision Agriculture","volume":"24 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2023-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138544825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
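Two ingredients of the study above translate directly into code: the green chlorophyll vegetation index (GCVI = NIR/green − 1) and historical yield-stability zoning from a multi-year yield stack. The sketch below is a minimal version; the coefficient-of-variation threshold, the median split and the synthetic data are assumptions, not the paper's zone definitions.

```python
# Sketch: GCVI from NIR/green bands and simple yield-stability zoning.
import numpy as np

def gcvi(nir, green):
    """Green chlorophyll vegetation index: NIR / green - 1."""
    return nir / (green + 1e-6) - 1.0

def stability_zones(yield_stack, cv_thresh=0.15):
    """Classify pixels of a (years, H, W) yield stack.

    Returns 0 = unstable, 1 = low & stable, 2 = high & stable.
    """
    mean = yield_stack.mean(axis=0)
    cv = yield_stack.std(axis=0) / (mean + 1e-6)
    zones = np.zeros(mean.shape, dtype=int)              # unstable by default
    stable = cv < cv_thresh
    zones[stable & (mean >= np.median(mean))] = 2        # high and stable
    zones[stable & (mean < np.median(mean))] = 1         # low and stable
    return zones

rng = np.random.default_rng(4)
yields = rng.normal(10.0, 1.5, size=(5, 50, 50))         # 5 seasons of maize yield (t/ha)
zones = stability_zones(yields)
print(np.bincount(zones.ravel(), minlength=3))           # pixel counts per zone

nir, green = rng.random((50, 50)), rng.random((50, 50)) * 0.5
print(float(gcvi(nir, green).mean()))
```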