{"title":"Optimizing sugarcane bale logistics operations: Leveraging reinforcement learning and artificial multiple intelligence for dynamic multi-fleet management and multi-period scheduling under machine breakdown constraints","authors":"Rapeepan Pitakaso , Kanchana Sethanan , Chettha Chamnanlor , Chen-Fu Chien , Sarayut Gonwirat , Kongkidakhon Worasan , Ming K Limg","doi":"10.1016/j.compag.2025.110431","DOIUrl":"10.1016/j.compag.2025.110431","url":null,"abstract":"<div><div>This study enhances the bio-circular green economic model within the sugar industry by advancing sustainable practices, notably green harvesting. A significant challenge involves establishing an efficient supply chain for sugarcane bale collection, emphasizing the minimization of idle time and the optimization of travel routes. The objective is to refine the scheduling and routing strategies for specialized machinery in sugarcane bale operations through a heuristic-driven methodology. The incorporation of a reinforcement learning-artificial multiple intelligence system (RL-AMIS) tackles the logistical challenges, particularly in dynamic multifleet scheduling and breakdown management, providing an advanced solution for bale collection. This system combines reinforcement learning (RL) with artificial multiple intelligence system (AMIS) components to enhance profitability. Furthermore, the application of genetic algorithms (GA) and differential evolution (DE) introduces robust enhancement techniques. In support of real-time decision-making for route planners, the study developed sugarcane bale logistics software, BaleLogistics, and corresponding mobile applications based on the RL-AMIS framework. A case study conducted in Thailand demonstrated that the RL-AMIS model significantly outperformed conventional methods, reducing operational costs by 26.1 %, the makespan by 10.54 %, and working time by 6.43 %, while achieving a task completion rate of 96.65 % and decreasing machine downtime by 77.05 %. This research marks a pioneering step in employing technology to optimize sugarcane bale logistics, potentially extending novel and efficient logistic solutions across the agricultural sector.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"236 ","pages":"Article 110431"},"PeriodicalIF":7.7,"publicationDate":"2025-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143877271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Foundation model-based apple ripeness and size estimation for selective harvesting","authors":"Keyi Zhu , Jiajia Li , Kaixiang Zhang , Chaaran Arunachalam , Siddhartha Bhattacharya , Renfu Lu , Zhaojian Li","doi":"10.1016/j.compag.2025.110407","DOIUrl":"10.1016/j.compag.2025.110407","url":null,"abstract":"<div><div>Harvesting is a critical task in the tree fruit industry, demanding extensive manual labor and substantial costs, and exposing workers to potential hazards. Recent advances in automated harvesting offer a promising solution by enabling efficient, cost-effective, and ergonomic fruit picking within tight harvesting windows. However, existing harvesting technologies often indiscriminately harvest all visible and accessible fruits, including those that are unripe or undersized. This study introduces a novel foundation-model-based framework for efficient apple ripeness and size estimation. Specifically, we curated two public RGBD-based Fuji apple image datasets, integrating expanded annotations for ripeness (“Ripe” vs. “Unripe”) based on fruit color and image capture dates. The resulting comprehensive dataset, <em>Fuji-Ripeness-Size Dataset</em>, includes 4,027 images and 16,257 annotated apples with ripeness and size labels. To the best of our knowledge, this is the first published dataset on apples with ripeness and size annotations. Leveraging Grounding-DINO, a foundation-model-based object detector, we achieved robust apple detection and ripeness estimation, with mean Average Precision being 72.8, outperforming other state-of-the-art models in the evaluation on our dataset. Additionally, we developed six size estimation algorithms, made a comprehensive comparison using box-plots, and identified the best algorithm with lowest error and variation. The <em>Fuji-Ripeness-Size Dataset</em> and the apple detection and size estimation algorithms are made publicly available<span><span><sup>1</sup></span></span>, which provides valuable benchmarks for future studies in automated and selective harvesting.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"236 ","pages":"Article 110407"},"PeriodicalIF":7.7,"publicationDate":"2025-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143877231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Kinetostatic modeling of clamping force in a tendon-driven soft robotic gripper for tea shoot plucking","authors":"Xinyu Zhao , Leiying He , Yatao Li , Jianneng Chen , Chuanyu Wu","doi":"10.1016/j.compag.2025.110441","DOIUrl":"10.1016/j.compag.2025.110441","url":null,"abstract":"<div><div>The mechanized plucking of tea shoots presents several challenges due to their delicate nature, small size, and dense growth, all of which pose significant difficulties in the design of the end-effector. This study proposes a soft robotic gripper specifically designed for tea shoot plucking, featuring two identical tendon-actuated fingers. To accurately determine the clamping force of the gripper, a kinetostatic model was established based on the Chain-Beam Constraint Model (CBCM), which characterizes the relationship among tendon tension, tendon displacement, and the clamping force. The accuracy of the model was validated through finite element analysis (FEA) and experimental testing, with maximum discrepancies in clamping force of 0.35 N under tendon tension and 0.28 N under tendon displacement. Using this model, the required tendon tension and tendon displacement for plucking tea shoots were determined, and a prototype of the gripper was subsequently developed. A field experiments demonstrated an overall plucking success rate of 94.33 %, with a 78.33 % probability that the breakpoint of the tea stem corresponded to the clamping position of the gripper. These results confirm that the clamping force predicted by the model is suitable for tea shoot plucking. In conclusion, the findings demonstrate that the proposed soft robotic gripper is a promising solution for tea shoot plucking.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"236 ","pages":"Article 110441"},"PeriodicalIF":7.7,"publicationDate":"2025-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143877272","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Linking fluorescence spectral to machine learning predicts the emissions fates of greenhouse gas during composting","authors":"Bing Bai , Hongtao Liu , Aizhen Liang , Lixia Wang , Anxun Wang","doi":"10.1016/j.compag.2025.110430","DOIUrl":"10.1016/j.compag.2025.110430","url":null,"abstract":"<div><div>Dissolved organic matter plays a complex and crucial role in regulating microbial activity and greenhouse gas (GHG) emissions. However, the relationship between dissolved organic matter and GHG emissions to enable intelligent prediction remains limited. Therefore, the variations in GHG emissions and dissolved organic matter characteristics were assessed across different composting scenarios in this study, including various raw materials, auxiliary materials, and composting processes. After that, three machine learning models of varying depths—Gradient Boosting Regression, Random Forest, and Deep Neural Network—were established based on dissolved organic matter characteristics to accurately predict the dynamics of GHG emissions during composting. The results indicated that the Deep Neural Network model performed best in predicting CH<sub>4</sub> emissions (R<sup>2</sup> = 0.96), while the Random Forest model excelled in predicting N<sub>2</sub>O and CO<sub>2</sub> emissions (R<sup>2</sup> = 0.93 and R<sup>2</sup> = 0.76, respectively). Meantime, further feature analysis revealed that soluble microbial by-products in raw materials, the degree of organic matter degradation, and microbial activity are crucial factors influencing the emissions of CH<sub>4</sub>, CO<sub>2</sub> and N<sub>2</sub>O, respectively. This study successfully achieved accurate predictions of GHG emissions, identified key dissolved organic matter components driving gas emissions, offered a new perspective for future research on GHG dynamics, and provided scientific guidance for GHG management during composting.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"236 ","pages":"Article 110430"},"PeriodicalIF":7.7,"publicationDate":"2025-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143877228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Speed control of an autonomous electric vehicle for orchard spraying","authors":"Yoshitomo Yamasaki, Kazunobu Ishii, Noboru Noguchi","doi":"10.1016/j.compag.2025.110419","DOIUrl":"10.1016/j.compag.2025.110419","url":null,"abstract":"<div><div>We developed an autonomous electric vehicle for orchard spraying, termed a spraying robot. Traveling resistance varies depending on vehicle weight, the front sideslip angle, and surface slope. The vehicle weight must change while traveling, especially for the spraying robot. To adapt to changes in those resistances, it is necessary to develop a speed controller. This research focused on rolling and slope resistance as a traveling resistance, which depends on the vehicle weight. We modeled the resistance and developed a feedforward controller with a proportional-integral-derivative (PID) feedback controller. The developed controller (FF-PID) was compared with a simple PID controller in simulation. The FF-PID was verified to be more rapid and stable response than the PID. Moreover, the FF-PID responded adaptively when the vehicle weight changed. Compared to the PID, the FF-PID reduced the error to the target speed by 50 % during sideslip angle changes and by 48 % during slope angle changes. Finally, we simulated a spraying task based on actual traveling data in a vineyard, factoring in the vehicle weight, steering angle, and slope angle change. The results showed that the FF-PID reduced error by 32 %. This research improved the performance of the spraying robot’s speed controller by modeling traveling resistance in an orchard environment.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"236 ","pages":"Article 110419"},"PeriodicalIF":7.7,"publicationDate":"2025-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143877227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detecting invasive insects using Uncrewed Aerial Vehicles and Variational AutoEncoders","authors":"Henry Medeiros , Amy Tabb , Scott Stewart , Tracy Leskey","doi":"10.1016/j.compag.2025.110362","DOIUrl":"10.1016/j.compag.2025.110362","url":null,"abstract":"<div><div>Invasive insect pests, such as the brown marmorated stink bug (BMSB), cause significant economic and environmental damage to agricultural crops. To mitigate damage, entomological research to characterize insect behavior in the affected regions is needed. A component of this research is tracking insect movement with mark-release-recapture (MRR) methods. A common type of MRR requires marking insects with a fluorescent powder, releasing the insects into the wild, and searching for the marked insects using direct observations aided by ultraviolet (UV) flashlights at suspected destination locations. This involves a significant amount of labor and has a low recapture rate. Automating the insect search step can improve recapture rates, reducing the amount of labor required in the process and improving the quality of the data. We propose a new MRR method that uses an uncrewed aerial vehicle (UAV) to collect video data of the area of interest. Our system uses a UV illumination array and a digital camera mounted on the bottom of the UAV to collect nighttime images of previously marked and released insects. We propose a novel unsupervised computer vision method based on a Convolutional Variational Auto Encoder (CVAE) to detect insects in these videos. We then associate insect observations across multiple frames using ByteTrack and project these detections to the ground plane using the UAV’s flight log information. This allows us to accurately count the real-world insects. Our experimental results show that our system can detect BMSBs with an average precision of 0.86 and average recall of 0.87, substantially outperforming the current state of the art.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"236 ","pages":"Article 110362"},"PeriodicalIF":7.7,"publicationDate":"2025-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143877229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AngusRecNet: Multi-module cooperation for facial anti-occlusion recognition in single-stage Angus cattle","authors":"Lijun Hu , Xu Li , Guoliang Li , Zhongyuan Wang","doi":"10.1016/j.compag.2025.110456","DOIUrl":"10.1016/j.compag.2025.110456","url":null,"abstract":"<div><div>In the context of the booming development of modern precision livestock farming, traditional cattle recognition methods exhibit clear limitations when faced with interference from feed residues, dirt, and other obstructions on the face. To address this, this study proposes an innovative deep learning framework, AngusRecNet, aimed at solving the facial recognition problem of Angus cattle under occlusion scenarios. The backbone network of AngusRecNet includes the innovatively designed Occlusion-Robust Feature Extraction Module (ORFEM) and the Vision AeroStack Module (VASM). By combining Asymmetric convolutions and fine spatial sampling, it effectively captures facial features. The neck structure is integrated with the Mamba architecture and the core ideas of DySample, leading to the design of the State Space Dynamic Sampling Feature Pyramid Network (SS-DSFPN), which enhances multi-scale feature extraction and fusion capabilities under occlusion scenarios. Additionally, the proposed Mish-Driven Channel-Spatial Transformer Head (MCST-Head), combining Channel Spatial Fusion Transformer (CSFT) and Smooth Depth Convolution (SDConv), optimizes feature representation and spatial perception in deep learning networks, significantly improving robustness and bounding box regression performance under complex backgrounds and occlusion conditions. Testing on the newly constructed AngusFace dataset demonstrates that AngusRecNet achieves a mAP50 of 94.2% in facial recognition tasks, showcasing its immense potential for application in precision livestock farming. The code can be obtained on GitHub: <span><span>https://github.com/HLJ11235/AngusRecNet</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"236 ","pages":"Article 110456"},"PeriodicalIF":7.7,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143874142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-Modal sensing for soil moisture mapping: Integrating drone-based ground penetrating radar and RGB-thermal imaging with deep learning","authors":"Milad Vahidi, Sanaz Shafian, William Hunter Frame","doi":"10.1016/j.compag.2025.110423","DOIUrl":"10.1016/j.compag.2025.110423","url":null,"abstract":"<div><div>Precise estimation of soil moisture is vital for refining irrigation practices, enhancing crop productivity, and promoting sustainable water use management. This study integrates Ground Penetrating Radar (GPR) and RGB-Thermal imaging datasets to enhance soil moisture prediction throughout the maize growing season, assessing moisture content at 10 and 30-cm soil depths. By leveraging the complementary strengths of GPR for subsurface moisture detection and RGB-Thermal imagery for surface and canopy analysis, we addressed common issues such as underestimation and overestimation often encountered with standalone datasets, including the weaknesses of GPR signal and its attenuation for deeper soil moisture monitoring as well as RGB-Thermal sensor lack in dealing with canopy, covering the<!--> <!-->soil surface. Advanced machine learning models, including ANN, AdaBoost, and PLS, were applied to evaluate the effects of thermal, structural and spectral variables on accuracy of moisture estimation. The best-performing model, the ANN trained with variables extracted from the 1D-CNN network, achieved an R<sup>2</sup> of 0.83 and an RMSE of 1.9 % at 10 cm depth. At 30 cm, the same model achieved an R<sup>2</sup> of 0.79 and an RMSE of 3.2 %, showing robust performance even at deeper soil levels. These results demonstrate the significant improvement in model performance when GPR data is integrated with RGB-Thermal data, reducing prediction errors in both high and low moisture regimes. Thermal variables, particularly Land Surface Temperature, exhibited a strong correlation with moisture content, especially at shallower depths. However, GPR variables were essential for detecting subsurface moisture at 30 cm depth, where RGB-Thermal data alone showed limitations. The integration of GPR and RGB-Thermal data resulted in a more comprehensive and accurate soil moisture estimation model, offering significant potential for optimizing water use in agricultural systems.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"236 ","pages":"Article 110423"},"PeriodicalIF":7.7,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143874143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CDIP-ChatGLM3: A dual-model approach integrating computer vision and language modeling for crop disease identification and prescription","authors":"Changqing Yan , Zeyun Liang , Han Cheng , Shuyang Li , Guangpeng Yang , Zhiwei Li , Ling Yin , Junjie Qu , Jing Wang , Genghong Wu , Qi Tian , Qiang Yu , Gang Zhao","doi":"10.1016/j.compag.2025.110442","DOIUrl":"10.1016/j.compag.2025.110442","url":null,"abstract":"<div><div>Deep learning (DL) models have shown exceptional accuracy in plant disease identification, yet their practical utility for farmers remains limited due to a lack of professional and actionable guidance. To bridge this gap, we developed CDIP-ChatGLM3, an innovative framework that synergizes a state-of-the-art DL-based computer vision model with a fine-tuned large language model (LLM), designed specifically for Crop Disease Identification and Prescription (CDIP). EfficientNet-B2, evaluated among 10 DL models across 48 diseases and 13 crops, achieved top performance with 97.97 % ± 0.16 % accuracy at a 95 % confidence level. Building on this, we fine-tuned the widely used ChatGLM3-6B LLM using Low-Rank Adaptation (LoRA) and Freeze-tuning, optimizing its ability to deliver precise disease management prescriptions. We compared two training strategies—multi-task learning (MTL) and Dual-stage Mixed Fine-Tuning (DMT)—using a different combination of domain-specific and general datasets. Freeze-tuning with DMT led to substantial performance gains, achieving a 33.16 % improvement in BLEU-4 and a 27.04 % increase in the Average ROUGE F-score, surpassing the original model and state-of-the-art competitors such as Qwen-max, Llama-3.1-405B-Instruct, and GPT-4o. The dual-model architecture of CDIP-ChatGLM3 leverages the complementary strengths of computer vision for image-based disease detection and LLMs for contextualized, domain-specific text generation, offering unmatched specialization, interpretability, and scalability. Unlike resource-intensive multimodal models that blend modalities, our dual-model approach maintains efficiency while achieving superior performance in both disease identification and actionable prescription generation.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"236 ","pages":"Article 110442"},"PeriodicalIF":7.7,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143874141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"RTFVE-YOLOv9: Real-time fruit volume estimation model integrating YOLOv9 and binocular stereo vision","authors":"Wenlong Yi , Shuokang Xia , Sergey Kuzmin , Igor Gerasimov , Xiangping Cheng","doi":"10.1016/j.compag.2025.110401","DOIUrl":"10.1016/j.compag.2025.110401","url":null,"abstract":"<div><div>This study proposes a real-time fruit volume estimation model based on YOLOv9 (RTFVE-YOLOv9) and binocular stereo vision technology to address the challenges of low automation and insufficient accuracy in fruit volume measurement in complex orchard environments, particularly in scenarios with diverse canopy structures and severe branch-leaf occlusion. The model achieves effective recognition of occluded fruits through the innovative design of a Dual-Scale and Global–Local Sequence (DSGLSeq) module while incorporating a Multi-Head and Multi-Scale Self-Interaction (MHMSI) module to improve the detection performance of small fruit targets. Systematic validation experiments conducted on major economic fruit tree varieties, including apples, pears, pomelos, and kiwifruit, demonstrate that RTFVE-YOLOv9 improved the mean Average Precision (mAP) by 2.1%, 1.6%, 4%, and 3.8% respectively on the four fruit datasets compared to the baseline YOLOv9-c model. The model’s internal working mechanisms were thoroughly revealed through multi-dimensional evaluation, including ablation experiments, Heatmap Analysis, and Effective Receptive Field (ERF) analysis, providing a theoretical foundation for subsequent optimization. The research findings enrich the application theory of computer vision in smart agriculture and provide reliable technical support for achieving precise orchard management.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"236 ","pages":"Article 110401"},"PeriodicalIF":7.7,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143869863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}