{"title":"YOLOv9-GSSA model for efficient soybean seedlings and weeds detection","authors":"Baihe Liang , Liangchen Hu , Guangxing Liu , Peng Hu , Shaosheng Xu , Biao Jie","doi":"10.1016/j.atech.2025.101134","DOIUrl":"10.1016/j.atech.2025.101134","url":null,"abstract":"<div><div>To monitor soybean seedling growth in real time, an effective method for accurately identifying seedlings and removing weeds is essential. The small size and morphological similarity of seedlings and weeds complicate conventional detection methods. To tackle these issues, we propose a real-time detection algorithm called YOLOv9-GSSA. The improved Mosaic-Dense algorithm increases object density at the model's input layer, enhancing its ability to capture detailed features. Additionally, the GSSA neck optimization module, combining GSConv and Gated Self-Attention, supports key information extraction and multi-scale feature interaction. The Swin-GSSA prediction head further utilizes spatial positional information, improving small-object detection and the handling of overlapping, occluded targets. Experimental results show our model achieves a mAP of 47.5% with a per-image inference time of 23.42 ms, suitable for real-time monitoring. The enhanced model significantly improves the detection of soybean seedlings and weeds, making it a valuable tool for managing farmland effectively. 
This ultimately aids in precise yield estimation and decision-making in precision agriculture.</div></div>","PeriodicalId":74813,"journal":{"name":"Smart agricultural technology","volume":"12 ","pages":"Article 101134"},"PeriodicalIF":6.3,"publicationDate":"2025-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144548891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
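The record above credits an improved Mosaic-Dense input augmentation with raising object density at the input layer. The Mosaic-Dense variant itself is not specified in the abstract; as a hedged illustration, here is the baseline Mosaic idea it builds on, with images represented as nested Python lists (all names here are ours, not the paper's):

```python
# Minimal sketch of plain Mosaic augmentation: four source images are tiled
# into one composite, so each training sample carries more objects. This is
# the baseline idea only; the paper's Mosaic-Dense variant is not described
# in the abstract. Images are nested lists (rows of pixel values).

def mosaic(tl, tr, bl, br):
    """Tile four equally sized images into one 2x2 mosaic."""
    top = [a + b for a, b in zip(tl, tr)]        # concatenate rows side by side
    bottom = [a + b for a, b in zip(bl, br)]
    return top + bottom                          # stack the two halves

img = [[1, 1], [1, 1]]
out = mosaic(img, [[2, 2], [2, 2]], [[3, 3], [3, 3]], [[4, 4], [4, 4]])
# out is a 4x4 composite whose quadrants come from the four inputs
```

In real pipelines the four images are also randomly scaled and cropped before tiling, and their bounding boxes are remapped into the composite's coordinates.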
{"title":"Design and trial of precision spraying system for weeds in winter wheat field at tillering stage","authors":"Bo Li , Peijie Guo , Yu Chen , Jun Chen , Haiying Wang , Jing Zhang , Zhixing Zhang","doi":"10.1016/j.atech.2025.101159","DOIUrl":"10.1016/j.atech.2025.101159","url":null,"abstract":"<div><div>During the tillering stage of wheat, the distribution of weeds in the field is irregular, often showing single plants or clusters. Current precision spraying systems are mainly suitable for locating and spraying single-plant vegetation, which usually leads to missed or under-sprayed targets when dealing with clustered weeds. In this study, a precision spraying control method is proposed that reduces the effect of camera frame rate on weed localization through three sets of position determination regions, and addresses the limited response frequency of the solenoid valve by using a velocity-adaptive dynamic overlap region to keep the nozzle spraying herbicide continuously over clustered weeds. To improve the accuracy of weed detection, GCGS-YOLO is proposed as a weed detection model: the Global Context (GC) attention mechanism is integrated with the traditional C3 module to optimize the backbone feature extraction network, and the GSConv module is introduced to improve the neck network. The improved model's <em>P, R, mAP</em> and <em>F</em>1 were 88 %, 84.6 %, 92.2 % and 86.3 %, respectively, 3 %, 3.1 %, 2.7 % and 3.1 % higher than those of the original model. The precision spraying algorithm and system were integrated into a test bed and a sprayer for testing. The tests showed that the recognition rate and spraying rate on the test bed could reach >98 % at different speeds. The results of the field test showed that the recognition rate and spray application rate of the sprayer were 91.2 % and 96.1 %, respectively, at a speed of 0.2 m/s. 
The research results can reduce the waste of herbicide, improve the efficiency of weeding, and provide reference for large-scale precision weeding.</div></div>","PeriodicalId":74813,"journal":{"name":"Smart agricultural technology","volume":"12 ","pages":"Article 101159"},"PeriodicalIF":6.3,"publicationDate":"2025-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144581011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
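The abstract describes a velocity-adaptive dynamic overlap region that keeps the nozzle open across a weed cluster while compensating for valve latency. The actual controller is not given; the sketch below is our assumed reading of that idea, with all parameter names and the overlap rule invented for illustration:

```python
# Hypothetical sketch of velocity-adaptive spray timing for clustered weeds.
# Names, the latency-compensation rule, and the overlap margin are all
# assumptions; the paper's real control law is not stated in the abstract.

def spray_schedule(speed, cam_to_nozzle, valve_response, cluster_len, overlap=0.05):
    """Return (delay_s, duration_s) for one clustered-weed target.

    speed          -- forward speed of the sprayer, m/s
    cam_to_nozzle  -- distance from camera view to nozzle, m
    valve_response -- solenoid valve opening latency, s
    cluster_len    -- along-track length of the weed cluster, m
    overlap        -- extra sprayed length on each side of the cluster, m
    """
    # open the valve early so the latency is cancelled by travel time
    delay = cam_to_nozzle / speed - valve_response
    # hold the valve open for the whole cluster plus an overlap margin
    duration = (cluster_len + 2 * overlap) / speed
    return max(delay, 0.0), duration

d, t = spray_schedule(speed=0.2, cam_to_nozzle=0.5, valve_response=0.05, cluster_len=0.3)
# at the field-test speed of 0.2 m/s: d ~ 2.45 s, t ~ 2.0 s
```

The key design point is that both the delay and the hold-open duration shrink as speed rises, which is what lets one continuous spray cover a cluster instead of firing per plant.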
{"title":"Instance segmentation of oyster mushroom datasets: A novel data sampling methodology for training and evaluation of deep learning models","authors":"Christos Charisis, Meiqing Wang, Dimitrios Argyropoulos","doi":"10.1016/j.atech.2025.101146","DOIUrl":"10.1016/j.atech.2025.101146","url":null,"abstract":"<div><div>This paper proposes a novel data sampling methodology for training and evaluation of deep-learning instance segmentation models using a comprehensive image dataset of oyster mushroom clusters obtained from commercial farms and including 25,978 single mushrooms. A custom data splitting and reduction strategy was designed to generate multiple training subsets for an in-depth model performance evaluation. The study also examines the ability of five feature extraction backbone configurations of Mask R-CNN: i) CNN-based (ResNet50, ResNeXt101 and ConvNeXt) and ii) Transformer-based (Swin small and tiny) to accurately detect and segment single mushroom instances within the clusters in the images. To complement the standard evaluation metrics (mAP, mAR), two new metrics, namely Correctness and Instance Segmentation Quality Index (ISQI), were introduced. Correctness was used to assess the segmentation quality and ISQI to combine information from both detection (mAR) and segmentation (Correctness). The new metrics examined the consistency of the generated masks across multiple experimental runs on distinct dataset splits, reflecting the ability of the models to produce similar masks despite variations in their training data. The results revealed that ConvNeXt consistently outperformed its counterparts (mAP = 0.7675, mAR = 0.8071; Correctness = 0.9160, ISQI = 0.8598) across all dataset sizes, demonstrating superior detection ability, even in cases of high occlusion and low illumination. Swin also exhibited high detection performance (mAP = 0.7616, mAR = 0.7991; Correctness = 0.9126, ISQI = 0.8540), albeit with a greater dependence on the dataset size. 
Overall, this research highlights the importance of properly evaluating backbone architectures across different dataset sizes for developing robust DL instance segmentation models applicable to mushroom farming or other visually complex environments.</div></div>","PeriodicalId":74813,"journal":{"name":"Smart agricultural technology","volume":"12 ","pages":"Article 101146"},"PeriodicalIF":6.3,"publicationDate":"2025-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144581012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
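The abstract introduces ISQI as combining detection (mAR) and segmentation (Correctness) but does not give the formula. The reported pairs are numerically consistent with a geometric mean, so the sketch below should be read as our inference from the published numbers, not as the authors' definition:

```python
# Inferred sketch of ISQI: the abstract's reported values (mAR, Correctness,
# ISQI) for both backbones match a geometric mean to ~3e-5, so we assume
# that form here. This is a reconstruction, not the paper's stated formula.
import math

def isqi(mar, correctness):
    """Assumed ISQI: geometric mean of detection mAR and mask Correctness."""
    return math.sqrt(mar * correctness)

convnext = isqi(0.8071, 0.9160)  # close to the reported 0.8598
swin = isqi(0.7991, 0.9126)      # close to the reported 0.8540
```

A geometric mean would be a sensible choice for such an index: either a weak detector or weak masks drags the combined score down harder than an arithmetic mean would.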
{"title":"Optimizing tomato yield prediction using phenologically timed UAV-based spectral data and machine learning","authors":"Carolina Trentin , Yiannis Ampatzidis , Sotirios Tasioulas , Pavlos Tsouvaltzis","doi":"10.1016/j.atech.2025.101158","DOIUrl":"10.1016/j.atech.2025.101158","url":null,"abstract":"<div><div>Accurate yield prediction is critical for optimizing agricultural practices and ensuring food security. This study evaluated the performance of machine learning models in predicting tomato yield using weather data, spectral bands, and vegetation indices under varying nitrogen rates and bio-stimulant treatments to induce plant growth variability. UAV-based spectral data were collected across seven dates from October 27 to December 15, 2023, corresponding to key phenological stages: vegetative growth (data collection date 1), flowering (dates 2 and 3), fruit development (dates 4, 5, and 6), and early ripening (date 7). Significant input features were identified using the Pearson correlation coefficient (<em>r</em> > 0.65, <em>p</em> < 0.05), including Near Infrared (NIR), Red Edge, and Red spectral bands, as well as vegetation indices such as NDVI, GNDVI, NDRE, and SAVI. Aerial spectral data collected during fruit development (dates 5 and 6) showed the strongest correlations with yield (<em>r</em> = 0.66–0.74), emphasizing the importance of mid-to-late-season spectral information. Among the models evaluated, linear regression (LR) and XGBoost achieved the best performance, with root mean squared error (RMSE) values of 16.13 kg and 16.15 kg, respectively, and R² values of 0.63. Support vector machine (SVM) and decision tree (DT) also performed well, with RMSE values of 17.15 kg and 17.18 kg, respectively. In contrast, the deep learning model underperformed (RMSE = 23.49 kg, R² = 0.23), likely due to the limited amount of training data. 
This study highlights the predictive potential of spectral bands and emphasizes the significance of phenologically timed spectral data for yield estimation.</div></div>","PeriodicalId":74813,"journal":{"name":"Smart agricultural technology","volume":"12 ","pages":"Article 101158"},"PeriodicalIF":6.3,"publicationDate":"2025-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144564049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
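The feature-screening step above keeps inputs whose Pearson correlation with yield exceeds 0.65. A minimal sketch of that filter (the feature names and values below are illustrative, not the study's actual columns):

```python
# Threshold-based feature screening as described in the abstract: compute
# Pearson r between each candidate feature and yield, keep |r| > 0.65.
# Feature names and sample values are invented for illustration.

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def select_features(features, target, r_min=0.65):
    """Keep feature names whose |Pearson r| with the target exceeds r_min."""
    return [name for name, vals in features.items()
            if abs(pearson_r(vals, target)) > r_min]

yield_kg = [10, 12, 15, 18, 22]
features = {"NDVI": [0.41, 0.45, 0.52, 0.60, 0.66],   # tracks yield closely
            "noise": [0.9, 0.1, 0.8, 0.2, 0.7]}        # unrelated signal
kept = select_features(features, yield_kg)  # -> ["NDVI"]
```

The study additionally applies a significance test (p < 0.05), which a full implementation would layer on top of this correlation threshold.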
{"title":"A novel canopy water indicator for UAV imaging to monitor winter wheat water status","authors":"Meiyan Shu , Zhenghang Ge , Yang Li , Jibo Yue , Wei Guo , Yuanyuan Fu , Ping Dong , Hongbo Qiao , Xiaohe Gu","doi":"10.1016/j.atech.2025.101160","DOIUrl":"10.1016/j.atech.2025.101160","url":null,"abstract":"<div><div>The utilization of UAV-based imaging systems for precise assessment of crop hydration levels plays a pivotal role in optimizing irrigation strategies and enhancing the efficiency of agricultural water resource management. While canopy fuel moisture content (FMCc) serves as a key parameter for evaluating plant hydration status, its accurate quantification relies heavily on precise measurements of the leaf area index (LAI). However, the complexity involved in acquiring LAI data and the associated high costs limit the practical application of FMCc in crop water monitoring. To address this limitation, this study proposed a novel canopy water indicator, termed r-FMCc, which integrates canopy coverage and FMC. The effectiveness of FMC, FMCc and r-FMCc in assessing wheat water status was comparatively analyzed using UAV hyperspectral data. First, the hyperspectral data were processed to generate a range of vegetation indices. Subsequently, a Boruta-based feature selection algorithm was employed to identify those indices that exhibited significant correlations with the three target water parameters (FMC, FMCc, and r-FMCc). To develop robust estimation models, four machine learning algorithms were implemented across individual and combined growth stages, and their performance was validated using independent ground-measured datasets that were not used during the training process. The results indicated significant positive correlations between LAI and canopy coverage across all growth stages. Among the four estimation models, the random forest (RF) and Gaussian process regression models exhibited superior performance in estimating various water indicators. 
Modeling across combined growth stages significantly improved the accuracy of water status quantification compared with models built on individual growth stages. Using RF, the R² values for the training sets of FMC, FMCc, and r-FMCc across multiple growth stages were 0.96, 0.98, and 0.98, respectively, while the corresponding R² values for the testing sets were 0.83, 0.90, and 0.89. The integration of UAV-based hyperspectral imagery with machine learning techniques enables high-throughput and precise quantification of wheat canopy water status parameters. The newly proposed wheat water indicator (r-FMCc) enhances the applicability of UAV imaging for monitoring wheat water status without compromising estimation accuracy.</div></div>","PeriodicalId":74813,"journal":{"name":"Smart agricultural technology","volume":"12 ","pages":"Article 101160"},"PeriodicalIF":6.3,"publicationDate":"2025-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144587774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
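The abstract says r-FMCc integrates canopy coverage with FMC but gives no formula. The sketch below uses the common dry-weight definition of fuel moisture content; the coverage weighting in `r_fmcc` is purely our assumption for illustration and may not match the paper's indicator:

```python
# Hedged sketch of the water indicators compared above. FMC follows the
# standard dry-weight definition; the exact r-FMCc formula is NOT given in
# the abstract, so weighting FMC by canopy coverage here is an assumption.

def fmc(fresh_g, dry_g):
    """Fuel moisture content, % of dry weight: (fresh - dry) / dry * 100."""
    return (fresh_g - dry_g) / dry_g * 100.0

def r_fmcc(fresh_g, dry_g, coverage):
    """Assumed coverage-weighted canopy indicator (coverage in [0, 1])."""
    return fmc(fresh_g, dry_g) * coverage

leaf_fmc = fmc(3.0, 1.0)            # 200.0 % of dry weight
canopy = r_fmcc(3.0, 1.0, 0.6)      # scaled by 60 % canopy coverage
```

The appeal of such a coverage-based indicator, whatever its exact form, is that canopy coverage can be read directly from imagery, whereas LAI (needed for FMCc) is costly to measure.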
{"title":"The research on enhancing LA estimation accuracy across domains for small sample data based on data augmentation and data transfer integration optimization system","authors":"Ai-Dong Wang , Rui-Jie Li , Xiang-Qian Feng , Zi-Qiu Li , Wei-Yuan Hong , Hua-Xing Wu , Dan-Ying Wang , Song Chen","doi":"10.1016/j.atech.2025.101148","DOIUrl":"10.1016/j.atech.2025.101148","url":null,"abstract":"<div><h3>Context</h3><div>The efficient and precise monitoring of rice leaf area (LA) is essential for variety selection and agricultural management. At present, LA estimation models based on high-throughput phenotyping technologies primarily depend on homogenized large sample datasets. These models encounter generalization challenges when applied to heterogeneous scenarios with small sample sizes.</div></div><div><h3>Objective</h3><div>In this research, our goal is to develop a novel framework to mitigate prediction biases in LA caused by sample limitations and data heterogeneity. This framework integrates machine learning models to establish a universal solution for cross-domain LA estimation in data-scarce situations.</div></div><div><h3>Methods</h3><div>This research utilizes canopy image data acquired from the 2023–2024 rice full-cycle multi-view RGB imaging system (with dual front and side camera positions). Fourteen morphological feature parameters are constructed, and the leaf area values are measured through destructive sampling, together forming the dataset. 
A comprehensive comparison of six algorithms (linear regression, support vector regression, random forest, XGBoost, CatBoost, and K-nearest neighbors) is conducted, assessing their performance under a combined strategy of data augmentation (noise injection, generative adversarial networks, Gaussian mixture model, variational autoencoders) and transfer learning (random, clustering, and hierarchical parameter transfer).</div></div><div><h3>Results and conclusions</h3><div>The results demonstrate that the integrated optimization system (Gaussian Mixture Model Generation-Cluster-Based Transfer, GMM-CBT) achieved optimal performance when combined with XGBoost (validation <em>R</em><sup><em>2</em></sup>=0.85, test <em>R</em><sup><em>2</em></sup>=0.85), outperforming both standalone approaches: data augmentation (validation <em>R</em><sup><em>2</em></sup>=0.87, test <em>R</em><sup><em>2</em></sup>=-0.37) and transfer learning (validation <em>R</em><sup><em>2</em></sup>=0.84, test <em>R</em><sup><em>2</em></sup>=0.84). 
The framework clusters heterogeneous data based on morphological features (such as size, compactness, and roundness) and constructs a transfer sample library with feature coverage.</div></div><div><h3>Significance</h3><div>The proposed methodology advances precision agriculture by enabling single-plant LA monitoring, with potential extensions to other crops and trait-phenotyping applications.</div></div>","PeriodicalId":74813,"journal":{"name":"Smart agricultural technology","volume":"12 ","pages":"Article 101148"},"PeriodicalIF":6.3,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144571575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
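The framework above clusters heterogeneous samples by morphological features and builds a transfer sample library with feature coverage. The paper uses a Gaussian mixture model for this; the sketch below substitutes a tiny k-means to show the cluster-then-sample idea in self-contained code (feature values and parameters are illustrative):

```python
# Sketch of cluster-based transfer-sample selection. The paper's GMM-CBT
# uses a Gaussian mixture model; we substitute plain k-means here to keep
# the sketch dependency-free. Points are (size, compactness) tuples and the
# values are invented for illustration.
import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20, seed=0):
    """Tiny k-means; returns (centroids, per-point labels)."""
    rng = random.Random(seed)
    cents = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: dist2(p, cents[j])) for p in points]
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                cents[j] = tuple(sum(c) / len(members) for c in zip(*members))
    return cents, labels

def transfer_library(points, k, per_cluster=1):
    """Pick the sample(s) nearest each centroid -> feature-covering library."""
    cents, labels = kmeans(points, k)
    lib = []
    for j, c in enumerate(cents):
        members = [p for p, l in zip(points, labels) if l == j]
        members.sort(key=lambda p: dist2(p, c))
        lib.extend(members[:per_cluster])
    return lib

pts = [(1.0, 0.2), (1.1, 0.25), (5.0, 0.8), (5.2, 0.75)]
lib = transfer_library(pts, k=2)  # one representative per morphological cluster
```

Sampling per cluster rather than at random is what gives the library "feature coverage": every morphological mode contributes at least one transfer sample.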
{"title":"Plot-scale peanut yield estimation using a phenotyping robot and transformer-based image analysis","authors":"Zhengkun Li , Rui Xu , Nino Brown , Barry L. Tillman , Changying Li","doi":"10.1016/j.atech.2025.101154","DOIUrl":"10.1016/j.atech.2025.101154","url":null,"abstract":"<div><div>Peanuts rank as the seventh-largest crop in the United States with a farm gate value exceeding $1 billion. Conventional peanut yield estimation methods involve digging, harvesting, transporting, and weighing, which are labor-intensive and inefficient for large-scale research operations. This inefficiency is particularly pronounced in peanut breeding, which requires precise pod yield estimations of each plot in order to compare genetic potential for yield to select new, high-performing breeding lines. To improve efficiency and throughput for accelerating genetic improvement, we proposed an automated robotic imaging system to predict peanut yields in the field after digging and inversion of plots. A workflow was developed to estimate yield accurately across different genotypes by counting the pods from stitched plot-scale images. After the robotic scanning in the field, the sequential images of each peanut plot were stitched together using the Local Feature Transformer (LoFTR)-based feature matching and estimated translation between adjusted images, which avoided replicated pod counting in overlapped image regions. Additionally, the Real-Time Detection Transformer (RT-DETR) was customized for pod detection by integrating partial convolution into a lightweight ResNet-18 backbone and refining the up-sampling and down-sampling modules in cross-scale feature fusion. The customized detector achieved a mean Average Precision (mAP50) of 89.3% and a mAP95 of 55.0%, improving by 3.3% and 5.9% over the original RT-DETR model with lighter weights and less computation. 
To determine the number of pods within the stitched plot-scale image, a sliding window-based method was used to divide it into smaller patches to improve the accuracy of pod detection. In a case study of 68 plots across 19 genotypes in a peanut breeding yield trial, the predicted pod count correlated with measured yield (R<sup>2</sup>=0.47), outperforming the structure-from-motion (SfM) method. The yield ranking among different genotypes using image prediction achieved an average consistency of 84.8% with manual measurement. When the yield difference between two genotypes exceeded 12%, the consistency surpassed 90%. Overall, our robotic plot-scale peanut yield estimation workflow showed promise for replacing the manual measurement process, reducing the time and labor required for yield determination and improving the efficiency of peanut breeding.</div></div>","PeriodicalId":74813,"journal":{"name":"Smart agricultural technology","volume":"12 ","pages":"Article 101154"},"PeriodicalIF":6.3,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144571558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
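The sliding-window step described above cuts the stitched plot image into patches so small pods stay detectable at full resolution. A minimal sketch of that tiling (patch size and stride are assumptions; the study does not report them):

```python
# Sketch of sliding-window tiling before pod detection. Patch size and
# stride below are illustrative; this simple version assumes the stride
# divides (dimension - patch) evenly, whereas real pipelines pad or clamp
# the final window to cover the image edge.

def sliding_windows(width, height, patch, stride):
    """Return (x0, y0, x1, y1) patch boxes covering the image."""
    boxes = []
    for y in range(0, max(height - patch, 0) + 1, stride):
        for x in range(0, max(width - patch, 0) + 1, stride):
            boxes.append((x, y, x + patch, y + patch))
    return boxes

boxes = sliding_windows(width=1024, height=512, patch=512, stride=256)
# 3 overlapping 512x512 windows across a 1024x512 stitched strip
```

With an overlapping stride, detections from adjacent patches must be de-duplicated (e.g., by non-maximum suppression in global coordinates) before counting, which parallels the paper's care to avoid replicated pod counting in overlapped image regions.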
{"title":"Grape vine (Vitis vinifera) yield prediction using optimized weighted ensemble machine learning approach","authors":"Nobin Chandra Paul, Pratapsingh S. Khapte, Navyasree Ponnaganti, Sushil S. Changan, Sangram B. Chavan, K. Ravi Kumar, Dhananjay D. Nangare, K. Sammi Reddy","doi":"10.1016/j.atech.2025.101151","DOIUrl":"10.1016/j.atech.2025.101151","url":null,"abstract":"<div><div>Grape vine (<em>Vitis vinifera</em>) plays a significant role in the agricultural industry, contributing substantially to the global economy through the production of table grapes, wine, and raisins. With increasing demand for high-quality grapes, both for domestic consumption and export, there is a pressing need to improve yield prediction models for better resource management. In this study, we propose an optimized weighted ensemble machine learning approach for predicting grape vine yield, integrating multiple morphological, physiological, and berry quality parameters. A diverse set of machine learning (ML) models, including Random Forest (RF), Artificial Neural Network (ANN), Extreme Gradient Boosting (XgBoost), Support Vector Regression (SVR), Gaussian Process Regression (GPR), Cubist, and Multivariate Adaptive Regression Splines (MARS), was employed to model grapevine yield. A Minimum Data Set (MDS) was selected using Principal Component Analysis (PCA), followed by data normalization to enhance model efficiency. Additionally, three ensemble approaches (Simple Averaging, Weighted Averaging, and a Ridge Regression-based ensemble) were implemented to improve prediction accuracy. The dataset was divided into training and testing subsets, with the hyperparameters of each model tuned using repeated k-fold cross-validation. The ensemble approach demonstrated superior performance, with improved accuracy in yield prediction compared to the individual base models. 
This study highlights the effectiveness of ensemble learning in precision viticulture, offering a reliable framework for yield prediction in grapevine cultivation. The proposed approach offers a practical framework for vineyard managers and growers to optimize resource allocation and improve decision-making.</div></div>","PeriodicalId":74813,"journal":{"name":"Smart agricultural technology","volume":"12 ","pages":"Article 101151"},"PeriodicalIF":6.3,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144570501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
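Of the three ensemble approaches named above, weighted averaging is the simplest to sketch. The study optimizes its weights; one common rule, used here purely as an assumption, sets each model's weight inversely proportional to its validation RMSE (all predictions below are invented):

```python
# Sketch of a weighted-average ensemble over base-model predictions.
# The inverse-RMSE weighting rule is one common choice, assumed here for
# illustration -- it is not necessarily the optimization the study used.

def rmse(pred, truth):
    return (sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth)) ** 0.5

def weighted_ensemble(preds, val_errors):
    """preds: list of per-model prediction lists; val_errors: their RMSEs."""
    inv = [1.0 / e for e in val_errors]
    w = [v / sum(inv) for v in inv]                  # weights sum to 1
    return [sum(wi * p[i] for wi, p in zip(w, preds))
            for i in range(len(preds[0]))]

truth = [10.0, 12.0, 14.0]            # hypothetical yields, kg
rf    = [11.0, 12.5, 13.0]            # hypothetical base-model outputs
xgb   = [ 9.5, 12.0, 14.5]
blend = weighted_ensemble([rf, xgb], [rmse(rf, truth), rmse(xgb, truth)])
# the blend leans toward the lower-error model and beats both alone here
```

The Ridge Regression variant mentioned in the abstract instead learns the blending weights by regressing the truth on the stacked base-model predictions, with an L2 penalty to keep the weights stable.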
{"title":"YOLOv8-BS: An integrated method for identifying stationary and moving behaviors of cattle with a newly developed dataset","authors":"Md Ishtiaq Ahmed , Huiping Cao , Andrés Ricardo Perea , Mehmet Emin Bakir , Huiying Chen , Santiago A. Utsumi","doi":"10.1016/j.atech.2025.101153","DOIUrl":"10.1016/j.atech.2025.101153","url":null,"abstract":"<div><div>Enhanced identification of cattle behavior can significantly improve animal welfare, support preventive health management, and optimize daily operations. Advances in computer vision (CV) and deep learning have shown great potential to enhance the robustness and sophistication of modern animal monitoring systems. This study introduces YOLOv8-Background Subtraction (YOLOv8-BS), a novel approach combining the CV model YOLOv8, a background subtraction module from OpenCV, and a behavior-counting component to classify four key behaviors in free-roaming cattle: standing, feeding, resting (lying), and walking (moving). To train and evaluate the model, a new benchmark dataset of 92,592 labeled video frames, obtained from videos recorded from 11/2023 to 12/2023 with a balanced distribution of the targeted behaviors, was curated. While the YOLOv8 model excelled in identifying stationary postures, it faced significant challenges when detecting animal motion. Conversely, YOLOv8-BS, which applied OpenCV’s background subtraction model on top of YOLOv8, enhanced the detection of walking, with a 20 % increase in precision, a 13 % boost in recall and an 18 % improvement in F1 score compared to YOLOv8. YOLOv8-BS achieved 89 % precision and 88 % recall for ‘standing’, 100 % precision and 90 % recall for ‘resting’, 86 % precision and recall for ‘feeding’, and 74 % precision and 72 % recall for ‘walking’. 
Datasets curated for this study fill in the gaps of currently available datasets that primarily emphasize the detection of stationary behaviors of cattle in confined environments or one or a few specific behaviors within an individual video frame. This dataset is available online for research purposes.</div></div>","PeriodicalId":74813,"journal":{"name":"Smart agricultural technology","volume":"12 ","pages":"Article 101153"},"PeriodicalIF":6.3,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144569924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
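The abstract says YOLOv8-BS layers OpenCV's background subtraction over YOLOv8 to recover motion information that the detector alone misses. As a dependency-free sketch of that idea, simple frame differencing can gate a stationary label into 'walking'; the thresholds, function names, and tiny frames below are illustrative, not the paper's pipeline:

```python
# Sketch of motion gating on top of a detector's label. The paper uses an
# OpenCV background-subtraction model; plain frame differencing stands in
# for it here. Frames are nested lists of grayscale values; thresholds and
# names are assumptions for illustration.

def motion_fraction(prev, curr, thresh=10):
    """Fraction of pixels whose intensity changed by more than `thresh`."""
    flat_p = [v for row in prev for v in row]
    flat_c = [v for row in curr for v in row]
    moved = sum(abs(a - b) > thresh for a, b in zip(flat_p, flat_c))
    return moved / len(flat_p)

def refine_label(detector_label, prev, curr, move_min=0.2):
    """Relabel a 'standing' prediction as 'walking' when the crop moves."""
    if detector_label == "standing" and motion_fraction(prev, curr) > move_min:
        return "walking"
    return detector_label

still = [[50, 50], [50, 50]]
moved = [[50, 90], [90, 50]]
a = refine_label("standing", still, moved)   # motion detected -> "walking"
b = refine_label("standing", still, still)   # no motion -> stays "standing"
```

This division of labor matches the reported results: the detector keeps its strength on stationary postures, while the motion signal lifts precision and recall specifically for walking.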
{"title":"Agricultural big data for predicting crop water demand","authors":"Zhongbo Liu , Guillermo Palacios-Navarro , Raquel Lacuesta","doi":"10.1016/j.atech.2025.101155","DOIUrl":"10.1016/j.atech.2025.101155","url":null,"abstract":"<div><div>Agricultural Internet of Things (IoT) big data technology plays an increasingly important role in agricultural production. This study targets the prediction of water demand for crops grown in 25–35 degree tomato greenhouses in Ningxia, China. A corresponding agricultural IoT big data system was designed in detail, building the required big data through real-time dynamic monitoring of the environmental factors of greenhouse crop growth. The correlations between greenhouse crop water demand, the growing environment, and the crop growth stage were then explored, and the K-MEANS, KNN, and Random Forest algorithms were applied to mine the collected data and predict the water demand of the greenhouse tomato crops. The results show that the approach effectively predicts the water demand of crops in this type of greenhouse and provides a reference for predicting the water demand of other crops in similar greenhouses in the region, for water-saving planning, for the rational use of water resources, and for developing scientific irrigation schedules for agricultural greenhouses.</div></div>","PeriodicalId":74813,"journal":{"name":"Smart agricultural technology","volume":"12 ","pages":"Article 101155"},"PeriodicalIF":6.3,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144535515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}