{"title":"Construction of Q&A methods based on knowledge graphs and large language models-improving the accuracy of landscape pest and disease Q&A","authors":"Zhixin Gu , Ting Long , Shuairan Wang , Xiaowei Shang , Weizheng Shen , Xiaoli Wei , Kaihong Xu","doi":"10.1016/j.atech.2025.101094","DOIUrl":"10.1016/j.atech.2025.101094","url":null,"abstract":"<div><div>With the development of urban landscaping, the problem of garden diseases and pests is becoming increasingly severe. Large language models have garnered significant attention for their ability to understand user intent and provide answers, and the introduction of knowledge graphs has given them a high-quality knowledge base. This study combines knowledge graphs (KGs), large language models (LLMs), and other technologies to design an intelligent question-answering (Q&A) model for garden pests and diseases. The main work is as follows:</div><div>Build a knowledge graph for garden diseases and pests by collecting high-quality data through web crawling and literature analysis, identify key entities and relationships to construct a conceptual pattern layer, and apply the ERNIE-BiLSTM-CRF model to extract knowledge from unstructured data. Experiments show that the accuracy, recall, and F1 value of the proposed knowledge extraction model all exceed 92%, outperforming other models.</div><div>Propose a Q&A method that integrates the garden pest and disease KG with the ERNIE-Bot-turbo model. The knowledge is vectorized and the most relevant entries are retrieved by similarity matching, combined with the question to form a prompt, and input into the language model to generate a natural language answer. Experiments comparing our method with ERNIE-Bot-turbo and ChatGLM-6B showed that it performs well on simple, moderate, and complex problems and avoids misleading answers to irrelevant questions. It outperforms both models in accuracy, achieving a 90% accuracy rate on simple questions.</div></div>","PeriodicalId":74813,"journal":{"name":"Smart agricultural technology","volume":"12 ","pages":"Article 101094"},"PeriodicalIF":6.3,"publicationDate":"2025-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144481627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
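The retrieve-then-prompt step this abstract describes (vectorize knowledge, match by similarity, assemble a prompt for the LLM) can be sketched as follows; the 4-d vectors and snippet texts are toy stand-ins for a real embedding model and the garden-pest KG:

```python
import numpy as np

def retrieve_top_k(question_vec, kb_vecs, kb_texts, k=2):
    """Return the k knowledge snippets whose embeddings are most
    cosine-similar to the question embedding."""
    q = question_vec / np.linalg.norm(question_vec)
    kb = kb_vecs / np.linalg.norm(kb_vecs, axis=1, keepdims=True)
    sims = kb @ q
    top = np.argsort(sims)[::-1][:k]
    return [kb_texts[i] for i in top]

def build_prompt(question, snippets):
    """Assemble the retrieved knowledge and the question into one prompt."""
    context = "\n".join(f"- {s}" for s in snippets)
    return f"Known facts:\n{context}\nQuestion: {question}\nAnswer:"

# Toy 4-d "embeddings" standing in for a real text encoder.
kb_texts = ["Aphids excrete honeydew.",
            "Powdery mildew thrives in humidity.",
            "Scale insects attack bark."]
kb_vecs = np.array([[1.0, 0.1, 0.0, 0.0],
                    [0.0, 1.0, 0.2, 0.0],
                    [0.1, 0.0, 1.0, 0.3]])
q_vec = np.array([0.9, 0.2, 0.0, 0.1])  # closest to the aphid entry
snippets = retrieve_top_k(q_vec, kb_vecs, kb_texts, k=1)
prompt = build_prompt("What pest leaves honeydew?", snippets)
```

The prompt string would then be sent to the generator (ERNIE-Bot-turbo in the paper) in place of the bare question.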
{"title":"Optimized ensemble learning for non-destructive avocado ripeness classification","authors":"Panudech Tipauksorn , Prasert Luekhong , Minoru Okada , Jutturit Thongpron , Chokemongkol Nadee , Krisda Yingkayun","doi":"10.1016/j.atech.2025.101114","DOIUrl":"10.1016/j.atech.2025.101114","url":null,"abstract":"<div><div>Classifying avocado ripeness accurately is crucial for enhancing post-harvest management and minimizing waste in agricultural supply chains. This study focuses on creating a strong ensemble classification model using spectral data from 120 kilogrammes of Buccaneer avocados obtained from the Royal Project in Chiang Mai, Thailand. We analyzed the avocados with near-infrared (NIR) spectroscopy at 18 wavelengths. Five machine learning models (Random Forest, Decision Tree, XGBoost, Gradient Boosting, and Gaussian Mixture Model) were trained separately and then merged into an ensemble. Four algorithms were used to optimize the model weight distribution: Bayesian Optimisation, Differential Evolution, Particle Swarm Optimisation, and Grid Search. We assessed performance through accuracy, precision, recall, F1-score, confusion matrices, and ROC curves. Grid Search achieved the best classification performance, reaching an accuracy of 82.5% and an F1-score of 85.3%, highlighting the benefits of weight-optimized ensemble learning over single classifiers. This study offers a scalable and transparent method for non-destructive ripeness detection. Despite limitations such as overfitting and reliance on spectral data quality, the findings support future real-time deployment in agriculture.</div></div>","PeriodicalId":74813,"journal":{"name":"Smart agricultural technology","volume":"12 ","pages":"Article 101114"},"PeriodicalIF":6.3,"publicationDate":"2025-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144480731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
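Weight-optimized soft voting via grid search, the winning configuration in this abstract, can be sketched like this; the two toy models, their class probabilities, and the 0.25 grid step are invented for illustration:

```python
import itertools
import numpy as np

def grid_search_weights(model_probs, y_true, step=0.25):
    """Exhaustively try weight vectors that sum to 1 on a coarse grid and
    keep the one maximizing validation accuracy of the weighted soft-vote
    ensemble.  model_probs has shape (n_models, n_samples, n_classes)."""
    n_models = model_probs.shape[0]
    ticks = np.arange(0.0, 1.0 + 1e-9, step)
    best_w, best_acc = None, -1.0
    for w in itertools.product(ticks, repeat=n_models):
        if abs(sum(w) - 1.0) > 1e-9:
            continue  # only convex combinations
        blended = np.tensordot(np.array(w), model_probs, axes=1)
        acc = np.mean(blended.argmax(axis=1) == y_true)
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc

# Two toy "models" scoring 4 samples over 3 ripeness classes; each alone
# misclassifies one sample, but a weighted blend gets all four right.
probs = np.array([
    [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.2, 0.2, 0.6], [0.5, 0.4, 0.1]],
    [[0.3, 0.4, 0.3], [0.2, 0.6, 0.2], [0.1, 0.1, 0.8], [0.1, 0.8, 0.1]],
])
y = np.array([0, 1, 2, 1])
w, acc = grid_search_weights(probs, y)
```

A finer `step` trades runtime for resolution; the paper's other optimizers (Bayesian, DE, PSO) search the same weight simplex non-exhaustively.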
{"title":"Optimizing peanut vine-inversion operations via intelligent control and semantic segmentation","authors":"Haiyang Shen , Man Gu , Hongguang Yang , Jie Ling , Lili Shi , Feng Wu , Fengwei Gu , Jiazhuang Tan , Zhichao Hu","doi":"10.1016/j.atech.2025.101124","DOIUrl":"10.1016/j.atech.2025.101124","url":null,"abstract":"<div><div>In response to the unstable inversion performance and the lack of intelligent regulation methods in current mechanized peanut harvesting processes, this paper proposes an optimization method that integrates semantic segmentation with intelligent control. First, a field data acquisition system is designed and constructed using a K230 vision module to capture peanut inversion images. A modified DeepLabV3+ semantic segmentation algorithm, enhanced by the incorporation of a Channel Transformer mechanism, is then employed to perform real‑time segmentation of these images, accurately identifying regions corresponding to peanut vines and pods. Experimental results of the segmentation module demonstrate an overall accuracy of 93.28 %, an mIoU of 76.11 %, a recall of 83.08 %, and a precision of 87.65 %, with an average processing time of 0.020327 s (approximately 49.23 FPS). Second, an intelligent peanut inversion control system based on fuzzy control theory is developed. In this system, the inversion rate derived from the semantic segmentation is used as a feedback signal to dynamically adjust the speeds of the conveyor belt and inversion roller in real time, thereby achieving closed‑loop control of the inversion process. Field tests show that the peanut inversion control system has an average response time of 3.18 s, consistently maintains an inversion rate above 70 %, and achieves an average inversion stability of 93.12 %. This significantly enhances both the quality and efficiency of the inversion operation. This study provides an efficient and reliable intelligent regulation solution for peanut inversion operations and holds significant implications for advancing the intelligence level of peanut harvesting machinery.</div></div>","PeriodicalId":74813,"journal":{"name":"Smart agricultural technology","volume":"12 ","pages":"Article 101124"},"PeriodicalIF":6.3,"publicationDate":"2025-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144571557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
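The closed-loop idea in this abstract (a segmentation-derived inversion rate fed back to belt and roller speeds) can be illustrated with a minimal fuzzy-style regulator; the memberships, thresholds, and speed deltas below are invented for the sketch and are not the paper's rule base:

```python
def fuzzy_speed_adjust(inversion_rate, target=0.85):
    """Map the inversion-rate error into 'low'/'ok'/'high' memberships
    and defuzzify into speed corrections (rpm) for the inversion roller
    and conveyor belt.  All constants are illustrative assumptions."""
    err = target - inversion_rate  # positive -> too few pods inverted
    # Triangular memberships saturating at |err| = 0.15.
    low = max(0.0, min(1.0, err / 0.15))    # rate too low
    high = max(0.0, min(1.0, -err / 0.15))  # rate too high
    ok = max(0.0, 1.0 - low - high)
    # Weighted average of per-rule speed deltas (defuzzification).
    delta_roller = low * 20.0 + ok * 0.0 + high * -20.0
    delta_belt = low * -10.0 + ok * 0.0 + high * 10.0
    return delta_roller, delta_belt

# Rate well below target: speed up the roller, slow the belt.
dr, db = fuzzy_speed_adjust(0.70)
```

In the paper's system the `inversion_rate` input would come from each segmented frame, closing the loop at the camera's frame rate.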
{"title":"A lightweight model for automatic pig counting in intensive piggeries using a green inspection robot and image segmentation method","authors":"Yizhi Luo , Chen Yang , Enli Lv , Aqing Yang , Fanming Meng , Haowen Luo","doi":"10.1016/j.atech.2025.101115","DOIUrl":"10.1016/j.atech.2025.101115","url":null,"abstract":"<div><div>To address the high computational resource consumption of traditional pig instance segmentation models, which impedes their deployment on resource-constrained edge devices, this paper proposes an improved, lightweight instance segmentation and counting method based on the YOLOv8n-seg model. Specifically, the C2f module is replaced with the Ghost module to reduce the model’s computational complexity. Additionally, a spatial group-enhanced attention mechanism is introduced in the neck network to enhance the model's feature fusion ability in the presence of pig occlusion and overlap. In the head network, a lightweight shared detail-enhanced convolution detection head is employed, which reduces computational load and parameter count through shared convolutions while capturing the intricate details of pigs from multiple angles via the detail-enhanced convolution module. Experimental results show that the improved model achieves an average precision of 95.7 % with memory usage, floating-point operations (FLOPs), and frames per second (FPS) at 1.2 MB, 7 × 10^9, and 217.86, respectively. Compared with state-of-the-art models such as DeepLabV3+, HRNet, PSPNet, SegFormer, and UNet, the proposed model exhibits superior performance metrics. This research provides a lightweight solution for pig instance segmentation in farm environments.</div></div>","PeriodicalId":74813,"journal":{"name":"Smart agricultural technology","volume":"12 ","pages":"Article 101115"},"PeriodicalIF":6.3,"publicationDate":"2025-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144491284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semi-supervised anomaly detection from Chlorella vulgaris cultivations using hyperspectral imaging","authors":"Salli Pääkkönen , Ilkka Pölönen , Pauliina Salmi","doi":"10.1016/j.atech.2025.101121","DOIUrl":"10.1016/j.atech.2025.101121","url":null,"abstract":"<div><div>More advanced anomaly detection methods are needed to ensure efficient quality control of microalgae cultivations. This study aimed to determine whether non-invasively collected hyperspectral data can be used to indicate anomalies in <em>Chlorella vulgaris</em> cultivations. Three models of varying computational complexities were tested: isolation forest (iForest), one-class support vector machine (OC SVM), and neural network autoencoder. The models were trained in a semi-supervised way using 280 non-anomalous <em>Chlorella</em> spectra. The test data included cultures artificially contaminated with <em>Microcystis aeruginosa</em> (4 spectra), nitrogen-depleted cultures (24 spectra), and non-anomalous <em>Chlorella</em> cultivations (43 spectra). The OC SVM was the most sensitive in detecting anomalies (AUC = 0.87, 95 % CI [0.79, 0.95]; F1 = 0.91, 95 % CI [0.85, 0.98]), although the confidence intervals (CI) overlapped with the metrics of the other models. The model detected artificial contamination when the amount of <em>Microcystis</em> was around 1 % (biomass/biomass) in the cultivation, and nitrogen depletion after 3 days of nitrogen-free cultivation. The advantage of the semi-supervised training was that the models learned the normal <em>Chlorella</em> cultivations used as training data, and could thus classify unknown anomalies that deviated from the learned features. This may prove useful for detecting a wider range of anomalies, but further testing is required to assess whether other potential contaminants alter the imaged spectra in ways that differ from the normal. Non-invasive hyperspectral imaging together with the semi-supervised models provides a rapid indication method that could give microalgae producers up-to-date information on cultivation quality.</div></div>","PeriodicalId":74813,"journal":{"name":"Smart agricultural technology","volume":"12 ","pages":"Article 101121"},"PeriodicalIF":6.3,"publicationDate":"2025-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144491283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
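The semi-supervised setup described here (train only on normal spectra, flag deviations) can be sketched with a far simpler detector than iForest, OC SVM, or an autoencoder; the distance-to-mean model, synthetic spectra, and all numbers below are illustrative assumptions:

```python
import numpy as np

class NormalSpectrumDetector:
    """Minimal semi-supervised anomaly detector: learn only the 'normal'
    class (its mean spectrum), score new spectra by Euclidean distance to
    it, and flag anything beyond a quantile of the training distances.
    A toy stand-in for the paper's iForest / OC SVM / autoencoder."""

    def fit(self, normal_spectra, quantile=0.95):
        self.mean_ = normal_spectra.mean(axis=0)
        d = np.linalg.norm(normal_spectra - self.mean_, axis=1)
        self.threshold_ = np.quantile(d, quantile)
        return self

    def predict(self, spectra):
        d = np.linalg.norm(spectra - self.mean_, axis=1)
        return d > self.threshold_  # True -> anomaly

rng = np.random.default_rng(0)
normal = rng.normal(1.0, 0.05, size=(280, 18))  # 280 normal spectra, 18 bands
det = NormalSpectrumDetector().fit(normal)
shifted = normal[:4] + 0.5  # crude stand-in for a contaminated culture
flags = det.predict(shifted)
```

Because only normal cultivations are needed for training, unknown anomaly types can still be flagged, exactly the property the abstract highlights.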
{"title":"Application of computational fluid dynamics (CFD) for optimal temperature sensor placement in a greenhouse equipped with a pad-fan cooling (PFC) system","authors":"Alireza Kalbasinia , Mehrnoosh Jafari , Ramin Kouhikamali , Morteza Sadeghi , Ali Nikbakht , Amir Tayefi","doi":"10.1016/j.atech.2025.101109","DOIUrl":"10.1016/j.atech.2025.101109","url":null,"abstract":"<div><div>The greenhouse microclimate is influenced by several external parameters, including ambient temperature, relative humidity, solar radiation intensity, wind speed, and the type of cultivated crops. Attaining optimal environmental control within the greenhouse necessitates the precise placement of sensors to monitor these key parameters. However, sensor placement is frequently guided by the empirical knowledge and experience of greenhouse owners and designers rather than systematic methodologies. The primary objective of this study is to identify the optimal location for temperature sensor placement using Computational Fluid Dynamics (CFD) analysis. Air temperature, a key factor in plant growth and often regarded as one of the most significant, was the parameter analyzed in this study. A greenhouse with dimensions of 52.5 m × 21 m × 7.75 m was modeled in SolidWorks and subsequently imported into ANSYS Fluent for CFD simulations. A mesh independence analysis determined that a computational grid comprising 324,414 cells provided an appropriate balance between computational efficiency and solution accuracy. CFD analysis was conducted under two conditions: with and without an active cooling system. Temperature and airflow velocity data were collected at 15 discrete points positioned at a height of 1.5 m above the greenhouse floor for all simulated scenarios. A comparison between experimental measurements and CFD results demonstrated good agreement, with the mean absolute percentage error (MAPE) remaining below 5 % in all cases. In all simulated conditions, the maximum and minimum temperatures were recorded at the greenhouse roof and floor, respectively, with a maximum temperature difference exceeding 10 °C. The findings indicated that the temperature gradient was significantly greater when the cooling system was deactivated. The optimal sensor installation position was determined using the entropy-based method and K-means clustering. Data on the mean absolute temperature difference indicate that if only one temperature sensor is to be used in the greenhouse, the entropy method suggests position 4, near the pad, as the optimal installation location, whereas the K-means method recommends position 8, at the center of the greenhouse. The optimal sensor placements were established by combining standardized temperature and air velocity data (Z-index), with nodes 4 and 5 identified as ideal locations for the entropy and K-means methods, respectively.</div></div>","PeriodicalId":74813,"journal":{"name":"Smart agricultural technology","volume":"12 ","pages":"Article 101109"},"PeriodicalIF":6.3,"publicationDate":"2025-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144365776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
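An entropy-based ranking of candidate sensor positions can be sketched as follows; the synthetic temperature series, bin edges, and two candidate nodes are invented for illustration and do not reproduce the paper's full entropy-weight procedure:

```python
import numpy as np

def entropy_score(series, edges):
    """Shannon entropy (bits) of a temperature series discretized over
    fixed bin edges: a position whose readings spread across more bins
    carries more information about the greenhouse microclimate."""
    hist, _ = np.histogram(series, bins=edges)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
hours = np.linspace(0.0, 24.0, 96)
# A node near the pad sees the full cooling swing; a corner node is damped.
near_pad = 24 + 6 * np.sin(hours / 24 * 2 * np.pi) + rng.normal(0, 0.2, 96)
corner = 26 + 1 * np.sin(hours / 24 * 2 * np.pi) + rng.normal(0, 0.2, 96)
edges = np.linspace(18.0, 32.0, 9)  # shared 1.75 degC bins
scores = {"near_pad": entropy_score(near_pad, edges),
          "corner": entropy_score(corner, edges)}
best = max(scores, key=scores.get)
```

In the study the candidate series would be the CFD temperatures at the 15 monitoring points rather than synthetic sinusoids.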
{"title":"TransLIBS-CRS: Integrating LIBS with transfer learning for accurate cross-regional soil total nitrogen detection","authors":"Peng Lin , Shixiang Ma , Zhizheng Shi , Peiying Li , Leizi Jiao , Hongwu Tian , Zhen Xing , Chunjiang Zhao , Daming Dong","doi":"10.1016/j.atech.2025.101118","DOIUrl":"10.1016/j.atech.2025.101118","url":null,"abstract":"<div><div>Accurate detection of soil total nitrogen (TN) is crucial for enhancing crop growth and quality. However, soil variability across different regions limits the generalization ability of calibration models. To address this challenge, this study introduces a model named transfer-learning-assisted laser-induced breakdown spectroscopy for cross-regional soil analysis (TransLIBS-CRS), specifically designed to mitigate the low cross-domain prediction accuracy caused by variations in regional soil properties. By fine-tuning with a limited number of target domain samples, this method significantly improves the cross-domain applicability of LIBS data, alleviating the challenges associated with obtaining soil samples from diverse regions. In the task of predicting TN in Guangzhou using the Beijing dataset, the TransLIBS-CRS model achieved optimal performance, with <span><math><msubsup><mi>R</mi><mi>V</mi><mn>2</mn></msubsup></math></span> of 0.846 and <em>RMSE<sub>V</sub></em> of 0.814 g/kg. Further analysis through saliency maps and chemometric methods revealed that the carbon spectral lines at 193.0 nm and 247.8 nm play a key role in the quantitative detection of TN. Notably, these spectral features also demonstrated stable predictive contributions when transferred to the Guangzhou soil dataset. This approach offers a feasible solution for large-scale and efficient soil TN detection.</div></div>","PeriodicalId":74813,"journal":{"name":"Smart agricultural technology","volume":"12 ","pages":"Article 101118"},"PeriodicalIF":6.3,"publicationDate":"2025-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144470635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
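The warm-start intuition behind fine-tuning on a few target-domain samples can be illustrated with a linear model; this is not the TransLIBS-CRS architecture, and all data, dimensions, and hyperparameters below are synthetic assumptions:

```python
import numpy as np

def fit_gd(X, y, w0, lr=0.01, steps=200):
    """Plain gradient descent on mean-squared error, started from w0.
    Starting from source-domain weights instead of zeros is the
    warm-start idea behind fine-tuning."""
    w = w0.copy()
    for _ in range(steps):
        grad = 2.0 / len(y) * X.T @ (X @ w - y)
        w -= lr * grad
    return w

rng = np.random.default_rng(2)
# Source region (e.g. Beijing): plentiful labeled spectra.
X_src = rng.normal(size=(200, 3))
y_src = X_src @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.1, 200)
# Target region (e.g. Guangzhou): related but shifted relationship,
# and only a handful of labeled samples.
X_tgt = rng.normal(size=(12, 3))
y_tgt = X_tgt @ np.array([1.2, -1.6, 0.4]) + rng.normal(0, 0.1, 12)

w_src = fit_gd(X_src, y_src, np.zeros(3), steps=2000)  # source model
w_ft = fit_gd(X_tgt, y_tgt, w_src, steps=200)          # fine-tuned on target

err_src = np.mean((X_tgt @ w_src - y_tgt) ** 2)  # source model, no adaptation
err_ft = np.mean((X_tgt @ w_ft - y_tgt) ** 2)    # after fine-tuning
```

A few fine-tuning steps from the source weights reduce target-domain error, mirroring the cross-regional gain the abstract reports.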
{"title":"An enhanced machine vision system for smart poultry farms using deep learning","authors":"P. Natho , S. Boonying , P. Bonguleaum , N. Tantidontanet , L. Chamuthai","doi":"10.1016/j.atech.2025.101083","DOIUrl":"10.1016/j.atech.2025.101083","url":null,"abstract":"<div><div>This study confronts major issues in current poultry production such as disease spread, labor shortages, inefficient monitoring, and rising demand for greater animal welfare. To address these problems, a machine vision system using deep learning is proposed, in which the YOLOv11 algorithm is used to monitor and manage poultry automatically. The dataset of 330 chicken images was obtained from farms in Thailand under diverse environmental conditions and augmented to 1,716 images through rotation, brightness adjustment, and saturation adjustment. The YOLOv11 model was trained on the NVIDIA Jetson Orin Nano Developer Kit and evaluated in terms of precision, recall, and mean average precision (mAP), achieving 0.964, 0.938, and 0.963, respectively. Combining optical and thermal imaging enabled the system to cope with varying lighting and environmental conditions. Implementation results demonstrated real-time monitoring and tracking of multiple chickens, which assisted in disease prevention, feed efficiency, and overall farm management. This research contributes to smart farming by providing a fully automated, scalable, and highly accurate poultry monitoring system that promotes sustainability, operational efficiency, and improved animal welfare in modern poultry farming.</div></div>","PeriodicalId":74813,"journal":{"name":"Smart agricultural technology","volume":"12 ","pages":"Article 101083"},"PeriodicalIF":6.3,"publicationDate":"2025-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144470636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Smart agriculture using IoT for automated irrigation, water and energy efficiency","authors":"Subir Gupta , Subrata Chowdhury , Ramya Govindaraj , Kassian T.T. Amesho , Sumarlin Shangdiar , Timoteus Kadhila , Sioni Iikela","doi":"10.1016/j.atech.2025.101081","DOIUrl":"10.1016/j.atech.2025.101081","url":null,"abstract":"<div><div>This study presents an innovative smart agriculture system that integrates Internet of Things (IoT) technologies, predictive algorithms, and automated control mechanisms to optimize irrigation and enhance resource efficiency. The proposed solution leverages soil moisture, temperature, and humidity sensors connected to an Arduino-based microcontroller to automate irrigation based on real-time data. Farmers can remotely monitor and manage irrigation schedules through mobile devices, ensuring precision and convenience. The predictive algorithm combines historical and real-time data to forecast irrigation requirements, ensuring water is applied only when necessary. Field trials demonstrated a 30 % reduction in water usage compared to traditional irrigation methods, while maintaining optimal soil moisture levels. Additionally, the system significantly reduced labor and energy costs, contributing to operational efficiency. By addressing water scarcity and promoting sustainable resource utilization, this smart system has the potential to improve crop yields, reduce greenhouse gas emissions, and advance ecological farming practices. In regions with limited water resources, the implementation of this technology represents a significant step toward sustainable agriculture, showcasing immense potential for broader adoption.</div></div>","PeriodicalId":74813,"journal":{"name":"Smart agricultural technology","volume":"12 ","pages":"Article 101081"},"PeriodicalIF":6.3,"publicationDate":"2025-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144563508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
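The forecast-then-decide irrigation logic this abstract describes can be sketched as a simple trend extrapolation; the threshold, horizon, and moisture values are illustrative assumptions, not the system's calibration:

```python
def should_irrigate(moisture_now, moisture_history, threshold=30.0, horizon_h=6):
    """Decide whether to open the valve: extrapolate soil moisture (%) a
    few hours ahead from the recent drying trend and irrigate only if the
    forecast drops below the crop's threshold.  moisture_history holds
    recent hourly readings, oldest first."""
    if len(moisture_history) >= 2:
        # Average change per hourly reading over the recorded window.
        drying_rate = (moisture_history[-1] - moisture_history[0]) / (len(moisture_history) - 1)
    else:
        drying_rate = 0.0
    forecast = moisture_now + drying_rate * horizon_h
    return forecast < threshold

# Moisture fell from 40 % to 34 % over the last four hourly readings:
# the 6 h forecast dips below 30 %, so the controller irrigates now.
decision = should_irrigate(34.0, [40.0, 38.0, 36.0, 34.0])
```

Applying water only when the forecast crosses the threshold, rather than on a fixed schedule, is the mechanism behind the reported water savings.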
{"title":"Assessing ornamental tree maturity and spray requirements using depth sensing and LiDAR technologies","authors":"Aleena Rayamajhi , Guoyu Lu , Ernest William Tollner , Jean Williams-Woodward , Md Sultan Mahmud","doi":"10.1016/j.atech.2025.101120","DOIUrl":"10.1016/j.atech.2025.101120","url":null,"abstract":"<div><div>Effective assessment of tree maturity and agrochemical application requirements is important for optimizing resource use and sustainability in woody ornamental nurseries. This study utilized red, green, blue – depth (RGB-D) camera and light detection and ranging (LiDAR) sensor technologies to measure two key physiological parameters, trunk diameter and canopy volume, for maturity evaluation and precision spraying, respectively. Trunk diameter was calculated using a circle-fitting algorithm on point clouds at 0.15 m (6 inches) above ground, derived from RGB-D pairs segmented using the Fast Segment Anything Model (FastSAM). Canopy volume was estimated by applying a convex hull algorithm to point clouds processed through point cloud registration, region of interest (ROI) clipping, and denoising. Thirty-two trees were randomly selected in pairs from two plots (Plot-1 and Plot-2) with varying terrains for this experiment. The trunk diameter measurements in Plot-1 exhibited an average absolute error percentage of 0.23 %, with a root mean square error (RMSE) of 0.03 m and a mean absolute error (MAE) of 0.02 m, whereas Plot-2 showed an error percentage of 1.11 %, with an RMSE of 0.08 m and MAE of 0.07 m. The trunk diameter was further used for tree maturity analysis, revealing that Plot-1 had 10 mature trees while Plot-2 had only 5, indicating a more advanced growth stage in Plot-1. This classification was validated against manual assessments, showing 100 % agreement across all 32 experimental trees, confirming the accuracy of the RGB-D system in determining tree maturity. Similarly, results for the canopy volume of Plot-1 indicated an average absolute error percentage of 10.99 %, with RMSE and MAE values of 0.37 cubic meters and 0.33 cubic meters, respectively, while Plot-2 showed an error percentage of 13.01 %, with an RMSE of 0.27 cubic meters and MAE of 0.24 cubic meters. These results demonstrate the feasibility and accuracy of integrating LiDAR and RGB-D technologies for efficient nursery management, supporting maturity assessment and precision agrochemical application as part of sustainable practices in ornamental horticulture.</div></div>","PeriodicalId":74813,"journal":{"name":"Smart agricultural technology","volume":"12 ","pages":"Article 101120"},"PeriodicalIF":6.3,"publicationDate":"2025-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144481626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
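The trunk-diameter step names a circle-fitting algorithm on point clouds; one common choice is the Kåsa least-squares fit, sketched here on synthetic LiDAR-like points (the abstract does not specify which fitter was used):

```python
import numpy as np

def fit_circle(points):
    """Kåsa least-squares circle fit: solve the linear system
    x^2 + y^2 = 2*a*x + 2*b*y + c for the centre (a, b), then recover
    the radius from r^2 = c + a^2 + b^2."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, r

# Noisy arc of trunk-surface points at 0.15 m height: the sensor sees
# roughly half the trunk, true radius 0.10 m, centre at (2.0, 5.0).
rng = np.random.default_rng(3)
theta = rng.uniform(0, np.pi, 60)
pts = np.column_stack([0.10 * np.cos(theta) + 2.0,
                       0.10 * np.sin(theta) + 5.0])
pts += rng.normal(0, 0.002, pts.shape)  # ~2 mm ranging noise
cx, cy, r = fit_circle(pts)
diameter = 2 * r
```

The fit recovers the diameter to within a few millimetres even from a partial arc, which is why a horizontal point-cloud slice suffices for trunk measurement.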