{"title":"Technical study on the efficiency and models of weed control methods using unmanned ground vehicles: A review","authors":"Evans K. Wiafe, Kelvin Betitame, Billy G. Ram, Xin Sun","doi":"10.1016/j.aiia.2025.05.003","DOIUrl":"10.1016/j.aiia.2025.05.003","url":null,"abstract":"<div><div>As precision agriculture evolves, unmanned ground vehicles (UGVs) have become an essential tool for improving weed management techniques, offering automated and targeted methods that markedly reduce the reliance on manual labor and blanket herbicide applications. Several papers on UGV-based weed control methods have been published in recent years, yet there has been no systematic attempt to study these papers and discuss the weed control methods, the UGVs adopted and their key components, and their environmental and economic impacts. Therefore, the objective of this study was to present a systematic review of the efficiency and types of weed control methods deployed on UGVs, including mechanical weeding, targeted herbicide application, thermal/flaming weeding, and laser weeding, over the last two decades. For this purpose, a thorough literature review was conducted, analyzing 68 relevant articles on weed control methods for UGVs. The study found that research on using UGVs for mechanical weeding has been most dominant, followed by targeted or precision spraying/chemical weeding, with hybrid weeding systems quickly emerging. The effectiveness of UGVs for weed control hinges on the accuracy of their navigation and weed detection technologies, which are influenced heavily by environmental conditions, including lighting, weather, uneven terrain, and weed and crop density. Also, there is a shift from traditional machine learning (ML) algorithms to deep learning neural networks, including convolutional neural networks (CNNs) and recurrent neural networks (RNNs), for weed detection algorithm development, owing to their potential to work in complex environments. Finally, most UGVs have limited trial documentation or lack extensive trials under varied conditions, such as different soil types, crop fields, topography, field geometry, and annual weather conditions. This review paper serves as an in-depth update on UGVs in weed management for farmers, researchers, robotic technology industry players, and AI enthusiasts, helping to further foster collaborative efforts to develop new ideas and advance this revolutionary technique in modern agriculture.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 4","pages":"Pages 622-641"},"PeriodicalIF":8.2,"publicationDate":"2025-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144203295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Picking point localization method based on semantic reasoning for complex picking scenarios in vineyards","authors":"Xuemin Lin , Jinhai Wang , Jinshuan Wang , Huiling Wei , Mingyou Chen , Lufeng Luo","doi":"10.1016/j.aiia.2025.05.004","DOIUrl":"10.1016/j.aiia.2025.05.004","url":null,"abstract":"<div><div>In the complex orchard environment, precise picking point localization is crucial for the automation of fruit picking robots. However, existing methods are prone to positioning errors when dealing with complex scenarios such as short peduncles, partial occlusion, or complete misidentification, which can affect the actual work efficiency of the fruit picking robot. This study proposes an enhanced picking point localization method based on semantic reasoning for complex picking scenarios in vineyards. It introduces three novel modules: the semantic reasoning module (SRM), the ROI threshold adjustment strategy (RTAS), and the picking point location optimization module (PPOM). The SRM handles scenarios in which grape peduncles are obstructed by obstacles, partially misidentified, or completely misidentified. The RTAS addresses the issue of low and short peduncles during the picking process. Finally, the PPOM optimizes the final position of the picking point, allowing the robotic arm to perform the picking operation with greater flexibility. Experimental results show that SegFormer achieves an mIoU (mean Intersection over Union) of 84.54 %, with B_IoU and P_IoU reaching 73.90 % and 75.63 %, respectively. Additionally, the success rate of the improved fruit picking point localization algorithm reached 94.96 %, surpassing the baseline algorithm by 8.12 %. The algorithm's average processing time is 0.5428 ± 0.0063 s, meeting the practical requirements for real-time picking.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 4","pages":"Pages 744-756"},"PeriodicalIF":8.2,"publicationDate":"2025-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144263500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
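As a toy illustration of the final localization step the abstract describes — choosing a picking point on a segmented peduncle region — the sketch below picks the peduncle pixel nearest the region's centroid. This is not the paper's SRM/RTAS/PPOM pipeline; the binary-mask input and centroid heuristic are assumptions for illustration only.

```python
import numpy as np

def picking_point(peduncle_mask):
    """Toy localizer: return the peduncle pixel nearest the mask centroid,
    so the chosen point always lies on the peduncle even when the region
    is curved or partially occluded."""
    ys, xs = np.nonzero(peduncle_mask)
    if ys.size == 0:
        return None                      # nothing detected in this ROI
    cy, cx = ys.mean(), xs.mean()
    i = np.argmin((ys - cy) ** 2 + (xs - cx) ** 2)
    return int(ys[i]), int(xs[i])
```

Snapping to the nearest on-mask pixel matters because the raw centroid of a curved or fragmented peduncle can fall outside the peduncle entirely.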
{"title":"ADeepWeeD: An adaptive deep learning framework for weed species classification","authors":"Md Geaur Rahman , Md Anisur Rahman , Mohammad Zavid Parvez , Md Anwarul Kaium Patwary , Tofael Ahamed , David A. Fleming-Muñoz , Saad Aloteibi , Mohammad Ali Moni PhD","doi":"10.1016/j.aiia.2025.04.009","DOIUrl":"10.1016/j.aiia.2025.04.009","url":null,"abstract":"<div><div>Efficient weed management in agricultural fields is essential for attaining optimal crop yields and safeguarding global food security. Every year, farmers worldwide invest significant time, capital, and resources to combat yield losses of approximately USD 75.6 billion due to weed infestations. Deep Learning (DL) methodologies have recently been implemented to revolutionise agricultural practices, particularly in weed detection and classification. Existing DL-based weed classification techniques, including VGG16 and ResNet50, first construct a model by applying the algorithm to a training dataset comprising weed species, and then employ the model to identify the weed species seen during training. Given the dynamic nature of crop fields, we argue that existing methods may exhibit suboptimal performance due to two key issues: (i) the unavailability of all weed species at the outset, as new species emerge over time, resulting in a progressively expanding training dataset, and (ii) the constrained memory and computational capacity of the system used for model development, which hinders the retention of all weed species that appear over an extended duration. To address these issues, this paper introduces a novel DL-based framework called ADeepWeeD for weed classification that facilitates adaptive (i.e. incremental) learning, allowing it to handle new weed species by keeping track of historical information. ADeepWeeD is evaluated using two criteria, namely <span><math><msub><mi>F</mi><mn>1</mn></msub></math></span>-Score and classification accuracy, by comparing its performance against four non-incremental and two incremental state-of-the-art methods on three publicly available large datasets. Our experimental results demonstrate that ADeepWeeD outperforms the existing techniques used in this study. We believe that our model could be used to develop an automated system for weed identification. The code of the proposed method is available on GitHub: <span><span>https://github.com/grahman20/ADeepWeed</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 4","pages":"Pages 590-609"},"PeriodicalIF":8.2,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144178825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
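The adaptive (incremental) learning idea above — accepting new weed species over time while keeping only a bounded memory of past species — can be illustrated with a minimal nearest-class-mean classifier with capped per-class exemplar storage. This is a generic sketch of class-incremental learning, not the ADeepWeeD architecture; the memory cap and exemplar-selection rule are assumptions.

```python
import numpy as np

class IncrementalNCM:
    """Nearest-class-mean classifier with a bounded exemplar memory.

    New classes can be registered at any time; per-class storage is capped
    so memory does not grow without bound as species accumulate."""

    def __init__(self, memory_per_class=50):
        self.memory_per_class = memory_per_class
        self.exemplars = {}  # class label -> array of stored feature vectors

    def learn(self, X, y):
        # Fold new samples into the capped per-class exemplar sets.
        for label in np.unique(y):
            new = X[y == label]
            old = self.exemplars.get(label, np.empty((0, X.shape[1])))
            pool = np.vstack([old, new])
            # Keep the exemplars closest to the running class mean.
            mean = pool.mean(axis=0)
            order = np.argsort(np.linalg.norm(pool - mean, axis=1))
            self.exemplars[label] = pool[order[: self.memory_per_class]]

    def predict(self, X):
        labels = sorted(self.exemplars)
        means = np.stack([self.exemplars[c].mean(axis=0) for c in labels])
        dists = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
        return np.array([labels[i] for i in dists.argmin(axis=1)])
```

The key property mirrored here is that `learn` can be called again later with a species never seen before, without retraining on (or even retaining) the full history.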
{"title":"Multi-scale cross-modal feature fusion and cost-sensitive loss function for differential detection of occluded bagging pears in practical orchards","authors":"Shengli Yan , Wenhui Hou , Yuan Rao , Dan Jiang , Xiu Jin , Tan Wang , Yuwei Wang , Lu Liu , Tong Zhang , Arthur Genis","doi":"10.1016/j.aiia.2025.05.002","DOIUrl":"10.1016/j.aiia.2025.05.002","url":null,"abstract":"<div><div>In practical orchards, the challenges posed by fruit overlap and branch and leaf occlusion significantly impede the successful implementation of automated picking, particularly for bagging pears. To address this issue, this paper introduces the multi-scale cross-modal feature fusion and cost-sensitive classification loss function network (MCCNet), specifically designed to accurately detect bagging pears across various occlusion categories. The network adopts a dual-stream convolutional neural network as its backbone, enabling the parallel extraction of multi-modal features. Meanwhile, we propose a novel lightweight cross-modal feature fusion method that enhances features shared between modalities while extracting modality-specific features from the RGB and depth streams. The cross-modal method enhances the perceptual capabilities of the model by fusing complementary information from multimodal bagging pear image pairs. Furthermore, we optimize the classification loss function by transforming it into a cost-sensitive loss function, aiming to improve detection classification efficiency and reduce missed and false detections during the picking process. Experimental results on a bagging pear dataset demonstrate that our MCCNet achieves mAP0.5 and mAP0.5:0.95 values of 97.3 % and 80.3 %, respectively, representing improvements of 3.6 % and 6.3 % over the classical YOLOv10m model. When benchmarked against several state-of-the-art detection models, our MCCNet has only 19.5 million parameters while maintaining superior inference speed.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 4","pages":"Pages 573-589"},"PeriodicalIF":8.2,"publicationDate":"2025-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144139789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
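A cost-sensitive classification loss of the general kind described above can be sketched as cross-entropy scaled by per-class misclassification costs. The exact weighting MCCNet uses is not specified in the abstract, so this is an illustrative form only, with the cost vector as an assumption.

```python
import numpy as np

def cost_sensitive_ce(logits, targets, class_costs):
    """Cross-entropy in which each sample's loss is scaled by the cost of
    its true class, pushing the model to avoid misses on expensive
    (e.g. heavily occluded) categories.

    logits: (N, C) raw scores; targets: (N,) integer labels;
    class_costs: (C,) positive misclassification costs."""
    z = logits - logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    nll = -log_probs[np.arange(len(targets)), targets]
    return float((class_costs[targets] * nll).mean())
```

With a uniform cost vector this reduces to ordinary mean cross-entropy; raising the cost of one class raises the penalty for errors on that class only.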
{"title":"Accurate Orah fruit detection method using lightweight improved YOLOv8n model verified by optimized deployment on edge device","authors":"Hongwei Li , Yongmei Mo , Jiasheng Chen , Jiqing Chen , Jiabao Li","doi":"10.1016/j.aiia.2025.05.001","DOIUrl":"10.1016/j.aiia.2025.05.001","url":null,"abstract":"<div><div>Replacing the personal computer terminal with an edge device is recognized as a portable and cost-effective solution for miniaturizing equipment and achieving high flexibility in robotic fruit harvesting at in-field scale. This study proposes a lightweight improved You Only Look Once version 8n (YOLOv8n) model for detecting Orah fruits and deploys this model on an edge device. First, the model size was reduced while maintaining detection accuracy via the introduction of ADown modules. Subsequently, a Concentrated-Comprehensive Dual Convolution (C3_DualConv) module combining dual convolutional bottlenecks was proposed to enhance the model's capability to capture features of Orah fruits obscured by branches and leaves; this further reduced the model size. Additionally, a Bidirectional Feature Pyramid Network (BiFPN) that includes a pyramid level 2 high-resolution layer was employed for more efficient multi-scale feature fusion. Moreover, three Coordinate Attention (CA) modules were added to improve the recognition and capture of Orah fruit features. Finally, a more focused minimum point distance intersection-over-union loss was adopted to boost the detection of densely occluded Orah fruits. Experiments demonstrated that the improved YOLOv8n model accurately detected Orah fruits in complex orchard environments, achieving a precision of 97.7 %, an Average Precision at an IoU threshold of 0.5 (mAP@0.5) of 98.8 %, and an F1 score of 96.69 %, while maintaining a compact model size of 4.1 MB, under a Windows-based system terminal. The proposed model was deployed on an Nvidia Jetson Orin Nano using the TensorRT Python Application Programming Interface (API); the average inference speed exceeded 30 fps, indicating real-time detection ability. This study can provide technical support for Orah fruit robotic harvesting on edge devices.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 4","pages":"Pages 707-723"},"PeriodicalIF":8.2,"publicationDate":"2025-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144254405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
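Independent of any particular runtime, a throughput figure like the reported 30 fps can be measured with a small harness such as the one below. The `infer` callable and warm-up count are assumptions for illustration, not part of the paper's TensorRT deployment.

```python
import time

def average_fps(infer, frames, warmup=5):
    """Average sustained inference throughput in frames per second.

    A few warm-up calls are made first so one-time costs (lazy
    initialization, caches) are excluded from the timed run."""
    for f in frames[:warmup]:
        infer(f)
    t0 = time.perf_counter()
    for f in frames:
        infer(f)
    return len(frames) / (time.perf_counter() - t0)
```

Warm-up matters on edge accelerators, where the first few inferences are typically much slower than steady state.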
{"title":"Decoding canola and oat crop health and productivity under drought and heat stress using bioelectrical signals and machine learning","authors":"Guoqi Wen, Bao-Luo Ma","doi":"10.1016/j.aiia.2025.04.006","DOIUrl":"10.1016/j.aiia.2025.04.006","url":null,"abstract":"<div><div>Abiotic stresses, such as heat and drought, often reduce crop yields by harming plant health. Plants have evolved complex signaling networks to mitigate environmental impacts, making in-situ biosignal monitoring a promising tool for assessing plant health in real time. In this study, needle-like sensors were used to measure electrical potential changes in oat and canola plants under heat and drought stress conditions. Signals were recorded over a 30-min period and segmented into time intervals of 1, 5, 10, 20, and 30 min. Machine learning algorithms, including Random Forest, K-Nearest Neighbors, and Support Vector Machines, were applied to classify stress conditions and estimate biomass from 14 extracted bioelectrical features, such as signal amplitude and entropy. Results showed that heat stress primarily altered signal patterns, whereas drought stress affected signal intensity, possibly due to a reduction in the flow rate of charged ions. The Random Forest classifier successfully identified over 85 % of stressed crops within 30 min of signal recording. These signals also explained 58–95 % of the variation in plant aboveground and root biomass, depending on stress intensity and crop genotype. This study demonstrates the potential of bioelectrical sensing as a rapid and efficient tool for stress detection and biomass estimation. Future research should explore using biosensors to capture genetic variability in responses to abiotic stresses and combine them with remote sensing and other emerging precision agriculture technologies.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 4","pages":"Pages 696-706"},"PeriodicalIF":8.2,"publicationDate":"2025-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144254491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
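The pipeline outlined above — hand-crafted descriptors such as amplitude and entropy fed to a Random Forest — can be sketched minimally as follows, on simulated traces. The study's actual 14 features and recording protocol are not reproduced; the four descriptors and the simulated signals here are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def signal_features(x, bins=16):
    """A few hand-crafted descriptors of a bioelectrical trace:
    amplitude, spread, roughness, and Shannon entropy."""
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return np.array([
        x.max() - x.min(),          # peak-to-peak amplitude
        x.std(),                    # signal spread
        np.abs(np.diff(x)).mean(),  # mean absolute first difference
        -(p * np.log2(p)).sum(),    # Shannon entropy of the value histogram
    ])

# Simulated control vs. stressed traces: stress damps the oscillation
# and adds noise, mimicking altered pattern and intensity.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
ctrl = [np.sin(2 * np.pi * 5 * t) + rng.normal(0, 0.05, t.size) for _ in range(40)]
stress = [0.3 * np.sin(2 * np.pi * 5 * t) + rng.normal(0, 0.3, t.size) for _ in range(40)]
X = np.array([signal_features(s) for s in ctrl + stress])
y = np.array([0] * 40 + [1] * 40)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X[::2], y[::2])
accuracy = clf.score(X[1::2], y[1::2])   # held-out even/odd split
```

On such clearly separated simulated classes the held-out accuracy is high; real field traces are far noisier, which is why the study's detection rate within 30 min (over 85 %) is the more meaningful benchmark.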
{"title":"Enhancing maize LAI estimation accuracy using unmanned aerial vehicle remote sensing and deep learning techniques","authors":"Zhen Chen , Weiguang Zhai , Qian Cheng","doi":"10.1016/j.aiia.2025.04.008","DOIUrl":"10.1016/j.aiia.2025.04.008","url":null,"abstract":"<div><div>The leaf area index (LAI) is crucial for precision agriculture management. Unmanned aerial vehicle (UAV) remote sensing technology has been widely applied for LAI estimation. Although spectral features are widely used for LAI estimation, their performance is often constrained in complex agricultural scenarios due to interference from soil background reflectance, variations in lighting conditions, and vegetation heterogeneity. Therefore, this study evaluates the potential of multi-source feature fusion and convolutional neural networks (CNN) in estimating maize LAI. To achieve this goal, field experiments on maize were conducted in Xinxiang City and Xuzhou City, China. Subsequently, spectral features, texture features, and crop height were extracted from the multi-spectral remote sensing data to construct a multi-source feature dataset. Then, maize LAI estimation models were developed using multiple linear regression, gradient boosting decision tree, and CNN. The results showed that: (1) Multi-source feature fusion, which integrates spectral features, texture features, and crop height, demonstrated the highest accuracy in LAI estimation, with the R<sup>2</sup> ranging from 0.70 to 0.83, the RMSE ranging from 0.44 to 0.60, and the rRMSE ranging from 10.79 % to 14.57 %. In addition, multi-source feature fusion demonstrates strong adaptability across different growth environments. In Xinxiang, the R<sup>2</sup> ranges from 0.76 to 0.88, the RMSE ranges from 0.35 to 0.50, and the rRMSE ranges from 8.73 % to 12.40 %. In Xuzhou, the R<sup>2</sup> ranges from 0.60 to 0.83, the RMSE ranges from 0.46 to 0.71, and the rRMSE ranges from 10.96 % to 17.11 %. (2) The CNN model outperformed traditional machine learning algorithms in most cases. Moreover, the combination of spectral features, texture features, and crop height using the CNN model achieved the highest accuracy in LAI estimation, with the R<sup>2</sup> ranging from 0.83 to 0.88, the RMSE ranging from 0.35 to 0.46, and the rRMSE ranging from 8.73 % to 10.96 %.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 3","pages":"Pages 482-495"},"PeriodicalIF":8.2,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143898806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
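The fusion-and-evaluation loop above can be sketched as feature concatenation plus one of the paper's model families (a gradient boosting regressor), reporting the same three metrics, where rRMSE is taken as RMSE divided by the mean observed LAI. The feature arrays here are placeholders, not the study's extracted features.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score, mean_squared_error

def evaluate_lai_model(spectral, texture, height, lai, train_frac=0.7):
    """Fuse the three feature groups by concatenation, fit a gradient
    boosting regressor, and report R2, RMSE, and rRMSE (%) on a
    held-out split."""
    X = np.hstack([spectral, texture, height])
    n_train = int(train_frac * len(lai))
    model = GradientBoostingRegressor(random_state=0)
    model.fit(X[:n_train], lai[:n_train])
    pred = model.predict(X[n_train:])
    rmse = mean_squared_error(lai[n_train:], pred) ** 0.5
    return {
        "R2": r2_score(lai[n_train:], pred),
        "RMSE": rmse,
        "rRMSE_pct": 100.0 * rmse / lai[n_train:].mean(),
    }
```

Concatenation is the simplest fusion; the CNN variant in the study learns the combination instead, which is where its reported accuracy gain comes from.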
{"title":"Mapping of soil sampling sites using terrain and hydrological attributes","authors":"Tan-Hanh Pham , Kristopher Osterloh , Kim-Doang Nguyen","doi":"10.1016/j.aiia.2025.04.007","DOIUrl":"10.1016/j.aiia.2025.04.007","url":null,"abstract":"<div><div>Efficient soil sampling is essential for effective soil management and research on soil health. Traditional site selection methods are labor-intensive and fail to capture soil variability comprehensively. This study introduces a deep learning-based tool that automates soil sampling site selection using spectral images. The proposed framework consists of two key components: an extractor and a predictor. The extractor, based on a convolutional neural network (CNN), derives features from spectral images, while the predictor employs self-attention mechanisms to assess feature importance and generate prediction maps. The model is designed to process multiple spectral images and address the class imbalance in soil segmentation.</div><div>The model was trained on a soil dataset from 20 fields in eastern South Dakota, collected via drone-mounted LiDAR with high-precision GPS. Evaluation on a test set achieved a mean intersection over union (mIoU) of 69.46 % and a mean Dice coefficient (mDc) of 80.35 %, demonstrating strong segmentation performance. The results highlight the model's effectiveness in automating soil sampling site selection, providing an advanced tool for producers and soil scientists. Compared to existing state-of-the-art methods, the proposed approach improves accuracy and efficiency, optimizing soil sampling processes and enhancing soil research.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 3","pages":"Pages 470-481"},"PeriodicalIF":8.2,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143887644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
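The two reported segmentation metrics (mIoU and mean Dice) can be computed from label maps as follows. This is the standard per-class formulation averaged over classes present; the paper's exact averaging conventions are assumed to match.

```python
import numpy as np

def miou_and_dice(pred, target, num_classes):
    """Per-class intersection-over-union and Dice coefficient for a
    segmentation map, averaged over the classes present."""
    ious, dices = [], []
    for c in range(num_classes):
        p, t = pred == c, target == c
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        if union == 0:           # class absent from both maps: skip it
            continue
        ious.append(inter / union)
        dices.append(2 * inter / (p.sum() + t.sum()))
    return float(np.mean(ious)), float(np.mean(dices))
```

Dice is always at least as large as IoU for the same prediction (Dice = 2·IoU/(1+IoU)), which is why the paper's mDc (80.35 %) exceeds its mIoU (69.46 %).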
{"title":"Multimodal behavior recognition for dairy cow digital twin construction under incomplete modalities: A modality mapping completion network approach","authors":"Yi Zhang , Yu Zhang , Meng Gao , Xinjie Wang , Baisheng Dai , Weizheng Shen","doi":"10.1016/j.aiia.2025.04.005","DOIUrl":"10.1016/j.aiia.2025.04.005","url":null,"abstract":"<div><div>The recognition of dairy cow behavior is essential for enhancing health management, reproductive efficiency, production performance, and animal welfare. This paper addresses the challenge of modality loss in multimodal dairy cow behavior recognition algorithms, which can be caused by sensor or video signal disturbances arising from interference, harsh environmental conditions, extreme weather, network fluctuations, and other complexities inherent in farm environments. This study introduces a modality mapping completion network that reconstructs incomplete sensor and video data to improve multimodal dairy cow behavior recognition under conditions of modality loss. After mapping the incomplete sensor or video data, the method applies a multimodal behavior recognition algorithm to identify five specific behaviors: drinking, feeding, lying, standing, and walking. The results indicate that, under various comprehensive missing coefficients (λ), the method achieves an average accuracy of 97.87 % ± 0.15 %, an average precision of 95.19 % ± 0.4 %, and an average F1 score of 94.685 % ± 0.375 %, with an overall accuracy of 94.67 % ± 0.37 %. This approach enhances the robustness and applicability of multimodal cow behavior recognition in situations of modality loss, resolving practical issues in the development of digital twins for cow behavior and providing comprehensive support for the intelligent and precise management of farms.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 3","pages":"Pages 459-469"},"PeriodicalIF":8.2,"publicationDate":"2025-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143873553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
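The modality-mapping idea — learn a map between modalities on samples where both are present, then complete whichever modality is missing — can be sketched in its minimal linear form. The study's network is a learned nonlinear mapping; the least-squares map below is only an illustration of the principle.

```python
import numpy as np

def fit_mapping(src, dst):
    """Least-squares linear map (with bias) from one modality's feature
    space to the other, learned on samples where both are present."""
    A = np.hstack([src, np.ones((len(src), 1))])    # append a bias column
    W, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return W

def complete(src, W):
    """Estimate the missing modality's features from the observed one."""
    return np.hstack([src, np.ones((len(src), 1))]) @ W
```

The completed features can then be fed to the usual multimodal recognizer in place of the lost stream, which is exactly the role the completion network plays upstream of behavior classification.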
{"title":"Joint optimization of AI large and small models for surface temperature and emissivity retrieval using knowledge distillation","authors":"Wang Dai , Kebiao Mao , Zhonghua Guo , Zhihao Qin , Jiancheng Shi , Sayed M. Bateni , Liurui Xiao","doi":"10.1016/j.aiia.2025.03.009","DOIUrl":"10.1016/j.aiia.2025.03.009","url":null,"abstract":"<div><div>The rapid advancement of artificial intelligence in domains such as natural language processing has catalyzed AI research across various fields. This study introduces a novel strategy, AutoKeras-Knowledge Distillation (AK-KD), which integrates knowledge distillation for the joint optimization of large and small models in the retrieval of surface temperature and emissivity using thermal infrared remote sensing. The approach addresses the challenge of limited accuracy in surface temperature retrieval by employing a high-performance large model developed through AutoKeras as the teacher, which then enhances a less accurate small model through knowledge distillation. The resulting student model is interactively integrated with the large model to further improve specificity and generalization. Theoretical derivations and practical applications validate that the AK-KD strategy significantly enhances the accuracy of temperature and emissivity retrieval. For instance, a large model trained with simulated ASTER data achieved a Pearson Correlation Coefficient (PCC) of 0.999 and a Mean Absolute Error (MAE) of 0.348 K in surface temperature retrieval. In practical applications, this model demonstrated a PCC of 0.967 and an MAE of 0.685 K. Although the large model exhibits high average accuracy, its precision in complex terrains is comparatively lower. To ameliorate this, the large model, serving as a teacher, enhances the small model's local accuracy. Specifically, in surface temperature retrieval, the small model's PCC improved from an average of 0.978 to 0.979, and the MAE decreased from 1.065 K to 0.724 K. In emissivity retrieval, the PCC rose from an average of 0.827 to 0.898, and the MAE fell from 0.0076 to 0.0054. This research not only provides robust technological support for the further development of thermal infrared remote sensing in temperature and emissivity retrieval but also offers important references and key technological insights for constructing universal models for other geophysical parameter retrievals.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 3","pages":"Pages 407-425"},"PeriodicalIF":8.2,"publicationDate":"2025-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143837982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
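A minimal regression analogue of the distillation step — a small student fitted to a blend of ground-truth labels and teacher predictions — is sketched below. AK-KD's actual loss and architectures are not given in the abstract, so the blend weight `alpha`, the linear student, and the plain gradient loop are all assumptions for illustration.

```python
import numpy as np

def distill_regressor(X, y_true, teacher_pred, alpha=0.5, lr=0.05, steps=2000):
    """Fit a linear student by gradient descent on the blended objective
    alpha * MSE(student, y_true) + (1 - alpha) * MSE(student, teacher),
    with constant factors folded into the learning rate. The teacher's
    predictions act as a dense, smoothed supervision signal."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(steps):
        pred = X @ w + b
        # Gradient of the blended squared error with respect to pred.
        grad = alpha * (pred - y_true) + (1 - alpha) * (pred - teacher_pred)
        w -= lr * (X.T @ grad) / n
        b -= lr * grad.mean()
    return w, b
```

When the teacher is more accurate than the noisy labels, the blended target has lower noise than the labels alone, which is the mechanism by which distillation lifts the small model's local accuracy.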