Title: Joint optimization of AI large and small models for surface temperature and emissivity retrieval using knowledge distillation
Authors: Wang Dai, Kebiao Mao, Zhonghua Guo, Zhihao Qin, Jiancheng Shi, Sayed M. Bateni, Liurui Xiao
Journal: Artificial Intelligence in Agriculture, Volume 15, Issue 3, Pages 407-425. Published 2025-04-12. DOI: 10.1016/j.aiia.2025.03.009

Abstract: The rapid advancement of artificial intelligence in domains such as natural language processing has catalyzed AI research across many fields. This study introduces a novel strategy, AutoKeras-Knowledge Distillation (AK-KD), which uses knowledge distillation to jointly optimize large and small models for retrieving surface temperature and emissivity from thermal infrared remote sensing. The approach addresses the limited accuracy of surface temperature retrieval by employing a high-performance large model, developed through AutoKeras, as the teacher model, which then enhances a less accurate small model through knowledge distillation. The resulting student model is interactively integrated with the large model to further improve specificity and generalization. Theoretical derivations and practical applications validate that the AK-KD strategy significantly enhances the accuracy of temperature and emissivity retrieval. For instance, a large model trained with simulated ASTER data achieved a Pearson correlation coefficient (PCC) of 0.999 and a mean absolute error (MAE) of 0.348 K in surface temperature retrieval; in practical applications, the same model demonstrated a PCC of 0.967 and an MAE of 0.685 K. Although the large model exhibits high average accuracy, its precision in complex terrain is comparatively lower. To address this, the large model, serving as a teacher, enhances the small model's local accuracy. Specifically, in surface temperature retrieval the small model's PCC improved from an average of 0.978 to 0.979 and the MAE decreased from 1.065 K to 0.724 K; in emissivity retrieval the PCC rose from an average of 0.827 to 0.898 and the MAE fell from 0.0076 to 0.0054. This research not only provides robust technological support for the further development of thermal infrared remote sensing in temperature and emissivity retrieval but also offers important references and key technological insights for building universal models for other geophysical parameter retrievals.

Title: Assessing particle application in multi-pass overlapping scenarios with variable rate centrifugal fertilizer spreaders for precision agriculture
Authors: Shi Yinyan, Zhu Yangxu, Wang Xiaochan, Zhang Xiaolei, Zheng Enlai, Zhang Yongnian
Journal: Artificial Intelligence in Agriculture, Volume 15, Issue 3, Pages 395-406. Published 2025-04-10. DOI: 10.1016/j.aiia.2025.04.003

Abstract: Environmental impacts and economic demands are driving the development of variable rate fertilization (VRF) technology for precision agriculture. Despite the advantages of a simple structure, low cost and high efficiency, uneven spreading uniformity has become a key factor restricting the application of centrifugal fertilizer spreaders. The particle application characteristics and variation laws of centrifugal VRF spreaders under multi-pass overlapped spreading therefore need to be explored urgently in order to improve their distribution uniformity and working accuracy. In this study, the working performance of a self-developed centrifugal VRF spreader based on real-time growth information of rice and wheat was investigated and tested using the collection-tray methods prescribed in ISO 5690 and ASAE S341.2. The coefficient of variation (CV) was calculated by weighing the fertilizer mass in standard pans in order to evaluate the distribution uniformity of the spreading patterns. The results showed that the effective application widths were 21.05, 22.58 and 23.67 m for application rates of 225, 300 and 375 kg/ha, respectively. The actual application rates under multi-pass overlapped spreading were generally higher than the target rates, and the particle distribution CVs within the effective spreading widths were 11.51 %, 9.25 % and 11.28 % for the respective target rates. Field tests of multi-pass overlapped spreading showed an average difference of 4.54 % between actual and target application rates and an average particle distribution CV of 11.94 % within the operating width, which meets the requirements for transverse particle distribution of centrifugal fertilizer spreaders. These results provide a theoretical reference for the technical innovation and development of centrifugal VRF spreaders and are of practical and social significance for accelerating their application in precision agriculture.

Title: Transformer-based audio-visual multimodal fusion for fine-grained recognition of individual sow nursing behaviour
Authors: Yuqing Yang, Chengguo Xu, Wenhao Hou, Alan G. McElligott, Kai Liu, Yueju Xue
Journal: Artificial Intelligence in Agriculture, Volume 15, Issue 3, Pages 363-376. Published 2025-04-08. DOI: 10.1016/j.aiia.2025.03.006

Abstract: Nursing behaviour and the calling-to-nurse sound are crucial indicators for assessing sow maternal behaviour and nursing status. However, accurately identifying these behaviours for individual sows in complex indoor pig housing is challenging due to factors such as variable lighting, rail obstructions, and interference from other sows' calls. Multimodal fusion, which integrates audio and visual data, has proven to be an effective approach for improving accuracy and robustness in complex scenarios. In this study, we designed an audio-visual data acquisition system that includes a camera for synchronised audio and video capture, along with a custom-developed sound source localisation system that leverages a sound sensor to track sound direction. Specifically, we proposed a novel transformer-based audio-visual multimodal fusion (TMF) framework for recognising fine-grained sow nursing behaviour with or without the calling-to-nurse sound. Initially, a unimodal self-attention enhancement (USE) module was employed to augment video and audio features with global contextual information. Subsequently, we developed an audio-visual interaction enhancement (AVIE) module to compress relevant information and reduce noise using the information bottleneck principle. Moreover, we presented an adaptive dynamic decision fusion strategy to optimise the model's performance by focusing on the most relevant features in each modality. Finally, we comprehensively identified fine-grained nursing behaviours by integrating audio and fused information, while incorporating angle information from the real-time sound source localisation system to accurately determine whether the sound cues originate from the target sow. Our results demonstrate that the proposed method achieves an accuracy of 98.42 % for general sow nursing behaviour and 94.37 % for fine-grained nursing behaviour, including nursing with and without the calling-to-nurse sound, and non-nursing behaviours. This fine-grained nursing information can provide a more nuanced understanding of the sow's health and lactation willingness, thereby enhancing management practices in pig farming.

{"title":"Navigating challenges/opportunities in developing smart agricultural extension platforms: Multi-media data mining techniques","authors":"Josué Kpodo , A. Pouyan Nejadhashemi","doi":"10.1016/j.aiia.2025.04.001","DOIUrl":"10.1016/j.aiia.2025.04.001","url":null,"abstract":"<div><div>Agricultural Extension (AE) research faces significant challenges in producing relevant and practical knowledge due to rapid advancements in artificial intelligence (AI). AE struggles to keep pace with these advancements, complicating the development of actionable information. One major challenge is the absence of intelligent platforms that enable efficient information retrieval and quick decision-making. Investigations have shown a shortage of AI-assisted solutions that effectively use AE materials across various media formats while preserving scientific accuracy and contextual relevance. Although mainstream AI systems can potentially reduce decision-making risks, their usage remains limited. This limitation arises primarily from the lack of standardized datasets and concerns regarding user data privacy. For AE datasets to be standardized, they must satisfy four key criteria: inclusion of critical domain-specific knowledge, expert curation, consistent structure, and acceptance by peers. Addressing data privacy issues involves adhering to open-access principles and enforcing strict data encryption and anonymization standards. To address these gaps, a conceptual framework is introduced. This framework extends beyond typical user-oriented platforms and comprises five core modules. It features a neurosymbolic pipeline integrating large language models with physically based agricultural modeling software, further enhanced by Reinforcement Learning from Human Feedback. Notable aspects of the framework include a dedicated human-in-the-loop process and a governance structure consisting of three primary bodies focused on data standardization, ethics and security, and accountability and transparency. Overall, this work represents a significant advancement in agricultural knowledge systems, potentially transforming how AE services deliver critical information to farmers and other stakeholders.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 3","pages":"Pages 426-448"},"PeriodicalIF":8.2,"publicationDate":"2025-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143842756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Improving the performance of machine learning algorithms for detection of individual pests and beneficial insects using feature selection techniques
Authors: Rabiu Aminu, Samantha M. Cook, David Ljungberg, Oliver Hensel, Abozar Nasirahmadi
Journal: Artificial Intelligence in Agriculture, Volume 15, Issue 3, Pages 377-394. Published 2025-04-04. DOI: 10.1016/j.aiia.2025.03.008

Abstract: To reduce damage caused by insect pests, farmers apply insecticides to protect their crops. This practice leads to high synthetic chemical usage because a large portion of the applied insecticide does not reach its intended target; instead, it may affect non-target organisms and pollute the environment. One approach to mitigating this is the selective application of insecticides only to those crop plants (or patches of plants) where the insect pests are located, avoiding non-targets and beneficials. The first step is to identify insects on plants and discriminate between pests and beneficial non-targets. However, detecting small individual insects is challenging with image-based machine learning techniques, especially in natural field settings. This paper proposes a method based on explainable artificial intelligence feature selection and machine learning to detect pests and beneficial insects in field crops. An insect-plant dataset reflecting real field conditions was created. It comprises two pest insects, the Colorado potato beetle (CPB, Leptinotarsa decemlineata) and the green peach aphid (Myzus persicae), and the beneficial seven-spot ladybird (Coccinella septempunctata). The specialist herbivore CPB was imaged only on potato plants (Solanum tuberosum), while green peach aphids and seven-spot ladybirds were imaged on three crops: potato, faba bean (Vicia faba) and sugar beet (Beta vulgaris subsp. vulgaris). This increased dataset diversity, broadening the potential application of the developed method for discriminating between pests and beneficial insects in several crops. The insects were imaged in both laboratory and outdoor settings. Using the GrabCut algorithm, regions of interest in the image were identified before shape, texture and colour features were extracted from the segmented regions. The concept of explainable artificial intelligence was adopted by incorporating permutation feature importance ranking and Shapley Additive Explanations (SHAP) values to identify the feature set that optimized a model's performance while reducing computational complexity. The proposed explainable artificial intelligence feature selection method was compared to conventional feature selection techniques, including mutual information, the chi-square coefficient, the maximal information coefficient, the Fisher separation criterion and variance thresholding. Results showed improved accuracy (92.62 % Random forest, 90.16 % Support vector machine, 83.61 % K-nearest neighbours and 81.97 % Naïve Bayes) and a reduction in the number of model parameters and memory usage (7.22 × 10⁷ Random forest, 6.23 × 10³ Support vector machine, 3.64 × 10⁴ K-nearest neighbours and 1.88 × 10² Naïve Bayes) compared with using all features. Prediction and training times were also reduced by approximately …

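The feature-ranking step described here can be approximated with scikit-learn's permutation importance. The sketch below runs on synthetic stand-in features; the importance threshold is an illustrative assumption, and the SHAP-value analysis the paper also uses is omitted.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for shape, texture and colour features extracted from insect images.
X, y = make_classification(n_samples=600, n_features=40, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Rank features by how much shuffling each one degrades held-out accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
keep = result.importances_mean > 0.005            # illustrative threshold
print(f"kept {keep.sum()} of {X.shape[1]} features")

# Retrain on the reduced feature set and compare accuracy.
reduced = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train[:, keep], y_train)
print("all features:", model.score(X_test, y_test),
      "selected features:", reduced.score(X_test[:, keep], y_test))
```
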
Title: Digitalizing greenhouse trials: An automated approach for efficient and objective assessment of plant damage using deep learning
Authors: Laura Gómez-Zamanillo, Arantza Bereciartúa-Pérez, Artzai Picón, Liliana Parra, Marian Oldenbuerger, Ramón Navarra-Mestre, Christian Klukas, Till Eggers, Jone Echazarra
Journal: Artificial Intelligence in Agriculture, Volume 15, Issue 2, Pages 280-295. Published 2025-03-17. Open access. DOI: 10.1016/j.aiia.2025.03.001

Abstract: Image-based and, more recently, deep learning-based systems have provided good results in several applications. Greenhouse trials are a key part of the process of developing and testing new herbicides and of analyzing the response of species to different products and doses in a controlled way. In all trials, damage to the plants is assessed daily by expert visual evaluation, which is time-consuming and lacks repeatability. Greenhouse trials therefore require new digital tools that reduce this time-consuming process and give experts more objective and repeatable methods for establishing the damage in the plants.

To this end, a novel method is proposed, composed of an initial segmentation of the plant species followed by a multibranch convolutional neural network to estimate the damage level. In this way, the need for costly pixelwise manual annotation of damage symptoms is avoided, and the global damage estimates provided by the experts are used instead.

The algorithm has been deployed under real greenhouse trial conditions in a pilot study located at BASF in Germany and tested on four species (GLXMA, TRZAW, ECHCG, AMARE). The results show mean absolute error (MAE) values ranging from 5.20 for AMARE to 8.07 for ECHCG for the estimation of the PDCU value, with correlation values (R²) higher than 0.85 in all situations and up to 0.92 for AMARE. These results surpass the inter-rater variability of human experts, demonstrating that the proposed automated method is appropriate for automatically assessing greenhouse damage trials.

Title: TGFN-SD: A text-guided multimodal fusion network for swine disease diagnosis
Authors: Gan Yang, Qifeng Li, Chunjiang Zhao, Chaoyuan Wang, Hua Yan, Rui Meng, Yu Liu, Ligen Yu
Journal: Artificial Intelligence in Agriculture, Volume 15, Issue 2, Pages 266-279. Published 2025-03-14. Open access. DOI: 10.1016/j.aiia.2025.03.002

Abstract: China is the world's largest producer of pigs, but traditional manual prevention, treatment and diagnosis methods cannot satisfy the demands of the current intensive production environment. Existing computer-aided diagnosis (CAD) systems for pigs are dominated by expert systems, which cannot be widely applied because collecting and maintaining knowledge is difficult, and most of them ignore the effect of multimodal information. This study proposes a swine disease diagnosis model, the Text-Guided Fusion Network-Swine Diagnosis (TGFN-SD) model, which integrates text case reports and disease images. The model integrates the differences and complementary information in the multimodal representation of diseases through a text-guided transformer module, so that text case reports can carry the semantic information of disease images for disease identification. It also alleviates the phenotypic overlap caused by similar diseases by combining supervised learning and self-supervised learning. Experimental results show that TGFN-SD achieves satisfactory performance on a constructed swine disease image and text dataset (SDT6K) covering six disease classes, with an accuracy of 94.48 % and an F1-score of 94.4 %. Accuracy and F1-score increased by 8.35 % and 7.24 % compared with the unimodal setting and by 2.02 % and 1.63 % compared with the best baseline model under multimodal fusion. In addition, interpretability analysis revealed that the regions the model focuses on are consistent with the habits and rules of veterinary clinical diagnosis of pigs, indicating the effectiveness of the proposed model and providing new ideas and perspectives for the study of swine disease CAD.

Title: A review of the application prospects of cloud-edge-end collaborative technology in freshwater aquaculture
Authors: Jihao Wang, Xiaochan Wang, Yinyan Shi, Haihui Yang, Bo Jia, Xiaolei Zhang, Lebin Lin
Journal: Artificial Intelligence in Agriculture, Volume 15, Issue 2, Pages 232-251. Published 2025-03-04. Open access. DOI: 10.1016/j.aiia.2025.02.008

Abstract: This paper reviews the application and potential of cloud-edge-end collaborative (CEEC) technology in the field of freshwater aquaculture, a rapidly developing sector driven by the growing global demand for aquatic products. The sustainable development of freshwater aquaculture has become a critical challenge due to issues such as water pollution and inefficient resource utilization in traditional farming methods. In response to these challenges, the integration of smart technologies has emerged as a promising solution to improve both efficiency and sustainability. Cloud computing and edge computing, when combined, form the backbone of CEEC technology, offering an innovative approach that can significantly enhance aquaculture practices. By leveraging the strengths of both technologies, CEEC enables efficient data processing through cloud infrastructure and real-time responsiveness via edge computing, making it a compelling solution for modern aquaculture. This review explores the key applications of CEEC in areas such as environmental monitoring, intelligent feeding systems, health management, and product traceability. The ability of CEEC technology to optimize the aquaculture environment, enhance product quality, and boost overall farming efficiency highlights its potential to become a mainstream solution in the industry. Furthermore, the paper discusses the limitations and challenges that need to be addressed in order to fully realize the potential of CEEC in freshwater aquaculture. In conclusion, this paper provides researchers and practitioners with valuable insights into the current state of CEEC technology in aquaculture, offering suggestions for future development and optimization to further enhance its contributions to the sustainable growth of freshwater aquaculture.

Title: Prediction of sugar beet yield and quality parameters using Stacked-LSTM model with pre-harvest UAV time series data and meteorological factors
Authors: Qing Wang, Ke Shao, Zhibo Cai, Yingpu Che, Haochong Chen, Shunfu Xiao, Ruili Wang, Yaling Liu, Baoguo Li, Yuntao Ma
Journal: Artificial Intelligence in Agriculture, Volume 15, Issue 2, Pages 252-265. Published 2025-02-27. Open access. DOI: 10.1016/j.aiia.2025.02.004

Abstract: Accurate pre-harvest prediction of sugar beet yield is vital for effective agricultural management and decision-making. However, traditional methods are constrained by reliance on empirical knowledge, time-consuming processes, resource intensiveness, and spatial-temporal variability in prediction accuracy. This study presented a plot-level approach that leverages UAV technology and recurrent neural networks to provide field yield predictions within the same growing season, addressing a significant gap in previous research that often focuses on regional-scale predictions relying on multi-year history datasets. End-of-season yield and quality parameters were forecasted using UAV-derived time series data and meteorological factors collected at three critical growth stages, providing a timely and practical tool for farm management. Two years of data covering 185 sugar beet varieties were used to train a developed stacked Long Short-Term Memory (LSTM) model, which was compared with traditional machine learning approaches. Incorporating fresh weight estimates of aboveground and root biomass as predictive factors significantly enhanced prediction accuracy. Optimal performance in prediction was observed when utilizing data from all three growth periods, with R² values of 0.761 (rRMSE = 7.1 %) for sugar content, 0.531 (rRMSE = 22.5 %) for root yield, and 0.478 (rRMSE = 23.4 %) for sugar yield. Furthermore, combining data from the first two growth periods shows promising results for making the predictions earlier. Key predictive features identified through the Permutation Importance (PIMP) method provided insights into the main factors influencing yield. These findings underscore the potential of using UAV time-series data and recurrent neural networks for accurate pre-harvest yield prediction at the field scale, supporting timely and precise agricultural decisions.

Title: Deep learning-based classification, detection, and segmentation of tomato leaf diseases: A state-of-the-art review
Authors: Aritra Das, Fahad Pathan, Jamin Rahman Jim, Md Mohsin Kabir, M.F. Mridha
Journal: Artificial Intelligence in Agriculture, Volume 15, Issue 2, Pages 192-220. Published 2025-02-20. Open access. DOI: 10.1016/j.aiia.2025.02.006

Abstract: The early identification and treatment of tomato leaf diseases are crucial for optimizing plant productivity, efficiency and quality. Misdiagnosis by farmers risks inadequate treatments that harm both tomato plants and agroecosystems. Precise disease diagnosis is therefore essential, and misdiagnoses must be recognized and corrected quickly for early identification to be effective. Tropical regions are ideal for growing tomatoes, but weather-related problems remain an inherent concern. Plant diseases are a major cause of financial losses in crop production, and the slow detection times of conventional approaches are insufficient for the timely detection of tomato diseases. Deep learning has emerged as a promising avenue for early disease identification. This study comprehensively analyzes techniques for classifying and detecting tomato leaf diseases and evaluates their strengths and weaknesses. It covers various diagnostic procedures, including image pre-processing, localization and segmentation. In conclusion, deep learning algorithms hold great promise for enhancing the accuracy and efficiency of tomato leaf disease diagnosis by offering faster and more effective results.