Deep learning-based classification, detection, and segmentation of tomato leaf diseases: A state-of-the-art review
Aritra Das, Fahad Pathan, Jamin Rahman Jim, Md Mohsin Kabir, M.F. Mridha
Artificial Intelligence in Agriculture 15(2): 192–220 (2025-02-20). DOI: 10.1016/j.aiia.2025.02.006
Abstract: The early identification and treatment of tomato leaf diseases are crucial for optimizing plant productivity, efficiency and quality. Misdiagnosis by farmers risks inadequate treatments that harm both tomato plants and agroecosystems, so precise diagnosis, with any misdiagnosis corrected swiftly, is essential for early identification. Tropical regions are ideal for tomato cultivation but carry inherent concerns such as weather-related problems, and plant diseases are a major cause of financial losses in crop production. The slow detection periods of conventional approaches are insufficient for timely detection of tomato diseases, and deep learning has emerged as a promising avenue for early disease identification. This study comprehensively analyzes techniques for classifying and detecting tomato leaf diseases and evaluates their strengths and weaknesses, covering the stages of the diagnostic procedure, including image pre-processing, localization and segmentation. In conclusion, applying deep learning algorithms holds great promise for enhancing the accuracy and efficiency of tomato leaf disease diagnosis by offering faster and more effective results.
Using UAV-based multispectral images and CGS-YOLO algorithm to distinguish maize seeding from weed
Boyi Tang, Jingping Zhou, Chunjiang Zhao, Yuchun Pan, Yao Lu, Chang Liu, Kai Ma, Xuguang Sun, Ruifang Zhang, Xiaohe Gu
Artificial Intelligence in Agriculture 15(2): 162–181 (2025-02-17). DOI: 10.1016/j.aiia.2025.02.007
Abstract: Accurate recognition of maize seedlings at the plot scale under weed disturbance is crucial for early seedling replenishment and weed removal. Currently, UAV-based maize seedling recognition relies primarily on RGB images. The main purpose of this study is to compare the performance of unmanned aerial vehicle (UAV) multispectral images and RGB images for maize seedling recognition using deep learning algorithms, and to assess how different levels of weed coverage disturb recognition. Firstly, principal component analysis (PCA) was used to transform the multispectral images. Secondly, by introducing the CARAFE sampling operator and a small target detection layer (SLAY), the contextual information of each pixel was extracted to retain weak features in the maize seedling images. Thirdly, a global attention mechanism (GAM) was employed to capture maize seedling features through the dual attention mechanism of spatial and channel information. The resulting algorithm is named CGS-YOLO. Finally, we compared the improved algorithm with a series of deep learning algorithms, including YOLO v3, v5, v6 and v8. The results show that after PCA transformation, the recognition mAP of maize seedlings reaches 82.6 %, a 3.1-percentage-point improvement over RGB images. Compared with YOLOv8, YOLOv6, YOLOv5 and YOLOv3, CGS-YOLO improves mAP by 3.8, 4.2, 4.5 and 6.6 percentage points, respectively. As weed coverage increases, recognition of maize seedlings gradually degrades; when weed coverage exceeds 70 %, the mAP gap becomes significant, yet CGS-YOLO still maintains a recognition mAP of 72 %. Therefore, for maize seedling recognition, UAV-based multispectral images outperform RGB images, and applying the CGS-YOLO deep learning algorithm to UAV multispectral images is beneficial for recognizing maize seedlings under weed disturbance.
Stereo vision based broccoli recognition and attitude estimation method for field harvesting
Zhenni He, Fahui Yuan, Yansuo Zhou, Bingbo Cui, Yong He, Yufei Liu
Artificial Intelligence in Agriculture 15(3): 526–536 (2025-02-13). DOI: 10.1016/j.aiia.2025.02.002
Abstract: Automatic broccoli harvesting in the field still faces several issues: it is difficult to segment broccoli in real time against a complex field background, and tilt-growing broccoli is hard for a robot end-effector to pick. In this research, an improved YOLOv8n-seg model named YOLO-Broccoli-Seg is proposed for broccoli recognition. By adding a triplet attention module to the YOLOv8n-seg model, the feature fusion capability of the algorithm is improved significantly. The mean average precision values mAP50 (Mask), mAP95 (Mask), mAP50 (Bounding Box, Bbox) and mAP95 (Bbox) of YOLO-Broccoli-Seg are 0.973, 0.683, 0.973 and 0.748, respectively. Precision (P) improved the most, with an increment of 8.7 %. In addition, an attitude estimation method based on three-dimensional point clouds is proposed. When the tilt angle of the broccoli is between −30° and 30°, the R² between the estimated and true values is 0.934, indicating that the method represents the growth attitude of broccoli well. This research provides rich broccoli information and a technical basis for automated broccoli picking.
End-to-end deep fusion of hyperspectral imaging and computer vision techniques for rapid detection of wheat seed quality
Tingting Zhang, Jing Li, Jinpeng Tong, Yihu Song, Li Wang, Renye Wu, Xuan Wei, Yuanyuan Song, Rensen Zeng
Artificial Intelligence in Agriculture 15(3): 537–549 (2025-02-13). DOI: 10.1016/j.aiia.2025.02.003
Abstract: Seeds are essential to the agri-food industry. However, their quality is vulnerable to biotic and abiotic stresses during production and storage, leading to various types of deterioration. Real-time monitoring and pre-sowing screening offer substantial potential for improved storage management, field performance, and flour quality. This study investigated diverse deterioration patterns in wheat seeds by analyzing 1000 high-quality and 1098 deteriorated seeds encompassing mold, aging, mechanical damage, insect damage, and internal insect infestation. Hyperspectral imaging (HSI) and computer vision (CV) were employed to capture surface data from both the embryo (EM) and endosperm (EN). Internal seed quality was further assessed using scanning electron microscopy, dissection, and standard germination tests. Both conventional machine learning algorithms and deep convolutional neural networks (DCNN) were employed to develop discriminative models using independent datasets. Results revealed that each data source contributed valuable information for seed quality assessment (validation set accuracy: 65.1–89.2 %), with the integration of HSI and CV showing considerable promise. A comparison of early and late fusion strategies led to the development of an end-to-end deep fusion model. The decision fusion-based DCNN model, integrating HSI-EM, HSI-EN, CV-EM, and CV-EN data, achieved the highest accuracy in both the training (94.3 %) and validation (93.8 %) sets. Applying this model to seed lot screening increased the proportion of high-quality seeds from 47.7 % to 93.4 %. These findings were further supported by external samples and visualizations. The proposed end-to-end decision fusion DCNN model simplifies the training process compared with traditional two-stage fusion methods. This study presents a potentially efficient alternative for rapid, individual kernel quality detection and control during wheat production.
Addressing computation resource exhaustion associated with deep learning training of three-dimensional hyperspectral images using multiclass weed classification
Billy G. Ram, Kirk Howatt, Joseph Mettler, Xin Sun
Artificial Intelligence in Agriculture 15(2): 131–146 (2025-02-11). DOI: 10.1016/j.aiia.2025.02.005
Abstract: Addressing the computational bottleneck of training deep learning models on high-resolution, three-dimensional images, this study introduces an optimized approach combining distributed learning (parallelism), image resolution, and data augmentation. We propose analysis methodologies that help train deep learning (DL) models on proximal hyperspectral images, demonstrating superior performance in eight-class crop (canola, field pea, sugarbeet and flax) and weed (redroot pigweed, resistant kochia, waterhemp and ragweed) classification. State-of-the-art architectures (ResNet-50, VGG-16, DenseNet, EfficientNet) were compared with a ResNet-50-inspired Hyper-Residual Convolutional Neural Network. Our findings reveal that an image resolution of 100 × 100 × 54 maximizes accuracy while maintaining computational efficiency, surpassing the performance of 150 × 150 × 54 and 50 × 50 × 54 images. By employing data parallelism, we overcome system memory limitations and achieve exceptional classification results, with test accuracies and F1-scores reaching 0.96 and 0.97, respectively. This research highlights the potential of residual-based networks for analyzing hyperspectral images and offers valuable insights into optimizing deep learning models in resource-constrained environments. It presents detailed training pipelines for deep learning models that use large (>4k) hyperspectral training sets, including background and without any data preprocessing, enabling deep learning models to be trained directly on raw hyperspectral data.
Advancing precision agriculture: A comparative analysis of YOLOv8 for multi-class weed detection in cotton cultivation
Ameer Tamoor Khan, Signe Marie Jensen, Abdul Rehman Khan
Artificial Intelligence in Agriculture 15(2): 182–191 (2025-02-11). DOI: 10.1016/j.aiia.2025.01.013
Abstract: Effective weed management plays a critical role in enhancing the productivity and sustainability of cotton cultivation. The rapid emergence of herbicide-resistant weeds has underscored the need for innovative solutions to address the challenges associated with precise weed detection. This paper investigates the potential of YOLOv8, the latest advancement in the YOLO family of object detectors, for multi-class weed detection in U.S. cotton fields. Leveraging the CottonWeedDet12 dataset, which includes diverse weed species captured under varying environmental conditions, this study provides a comprehensive evaluation of YOLOv8's performance. A comparative analysis with earlier YOLO variants reveals substantial improvements in detection accuracy, as evidenced by higher mean Average Precision (mAP) scores. These findings highlight YOLOv8's superior capability to generalize across complex field scenarios, making it a promising candidate for real-time applications in precision agriculture. The enhanced architecture of YOLOv8, featuring anchor-free detection, an advanced Feature Pyramid Network (FPN), and an optimized loss function, enables accurate detection even under challenging conditions. This research emphasizes the importance of machine vision technologies in modern agriculture, particularly for minimizing herbicide reliance and promoting sustainable farming practices. The results not only validate YOLOv8's efficacy in multi-class weed detection but also pave the way for its integration into autonomous agricultural systems, thereby contributing to the broader goals of precision agriculture and ecological sustainability.
{"title":"Precision agriculture technologies for soil site-specific nutrient management: A comprehensive review","authors":"Niharika Vullaganti, Billy G. Ram, Xin Sun","doi":"10.1016/j.aiia.2025.02.001","DOIUrl":"10.1016/j.aiia.2025.02.001","url":null,"abstract":"<div><div>Amidst the growing food demands of an increasing population, agricultural intensification frequently depends on excessive chemical and fertilizer applications. While this approach initially boosts crop yields, it effects long-term sustainability through soil degradation and compromised food quality. Thus, prioritizing soil health while enhancing crop production is essential for sustainable food production. Site-Specific Nutrient Management (SSNM) emerges as a critical strategy to increase crop production, maintain soil health, and reduce environmental pollution. Despite its potential, the application of SSNM technologies remain limited in farmers' fields due to existing research gaps. This review critically analyzes and presents research conducted in SSNM in the past 11 years (2013–2024), identifying gaps and future research directions. A comprehensive study of 97 relevant research publications reveals several key findings: a) Electrochemical sensing and spectroscopy are the two widely explored areas in SSNM research, b) Despite numerous technologies in SSNM, each has its own limitation, preventing any single technology from being ideal, c) The selection of models and preprocessing techniques significantly impacts nutrient prediction accuracy, d) No single sensor or sensor combination can predict all soil properties, as suitability is highly attribute-specific. This review provides researchers, and technical personnel in precision agriculture, and farmers with detailed insights into SSNM research, its implementation, limitations, challenges, and future research directions.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 2","pages":"Pages 147-161"},"PeriodicalIF":8.2,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143507923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI-driven aquaculture: A review of technological innovations and their sustainable impacts
Hang Yang, Qi Feng, Shibin Xia, Zhenbin Wu, Yi Zhang
Artificial Intelligence in Agriculture 15(3): 508–525 (2025-02-06). DOI: 10.1016/j.aiia.2025.01.012
Abstract: The integration of artificial intelligence (AI) in aquaculture has been identified as a transformative force, enhancing various operational aspects from water quality management to genetic optimization. This review provides a comprehensive synthesis of recent advancements in AI applications within the aquaculture sector, underscoring the significant enhancements in production efficiency and environmental sustainability. Key AI-driven improvements, such as predictive analytics for disease management and optimized feeding protocols, are highlighted, demonstrating their contributions to reducing waste and improving biomass outputs. However, challenges remain in terms of data quality, system integration, and the socio-economic impacts of technological adoption across diverse aquacultural environments. This review also addresses the gaps in current research, particularly the lack of robust, scalable AI models and frameworks that can be universally applied. Future directions are discussed, emphasizing the need for interdisciplinary research and development to fully leverage AI's potential in aquaculture. This study not only maps the current landscape of AI applications but also serves as a call for continued innovation and strategic collaborations to overcome existing barriers and realize the full benefits of AI in aquaculture.
An efficient strawberry segmentation model based on Mask R-CNN and TensorRT
Anthony Crespo, Claudia Moncada, Fabricio Crespo, Manuel Eugenio Morocho-Cayamcela
Artificial Intelligence in Agriculture 15(2): 327–337 (2025-02-03). DOI: 10.1016/j.aiia.2025.01.008
Abstract: Artificial intelligence (AI), and particularly computer vision (CV), currently has numerous applications in agriculture. Strawberry production and consumption have grown considerably in recent years, making it a challenge for producers to meet the rising demand. One of the main problems in cultivating this fruit, however, is the high cost and long duration of picking. In response, automatic harvesting has emerged as an option to address this difficulty, and fruit instance segmentation plays a crucial role in such systems. Fruit segmentation involves identifying and separating individual fruits within a crop, allowing a more efficient and accurate harvesting process. Although deep learning (DL) techniques have shown potential for this task, the complexity of the models makes them difficult to deploy in real-time systems. A model that performs adequately in real time while retaining good precision is therefore of great interest. With this motivation, this work presents an efficient Mask R-CNN model for instance segmentation of strawberry fruits. The efficiency of the model is assessed by the number of frames per second (FPS) it can process, its size in megabytes (MB), and its mean average precision (mAP). Two approaches are provided: the first trains the model using the Detectron2 library, while the second trains it using the NVIDIA TAO Toolkit. In both cases, NVIDIA TensorRT is used to optimize the models. The results show that the best Mask R-CNN model without optimization reaches 83.45 mAP, 4 FPS, and 351 MB in size; after TensorRT optimization it achieves 83.17 mAP, 25.46 FPS, and only 48.2 MB, making it suitable for implementation in real-time systems.
PWM offline variable application based on UAV remote sensing 3D prescription map
Leng Han, Zhichong Wang, Miao He, Yajia Liu, Xiongkui He
Artificial Intelligence in Agriculture 15(3): 496–507 (2025-01-27). DOI: 10.1016/j.aiia.2025.01.011
Abstract: Precision application in orchards enhances deposition uniformity and environmental sustainability by accurately matching nozzle output with canopy parameters. This study provides a pipeline for creating 3D prescription maps with a UAV and performing offline variable application, and evaluates the accuracy of ground altitude measurements at various flight heights. At a flight height of 30 m, using a three-dimensional reconstruction method without phase-control points, the root mean square error (RMSE) of ground altitude measurement was 0.214 m and the mean absolute error (MAE) was 0.211 m; for the canopy area, these values were 0.591 m and 0.541 m, respectively. As flight height increased, the accuracy of altitude measurements declined and tended toward underestimation. Moreover, during offline variable spraying, the shape of the spray area influenced deposition accuracy, with line-segment collision-detection areas achieving greater precision than conical ones. Field tests showed that the offline variable application method reduced pesticide usage by 32.43 % and enhanced spray uniformity. The newly developed process does not require costly sensors on each sprayer and has potential for field applications.