Detectability of multi-dimensional movement and behaviour in cattle using sensor data and machine learning algorithms: Study on a Charolais bull
Miklós Biszkup, Gábor Vásárhelyi, Nuri Nurlaila Setiawan, Aliz Márton, Szilárd Szentes, Petra Balogh, Barbara Babay-Török, Gábor Pajor, Dóra Drexler
Artificial Intelligence in Agriculture 14 (2024): 86-98. doi:10.1016/j.aiia.2024.11.002 (published 2024-11-13).

The development of motion sensors for monitoring cattle behaviour has enabled farmers to predict the state of their cattle's welfare more efficiently. While most studies produce a one-dimensional output of disjoint behaviour categories, prediction accuracy can be improved by including complex movements and enriching the sensor algorithm to detect multi-dimensional movements, i.e., more than one movement occurring simultaneously. This paper presents such a machine-learning method for analysing overlapping, independent movements. The output of the method consists of automatically recognized complex behaviour patterns that can be used for measuring animal welfare, predicting calving, or detecting early signs of disease. The study combines camera observation with automated motion sensors for ruminants (the RumiWatch halter and pedometer) mounted on a Charolais fattening bull. Fourteen types of complex movements were identified: defecating-urinating, eating, drinking, getting up, head movement, licking, lying down, lying, playing-aggression, rubbing, ruminating, sleeping, standing, and stepping. Because multiple parallel binary classifiers were used, the system was able to recognize parallel behavioural patterns with high fidelity. Two types of machine learning model, Support Vector Classification (SVC) and Random Forest, were used to recognize different general and non-general forms of movement, and the results of the two supervised learning systems were compared. Forty-eight hours of continuous video were annotated to train the systems and validate their predictions. The success rate of both classifiers in recognizing particular movements, from both sensors together or separately and under different settings (i.e., window and padding), was examined. Although the two classifiers produced different results, under the ideal settings all forms of movement in the subject animal were recognized with high accuracy. Further studies using more individual animals and different ruminant species would increase our knowledge of how to enhance the system's performance and accuracy.
Estimating TYLCV resistance level using RGBD sensors in production greenhouse conditions
Dorin Shmaryahu, Rotem Lev Lehman, Ezri Peleg, Guy Shani
Artificial Intelligence in Agriculture 14 (2024): 31-42. doi:10.1016/j.aiia.2024.10.004 (published 2024-11-03).

Automated phenotyping is the task of automatically measuring plant attributes to help farmers and breeders develop and grow strong, robust plants. An automated tool for early illness detection can accelerate the process of identifying plant resistance and quickly pinpoint problematic breeding lines. Many such phenotyping tasks can be achieved by analyzing images from simple, low-cost RGB-D sensors. In this paper we focus on a particular case study: identifying the resistance level of tomato hybrids to the tomato yellow leaf curl virus (TYLCV) in production greenhouses. This is a difficult task, as separating resistance levels based on images is hard even for expert breeders. We collected a large dataset of images from an experiment containing many tomato hybrids with varying resistance levels. We used the depth information to identify the topmost part of the tomato plant and then used deep learning models to classify the various resistance levels. For identifying plants with visual symptoms, our methods achieved an accuracy of 0.928, a precision of 0.934, and a recall of 0.95. In the multi-class case we achieved an accuracy of 0.76 in identifying the correct level, with an error of 0.278. Our methods are not tailored to this specific task and can be extended to other tasks that identify plant diseases with visual symptoms, such as ToBRFV, mildew, and ToMV.
Development of a cutting-edge ensemble pipeline for rapid and accurate diagnosis of plant leaf diseases
S.M. Nuruzzaman Nobel, Maharin Afroj, Md Mohsin Kabir, M.F. Mridha
Artificial Intelligence in Agriculture 14 (2024): 56-72. doi:10.1016/j.aiia.2024.10.005 (published 2024-11-01).

Selecting the right techniques is a crucial aspect of disease-detection analysis, particularly at the convergence of computer vision and agricultural technology. Timely and accurate crop disease detection is essential to global food security, and deep learning is a viable answer to this need. In this study we developed and evaluated a disease detection model using a novel ensemble technique. We introduce DenseNetMini, a smaller version of DenseNet, and propose combining it with a learning resizer in an ensemble approach to enhance training accuracy and expedite learning. A further proposition is the use of the Gradient Product (GP) as an optimization technique, which effectively reduces training time and improves model performance. Examining images at different magnifications reveals noteworthy improvements in diagnostic agreement and accuracy. Test accuracies of 99.65%, 98.96%, and 98.11% were achieved on the PlantVillage, Tomato leaf, and AppleLeaf9 datasets, respectively. One of the main achievements of the research is a significant decrease in processing time, which suggests that the GP could make disease detection in agriculture more accessible and efficient. Beyond these quantitative results, the study highlights Explainable Artificial Intelligence (XAI) methods, which are essential to improving the model's interpretability and transparency: XAI visually identifies the critical areas on plant leaves used for disease identification, promoting confidence in and understanding of the model's functionality.
A review of external quality inspection for fruit grading using CNN models
Luis E. Chuquimarca, Boris X. Vintimilla, Sergio A. Velastin
Artificial Intelligence in Agriculture 14 (2024): 1-20. doi:10.1016/j.aiia.2024.10.002 (published 2024-10-16).

This article reviews the state of the art in CNN models for external quality inspection of fruit, considering parameters such as color, shape, size, and defects that are used to grade fruit according to international marketing standards for agricultural products. The review considers the number of fruit images in different datasets, the types of images used by the CNN models, the performance achieved by each CNN, the optimizers that help increase their accuracy, and the use of pre-trained CNN models for transfer learning. CNN models have used various types of images in the visible, infrared, hyperspectral, and multispectral bands, and the fruit image datasets used are either real or synthetic. Finally, several tables summarize the reviewed articles, ordered by inspection parameter, facilitating a critical comparison of each work.
{"title":"Classifying early apple scab infections in multispectral imagery using convolutional neural networks","authors":"Alexander J. Bleasdale, J. Duncan Whyatt","doi":"10.1016/j.aiia.2024.10.001","DOIUrl":"10.1016/j.aiia.2024.10.001","url":null,"abstract":"<div><div>Multispectral imaging systems combined with deep learning classification models can be cost-effective tools for the early detection of apple scab (<em>Venturia inaequalis</em>) disease in commercial orchards. Near-infrared (NIR) imagery can display apple scab symptoms earlier and at a greater severity than visible-spectrum (RGB) imagery. Early apple scab diagnosis based on NIR imagery may be automated using deep learning convolutional neural networks (CNNs). CNN models have previously been used to classify a range of apple diseases accurately but have primarily focused on identifying late-stage rather than early-stage detection. This study fine-tunes CNN models to classify apple scab symptoms as they progress from the early to late stages of infection using a novel multispectral (RGB-NIR) time series created especially for this purpose.</div><div>This novel multispectral dataset was used in conjunction with a large Apple Disease Identification (ADID) dataset created from publicly available, pre-existing disease datasets. This ADID dataset contained 29,000 images of infection symptoms across six disease classes. Two CNN models, the lightweight MobileNetV2 and heavyweight EfficientNetV2L, were fine-tuned and used to classify each disease class in a testing dataset, with performance assessed through metrics derived from confusion matrices. The models achieved scab-prediction accuracies of 97.13 % and 97.57 % for MobileNetV2 and EfficientNetV2L, respectively, on the secondary data but only achieved accuracies of 74.12 % and 78.91 % when applied to the multispectral dataset in isolation. These lower performance scores were attributed to a higher proportion of false-positive scab predictions in the multispectral dataset. Time series analyses revealed that both models could classify apple scab infections earlier than the manual classification techniques, leading to more false-positive assessments, and could accurately distinguish between healthy and infected samples up to 7 days post-inoculation in NIR imagery.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 1","pages":"Pages 39-51"},"PeriodicalIF":8.2,"publicationDate":"2024-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143097984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic location and recognition of horse freezing brand using rotational YOLOv5 deep learning network
Zhixin Hua, Yitao Jiao, Tianyu Zhang, Zheng Wang, Yuying Shang, Huaibo Song
Artificial Intelligence in Agriculture 14 (2024): 21-30. doi:10.1016/j.aiia.2024.10.003 (published 2024-10-10).

Individual identification of livestock is of great importance to precision livestock farming, and freeze branding with liquid nitrogen is an effective means of marking individual animals. Alongside various technological developments, deep-learning-based methods have been applied to recognizing such individual markings. In this research, a deep learning method for locating and recognizing oriented horse brands is proposed. First, Rotational YOLOv5 (R-YOLOv5) is adopted to locate the oriented brand; then cropped images of the brand area are used to train YOLOv5 for number recognition. In the first step, unlike classical detection methods, R-YOLOv5 introduces orientation into the YOLO framework by integrating Circle Smooth Label (CSL); in addition, Coordinate Attention (CA) is added to raise the network's attention to positional information. These improvements enhance the accuracy of detecting oriented brands. In the second step, number recognition is treated as a detection task because of the requirement for accurate recognition. Finally, the whole brand number is read off from the positions of the individual detection boxes. Experimental results showed that R-YOLOv5 outperformed other rotated-object detection algorithms, with an AP (average precision) of 95.6%, 17.4 G FLOPs, and a detection speed of 14.3 fps. For number recognition, the mAP (mean average precision) was 95.77%, the weight size was 13.71 MB, and the detection speed was 68.6 fps. The two-step method can accurately identify brand numbers against complex backgrounds and provides a stable, lightweight method for individual livestock identification.
{"title":"UAV-based field watermelon detection and counting using YOLOv8s with image panorama stitching and overlap partitioning","authors":"Liguo Jiang , Hanhui Jiang , Xudong Jing , Haojie Dang , Rui Li , Jinyong Chen , Yaqoob Majeed , Ramesh Sahni , Longsheng Fu","doi":"10.1016/j.aiia.2024.09.001","DOIUrl":"10.1016/j.aiia.2024.09.001","url":null,"abstract":"<div><p>Accurate watermelon yield estimation is crucial to the agricultural value chain, as it guides the allocation of agricultural resources as well as facilitates inventory and logistics planning. The conventional method of watermelon yield estimation relies heavily on manual labor, which is both time-consuming and labor-intensive. To address this, this work proposes an algorithmic pipeline that utilizes unmanned aerial vehicle (UAV) videos for detection and counting of watermelons. This pipeline uses You Only Look Once version 8 s (YOLOv8s) with panorama stitching and overlap partitioning, which facilitates the overall number estimation of watermelons in field. The watermelon detection model, based on YOLOv8s and obtained using transfer learning, achieved a detection accuracy of 99.20 %, demonstrating its potential for application in yield estimation. The panorama stitching and overlap partitioning based detection and counting method uses panoramic images as input and effectively mitigates the duplications compared with the video tracking based detection and counting method. The counting accuracy reached over 96.61 %, proving a promising application for yield estimation. The high accuracy demonstrates the feasibility of applying this method for overall yield estimation in large watermelon fields.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"13 ","pages":"Pages 117-127"},"PeriodicalIF":8.2,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2589721724000308/pdfft?md5=e51fdb350e08ba1871a8fe3fd59e2ca5&pid=1-s2.0-S2589721724000308-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142232004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prediction of spatial heterogeneity in nutrient-limited sub-tropical maize yield: Implications for precision management in the eastern Indo-Gangetic Plains
Zia Uddin Ahmed, Timothy J. Krupnik, Jagadish Timsina, Saiful Islam, Khaled Hossain, A.S.M. Alanuzzaman Kurishi, Shah-Al Emran, M. Harun-Ar-Rashid, Andrew J. McDonald, Mahesh K. Gathala
Artificial Intelligence in Agriculture 13 (2024): 100-116. doi:10.1016/j.aiia.2024.08.001 (published 2024-09-01).

Knowledge of the factors influencing nutrient-limited subtropical maize yield, and the ability to predict it, is crucial for effective nutrient management, maximizing profitability, ensuring food security, and promoting environmental sustainability. We analyzed data from nutrient-omission plot trials (NOPTs) conducted in 324 farmers' fields across ten agroecological zones (AEZs) in the Eastern Indo-Gangetic Plains (EIGP) of Bangladesh to explain maize yield variability and identify the variables controlling nutrient-limited yields. An additive main effects and multiplicative interaction (AMMI) model was used to explain maize yield variability with nutrient addition. Interpretable machine learning (ML) algorithms in an automatic machine learning (AutoML) framework were then used to predict nutrient-limited yield relative to attainable yield (RY) and to rank the variables controlling RY. A stacked-ensemble model was the best-performing model for predicting the RYs of N, P, and Zn, whereas deep learning outperformed all base learners for predicting RY_K. The best models' root mean square errors (RMSEs) were 0.122, 0.105, 0.123, and 0.104 for RY_N, RY_P, RY_K, and RY_Zn, respectively. A permutation-based feature-importance technique identified soil pH as the most critical variable controlling RY_N and RY_P. RY_K was lower in the eastern longitudinal direction, and soil N and Zn were associated with RY_Zn. The predicted median RYs of N, P, K, and Zn, representing average soil fertility, were 0.51, 0.84, 0.87, and 0.97, accounting for 44%, 54%, 54%, and 48% of Bangladesh's upland dry-season crop area, respectively. Efforts are needed to update databases cataloging variability in land-type inundation classes, soil characteristics, and INS, and to combine them with farmers' crop-management information to develop more precise nutrient guidelines for maize in the EIGP.
{"title":"Comparing YOLOv8 and Mask R-CNN for instance segmentation in complex orchard environments","authors":"Ranjan Sapkota, Dawood Ahmed, Manoj Karkee","doi":"10.1016/j.aiia.2024.07.001","DOIUrl":"10.1016/j.aiia.2024.07.001","url":null,"abstract":"<div><p>Instance segmentation, an important image processing operation for automation in agriculture, is used to precisely delineate individual objects of interest within images, which provides foundational information for various automated or robotic tasks such as selective harvesting and precision pruning. This study compares the one-stage YOLOv8 and the two-stage Mask R-CNN machine learning models for instance segmentation under varying orchard conditions across two datasets. Dataset 1, collected in dormant season, includes images of dormant apple trees, which were used to train multi-object segmentation models delineating tree branches and trunks. Dataset 2, collected in the early growing season, includes images of apple tree canopies with green foliage and immature (green) apples (also called fruitlet), which were used to train single-object segmentation models delineating only immature green apples. The results showed that YOLOv8 performed better than Mask R-CNN, achieving good precision and near-perfect recall across both datasets at a confidence threshold of 0.5. Specifically, for Dataset 1, YOLOv8 achieved a precision of 0.90 and a recall of 0.95 for all classes. In comparison, Mask R-CNN demonstrated a precision of 0.81 and a recall of 0.81 for the same dataset. With Dataset 2, YOLOv8 achieved a precision of 0.93 and a recall of 0.97. Mask R-CNN, in this single-class scenario, achieved a precision of 0.85 and a recall of 0.88. Additionally, the inference times for YOLOv8 were 10.9 ms for multi-class segmentation (Dataset 1) and 7.8 ms for single-class segmentation (Dataset 2), compared to 15.6 ms and 12.8 ms achieved by Mask R-CNN's, respectively. These findings show YOLOv8's superior accuracy and efficiency in machine learning applications compared to two-stage models, specifically Mask-R-CNN, which suggests its suitability in developing smart and automated orchard operations, particularly when real-time applications are necessary in such cases as robotic harvesting and robotic immature green fruit thinning.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"13 ","pages":"Pages 84-99"},"PeriodicalIF":8.2,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S258972172400028X/pdfft?md5=d0b3ae6930c8dca43a65b49ca13f6d47&pid=1-s2.0-S258972172400028X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141729373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A comprehensive survey on weed and crop classification using machine learning and deep learning
Faisal Dharma Adhinata, Wahyono, Raden Sumiharto
Artificial Intelligence in Agriculture 13 (2024): 45-63. doi:10.1016/j.aiia.2024.06.005 (published 2024-06-26).

Machine learning and deep learning are subsets of artificial intelligence that have revolutionized object detection and classification in images and videos. This technology plays a crucial role in the transition from conventional to precision agriculture, particularly in weed control. Precision agriculture, which previously relied on manual effort, now embraces smart devices for more efficient weed detection. However, weed detection faces several challenges, including the visual similarity between weeds and crops, occlusion and lighting effects, and the need for early-stage weed control. This study therefore provides a comprehensive review of the application of traditional machine learning, deep learning, and combinations of the two for weed detection across different crop fields. The review shows the advantages and disadvantages of each approach. In general, deep learning produced superior accuracy to machine learning under various conditions. Machine learning required selecting the right combination of features to classify weeds and crops accurately, particularly under challenging lighting and early-growth conditions, and a precise segmentation stage is required in cases of occlusion. Machine learning has the advantage of real-time processing, since it produces smaller models than deep learning and thereby eliminates the need for additional GPUs. However, GPU technology is developing rapidly, so researchers increasingly use deep learning for more accurate weed identification.