Title: Enhancing crop yield prediction in Senegal using advanced machine learning techniques and synthetic data
Authors: Mohammad Amin Razavi, A. Pouyan Nejadhashemi, Babak Majidi, Hoda S. Razavi, Josué Kpodo, Rasu Eeswaran, Ignacio Ciampitti, P.V. Vara Prasad
Artificial Intelligence in Agriculture, Volume 14, Pages 99-114. Published 2024-12-01. DOI: 10.1016/j.aiia.2024.11.005
Abstract: In this study, we employ advanced data-driven techniques to investigate the complex relationships between the yields of five major crops and various geographical and spatiotemporal features in Senegal. We analyze how these features influence crop yields by utilizing remotely sensed data. Our methodology incorporates clustering algorithms and correlation matrix analysis to identify significant patterns and dependencies, offering a comprehensive understanding of the factors affecting agricultural productivity in Senegal. To optimize model performance and identify the optimal hyperparameters, we implemented a comprehensive grid search across four distinct machine learning regressors: Random Forest, Extreme Gradient Boosting (XGBoost), Categorical Boosting (CatBoost), and Light Gradient-Boosting Machine (LightGBM). Each regressor offers unique functionalities, enhancing our exploration of potential model configurations. The top-performing models were selected based on evaluating multiple performance metrics, ensuring robust and accurate predictive capabilities. The results demonstrated that XGBoost and CatBoost perform better than the other two. We introduce synthetic crop data generated using a Variational Auto Encoder to address the challenges posed by limited agricultural datasets. By achieving high similarity scores with real-world data, our synthetic samples enhance model robustness, mitigate overfitting, and provide a viable solution for small dataset issues in agriculture. Our approach distinguishes itself by creating a flexible model applicable to multiple crops jointly. By integrating five crop datasets and generating high-quality synthetic data, we improve model performance, reduce overfitting, and enhance realism. Our findings provide crucial insights into productivity drivers in key cropping systems, enabling robust recommendations and strengthening the decision-making capabilities of policymakers and farmers in data-scarce regions.
{"title":"Neural network architecture search enabled wide-deep learning (NAS-WD) for spatially heterogenous property awared chicken woody breast classification and hardness regression","authors":"Chaitanya Pallerla , Yihong Feng , Casey M. Owens , Ramesh Bahadur Bist , Siavash Mahmoudi , Pouya Sohrabipour , Amirreza Davar , Dongyi Wang","doi":"10.1016/j.aiia.2024.11.003","DOIUrl":"10.1016/j.aiia.2024.11.003","url":null,"abstract":"<div><div>Due to intensive genetic selection for rapid growth rates and high broiler yields in recent years, the global poultry industry has faced a challenging problem in the form of woody breast (WB) conditions. This condition has caused significant economic losses as high as $200 million annually, and the root cause of WB has yet to be identified. Human palpation is the most common method of distinguishing a WB from others. However, this method is time-consuming and subjective. Hyperspectral imaging (HSI) combined with machine learning algorithms can evaluate the WB conditions of fillets in a non-invasive, objective, and high-throughput manner. In this study, 250 raw chicken breast fillet samples (normal, mild, severe) were taken, and spatially heterogeneous hardness distribution was first considered when designing HSI processing models. The study not only classified the WB levels from HSI but also built a regression model to correlate the spectral information with sample hardness data. To achieve a satisfactory classification and regression model, a neural network architecture search (NAS) enabled a wide-deep neural network model named NAS-WD, which was developed. In NAS-WD, NAS was first used to automatically optimize the network architecture and hyperparameters. The classification results show that NAS-WD can classify the three WB levels with an overall accuracy of 95 %, outperforming the traditional machine learning model, and the regression correlation between the spectral data and hardness was 0.75, which performs significantly better than traditional regression models.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"14 ","pages":"Pages 73-85"},"PeriodicalIF":8.2,"publicationDate":"2024-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142699967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Utility-based regression and meta-learning techniques for modeling actual ET: Comparison to (METRIC-EEFLUX) model
Authors: Fatima K. Abu Salem, Sara Awad, Yasmine Hamdar, Samer Kharroubi, Hadi Jaafar
Artificial Intelligence in Agriculture, Volume 14, Pages 43-55. Published 2024-11-14. DOI: 10.1016/j.aiia.2024.11.001
Abstract: Estimating actual evapotranspiration (ETₐ) is crucial for water resource management, yet existing methods face limitations. Traditional approaches, including eddy covariance and remote sensing-based energy balance methods, often struggle with high costs, limited spatial and temporal coverage, and reduced predictive accuracy, particularly for classical empirical models. While machine learning has emerged as a promising alternative, it still presents challenges, notably in underestimating ETₐ during periods of high heat. We attribute this to insufficient learning on the rare but highly relevant ETₐ values of interest, or to the not-so-big climatic datasets available for use. In this manuscript, we demonstrate how few-shot, meta-learning models (MAML), which are specifically designed for enhanced generalizability on not-so-big datasets, can outperform basic machine learning models in upscaling ETₐ from two major in-situ tower networks, Ameriflux and Euroflux. Using limited remotely sensed land surface data from METRIC-EEFlux and limited climatic variables, we demonstrate that the chosen models can attain quantifiable utility within the utility-based-regression paradigm towards impactful practical considerations. Our initial explorations reveal that EEFlux ETₐ deviates significantly from in-situ observations measured at the Ameriflux and Euroflux towers (R² = 39%). In contrast, MAML approximates ETₐ better than the basic machine learning algorithms and EEFlux (R² = 71% on the entire testing dataset, R² = 0.88 on the Csa climate, R² = 0.79 on the Cfa climate, and R² = 0.78 on the CSH vegetation class), and continues to improve without overfitting even when exposed to a relatively small training dataset. Its high F2 score (96%) indicates that MAML has very high precision and recall for rare cases, which is significant for irrigation. Of independent interest, this study confirms that limited remotely sensed EEFlux products contribute significantly to knowledge about ground-truth ETₐ and can thus be of valuable use in settings where access to good-quality and high-volume data is compromised.
Title: Detectability of multi-dimensional movement and behaviour in cattle using sensor data and machine learning algorithms: Study on a Charolais bull
Authors: Miklós Biszkup, Gábor Vásárhelyi, Nuri Nurlaila Setiawan, Aliz Márton, Szilárd Szentes, Petra Balogh, Barbara Babay-Török, Gábor Pajor, Dóra Drexler
Artificial Intelligence in Agriculture, Volume 14, Pages 86-98. Published 2024-11-13. DOI: 10.1016/j.aiia.2024.11.002
Abstract: The development of motion sensors for monitoring cattle behaviour has enabled farmers to predict the state of their cattle's welfare more efficiently. While most studies work with a one-dimensional output of disjoint behaviour categories, more accurate prediction can be achieved by including complex movements and enriching the sensor algorithm to detect multi-dimensional movements, i.e., more than one movement occurring simultaneously. This paper presents such a machine-learning method for analysing overlapping independent movements. The output of the method consists of automatically recognized complex behaviour patterns that can be used for measuring animal welfare, predicting calving, or detecting early signs of diseases. The study combines camera observation with automated motion sensors for ruminants known as RumiWatch (a halter and a pedometer) mounted on a Charolais fattening bull. Fourteen types of complex movements were identified: defecating-urinating, eating, drinking, getting up, head movement, licking, lying down, lying, playing-aggression, rubbing, ruminating, sleeping, standing, and stepping. Because multiple parallel binary classifiers were used, the system was able to recognize parallel behavioural patterns with high fidelity. Two types of machine learning, Support Vector Classification (SVC) and Random Forest, were used to recognize different general and non-general forms of movement, and results from these two supervised learning systems were compared. A continuous forty-eight hours of video was annotated to train the systems and validate their predictions. The success rate of both classifiers in recognizing special movements from both sensors together or separately, and under different settings (i.e., window and padding), was examined. Although the two classifiers produced different results, the ideal settings showed that all forms of movement in the subject animal were successfully recognized with high accuracy. More studies using more individual animals and different ruminants would increase our knowledge of how to enhance the system's performance and accuracy.
Title: Estimating TYLCV resistance level using RGBD sensors in production greenhouse conditions
Authors: Dorin Shmaryahu, Rotem Lev Lehman, Ezri Peleg, Guy Shani
Artificial Intelligence in Agriculture, Volume 14, Pages 31-42. Published 2024-11-03. DOI: 10.1016/j.aiia.2024.10.004
Abstract: Automated phenotyping is the task of automatically measuring plant attributes to help farmers and breeders in developing and growing strong, robust plants. An automated tool for early illness detection can accelerate the process of identifying plant resistance and quickly pinpoint problematic breeding. Many such phenotyping tasks can be achieved by analyzing images from simple, low-cost RGB-D sensors. In this paper we focused on a particular case study: identifying the resistance level of tomato hybrids to the tomato yellow leaf curl virus (TYLCV) in production greenhouses. This is a difficult task, as separating between resistance levels based on images is difficult even for expert breeders. We collected a large dataset of images from an experiment containing many tomato hybrids with varying resistance levels. We used the depth information to identify the topmost part of the tomato plant. We then used deep learning models to classify the various resistance levels. For identifying plants with visual symptoms, our methods achieved an accuracy of 0.928, a precision of 0.934, and a recall of 0.95. In the multi-class case we achieved an accuracy of 0.76 in identifying the correct level, and an error of 0.278. Our methods are not particularly tailored for the specific task, and can be extended to other tasks that identify various plant diseases with visual symptoms such as ToBRFV, mildew, ToMV and others.
Title: Development of a cutting-edge ensemble pipeline for rapid and accurate diagnosis of plant leaf diseases
Authors: S.M. Nuruzzaman Nobel, Maharin Afroj, Md Mohsin Kabir, M.F. Mridha
Artificial Intelligence in Agriculture, Volume 14, Pages 56-72. Published 2024-11-01. DOI: 10.1016/j.aiia.2024.10.005
Abstract: Selecting techniques is a crucial aspect of disease detection analysis, particularly in the convergence of computer vision and agricultural technology. Timely and accurate crop disease detection is essential to maintaining global food security, and deep learning is a viable answer to this need. In this study, we developed and evaluated a disease detection model using a novel ensemble technique. We introduce DenseNetMini, a smaller version of DenseNet, and propose combining it with a learning resizer in an ensemble approach to enhance training accuracy and expedite learning. Another unique proposition involves utilizing Gradient Product (GP) as an optimization technique, effectively reducing training time and improving model performance. Examining images at different magnifications reveals noteworthy diagnostic agreement and accuracy improvements. Test accuracy rates of 99.65%, 98.96%, and 98.11% are seen on the PlantVillage, Tomato leaf, and AppleLeaf9 datasets, respectively. One of the research's main achievements is the significant decrease in processing time, which suggests that using GP could improve the accessibility and efficiency of disease detection in agriculture. Beyond quantitative successes, the study highlights Explainable Artificial Intelligence (XAI) methods, which are essential to improving the disease detection model's interpretability and transparency. XAI enhances the interpretability of the model by visually identifying critical areas on plant leaves for disease identification, which promotes confidence in and understanding of the model's functionality.
Title: A review of external quality inspection for fruit grading using CNN models
Authors: Luis E. Chuquimarca, Boris X. Vintimilla, Sergio A. Velastin
Artificial Intelligence in Agriculture, Volume 14, Pages 1-20. Published 2024-10-16. DOI: 10.1016/j.aiia.2024.10.002
Abstract: This article reviews the state of the art of recent CNN models used for external quality inspection of fruits, considering parameters such as color, shape, size, and defects, used to categorize fruits according to international marketing levels of agricultural products. The literature review considers the number of fruit images in different datasets, the type of images used by the CNN models, the performance results obtained by each CNN, the optimizers used to increase their accuracy, and the use of pre-trained CNN models for transfer learning. CNN models have used various types of images in the visible, infrared, hyperspectral, and multispectral bands. Furthermore, the fruit image datasets used are either real or synthetic. Finally, several tables summarize the articles reviewed, which are prioritized according to inspection parameters, facilitating a critical comparison of each work.
Title: Automatic location and recognition of horse freezing brand using rotational YOLOv5 deep learning network
Authors: Zhixin Hua, Yitao Jiao, Tianyu Zhang, Zheng Wang, Yuying Shang, Huaibo Song
Artificial Intelligence in Agriculture, Volume 14, Pages 21-30. Published 2024-10-10. DOI: 10.1016/j.aiia.2024.10.003
Abstract: Individual livestock identification is of great importance to precision livestock farming, and freeze branding with liquid nitrogen is an effective method for identifying individual horses. Along with various technological developments, deep-learning-based methods have been applied to such individual-marking recognition. In this research, a deep learning method for oriented horse brand location and recognition was proposed. First, Rotational YOLOv5 (R-YOLOv5) was adopted to locate the oriented horse brand; then cropped images of the brand area were used to train a YOLOv5 model for number recognition. In the first step, unlike classical detection methods, R-YOLOv5 introduced orientation into the YOLO framework by integrating Circle Smooth Label (CSL). In addition, Coordinate Attention (CA) was added to increase the network's attention to positional information. These improvements enhanced the accuracy of detecting oriented brands. In the second step, number recognition was treated as an object detection task because of the requirement for accurate recognition. Finally, the whole brand number was assembled from the positions of the individual detection boxes. The experiment results showed that R-YOLOv5 outperformed other rotating target detection algorithms, with an AP (average precision) of 95.6%, 17.4 G FLOPs, and a detection speed of 14.3 fps. For number recognition, the mAP (mean average precision) was 95.77%, the weight size was 13.71 MB, and the detection speed was 68.6 fps. The two-step method can accurately identify brand numbers against complex backgrounds, and it provides a stable and lightweight method for individual livestock identification.
{"title":"UAV-based field watermelon detection and counting using YOLOv8s with image panorama stitching and overlap partitioning","authors":"Liguo Jiang , Hanhui Jiang , Xudong Jing , Haojie Dang , Rui Li , Jinyong Chen , Yaqoob Majeed , Ramesh Sahni , Longsheng Fu","doi":"10.1016/j.aiia.2024.09.001","DOIUrl":"10.1016/j.aiia.2024.09.001","url":null,"abstract":"<div><p>Accurate watermelon yield estimation is crucial to the agricultural value chain, as it guides the allocation of agricultural resources as well as facilitates inventory and logistics planning. The conventional method of watermelon yield estimation relies heavily on manual labor, which is both time-consuming and labor-intensive. To address this, this work proposes an algorithmic pipeline that utilizes unmanned aerial vehicle (UAV) videos for detection and counting of watermelons. This pipeline uses You Only Look Once version 8 s (YOLOv8s) with panorama stitching and overlap partitioning, which facilitates the overall number estimation of watermelons in field. The watermelon detection model, based on YOLOv8s and obtained using transfer learning, achieved a detection accuracy of 99.20 %, demonstrating its potential for application in yield estimation. The panorama stitching and overlap partitioning based detection and counting method uses panoramic images as input and effectively mitigates the duplications compared with the video tracking based detection and counting method. The counting accuracy reached over 96.61 %, proving a promising application for yield estimation. The high accuracy demonstrates the feasibility of applying this method for overall yield estimation in large watermelon fields.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"13 ","pages":"Pages 117-127"},"PeriodicalIF":8.2,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2589721724000308/pdfft?md5=e51fdb350e08ba1871a8fe3fd59e2ca5&pid=1-s2.0-S2589721724000308-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142232004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Prediction of spatial heterogeneity in nutrient-limited sub-tropical maize yield: Implications for precision management in the eastern Indo-Gangetic Plains
Authors: Zia Uddin Ahmed, Timothy J. Krupnik, Jagadish Timsina, Saiful Islam, Khaled Hossain, A.S.M. Alanuzzaman Kurishi, Shah-Al Emran, M. Harun-Ar-Rashid, Andrew J. McDonald, Mahesh K. Gathala
Artificial Intelligence in Agriculture, Volume 13, Pages 100-116. Published 2024-09-01. DOI: 10.1016/j.aiia.2024.08.001
Abstract: Knowledge of the factors influencing nutrient-limited subtropical maize yield, and subsequent prediction, is crucial for effective nutrient management, maximizing profitability, ensuring food security, and promoting environmental sustainability. We analyzed data from nutrient omission plot trials (NOPTs) conducted in 324 farmers' fields across ten agroecological zones (AEZs) in the Eastern Indo-Gangetic Plains (EIGP) of Bangladesh to explain maize yield variability and identify variables controlling nutrient-limited yields. An additive main effect and multiplicative interaction (AMMI) model was used to explain maize yield variability with nutrient addition. Interpretable machine learning (ML) algorithms in automatic machine learning (AutoML) frameworks were subsequently used to predict the nutrient-limited yield relative to attainable yield (relative yield, RY) and to rank the variables that control RY. The stack-ensemble model was identified as the best-performing model for predicting the RYs of N, P, and Zn, whereas deep learning outperformed all base learners for predicting RY_K. The root mean square errors (RMSEs) of the best models were 0.122, 0.105, 0.123, and 0.104 for RY_N, RY_P, RY_K, and RY_Zn, respectively. The permutation-based feature importance technique identified soil pH as the most critical variable controlling RY_N and RY_P. RY_K was lower toward the east, and soil N and Zn were associated with RY_Zn. The predicted median RY of N, P, K, and Zn, representing average soil fertility, was 0.51, 0.84, 0.87, and 0.97, accounting for 44, 54, 54, and 48% of the upland dry-season crop area of Bangladesh, respectively. Efforts are needed to update databases cataloging variability in land type inundation classes, soil characteristics, and INS and combine them with farmers' crop management information to develop more precise nutrient guidelines for maize in the EIGP.