{"title":"Towards sustainable agriculture: Harnessing AI for global food security","authors":"Dhananjay K. Pandey , Richa Mishra","doi":"10.1016/j.aiia.2024.04.003","DOIUrl":"https://doi.org/10.1016/j.aiia.2024.04.003","url":null,"abstract":"<div><p>The issue of food security continues to be a prominent global concern, affecting a significant number of individuals who experience the adverse effects of hunger and malnutrition. Finding a solution to this intricate issue necessitates novel and paradigm-shifting methodologies in the agriculture and food sectors. In recent times, the domain of artificial intelligence (AI) has emerged as a potent tool capable of exerting a profound influence on the agriculture and food sectors. AI technologies provide significant advantages by optimizing crop cultivation practices, enabling the use of predictive modelling and precision agriculture techniques, and aiding efficient crop monitoring and disease identification. Additionally, AI has the potential to optimize supply chain operations, storage management, transportation systems, and quality assurance processes. It also tackles the problem of food loss and waste through post-harvest loss reduction, predictive analytics, and smart inventory management.
This study highlights how, by utilizing the power of AI, we could transform the way we produce, distribute, and manage food, ultimately creating a more secure and sustainable future for all.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"12 ","pages":"Pages 72-84"},"PeriodicalIF":0.0,"publicationDate":"2024-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2589721724000151/pdfft?md5=a9d0ed80991556893a392b3b0a4013c0&pid=1-s2.0-S2589721724000151-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140880413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep learning-based intelligent precise aeration strategy for factory recirculating aquaculture systems","authors":"Junchao Yang , Yuting Zhou , Zhiwei Guo , Yueming Zhou , Yu Shen","doi":"10.1016/j.aiia.2024.04.001","DOIUrl":"10.1016/j.aiia.2024.04.001","url":null,"abstract":"<div><p>The factory recirculating aquaculture system (RAS) is in a stage of continuous research and technological innovation. Intelligent aquaculture is an important direction for the future development of aquaculture. However, the RAS nowadays still has poor self-learning and optimal decision-making capabilities, which leads to high aquaculture costs and low running efficiency. In this paper, a precise aeration strategy based on deep learning is designed to improve the healthy growth of breeding objects. Firstly, situation perception driven by computer vision is used to detect hypoxia behavior. Then, a biological energy model is constructed to calculate the oxygen consumption of the breeding objects. Finally, the optimal adaptive aeration strategy is generated according to the hypoxia behavior judgement and the biological energy model. Experimental results show that the energy consumption of the proposed precise aeration strategy decreased by 26.3% compared with manual control and 12.8% compared with threshold control.
Meanwhile, stable water quality conditions accelerated the growth of the breeding objects, and the breeding cycle to an average weight of 400 g was shortened from 5–6 months to 3–4 months.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"12 ","pages":"Pages 57-71"},"PeriodicalIF":0.0,"publicationDate":"2024-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2589721724000138/pdfft?md5=35867104fdfd8d303cccc4a2f32568ae&pid=1-s2.0-S2589721724000138-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140768894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Grow-light smart monitoring system leveraging lightweight deep learning for plant disease classification","authors":"William Macdonald , Yuksel Asli Sari , Majid Pahlevani","doi":"10.1016/j.aiia.2024.03.003","DOIUrl":"https://doi.org/10.1016/j.aiia.2024.03.003","url":null,"abstract":"<div><p>This work focuses on a novel lightweight machine learning approach to the task of plant disease classification, posing as a core component of a larger grow-light smart monitoring system. To the best of our knowledge, this work is the first to implement lightweight convolutional neural network architectures leveraging down-scaled versions of inception blocks, residual connections, and dense residual connections applied without pre-training to the PlantVillage dataset. The novel contributions of this work include the proposal of a smart monitoring framework outline, responsible for the detection and classification of ailments via the devised lightweight networks, as well as interfacing with LED grow-light fixtures to optimize environmental parameters and lighting control for the growth of plants in a greenhouse system. The lightweight adaptation of dense residual connections achieved the best balance of minimizing model parameters and maximizing performance metrics, with accuracy, precision, recall, and F1-scores of 96.75%, 97.62%, 97.59%, and 97.58%, respectively, while consisting of only 228,479 model parameters.
These results are further compared against various full-scale state-of-the-art model architectures trained on the PlantVillage dataset; the proposed down-scaled lightweight models performed on par with, if not better than, many large-scale counterparts with drastically lower computational requirements.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"12 ","pages":"Pages 44-56"},"PeriodicalIF":0.0,"publicationDate":"2024-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2589721724000126/pdfft?md5=92380011c829045a5c9cecbd59eb4f0b&pid=1-s2.0-S2589721724000126-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140547142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep learning for broadleaf weed seedlings classification incorporating data variability and model flexibility across two contrasting environments","authors":"Lorenzo León , Cristóbal Campos , Juan Hirzel","doi":"10.1016/j.aiia.2024.03.002","DOIUrl":"10.1016/j.aiia.2024.03.002","url":null,"abstract":"<div><p>The increasing deployment of deep learning models for distinguishing weeds and crops has witnessed notable strides in agricultural scenarios. However, a conspicuous gap endures in the literature concerning the training and testing of models across disparate environmental conditions. Predominant methodologies either delineate a single dataset distribution into training, validation, and testing subsets or merge datasets from diverse conditions or distributions before their division into the subsets. Our study aims to ameliorate this gap by extending to several broadleaf weed categories across varied distributions, evaluating the impact of training convolutional neural networks on datasets specific to particular conditions or distributions, and assessing their performance in entirely distinct settings through three experiments. By evaluating diverse network architectures and training approaches (<em>finetuning</em> versus <em>feature extraction</em>) and amalgamating data, we devised straightforward guidelines to ensure the model's deployability in contrasting environments with sustained precision and accuracy.</p><p>In Experiment 1, conducted in a uniform environment, accuracy ranged from 80% to 100% across all models and training strategies, with <em>finetune</em> mode achieving a superior performance of 94% to 99.9% compared to the <em>feature extraction</em> mode at 80% to 92.96%. Experiment 2 underscored a significant performance decline, with accuracy figures between 25% and 60%, primarily around 40%, when the origin of the test data deviated from the training and validation sets.
Experiment 3, spotlighting dataset and distribution amalgamation, yielded promising accuracy metrics, ranging from a peak of 99.6% for ResNet in <em>finetuning</em> mode to a low of 69.9% for InceptionV3 in <em>feature extraction</em> mode. These findings emphasize that merging data from diverse distributions, coupled with <em>finetuned</em> training on advanced architectures like ResNet and MobileNet, markedly enhances performance, contrasting with the relatively lower performance exhibited by simpler networks like AlexNet. Our results suggest that embracing data diversity and flexible training methodologies is crucial for optimizing weed classification models when disparate data distributions are available. This study provides a practical approach for handling diverse datasets with real-world agricultural variance.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"12 ","pages":"Pages 29-43"},"PeriodicalIF":0.0,"publicationDate":"2024-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2589721724000059/pdfft?md5=d8051b8dea55cec53a6ba7889cbc0c03&pid=1-s2.0-S2589721724000059-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140283105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LeafSpotNet: A deep learning framework for detecting leaf spot disease in jasmine plants","authors":"Shwetha V, Arnav Bhagwat, Vijaya Laxmi","doi":"10.1016/j.aiia.2024.02.002","DOIUrl":"https://doi.org/10.1016/j.aiia.2024.02.002","url":null,"abstract":"<div><p>Leaf blight spot disease, caused by bacteria and fungi, poses a threat to plant health, leading to leaf discoloration and diminished agricultural yield. In response, we present a MobileNetV3-based classifier designed for the Jasmine plant, leveraging lightweight Convolutional Neural Networks (CNNs) to accurately identify disease stages. The model integrates depthwise convolution layers and max-pooling layers for enhanced feature extraction, focusing on crucial low-level features indicative of the disease. Through preprocessing techniques, including data augmentation with a Conditional GAN and Particle Swarm Optimization for feature selection, the classifier achieves robust performance. Evaluation on curated datasets demonstrates an outstanding 97% training accuracy, highlighting its efficacy. Real-world testing under diverse conditions, such as extreme camera angles and varied lighting, attests to the model's resilience, yielding test accuracies between 94% and 96%. The dataset's tailored design for CNN-based classification ensures result reliability. Importantly, the model's lightweight design, marked by fast computation time and reduced size, positions it as an efficient solution for real-time applications.
This comprehensive approach underscores the proposed classifier's significance in addressing leaf blight spot disease challenges in commercial crops.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"12 ","pages":"Pages 1-18"},"PeriodicalIF":0.0,"publicationDate":"2024-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2589721724000035/pdfft?md5=eeca9eda52b267f86b4fd11610c9f9fd&pid=1-s2.0-S2589721724000035-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140163319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A novel approach based on a modified mask R-CNN for the weight prediction of live pigs","authors":"Chuanqi Xie , Yuji Cang , Xizhong Lou , Hua Xiao , Xing Xu , Xiangjun Li , Weidong Zhou","doi":"10.1016/j.aiia.2024.03.001","DOIUrl":"https://doi.org/10.1016/j.aiia.2024.03.001","url":null,"abstract":"<div><p>Since determining the weight of pigs during large-scale breeding and production is challenging, using non-contact estimation methods is vital. This study proposed a novel pig weight prediction method based on a modified mask region-convolutional neural network (mask R-CNN). The modified approach used ResNeSt as the backbone feature extraction network to enhance the image feature extraction ability. The feature pyramid network (FPN) was added to the backbone feature extraction network for multi-scale feature fusion. The channel attention mechanism (CAM) and spatial attention mechanism (SAM) were introduced in the region proposal network (RPN) for the adaptive integration of local features and their global dependencies to capture global information, ultimately improving image segmentation accuracy. The modified network obtained a precision rate (P), recall rate (R), and mean average precision (MAP) of 90.33%, 89.85%, and 95.21%, respectively, effectively segmenting the pig regions in the images. Five image features, namely the back area (A), body length (L), body width (W), average depth (AD), and eccentricity (E), were investigated. The pig depth images were used to build five regression algorithms (ordinary least squares (OLS), AdaBoost, CatBoost, XGBoost, and random forest (RF)) for weight value prediction. AdaBoost achieved the best prediction result with a coefficient of determination (R<sup>2</sup>) of 0.987, a mean absolute error (MAE) of 2.96 kg, a mean square error (MSE) of 12.87 kg<sup>2</sup>, and a mean absolute percentage error (MAPE) of 8.45%. 
The results demonstrated that the machine learning models effectively predicted the weight values of the pigs, providing technical support for intelligent pig farm management.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"12 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2589721724000047/pdfft?md5=43c515f8d95da29c768ed4d67f22ebc0&pid=1-s2.0-S2589721724000047-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140163321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using an improved lightweight YOLOv8 model for real-time detection of multi-stage apple fruit in complex orchard environments","authors":"Baoling Ma , Zhixin Hua , Yuchen Wen , Hongxing Deng , Yongjie Zhao , Liuru Pu , Huaibo Song","doi":"10.1016/j.aiia.2024.02.001","DOIUrl":"https://doi.org/10.1016/j.aiia.2024.02.001","url":null,"abstract":"<div><p>To monitor apple fruits effectively throughout the entire growth period in smart orchards, a lightweight model named YOLOv8n-ShuffleNetv2-Ghost-SE was proposed. The ShuffleNetv2 basic modules and down-sampling modules were alternately connected, replacing the Backbone of the YOLOv8n model. The Ghost modules replaced the Conv modules, and the C2fGhost modules replaced the C2f modules in the Neck part of YOLOv8n. ShuffleNetv2 reduced the memory access cost through channel splitting operations. The Ghost module combined linear and non-linear convolutions to reduce the network computation cost. The Wise-IoU (WIoU) replaced the CIoU for calculating the bounding box regression loss, dynamically adjusting the anchor box quality threshold and gradient gain allocation strategy and optimizing the size and position of predicted bounding boxes. The Squeeze-and-Excitation (SE) module was embedded in the Backbone and Neck parts of YOLOv8n to enhance the representation ability of feature maps. The algorithm ensured high precision while maintaining a small model size and fast detection speed, which facilitated model migration and deployment. A dataset of 9652 images was used to validate the effectiveness of the model. The YOLOv8n-ShuffleNetv2-Ghost-SE model achieved a Precision of 94.1%, Recall of 82.6%, mean Average Precision of 91.4%, model size of 2.6 MB, parameters of 1.18 M, FLOPs of 3.9 G, and detection speed of 39.37 fps. The detection speed on the Jetson Xavier NX development board was 3.17 fps.
Comparisons with advanced models including Faster R-CNN, SSD, YOLOv5s, YOLOv7‑tiny, YOLOv8s, YOLOv8n, MobileNetv3_small-Faster, MobileNetv3_small-Ghost, ShuffleNetv2-Faster, ShuffleNetv2-Ghost, ShuffleNetv2-Ghost-CBAM, ShuffleNetv2-Ghost-ECA, and ShuffleNetv2-Ghost-CA demonstrated that the method achieved a smaller model size and faster detection speed. The research can provide a reference for the development of smart devices in apple orchards.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"11 ","pages":"Pages 70-82"},"PeriodicalIF":0.0,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2589721724000023/pdfft?md5=6fc303d1eb23f5151de28ee6f36c2d3d&pid=1-s2.0-S2589721724000023-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140031373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated quality inspection of baby corn using image processing and deep learning","authors":"Kris Wonggasem, Pongsan Chakranon, Papis Wongchaisuwat","doi":"10.1016/j.aiia.2024.01.001","DOIUrl":"10.1016/j.aiia.2024.01.001","url":null,"abstract":"<div><p>The food industry typically relies heavily on manual operations requiring high proficiency and skill. In the quality inspection process, a baby corn with black marks or blemishes is considered a defect or unqualified and should be discarded. Quality inspection and sorting of agricultural products like baby corn are labor-intensive and time-consuming. The main goal of this work is to develop an automated quality inspection framework to differentiate between ‘pass’ and ‘fail’ categories based on baby corn images. A traditional image processing method using a threshold principle is compared with relatively more advanced deep learning models. In particular, convolutional neural networks (CNNs), a specific sub-type of deep learning model, were implemented. Thorough experiments on choices of network architectures and their hyperparameters were conducted and compared. A Shapley additive explanations (SHAP) framework was further utilized for network interpretation purposes. The EfficientNetB5 networks with relatively larger input sizes yielded up to 99.06% accuracy as the best performance, against 95.28% obtained from traditional image processing. Incorporating region-of-interest identification, several model experiments, data application on baby corn images, and the SHAP framework are our main contributions.
Our proposed quality inspection system to automatically differentiate baby corn images provides a potential pipeline to further support the agricultural production process.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"11 ","pages":"Pages 61-69"},"PeriodicalIF":0.0,"publicationDate":"2024-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2589721724000011/pdfft?md5=7f516ee421a879bd329ecdddca0cde40&pid=1-s2.0-S2589721724000011-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139634663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhanced detection algorithm for apple bruises using structured light imaging","authors":"Haojie Zhu , Lingling Yang , Yu Wang , Yuwei Wang , Wenhui Hou , Yuan Rao , Lu Liu","doi":"10.1016/j.aiia.2023.12.001","DOIUrl":"https://doi.org/10.1016/j.aiia.2023.12.001","url":null,"abstract":"<div><p>Bruising reduces the edibility and marketability of fresh apples, inevitably causing economic losses for the apple industry. However, bruises lack obvious visual symptoms, which makes it challenging to detect them using imaging techniques with uniform or diffuse illumination. This study employed the structured light imaging (SLI) technique to detect apple bruises. First, grayscale reflectance images were captured under phase-shifted sinusoidal illumination at three different wavelengths (600, 650, and 700 nm) and six different spatial frequencies (0.05, 0.10, 0.15, 0.20, 0.25, and 0.30 cycles mm<sup>−1</sup>). Next, the grayscale reflectance images were demodulated to produce direct component (DC) images representing uniform diffuse illumination and amplitude component (AC) images revealing bruises. Then, by quantifying the contrast between bruised regions and sound regions in all AC images, it was found that bruises exhibited the optimal contrast under sinusoidal illumination at a wavelength of 700 nm and a spatial frequency of 0.25 cycles mm<sup>−1</sup>. In the AC image with optimal contrast, an <em>h</em>-domes segmentation algorithm was developed to accurately segment the location and extent of the bruised regions. Moreover, the algorithm successfully segmented central bruised regions while addressing the challenge of segmenting edge bruised regions complicated by vignetting. The average Intersection over Union (IoU) values for the three types of bruises were 0.9422, 0.9231, and 0.9183, respectively.
This result demonstrated that the combination of SLI and the <em>h</em>-domes segmentation algorithm was a viable approach for the effective detection of fresh apple bruises.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"11 ","pages":"Pages 50-60"},"PeriodicalIF":0.0,"publicationDate":"2023-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2589721723000508/pdfft?md5=4b5f4f71fba5824f27f3f6fb52807dae&pid=1-s2.0-S2589721723000508-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138769651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Image classification of lotus in Nong Han Chaloem Phrakiat Lotus Park using convolutional neural networks","authors":"Thanawat Phattaraworamet , Sawinee Sangsuriyun , Phoempol Kutchomsri , Susama Chokphoemphun","doi":"10.1016/j.aiia.2023.12.003","DOIUrl":"https://doi.org/10.1016/j.aiia.2023.12.003","url":null,"abstract":"<div><p>The Nong Han Chaloem Phrakiat Lotus Park is a tourist attraction and a source of learning regarding lotus plants. However, as a training area, it lacks appeal and learning motivation due to its conventional presentation of information regarding lotus plants. The current study introduced the concept of smart learning in this setting to increase interest and motivation for learning. Convolutional neural networks (CNNs) were used for the classification of lotus plant species, for use in the development of a mobile application to display details about each species. The scope of the study was to classify 11 species of lotus plants using the proposed CNN model based on different techniques (augmentation, dropout, and L2 regularization) and hyperparameters (dropout rate and epoch number). The expected outcome was to obtain a high-performance CNN model with fewer total parameters than three pre-trained CNN models (Inception V3, VGG16, and VGG19) used as benchmarks. The performance of the model was presented in terms of accuracy, F1-score, precision, and recall values. The results showed that the CNN model with the augmentation, dropout, and L2 techniques, at a dropout value of 0.4 and an epoch number of 30, provided the highest testing accuracy of 0.9954. The best proposed model was more accurate than the pre-trained CNN models, especially Inception V3. In addition, the number of total parameters was reduced by approximately 1.80–2.19 times.
These findings demonstrated that the proposed model with a small number of total parameters had a satisfactory degree of classification accuracy.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"11 ","pages":"Pages 23-33"},"PeriodicalIF":0.0,"publicationDate":"2023-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2589721723000491/pdfft?md5=d74952e474880b11ee67566302a088f6&pid=1-s2.0-S2589721723000491-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138656373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}