Artificial Intelligence in Agriculture: Latest Articles

Deep learning methods for biotic and abiotic stresses detection and classification in fruits and vegetables: State of the art and perspectives
Artificial Intelligence in Agriculture Pub Date: 2023-09-01 DOI: 10.1016/j.aiia.2023.08.001
Sèton Calmette Ariane Houetohossou, Vinasetan Ratheil Houndji, Castro Gbêmêmali Hounmenou, Rachidatou Sikirou, Romain Lucas Glele Kakaï
{"title":"Deep learning methods for biotic and abiotic stresses detection and classification in fruits and vegetables: State of the art and perspectives","authors":"Sèton Calmette Ariane Houetohossou ,&nbsp;Vinasetan Ratheil Houndji ,&nbsp;Castro Gbêmêmali Hounmenou ,&nbsp;Rachidatou Sikirou ,&nbsp;Romain Lucas Glele Kakaï","doi":"10.1016/j.aiia.2023.08.001","DOIUrl":"10.1016/j.aiia.2023.08.001","url":null,"abstract":"<div><p>Deep Learning (DL), a type of Machine Learning, has gained significant interest in many fields, including agriculture. This paper aims to shed light on deep learning techniques used in agriculture for abiotic and biotic stress detection in fruits and vegetables, their benefits, and the challenges faced by users. Scientific papers were collected from Web of Science, Scopus, Google Scholar, Springer, and Directory of Open Access Journals (DOAJ) using combinations of specific keywords such as:’Deep Learning’ OR’Artificial Intelligence’ in combination with fruit disease’, vegetable disease’, ‘fruit stress', OR ‘vegetable stress' following PRISMA guidelines. From the initial 818 papers identified using the keywords, 132 were reviewed after excluding books, reviews, and the irrelevant. The recovered scientific papers were from 2003 to 2022; 93 % addressed biotic stress on fruits and vegetables. The most common biotic stresses on species are fungal diseases (grey spots, brown spots, black spots, downy mildew, powdery mildew, and anthracnose). Few studies were interested in abiotic stresses (nutrient deficiency, water stress, light intensity, and heavy metal contamination). Deep Learning and Convolutional Neural Networks were the most used keywords, with GoogleNet (18.28%), ResNet50 (16.67%), and VGG16 (16.67%) as the most used architectures. Fifty-two percent of the data used to compile these models come from the fields, followed by data obtained online. Precision problems due to unbalanced classes and the small size of some databases were also analyzed. We provided the research gaps and some perspectives from the reviewed papers. Further research works are required for a deep understanding of the use of machine learning techniques in fruit and vegetable studies: collection of large datasets according to different scenarios on fruit and vegetable diseases, evaluation of the effect of climatic variability on the fruit and vegetable yield using AI methods and more abiotic stress studies.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"9 ","pages":"Pages 46-60"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46218612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
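The review reports GoogleNet, ResNet50, and VGG16 as the most frequently used architectures. As a hedged illustration of the typical transfer-learning setup behind such studies (not code from any reviewed paper), the sketch below fine-tunes a pretrained ResNet50 on a hypothetical fruit/vegetable stress dataset arranged one folder per class; the dataset path and training loop are assumptions.

```python
# Illustrative transfer-learning sketch (PyTorch/torchvision), not from the reviewed papers.
# Assumes an ImageFolder-style dataset at data/stress/train with one sub-folder per stress class.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("data/stress/train", transform=tfm)  # hypothetical path
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))  # new head for the stress classes
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:          # one pass shown; real studies train for many epochs
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```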
Low-cost livestock sorting information management system based on deep learning
Artificial Intelligence in Agriculture Pub Date: 2023-09-01 DOI: 10.1016/j.aiia.2023.08.007
Yuanzhi Pan, Yuzhen Zhang, Xiaoping Wang, Xiang Xiang Gao, Zhongyu Hou
{"title":"Low-cost livestock sorting information management system based on deep learning","authors":"Yuanzhi Pan ,&nbsp;Yuzhen Zhang ,&nbsp;Xiaoping Wang ,&nbsp;Xiang Xiang Gao ,&nbsp;Zhongyu Hou","doi":"10.1016/j.aiia.2023.08.007","DOIUrl":"10.1016/j.aiia.2023.08.007","url":null,"abstract":"<div><p>Modern pig farming leaves much to be desired in terms of efficiency, as these systems rely mainly on electromechanical controls and can only categorize pigs according to their weight. This method is not only inefficient but also escalates labor expenses and heightens the threat of zoonotic diseases. Furthermore, confining pigs in large groups can exacerbate the spread of infections and complicate the monitoring and care of ill pigs. This research executed an experiment to construct a deep-learning sorting mechanism, leveraging a dataset infused with pivotal metrics and breeding imagery gathered over 24 months. This research integrated a Kalman filter-based algorithm to augment the precision of the dynamic sorting operation. This research experiment unveiled a pioneering machine vision sorting system powered by deep learning, adept at handling live imagery for multifaceted recognition objectives. The Individual recognition model based on Residual Neural Network (ResNet) monitors livestock weight for sustained data forecasting, whereas the Wasserstein Generative Adversarial Nets (WGAN) image enhancement algorithm bolsters recognition in distinct settings, fortifying the model's resilience. Notably, system can pinpoint livestock exhibiting signs of potential illness via irregular body appearances and isolate them for safety. Experimental outcomes validate the superiority of this proposed system over traditional counterparts. It not only minimizes manual interventions and data upkeep expenses but also heightens the accuracy of livestock identification and optimizes data usage. This findings reflect an 89% success rate in livestock ID recognition, a 32% surge in obscured image recognition, a 95% leap in livestock categorization accuracy, and a remarkable 98% success rate in discerning images of unwell pigs. In essence, this research augments identification efficiency, curtails operational expenses, and provides enhanced tools for disease monitoring.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"9 ","pages":"Pages 110-126"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48035348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
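The paper couples its ResNet-based recognition model with a Kalman filter to stabilize the dynamic sorting signal. The filter design is not given in the abstract, so the sketch below is a generic one-dimensional Kalman filter for smoothing a noisy per-animal weight stream; the noise parameters and sample readings are purely illustrative.

```python
# Generic 1-D Kalman filter for smoothing noisy weight readings; parameters are illustrative,
# not the paper's. State: estimated weight (kg); measurements: per-frame weight estimates.
class WeightKalman:
    def __init__(self, initial_weight, process_var=0.05, measurement_var=4.0):
        self.x = initial_weight      # state estimate (kg)
        self.p = 1.0                 # estimate variance
        self.q = process_var         # process noise (weight drift between frames)
        self.r = measurement_var     # measurement noise (vision-based estimate error)

    def update(self, z):
        # Predict: weight assumed roughly constant between frames, variance grows by q.
        self.p += self.q
        # Update: blend prediction with the new measurement z using the Kalman gain.
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x

# Example: smooth a noisy sequence of weight estimates for one pig.
kf = WeightKalman(initial_weight=92.0)
for z in [93.1, 90.7, 94.2, 91.8, 92.5]:
    smoothed = kf.update(z)
print(round(smoothed, 2))
```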
Corn kernel classification from few training samples
Artificial Intelligence in Agriculture Pub Date: 2023-09-01 DOI: 10.1016/j.aiia.2023.08.006
Patricia L. Suárez, Henry O. Velesaca, Dario Carpio, Angel D. Sappa
{"title":"Corn kernel classification from few training samples","authors":"Patricia L. Suárez ,&nbsp;Henry O. Velesaca ,&nbsp;Dario Carpio ,&nbsp;Angel D. Sappa","doi":"10.1016/j.aiia.2023.08.006","DOIUrl":"10.1016/j.aiia.2023.08.006","url":null,"abstract":"<div><p>This article presents an efficient approach to classify a set of corn kernels in contact, which may contain good, or defective kernels along with impurities. The proposed approach consists of two stages, the first one is a next-generation segmentation network, trained by using a set of synthesized images that is applied to divide the given image into a set of individual instances. An ad-hoc lightweight CNN architecture is then proposed to classify each instance into one of three categories (ie good, defective, and impurities). The segmentation network is trained using a strategy that avoids the time-consuming and human-error-prone task of manual data annotation. Regarding the classification stage, the proposed ad-hoc network is designed with only a few sets of layers to result in a lightweight architecture capable of being used in integrated solutions. Experimental results and comparisons with previous approaches showing both the improvement in accuracy and the reduction in time are provided. Finally, the segmentation and classification approach proposed can be easily adapted for use with other cereal types.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"9 ","pages":"Pages 89-99"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43946230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
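The classification stage relies on a small, ad-hoc CNN whose exact layers are not specified in the abstract. The sketch below is only a plausible stand-in: a lightweight PyTorch network that maps a cropped kernel instance to one of the three classes; the 64x64 input size and layer widths are assumptions.

```python
# Hypothetical lightweight 3-class kernel classifier (good / defective / impurity).
# Layer sizes and the 64x64 input are assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class KernelNet(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

# One cropped kernel instance, 64x64 RGB.
logits = KernelNet()(torch.randn(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 3])
```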
Estimation of morphological traits of foliage and effective plant spacing in NFT-based aquaponics system
Artificial Intelligence in Agriculture Pub Date: 2023-09-01 DOI: 10.1016/j.aiia.2023.08.004
R. Abbasi, P. Martinez, R. Ahmad
{"title":"Estimation of morphological traits of foliage and effective plant spacing in NFT-based aquaponics system","authors":"R. Abbasi ,&nbsp;P. Martinez ,&nbsp;R. Ahmad","doi":"10.1016/j.aiia.2023.08.004","DOIUrl":"10.1016/j.aiia.2023.08.004","url":null,"abstract":"<div><p>Deep learning and computer vision techniques have gained significant attention in the agriculture sector due to their non-destructive and contactless features. These techniques are also being integrated into modern farming systems, such as aquaponics, to address the challenges hindering its commercialization and large-scale implementation. Aquaponics is a farming technology that combines a recirculating aquaculture system and soilless hydroponics agriculture, that promises to address food security issues. To complement the current research efforts, a methodology is proposed to automatically measure the morphological traits of crops such as width, length and area and estimate the effective plant spacing between grow channels. Plant spacing is one of the key design parameters that are dependent on crop type and its morphological traits and hence needs to be monitored to ensure high crop yield and quality which can be impacted due to foliage occlusion or overlapping as the crop grows. The proposed approach uses Mask-RCNN to estimate the size of the crops and a mathematical model to determine plant spacing for a self-adaptive aquaponics farm. For common little <em>gem</em> romaine lettuce, the growth is estimated within 2 cm of error for both length and width. The final model is deployed on a cloud-based application and integrated with an ontology model containing domain knowledge of the aquaponics system. The relevant knowledge about crop characteristics and optimal plant spacing is extracted from ontology and compared with results obtained from the final model to suggest further actions. The proposed application finds its significance as a decision support system that can pave the way for intelligent system monitoring and control.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"9 ","pages":"Pages 76-88"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45096541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
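The abstract describes a Mask-RCNN stage that measures foliage width and length and a mathematical model that converts them into an effective plant spacing, but the model itself is not spelled out. The sketch below shows one simple form such a rule could take: spacing derived from the widest measured plant plus a clearance margin. The pixel-to-centimetre factor, the 10% margin, and both helper functions are hypothetical, not the paper's model.

```python
# Hypothetical spacing rule: the formula and the 10% clearance margin are assumptions,
# not the paper's mathematical model.
def foliage_width_cm(mask_width_px, cm_per_px):
    """Convert a Mask-RCNN bounding width in pixels to centimetres using a known camera scale."""
    return mask_width_px * cm_per_px

def effective_spacing_cm(foliage_widths_cm, clearance=0.10):
    """Centre-to-centre spacing that keeps neighbouring plants from overlapping,
    based on the widest plant currently in the grow channel."""
    widest = max(foliage_widths_cm)
    return widest * (1.0 + clearance)

# Example: three lettuce heads measured at 14.2, 15.8 and 13.5 cm across.
widths = [foliage_width_cm(w_px, cm_per_px=0.05) for w_px in [284, 316, 270]]
print(round(effective_spacing_cm(widths), 1))  # ~17.4 cm between plant centres
```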
Detecting broiler chickens on litter floor with the YOLOv5-CBAM deep learning model
Artificial Intelligence in Agriculture Pub Date: 2023-09-01 DOI: 10.1016/j.aiia.2023.08.002
Yangyang Guo, Samuel E. Aggrey, Xiao Yang, Adelumola Oladeinde, Yongliang Qiao, Lilong Chai
{"title":"Detecting broiler chickens on litter floor with the YOLOv5-CBAM deep learning model","authors":"Yangyang Guo ,&nbsp;Samuel E. Aggrey ,&nbsp;Xiao Yang ,&nbsp;Adelumola Oladeinde ,&nbsp;Yongliang Qiao ,&nbsp;Lilong Chai","doi":"10.1016/j.aiia.2023.08.002","DOIUrl":"10.1016/j.aiia.2023.08.002","url":null,"abstract":"<div><p>For commercial broiler production, about 20,000–30,000 birds are raised in each confined house, which has caused growing public concerns on animal welfare. Currently, daily evaluation of broiler wellbeing and growth is conducted manually, which is labor-intensive and subjectively subject to human error. Therefore, there is a need for an automatic tool to detect and analyze the behaviors of chickens and predict their welfare status. In this study, we developed a YOLOv5-CBAM-broiler model and tested its performance for detecting broilers on litter floor. The proposed model consisted of two parts: (1) basic YOLOv5 model for bird or broiler feature extraction and object detection; and (2) the convolutional block attention module (CBAM) to improve the feature extraction capability of the network and the problem of missed detection of occluded targets and small targets. A complex dataset of broiler chicken images at different ages, multiple pens and scenes (fresh litter versus reused litter) was constructed to evaluate the effectiveness of the new model. In addition, the model was compared to the Faster R-CNN, SSD, YOLOv3, EfficientDet and YOLOv5 models. The results demonstrate that the precision, recall, F1 score and an [email protected] of the proposed method were 97.3%, 92.3%, 94.7%, and 96.5%, which were superior to the comparison models. In addition, comparing the detection effects in different scenes, the YOLOv5-CBAM model was still better than the comparison method. Overall, the proposed YOLOv5-CBAM-broiler model can achieve real-time accurate and fast target detection and provide technical support for the management and monitoring of birds in commercial broiler houses.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"9 ","pages":"Pages 36-45"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46430631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
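CBAM (Convolutional Block Attention Module) applies channel attention followed by spatial attention to a feature map. The sketch below is a standard PyTorch rendition of that module, shown only to illustrate the component the authors attach to YOLOv5; it is not their integration code, and the 256-channel example feature map is an assumption.

```python
# Standard CBAM block (channel attention followed by spatial attention), illustrative only.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        # Channel attention: shared MLP over average- and max-pooled channel descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Spatial attention: 7x7 conv over the channel-wise average and max maps.
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2, bias=False)

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)                      # channel attention
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))            # spatial attention

feat = torch.randn(1, 256, 20, 20)        # e.g. a detector neck feature map
print(CBAM(256)(feat).shape)              # torch.Size([1, 256, 20, 20])
```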
Machine learning in nutrient management: A review
Artificial Intelligence in Agriculture Pub Date: 2023-09-01 DOI: 10.1016/j.aiia.2023.06.001
Oumnia Ennaji, Leonardus Vergütz, Achraf El Allali
{"title":"Machine learning in nutrient management: A review","authors":"Oumnia Ennaji ,&nbsp;Leonardus Vergütz ,&nbsp;Achraf El Allali","doi":"10.1016/j.aiia.2023.06.001","DOIUrl":"10.1016/j.aiia.2023.06.001","url":null,"abstract":"<div><p>In agriculture, precise fertilization and effective nutrient management are critical. Machine learning (ML) has recently been increasingly used to develop decision support tools for modern agricultural systems, including nutrient management, to improve yields while reducing expenses and environmental impact. ML based systems require huge amounts of data from different platforms to handle non-linear tasks and build predictive models that can improve agricultural productivity. This study reviews machine learning based techniques for estimating fertilizer and nutrient status that have been developed in the last decade. A thorough investigation of detection and classification approaches was conducted, which served as the basis for a detailed assessment of the key challenges that remain to be addressed. The research findings suggest that rapid improvements in machine learning and sensor technology can provide cost-effective and thorough nutrient assessment and decision-making solutions. Future research directions are also recommended to improve the practical application of this technology.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"9 ","pages":"Pages 1-11"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44445839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
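As a hedged illustration of the class of detection/classification pipelines this review surveys (and not any specific system from it), the sketch below trains a random-forest classifier to label nutrient status from a few soil and leaf features; the feature names, thresholds, and synthetic data are all invented for the example.

```python
# Illustrative nutrient-status classifier on synthetic data; features, labels and
# the labelling rule are invented for the example, not drawn from the reviewed studies.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Hypothetical features: soil N (mg/kg), soil P (mg/kg), leaf chlorophyll index.
X = rng.uniform([10, 5, 20], [80, 60, 70], size=(500, 3))
y = (X[:, 0] < 30).astype(int)   # toy rule: label "deficient" when soil N is low

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```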
CactiViT: Image-based smartphone application and transformer network for diagnosis of cactus cochineal
Artificial Intelligence in Agriculture Pub Date: 2023-09-01 DOI: 10.1016/j.aiia.2023.07.002
Anas Berka, Adel Hafiane, Youssef Es-Saady, Mohamed El Hajji, Raphaël Canals, Rachid Bouharroud
{"title":"CactiViT: Image-based smartphone application and transformer network for diagnosis of cactus cochineal","authors":"Anas Berka ,&nbsp;Adel Hafiane ,&nbsp;Youssef Es-Saady ,&nbsp;Mohamed El Hajji ,&nbsp;Raphaël Canals ,&nbsp;Rachid Bouharroud","doi":"10.1016/j.aiia.2023.07.002","DOIUrl":"10.1016/j.aiia.2023.07.002","url":null,"abstract":"<div><p>The cactus is a plant that grows in many rural areas, widely used as a hedge, and has multiple benefits through the manufacture of various cosmetics and other products. However, this crop has been suffering for some time from the attack of the carmine scale <em>Dactylopius opuntia</em> (Hemiptera: Dactylopiidae). The infestation can spread rapidly if not treated in the early stage. Current solutions consist of regular field checks by the naked eyes carried out by experts. The major difficulty is the lack of experts to check all fields, especially in remote areas. In addition, this requires time and resources. Hence the need for a system that can categorize the health level of cacti remotely. To date, deep learning models used to categorize plant diseases from images have not addressed the mealy bug infestation of cacti because computer vision has not sufficiently addressed this disease. Since there is no public dataset and smartphones are commonly used as tools to take pictures, it might then be conceivable for farmers to use them to categorize the infection level of their crops. In this work, we developed a system called CactiVIT that instantly determines the health status of cacti using the Visual image Transformer (ViT) model. We also provided a new image dataset of cochineal infested cacti.<span><sup>1</sup></span> Finally, we developed a mobile application that delivers the classification results directly to farmers about the infestation in their fields by showing the probabilities related to each class. This study compares the existing models on the new dataset and presents the results obtained. The VIT-B-16 model reveals an approved performance in the literature and in our experiments, in which it achieved 88.73% overall accuracy with an average of +2.61% compared to other convolutional neural network (CNN) models that we evaluated under similar conditions.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"9 ","pages":"Pages 12-21"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43334508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
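The classifier is a ViT-B-16 vision transformer fine-tuned on smartphone images of cacti. The sketch below shows one plausible fine-tuning setup using torchvision's pretrained ViT-B/16 with a replaced classification head; the three infestation-level classes and the frozen-encoder choice are assumptions, since the abstract does not specify them.

```python
# Hedged ViT-B/16 fine-tuning sketch (torchvision >= 0.13); the 3 infestation levels
# are an assumed class count, not taken from the paper.
import torch
import torch.nn as nn
from torchvision import models

num_levels = 3  # assumption: healthy / moderately infested / heavily infested
model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
model.heads.head = nn.Linear(model.heads.head.in_features, num_levels)  # new classification head

# Freeze the transformer encoder and train only the head (one common fine-tuning choice).
for p in model.encoder.parameters():
    p.requires_grad = False

x = torch.randn(1, 3, 224, 224)          # a smartphone photo resized to 224x224
print(model(x).shape)                    # torch.Size([1, 3])
```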
Rice disease identification method based on improved CNN-BiGRU
Artificial Intelligence in Agriculture Pub Date: 2023-09-01 DOI: 10.1016/j.aiia.2023.08.005
Yang Lu, Xiaoxiao Wu, Pengfei Liu, Hang Li, Wanting Liu
{"title":"Rice disease identification method based on improved CNN-BiGRU","authors":"Yang Lu ,&nbsp;Xiaoxiao Wu ,&nbsp;Pengfei Liu ,&nbsp;Hang Li ,&nbsp;Wanting Liu","doi":"10.1016/j.aiia.2023.08.005","DOIUrl":"10.1016/j.aiia.2023.08.005","url":null,"abstract":"<div><p>In the field of precision agriculture, diagnosing rice diseases from images remains challenging due to high error rates, multiple influencing factors, and unstable conditions. While machine learning and convolutional neural networks have shown promising results in identifying rice diseases, they were limited in their ability to explain the relationships among disease features. In this study, we proposed an improved rice disease classification method that combines a convolutional neural network (CNN) with a bidirectional gated recurrent unit (BiGRU). Specifically, we introduced a residual mechanism into the Inception module, expanded the module's depth, and integrated an improved Convolutional Block Attention Module (CBAM). We trained and tested the improved CNN and BiGRU, concatenated the outputs of the CNN and BiGRU modules, and passed them to the classification layer for recognition. Our experiments demonstrate that this approach achieves an accuracy of 98.21% in identifying four types of rice diseases, providing a reliable method for rice disease recognition research.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"9 ","pages":"Pages 100-109"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46834924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
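The method concatenates CNN features with the output of a bidirectional GRU before the classification layer. The sketch below shows that fusion pattern in minimal form; the layer sizes, the way the CNN feature map is reshaped into a sequence, and the four-class head mirror the abstract only loosely and should be read as assumptions rather than the authors' architecture.

```python
# Minimal CNN + BiGRU fusion sketch; dimensions are illustrative, not the paper's.
import torch
import torch.nn as nn

class CNNBiGRU(nn.Module):
    def __init__(self, num_classes=4):                      # four rice disease types
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d((8, 8)),
        )
        self.bigru = nn.GRU(input_size=64, hidden_size=64,
                            batch_first=True, bidirectional=True)
        # CNN branch (64) concatenated with the final BiGRU state (2 * 64).
        self.fc = nn.Linear(64 + 128, num_classes)

    def forward(self, x):
        f = self.cnn(x)                                      # (B, 64, 8, 8)
        seq = f.flatten(2).transpose(1, 2)                   # (B, 64 positions, 64 channels)
        _, h = self.bigru(seq)                               # h: (2, B, 64)
        gru_feat = torch.cat([h[0], h[1]], dim=1)            # (B, 128)
        cnn_feat = f.mean(dim=(2, 3))                        # (B, 64) globally pooled CNN features
        return self.fc(torch.cat([cnn_feat, gru_feat], dim=1))

print(CNNBiGRU()(torch.randn(2, 3, 224, 224)).shape)         # torch.Size([2, 4])
```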
Lightweight convolutional neural network models for semantic segmentation of in-field cotton bolls
Artificial Intelligence in Agriculture Pub Date: 2023-06-01 DOI: 10.1016/j.aiia.2023.03.001
Naseeb Singh, V.K. Tewari, P.K. Biswas, L.K. Dhruw
{"title":"Lightweight convolutional neural network models for semantic segmentation of in-field cotton bolls","authors":"Naseeb Singh ,&nbsp;V.K. Tewari ,&nbsp;P.K. Biswas ,&nbsp;L.K. Dhruw","doi":"10.1016/j.aiia.2023.03.001","DOIUrl":"https://doi.org/10.1016/j.aiia.2023.03.001","url":null,"abstract":"<div><p>Robotic harvesting of cotton bolls will incorporate the benefits of manual picking as well as mechanical harvesting. For robotic harvesting, in-field cotton segmentation with minimal errors is desirable which is a challenging task. In the present study, three lightweight fully convolutional neural network models were developed for the semantic segmentation of in-field cotton bolls. Model 1 does not include any residual or skip connections, while model 2 consists of residual connections to tackle the vanishing gradient problem and skip connections for feature concatenation. Model 3 along with residual and skip connections, consists of filters of multiple sizes. The effects of filter size and the dropout rate were studied. All proposed models segment the cotton bolls successfully with the cotton-IoU (intersection-over-union) value of above 88.0%. The highest cotton-IoU of 91.03% was achieved by model 2. The proposed models achieved F1-score and pixel accuracy values greater than 95.0% and 98.0%, respectively. The developed models were compared with existing state-of-the-art networks namely VGG19, ResNet18, EfficientNet-B1, and InceptionV3. Despite having a limited number of trainable parameters, the proposed models achieved mean-IoU (mean intersection-over-union) of 93.84%, 94.15%, and 94.65% against the mean-IoU values of 95.39%, 96.54%, 96.40%, and 96.37% obtained using state-of-the-art networks. The segmentation time for the developed models was reduced up to 52.0% compared to state-of-the-art networks. The developed lightweight models segmented the in-field cotton bolls comparatively faster and with greater accuracy. Hence, developed models can be deployed to cotton harvesting robots for real-time recognition of in-field cotton bolls for harvesting.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"8 ","pages":"Pages 1-19"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50193228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
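The models are evaluated with class-wise IoU, mean-IoU, F1 score, and pixel accuracy. The sketch below computes per-class IoU and mean-IoU from a predicted and a ground-truth label mask using the standard definitions behind the figures quoted above; the two-class (background, cotton) setup and the toy masks are assumptions for illustration.

```python
# Standard IoU / mean-IoU computation for a two-class (background, cotton) label mask.
import numpy as np

def class_iou(pred, target, cls):
    """IoU for one class: |intersection| / |union| of the binary masks."""
    p, t = (pred == cls), (target == cls)
    union = np.logical_or(p, t).sum()
    return np.logical_and(p, t).sum() / union if union else float("nan")

def mean_iou(pred, target, num_classes=2):
    ious = [class_iou(pred, target, c) for c in range(num_classes)]
    return np.nanmean(ious), ious

# Toy 4x4 example: 1 = cotton boll pixels, 0 = background.
gt   = np.array([[0, 0, 1, 1], [0, 1, 1, 1], [0, 0, 1, 0], [0, 0, 0, 0]])
pred = np.array([[0, 0, 1, 1], [0, 1, 1, 0], [0, 0, 1, 0], [0, 0, 0, 0]])
miou, per_class = mean_iou(pred, gt)
print(per_class, round(miou, 3))
```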
Leguminous seeds detection based on convolutional neural networks: Comparison of Faster R-CNN and YOLOv4 on a small custom dataset
Artificial Intelligence in Agriculture Pub Date: 2023-06-01 DOI: 10.1016/j.aiia.2023.03.002
Noran S. Ouf
{"title":"Leguminous seeds detection based on convolutional neural networks: Comparison of Faster R-CNN and YOLOv4 on a small custom dataset","authors":"Noran S. Ouf","doi":"10.1016/j.aiia.2023.03.002","DOIUrl":"10.1016/j.aiia.2023.03.002","url":null,"abstract":"<div><p>This paper help with leguminous seeds detection and smart farming. There are hundreds of kinds of seeds and it can be very difficult to distinguish between them. Botanists and those who study plants, however, can identify the type of seed at a glance. As far as we know, this is the first work to consider leguminous seeds images with different backgrounds and different sizes and crowding. Machine learning is used to automatically classify and locate 11 different seed types. We chose Leguminous seeds from 11 types to be the objects of this study. Those types are of different colors, sizes, and shapes to add variety and complexity to our research. The images dataset of the leguminous seeds was manually collected, annotated, and then split randomly into three sub-datasets train, validation, and test (predictions), with a ratio of 80%, 10%, and 10% respectively. The images considered the variability between different leguminous seed types. The images were captured on five different backgrounds: white A4 paper, black pad, dark blue pad, dark green pad, and green pad. Different heights and shooting angles were considered. The crowdedness of the seeds also varied randomly between 1 and 50 seeds per image. Different combinations and arrangements between the 11 types were considered. Two different image-capturing devices were used: a SAMSUNG smartphone camera and a Canon digital camera. A total of 828 images were obtained, including 9801 seed objects (labels). The dataset contained images of different backgrounds, heights, angles, crowdedness, arrangements, and combinations. The TensorFlow framework was used to construct the Faster Region-based Convolutional Neural Network (R-CNN) model and CSPDarknet53 is used as the backbone for YOLOv4 based on DenseNet designed to connect layers in convolutional neural. Using the transfer learning method, we optimized the seed detection models. The currently dominant object detection methods, Faster R-CNN, and YOLOv4 performances were compared experimentally. The mAP (mean average precision) of the Faster R-CNN and YOLOv4 models were 84.56% and 98.52% respectively. YOLOv4 had a significant advantage in detection speed over Faster R-CNN which makes it suitable for real-time identification as well where high accuracy and low false positives are needed. The results showed that YOLOv4 had better accuracy, and detection ability, as well as faster detection speed beating Faster R-CNN by a large margin. The model can be effectively applied under a variety of backgrounds, image sizes, seed sizes, shooting angles, and shooting heights, as well as different levels of seed crowding. It constitutes an effective and efficient method for detecting different leguminous seeds in complex scenarios. 
This study provides a reference for further seed testing and enumeration applications.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"8 ","pages":"Pages 30-45"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43701153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
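The dataset of 828 annotated images is split 80/10/10 into train, validation, and test sets. The sketch below reproduces that kind of random image-level split as a data-preparation step; the file names and random seed are placeholders, and this is not either detector's training code.

```python
# Hedged 80/10/10 image-level split; paths and the random seed are placeholders.
import random

def split_dataset(image_paths, train_frac=0.8, val_frac=0.1, seed=42):
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    n_train = int(len(paths) * train_frac)
    n_val = int(len(paths) * val_frac)
    return (paths[:n_train],                      # train
            paths[n_train:n_train + n_val],       # validation
            paths[n_train + n_val:])              # test (predictions)

images = [f"images/seed_{i:04d}.jpg" for i in range(828)]   # hypothetical file names
train, val, test = split_dataset(images)
print(len(train), len(val), len(test))                      # 662 82 84
```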