Artificial Intelligence in Agriculture: Latest Articles

Detecting broiler chickens on litter floor with the YOLOv5-CBAM deep learning model
Artificial Intelligence in Agriculture Pub Date: 2023-09-01 DOI: 10.1016/j.aiia.2023.08.002
Yangyang Guo, Samuel E. Aggrey, Xiao Yang, Adelumola Oladeinde, Yongliang Qiao, Lilong Chai
{"title":"Detecting broiler chickens on litter floor with the YOLOv5-CBAM deep learning model","authors":"Yangyang Guo ,&nbsp;Samuel E. Aggrey ,&nbsp;Xiao Yang ,&nbsp;Adelumola Oladeinde ,&nbsp;Yongliang Qiao ,&nbsp;Lilong Chai","doi":"10.1016/j.aiia.2023.08.002","DOIUrl":"10.1016/j.aiia.2023.08.002","url":null,"abstract":"<div><p>For commercial broiler production, about 20,000–30,000 birds are raised in each confined house, which has caused growing public concerns on animal welfare. Currently, daily evaluation of broiler wellbeing and growth is conducted manually, which is labor-intensive and subjectively subject to human error. Therefore, there is a need for an automatic tool to detect and analyze the behaviors of chickens and predict their welfare status. In this study, we developed a YOLOv5-CBAM-broiler model and tested its performance for detecting broilers on litter floor. The proposed model consisted of two parts: (1) basic YOLOv5 model for bird or broiler feature extraction and object detection; and (2) the convolutional block attention module (CBAM) to improve the feature extraction capability of the network and the problem of missed detection of occluded targets and small targets. A complex dataset of broiler chicken images at different ages, multiple pens and scenes (fresh litter versus reused litter) was constructed to evaluate the effectiveness of the new model. In addition, the model was compared to the Faster R-CNN, SSD, YOLOv3, EfficientDet and YOLOv5 models. The results demonstrate that the precision, recall, F1 score and an [email protected] of the proposed method were 97.3%, 92.3%, 94.7%, and 96.5%, which were superior to the comparison models. In addition, comparing the detection effects in different scenes, the YOLOv5-CBAM model was still better than the comparison method. 
Overall, the proposed YOLOv5-CBAM-broiler model can achieve real-time accurate and fast target detection and provide technical support for the management and monitoring of birds in commercial broiler houses.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"9 ","pages":"Pages 36-45"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46430631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
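The precision, recall, and F1 figures above are related in the standard way. A minimal pure-Python sketch (not the authors' code; the detection counts below are hypothetical) of how such detection metrics are computed:

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 from detection counts.

    tp: detections matched to a ground-truth bird (e.g., IoU >= 0.5)
    fp: detections with no matching ground-truth bird
    fn: ground-truth birds with no matching detection
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts chosen for illustration only.
p, r, f1 = detection_metrics(tp=900, fp=25, fn=75)
print(round(p, 3), round(r, 3), round(f1, 3))  # 0.973 0.923 0.947
```

Note that F1 is the harmonic mean of precision and recall, so it always lies between the two.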
Machine learning in nutrient management: A review
Artificial Intelligence in Agriculture Pub Date: 2023-09-01 DOI: 10.1016/j.aiia.2023.06.001
Oumnia Ennaji, Leonardus Vergütz, Achraf El Allali
{"title":"Machine learning in nutrient management: A review","authors":"Oumnia Ennaji ,&nbsp;Leonardus Vergütz ,&nbsp;Achraf El Allali","doi":"10.1016/j.aiia.2023.06.001","DOIUrl":"10.1016/j.aiia.2023.06.001","url":null,"abstract":"<div><p>In agriculture, precise fertilization and effective nutrient management are critical. Machine learning (ML) has recently been increasingly used to develop decision support tools for modern agricultural systems, including nutrient management, to improve yields while reducing expenses and environmental impact. ML based systems require huge amounts of data from different platforms to handle non-linear tasks and build predictive models that can improve agricultural productivity. This study reviews machine learning based techniques for estimating fertilizer and nutrient status that have been developed in the last decade. A thorough investigation of detection and classification approaches was conducted, which served as the basis for a detailed assessment of the key challenges that remain to be addressed. The research findings suggest that rapid improvements in machine learning and sensor technology can provide cost-effective and thorough nutrient assessment and decision-making solutions. Future research directions are also recommended to improve the practical application of this technology.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"9 ","pages":"Pages 1-11"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44445839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CactiViT: Image-based smartphone application and transformer network for diagnosis of cactus cochineal
Artificial Intelligence in Agriculture Pub Date: 2023-09-01 DOI: 10.1016/j.aiia.2023.07.002
Anas Berka, Adel Hafiane, Youssef Es-Saady, Mohamed El Hajji, Raphaël Canals, Rachid Bouharroud
{"title":"CactiViT: Image-based smartphone application and transformer network for diagnosis of cactus cochineal","authors":"Anas Berka ,&nbsp;Adel Hafiane ,&nbsp;Youssef Es-Saady ,&nbsp;Mohamed El Hajji ,&nbsp;Raphaël Canals ,&nbsp;Rachid Bouharroud","doi":"10.1016/j.aiia.2023.07.002","DOIUrl":"10.1016/j.aiia.2023.07.002","url":null,"abstract":"<div><p>The cactus is a plant that grows in many rural areas, widely used as a hedge, and has multiple benefits through the manufacture of various cosmetics and other products. However, this crop has been suffering for some time from the attack of the carmine scale <em>Dactylopius opuntia</em> (Hemiptera: Dactylopiidae). The infestation can spread rapidly if not treated in the early stage. Current solutions consist of regular field checks by the naked eyes carried out by experts. The major difficulty is the lack of experts to check all fields, especially in remote areas. In addition, this requires time and resources. Hence the need for a system that can categorize the health level of cacti remotely. To date, deep learning models used to categorize plant diseases from images have not addressed the mealy bug infestation of cacti because computer vision has not sufficiently addressed this disease. Since there is no public dataset and smartphones are commonly used as tools to take pictures, it might then be conceivable for farmers to use them to categorize the infection level of their crops. In this work, we developed a system called CactiVIT that instantly determines the health status of cacti using the Visual image Transformer (ViT) model. We also provided a new image dataset of cochineal infested cacti.<span><sup>1</sup></span> Finally, we developed a mobile application that delivers the classification results directly to farmers about the infestation in their fields by showing the probabilities related to each class. This study compares the existing models on the new dataset and presents the results obtained. 
The VIT-B-16 model reveals an approved performance in the literature and in our experiments, in which it achieved 88.73% overall accuracy with an average of +2.61% compared to other convolutional neural network (CNN) models that we evaluated under similar conditions.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"9 ","pages":"Pages 12-21"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43334508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
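ViT-B-16 denotes a Vision Transformer that tokenizes each image into non-overlapping 16×16 patches before the transformer encoder. A minimal pure-Python sketch of that patching step (illustrative only, not CactiViT's implementation; a real ViT would then linearly project each patch vector into a token embedding):

```python
def image_to_patches(image, patch=16):
    """Split an H x W image of C-channel pixels (nested lists) into
    flattened, non-overlapping patch vectors -- the first step of a
    ViT, before linear projection and the transformer encoder."""
    h, w = len(image), len(image[0])
    assert h % patch == 0 and w % patch == 0, "image must tile evenly"
    patches = []
    for top in range(0, h, patch):
        for left in range(0, w, patch):
            vec = []
            for r in range(top, top + patch):
                for c in range(left, left + patch):
                    vec.extend(image[r][c])  # append the pixel's channels
            patches.append(vec)
    return patches

# Toy 32x32 RGB image: 2x2 = 4 patches, each flattened to 16*16*3 = 768 values.
img = [[[0, 0, 0] for _ in range(32)] for _ in range(32)]
patches = image_to_patches(img, patch=16)
print(len(patches), len(patches[0]))  # 4 768
```

For a real 224×224 input, the same tiling yields 14×14 = 196 tokens, which is where ViT-B-16's sequence length comes from.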
Rice disease identification method based on improved CNN-BiGRU
Artificial Intelligence in Agriculture Pub Date: 2023-09-01 DOI: 10.1016/j.aiia.2023.08.005
Yang Lu, Xiaoxiao Wu, Pengfei Liu, Hang Li, Wanting Liu
{"title":"Rice disease identification method based on improved CNN-BiGRU","authors":"Yang Lu ,&nbsp;Xiaoxiao Wu ,&nbsp;Pengfei Liu ,&nbsp;Hang Li ,&nbsp;Wanting Liu","doi":"10.1016/j.aiia.2023.08.005","DOIUrl":"10.1016/j.aiia.2023.08.005","url":null,"abstract":"<div><p>In the field of precision agriculture, diagnosing rice diseases from images remains challenging due to high error rates, multiple influencing factors, and unstable conditions. While machine learning and convolutional neural networks have shown promising results in identifying rice diseases, they were limited in their ability to explain the relationships among disease features. In this study, we proposed an improved rice disease classification method that combines a convolutional neural network (CNN) with a bidirectional gated recurrent unit (BiGRU). Specifically, we introduced a residual mechanism into the Inception module, expanded the module's depth, and integrated an improved Convolutional Block Attention Module (CBAM). We trained and tested the improved CNN and BiGRU, concatenated the outputs of the CNN and BiGRU modules, and passed them to the classification layer for recognition. Our experiments demonstrate that this approach achieves an accuracy of 98.21% in identifying four types of rice diseases, providing a reliable method for rice disease recognition research.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"9 ","pages":"Pages 100-109"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46834924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
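The fusion step described above (concatenating CNN and BiGRU outputs, then passing the joint vector to a classification layer) can be sketched in a few lines of plain Python. The weights and feature sizes below are toy values, not the paper's network:

```python
def classify(cnn_feat, bigru_feat, weights, biases):
    """Concatenate CNN and BiGRU feature vectors, score each disease
    class with a linear classification layer, and return the argmax.
    A sketch of the fusion idea only, not the paper's architecture."""
    fused = cnn_feat + bigru_feat  # feature concatenation
    scores = [sum(w * x for w, x in zip(row, fused)) + b
              for row, b in zip(weights, biases)]
    return max(range(len(scores)), key=scores.__getitem__)

# Toy setup: 2 classes over 2-dim CNN features + 2-dim BiGRU features.
W = [[1.0, 0.0, 0.0, 0.0],   # class 0 keys on the first CNN feature
     [0.0, 0.0, 1.0, 0.0]]   # class 1 keys on the first BiGRU feature
b = [0.0, 0.0]
print(classify([0.2, 0.1], [0.9, 0.3], W, b))  # 1
```

In the real model the fused vector would be far larger and the layer followed by a softmax, but the argmax decision is the same.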
Lightweight convolutional neural network models for semantic segmentation of in-field cotton bolls
Artificial Intelligence in Agriculture Pub Date: 2023-06-01 DOI: 10.1016/j.aiia.2023.03.001
Naseeb Singh, V.K. Tewari, P.K. Biswas, L.K. Dhruw
{"title":"Lightweight convolutional neural network models for semantic segmentation of in-field cotton bolls","authors":"Naseeb Singh ,&nbsp;V.K. Tewari ,&nbsp;P.K. Biswas ,&nbsp;L.K. Dhruw","doi":"10.1016/j.aiia.2023.03.001","DOIUrl":"https://doi.org/10.1016/j.aiia.2023.03.001","url":null,"abstract":"<div><p>Robotic harvesting of cotton bolls will incorporate the benefits of manual picking as well as mechanical harvesting. For robotic harvesting, in-field cotton segmentation with minimal errors is desirable which is a challenging task. In the present study, three lightweight fully convolutional neural network models were developed for the semantic segmentation of in-field cotton bolls. Model 1 does not include any residual or skip connections, while model 2 consists of residual connections to tackle the vanishing gradient problem and skip connections for feature concatenation. Model 3 along with residual and skip connections, consists of filters of multiple sizes. The effects of filter size and the dropout rate were studied. All proposed models segment the cotton bolls successfully with the cotton-IoU (intersection-over-union) value of above 88.0%. The highest cotton-IoU of 91.03% was achieved by model 2. The proposed models achieved F1-score and pixel accuracy values greater than 95.0% and 98.0%, respectively. The developed models were compared with existing state-of-the-art networks namely VGG19, ResNet18, EfficientNet-B1, and InceptionV3. Despite having a limited number of trainable parameters, the proposed models achieved mean-IoU (mean intersection-over-union) of 93.84%, 94.15%, and 94.65% against the mean-IoU values of 95.39%, 96.54%, 96.40%, and 96.37% obtained using state-of-the-art networks. The segmentation time for the developed models was reduced up to 52.0% compared to state-of-the-art networks. The developed lightweight models segmented the in-field cotton bolls comparatively faster and with greater accuracy. 
Hence, developed models can be deployed to cotton harvesting robots for real-time recognition of in-field cotton bolls for harvesting.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"8 ","pages":"Pages 1-19"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50193228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
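The cotton-IoU metric quoted above is the standard pixel-wise intersection-over-union between a predicted and a ground-truth segmentation mask. A minimal sketch of the generic formula (not the authors' evaluation code):

```python
def mask_iou(pred, truth):
    """Pixel-wise intersection-over-union between two binary masks
    given as nested lists of 0/1. Generic IoU definition, not the
    paper's implementation."""
    inter = union = 0
    for prow, trow in zip(pred, truth):
        for p, t in zip(prow, trow):
            inter += p & t
            union += p | t
    return inter / union if union else 1.0  # two empty masks match perfectly

pred  = [[1, 1, 0],
         [0, 1, 0]]
truth = [[1, 0, 0],
         [0, 1, 1]]
print(round(mask_iou(pred, truth), 3))  # 0.5
```

Mean-IoU simply averages this quantity over the classes (here, cotton and background).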
Leguminous seeds detection based on convolutional neural networks: Comparison of Faster R-CNN and YOLOv4 on a small custom dataset
Artificial Intelligence in Agriculture Pub Date: 2023-06-01 DOI: 10.1016/j.aiia.2023.03.002
Noran S. Ouf
{"title":"Leguminous seeds detection based on convolutional neural networks: Comparison of Faster R-CNN and YOLOv4 on a small custom dataset","authors":"Noran S. Ouf","doi":"10.1016/j.aiia.2023.03.002","DOIUrl":"10.1016/j.aiia.2023.03.002","url":null,"abstract":"<div><p>This paper help with leguminous seeds detection and smart farming. There are hundreds of kinds of seeds and it can be very difficult to distinguish between them. Botanists and those who study plants, however, can identify the type of seed at a glance. As far as we know, this is the first work to consider leguminous seeds images with different backgrounds and different sizes and crowding. Machine learning is used to automatically classify and locate 11 different seed types. We chose Leguminous seeds from 11 types to be the objects of this study. Those types are of different colors, sizes, and shapes to add variety and complexity to our research. The images dataset of the leguminous seeds was manually collected, annotated, and then split randomly into three sub-datasets train, validation, and test (predictions), with a ratio of 80%, 10%, and 10% respectively. The images considered the variability between different leguminous seed types. The images were captured on five different backgrounds: white A4 paper, black pad, dark blue pad, dark green pad, and green pad. Different heights and shooting angles were considered. The crowdedness of the seeds also varied randomly between 1 and 50 seeds per image. Different combinations and arrangements between the 11 types were considered. Two different image-capturing devices were used: a SAMSUNG smartphone camera and a Canon digital camera. A total of 828 images were obtained, including 9801 seed objects (labels). The dataset contained images of different backgrounds, heights, angles, crowdedness, arrangements, and combinations. 
The TensorFlow framework was used to construct the Faster Region-based Convolutional Neural Network (R-CNN) model and CSPDarknet53 is used as the backbone for YOLOv4 based on DenseNet designed to connect layers in convolutional neural. Using the transfer learning method, we optimized the seed detection models. The currently dominant object detection methods, Faster R-CNN, and YOLOv4 performances were compared experimentally. The mAP (mean average precision) of the Faster R-CNN and YOLOv4 models were 84.56% and 98.52% respectively. YOLOv4 had a significant advantage in detection speed over Faster R-CNN which makes it suitable for real-time identification as well where high accuracy and low false positives are needed. The results showed that YOLOv4 had better accuracy, and detection ability, as well as faster detection speed beating Faster R-CNN by a large margin. The model can be effectively applied under a variety of backgrounds, image sizes, seed sizes, shooting angles, and shooting heights, as well as different levels of seed crowding. It constitutes an effective and efficient method for detecting different leguminous seeds in complex scenarios. This study provides a reference for further seed testing and enumeration applications.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"8 ","pages":"Pages 30-45"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43701153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
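mAP averages the per-class average precision (AP). A sketch of AP for one class using the simple rectangle (all-point) rule over confidence-ranked detections; this is generic metric code with hypothetical detections, not the paper's evaluation script:

```python
def average_precision(detections, num_truth):
    """AP for one class. detections: (confidence, is_true_positive)
    pairs; num_truth: number of ground-truth objects of that class.
    Walks detections in descending confidence, accumulating area
    under the precision-recall curve by the rectangle rule."""
    detections = sorted(detections, key=lambda d: -d[0])
    tp = 0
    ap = 0.0
    prev_recall = 0.0
    for rank, (_score, is_tp) in enumerate(detections, start=1):
        if is_tp:
            tp += 1
            recall = tp / num_truth
            precision = tp / rank
            ap += (recall - prev_recall) * precision
            prev_recall = recall
    return ap

# Two ground-truth seeds; hits at ranks 1 and 3, a false positive at rank 2.
dets = [(0.9, True), (0.8, False), (0.7, True)]
print(round(average_precision(dets, num_truth=2), 3))  # 0.833
```

mAP is then the mean of this value over the 11 seed classes; COCO-style evaluation additionally averages over IoU thresholds.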
Feature aggregation for nutrient deficiency identification in chili based on machine learning
Artificial Intelligence in Agriculture Pub Date: 2023-06-01 DOI: 10.1016/j.aiia.2023.04.001
Deffa Rahadiyan, Sri Hartati, Wahyono, Andri Prima Nugroho
{"title":"Feature aggregation for nutrient deficiency identification in chili based on machine learning","authors":"Deffa Rahadiyan ,&nbsp;Sri Hartati ,&nbsp;Wahyono ,&nbsp;Andri Prima Nugroho","doi":"10.1016/j.aiia.2023.04.001","DOIUrl":"10.1016/j.aiia.2023.04.001","url":null,"abstract":"<div><p>Macronutrient deficiency inhibits the growth and development of chili plants. One of the non-destructive methods that plays a role in processing plant image data based on specific characteristics is computer vision. This study uses 5166 image data after augmentation process for six plant health conditions. But the analysis of one feature cannot represent plant health condition. Therefore, a careful combination of features is required. This study combines three types of features with HSV and RGB for color, GLCM and LBP for texture, and Hu moments and centroid distance for shapes. Each feature and its combination are trained and tested using the same MLP architecture. The combination of RGB, GLCM, Hu moments, and Distance of centroid features results the best performance. In addition, this study compares the MLP architecture used with previous studies such as SVM, Random Forest Technique, Naive Bayes, and CNN. CNN produced the best performance, followed by SVM and MLP, with accuracy reaching 97.76%, 90.55% and 89.70%, respectively. 
Although MLP has lower accuracy than CNN, the model for identifying plant health conditions has a reasonably good success rate to be applied in a simple agricultural environment.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"8 ","pages":"Pages 77-90"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43991546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
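Feature aggregation here simply means concatenating the color, texture, and shape descriptors into one input vector for the MLP. A sketch with hypothetical descriptor values and lengths (the paper's exact descriptors and dimensions differ):

```python
def aggregate_features(rgb_means, glcm_stats, hu_moments, centroid_dists):
    """Concatenate color (RGB), texture (GLCM), and shape (Hu moments,
    centroid-distance) descriptors into a single feature vector for an
    MLP classifier. Groups and lengths are illustrative only."""
    return (list(rgb_means) + list(glcm_stats)
            + list(hu_moments) + list(centroid_dists))

vec = aggregate_features(
    rgb_means=[0.41, 0.55, 0.30],        # mean R, G, B over the leaf region
    glcm_stats=[0.8, 0.1, 0.05, 2.3],    # e.g., contrast, energy, homogeneity, entropy
    hu_moments=[0.002, 1.1e-05],         # first two Hu invariants (hypothetical)
    centroid_dists=[12.0, 9.5],          # sampled centroid-to-boundary distances
)
print(len(vec))  # 11
```

In practice each descriptor group would also be normalized to a common scale before concatenation, so no single group dominates the MLP's input.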
GxENet: Novel fully connected neural network based approaches to incorporate GxE for predicting wheat yield
Artificial Intelligence in Agriculture Pub Date: 2023-06-01 DOI: 10.1016/j.aiia.2023.05.001
Sheikh Jubair, Olivier Tremblay-Savard, Mike Domaratzki
{"title":"GxENet: Novel fully connected neural network based approaches to incorporate GxE for predicting wheat yield","authors":"Sheikh Jubair ,&nbsp;Olivier Tremblay-Savard ,&nbsp;Mike Domaratzki","doi":"10.1016/j.aiia.2023.05.001","DOIUrl":"10.1016/j.aiia.2023.05.001","url":null,"abstract":"<div><p>The expression of quantitative traits of a line of a crop depends on its genetics, the environment where it is sown and the interaction between the genetic information and the environment known as GxE. Thus to maximize food production, new varieties are developed by selecting superior lines of seeds suitable for a specific environment. Genomic selection is a computational technique for developing a new variety that uses whole genome molecular markers to identify top lines of a crop. A large number of statistical and machine learning models are employed for single environment trials, where it is assumed that the environment does not have any effect on the quantitative traits. However, it is essential to consider both genomic and environmental data to develop a new variety, as these strong assumptions may lead to failing to select top lines for an environment. Here we devised three novel deep learning frameworks incorporating GxE within the deep learning model and predicted line-specific yield for an environment. In the process, we also developed a new technique for identifying environment-specific markers that can be useful in many applications of environment-specific genomic selection. The result demonstrates that our best framework obtains 1.75 to 1.95 times better correlation coefficients than other deep learning models that incorporate environmental data depending on the test scenario. Furthermore, the feature importance analysis shows that environmental information, followed by genomic information, is the driving factor in predicting environment-specific yield for a line. 
We also demonstrate a way to extend our framework for new data types, such as text or soil data. The extended model also shows the potential to be useful in genomic selection.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"8 ","pages":"Pages 60-76"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47674501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
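The comparison above is stated in terms of correlation coefficients between predicted and observed yields. For reference, a plain-Python Pearson correlation (the generic formula, with toy yield values, not the authors' evaluation code):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between predicted and observed yields:
    covariance divided by the product of standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy predicted vs. observed yields (t/ha) for four lines.
predicted = [3.1, 4.0, 5.2, 6.1]
observed  = [3.0, 4.2, 5.0, 6.3]
print(round(pearson(predicted, observed), 2))  # 0.99
```

A coefficient near 1 means the model ranks lines almost exactly as the field trial did, which is what matters for selecting top lines.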
A deep learning method for monitoring spatial distribution of cage-free hens
Artificial Intelligence in Agriculture Pub Date: 2023-06-01 DOI: 10.1016/j.aiia.2023.03.003
Xiao Yang, Ramesh Bist, Sachin Subedi, Lilong Chai
{"title":"A deep learning method for monitoring spatial distribution of cage-free hens","authors":"Xiao Yang,&nbsp;Ramesh Bist,&nbsp;Sachin Subedi,&nbsp;Lilong Chai","doi":"10.1016/j.aiia.2023.03.003","DOIUrl":"10.1016/j.aiia.2023.03.003","url":null,"abstract":"<div><p>The spatial distribution of laying hens in cage-free houses is an indicator of flock's health and welfare. While larger space allows chickens to perform more natural behaviors such as dustbathing, foraging, and perching in cage-free houses, an inherent challenge is evaluating chickens' locomotion and spatial distribution (e.g., real-time birds' number on perches or in nesting boxes). Manual inspection of hen's spatial distribution requires closer observation, which is labor intensive, time consuming, subject to human errors, and stress causing on birds. Therefore, an automated monitoring system is required to track the spatial distribution of hens for early detection of animal welfare and health concerns. In this study, a non–intrusive machine vision method was developed to monitor hens' spatial distribution automatically. An improved You Only Look Once version 5 (YOLOv5) method was developed and trained to test hens' distribution in research cage-free facilities (e.g., 200 hens per house). The spatial distribution of hens the system monitored includes perch zone, feeding zone, drinking zone, and nesting zone. The dataset contains a whole growth period of chickens from day 1 to day 252. About 3000 images were extracted randomly from recorded videos for model training, validation, and testing. About 2400 images were used for training and 600 images for testing, respectively. Results show that the accuracy of the new model were 87–94% for tracking distribution in different zones for different ages of hens/pullets. Birds' age affected the performance of the model as younger birds had smaller body size and were hard to be detected due to blackness or occultation by equipment. 
The performance of the model was 0.891 and 0.942 for baby chicks (≤10 days old) and older birds (&gt; 10 days) in detecting perching behaviors; 0.874 and 0.932 in detecting feeding/drinking behaviors. Miss detection happened when the flock density was high (&gt;18 birds/m<sup>2</sup>) and chicken body was occluded by other facilities (e.g., nest boxes, feeders, and perches). Further studies such as chicken behavior identification works in commercial housing system should be combined with the model to reach an automatic detection system.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"8 ","pages":"Pages 20-29"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48299939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
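Downstream of the detector, counting birds per zone reduces to mapping each bounding-box center to a zone rectangle in the camera frame. A sketch with a hypothetical two-zone layout (the paper's actual floor plan and coordinates are not given here):

```python
def assign_zone(cx, cy, zones):
    """Map a detected bird's bounding-box center (cx, cy) to a named
    zone, given axis-aligned zone rectangles (x0, y0, x1, y1) in image
    coordinates. Sketch of the counting step after detection; the
    zone layout below is hypothetical."""
    for name, (x0, y0, x1, y1) in zones.items():
        if x0 <= cx < x1 and y0 <= cy < y1:
            return name
    return "floor"  # center falls outside every labelled zone

zones = {
    "perch":   (0, 0, 100, 50),
    "feeding": (0, 50, 100, 100),
}
print(assign_zone(30, 20, zones))   # perch
print(assign_zone(30, 80, zones))   # feeding
print(assign_zone(150, 20, zones))  # floor
```

Tallying the returned zone names per frame gives the real-time per-zone counts the monitoring system reports.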
How artificial intelligence uses to achieve the agriculture sustainability: Systematic review
Artificial Intelligence in Agriculture Pub Date: 2023-06-01 DOI: 10.1016/j.aiia.2023.04.002
Vilani Sachithra, L.D.C.S. Subhashini
{"title":"How artificial intelligence uses to achieve the agriculture sustainability: Systematic review","authors":"Vilani Sachithra,&nbsp;L.D.C.S. Subhashini","doi":"10.1016/j.aiia.2023.04.002","DOIUrl":"10.1016/j.aiia.2023.04.002","url":null,"abstract":"<div><p>The generation of food production that meets the rising demand for food and ecosystem security is a big challenge. With the development of Artificial Intelligence (AI) models, there is a growing need to use them to achieve sustainable agriculture. The continuous enhancement of AI in agriculture, researchers have proposed many models in agriculture functions such as prediction,weed control, resource management, advance care of crops, and so on. This article evaluates on a systematic review of AI models in agriculture functions. It also reviews how AI models are used in identified sustainable objectives. Through this extensive review, this paper discusses considerations and limitations for building the next generation of sustainable agriculture using AI.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"8 ","pages":"Pages 46-59"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41817127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4