Artificial Intelligence in Agriculture: Latest Publications

Using an improved lightweight YOLOv8 model for real-time detection of multi-stage apple fruit in complex orchard environments
Artificial Intelligence in Agriculture Pub Date: 2024-03-01 DOI: 10.1016/j.aiia.2024.02.001
Baoling Ma, Zhixin Hua, Yuchen Wen, Hongxing Deng, Yongjie Zhao, Liuru Pu, Huaibo Song
Abstract: To monitor apple fruits effectively throughout the entire growth period in smart orchards, a lightweight model named YOLOv8n-ShuffleNetv2-Ghost-SE was proposed. ShuffleNetv2 basic modules and down-sampling modules were alternately connected to replace the Backbone of the YOLOv8n model, while Ghost modules replaced the Conv modules and C2fGhost modules replaced the C2f modules in the Neck. ShuffleNetv2 reduced memory access cost through channel-splitting operations, and the Ghost module combined linear and non-linear convolutions to reduce network computation cost. Wise-IoU (WIoU) replaced CIoU for the bounding-box regression loss, dynamically adjusting the anchor-box quality threshold and gradient-gain allocation strategy to optimize the size and position of predicted bounding boxes. Squeeze-and-Excitation (SE) blocks were embedded in the Backbone and Neck of YOLOv8n to enhance the representation ability of feature maps. The algorithm maintained high precision with a small model size and fast detection speed, facilitating model migration and deployment. The effectiveness of the model was validated on 9652 images: the YOLOv8n-ShuffleNetv2-Ghost-SE model achieved a precision of 94.1%, recall of 82.6%, mean average precision of 91.4%, a model size of 2.6 MB, 1.18 M parameters, 3.9 G FLOPs, and a detection speed of 39.37 fps. The detection speed on the Jetson Xavier NX development board was 3.17 fps. Comparisons with advanced models including Faster R-CNN, SSD, YOLOv5s, YOLOv7-tiny, YOLOv8s, YOLOv8n, MobileNetv3_small-Faster, MobileNetv3_small-Ghost, ShuffleNetv2-Faster, ShuffleNetv2-Ghost, ShuffleNetv2-Ghost-CBAM, ShuffleNetv2-Ghost-ECA, and ShuffleNetv2-Ghost-CA demonstrated that the method achieved a smaller model and faster detection speed. The research can provide a reference for the development of smart devices in apple orchards. (Artificial Intelligence in Agriculture, Volume 11, Pages 70-82)
Citations: 0
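The Squeeze-and-Excitation attention used in this entry is a standard, publicly documented building block. The sketch below is a minimal, generic PyTorch implementation of an SE block, not the authors' code; the reduction ratio of 16 and the toy tensor shapes are assumptions.

```python
# Minimal Squeeze-and-Excitation (SE) block sketch in PyTorch (generic, not the authors' code).
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):  # reduction ratio is an assumed default
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: global spatial average per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                              # excitation: per-channel gates in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                   # reweight feature maps channel-wise

# Example: gate a 64-channel feature map.
if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)
    print(SEBlock(64)(feats).shape)  # torch.Size([2, 64, 32, 32])
```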
Automated quality inspection of baby corn using image processing and deep learning
Artificial Intelligence in Agriculture Pub Date: 2024-01-23 DOI: 10.1016/j.aiia.2024.01.001
Kris Wonggasem, Pongsan Chakranon, Papis Wongchaisuwat
Abstract: The food industry typically relies heavily on manual operations requiring high proficiency and skill. In the quality inspection process, a baby corn with black marks or blemishes is considered defective and should be discarded. Quality inspection and sorting of agricultural products like baby corn are labor-intensive and time-consuming. The main goal of this work is to develop an automated quality inspection framework to differentiate between ‘pass’ and ‘fail’ categories based on baby corn images. A traditional image processing method using a threshold principle is compared with more advanced deep learning models; in particular, convolutional neural networks (CNNs), a specific sub-type of deep learning model, were implemented. Thorough experiments on network architectures and their hyperparameters were conducted and compared, and a Shapley additive explanations (SHAP) framework was further utilized for network interpretation. The EfficientNetB5 networks with relatively larger input sizes yielded the best performance, up to 99.06% accuracy, against 95.28% obtained from traditional image processing. Incorporating region-of-interest identification, several model experiments, application to baby corn image data, and the SHAP framework are the main contributions. The proposed quality inspection system to automatically differentiate baby corn images provides a potential pipeline to further support the agricultural production process. (Artificial Intelligence in Agriculture, Volume 11, Pages 61-69)
Citations: 0
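For context on the "threshold principle" baseline mentioned in this entry, here is a minimal OpenCV sketch of a pass/fail check that flags dark blemish pixels by grayscale thresholding. The threshold value, defect-area fraction, and file name are hypothetical, not the paper's parameters.

```python
# Minimal sketch of a threshold-based pass/fail check for dark blemishes (illustrative only;
# the threshold value and area fraction below are hypothetical, not the paper's parameters).
import cv2
import numpy as np

def inspect_baby_corn(image_path: str,
                      dark_threshold: int = 60,        # hypothetical grayscale cutoff for "black marks"
                      max_defect_fraction: float = 0.01) -> str:
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    # Pixels darker than the cutoff are treated as candidate blemish pixels.
    _, defect_mask = cv2.threshold(gray, dark_threshold, 255, cv2.THRESH_BINARY_INV)
    defect_fraction = float(np.count_nonzero(defect_mask)) / defect_mask.size
    return "fail" if defect_fraction > max_defect_fraction else "pass"

# Usage (hypothetical file name):
# print(inspect_baby_corn("baby_corn_001.png"))
```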
Enhanced detection algorithm for apple bruises using structured light imaging
Artificial Intelligence in Agriculture Pub Date: 2023-12-13 DOI: 10.1016/j.aiia.2023.12.001
Haojie Zhu, Lingling Yang, Yu Wang, Yuwei Wang, Wenhui Hou, Yuan Rao, Lu Liu
Abstract: Bruising reduces the edibility and marketability of fresh apples, inevitably causing economic losses for the apple industry. However, bruises lack obvious visual symptoms, which makes them challenging to detect with imaging techniques that use uniform or diffuse illumination. This study employed the structured light imaging (SLI) technique to detect apple bruises. First, grayscale reflection images were captured under phase-shifted sinusoidal illumination at three wavelengths (600, 650, and 700 nm) and six spatial frequencies (0.05, 0.10, 0.15, 0.20, 0.25, and 0.30 cycles mm⁻¹). Next, the grayscale reflectance images were demodulated to produce direct component (DC) images, representing uniform diffuse illumination, and amplitude component (AC) images, which reveal bruises. By quantifying the contrast between bruised and sound regions in all AC images, it was found that bruises exhibited the optimal contrast under sinusoidal illumination at a wavelength of 700 nm and a spatial frequency of 0.25 cycles mm⁻¹. In the AC image with optimal contrast, the developed h-domes segmentation algorithm accurately segmented the location and extent of the bruised regions. Moreover, the algorithm successfully segmented central bruised regions while addressing the challenge of segmenting edge bruised regions complicated by vignetting. The average Intersection over Union (IoU) values for the three types of bruises were 0.9422, 0.9231, and 0.9183, respectively. These results demonstrate that the combination of SLI and the h-domes segmentation algorithm is a viable approach for the effective detection of fresh apple bruises. (Artificial Intelligence in Agriculture, Volume 11, Pages 50-60)
Citations: 0
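The demodulation of phase-shifted sinusoidal images into DC and AC components described in this entry is commonly done with a three-phase formula in structured-light / spatial-frequency-domain imaging. The NumPy sketch below shows that standard formula as an assumption; the paper's exact demodulation pipeline may differ.

```python
# Minimal sketch of standard three-phase demodulation used in structured-light /
# spatial-frequency-domain imaging (an assumption; the paper's exact pipeline may differ).
import numpy as np

def demodulate_three_phase(i1: np.ndarray, i2: np.ndarray, i3: np.ndarray):
    """i1, i2, i3: reflectance images captured under sinusoidal patterns
    phase-shifted by 0, 2*pi/3 and 4*pi/3 at one spatial frequency."""
    dc = (i1 + i2 + i3) / 3.0                               # planar (uniform-illumination) component
    ac = (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2    # amplitude (modulated) component
    )
    return dc, ac

# Usage with synthetic images:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    imgs = [rng.random((480, 640)) for _ in range(3)]
    dc, ac = demodulate_three_phase(*imgs)
    print(dc.shape, ac.shape)
```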
Image classification of lotus in Nong Han Chaloem Phrakiat Lotus Park using convolutional neural networks
Artificial Intelligence in Agriculture Pub Date: 2023-12-12 DOI: 10.1016/j.aiia.2023.12.003
Thanawat Phattaraworamet, Sawinee Sangsuriyun, Phoempol Kutchomsri, Susama Chokphoemphun
Abstract: The Nong Han Chaloem Phrakiat Lotus Park is a tourist attraction and a source of learning about lotus plants. As a training area, however, it lacks appeal and learning motivation because of its conventional presentation of information on lotus plants. This study introduced the concept of smart learning in this setting to increase interest and motivation for learning. Convolutional neural networks (CNNs) were used to classify lotus plant species, for use in a mobile application that displays details about each species. The scope of the study was to classify 11 species of lotus plants using the proposed CNN model with different techniques (augmentation, dropout, and L2 regularization) and hyperparameters (dropout value and epoch number). The expected outcome was a high-performance CNN model with fewer total parameters than three pre-trained CNN models (Inception V3, VGG16, and VGG19) used as benchmarks. Model performance was reported in terms of accuracy, F1-score, precision, and recall. The results showed that the CNN model with the augmentation, dropout, and L2 techniques, at a dropout value of 0.4 and an epoch number of 30, provided the highest testing accuracy of 0.9954. The best proposed model was more accurate than the pre-trained CNN models, especially Inception V3, while the number of total parameters was reduced by approximately 1.80-2.19 times. These findings demonstrate that the proposed model achieved a satisfactory degree of classification accuracy with a small number of total parameters. (Artificial Intelligence in Agriculture, Volume 11, Pages 23-33)
Citations: 0
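To make the combination of augmentation, dropout, and L2 regularization described in this entry concrete, here is a minimal Keras sketch of a small CNN using those three techniques. The layer sizes, input resolution, and regularization strength are assumptions; only the 11 classes and the 0.4 dropout value come from the abstract.

```python
# Minimal Keras sketch of a small CNN with augmentation, dropout and L2 regularization
# (illustrative only; layer sizes and image size are assumptions, not the paper's architecture).
import tensorflow as tf
from tensorflow.keras import layers, regularizers, models

NUM_CLASSES = 11          # 11 lotus species, per the abstract
IMG_SIZE = (224, 224)     # assumed input size

augment = models.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

model = models.Sequential([
    layers.Input(shape=IMG_SIZE + (3,)),
    augment,
    layers.Rescaling(1.0 / 255),
    layers.Conv2D(32, 3, activation="relu", kernel_regularizer=regularizers.l2(1e-4)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", kernel_regularizer=regularizers.l2(1e-4)),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu", kernel_regularizer=regularizers.l2(1e-4)),
    layers.Dropout(0.4),                     # dropout value reported as best in the abstract
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```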
Real-time litchi detection in complex orchard environments: A portable, low-energy edge computing approach for enhanced automated harvesting
Artificial Intelligence in Agriculture Pub Date: 2023-12-12 DOI: 10.1016/j.aiia.2023.12.002
Zeyu Jiao, Kai Huang, Qun Wang, Zhenyu Zhong, Yingjie Cai
Abstract: Litchi, a succulent and perishable fruit, has a narrow annual harvest window of under two weeks. The advent of smart agriculture has driven the adoption of visually guided, automated litchi harvesting techniques. However, conventional approaches typically rely on laboratory-based, high-performance computing equipment, which presents challenges in terms of size, energy consumption, and practical application within litchi orchards. To address these limitations, a real-time litchi detection methodology for complex environments is proposed, utilizing portable, low-energy edge computing devices. Initially, litchi orchard imagery is collected to enhance data generalization. Subsequently, a convolutional neural network (CNN)-based single-stage detector, YOLOx, is constructed to accurately pinpoint litchi fruit locations within the images. To facilitate deployment on portable, low-energy edge devices, channel pruning and layer pruning algorithms were employed to compress the trained model, reducing its size and number of parameters, and knowledge distillation was used to fine-tune the network. Experimental findings demonstrated that the proposed method achieved a 97.1% compression rate, yielding a compact litchi detection model of only 6.9 MB while maintaining 94.9% average precision and 97.2% average recall. Processing 99 frames per second (FPS), the method exhibited a 1.8-fold increase in speed compared to the uncompressed model. Consequently, the approach can be readily integrated into portable, low-computation automatic harvesting equipment, ensuring real-time, precise litchi detection within orchard settings. (Artificial Intelligence in Agriculture, Volume 11, Pages 13-22)
Citations: 0
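The knowledge-distillation step mentioned in this entry is commonly implemented as a blend of a hard-label loss and a softened teacher/student KL term. The PyTorch sketch below shows that generic classification-style recipe, not the detector-specific fine-tuning used in the paper; the temperature, weighting, and toy class count are assumptions.

```python
# Minimal PyTorch sketch of a knowledge-distillation loss (generic recipe, not the authors' code;
# the temperature and weighting below are assumptions).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      targets: torch.Tensor,
                      temperature: float = 4.0,   # assumed softening temperature
                      alpha: float = 0.7) -> torch.Tensor:
    """Blend a hard-label loss with a soft-label KL term computed against the teacher."""
    hard = F.cross_entropy(student_logits, targets)
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)                        # standard T^2 scaling
    return alpha * soft + (1.0 - alpha) * hard

# Usage with random logits for a toy 2-class (litchi / background) example:
if __name__ == "__main__":
    s = torch.randn(8, 2, requires_grad=True)
    t = torch.randn(8, 2)
    y = torch.randint(0, 2, (8,))
    print(distillation_loss(s, t, y).item())
```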
Vision Intelligence for Smart Sheep Farming: Applying Ensemble Learning to Detect Sheep Breeds
Artificial Intelligence in Agriculture Pub Date: 2023-11-28 DOI: 10.1016/j.aiia.2023.11.002
Galib Muhammad Shahriar Himel, Md. Masudul Islam, Mijanur Rahaman
Abstract: The ability to automatically recognize sheep breeds holds significant value for the sheep industry. Sheep farmers often require breed identification to assess the commercial worth of their flocks, but many farmers, especially novices, have difficulty identifying sheep breeds accurately without field experts. There is therefore a need for autonomous approaches that can effectively and precisely replicate the breed-identification skills of an expert while functioning within a farm environment, providing considerable benefits to novice farmers in the industry. To achieve this objective, a model based on convolutional neural networks (CNNs) is proposed that can rapidly and efficiently identify sheep breed from facial features, offering a cost-effective solution. The experiment used a dataset of 1680 facial images representing four distinct sheep breeds. An ensemble method is proposed that combines Xception, VGG16, InceptionV3, InceptionResNetV2, and DenseNet121 models; during transfer learning with these pre-trained models, several optimizers and loss functions were applied and the best combinations were selected. This classification model has the potential to help sheep farmers distinguish between various breeds precisely and efficiently, enabling more precise, sector-specific classification for different businesses. (Artificial Intelligence in Agriculture, Volume 11, Pages 1-12)
Citations: 0
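The abstract above does not spell out its exact fusion rule, so the sketch below shows one common way to ensemble several fine-tuned Keras backbones: soft voting over their predicted class probabilities. The model file names are hypothetical and the probability averaging is an assumption, not necessarily the paper's method.

```python
# Minimal sketch of soft-voting ensembling over several fine-tuned Keras backbones
# (illustrative; model paths are hypothetical and simple probability averaging is an assumption).
import numpy as np
import tensorflow as tf

MODEL_PATHS = ["xception_sheep.h5", "vgg16_sheep.h5", "densenet121_sheep.h5"]  # hypothetical files

def ensemble_predict(image_batch: np.ndarray) -> np.ndarray:
    """Average class probabilities from each fine-tuned backbone and take the argmax."""
    probs = []
    for path in MODEL_PATHS:
        model = tf.keras.models.load_model(path)
        probs.append(model.predict(image_batch, verbose=0))
    mean_probs = np.mean(probs, axis=0)          # soft voting
    return np.argmax(mean_probs, axis=1)         # predicted breed indices
```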
DeepRice: A deep learning and deep feature based classification of Rice leaf disease subtypes
Artificial Intelligence in Agriculture Pub Date: 2023-11-23 DOI: 10.1016/j.aiia.2023.11.001
P. Isaac Ritharson, Kumudha Raimond, X. Anitha Mary, Jennifer Eunice Robert, Andrew J
Abstract: Rice is a crucial staple food globally, and its enduring sustainability hinges on the prompt detection of rice leaf diseases. Efficiently detecting diseases once they occur is therefore of paramount importance for reducing the cost of manual visual identification and chemical testing. Until recently, the identification of leaf pathologies in crops has relied predominantly on manual methods using specialized equipment, which is time-consuming and inefficient. This study offers a remedy by harnessing deep learning (DL) and transfer learning techniques to accurately identify and classify rice leaf diseases. A comprehensive dataset comprising 5932 self-generated images of rice leaves was assembled, along with benchmark datasets, and categorized into 9 classes based on the extent of disease spread across the leaves. These classes encompass healthy leaves, mild and severe blight, mild and severe tungro, mild and severe blast, and mild and severe brown spot. Following meticulous manual labelling and dataset segmentation, validated by horticulture experts, data augmentation strategies were implemented to increase the number of images. The datasets were evaluated using the proposed tailored convolutional neural network models, and their performance was scrutinized alongside alternative transfer learning approaches such as VGG16, Xception, ResNet50, DenseNet121, Inception ResNetV2, and Inception V3. The effectiveness of the proposed custom VGG16 model was gauged by its capacity to generalize to unseen images, yielding an exceptional accuracy of 99.94% and surpassing the benchmarks set by existing state-of-the-art models. Furthermore, layer-wise feature extraction is visualized as interpretable AI. (Artificial Intelligence in Agriculture, Volume 11, Pages 34-49)
Citations: 0
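As a companion to the custom VGG16 reported in this entry, here is a minimal, generic Keras sketch of VGG16-based transfer learning for a 9-class classifier. The frozen backbone, head size, and input resolution are assumptions, not the authors' customized architecture.

```python
# Minimal sketch of VGG16-based transfer learning for 9-class rice-leaf classification
# (a generic recipe under assumed settings, not the authors' customized architecture).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 9           # healthy + mild/severe blight, tungro, blast, brown spot
IMG_SHAPE = (224, 224, 3) # assumed input size

base = tf.keras.applications.VGG16(include_top=False, weights="imagenet", input_shape=IMG_SHAPE)
base.trainable = False     # freeze convolutional features for the first training stage

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),   # assumed head size
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```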
Cumulative unsupervised multi-domain adaptation for Holstein cattle re-identification
Artificial Intelligence in Agriculture Pub Date: 2023-10-17 DOI: 10.1016/j.aiia.2023.10.002
Fabian Dubourvieux, Guillaume Lapouge, Angélique Loesch, Bertrand Luvison, Romaric Audigier
Abstract: In dairy farming, ensuring the health of each cow and minimizing economic losses requires individual monitoring, achieved through cow re-identification (Re-ID). Computer vision-based Re-ID relies on visually distinguishing features, such as the distinctive coat patterns of breeds like Holstein. However, annotating every cow on every farm is cost-prohibitive. The objective is to develop Re-ID methods applicable to both labeled and unlabeled farms, accommodating new individuals and diverse environments. Unsupervised Domain Adaptation (UDA) techniques bridge this gap by transferring knowledge from labeled source domains to unlabeled target domains, but they have mainly been designed for pedestrian and vehicle Re-ID applications. This work introduces Cumulative Unsupervised Multi-Domain Adaptation (CUMDA) to address the challenges of limited identity diversity and diverse farm appearances. CUMDA accumulates knowledge from all domains, enhancing specialization in known domains and improving generalization to unseen domains. The contributions include a CUMDA method that adapts to multiple unlabeled target domains while preserving source-domain performance, along with extensive cross-dataset experiments on three cattle Re-ID datasets. These experiments demonstrate significant improvements in source preservation, target-domain specialization, and generalization to unseen domains. (Artificial Intelligence in Agriculture, Volume 10, Pages 46-60)
Citations: 0
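Clustering-based pseudo-labelling is a common building block of unsupervised domain adaptation for Re-ID. The sketch below illustrates that generic step with scikit-learn, not the CUMDA method itself; the DBSCAN parameters and synthetic embeddings are assumptions.

```python
# Minimal sketch of clustering-based pseudo-labelling, a common building block of unsupervised
# domain adaptation for Re-ID (a generic illustration, not the CUMDA method itself).
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import normalize

def pseudo_label_target_domain(target_features: np.ndarray, eps: float = 0.5) -> np.ndarray:
    """Cluster L2-normalized embeddings from an unlabeled farm; cluster ids become pseudo identities.
    eps is a hypothetical neighbourhood radius; -1 marks outliers that are discarded from training."""
    feats = normalize(target_features)               # cosine-like distance via L2 normalization
    labels = DBSCAN(eps=eps, min_samples=4, metric="euclidean").fit_predict(feats)
    return labels

# Usage with random embeddings standing in for a backbone's outputs:
if __name__ == "__main__":
    emb = np.random.rand(200, 128).astype(np.float32)
    pseudo_ids = pseudo_label_target_domain(emb)
    print("clusters found:", len(set(pseudo_ids)) - (1 if -1 in pseudo_ids else 0))
```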
Harvest optimization for sustainable agriculture: The case of tea harvest scheduling
Artificial Intelligence in Agriculture Pub Date: 2023-10-12 DOI: 10.1016/j.aiia.2023.10.001
Bedirhan Sarımehmet, Mehmet Pınarbaşı, Hacı Mehmet Alakaş, Tamer Eren
Abstract: Ensuring sustainability in agriculture requires solving many optimization problems, an important one of which is the harvest scheduling problem. This study addresses the harvest scheduling problem for tea. The tea harvest problem involves creating a harvest schedule that considers farmers' quotas under purchase-location and factory capacity constraints. Tea harvesting is carried out in cooperation between farmers and the factory. Factory management is concerned with the use of its resources, so factory capacity, purchase-location capacities, and the number of expeditions must be considered during the harvesting process. On the farmers' side, many farmers have other primary professions and often cannot attend to them on harvest days. Considering farmers' harvest-day preferences when creating the harvest schedule is therefore of great importance for sustainability in agriculture. Two mathematical models are proposed to solve this problem. The first model minimizes the number of weekly expeditions of factory vehicles within the factory and purchase-location capacity restrictions. The second model minimizes the number of expeditions while also complying with the farmers' preferences as much as possible. A sample application was performed in a region with 12 purchase locations, 988 farmers, and 3392 decares of tea fields. The results show that the rate of compliance with farmers' harvesting preferences could be increased from 52% to 97% without affecting the factory's number of expeditions. Considering farmers' preferences for the harvest day thus has no negative impact on the factory; on the contrary, it increases sustainability and encouragement in agriculture. Furthermore, the results show that the models are effective for solving the problem. (Artificial Intelligence in Agriculture, Volume 10, Pages 35-45)
Citations: 0
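To make the scheduling idea in this entry concrete, below is a toy PuLP model that assigns each farmer one harvest day, respects a daily factory capacity, and penalizes assignments that miss the preferred day. All data and the single-capacity formulation are made up for illustration; the paper's two models are richer (purchase-location capacities, weekly expeditions).

```python
# Toy harvest-day assignment model in PuLP: one day per farmer, limited daily capacity,
# and a penalty for off-preference assignments (an illustrative simplification with made-up data,
# not the paper's two mathematical models).
import pulp

farmers = ["f1", "f2", "f3", "f4"]
days = ["mon", "tue", "wed"]
preferred = {"f1": "mon", "f2": "mon", "f3": "tue", "f4": "wed"}   # hypothetical preferences
quota = {"f1": 10, "f2": 8, "f3": 12, "f4": 6}                     # hypothetical quotas (e.g. tonnes)
daily_capacity = 20                                                # hypothetical factory capacity/day

x = pulp.LpVariable.dicts("assign", (farmers, days), cat="Binary")
prob = pulp.LpProblem("tea_harvest_scheduling", pulp.LpMinimize)

# Objective: number of assignments that violate the farmer's preferred day.
prob += pulp.lpSum(x[f][d] for f in farmers for d in days if d != preferred[f])

for f in farmers:                                   # every farmer harvests on exactly one day
    prob += pulp.lpSum(x[f][d] for d in days) == 1
for d in days:                                      # daily intake must respect factory capacity
    prob += pulp.lpSum(quota[f] * x[f][d] for f in farmers) <= daily_capacity

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for f in farmers:
    print(f, [d for d in days if pulp.value(x[f][d]) > 0.5][0])
```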
Machine learning-based spectral and spatial analysis of hyper- and multi-spectral leaf images for Dutch elm disease detection and resistance screening
Artificial Intelligence in Agriculture Pub Date: 2023-09-26 DOI: 10.1016/j.aiia.2023.09.003
Xing Wei, Jinnuo Zhang, Anna O. Conrad, Charles E. Flower, Cornelia C. Pinchot, Nancy Hayes-Plazolles, Ziling Chen, Zhihang Song, Songlin Fei, Jian Jin
Abstract: Diseases caused by invasive pathogens are an increasing threat to forest health, and early, accurate disease detection is essential for timely, precise forest management. Recent technological advancements in spectral imaging and artificial intelligence have opened up new possibilities for plant disease detection in both crops and trees. In this study, Dutch elm disease (DED; caused by Ophiostoma novo-ulmi) and American elm (Ulmus americana) were used as an example pathosystem to evaluate the accuracy of two in-house-developed, high-precision portable hyper- and multi-spectral leaf imagers combined with machine learning as new tools for forest disease detection. Hyper- and multi-spectral images were collected from leaves of American elm genotypes with varied disease susceptibilities after mock-inoculation and inoculation with O. novo-ulmi under greenhouse conditions. Both traditional machine learning and state-of-the-art deep learning models were built upon derived spectra and directly upon spectral image cubes. Deep learning models that incorporate both spectral and spatial features of high-resolution spectral leaf images performed better at detecting DED than traditional machine learning models built upon spectral features alone. Edges and symptomatic spots on the leaves were highlighted by the deep learning model as important spatial features for distinguishing leaves of inoculated and mock-inoculated trees. In addition, spectral and spatial feature patterns identified in the machine learning-based models were found to relate to the DED susceptibility of elm genotypes. Although further studies are needed to assess applications in other pathosystems, hyper- and multi-spectral leaf imagers combined with machine learning show potential as new tools for disease phenotyping in trees. (Artificial Intelligence in Agriculture, Volume 10, Pages 26-34)
Citations: 0
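The "traditional machine learning on derived spectra" baseline mentioned in this entry can be illustrated with a simple classifier on per-leaf mean spectra. The scikit-learn sketch below uses a random forest on synthetic spectra; the model choice, band count, and data are assumptions, not the paper's exact model set.

```python
# Minimal scikit-learn sketch of a traditional-ML baseline on derived spectra:
# a random-forest classifier on per-leaf mean spectra (illustrative; synthetic data and the choice
# of random forest are assumptions).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n_leaves, n_bands = 200, 150                       # e.g., 150 spectral bands per leaf (assumed)
X = rng.random((n_leaves, n_bands))                # stand-in for mean reflectance spectra
y = rng.integers(0, 2, n_leaves)                   # 0 = mock-inoculated, 1 = inoculated

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))  # ~0.5 on this random data
```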