Artificial Intelligence in Agriculture: Latest Publications

Unveiling the drivers contributing to global wheat yield shocks through quantile regression
IF 8.2
Artificial Intelligence in Agriculture | Pub Date: 2025-03-22 | DOI: 10.1016/j.aiia.2025.03.004
Srishti Vishwakarma, Xin Zhang, Vyacheslav Lyubchich
{"title":"Unveiling the drivers contributing to global wheat yield shocks through quantile regression","authors":"Srishti Vishwakarma ,&nbsp;Xin Zhang ,&nbsp;Vyacheslav Lyubchich","doi":"10.1016/j.aiia.2025.03.004","DOIUrl":"10.1016/j.aiia.2025.03.004","url":null,"abstract":"<div><div>Sudden reductions in crop yield (i.e., yield shocks) severely disrupt the food supply, intensify food insecurity, depress farmers' welfare, and worsen a country's economic conditions. Here, we study the spatiotemporal patterns of wheat yield shocks, quantified by the lower quantiles of yield fluctuations, in 86 countries over 30 years. Furthermore, we assess the relationships between shocks and their key ecological and socioeconomic drivers using quantile regression based on statistical (linear quantile mixed model) and machine learning (quantile random forest) models. Using a panel dataset that captures spatiotemporal patterns of yield shocks and possible drivers in 86 countries, we find that the severity of yield shocks has been increasing globally since 1997. Moreover, our cross-validation exercise shows that quantile random forest outperforms the linear quantile regression model. Despite this performance difference, both models consistently reveal that the severity of shocks is associated with higher weather stress, nitrogen fertilizer application rate, and gross domestic product (GDP) per capita (a typical indicator for economic and technological advancement in a country). While the unexpected negative association between more severe wheat yield shocks and higher fertilizer application rate and GDP per capita does not imply a direct causal effect, they indicate that the advancement in wheat production has been primarily on achieving higher yields and less on lowering the possibility and magnitude of sharp yield reductions. Hence, in the context of growing extreme weather stress, there is a critical need to enhance the technology and management practices that mitigate yield shocks to improve the resilience of the world food systems.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 3","pages":"Pages 564-572"},"PeriodicalIF":8.2,"publicationDate":"2025-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144106557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
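A minimal sketch of the lower-quantile regression idea described in the abstract above: fit the 10th percentile of a yield anomaly against candidate drivers. The column names and synthetic data are assumptions, not the paper's panel dataset, and statsmodels' QuantReg stands in for the linear quantile mixed model.

```python
# Sketch: lower-quantile regression of yield anomalies on candidate drivers.
# Hypothetical column names and synthetic data; not the study's dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "weather_stress": rng.normal(size=n),
    "n_fertilizer": rng.normal(size=n),
    "gdp_per_capita": rng.normal(size=n),
})
# Synthetic yield anomaly whose *lower tail* depends on the drivers.
noise_scale = (1 + 0.5 * np.abs(df["weather_stress"])).to_numpy()
df["yield_anomaly"] = -0.5 * df["weather_stress"] + rng.normal(scale=noise_scale, size=n)

# Fit the 10th percentile, a proxy for yield-shock severity.
model = smf.quantreg("yield_anomaly ~ weather_stress + n_fertilizer + gdp_per_capita", df)
fit = model.fit(q=0.1)
print(fit.summary())
```

A quantile random forest counterpart could be approximated with, for example, scikit-learn's GradientBoostingRegressor(loss="quantile", alpha=0.1); the paper's exact implementation is not reproduced here.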
A new tool to improve the computation of animal kinetic activity indices in precision poultry farming
IF 8.2
Artificial Intelligence in Agriculture | Pub Date: 2025-03-22 | DOI: 10.1016/j.aiia.2025.03.005
Alberto Carraro, Mattia Pravato, Francesco Marinello, Francesco Bordignon, Angela Trocino, Gerolamo Xiccato, Andrea Pezzuolo
{"title":"A new tool to improve the computation of animal kinetic activity indices in precision poultry farming","authors":"Alberto Carraro ,&nbsp;Mattia Pravato ,&nbsp;Francesco Marinello ,&nbsp;Francesco Bordignon ,&nbsp;Angela Trocino ,&nbsp;Gerolamo Xiccato ,&nbsp;Andrea Pezzuolo","doi":"10.1016/j.aiia.2025.03.005","DOIUrl":"10.1016/j.aiia.2025.03.005","url":null,"abstract":"<div><div>Precision Livestock Farming (PLF) emerges as a promising solution for revolutionising farming by enabling real-time automated monitoring of animals through smart technologies. PLF provides farmers with precise data to enhance farm management, increasing productivity and profitability. For instance, it allows for non-intrusive health assessments, contributing to maintaining a healthy herd while reducing stress associated with handling. In the poultry sector, image analysis can be utilised to monitor and analyse the behaviour of each hen in real time. Researchers have recently used machine learning algorithms to monitor the behaviour, health, and positioning of hens through computer vision techniques. Convolutional neural networks, a type of deep learning algorithm, have been utilised for image analysis to identify and categorise various hen behaviours and track specific activities like feeding and drinking. This research presents an automated system for analysing laying hen movement using video footage from surveillance cameras. With a customised implementation of object tracking, the system can efficiently process hundreds of hours of videos while maintaining high measurement precision. Its modular implementation adapts well to optimally exploit the GPU computing capabilities of the hardware platform it is running on. The use of this system is beneficial for both real-time monitoring and post-processing, contributing to improved monitoring capabilities in precision livestock farming.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 4","pages":"Pages 659-670"},"PeriodicalIF":8.2,"publicationDate":"2025-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144253342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
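As an illustration of the kind of per-frame kinetic activity index computed from surveillance video, here is a minimal frame-differencing sketch with OpenCV. It is a stand-in under stated assumptions, not the authors' tracking-based pipeline, and the video file name is hypothetical.

```python
# Sketch: a simple per-frame kinetic activity index from surveillance video.
# Frame differencing only; the paper's method uses object tracking, which this does not reproduce.
import cv2
import numpy as np

def activity_index(video_path: str) -> list[float]:
    cap = cv2.VideoCapture(video_path)
    indices, prev_gray = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (5, 5), 0)
        if prev_gray is not None:
            diff = cv2.absdiff(gray, prev_gray)
            # Fraction of pixels whose intensity changed noticeably between frames.
            indices.append(float(np.mean(diff > 25)))
        prev_gray = gray
    cap.release()
    return indices

# Usage (hypothetical file name):
# print(np.mean(activity_index("pen_camera_01.mp4")))
```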
Digitalizing greenhouse trials: An automated approach for efficient and objective assessment of plant damage using deep learning
IF 8.2
Artificial Intelligence in Agriculture | Pub Date: 2025-03-17 | DOI: 10.1016/j.aiia.2025.03.001
Laura Gómez-Zamanillo, Arantza Bereciartúa-Pérez, Artzai Picón, Liliana Parra, Marian Oldenbuerger, Ramón Navarra-Mestre, Christian Klukas, Till Eggers, Jone Echazarra
{"title":"Digitalizing greenhouse trials: An automated approach for efficient and objective assessment of plant damage using deep learning","authors":"Laura Gómez-Zamanillo ,&nbsp;Arantza Bereciartúa-Pérez ,&nbsp;Artzai Picón ,&nbsp;Liliana Parra ,&nbsp;Marian Oldenbuerger ,&nbsp;Ramón Navarra-Mestre ,&nbsp;Christian Klukas ,&nbsp;Till Eggers ,&nbsp;Jone Echazarra","doi":"10.1016/j.aiia.2025.03.001","DOIUrl":"10.1016/j.aiia.2025.03.001","url":null,"abstract":"<div><div>The use of image based and, recently, deep learning-based systems have provided good results in several applications. Greenhouse trials are key part in the process of developing and testing new herbicides and analyze the response of the species to different products and doses in a controlled way. The assessment of the damage in the plant is daily done in all trials by visual evaluation by experts. This entails time consuming process and lack of repeatability. Greenhouse trials require new digital tools to reduce time consuming process and to endow the experts with more objective and repetitive methods for establishing the damage in the plants.</div><div>To this end, a novel method is proposed composed by an initial segmentation of the plant species followed by a multibranch convolutional neural network to estimate the damage level. In this way, we overcome the need for costly and unaffordable pixelwise manual segmentation for damage symptoms and we make use of global damage estimation values provided by the experts.</div><div>The algorithm has been deployed under real greenhouse trials conditions in a pilot study located in BASF in Germany and tested over four species (GLXMA, TRZAW, ECHCG, AMARE). The results show mean average error (MAE) values ranging from 5.20 for AMARE and 8.07 for ECHCG for the estimation of PDCU value, with correlation values (R<sup>2</sup>) higher than 0.85 in all situations, and up to 0.92 in AMARE. These results surpass the inter-rater variability of human experts demonstrating that the proposed automated method is appropriate for automatically assessing greenhouse damage trials.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 2","pages":"Pages 280-295"},"PeriodicalIF":8.2,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143684582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
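The abstract above describes an initial plant segmentation followed by a multibranch CNN that regresses a global damage score from expert-provided labels. Below is a minimal PyTorch sketch of such a regression network trained with an MAE (L1) objective; the two-branch layout, input sizes, and layer widths are illustrative assumptions, not the paper's architecture.

```python
# Sketch: a small two-branch CNN that regresses a global damage score (0-100)
# from an RGB crop and its plant segmentation mask. Illustrative only.
import torch
import torch.nn as nn

class DamageRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        def branch(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.rgb_branch = branch(3)    # image branch
        self.mask_branch = branch(1)   # segmentation-mask branch
        self.head = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, rgb, mask):
        feats = torch.cat([self.rgb_branch(rgb), self.mask_branch(mask)], dim=1)
        return self.head(feats).squeeze(1)

model = DamageRegressor()
loss_fn = nn.L1Loss()  # L1 loss == mean absolute error, matching the reported MAE metric
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One dummy training step on random tensors (stand-ins for real trial images).
rgb, mask = torch.rand(4, 3, 128, 128), torch.rand(4, 1, 128, 128)
target = torch.rand(4) * 100
loss = loss_fn(model(rgb, mask), target)
loss.backward(); opt.step()
```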
Boosting grapevine phenological stages prediction based on climatic data by pseudo-labeling approach
IF 8.2
Artificial Intelligence in Agriculture | Pub Date: 2025-03-17 | DOI: 10.1016/j.aiia.2025.03.003
Mehdi Fasihi, Mirko Sodini, Alex Falcon, Francesco Degano, Paolo Sivilotti, Giuseppe Serra
{"title":"Boosting grapevine phenological stages prediction based on climatic data by pseudo-labeling approach","authors":"Mehdi Fasihi ,&nbsp;Mirko Sodini ,&nbsp;Alex Falcon ,&nbsp;Francesco Degano ,&nbsp;Paolo Sivilotti ,&nbsp;Giuseppe Serra","doi":"10.1016/j.aiia.2025.03.003","DOIUrl":"10.1016/j.aiia.2025.03.003","url":null,"abstract":"<div><div>Predicting grapevine phenological stages (GPHS) is critical for precisely managing vineyard operations, including plant disease treatments, pruning, and harvest. Solutions commonly used to address viticulture challenges rely on image processing techniques, which have achieved significant results. However, they require the installation of dedicated hardware in the vineyard, making it invasive and difficult to maintain. Moreover, accurate prediction is influenced by the interplay of climatic factors, especially temperature, and the impact of global warming, which are difficult to model using images. Another problem frequently found in GPHS prediction is the persistent issue of missing values in viticultural datasets, particularly in phenological stages. This paper proposes a semi-supervised approach that begins with a small set of labeled phenological stage examples and automatically generates new annotations for large volumes of unlabeled climatic data. This approach aims to address key challenges in phenological analysis. This novel climatic data-based approach offers advantages over common image processing methods, as it is non-intrusive, cost-effective, and adaptable for vineyards of various sizes and technological levels. To ensure the robustness of the proposed Pseudo-labelling strategy, we integrated it into eight machine-learning algorithms. We evaluated its performance across seven diverse datasets, each exhibiting varying percentages of missing values. Performance metrics, including the coefficient of determination (R<sup>2</sup>) and root-mean-square error (RMSE), are employed to assess the effectiveness of the models. The study demonstrates that integrating the proposed Pseudo-labeling strategy with supervised learning approaches significantly improves predictive accuracy. Moreover, the study shows that the proposed methodology can also be integrated with explainable artificial intelligence techniques to determine the importance of the input features. In particular, the investigation highlights that growing degree days are crucial for improved GPHS prediction.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 3","pages":"Pages 550-563"},"PeriodicalIF":8.2,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144106556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
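A minimal sketch of the pseudo-labeling idea for a regression target: train on the small labeled set, predict labels for unlabeled climatic records, then retrain on the union. The random-forest base learner and the synthetic arrays are assumptions; the paper evaluates eight learners on real vineyard datasets.

```python
# Sketch: pseudo-labeling for regression with a random-forest base learner.
# Synthetic data; illustrates the retraining loop, not the paper's exact pipeline.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(42)
coeffs = np.array([2.0, -1.0, 0.5, 0.0])
X_labeled = rng.normal(size=(50, 4))      # e.g., temperature-derived features
y_labeled = X_labeled @ coeffs + rng.normal(scale=0.3, size=50)
X_unlabeled = rng.normal(size=(500, 4))   # records with missing phenological stage
X_test = rng.normal(size=(100, 4))
y_test = X_test @ coeffs + rng.normal(scale=0.3, size=100)

# 1) Fit on the labeled subset only.
base = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_labeled, y_labeled)

# 2) Generate pseudo-labels for the unlabeled records.
pseudo_y = base.predict(X_unlabeled)

# 3) Retrain on labeled + pseudo-labeled data.
X_aug = np.vstack([X_labeled, X_unlabeled])
y_aug = np.concatenate([y_labeled, pseudo_y])
boosted = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_aug, y_aug)

for name, m in [("labeled only", base), ("with pseudo-labels", boosted)]:
    pred = m.predict(X_test)
    rmse = mean_squared_error(y_test, pred) ** 0.5
    print(f"{name}: R2={r2_score(y_test, pred):.3f} RMSE={rmse:.3f}")
```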
TGFN-SD: A text-guided multimodal fusion network for swine disease diagnosis
IF 8.2
Artificial Intelligence in Agriculture | Pub Date: 2025-03-14 | DOI: 10.1016/j.aiia.2025.03.002
Gan Yang, Qifeng Li, Chunjiang Zhao, Chaoyuan Wang, Hua Yan, Rui Meng, Yu Liu, Ligen Yu
{"title":"TGFN-SD: A text-guided multimodal fusion network for swine disease diagnosis","authors":"Gan Yang ,&nbsp;Qifeng Li ,&nbsp;Chunjiang Zhao ,&nbsp;Chaoyuan Wang ,&nbsp;Hua Yan ,&nbsp;Rui Meng ,&nbsp;Yu Liu ,&nbsp;Ligen Yu","doi":"10.1016/j.aiia.2025.03.002","DOIUrl":"10.1016/j.aiia.2025.03.002","url":null,"abstract":"<div><div>China is the world's largest producer of pigs, but traditional manual prevention, treatment, and diagnosis methods cannot satisfy the demands of the current intensive production environment. Existing computer-aided diagnosis (CAD) systems for pigs are dominated by expert systems, which cannot be widely applied because the collection and maintenance of knowledge is difficult, and most of them ignore the effect of multimodal information. A swine disease diagnosis model was proposed in this study, the Text-Guided Fusion Network-Swine Diagnosis (TGFN-SD) model, which integrated text case reports and disease images. The model integrated the differences and complementary information in the multimodal representation of diseases through the text-guided transformer module such that text case reports could carry the semantic information of disease images for disease identification. Moreover, it alleviated the phenotypic overlap problem caused by similar diseases in combination with supervised learning and self-supervised learning. Experimental results revealed that TGFN-SD achieved satisfactory performance on a constructed swine disease image and text dataset (SDT6K) that covered six disease classification datasets with accuracy and F1-score of 94.48 % and 94.4 % respectively. The accuracies and F1-scores increased by 8.35 % and 7.24 % compared with those under the unimodal situation and by 2.02 % and 1.63 % compared with those of the optimal baseline model under the multimodal fusion. Additionally, interpretability analysis revealed that the model focus area was consistent with the habits and rules of the veterinary clinical diagnosis of pigs, indicating the effectiveness of the proposed model and providing new ideas and perspectives for the study of swine disease CAD.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 2","pages":"Pages 266-279"},"PeriodicalIF":8.2,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143684646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
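A minimal sketch of text-guided multimodal fusion using cross-attention, where text-token features attend to image-patch features before classification. The dimensions, the single attention layer, and the random stand-in tensors are illustrative assumptions rather than the TGFN-SD architecture.

```python
# Sketch: text features attend to image features (cross-attention), then classify.
# Feature extractors are omitted; random tensors stand in for encoder outputs.
import torch
import torch.nn as nn

class TextGuidedFusion(nn.Module):
    def __init__(self, dim=256, n_classes=6):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)
        self.classifier = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, n_classes))

    def forward(self, text_tokens, image_patches):
        # Query = text tokens, Key/Value = image patches: the text "guides" which
        # image regions contribute to the fused representation.
        fused, _ = self.cross_attn(text_tokens, image_patches, image_patches)
        return self.classifier(fused.mean(dim=1))   # pool over text tokens

model = TextGuidedFusion()
text_tokens = torch.rand(2, 32, 256)     # e.g., encoded case-report tokens
image_patches = torch.rand(2, 49, 256)   # e.g., encoded image patches
logits = model(text_tokens, image_patches)
print(logits.shape)  # torch.Size([2, 6])
```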
A review of the application prospects of cloud-edge-end collaborative technology in freshwater aquaculture
IF 8.2
Artificial Intelligence in Agriculture | Pub Date: 2025-03-04 | DOI: 10.1016/j.aiia.2025.02.008
Jihao Wang, Xiaochan Wang, Yinyan Shi, Haihui Yang, Bo Jia, Xiaolei Zhang, Lebin Lin
{"title":"A review of the application prospects of cloud-edge-end collaborative technology in freshwater aquaculture","authors":"Jihao Wang ,&nbsp;Xiaochan Wang ,&nbsp;Yinyan Shi ,&nbsp;Haihui Yang ,&nbsp;Bo Jia ,&nbsp;Xiaolei Zhang ,&nbsp;Lebin Lin","doi":"10.1016/j.aiia.2025.02.008","DOIUrl":"10.1016/j.aiia.2025.02.008","url":null,"abstract":"<div><div>This paper reviews the application and potential of cloud-edge-end collaborative (CEEC) technology in the field of freshwater aquaculture, a rapidly developing sector driven by the growing global demand for aquatic products. The sustainable development of freshwater aquaculture has become a critical challenge due to issues such as water pollution and inefficient resource utilization in traditional farming methods. In response to these challenges, the integration of smart technologies has emerged as a promising solution to improve both efficiency and sustainability. Cloud computing and edge computing, when combined, form the backbone of CEEC technology, offering an innovative approach that can significantly enhance aquaculture practices. By leveraging the strengths of both technologies, CEEC enables efficient data processing through cloud infrastructure and real-time responsiveness via edge computing, making it a compelling solution for modern aquaculture. This review explores the key applications of CEEC in areas such as environmental monitoring, intelligent feeding systems, health management, and product traceability. The ability of CEEC technology to optimize the aquaculture environment, enhance product quality, and boost overall farming efficiency highlights its potential to become a mainstream solution in the industry. Furthermore, the paper discusses the limitations and challenges that need to be addressed in order to fully realize the potential of CEEC in freshwater aquaculture. In conclusion, this paper provides researchers and practitioners with valuable insights into the current state of CEEC technology in aquaculture, offering suggestions for future development and optimization to further enhance its contributions to the sustainable growth of freshwater aquaculture.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 2","pages":"Pages 232-251"},"PeriodicalIF":8.2,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143620570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Prediction of sugar beet yield and quality parameters using Stacked-LSTM model with pre-harvest UAV time series data and meteorological factors
IF 8.2
Artificial Intelligence in Agriculture | Pub Date: 2025-02-27 | DOI: 10.1016/j.aiia.2025.02.004
Qing Wang, Ke Shao, Zhibo Cai, Yingpu Che, Haochong Chen, Shunfu Xiao, Ruili Wang, Yaling Liu, Baoguo Li, Yuntao Ma
{"title":"Prediction of sugar beet yield and quality parameters using Stacked-LSTM model with pre-harvest UAV time series data and meteorological factors","authors":"Qing Wang ,&nbsp;Ke Shao ,&nbsp;Zhibo Cai ,&nbsp;Yingpu Che ,&nbsp;Haochong Chen ,&nbsp;Shunfu Xiao ,&nbsp;Ruili Wang ,&nbsp;Yaling Liu ,&nbsp;Baoguo Li ,&nbsp;Yuntao Ma","doi":"10.1016/j.aiia.2025.02.004","DOIUrl":"10.1016/j.aiia.2025.02.004","url":null,"abstract":"<div><div>Accurate pre-harvest prediction of sugar beet yield is vital for effective agricultural management and decision-making. However, traditional methods are constrained by reliance on empirical knowledge, time-consuming processes, resource intensiveness, and spatial-temporal variability in prediction accuracy. This study presented a plot-level approach that leverages UAV technology and recurrent neural networks to provide field yield predictions within the same growing season, addressing a significant gap in previous research that often focuses on regional scale predictions relied on multi-year history datasets. End-of-season yield and quality parameters were forecasted using UAV-derived time series data and meteorological factors collected at three critical growth stages, providing a timely and practical tool for farm management. Two years of data covering 185 sugar beet varieties were used to train a developed stacked Long Short-Term Memory (LSTM) model, which was compared with traditional machine learning approaches. Incorporating fresh weight estimates of aboveground and root biomass as predictive factors significantly enhanced prediction accuracy. Optimal performance in prediction was observed when utilizing data from all three growth periods, with <em>R</em><sup>2</sup> values of 0.761 (rRMSE = 7.1 %) for sugar content, 0.531 (rRMSE = 22.5 %) for root yield, and 0.478 (rRMSE = 23.4 %) for sugar yield. Furthermore, combining data from the first two growth periods shows promising results for making the predictions earlier. Key predictive features identified through the Permutation Importance (PIMP) method provided insights into the main factors influencing yield. These findings underscore the potential of using UAV time-series data and recurrent neural networks for accurate pre-harvest yield prediction at the field scale, supporting timely and precise agricultural decisions.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 2","pages":"Pages 252-265"},"PeriodicalIF":8.2,"publicationDate":"2025-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143643996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
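A minimal PyTorch sketch of a stacked LSTM that maps a three-step time series of per-plot features (e.g., UAV-derived traits plus meteorological factors at the three growth stages) to a single yield or quality value. The feature count, layer sizes, and random data are assumptions, not the paper's model configuration.

```python
# Sketch: stacked LSTM regressor over a short per-plot time series.
# Random tensors stand in for UAV-derived and meteorological features.
import torch
import torch.nn as nn

class StackedLSTMRegressor(nn.Module):
    def __init__(self, n_features=10, hidden=64, n_layers=2):
        super().__init__()
        # num_layers=2 stacks two LSTM layers on top of each other.
        self.lstm = nn.LSTM(n_features, hidden, num_layers=n_layers, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):            # x: (batch, time_steps, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1]).squeeze(1)   # regress from the last time step

model = StackedLSTMRegressor()
x = torch.rand(8, 3, 10)             # 8 plots, 3 growth stages, 10 features each
y = torch.rand(8)                    # e.g., sugar content per plot
loss = nn.MSELoss()(model(x), y)
loss.backward()
print(float(loss))
```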
Deep learning-based classification, detection, and segmentation of tomato leaf diseases: A state-of-the-art review
IF 8.2
Artificial Intelligence in Agriculture | Pub Date: 2025-02-20 | DOI: 10.1016/j.aiia.2025.02.006
Aritra Das, Fahad Pathan, Jamin Rahman Jim, Md Mohsin Kabir, M.F. Mridha
{"title":"Deep learning-based classification, detection, and segmentation of tomato leaf diseases: A state-of-the-art review","authors":"Aritra Das ,&nbsp;Fahad Pathan ,&nbsp;Jamin Rahman Jim ,&nbsp;Md Mohsin Kabir ,&nbsp;M.F. Mridha","doi":"10.1016/j.aiia.2025.02.006","DOIUrl":"10.1016/j.aiia.2025.02.006","url":null,"abstract":"<div><div>The early identification and treatment of tomato leaf diseases are crucial for optimizing plant productivity, efficiency and quality. Misdiagnosis by the farmers poses the risk of inadequate treatments, harming both tomato plants and agroecosystems. Precision of disease diagnosis is essential, necessitating a swift and accurate response to misdiagnosis for early identification. Tropical regions are ideal for tomato plants, but there are inherent concerns, such as weather-related problems. Plant diseases largely cause financial losses in crop production. The slow detection periods of conventional approaches are insufficient for the timely detection of tomato diseases. Deep learning has emerged as a promising avenue for early disease identification. This study comprehensively analyzed techniques for classifying and detecting tomato leaf diseases and evaluating their strengths and weaknesses. The study delves into various diagnostic procedures, including image pre-processing, localization and segmentation. In conclusion, applying deep learning algorithms holds great promise for enhancing the accuracy and efficiency of tomato leaf disease diagnosis by offering faster and more effective results.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 2","pages":"Pages 192-220"},"PeriodicalIF":8.2,"publicationDate":"2025-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143520260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
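As a concrete instance of the classification pipelines this review surveys, here is a minimal transfer-learning sketch that fine-tunes a pretrained ResNet-18 on leaf-disease classes. The class count, synthetic FakeData stand-in, and hyperparameters are assumptions for illustration; a real run would swap in an ImageFolder dataset of labeled leaf images.

```python
# Sketch: fine-tune a pretrained ResNet-18 for tomato leaf disease classification.
# FakeData keeps the example self-contained; replace it with
# datasets.ImageFolder("path/to/leaf_images", transform=tf) for real data.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

n_classes = 10  # e.g., nine diseases plus healthy leaves (assumed)
tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_ds = datasets.FakeData(size=64, image_size=(3, 224, 224), num_classes=n_classes, transform=tf)
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, n_classes)  # replace the ImageNet head

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
model.train()
for images, labels in train_dl:          # one epoch shown for brevity
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```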
Using UAV-based multispectral images and CGS-YOLO algorithm to distinguish maize seeding from weed
IF 8.2
Artificial Intelligence in Agriculture | Pub Date: 2025-02-17 | DOI: 10.1016/j.aiia.2025.02.007
Boyi Tang, Jingping Zhou, Chunjiang Zhao, Yuchun Pan, Yao Lu, Chang Liu, Kai Ma, Xuguang Sun, Ruifang Zhang, Xiaohe Gu
{"title":"Using UAV-based multispectral images and CGS-YOLO algorithm to distinguish maize seeding from weed","authors":"Boyi Tang ,&nbsp;Jingping Zhou ,&nbsp;Chunjiang Zhao ,&nbsp;Yuchun Pan ,&nbsp;Yao Lu ,&nbsp;Chang Liu ,&nbsp;Kai Ma ,&nbsp;Xuguang Sun ,&nbsp;Ruifang Zhang ,&nbsp;Xiaohe Gu","doi":"10.1016/j.aiia.2025.02.007","DOIUrl":"10.1016/j.aiia.2025.02.007","url":null,"abstract":"<div><div>Accurate recognition of maize seedlings on the plot scale under the disturbance of weeds is crucial for early seedling replenishment and weed removal. Currently, UAV-based maize seedling recognition depends primarily on RGB images. The main purpose of this study is to compare the performances of multispectral images and RGB images of unmanned aerial vehicle (UAV) on maize seeding recognition using deep learning algorithms. Additionally, we aim to assess the disturbance of different weed coverage on the recognition of maize seeding. Firstly, principal component analysis was used in multispectral image transformation. Secondly, by introducing the CARAFE sampling operator and a small target detection layer (SLAY), we extracted the contextual information of each pixel to retain weak features in the maize seedling image. Thirdly, the global attention mechanism (GAM) was employed to capture the features of maize seedlings using the dual attention mechanism of spatial and channel information. The CGS-YOLO algorithm was constructed and formed. Finally, we compared the performance of the improved algorithm with a series of deep learning algorithms, including YOLO v3, v5, v6 and v8. The results show that after PCA transformation, the recognition mAP of maize seedlings reaches 82.6 %, representing 3.1 percentage points improvement compared to RGB images. Compared with YOLOv8, YOLOv6, YOLOv5, and YOLOv3, the CGS-YOLO algorithm has improved mAP by 3.8, 4.2, 4.5 and 6.6 percentage points, respectively. With the increase of weed coverage, the recognition effect of maize seedlings gradually decreased. When weed coverage was more than 70 %, the mAP difference becomes significant, but CGS-YOLO still maintains a recognition mAP of 72 %. Therefore, in maize seedings recognition, UAV-based multispectral images perform better than RGB images. The application of CGS-YOLO deep learning algorithm with UAV multi-spectral images proves beneficial in the recognition of maize seedlings under weed disturbance.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 2","pages":"Pages 162-181"},"PeriodicalIF":8.2,"publicationDate":"2025-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143512498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
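The abstract mentions transforming the multispectral bands with PCA before detection. A minimal sketch of that step with scikit-learn is shown below: it projects an n-band image onto its first three principal components and rescales them to an 8-bit, 3-channel image. The band count and array shapes are assumptions, not the study's sensor configuration.

```python
# Sketch: reduce an n-band multispectral image to 3 PCA channels for a detector
# that expects 3-channel input. Shapes and band count are illustrative.
import numpy as np
from sklearn.decomposition import PCA

def multispectral_to_pca_rgb(cube: np.ndarray) -> np.ndarray:
    """cube: (height, width, n_bands) reflectance array -> (height, width, 3) uint8."""
    h, w, n_bands = cube.shape
    flat = cube.reshape(-1, n_bands).astype(np.float32)
    comps = PCA(n_components=3).fit_transform(flat)          # (h*w, 3)
    # Min-max scale each component to 0-255 so it can be saved or fed like an RGB image.
    comps -= comps.min(axis=0)
    comps /= np.maximum(comps.max(axis=0), 1e-8)
    return (comps * 255).astype(np.uint8).reshape(h, w, 3)

# Usage with a random stand-in for a 5-band UAV tile:
tile = np.random.rand(256, 256, 5)
print(multispectral_to_pca_rgb(tile).shape)  # (256, 256, 3)
```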
Stereo vision based broccoli recognition and attitude estimation method for field harvesting
IF 8.2
Artificial Intelligence in Agriculture | Pub Date: 2025-02-13 | DOI: 10.1016/j.aiia.2025.02.002
Zhenni He, Fahui Yuan, Yansuo Zhou, Bingbo Cui, Yong He, Yufei Liu
{"title":"Stereo vision based broccoli recognition and attitude estimation method for field harvesting","authors":"Zhenni He ,&nbsp;Fahui Yuan ,&nbsp;Yansuo Zhou ,&nbsp;Bingbo Cui ,&nbsp;Yong He ,&nbsp;Yufei Liu","doi":"10.1016/j.aiia.2025.02.002","DOIUrl":"10.1016/j.aiia.2025.02.002","url":null,"abstract":"<div><div>At present, automatic broccoli harvest in field still faces some issues. It is difficult to segment broccoli in real time under complex field background, and hard to pick tilt-growing broccoli for the end-effector of robot. In this research, an improved YOLOv8n-seg model, named YOLO-Broccoli-Seg was proposed for broccoli recognition. Through adding a triplet attention module to YOLOv8-Seg model, the feature fusion capability of the algorithm is improved significantly. The mean average precision mAP50 (Mask), mAP95 (Mask), mAP50 (Bounding Box, Bbox) and mAP95 (Bbox) of YOLO-Broccoli-Seg are 0.973, 0.683, 0.973 and 0.748 respectively. Precision <em>P</em>-value was improved the most, with an increment of 8.7 %. In addition, an attitude estimation method based on three-dimensional point cloud is proposed. When the tilt angle of broccoli is between −30°and 30°, the R<sup>2</sup> between the estimated value and the true value is 0.934. It indicated that this method can well represent the growth attitude of broccoli. This research can provide the rich broccoli information and technical basis for the automated broccoli picking.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 3","pages":"Pages 526-536"},"PeriodicalIF":8.2,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144106614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
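A minimal sketch of estimating a tilt angle from a 3D point cloud: take the dominant axis of the head's points via a principal-component (SVD) fit and measure its angle from the vertical. The use of PCA and the synthetic point cloud are assumptions; the paper's attitude-estimation method may differ in detail.

```python
# Sketch: tilt-angle estimation of a broccoli head from its 3D point cloud.
# The dominant axis is taken as the first principal component of the points.
import numpy as np

def tilt_angle_deg(points: np.ndarray) -> float:
    """points: (N, 3) array of x, y, z coordinates; returns angle from vertical in degrees."""
    centered = points - points.mean(axis=0)
    # Right singular vector with the largest singular value = dominant growth axis.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0] / np.linalg.norm(vt[0])
    vertical = np.array([0.0, 0.0, 1.0])
    cos_angle = abs(float(axis @ vertical))       # sign-insensitive
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

# Synthetic cloud: points scattered along an axis tilted ~20 degrees from vertical.
rng = np.random.default_rng(1)
true_axis = np.array([np.sin(np.radians(20)), 0.0, np.cos(np.radians(20))])
cloud = np.outer(rng.uniform(0, 0.1, 2000), true_axis) + rng.normal(scale=0.005, size=(2000, 3))
print(round(tilt_angle_deg(cloud), 1))  # close to 20.0
```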