Artificial Intelligence in Agriculture: Latest Articles

EU-GAN: A root inpainting network for improving 2D soil-cultivated root phenotyping
IF 8.2
Artificial Intelligence in Agriculture, Pub Date: 2025-06-11, DOI: 10.1016/j.aiia.2025.06.004
Shangyuan Xie , Jiawei Shi , Wen Li , Tao Luo , Weikun Li , Lingfeng Duan , Peng Song , Xiyan Yang , Baoqi Li , Wanneng Yang
{"title":"EU-GAN: A root inpainting network for improving 2D soil-cultivated root phenotyping","authors":"Shangyuan Xie ,&nbsp;Jiawei Shi ,&nbsp;Wen Li ,&nbsp;Tao Luo ,&nbsp;Weikun Li ,&nbsp;Lingfeng Duan ,&nbsp;Peng Song ,&nbsp;Xiyan Yang ,&nbsp;Baoqi Li ,&nbsp;Wanneng Yang","doi":"10.1016/j.aiia.2025.06.004","DOIUrl":"10.1016/j.aiia.2025.06.004","url":null,"abstract":"<div><div>Beyond its fundamental roles in nutrient uptake and plant anchorage, the root system critically influences crop development and stress tolerance. Rhizobox enables in situ and nondestructive phenotypic detection of roots in soil, serving as a cost-effective root imaging method. However, the opacity of the soil often results in intermittent gaps in the root images, which reduces the accuracy of the root phenotype calculations. We present a root inpainting method built upon Generative Adversarial Networks (GANs) architecture In addition, we built a hybrid root inpainting dataset (HRID) that contains 1206 cotton root images with real gaps and 7716 rice root images with generated gaps. Compared with computer simulation root images, our dataset provides real root system architecture (RSA) and root texture information. Our method avoids cropping during training by instead utilizing downsampled images to provide the overall root morphology. The model is trained using binary cross-entropy loss to distinguish between root and non-root pixels. Additionally, Dice loss is employed to mitigate the challenge of imbalanced data distribution Additionally, we remove the skip connections in U-Net and introduce an edge attention module (EAM) to capture more detailed information. Compared with other methods, our approach significantly improves the recall rate from 17.35 % to 35.75 % on the test dataset of 122 cotton root images, revealing improved inpainting capabilities. The trait error reduction rates (TERRs) for the root area, root length, convex hull area, and root depth are 76.07 %, 68.63 %, 48.64 %, and 88.28 %, respectively, enabling a substantial improvement in the accuracy of root phenotyping. The codes for the EU-GAN and the 8922 labeled images are open-access, which could be reused by researchers in other AI-related work. This method establishes a robust solution for root phenotyping, thereby increasing breeding program efficiency and advancing our understanding of root system dynamics.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 4","pages":"Pages 770-782"},"PeriodicalIF":8.2,"publicationDate":"2025-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144307336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
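The combination of binary cross-entropy and Dice loss described in the EU-GAN abstract above is a common recipe for class-imbalanced binary segmentation. Below is a minimal PyTorch sketch of such a combined loss, not the authors' released code; the `dice_weight` balance term is an assumed hyperparameter.

```python
import torch
import torch.nn.functional as F

def bce_dice_loss(logits, target, dice_weight=1.0, eps=1e-6):
    """Combined BCE + Dice loss for binary (root vs. non-root) masks.

    logits: raw network outputs, shape (N, 1, H, W)
    target: binary ground-truth masks, same shape, values in {0, 1}
    """
    # Pixel-wise binary cross-entropy handles per-pixel classification.
    bce = F.binary_cross_entropy_with_logits(logits, target)

    # Dice loss measures region overlap, which is robust to the heavy
    # foreground/background imbalance typical of thin root structures.
    probs = torch.sigmoid(logits)
    intersection = (probs * target).sum(dim=(1, 2, 3))
    union = probs.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = 1.0 - (2.0 * intersection + eps) / (union + eps)

    return bce + dice_weight * dice.mean()
```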
Rapid detection and visualization of physiological signatures in cotton leaves under Verticillium wilt stress
IF 8.2
Artificial Intelligence in Agriculture, Pub Date: 2025-06-06, DOI: 10.1016/j.aiia.2025.06.002
Na Wu , Pan Gao , Jie Wu , Yun Zhao , Xing Xu , Chu Zhang , Erik Alexandersson , Juan Yang , Qinlin Xiao , Yong He
{"title":"Rapid detection and visualization of physiological signatures in cotton leaves under Verticillium wilt stress","authors":"Na Wu ,&nbsp;Pan Gao ,&nbsp;Jie Wu ,&nbsp;Yun Zhao ,&nbsp;Xing Xu ,&nbsp;Chu Zhang ,&nbsp;Erik Alexandersson ,&nbsp;Juan Yang ,&nbsp;Qinlin Xiao ,&nbsp;Yong He","doi":"10.1016/j.aiia.2025.06.002","DOIUrl":"10.1016/j.aiia.2025.06.002","url":null,"abstract":"<div><div>Verticillium wilt poses a severe threat to cotton growth and significantly impacts cotton yield. It is of significant importance to detect Verticillium wilt stress in time. In this study, the effects of Verticillium wilt stress on the microstructure and physiological indicators (SOD, POD, CAT, MDA, Chl<sub>a</sub>, Chl<sub>b</sub>, Chl<sub>ab</sub>, Car) of cotton leaves were investigated, and the feasibility of utilizing hyperspectral imaging to estimate physiological indicators of cotton leaves was explored. The results showed that Verticillium wilt stress-induced alterations in cotton leaf cell morphology, leading to the disruption and decomposition of chloroplasts and mitochondria. In addition, compared to healthy leaves, infected leaves exhibited significantly higher activities of SOD and POD, along with increased MDA amounts, while chlorophyll and carotenoid levels were notably reduced. Furthermore, rapid detection models for cotton physiological indicators were constructed, with the <em>R</em><sub><em>p</em></sub> of the optimal models ranging from 0.809 to 0.975. Based on these models, visual distribution maps of the physiological signatures across cotton leaves were created. These results indicated that the physiological phenotype of cotton leaves could be effectively detected by hyperspectral imaging, which could provide a solid theoretical basis for the rapid detection of Verticillium wilt stress.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 4","pages":"Pages 757-769"},"PeriodicalIF":8.2,"publicationDate":"2025-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144263501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
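Estimating physiological indicators from leaf spectra, as in the study above, is typically posed as multivariate regression from reflectance bands to a measured trait. The abstract does not name the regression algorithm, so the sketch below uses partial least squares (a common choice for hyperspectral data) purely as an illustrative assumption; the array shapes and variable names are hypothetical.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Hypothetical data: mean reflectance spectrum per leaf (n_samples x n_bands)
# and one measured physiological indicator (e.g., SOD activity) per leaf.
rng = np.random.default_rng(0)
spectra = rng.random((200, 256))
sod = spectra[:, 50:60].mean(axis=1) + 0.05 * rng.standard_normal(200)

X_train, X_test, y_train, y_test = train_test_split(
    spectra, sod, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=10)
pls.fit(X_train, y_train)

# Correlation on the held-out prediction set, analogous to the reported R_p.
r2 = r2_score(y_test, pls.predict(X_test).ravel())
r_p = np.sqrt(max(r2, 0.0))
print(f"R_p = {r_p:.3f}")
```

Applying the fitted model pixel by pixel across a hyperspectral image is what yields the visual distribution maps mentioned in the abstract.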
Multi-camera fusion and bird-eye view location mapping for deep learning-based cattle behavior monitoring
IF 8.2
Artificial Intelligence in Agriculture, Pub Date: 2025-06-06, DOI: 10.1016/j.aiia.2025.06.001
Muhammad Fahad Nasir , Alvaro Fuentes , Shujie Han , Jiaqi Liu , Yongchae Jeong , Sook Yoon , Dong Sun Park
{"title":"Multi-camera fusion and bird-eye view location mapping for deep learning-based cattle behavior monitoring","authors":"Muhammad Fahad Nasir ,&nbsp;Alvaro Fuentes ,&nbsp;Shujie Han ,&nbsp;Jiaqi Liu ,&nbsp;Yongchae Jeong ,&nbsp;Sook Yoon ,&nbsp;Dong Sun Park","doi":"10.1016/j.aiia.2025.06.001","DOIUrl":"10.1016/j.aiia.2025.06.001","url":null,"abstract":"<div><div>Cattle behavioral monitoring is an integral component of the modern infrastructure of the livestock industry. Ensuring cattle well-being requires precise observation, typically using wearable devices or surveillance cameras. Integrating deep learning into these systems enhances the monitoring of cattle behavior. However, challenges remain, such as occlusions, pose variations, and limited camera viewpoints, which hinder accurate detection and location mapping of individual cattle. To address these challenges, this paper proposes a multi-viewpoint surveillance system for indoor cattle barns, using footage from four cameras and deep learning-based models including action detection and pose estimation for behavior monitoring. The system accurately detects hierarchical behaviors across camera viewpoints. These results are fed into a Bird's Eye View (BEV) algorithm, producing precise cattle position maps in the barn. Despite complexities like overlapping and non-overlapping camera regions, our system, implemented on a real farm, ensures accurate cattle detection and BEV-based projections in real-time. Detailed experiments validate the system's efficiency, offering an end-to-end methodology for accurate behavior detection and location mapping of individual cattle using multi-camera data.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 4","pages":"Pages 724-743"},"PeriodicalIF":8.2,"publicationDate":"2025-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144263581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
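Projecting per-camera detections into a shared bird's-eye view, as described above, is usually done with a ground-plane homography estimated from a few known reference points. A minimal OpenCV sketch of that projection step follows; the calibration correspondences are placeholders, not the paper's actual barn geometry.

```python
import cv2
import numpy as np

# Four image points (pixels) and their known barn-floor coordinates
# (meters). These correspondences are placeholders for illustration.
img_pts = np.float32([[100, 600], [1180, 590], [900, 250], [300, 260]])
ground_pts = np.float32([[0, 0], [10, 0], [10, 15], [0, 15]])

# The homography maps the camera's view of the ground plane to BEV coords.
H, _ = cv2.findHomography(img_pts, ground_pts)

def to_bev(foot_point_xy):
    """Map a detection's foot point (bottom-center of its bounding box,
    assumed to touch the ground) into bird's-eye-view coordinates."""
    pt = np.float32([[foot_point_xy]])  # shape (1, 1, 2)
    return cv2.perspectiveTransform(pt, H)[0, 0]

# e.g., a cattle bounding box with bottom-center at pixel (640, 560):
print(to_bev((640.0, 560.0)))
```

Repeating this per camera and merging nearby projections is one way to reconcile overlapping camera regions into a single position map.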
A review on enhancing agricultural intelligence with large language models
IF 8.2
Artificial Intelligence in Agriculture, Pub Date: 2025-06-04, DOI: 10.1016/j.aiia.2025.05.006
Hongda Li , Huarui Wu , Qingxue Li , Chunjiang Zhao
{"title":"A review on enhancing agricultural intelligence with large language models","authors":"Hongda Li ,&nbsp;Huarui Wu ,&nbsp;Qingxue Li ,&nbsp;Chunjiang Zhao","doi":"10.1016/j.aiia.2025.05.006","DOIUrl":"10.1016/j.aiia.2025.05.006","url":null,"abstract":"<div><div>This paper systematically explores the application potential of large language models (LLMs) in the field of agricultural intelligence, focusing on key technologies and practical pathways. The study focuses on the adaptation of LLMs to agricultural knowledge, starting with foundational concepts such as architecture design, pre-training strategies, and fine-tuning techniques, to build a technical framework for knowledge integration in the agricultural domain. Using tools such as vector databases and knowledge graphs, the study enables the structured development of professional agricultural knowledge bases. Additionally, by combining multimodal learning and intelligent question-answering (Q&amp;A) system design, it validates the application value of LLMs in agricultural knowledge services. Addressing core challenges in domain adaptation, including knowledge acquisition and integration, logical reasoning, multimodal data processing, agent collaboration, and dynamic knowledge updating, the paper proposes targeted solutions. The study further explores the innovative applications of LLMs in scenarios such as precision crop management and market dynamics analysis, providing theoretical support and technical pathways for the development of agricultural intelligence. Through the technological innovation of large language models and their deep integration with the agricultural sector, the intelligence level of agricultural production, decision-making, and services can be effectively enhanced.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 4","pages":"Pages 671-685"},"PeriodicalIF":8.2,"publicationDate":"2025-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144254489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
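The vector-database pattern the review describes boils down to embedding knowledge-base entries, retrieving the nearest ones to a query, and prepending them to the LLM prompt. A toy sketch of the retrieval step is below, using TF-IDF vectors in place of learned embeddings; the knowledge-base entries are invented for illustration, and a production system would use a real embedding model and vector store.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A tiny stand-in for a professional agricultural knowledge base.
docs = [
    "Apply nitrogen fertilizer to winter wheat at the jointing stage.",
    "Verticillium wilt in cotton is managed with resistant cultivars.",
    "Drip irrigation scheduling depends on soil moisture thresholds.",
]

vectorizer = TfidfVectorizer()
doc_vecs = vectorizer.fit_transform(docs)

def retrieve(query, k=2):
    """Return the k knowledge-base entries most similar to the query;
    these would be inserted into the LLM prompt as grounding context."""
    sims = cosine_similarity(vectorizer.transform([query]), doc_vecs)[0]
    return [docs[i] for i in sims.argsort()[::-1][:k]]

context = retrieve("How should I handle wilt disease in my cotton field?")
prompt = "Context:\n" + "\n".join(context) + "\nQuestion: ..."
print(prompt)
```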
MSNet: A multispectral-image driven rapeseed canopy instance segmentation network
IF 8.2
Artificial Intelligence in Agriculture, Pub Date: 2025-05-31, DOI: 10.1016/j.aiia.2025.05.008
Yuang Yang, Xiaole Wang, Fugui Zhang, Zhenchao Wu, Yu Wang, Yujie Liu, Xuan Lv, Bowen Luo, Liqing Chen, Yang Yang
{"title":"MSNet: A multispectral-image driven rapeseed canopy instance segmentation network","authors":"Yuang Yang,&nbsp;Xiaole Wang,&nbsp;Fugui Zhang,&nbsp;Zhenchao Wu,&nbsp;Yu Wang,&nbsp;Yujie Liu,&nbsp;Xuan Lv,&nbsp;Bowen Luo,&nbsp;Liqing Chen,&nbsp;Yang Yang","doi":"10.1016/j.aiia.2025.05.008","DOIUrl":"10.1016/j.aiia.2025.05.008","url":null,"abstract":"<div><div>Precise detection of rapeseed and the growth of its canopy area are crucial phenotypic indicators of its growth status. Achieving accurate identification of the rapeseed target and its growth region provides significant data support for phenotypic analysis and breeding research. However, in natural field environments, rapeseed detection remains a substantial challenge due to the limited feature representation capabilities of RGB-only modalities. To address this challenge, this study proposes a dual-modal instance segmentation network, MSNet, based on YOLOv11n-seg, integrating both RGB and Near-Infrared (NIR) modalities. The main improvements of this network include three different fusion location strategies (frontend fusion, mid-stage fusion, and backend fusion) and the newly introduced Hierarchical Attention Fusion Block (HAFB) for multimodal feature fusion. Comparative experiments on fusion locations indicate that the mid-stage fusion strategy achieves the best balance between detection accuracy and parameter efficiency. Compared to the baseline network, the <em>mAP50:95</em> improvement can reach up to 3.5 %. After introducing the HAFB module, the MSNet-H-HAFB model demonstrates a 6.5 % increase in <em>mAP50:95</em> relative to the baseline network, with less than a 38 % increase in parameter count. It is noteworthy that the mid-stage fusion consistently delivered the best detection performance in all experiments, providing clear design guidance for selecting fusion locations in future multimodal networks. In addition, comparisons with various RGB-only instance segmentation models show that all the proposed MSNet-HAFB fusion models significantly outperform single-modal models in rapeseed count detection tasks, confirming the potential advantages of multispectral fusion strategies in agricultural target recognition. Finally, the MSNet was applied in an agricultural case study, including vegetation index level analysis and frost damage classification. The results show that ZN6–2836 and ZS11 were predicted as potential superior varieties, and the EVI2 vegetation index achieved the best performance in rapeseed frost damage classification.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 4","pages":"Pages 642-658"},"PeriodicalIF":8.2,"publicationDate":"2025-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144231062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
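Mid-stage fusion, the strategy the abstract above found most effective, means each modality keeps its own shallow feature extractor and the feature maps are merged partway through the backbone. The paper's HAFB module is not specified here, so the PyTorch sketch below uses a simple channel-attention gate as a stand-in fusion block; it illustrates the fusion location, not the authors' architecture.

```python
import torch
import torch.nn as nn

class SimpleFusionBlock(nn.Module):
    """Fuse RGB and NIR feature maps with a learned channel gate.
    A stand-in for the paper's HAFB, for illustration only."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, f_rgb, f_nir):
        cat = torch.cat([f_rgb, f_nir], dim=1)
        w = self.gate(cat)                  # per-channel mixing weights
        return self.proj(cat) * w + f_rgb   # fused features, RGB residual

# Two shallow single-modality stems, fused mid-stage:
stem_rgb = nn.Conv2d(3, 64, 3, stride=2, padding=1)
stem_nir = nn.Conv2d(1, 64, 3, stride=2, padding=1)
fuse = SimpleFusionBlock(64)

rgb = torch.randn(2, 3, 256, 256)
nir = torch.randn(2, 1, 256, 256)
fused = fuse(stem_rgb(rgb), stem_nir(nir))  # -> (2, 64, 128, 128)
print(fused.shape)
```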
An autonomous navigation method for field phenotyping robot based on ground-air collaboration
IF 8.2
Artificial Intelligence in Agriculture, Pub Date: 2025-05-30, DOI: 10.1016/j.aiia.2025.05.005
Zikang Zhang , Zhengda Li , Meng Yang , Jiale Cui , Yang Shao , Youchun Ding , Wanneng Yang , Wen Qiao , Peng Song
{"title":"An autonomous navigation method for field phenotyping robot based on ground-air collaboration","authors":"Zikang Zhang ,&nbsp;Zhengda Li ,&nbsp;Meng Yang ,&nbsp;Jiale Cui ,&nbsp;Yang Shao ,&nbsp;Youchun Ding ,&nbsp;Wanneng Yang ,&nbsp;Wen Qiao ,&nbsp;Peng Song","doi":"10.1016/j.aiia.2025.05.005","DOIUrl":"10.1016/j.aiia.2025.05.005","url":null,"abstract":"<div><div>High-throughput phenotyping collection technology is important in affecting the efficiency of crop breeding. This study introduces a novel autonomous navigation method for phenotyping robots that leverages ground-air collaboration to meet the demands of unmanned crop phenotypic data collection. The proposed method employs a UAV equipped with a Real-Time Kinematic (RTK) module for the construction of high-precision Field maps. It utilizes SegFormor-B0 semantic segmentation models to detect crop rows, and extracts key coordinate points of these rows, and generates navigation paths for the phenotyping robots by mapping these points to actual geographic coordinates. Furthermore, an adaptive controller based on the Pure Pursuit algorithm is proposed, which dynamically adjusts the steering angle of the phenotyping robot in real-time, according to the distance (<span><math><mi>d</mi></math></span>), angular deviation (<span><math><mi>α</mi></math></span>) and the lateral deviation (<span><math><msub><mi>e</mi><mi>y</mi></msub></math></span>) between the robot's current position and its target position. This enables the robot to accurately trace paths in field environments. The results demonstrate that the mean absolute error (MAE) of the proposed method in extracting the centerline of potted plants area's rows is 2.83 cm, and the cropland's rows is 4.51 cm. The majority of global path tracking errors stay within 2 cm. In the potted plants area, 99.1 % of errors lie within this range, with a mean absolute error of 0.62 cm and a maximum error of 2.59 cm. In the cropland, 72.4 % of errors remain within this range, with a mean absolute error of 1.51 cm and a maximum error of 4.22 cm. Compared with traditional GNSS-based navigation methods and single vision methods, this method shows significant advantages in adapting to the dynamic growth of crops and complex field environments, which not only ensures that the phenotyping robot accurately travels along the crop rows during field operations to avoid damage to the crops, but also provides an efficient and accurate means of data acquisition for crop phenotyping.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 4","pages":"Pages 610-621"},"PeriodicalIF":8.2,"publicationDate":"2025-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144194569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
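Pure Pursuit, the base algorithm of the adaptive controller above, steers toward a look-ahead point on the path: the curvature to reach a point at distance d and angular deviation α is 2·sin(α)/d, and the bicycle-model steering angle is atan(L·κ). The sketch below shows that geometry; the corrective gain `k_e` on lateral deviation is an assumed illustration of the adaptive term, not the authors' controller law.

```python
import math

def pure_pursuit_steering(alpha, d, wheelbase, e_y=0.0, k_e=0.0):
    """Steering angle toward a look-ahead target point.

    alpha:     angular deviation to the target point (rad)
    d:         distance to the target point (m)
    wheelbase: vehicle wheelbase L (m)
    e_y, k_e:  lateral deviation (m) and an assumed corrective gain,
               illustrating how a controller might adapt the command.
    """
    # Classic Pure Pursuit: curvature kappa = 2*sin(alpha)/d,
    # steering delta = atan(L * kappa) for a bicycle model.
    kappa = 2.0 * math.sin(alpha) / d
    delta = math.atan(wheelbase * kappa)
    # Simple additive correction on lateral deviation (assumed).
    return delta + k_e * e_y

# Target 2 m ahead, 10 degrees off heading, 5 cm lateral error:
print(pure_pursuit_steering(math.radians(10), 2.0, 1.0, e_y=0.05, k_e=0.5))
```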
Technical study on the efficiency and models of weed control methods using unmanned ground vehicles: A review
IF 8.2
Artificial Intelligence in Agriculture, Pub Date: 2025-05-28, DOI: 10.1016/j.aiia.2025.05.003
Evans K. Wiafe, Kelvin Betitame, Billy G. Ram, Xin Sun
{"title":"Technical study on the efficiency and models of weed control methods using unmanned ground vehicles: A review","authors":"Evans K. Wiafe,&nbsp;Kelvin Betitame,&nbsp;Billy G. Ram,&nbsp;Xin Sun","doi":"10.1016/j.aiia.2025.05.003","DOIUrl":"10.1016/j.aiia.2025.05.003","url":null,"abstract":"<div><div>As precision agriculture evolves, unmanned ground vehicles (UGVs) have become an essential tool for improving weed management techniques, offering automated and targeted methods that obviously reduce the reliance on manual labor and blanket herbicide applications. Several papers on UGV-based weed control methods have been published in recent years, yet there is no explicit attempt to systematically study these papers to discuss these weed control methods, UGVs adopted, and their key components, and how they impact the environment and economy. Therefore, the objective of this study was to present a systematic review that involves the efficiency and types of weed control methods deployed in UGVs, including mechanical weeding, targeted herbicide application, thermal/flaming weeding, and laser weeding in the last 2 decades. For this purpose, a thorough literature review was conducted, analyzing 68 relevant articles on weed control methods for UGVs. The study found that the research focus on using UGVs in mechanical weeding has been more dominant, followed by target or precision spraying/ chemical weeding, with hybrid weeding systems quickly emerging. The effectiveness of UGVs for weed control is hinged on the accuracy of their navigation and weed detection technologies, which are influenced heavily by environmental conditions, including lighting, weather, uneven terrain, and weed and crop density. Also, there is a shift from using traditional machine learning (ML) algorithms to deep learning neural networks, including convolutional neural networks (CNNs) and recurrent neural networks (RNNs), for weed detection algorithm development due to their potential to work in complex environments. Finally, trials of most UGVs have limited documentation or lack extensive trials under various conditions, such as varying soil types, crop fields, topography, field geometry, and annual weather conditions. This review paper serves as an in-depth update on UGVs in weed management for farmers, researchers, robotic technology industry players, and AI enthusiasts, helping to further foster collaborative efforts to develop new ideas and advance this revolutionary technique in modern agriculture.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 4","pages":"Pages 622-641"},"PeriodicalIF":8.2,"publicationDate":"2025-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144203295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Picking point localization method based on semantic reasoning for complex picking scenarios in vineyards
IF 8.2
Artificial Intelligence in Agriculture, Pub Date: 2025-05-26, DOI: 10.1016/j.aiia.2025.05.004
Xuemin Lin , Jinhai Wang , Jinshuan Wang , Huiling Wei , Mingyou Chen , Lufeng Luo
{"title":"Picking point localization method based on semantic reasoning for complex picking scenarios in vineyards","authors":"Xuemin Lin ,&nbsp;Jinhai Wang ,&nbsp;Jinshuan Wang ,&nbsp;Huiling Wei ,&nbsp;Mingyou Chen ,&nbsp;Lufeng Luo","doi":"10.1016/j.aiia.2025.05.004","DOIUrl":"10.1016/j.aiia.2025.05.004","url":null,"abstract":"<div><div>In the complex orchard environment, precise picking point localization is crucial for the automation of fruit picking robots. However, existing methods are prone to positioning errors when dealing with complex scenarios such as short peduncles, partial occlusion, or complete misidentification, which can affect the actual work efficiency of the fruit picking robot. This study proposes an enhanced picking point localization method based on semantic reasoning for complex picking scenarios in vineyard. It innovatively designs three modules: the semantic reasoning module (SRM), the ROI threshold adjustment strategy (RTAS), and the picking point location optimization module (PPOM). The SRM is applied to handle the scenarios of grape peduncles being obstructed by obstacles, partial misidentification of peduncles, and complete misidentification of peduncles. The RTAS addresses the issue of low and short peduncles during the picking process. Finally, the PPOM optimizes the final position of the picking point, allowing the robotic arm to perform the picking operation with greater flexibility. Experimental results show that SegFormer achieves an mIoU (mean Intersection over Union) of 84.54 %, with B_IoU and P_IoU reaching 73.90 % and 75.63 %, respectively. Additionally, the success rate of the improved fruit picking point localization algorithm reached 94.96 %, surpassing the baseline algorithm by 8.12 %. The algorithm's average processing time is 0.5428 ± 0.0063 s, meeting the practical requirements for real-time picking.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 4","pages":"Pages 744-756"},"PeriodicalIF":8.2,"publicationDate":"2025-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144263500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
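Once the peduncle is segmented, a picking point is typically derived geometrically from the mask, e.g., a point partway along the peduncle's vertical extent. The NumPy sketch below shows one such heuristic; it is an illustrative stand-in for the role the paper's PPOM plays, not the authors' actual optimization rule, and `offset_ratio` is an assumed parameter.

```python
import numpy as np

def picking_point(peduncle_mask, offset_ratio=0.5):
    """Pick a cut point partway down a segmented peduncle.

    peduncle_mask: binary (H, W) array from the segmentation network.
    offset_ratio:  assumed fraction of peduncle height below its top
                   at which to cut (0 = top, 1 = bottom).
    """
    ys, xs = np.nonzero(peduncle_mask)
    if ys.size == 0:
        return None  # nothing segmented; caller would fall back to reasoning
    # Target row between the peduncle's top and bottom extents.
    y_cut = int(ys.min() + offset_ratio * (ys.max() - ys.min()))
    row = xs[ys == y_cut]
    if row.size == 0:  # mask may be broken at that row; use the nearest row
        nearest = ys[np.argmin(np.abs(ys - y_cut))]
        row, y_cut = xs[ys == nearest], int(nearest)
    return int(row.mean()), y_cut  # (x, y) of the candidate cut point

mask = np.zeros((100, 100), dtype=np.uint8)
mask[10:40, 48:52] = 1  # toy vertical peduncle
print(picking_point(mask))  # -> (49, 24)
```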
ADeepWeeD: An adaptive deep learning framework for weed species classification
IF 8.2
Artificial Intelligence in Agriculture, Pub Date: 2025-05-22, DOI: 10.1016/j.aiia.2025.04.009
Md Geaur Rahman , Md Anisur Rahman , Mohammad Zavid Parvez , Md Anwarul Kaium Patwary , Tofael Ahamed , David A. Fleming-Muñoz , Saad Aloteibi , Mohammad Ali Moni PhD
{"title":"ADeepWeeD: An adaptive deep learning framework for weed species classification","authors":"Md Geaur Rahman ,&nbsp;Md Anisur Rahman ,&nbsp;Mohammad Zavid Parvez ,&nbsp;Md Anwarul Kaium Patwary ,&nbsp;Tofael Ahamed ,&nbsp;David A. Fleming-Muñoz ,&nbsp;Saad Aloteibi ,&nbsp;Mohammad Ali Moni PhD","doi":"10.1016/j.aiia.2025.04.009","DOIUrl":"10.1016/j.aiia.2025.04.009","url":null,"abstract":"<div><div>Efficient weed management in agricultural fields is essential for attaining optimal crop yields and safeguarding global food security. Every year, farmers worldwide invest significant time, capital, and resources to combat yield losses, approximately USD 75.6 billion, due to weed infestations. Deep Learning (DL) methodologies have been recently implemented to revolutionise agricultural practices, particularly in weed detection and classification. Existing DL-based weed classification techniques, including VGG16 and ResNet50, initially construct a model by implementing the algorithm on a training dataset comprising weed species, subsequently employing the model to identify weed species acquired during training. Given the dynamic nature of crop fields, we argue that existing methods may exhibit suboptimal performance due to two key issues: (i) the unavailability of all training weed species initially, as these species emerge over time, resulting in a progressively expanding training dataset, and (ii) the constrained memory and computational capacity of the system utilised for model development, which hinders the retention of all weed species that manifest over an extended duration. To address the issues, this paper introduces a novel DL-based framework called ADeepWeeD for weed classification that facilitates adaptive (i.e. incremental) learning so that it can handle new weed species by keeping track of historical information. ADeepWeeD is evaluated using two criteria, namely <span><math><msub><mi>F</mi><mn>1</mn></msub></math></span>-Score and classification accuracy, by comparing its performances against four non-incremental and two incremental state-of-the-art methods on three publicly available large datasets. Our experimental results demonstrate that ADeepWeeD outperforms existing techniques used in this study. We believe that our developed model could be used to develop an automation system for weed identification. The code of the proposed method is available on GitHub: <span><span>https://github.com/grahman20/ADeepWeed</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 4","pages":"Pages 590-609"},"PeriodicalIF":8.2,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144178825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
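Adaptive (incremental) learning of new weed species, as motivated above, typically means fine-tuning on the new classes while replaying a small exemplar memory of old ones so the model does not forget them. Below is a minimal PyTorch-style sketch of that generic replay loop; it illustrates the idea only, and ADeepWeeD's actual mechanism is described in the paper and its repository. The `(image, label)` dataset item format is an assumption.

```python
import random
import torch
from torch.utils.data import DataLoader, ConcatDataset, Subset

def incremental_update(model, optimizer, loss_fn,
                       new_dataset, memory, memory_per_class=20):
    """Fine-tune on a new species' data plus replayed exemplars.

    memory: dict mapping class label -> Subset of stored exemplars.
    """
    # Mix the new species' data with stored exemplars of old species
    # so gradients keep covering previously learned classes.
    train_data = ConcatDataset([new_dataset] + list(memory.values()))
    loader = DataLoader(train_data, batch_size=32, shuffle=True)

    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

    # Keep a small random exemplar subset of the new species, bounded
    # by the memory budget, for future replay.
    idx = random.sample(range(len(new_dataset)),
                        min(memory_per_class, len(new_dataset)))
    new_label = new_dataset[0][1]  # assumes items are (image, label)
    memory[new_label] = Subset(new_dataset, idx)
```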
Multi-scale cross-modal feature fusion and cost-sensitive loss function for differential detection of occluded bagging pears in practical orchards
IF 8.2
Artificial Intelligence in Agriculture, Pub Date: 2025-05-18, DOI: 10.1016/j.aiia.2025.05.002
Shengli Yan , Wenhui Hou , Yuan Rao , Dan Jiang , Xiu Jin , Tan Wang , Yuwei Wang , Lu Liu , Tong Zhang , Arthur Genis
{"title":"Multi-scale cross-modal feature fusion and cost-sensitive loss function for differential detection of occluded bagging pears in practical orchards","authors":"Shengli Yan ,&nbsp;Wenhui Hou ,&nbsp;Yuan Rao ,&nbsp;Dan Jiang ,&nbsp;Xiu Jin ,&nbsp;Tan Wang ,&nbsp;Yuwei Wang ,&nbsp;Lu Liu ,&nbsp;Tong Zhang ,&nbsp;Arthur Genis","doi":"10.1016/j.aiia.2025.05.002","DOIUrl":"10.1016/j.aiia.2025.05.002","url":null,"abstract":"<div><div>In practical orchards, the challenges posed by fruit overlapping, branch and leaf occlusion, significantly impede the successful implementation of automated picking, particularly for bagging pears. To address this issue, this paper introduces the multi-scale cross-modal feature fusion and cost-sensitive classification loss function network (MCCNet), specifically designed to accurately detect bagging pears with various occlusion categories. The network designs a dual-stream convolutional neural network as its backbone, enabling the parallel extraction of multi-modal features. Meanwhile, we propose a novel lightweight cross-modal feature fusion method, inspired by enhancing shared features between modalities while extracting specific features from RGB and depth modalities. The cross-modal method enhances the perceptual capabilities of the model by facilitating the fusion of complementary information from multimodal bagging pear image pairs. Furthermore, we optimize the classification loss function by transforming it into a cost-sensitive loss function, aiming to improve detection classification efficiency and reduce instances of missing and false detections during the picking process. Experimental results on a bagging pear dataset demonstrate that our MCCNet achieves mAP0.5 and mAP0.5:0.95 values of 97.3 % and 80.3 %, respectively, representing improvements of 3.6 % and 6.3 % over the classical YOLOv10m model. When benchmarked against several state-of-the-art detection models, our MCCNet network has only 19.5 million parameters while maintaining superior inference speed.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 4","pages":"Pages 573-589"},"PeriodicalIF":8.2,"publicationDate":"2025-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144139789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
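A cost-sensitive classification loss, as used in MCCNet above, penalizes some mistakes (e.g., missing a heavily occluded pear) more than others by weighting the per-class loss terms. A minimal PyTorch illustration using class-weighted cross-entropy follows; the occlusion categories and weight values are invented for illustration, not the paper's.

```python
import torch
import torch.nn as nn

# Hypothetical occlusion categories: 0 = unoccluded, 1 = leaf-occluded,
# 2 = fruit-overlapped. Rarer / harder categories get larger weights so
# misclassifying them costs more during training.
class_costs = torch.tensor([1.0, 2.0, 3.0])
loss_fn = nn.CrossEntropyLoss(weight=class_costs)

logits = torch.randn(8, 3)          # detector's per-box class scores
labels = torch.randint(0, 3, (8,))  # ground-truth occlusion categories
print(loss_fn(logits, labels))      # cost-weighted classification loss
```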