Artificial Intelligence in Agriculture: Latest Articles

PWM offline variable application based on UAV remote sensing 3D prescription map
IF 8.2
Artificial Intelligence in Agriculture Pub Date : 2025-01-27 DOI: 10.1016/j.aiia.2025.01.011
Leng Han , Zhichong Wang , Miao He , Yajia Liu , Xiongkui He
{"title":"PWM offline variable application based on UAV remote sensing 3D prescription map","authors":"Leng Han ,&nbsp;Zhichong Wang ,&nbsp;Miao He ,&nbsp;Yajia Liu ,&nbsp;Xiongkui He","doi":"10.1016/j.aiia.2025.01.011","DOIUrl":"10.1016/j.aiia.2025.01.011","url":null,"abstract":"<div><div>Precision application in orchards enhancing deposition uniformity and environmental sustainability by accurately matching nozzle output with canopy parameters. This study provides a pipeline for creating 3D prescription maps using a UAV and performing offline variable application. It also evaluates the accuracy of ground altitude measurements at various flight heights. At a flight height of 30 m, with a three-dimensional reconstruction method without phase-control points, the root mean square error (RMSE) for ground altitude measurement was 0.214 m and the mean absolute error (MAE) was 0.211 m; for the canopy area, these values were 0.591 m and 0.541 m, respectively. As flight height increased, the accuracy of altitude measurements declined and tended to be underestimated. Moreover, during offline variable spraying, the shape of the spray area influenced deposition accuracy, with collision detection area of a line segment achieving greater precision than conical ones. Field tests showed that the offline variable application method reduced pesticide usage by 32.43 % and enhanced spray uniformity. This newly developed process does not require costly sensors on each sprayer and has potential for field applications.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 3","pages":"Pages 496-507"},"PeriodicalIF":8.2,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144106612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
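To make the accuracy figures above concrete, here is a minimal sketch (Python with NumPy) of how RMSE and MAE would be computed from UAV-derived and reference measurements; the arrays and values are hypothetical placeholders, not data from the paper.

```python
import numpy as np

def rmse_mae(estimated, reference):
    """Root mean square error and mean absolute error between two 1-D arrays."""
    err = np.asarray(estimated, dtype=float) - np.asarray(reference, dtype=float)
    rmse = float(np.sqrt(np.mean(err ** 2)))
    mae = float(np.mean(np.abs(err)))
    return rmse, mae

# Hypothetical example: UAV-reconstructed ground altitude vs. a reference survey (metres).
uav_altitude = np.array([101.32, 101.55, 100.98, 101.21])
ref_altitude = np.array([101.10, 101.36, 100.77, 101.02])
print(rmse_mae(uav_altitude, ref_altitude))
```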
Efficient one-stage detection of shrimp larvae in complex aquaculture scenarios
IF 8.2
Artificial Intelligence in Agriculture Pub Date : 2025-01-27 DOI: 10.1016/j.aiia.2025.01.009
Guoxu Zhang , Tianyi Liao , Yingyi Chen , Ping Zhong , Zhencai Shen , Daoliang Li
{"title":"Efficient one-stage detection of shrimp larvae in complex aquaculture scenarios","authors":"Guoxu Zhang ,&nbsp;Tianyi Liao ,&nbsp;Yingyi Chen ,&nbsp;Ping Zhong ,&nbsp;Zhencai Shen ,&nbsp;Daoliang Li","doi":"10.1016/j.aiia.2025.01.009","DOIUrl":"10.1016/j.aiia.2025.01.009","url":null,"abstract":"<div><div>The swift evolution of deep learning has greatly benefited the field of intensive aquaculture. Specifically, deep learning-based shrimp larvae detection has offered important technical assistance for counting shrimp larvae and recognizing abnormal behaviors. Firstly, the transparent bodies and small sizes of shrimp larvae, combined with complex scenarios due to variations in light intensity and water turbidity, make it challenging for current detection methods to achieve high accuracy. Secondly, deep learning-based object detection demands substantial computing power and storage space, which restricts its application on edge devices. This paper proposes an efficient one-stage shrimp larvae detection method, FAMDet, specifically designed for complex scenarios in intensive aquaculture. Firstly, different from the ordinary detection methods, it exploits an efficient FasterNet backbone, constructed with partial convolution, to extract effective multi-scale shrimp larvae features. Meanwhile, we construct an adaptively bi-directional fusion neck to integrate high-level semantic information and low-level detail information of shrimp larvae in a matter that sufficiently merges features and further mitigates noise interference. Finally, a decoupled detection head equipped with MPDIoU is used for precise bounding box regression of shrimp larvae. We collected images of shrimp larvae from multiple scenarios and labeled 108,365 targets for experiments. Compared with the ordinary detection methods (Faster RCNN, SSD, RetinaNet, CenterNet, FCOS, DETR, and YOLOX_s), FAMDet has obtained considerable advantages in accuracy, speed, and complexity. Compared with the outstanding one-stage method YOLOv8s, it has improved accuracy while reducing 57 % parameters, 37 % FLOPs, 22 % inference latency per image on CPU, and 56 % storage overhead. Furthermore, FAMDet has still outperformed multiple lightweight methods (EfficientDet, RT-DETR, GhostNetV2, EfficientFormerV2, EfficientViT, and MobileNetV4). In addition, we conducted experiments on the public dataset (VOC 07 + 12) to further verify the effectiveness of FAMDet. Consequently, the proposed method can effectively alleviate the limitations faced by resource-constrained devices and achieve superior shrimp larvae detection results.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 2","pages":"Pages 338-349"},"PeriodicalIF":8.2,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143704748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
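The abstract mentions bounding-box regression with MPDIoU. The sketch below is a simplified, generic reading of the MPDIoU metric (IoU penalized by the squared distances between top-left and bottom-right corners, normalized by the image size); it is not the authors' implementation, and the box coordinates and image size are hypothetical.

```python
def mpdiou(box_pred, box_gt, img_w, img_h):
    """Simplified MPDIoU between two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_pred[0], box_gt[0]); y1 = max(box_pred[1], box_gt[1])
    x2 = min(box_pred[2], box_gt[2]); y2 = min(box_pred[3], box_gt[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_p = (box_pred[2] - box_pred[0]) * (box_pred[3] - box_pred[1])
    area_g = (box_gt[2] - box_gt[0]) * (box_gt[3] - box_gt[1])
    iou = inter / (area_p + area_g - inter + 1e-9)
    d1 = (box_pred[0] - box_gt[0]) ** 2 + (box_pred[1] - box_gt[1]) ** 2  # top-left corners
    d2 = (box_pred[2] - box_gt[2]) ** 2 + (box_pred[3] - box_gt[3]) ** 2  # bottom-right corners
    norm = img_w ** 2 + img_h ** 2
    return iou - d1 / norm - d2 / norm

# The regression loss would then typically be 1 - MPDIoU.
print(mpdiou((10, 10, 50, 60), (12, 8, 48, 62), img_w=640, img_h=640))
```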
Automatic body condition scoring system for dairy cows in group state based on improved YOLOv5 and video analysis
IF 8.2
Artificial Intelligence in Agriculture Pub Date : 2025-01-27 DOI: 10.1016/j.aiia.2025.01.010
Jingwen Li , Pengbo Zeng , Shuai Yue , Zhiyang Zheng , Lifeng Qin , Huaibo Song
{"title":"Automatic body condition scoring system for dairy cows in group state based on improved YOLOv5 and video analysis","authors":"Jingwen Li ,&nbsp;Pengbo Zeng ,&nbsp;Shuai Yue ,&nbsp;Zhiyang Zheng ,&nbsp;Lifeng Qin ,&nbsp;Huaibo Song","doi":"10.1016/j.aiia.2025.01.010","DOIUrl":"10.1016/j.aiia.2025.01.010","url":null,"abstract":"<div><div>This study proposes an automated scoring system for cow body condition using improved YOLOv5 to assess the body condition distribution of herd cows, which significantly impacts herd productivity and feeding management. A dataset was created by capturing images of the cow's hindquarters using an image sensor at the entrance of the milking hall. This system enhances feature extraction ability by introducing dual path networks and convolutional block attention modules and improves efficiency by replacing some modules from the standard YOLOv5s with deep separable convolution to reduce parameters. Furthermore, the system employs an automatic detection and segmentation algorithm to achieve individual cow segmentation and body condition acquisition in the video. Subsequently, the system computes the body condition distribution of cows in a group state. The experimental findings demonstrate that the proposed model outperforms the original YOLOv5 network with higher accuracy and fewer computations and parameters. The precision, recall, and mean average precision of the model are 94.3 %, 92.5 %, and 91.8 %, respectively. The algorithm achieved an overall detection rate of 94.2 % for individual cow segmentation and body condition acquisition in the video, with a body condition scoring accuracy of 92.5 % among accurately detected cows and an overall body condition scoring accuracy of 87.1 % across the 10 video tests.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 2","pages":"Pages 350-362"},"PeriodicalIF":8.2,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143705546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
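A convolutional block attention module (CBAM) is one of the components named above. The following PyTorch sketch shows a generic CBAM-style block (channel attention followed by spatial attention); channel counts and kernel sizes are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Minimal CBAM-style block: channel attention followed by spatial attention."""
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention: shared MLP over global average- and max-pooled descriptors.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention: 7x7 conv over channel-wise mean and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

feat = torch.randn(1, 64, 80, 80)   # hypothetical feature map from a YOLOv5-style backbone
print(CBAM(64)(feat).shape)
```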
Identifying key factors influencing maize stalk lodging resistance through wind tunnel simulations with machine learning algorithms
IF 8.2
Artificial Intelligence in Agriculture Pub Date : 2025-01-13 DOI: 10.1016/j.aiia.2025.01.007
Guanmin Huang, Ying Zhang, Shenghao Gu, Weiliang Wen, Xianju Lu, Xinyu Guo
{"title":"Identifying key factors influencing maize stalk lodging resistance through wind tunnel simulations with machine learning algorithms","authors":"Guanmin Huang,&nbsp;Ying Zhang,&nbsp;Shenghao Gu,&nbsp;Weiliang Wen,&nbsp;Xianju Lu,&nbsp;Xinyu Guo","doi":"10.1016/j.aiia.2025.01.007","DOIUrl":"10.1016/j.aiia.2025.01.007","url":null,"abstract":"<div><div>Climate change has intensified maize stalk lodging, severely impacting global maize production. While numerous traits influence stalk lodging resistance, their relative importance remains unclear, hindering breeding efforts. This study introduces an combining wind tunnel testing with machine learning algorithms to quantitatively evaluate stalk lodging resistance traits. Through extensive field experiments and literature review, we identified and measured 74 phenotypic traits encompassing plant morphology, biomass, and anatomical characteristics in maize plants. Correlation analysis revealed a median linear correlation coefficient of 0.497 among these traits, with 15.1 % of correlations exceeding 0.8. Principal component analysis showed that the first five components explained 90 % of the total variance, indicating significant trait interactions. Through feature engineering and gradient boosting regression, we developed a high-precision wind speed-ear displacement prediction model (R<sup>2</sup> = 0.93) and identified 29 key traits critical for stalk lodging resistance. Sensitivity analysis revealed plant height as the most influential factor (sensitivity coefficient: −3.87), followed by traits of the 7th internode including epidermis layer thickness (0.62), pith area (−0.60), and lignin content (0.35). Our methodological framework not only provides quantitative insights into maize stalk lodging resistance mechanisms but also establishes a systematic approach for trait evaluation. The findings offer practical guidance for breeding programs focused on enhancing stalk lodging resistance and yield stability under climate change conditions, with potential applications in agronomic practice optimization and breeding strategy development.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 2","pages":"Pages 316-326"},"PeriodicalIF":8.2,"publicationDate":"2025-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143704965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
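As a rough illustration of the modelling step described above, the sketch below fits a gradient boosting regressor and estimates one-at-a-time sensitivity coefficients by perturbing each trait; the data, the perturbation size, and the `sensitivity` helper are hypothetical and do not reproduce the paper's analysis.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical stand-in for the trait matrix (n plants x n traits) and the
# wind-tunnel response (e.g., ear displacement at a given wind speed).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = -3.0 * X[:, 0] + 0.6 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = GradientBoostingRegressor().fit(X, y)

def sensitivity(model, X, j, delta=0.1):
    """One-at-a-time sensitivity: mean change in prediction when trait j
    is perturbed by `delta` standard deviations."""
    Xp = X.copy()
    Xp[:, j] += delta * X[:, j].std()
    return float(np.mean(model.predict(Xp) - model.predict(X)) / delta)

print([round(sensitivity(model, X, j), 2) for j in range(3)])
```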
Comprehensive review on 3D point cloud segmentation in plants
IF 8.2
Artificial Intelligence in Agriculture Pub Date : 2025-01-11 DOI: 10.1016/j.aiia.2025.01.006
Hongli Song , Weiliang Wen , Sheng Wu , Xinyu Guo
{"title":"Comprehensive review on 3D point cloud segmentation in plants","authors":"Hongli Song ,&nbsp;Weiliang Wen ,&nbsp;Sheng Wu ,&nbsp;Xinyu Guo","doi":"10.1016/j.aiia.2025.01.006","DOIUrl":"10.1016/j.aiia.2025.01.006","url":null,"abstract":"<div><div>Segmentation of three-dimensional (3D) point clouds is fundamental in comprehending unstructured structural and morphological data. It plays a critical role in research related to plant phenomics, 3D plant modeling, and functional-structural plant modeling. Although technologies for plant point cloud segmentation (PPCS) have advanced rapidly, there has been a lack of a systematic overview of the development process. This paper presents an overview of the progress made in 3D point cloud segmentation research in plants. It starts by discussing the methods used to acquire point clouds in plants, and analyzes the impact of point cloud resolution and quality on the segmentation task. It then introduces multi-scale point cloud segmentation in plants. The paper summarizes and analyzes traditional methods for PPCS, including the global and local features. This paper discusses the progress of machine learning-based segmentation on plant point clouds through supervised, unsupervised, and integrated approaches. It also summarizes the datasets that for PPCS using deep learning-oriented methods and explains the advantages and disadvantages of deep learning-based methods for projection-based, voxel-based, and point-based approaches respectively. Finally, the development of PPCS is discussed and prospected. Deep learning methods are predicted to become dominant in the field of PPCS, and 3D point cloud segmentation would develop towards more automated with higher resolution and precision.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 2","pages":"Pages 296-315"},"PeriodicalIF":8.2,"publicationDate":"2025-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143704964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
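Voxel-based processing is one of the approach families discussed in the review. Below is a minimal NumPy sketch of voxel-grid downsampling, a common preprocessing step before plant point cloud segmentation; the voxel size and the random cloud are placeholders.

```python
import numpy as np

def voxel_downsample(points, voxel_size=0.01):
    """Voxel-grid downsampling: keep the centroid of the points falling in each voxel.

    `points` is an (N, 3) array in metres; `voxel_size` is the voxel edge length.
    """
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel index and average them.
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    sums = np.zeros((inverse.max() + 1, 3))
    counts = np.zeros(inverse.max() + 1)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]

cloud = np.random.rand(10000, 3)  # hypothetical plant point cloud (metres)
print(voxel_downsample(cloud, 0.05).shape)
```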
High-throughput phenotyping techniques for forage: Status, bottleneck, and challenges
IF 8.2
Artificial Intelligence in Agriculture Pub Date : 2025-01-10 DOI: 10.1016/j.aiia.2025.01.003
Tao Cheng , Dongyan Zhang , Gan Zhang , Tianyi Wang , Weibo Ren , Feng Yuan , Yaling Liu , Zhaoming Wang , Chunjiang Zhao
{"title":"High-throughput phenotyping techniques for forage: Status, bottleneck, and challenges","authors":"Tao Cheng ,&nbsp;Dongyan Zhang ,&nbsp;Gan Zhang ,&nbsp;Tianyi Wang ,&nbsp;Weibo Ren ,&nbsp;Feng Yuan ,&nbsp;Yaling Liu ,&nbsp;Zhaoming Wang ,&nbsp;Chunjiang Zhao","doi":"10.1016/j.aiia.2025.01.003","DOIUrl":"10.1016/j.aiia.2025.01.003","url":null,"abstract":"<div><div>High-throughput phenotyping (HTP) technology is now a significant bottleneck in the efficient selection and breeding of superior forage genetic resources. To better understand the status of forage phenotyping research and identify key directions for development, this review summarizes advances in HTP technology for forage phenotypic analysis over the past ten years. This paper reviews the unique aspects and research priorities in forage phenotypic monitoring, highlights key remote sensing platforms, examines the applications of advanced sensing technology for quantifying phenotypic traits, explores artificial intelligence (AI) algorithms in phenotypic data integration and analysis, and assesses recent progress in phenotypic genomics. The practical applications of HTP technology in forage remain constrained by several challenges. These include establishing uniform data collection standards, designing effective algorithms to handle complex genetic and environmental interactions, deepening the cross-exploration of phenomics-genomics, solving the problem of pathological inversion of forage phenotypic growth monitoring models, and developing low-cost forage phenotypic equipment. Resolving these challenges will unlock the full potential of HTP, enabling precise identification of superior forage traits, accelerating the breeding of superior varieties, and ultimately improving forage yield.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 1","pages":"Pages 98-115"},"PeriodicalIF":8.2,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143097558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Crop-conditional semantic segmentation for efficient agricultural disease assessment
IF 8.2
Artificial Intelligence in Agriculture Pub Date : 2025-01-10 DOI: 10.1016/j.aiia.2025.01.002
Artzai Picon , Itziar Eguskiza , Pablo Galan , Laura Gomez-Zamanillo , Javier Romero , Christian Klukas , Arantza Bereciartua-Perez , Mike Scharner , Ramon Navarra-Mestre
{"title":"Crop-conditional semantic segmentation for efficient agricultural disease assessment","authors":"Artzai Picon ,&nbsp;Itziar Eguskiza ,&nbsp;Pablo Galan ,&nbsp;Laura Gomez-Zamanillo ,&nbsp;Javier Romero ,&nbsp;Christian Klukas ,&nbsp;Arantza Bereciartua-Perez ,&nbsp;Mike Scharner ,&nbsp;Ramon Navarra-Mestre","doi":"10.1016/j.aiia.2025.01.002","DOIUrl":"10.1016/j.aiia.2025.01.002","url":null,"abstract":"<div><div>In this study, we introduced an innovative crop-conditional semantic segmentation architecture that seamlessly incorporates contextual metadata (crop information). This is achieved by merging the contextual information at a late layer stage, allowing the method to be integrated with any semantic segmentation architecture, including novel ones. To evaluate the effectiveness of this approach, we curated a challenging dataset of over 100,000 images captured in real-field conditions using mobile phones. This dataset includes various disease stages across 21 diseases and seven crops (wheat, barley, corn, rice, rape-seed, vinegrape, and cucumber), with the added complexity of multiple diseases coexisting in a single image. We demonstrate that incorporating contextual multi-crop information significantly enhances the performance of semantic segmentation models for plant disease detection. By leveraging crop-specific metadata, our approach achieves higher accuracy and better generalization across diverse crops (F1 = 0.68, <em>r</em> = 0.75) compared to traditional methods (F1 = 0.24, <em>r</em> = 0.68). Additionally, the adoption of a semi-supervised approach based on pseudo-labeling of single diseased plants, offers significant advantages for plant disease segmentation and quantification (F1 = 0.73, <em>r</em> = 0.95). This method enhances the model's performance by leveraging both labeled and unlabeled data, reducing the dependency on extensive manual annotations, which are often time-consuming and costly.</div><div>The deployment of this algorithm holds the potential to revolutionize the digitization of crop protection product testing, ensuring heightened repeatability while minimizing human subjectivity. By addressing the challenges of semantic segmentation and disease quantification, we contribute to more effective and precise phenotyping, ultimately supporting better crop management and protection strategies.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 1","pages":"Pages 79-87"},"PeriodicalIF":8.2,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143097561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
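A possible reading of the late-stage metadata fusion described above is sketched below in PyTorch: a learned crop-ID embedding is broadcast spatially and concatenated with decoder features before the final classifier. The channel sizes, class count, and module name are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class CropConditionalHead(nn.Module):
    """Sketch of late fusion: a crop-ID embedding is broadcast spatially and
    concatenated with decoder features before the segmentation logits."""
    def __init__(self, feat_channels=256, n_crops=7, embed_dim=32, n_classes=22):
        super().__init__()
        self.crop_embed = nn.Embedding(n_crops, embed_dim)
        self.classifier = nn.Conv2d(feat_channels + embed_dim, n_classes, kernel_size=1)

    def forward(self, feats, crop_id):
        b, _, h, w = feats.shape
        e = self.crop_embed(crop_id).view(b, -1, 1, 1).expand(-1, -1, h, w)
        return self.classifier(torch.cat([feats, e], dim=1))

feats = torch.randn(2, 256, 128, 128)   # decoder features from any segmentation backbone
crop_id = torch.tensor([0, 3])          # hypothetical crop indices (e.g., wheat and rice)
print(CropConditionalHead()(feats, crop_id).shape)
```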
Knowledge-guided temperature correction method for soluble solids content detection of watermelon based on Vis/NIR spectroscopy
IF 8.2
Artificial Intelligence in Agriculture Pub Date : 2025-01-09 DOI: 10.1016/j.aiia.2025.01.004
Zhizhong Sun , Jie Yang , Yang Yao , Dong Hu , Yibin Ying , Junxian Guo , Lijuan Xie
{"title":"Knowledge-guided temperature correction method for soluble solids content detection of watermelon based on Vis/NIR spectroscopy","authors":"Zhizhong Sun ,&nbsp;Jie Yang ,&nbsp;Yang Yao ,&nbsp;Dong Hu ,&nbsp;Yibin Ying ,&nbsp;Junxian Guo ,&nbsp;Lijuan Xie","doi":"10.1016/j.aiia.2025.01.004","DOIUrl":"10.1016/j.aiia.2025.01.004","url":null,"abstract":"<div><div>Visible/near-infrared (Vis/NIR) spectroscopy technology has been extensively utilized for the determination of soluble solids content (SSC) in fruits. Nonetheless, the spectral distortion resulting from temperature variations in the sample leads to a decrease in detection accuracy. To mitigate the influence of temperature fluctuations on the accuracy of SSC detection in fruits, using watermelon as an example, this study presents a knowledge-guided temperature correction method utilizing one-dimensional convolutional neural networks (1D-CNN). This method consists of two stages: the first stage involves utilizing 1D-CNN models and gradient-weighted class activation mapping (Grad-CAM) method to acquire gradient-weighted features correlating with temperature. The second stage involves mapping these features and integrating them with the original Vis/NIR spectrum, and then train and test the partial least squares (PLS) model. This knowledge-guided method can identify wavelength bands with high temperature correlation in the Vis/NIR spectra, offering valuable guidance for spectral data processing. The performance of the PLS model constructed using the 15 °C spectrum guided by this method is superior to that of the global model, and can reduce the root mean square error of the prediction set (RMSEP) to 0.324°Brix, which is 32.5 % lower than the RMSEP of the global model (0.480°Brix). The method proposed in this study has superior temperature correction effects than slope and bias correction, piecewise direct standardization, and external parameter orthogonalization correction methods. The results indicate that the knowledge-guided temperature correction method based on deep learning can significantly enhance the detection accuracy of SSC in watermelon, providing valuable reference for the development of PLS calibration methods.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 1","pages":"Pages 88-97"},"PeriodicalIF":8.2,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143097560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
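The two-stage idea above (gradient-weighted wavelength relevance, then PLS calibration) can be illustrated roughly as follows, assuming per-wavelength Grad-CAM weights have already been exported from a trained 1D-CNN; the spectra, weights, quantile threshold, and component count are all hypothetical.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Hypothetical inputs: Vis/NIR spectra (n samples x n wavelengths), measured SSC (°Brix),
# and per-wavelength weights exported from a Grad-CAM analysis of a trained 1D-CNN.
rng = np.random.default_rng(1)
spectra = rng.normal(size=(120, 512))
ssc = rng.normal(loc=10.0, scale=1.0, size=120)
band_weights = rng.random(512)                  # stand-in for Grad-CAM weights

# Keep wavelengths whose temperature-related weight exceeds a chosen quantile,
# then calibrate a PLS model on the reduced spectrum.
selected = band_weights > np.quantile(band_weights, 0.75)
pls = PLSRegression(n_components=8).fit(spectra[:, selected], ssc)

pred = pls.predict(spectra[:, selected]).ravel()
rmse = float(np.sqrt(np.mean((pred - ssc) ** 2)))
print(selected.sum(), round(rmse, 3))
```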
Enhancing citrus surface defects detection: A priori feature guided semantic segmentation model
IF 8.2
Artificial Intelligence in Agriculture Pub Date : 2025-01-09 DOI: 10.1016/j.aiia.2025.01.005
Xufeng Xu , Tao Xu , Zichao Wei , Zetong Li , Yafei Wang , Xiuqin Rao
{"title":"Enhancing citrus surface defects detection: A priori feature guided semantic segmentation model","authors":"Xufeng Xu ,&nbsp;Tao Xu ,&nbsp;Zichao Wei ,&nbsp;Zetong Li ,&nbsp;Yafei Wang ,&nbsp;Xiuqin Rao","doi":"10.1016/j.aiia.2025.01.005","DOIUrl":"10.1016/j.aiia.2025.01.005","url":null,"abstract":"<div><div>The accurate detection of citrus surface defects is of great importance for elevating the product quality and augmenting its market value. However, due to defect diversity and complexity, existing methods focused on parameter and data enhancement have limitations in detection and segmentation. Therefore, this study proposed a citrus surface defect segmentation model guided by prior features, named PrioriFormer. The model extracted texture features, boundary features, and superpixel features that were crucial for defect detection and segmentation, as priori features. A Priori Feature Fusion Module (PFFM) was designed to integrate the priori features, thereby establishing a priori feature branch. Then the priori feature branch was integrated into the baseline model SegFormer, with the objective of enhancing key feature learning capacity of the model. Finally, the effectiveness of the priori features in enhancing the performance of the model was demonstrated through the implementation of specific experiments. The result showed that PrioriFormer achieved an mPA (mean Pixel Accuracy), mIoU (mean Intersection over Union), and Dice Coefficient of 91.0 %, 85.8 %, and 91.0 %, respectively. Compared to other semantic segmentation models, the proposed model has achieved the best performance. The model parameters of PrioriFormer have only increase by 2.7 % in comparison to the baseline model, while the mIoU has improved by 3.3 %, indicating that the improvement of segmentation performance had less dependence on model parameters. Even when trained on few data, PrioriFormer maintained the high segmentation performance, with the reduction of mIoU not exceeding 4.2 %. This demonstrated the strong feature learning ability of the model in scenarios with limited data. Furthermore, validation on external datasets confirmed PrioriFormer's superior performance and adaptability to different tasks. The study found that the proposed PrioriFomer guided by priori features can effectively enhance the accuracy of the citrus surface defect segmentation model, providing technical reference for citrus sorting and quality assessment.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 1","pages":"Pages 67-78"},"PeriodicalIF":8.2,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143097562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
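A generic sketch of a priori-feature fusion step is given below in PyTorch: stacked texture, boundary, and superpixel maps are projected and concatenated with backbone features. This is an assumption-laden illustration, not the paper's PFFM; all channel sizes are placeholders.

```python
import torch
import torch.nn as nn

class PrioriFusion(nn.Module):
    """Sketch of fusing priori maps (texture, boundary, superpixel) with backbone features."""
    def __init__(self, feat_channels=128, n_priori=3, out_channels=128):
        super().__init__()
        self.priori_proj = nn.Sequential(
            nn.Conv2d(n_priori, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(inplace=True))
        self.fuse = nn.Conv2d(feat_channels + 32, out_channels, kernel_size=1)

    def forward(self, feats, priori_maps):
        # priori_maps: (B, 3, H, W) stack of texture / boundary / superpixel maps,
        # resized to the spatial size of the backbone features beforehand.
        return self.fuse(torch.cat([feats, self.priori_proj(priori_maps)], dim=1))

feats = torch.randn(1, 128, 64, 64)   # hypothetical backbone features
priori = torch.rand(1, 3, 64, 64)     # hypothetical priori maps
print(PrioriFusion()(feats, priori).shape)
```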
PAB-Mamba-YOLO: VSSM assists in YOLO for aggressive behavior detection among weaned piglets
IF 8.2
Artificial Intelligence in Agriculture Pub Date : 2025-01-06 DOI: 10.1016/j.aiia.2025.01.001
Xue Xia , Ning Zhang , Zhibin Guan , Xin Chai , Shixin Ma , Xiujuan Chai , Tan Sun
{"title":"PAB-Mamba-YOLO: VSSM assists in YOLO for aggressive behavior detection among weaned piglets","authors":"Xue Xia ,&nbsp;Ning Zhang ,&nbsp;Zhibin Guan ,&nbsp;Xin Chai ,&nbsp;Shixin Ma ,&nbsp;Xiujuan Chai ,&nbsp;Tan Sun","doi":"10.1016/j.aiia.2025.01.001","DOIUrl":"10.1016/j.aiia.2025.01.001","url":null,"abstract":"<div><div>Aggressive behavior among piglets is considered a harmful social contact. Monitoring weaned piglets with intense aggressive behaviors is paramount for pig breeding management. This study introduced a novel hybrid model, PAB-Mamba-YOLO, integrating the principles of Mamba and YOLO for efficient visual detection of weaned piglets' aggressive behaviors, including climbing body, nose hitting, biting tail and biting ear. Within the proposed model, a novel CSPVSS module, which integrated the Cross Stage Partial (CSP) structure with the Visual State Space Model (VSSM), has been developed. This module was adeptly integrated into the Neck part of the network, where it harnessed convolutional capabilities for local feature extraction and leveraged the visual state space to reveal long-distance dependencies. The model exhibited sound performance in detecting aggressive behaviors, with an average precision (AP) of 0.976 for climbing body, 0.994 for nose hitting, 0.977 for biting tail and 0.994 for biting ear. The mean average precision (mAP) of 0.985 reflected the model's overall effectiveness in detecting all classes of aggressive behaviors. The model achieved a detection speed FPS of 69 f/s, with model complexity measured by 7.2 G floating-point operations (GFLOPs) and parameters (Params) of 2.63 million. Comparative experiments with existing prevailing models confirmed the superiority of the proposed model. This work is expected to contribute a glimmer of fresh ideas and inspiration to the research field of precision breeding and behavioral analysis of animals.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 1","pages":"Pages 52-66"},"PeriodicalIF":8.2,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143097559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
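The CSPVSS module combines a Cross Stage Partial structure with a visual state-space block. A faithful VSSM is beyond a short sketch, so the PyTorch snippet below only shows a CSP-style split-transform-merge wrapper with a plain convolution standing in for the VSS branch; treat it purely as a structural illustration.

```python
import torch
import torch.nn as nn

class CSPBlock(nn.Module):
    """CSP-style split-transform-merge wrapper. In CSPVSS the `inner` branch would be a
    visual state-space (VSS) block; here a plain conv stands in as a placeholder."""
    def __init__(self, channels, inner=None):
        super().__init__()
        half = channels // 2
        self.split = nn.Conv2d(channels, channels, kernel_size=1)
        self.inner = inner or nn.Sequential(
            nn.Conv2d(half, half, 3, padding=1), nn.BatchNorm2d(half), nn.SiLU())
        self.merge = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        a, b = self.split(x).chunk(2, dim=1)   # one half is transformed, one half bypasses
        return self.merge(torch.cat([self.inner(a), b], dim=1))

x = torch.randn(1, 64, 40, 40)   # hypothetical neck feature map
print(CSPBlock(64)(x).shape)
```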