Pork-YOLO: Automated collection of pork quality traits
Jiacheng Wei, Xi Tang, Jinxiu Liu, Ting Luo, Yan Wu, Junhui Duan, Shijun Xiao, Zhiyan Zhang
Journal of Agriculture and Food Research, Volume 21, Article 101838 (published 2025-03-21). DOI: 10.1016/j.jafr.2025.101838
Abstract
As consumer demand for high-quality meat rises, efficient and precise phenotypic measurement of pork traits is essential for improving quality. Key parameters include marbling density and longissimus thoracis area. This study proposes an automated system utilizing high-resolution cameras to rapidly and efficiently measure eye muscle area and marbling scores on a large scale. A novel algorithm called "Pork-YOLO" was developed. First, a lightweight segmentation head network, Pork-Seg, was introduced, which utilizes shared convolution and group normalization to minimize parameters and enhance generalization, with prototype learning incorporated to reduce complexity. Next, the StarNet backbone was integrated, employing star-shaped operations for high-dimensional feature representation while maintaining computational efficiency, along with a hierarchical convolutional structure to boost performance. Lastly, the C2f-SCA module combines contextual anchor attention with star-shaped operations to improve long-range dependency capture. Verification experiments demonstrated that Pork-YOLO achieved a mean Intersection over Union (mIoU) of 97.86 % and a frame rate of 160.4 FPS (6.23 ms per image), with GFLOPs reduced to 30.1. Compared against the ground truth, the Pork-YOLO model exhibited satisfactory segmentation accuracy, and the coefficient of determination (R2) between its measurements and the gold-standard method was 0.9717. For marbling scoring, an image classification task yielded an average accuracy of 98.9 %, with a strong correlation (R2 = 0.999) between predicted and actual values. This study presents an innovative method for rapid automated assessment of pork quality traits, offering valuable insights for future phenotypic measurement automation.
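To make the two architectural ideas named in the abstract concrete, the PyTorch sketch below illustrates (1) a StarNet-style block, in which two 1x1 projections are combined by element-wise multiplication (the "star" operation) to approximate a high-dimensional feature mapping at low cost, and (2) a lightweight segmentation-head stem that shares one convolution across pyramid levels and uses group normalization. This is not the authors' Pork-YOLO code; the layer names, channel counts, and activation choices are illustrative assumptions.

```python
# Minimal sketch (assumed layout, not the published Pork-YOLO implementation).
import torch
import torch.nn as nn


class StarBlock(nn.Module):
    """Star operation: element-wise product of two linear projections,
    giving an implicit high-dimensional feature interaction at low cost."""

    def __init__(self, channels: int, expand: int = 4):
        super().__init__()
        hidden = channels * expand
        self.dw = nn.Conv2d(channels, channels, 7, padding=3, groups=channels)  # depthwise mixing
        self.f1 = nn.Conv2d(channels, hidden, 1)   # branch 1
        self.f2 = nn.Conv2d(channels, hidden, 1)   # branch 2
        self.g = nn.Conv2d(hidden, channels, 1)    # project back to input width
        self.act = nn.ReLU6()

    def forward(self, x):
        residual = x
        x = self.dw(x)
        x = self.act(self.f1(x)) * self.f2(x)      # the "star": element-wise multiplication
        return residual + self.g(x)


class SharedGNHead(nn.Module):
    """Head stem with one convolution shared across feature-pyramid levels and
    GroupNorm, cutting head parameters while keeping per-scale predictions."""

    def __init__(self, channels: int, num_outputs: int, groups: int = 16):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.GroupNorm(groups, channels),
            nn.SiLU(),
        )
        self.pred = nn.Conv2d(channels, num_outputs, 1)

    def forward(self, feats):
        # The same shared weights process every pyramid level.
        return [self.pred(self.shared(f)) for f in feats]


if __name__ == "__main__":
    x = torch.randn(1, 64, 80, 80)
    print(StarBlock(64)(x).shape)                        # torch.Size([1, 64, 80, 80])
    feats = [torch.randn(1, 64, s, s) for s in (80, 40, 20)]
    print([y.shape for y in SharedGNHead(64, 32)(feats)])
```

The design point the sketch tries to capture is the trade-off the abstract reports: multiplicative branch fusion and a shared, group-normalized head keep parameter counts and GFLOPs low while preserving segmentation accuracy.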