Exploiting 2D Neural Network Frameworks for 3D Segmentation Through Depth Map Analytics of Harvested Wild Blueberries (Vaccinium angustifolium Ait.)

IF 2.7 Q3 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY
Connor C Mullins, Travis J Esau, Qamar U Zaman, Ahmad A Al-Mallahi, Aitazaz A Farooque
{"title":"利用2D神经网络框架,通过采收野生蓝莓(Vaccinium angustifolium Ait.)的深度图分析进行3D分割。","authors":"Connor C Mullins, Travis J Esau, Qamar U Zaman, Ahmad A Al-Mallahi, Aitazaz A Farooque","doi":"10.3390/jimaging10120324","DOIUrl":null,"url":null,"abstract":"<p><p>This study introduced a novel approach to 3D image segmentation utilizing a neural network framework applied to 2D depth map imagery, with Z axis values visualized through color gradation. This research involved comprehensive data collection from mechanically harvested wild blueberries to populate 3D and red-green-blue (RGB) images of filled totes through time-of-flight and RGB cameras, respectively. Advanced neural network models from the YOLOv8 and Detectron2 frameworks were assessed for their segmentation capabilities. Notably, the YOLOv8 models, particularly YOLOv8n-seg, demonstrated superior processing efficiency, with an average time of 18.10 ms, significantly faster than the Detectron2 models, which exceeded 57 ms, while maintaining high performance with a mean intersection over union (IoU) of 0.944 and a Matthew's correlation coefficient (MCC) of 0.957. A qualitative comparison of segmentation masks indicated that the YOLO models produced smoother and more accurate object boundaries, whereas Detectron2 showed jagged edges and under-segmentation. Statistical analyses, including ANOVA and Tukey's HSD test (α = 0.05), confirmed the superior segmentation performance of models on depth maps over RGB images (<i>p</i> < 0.001). This study concludes by recommending the YOLOv8n-seg model for real-time 3D segmentation in precision agriculture, providing insights that can enhance volume estimation, yield prediction, and resource management practices.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"10 12","pages":""},"PeriodicalIF":2.7000,"publicationDate":"2024-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11676057/pdf/","citationCount":"0","resultStr":"{\"title\":\"Exploiting 2D Neural Network Frameworks for 3D Segmentation Through Depth Map Analytics of Harvested Wild Blueberries (<i>Vaccinium angustifolium</i> Ait.).\",\"authors\":\"Connor C Mullins, Travis J Esau, Qamar U Zaman, Ahmad A Al-Mallahi, Aitazaz A Farooque\",\"doi\":\"10.3390/jimaging10120324\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>This study introduced a novel approach to 3D image segmentation utilizing a neural network framework applied to 2D depth map imagery, with Z axis values visualized through color gradation. This research involved comprehensive data collection from mechanically harvested wild blueberries to populate 3D and red-green-blue (RGB) images of filled totes through time-of-flight and RGB cameras, respectively. Advanced neural network models from the YOLOv8 and Detectron2 frameworks were assessed for their segmentation capabilities. Notably, the YOLOv8 models, particularly YOLOv8n-seg, demonstrated superior processing efficiency, with an average time of 18.10 ms, significantly faster than the Detectron2 models, which exceeded 57 ms, while maintaining high performance with a mean intersection over union (IoU) of 0.944 and a Matthew's correlation coefficient (MCC) of 0.957. A qualitative comparison of segmentation masks indicated that the YOLO models produced smoother and more accurate object boundaries, whereas Detectron2 showed jagged edges and under-segmentation. 
Statistical analyses, including ANOVA and Tukey's HSD test (α = 0.05), confirmed the superior segmentation performance of models on depth maps over RGB images (<i>p</i> < 0.001). This study concludes by recommending the YOLOv8n-seg model for real-time 3D segmentation in precision agriculture, providing insights that can enhance volume estimation, yield prediction, and resource management practices.</p>\",\"PeriodicalId\":37035,\"journal\":{\"name\":\"Journal of Imaging\",\"volume\":\"10 12\",\"pages\":\"\"},\"PeriodicalIF\":2.7000,\"publicationDate\":\"2024-12-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11676057/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Imaging\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.3390/jimaging10120324\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Imaging","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3390/jimaging10120324","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY","Score":null,"Total":0}
Citations: 0

Abstract

This study introduced a novel approach to 3D image segmentation utilizing a neural network framework applied to 2D depth map imagery, with Z-axis values visualized through color gradation. This research involved comprehensive data collection from mechanically harvested wild blueberries to populate 3D and red-green-blue (RGB) images of filled totes through time-of-flight and RGB cameras, respectively. Advanced neural network models from the YOLOv8 and Detectron2 frameworks were assessed for their segmentation capabilities. Notably, the YOLOv8 models, particularly YOLOv8n-seg, demonstrated superior processing efficiency, with an average time of 18.10 ms, significantly faster than the Detectron2 models, which exceeded 57 ms, while maintaining high performance with a mean intersection over union (IoU) of 0.944 and a Matthews correlation coefficient (MCC) of 0.957. A qualitative comparison of segmentation masks indicated that the YOLO models produced smoother and more accurate object boundaries, whereas Detectron2 showed jagged edges and under-segmentation. Statistical analyses, including ANOVA and Tukey's HSD test (α = 0.05), confirmed the superior segmentation performance of models on depth maps over RGB images (p < 0.001). This study concludes by recommending the YOLOv8n-seg model for real-time 3D segmentation in precision agriculture, providing insights that can enhance volume estimation, yield prediction, and resource management practices.
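
The abstract describes rendering time-of-flight depth as a color-graded 2D image, segmenting it with YOLOv8n-seg, and scoring the resulting masks with IoU and MCC. The snippet below is a minimal sketch of that idea, not the authors' code: it assumes the Ultralytics YOLOv8 Python API, OpenCV, and NumPy, and the file paths, the colormap choice, and the fine-tuned weights name are illustrative placeholders.

```python
# Minimal sketch of the workflow the abstract describes: render a time-of-flight
# depth frame as a color-graded 2D image, segment it with a YOLOv8n-seg model,
# and score one predicted mask against a ground-truth mask with IoU and MCC.
# Assumes the Ultralytics YOLOv8 API, OpenCV, and NumPy; the file paths, the
# colormap, and the fine-tuned weights name are illustrative placeholders.
import cv2
import numpy as np
from ultralytics import YOLO


def depth_to_colormap(depth: np.ndarray) -> np.ndarray:
    """Normalize a single-channel depth map to 8 bits and apply a colormap,
    so Z-axis values are visualized through color gradation."""
    norm = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    return cv2.applyColorMap(norm, cv2.COLORMAP_JET)


def mask_iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over union of two boolean masks."""
    union = np.logical_or(pred, truth).sum()
    return float(np.logical_and(pred, truth).sum()) / union if union else 0.0


def mask_mcc(pred: np.ndarray, truth: np.ndarray) -> float:
    """Matthews correlation coefficient of two boolean masks."""
    tp = np.logical_and(pred, truth).sum(dtype=np.float64)
    tn = np.logical_and(~pred, ~truth).sum(dtype=np.float64)
    fp = np.logical_and(pred, ~truth).sum(dtype=np.float64)
    fn = np.logical_and(~pred, truth).sum(dtype=np.float64)
    denom = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return float((tp * tn - fp * fn) / denom) if denom else 0.0


depth = cv2.imread("tote_depth.png", cv2.IMREAD_UNCHANGED)  # 16-bit ToF frame (placeholder path)
color_graded = depth_to_colormap(depth)

model = YOLO("yolov8n-seg-blueberry.pt")  # hypothetical fine-tuned weights
result = model(color_graded)[0]

if result.masks is not None:
    pred = result.masks.data[0].cpu().numpy().astype(bool)
    truth = cv2.imread("tote_truth.png", cv2.IMREAD_GRAYSCALE).astype(bool)
    # Masks come back at the model's processed resolution; match the ground truth.
    pred = cv2.resize(pred.astype(np.uint8), (truth.shape[1], truth.shape[0]),
                      interpolation=cv2.INTER_NEAREST).astype(bool)
    print(f"IoU: {mask_iou(pred, truth):.3f}  MCC: {mask_mcc(pred, truth):.3f}")
```

Per-image scores collected this way over both depth-map and RGB inputs are the kind of paired measurements that could feed an ANOVA and Tukey's HSD comparison such as the one reported in the abstract.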

Source journal
Journal of Imaging (Medicine: Radiology, Nuclear Medicine and Imaging)
CiteScore: 5.90
Self-citation rate: 6.20%
Articles published: 303
Review time: 7 weeks