A reproducible framework for synthetic data generation and instance segmentation in robotic suturing.

IF 2.3 · CAS Tier 3 (Medicine) · JCR Q3 (ENGINEERING, BIOMEDICAL)
Pietro Leoncini, Francesco Marzola, Matteo Pescio, Maura Casadio, Alberto Arezzo, Giulio Dagnino
DOI: 10.1007/s11548-025-03460-8
Journal: International Journal of Computer Assisted Radiology and Surgery, pp. 1567-1576
Published: 2025-08-01 (Epub 2025-06-24)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12350469/pdf/
Citations: 0

Abstract


Purpose: Automating suturing in robotic-assisted surgery offers significant benefits, including enhanced precision, reduced operative time, and alleviated surgeon fatigue. Achieving this requires robust computer vision (CV) models, yet their development is hindered by the scarcity of task-specific datasets and the complexity of acquiring and annotating real surgical data. This work addresses these challenges with a sim-to-real approach for creating synthetic datasets and a data-driven methodology for model training and evaluation.

Methods: Existing 3D models of Da Vinci tools were modified, and new models (needle and tissue cuts) were created to cover diverse data scenarios, enabling the generation of three synthetic datasets of increasing realism using Unity and the Perception package. These datasets were then used to train several YOLOv8-m object detection models, evaluating both the generalizability of synthetic-trained models to real scenarios and the impact of dataset realism on model performance. Additionally, a real-time instance segmentation model was developed through a hybrid training strategy combining synthetic images with a minimal set of real images.
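The hybrid training strategy described above can be sketched as a simple dataset-composition step. The function, the file paths, and the 30-50 real-image budget below are illustrative assumptions, not the authors' actual pipeline:

```python
import random

def build_hybrid_split(synthetic_images, real_images, real_budget=50, seed=0):
    """Combine a large synthetic set with a small, randomly sampled
    subset of real images (hypothetical sketch of a hybrid strategy)."""
    rng = random.Random(seed)
    # Clamp the budget so sampling never exceeds the available real images.
    budget = min(real_budget, len(real_images))
    sampled_real = rng.sample(real_images, budget)
    train_set = list(synthetic_images) + sampled_real
    rng.shuffle(train_set)  # interleave synthetic and real samples
    return train_set

# Illustrative usage with placeholder file names.
synthetic = [f"synthetic/img_{i:04d}.png" for i in range(3000)]
real = [f"real/img_{i:03d}.png" for i in range(120)]
hybrid = build_hybrid_split(synthetic, real, real_budget=50)
print(len(hybrid))  # 3050: all synthetic images plus 50 real ones
```

The point of the sketch is that the real-image budget is a single tunable parameter, which is what makes the paper's 30-50-image dependence easy to study.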

Results: Synthetic-trained models performed better on real test sets as training-dataset realism increased, but the realism achieved remained insufficient for complete generalization. In contrast, the hybrid approach significantly improved performance in real scenarios: the hybrid instance segmentation model ran in real time with robust accuracy, achieving the best Dice coefficient (0.92) while depending on only a minimal amount of real training data (30-50 images).
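For reference, the Dice coefficient reported above measures the overlap between a predicted and a ground-truth segmentation mask. A minimal pure-Python version for binary masks (the flattened-list representation is an illustrative choice, not the paper's evaluation code) is:

```python
def dice_coefficient(pred, target):
    """Dice = 2*|A ∩ B| / (|A| + |B|) for two flat binary masks
    given as equal-length sequences of 0/1 values."""
    if len(pred) != len(target):
        raise ValueError("masks must have the same number of pixels")
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect overlap
    return 2.0 * intersection / total

# Toy 3x3 masks flattened to length-9 lists.
pred   = [1, 1, 0, 1, 0, 0, 0, 0, 0]
target = [1, 1, 0, 0, 0, 0, 0, 0, 0]
print(round(dice_coefficient(pred, target), 2))  # 0.8
```

A Dice score of 0.92, as reported for the hybrid model, thus indicates that the predicted and ground-truth masks share 92% of their combined area.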

Conclusions: This study demonstrates the potential of sim-to-real synthetic datasets to advance robotic suturing automation through a simple and reproducible framework. By sharing the 3D models, Unity environments, and annotated datasets, this work provides resources for creating additional images, expanding datasets, and enabling fine-tuning or semi-supervised learning. By facilitating further exploration, it lays a foundation for advancing suturing automation and addressing task-specific dataset scarcity.

Source journal
International Journal of Computer Assisted Radiology and Surgery (ENGINEERING, BIOMEDICAL; RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING)
CiteScore: 5.90
Self-citation rate: 6.70%
Annual publications: 243
Review time: 6-12 weeks
About the journal: The International Journal for Computer Assisted Radiology and Surgery (IJCARS) is a peer-reviewed journal that provides a platform for closing the gap between medical and technical disciplines, and encourages interdisciplinary research and development activities in an international environment.