{"title":"基于有限训练数据的三维医学图像分割的可推广深度学习框架。","authors":"Tobias Ekman, Arthur Barakat, Einar Heiberg","doi":"10.1186/s41205-025-00254-1","DOIUrl":null,"url":null,"abstract":"<p><p>Medical image segmentation is a critical component in a wide range of clinical applications, enabling the identification and delineation of anatomical structures. This study focuses on segmentation of anatomical structures for 3D printing, virtual surgery planning, and advanced visualization such as virtual or augmented reality. Manual segmentation methods are labor-intensive and can be subjective, leading to inter-observer variability. Machine learning algorithms, particularly deep learning models, have gained traction for automating the process and are now considered state-of-the-art. However, deep-learning methods typically demand large datasets for fine-tuning and powerful graphics cards, limiting their applicability in resource-constrained settings. In this paper we introduce a robust deep learning framework for 3D medical segmentation that achieves high performance across a range of medical segmentation tasks, even when trained on a small number of subjects. This approach overcomes the need for extensive data and heavy GPU resources, facilitating adoption within healthcare systems. The potential is exemplified through six different clinical applications involving orthopedics, orbital segmentation, mandible CT, cardiac CT, fetal MRI and lung CT. Notably, a small set of hyper-parameters and augmentation settings produced segmentations with an average Dice score of 92% (SD = ±0.06) across a diverse range of organs and tissues.</p>","PeriodicalId":72036,"journal":{"name":"3D printing in medicine","volume":"11 1","pages":"9"},"PeriodicalIF":3.2000,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11884210/pdf/","citationCount":"0","resultStr":"{\"title\":\"Generalizable deep learning framework for 3D medical image segmentation using limited training data.\",\"authors\":\"Tobias Ekman, Arthur Barakat, Einar Heiberg\",\"doi\":\"10.1186/s41205-025-00254-1\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Medical image segmentation is a critical component in a wide range of clinical applications, enabling the identification and delineation of anatomical structures. This study focuses on segmentation of anatomical structures for 3D printing, virtual surgery planning, and advanced visualization such as virtual or augmented reality. Manual segmentation methods are labor-intensive and can be subjective, leading to inter-observer variability. Machine learning algorithms, particularly deep learning models, have gained traction for automating the process and are now considered state-of-the-art. However, deep-learning methods typically demand large datasets for fine-tuning and powerful graphics cards, limiting their applicability in resource-constrained settings. In this paper we introduce a robust deep learning framework for 3D medical segmentation that achieves high performance across a range of medical segmentation tasks, even when trained on a small number of subjects. This approach overcomes the need for extensive data and heavy GPU resources, facilitating adoption within healthcare systems. The potential is exemplified through six different clinical applications involving orthopedics, orbital segmentation, mandible CT, cardiac CT, fetal MRI and lung CT. 
Notably, a small set of hyper-parameters and augmentation settings produced segmentations with an average Dice score of 92% (SD = ±0.06) across a diverse range of organs and tissues.</p>\",\"PeriodicalId\":72036,\"journal\":{\"name\":\"3D printing in medicine\",\"volume\":\"11 1\",\"pages\":\"9\"},\"PeriodicalIF\":3.2000,\"publicationDate\":\"2025-03-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11884210/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"3D printing in medicine\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1186/s41205-025-00254-1\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"3D printing in medicine","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1186/s41205-025-00254-1","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Abstract: Medical image segmentation is a critical component in a wide range of clinical applications, enabling the identification and delineation of anatomical structures. This study focuses on segmentation of anatomical structures for 3D printing, virtual surgery planning, and advanced visualization such as virtual or augmented reality. Manual segmentation methods are labor-intensive and can be subjective, leading to inter-observer variability. Machine learning algorithms, particularly deep learning models, have gained traction for automating the process and are now considered state-of-the-art. However, deep learning methods typically demand large training datasets and powerful graphics cards for fine-tuning, limiting their applicability in resource-constrained settings. In this paper, we introduce a robust deep learning framework for 3D medical segmentation that achieves high performance across a range of medical segmentation tasks, even when trained on a small number of subjects. This approach overcomes the need for extensive data and heavy GPU resources, facilitating adoption within healthcare systems. The potential is exemplified through six different clinical applications involving orthopedics, orbital segmentation, mandible CT, cardiac CT, fetal MRI, and lung CT. Notably, a small set of hyper-parameters and augmentation settings produced segmentations with an average Dice score of 92% (SD = ±0.06) across a diverse range of organs and tissues.
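
For readers unfamiliar with the reported metric, the Dice score quantifies volumetric overlap between a predicted segmentation and a reference mask, ranging from 0 (no overlap) to 1 (perfect agreement). The snippet below is a minimal illustrative NumPy implementation for binary 3D masks; it is not the authors' evaluation code, and the convention of returning 1.0 when both masks are empty is an assumption made for this sketch.

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary 3D masks.

    Dice = 2 * |A intersect B| / (|A| + |B|), in [0, 1].
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denominator = pred.sum() + truth.sum()
    if denominator == 0:
        # Both masks empty: treated here as perfect agreement (assumption).
        return 1.0
    return 2.0 * intersection / denominator

# Illustrative usage with two random 64x64x64 masks
rng = np.random.default_rng(0)
a = rng.random((64, 64, 64)) > 0.5
b = rng.random((64, 64, 64)) > 0.5
print(f"Dice = {dice_score(a, b):.3f}")  # roughly 0.5 for independent random masks
```

In practice, the masks would come from the model's thresholded output and the manual reference segmentation; an average Dice of 0.92 as reported above indicates close volumetric agreement across the evaluated structures.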