Tiziano Natali, Andrey Zhylka, Karin Olthof, Jasper N Smit, Tarik R Baetens, Niels F M Kok, Koert F D Kuhlmann, Oleksandra Ivashchenko, Theo J M Ruers, Matteo Fusaglia
{"title":"术中超声中的肝肿瘤自动分割:一种有监督的深度学习方法。","authors":"Tiziano Natali, Andrey Zhylka, Karin Olthof, Jasper N Smit, Tarik R Baetens, Niels F M Kok, Koert F D Kuhlmann, Oleksandra Ivashchenko, Theo J M Ruers, Matteo Fusaglia","doi":"10.1117/1.JMI.11.2.024501","DOIUrl":null,"url":null,"abstract":"<p><strong>Purpose: </strong>Training and evaluation of the performance of a supervised deep-learning model for the segmentation of hepatic tumors from intraoperative US (iUS) images, with the purpose of improving the accuracy of tumor margin assessment during liver surgeries and the detection of lesions during colorectal surgeries.</p><p><strong>Approach: </strong>In this retrospective study, a U-Net network was trained with the nnU-Net framework in different configurations for the segmentation of CRLM from iUS. The model was trained on B-mode intraoperative hepatic US images, hand-labeled by an expert clinician. The model was tested on an independent set of similar images. The average age of the study population was 61.9 ± 9.9 years. Ground truth for the test set was provided by a radiologist, and three extra delineation sets were used for the computation of inter-observer variability.</p><p><strong>Results: </strong>The presented model achieved a DSC of 0.84 (<math><mrow><mi>p</mi><mo>=</mo><mn>0.0037</mn></mrow></math>), which is comparable to the expert human raters scores. The model segmented hypoechoic and mixed lesions more accurately (DSC of 0.89 and 0.88, respectively) than hyper- and isoechoic ones (DSC of 0.70 and 0.60, respectively) only missing isoechoic or >20 mm in diameter (8% of the tumors) lesions. 
The inclusion of extra margins of probable tumor tissue around the lesions in the training ground truth resulted in lower DSCs of 0.75 (<math><mrow><mi>p</mi><mo>=</mo><mn>0.0022</mn></mrow></math>).</p><p><strong>Conclusion: </strong>The model can accurately segment hepatic tumors from iUS images and has the potential to speed up the resection margin definition during surgeries and the detection of lesion in screenings by automating iUS assessment.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 2","pages":"024501"},"PeriodicalIF":1.9000,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10929734/pdf/","citationCount":"0","resultStr":"{\"title\":\"Automatic hepatic tumor segmentation in intra-operative ultrasound: a supervised deep-learning approach.\",\"authors\":\"Tiziano Natali, Andrey Zhylka, Karin Olthof, Jasper N Smit, Tarik R Baetens, Niels F M Kok, Koert F D Kuhlmann, Oleksandra Ivashchenko, Theo J M Ruers, Matteo Fusaglia\",\"doi\":\"10.1117/1.JMI.11.2.024501\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Purpose: </strong>Training and evaluation of the performance of a supervised deep-learning model for the segmentation of hepatic tumors from intraoperative US (iUS) images, with the purpose of improving the accuracy of tumor margin assessment during liver surgeries and the detection of lesions during colorectal surgeries.</p><p><strong>Approach: </strong>In this retrospective study, a U-Net network was trained with the nnU-Net framework in different configurations for the segmentation of CRLM from iUS. The model was trained on B-mode intraoperative hepatic US images, hand-labeled by an expert clinician. The model was tested on an independent set of similar images. The average age of the study population was 61.9 ± 9.9 years. 
Ground truth for the test set was provided by a radiologist, and three extra delineation sets were used for the computation of inter-observer variability.</p><p><strong>Results: </strong>The presented model achieved a DSC of 0.84 (<math><mrow><mi>p</mi><mo>=</mo><mn>0.0037</mn></mrow></math>), which is comparable to the expert human raters scores. The model segmented hypoechoic and mixed lesions more accurately (DSC of 0.89 and 0.88, respectively) than hyper- and isoechoic ones (DSC of 0.70 and 0.60, respectively) only missing isoechoic or >20 mm in diameter (8% of the tumors) lesions. The inclusion of extra margins of probable tumor tissue around the lesions in the training ground truth resulted in lower DSCs of 0.75 (<math><mrow><mi>p</mi><mo>=</mo><mn>0.0022</mn></mrow></math>).</p><p><strong>Conclusion: </strong>The model can accurately segment hepatic tumors from iUS images and has the potential to speed up the resection margin definition during surgeries and the detection of lesion in screenings by automating iUS assessment.</p>\",\"PeriodicalId\":47707,\"journal\":{\"name\":\"Journal of Medical Imaging\",\"volume\":\"11 2\",\"pages\":\"024501\"},\"PeriodicalIF\":1.9000,\"publicationDate\":\"2024-03-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10929734/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Medical Imaging\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1117/1.JMI.11.2.024501\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/3/12 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q3\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Medical 
Imaging","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1117/1.JMI.11.2.024501","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/3/12 0:00:00","PubModel":"Epub","JCR":"Q3","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Citations: 0
Abstract
Automatic hepatic tumor segmentation in intra-operative ultrasound: a supervised deep-learning approach.
Purpose: Training and evaluation of the performance of a supervised deep-learning model for the segmentation of hepatic tumors from intraoperative US (iUS) images, with the purpose of improving the accuracy of tumor margin assessment during liver surgeries and the detection of lesions during colorectal surgeries.
Approach: In this retrospective study, a U-Net network was trained with the nnU-Net framework in different configurations for the segmentation of colorectal liver metastases (CRLM) from iUS images. The model was trained on B-mode intraoperative hepatic US images, hand-labeled by an expert clinician, and tested on an independent set of similar images. The average age of the study population was 61.9 ± 9.9 years. Ground truth for the test set was provided by a radiologist, and three extra delineation sets were used for the computation of inter-observer variability.
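The abstract names nnU-Net as the training framework but gives no configuration details. As a hypothetical sketch only, the snippet below writes the dataset descriptor that nnU-Net (v2) expects before planning and preprocessing, assuming a single B-mode ultrasound channel and a binary tumor label; the channel name, label name, and training-case count are illustrative, not taken from the paper.

```python
import json
import os
import tempfile

# Hypothetical nnU-Net v2 dataset.json for a single-channel B-mode iUS
# dataset with a binary tumor segmentation task. All names and counts
# below are illustrative assumptions, not the study's actual setup.
dataset_json = {
    "channel_names": {"0": "US"},            # one B-mode ultrasound channel
    "labels": {"background": 0, "tumor": 1}, # binary segmentation task
    "numTraining": 100,                      # placeholder case count
    "file_ending": ".nii.gz",
}

# Write the descriptor where nnU-Net would look for it (here: a temp dir).
out_dir = tempfile.mkdtemp()
path = os.path.join(out_dir, "dataset.json")
with open(path, "w") as f:
    json.dump(dataset_json, f, indent=2)
```

With such a descriptor in place, nnU-Net derives its preprocessing plan and network configuration automatically from the raw data, which is why the paper can compare "different configurations" without hand-designing each network.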
Results: The presented model achieved a DSC of 0.84 (p = 0.0037), which is comparable to the scores of expert human raters. The model segmented hypoechoic and mixed lesions more accurately (DSC of 0.89 and 0.88, respectively) than hyper- and isoechoic ones (DSC of 0.70 and 0.60, respectively), missing only lesions that were isoechoic or >20 mm in diameter (8% of the tumors). Including extra margins of probable tumor tissue around the lesions in the training ground truth resulted in a lower DSC of 0.75 (p = 0.0022).
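The Dice similarity coefficient (DSC) reported above measures the overlap between a predicted mask and the ground-truth delineation. A minimal NumPy sketch of the standard definition (the paper's exact evaluation code is not given):

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |pred ∩ gt| / (|pred| + |gt|)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return float(2.0 * intersection / (pred.sum() + gt.sum() + eps))

# Toy 2D masks: two 4x4 squares offset by one pixel, overlapping in a 3x3 region.
a = np.zeros((8, 8), dtype=np.uint8); a[2:6, 2:6] = 1
b = np.zeros((8, 8), dtype=np.uint8); b[3:7, 3:7] = 1
print(round(dice_score(a, b), 4))  # → 0.5625 (= 2*9 / (16 + 16))
```

A DSC of 1.0 means perfect overlap and 0.0 means none, so the reported 0.84 sitting within the inter-observer range is what makes the model comparable to human raters.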
Conclusion: The model can accurately segment hepatic tumors from iUS images and has the potential, by automating iUS assessment, to speed up resection margin definition during surgery and lesion detection during screening.
Journal introduction:
JMI covers fundamental and translational research, as well as applications, focused on medical imaging, which continue to yield physical and biomedical advancements in the early detection, diagnostics, and therapy of disease as well as in the understanding of normal. The scope of JMI includes:
- Imaging physics
- Tomographic reconstruction algorithms (such as those in CT and MRI)
- Image processing and deep learning
- Computer-aided diagnosis and quantitative image analysis
- Visualization and modeling
- Picture archiving and communications systems (PACS)
- Image perception and observer performance
- Technology assessment
- Ultrasonic imaging
- Image-guided procedures
- Digital pathology
- Biomedical applications of biomedical imaging

JMI allows for the peer-reviewed communication and archiving of scientific developments, translational and clinical applications, reviews, and recommendations for the field.