Tumor Segmentation in Intraoperative Fluorescence Images Based on Transfer Learning and Convolutional Neural Networks

Impact Factor 1.2 · CAS Quartile 4 (Medicine) · JCR Q3 (Surgery)
Weijia Hou, Liwen Zou, Dong Wang
{"title":"基于迁移学习和卷积神经网络的术中荧光图像肿瘤分割技术","authors":"Weijia Hou, Liwen Zou, Dong Wang","doi":"10.1177/15533506241246576","DOIUrl":null,"url":null,"abstract":"ObjectiveTo propose a transfer learning based method of tumor segmentation in intraoperative fluorescence images, which will assist surgeons to efficiently and accurately identify the boundary of tumors of interest.MethodsWe employed transfer learning and deep convolutional neural networks (DCNNs) for tumor segmentation. Specifically, we first pre-trained four networks on the ImageNet dataset to extract low-level features. Subsequently, we fine-tuned these networks on two fluorescence image datasets (ABFM and DTHP) separately to enhance the segmentation performance of fluorescence images. Finally, we tested the trained models on the DTHL dataset. The performance of this approach was compared and evaluated against DCNNs trained end-to-end and the traditional level-set method.ResultsThe transfer learning-based UNet++ model achieved high segmentation accuracies of 82.17% on the ABFM dataset, 95.61% on the DTHP dataset, and 85.49% on the DTHL test set. For the DTHP dataset, the pre-trained Deeplab v3 + network performed exceptionally well, with a segmentation accuracy of 96.48%. Furthermore, all models achieved segmentation accuracies of over 90% when dealing with the DTHP dataset.ConclusionTo the best of our knowledge, this study explores tumor segmentation on intraoperative fluorescent images for the first time. The results show that compared to traditional methods, deep learning has significant advantages in improving segmentation performance. Transfer learning enables deep learning models to perform better on small-sample fluorescence image data compared to end-to-end training. This discovery provides strong support for surgeons to obtain more reliable and accurate image segmentation results during surgery.","PeriodicalId":22095,"journal":{"name":"Surgical Innovation","volume":null,"pages":null},"PeriodicalIF":1.2000,"publicationDate":"2024-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Tumor Segmentation in Intraoperative Fluorescence Images Based on Transfer Learning and Convolutional Neural Networks\",\"authors\":\"Weijia Hou, Liwen Zou, Dong Wang\",\"doi\":\"10.1177/15533506241246576\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"ObjectiveTo propose a transfer learning based method of tumor segmentation in intraoperative fluorescence images, which will assist surgeons to efficiently and accurately identify the boundary of tumors of interest.MethodsWe employed transfer learning and deep convolutional neural networks (DCNNs) for tumor segmentation. Specifically, we first pre-trained four networks on the ImageNet dataset to extract low-level features. Subsequently, we fine-tuned these networks on two fluorescence image datasets (ABFM and DTHP) separately to enhance the segmentation performance of fluorescence images. Finally, we tested the trained models on the DTHL dataset. The performance of this approach was compared and evaluated against DCNNs trained end-to-end and the traditional level-set method.ResultsThe transfer learning-based UNet++ model achieved high segmentation accuracies of 82.17% on the ABFM dataset, 95.61% on the DTHP dataset, and 85.49% on the DTHL test set. For the DTHP dataset, the pre-trained Deeplab v3 + network performed exceptionally well, with a segmentation accuracy of 96.48%. 
Furthermore, all models achieved segmentation accuracies of over 90% when dealing with the DTHP dataset.ConclusionTo the best of our knowledge, this study explores tumor segmentation on intraoperative fluorescent images for the first time. The results show that compared to traditional methods, deep learning has significant advantages in improving segmentation performance. Transfer learning enables deep learning models to perform better on small-sample fluorescence image data compared to end-to-end training. This discovery provides strong support for surgeons to obtain more reliable and accurate image segmentation results during surgery.\",\"PeriodicalId\":22095,\"journal\":{\"name\":\"Surgical Innovation\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.2000,\"publicationDate\":\"2024-04-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Surgical Innovation\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1177/15533506241246576\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"SURGERY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Surgical Innovation","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1177/15533506241246576","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"SURGERY","Score":null,"Total":0}
Citations: 0

Abstract

Objective: To propose a transfer learning-based method for tumor segmentation in intraoperative fluorescence images, helping surgeons efficiently and accurately identify the boundaries of tumors of interest.

Methods: We employed transfer learning and deep convolutional neural networks (DCNNs) for tumor segmentation. Specifically, we first pre-trained four networks on the ImageNet dataset to extract low-level features. We then fine-tuned these networks separately on two fluorescence image datasets (ABFM and DTHP) to improve segmentation performance on fluorescence images. Finally, we tested the trained models on the DTHL dataset. The approach was compared against DCNNs trained end-to-end and against the traditional level-set method.

Results: The transfer learning-based UNet++ model achieved segmentation accuracies of 82.17% on the ABFM dataset, 95.61% on the DTHP dataset, and 85.49% on the DTHL test set. On the DTHP dataset, the pre-trained DeepLab v3+ network performed exceptionally well, with a segmentation accuracy of 96.48%, and all models achieved segmentation accuracies above 90%.

Conclusion: To the best of our knowledge, this study is the first to explore tumor segmentation in intraoperative fluorescence images. The results show that, compared with traditional methods, deep learning offers clear advantages in segmentation performance, and that transfer learning allows deep learning models to perform better on small-sample fluorescence image data than end-to-end training. These findings support surgeons in obtaining more reliable and accurate image segmentation results during surgery.
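The abstract describes the transfer-learning workflow only at a high level (pre-train on ImageNet, then fine-tune on fluorescence data); the authors' code is not given here. The following is a minimal sketch of that general recipe using the segmentation_models_pytorch library, assuming a UNet++ with a ResNet-34 encoder, Dice loss, and dummy tensors in place of a real fluorescence dataset. The library, encoder, loss, and hyperparameters are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch of ImageNet-pretrained fine-tuning for binary tumor segmentation.
# Assumptions: segmentation_models_pytorch, ResNet-34 encoder, Dice loss, dummy data.
# This is not the authors' implementation.
import torch
import segmentation_models_pytorch as smp

# Encoder weights come from ImageNet pre-training (the transferred low-level features);
# the UNet++ decoder is randomly initialized and learned on the fluorescence data.
model = smp.UnetPlusPlus(
    encoder_name="resnet34",
    encoder_weights="imagenet",
    in_channels=3,
    classes=1,                 # binary output: tumor vs. background
)

loss_fn = smp.losses.DiceLoss(mode="binary")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # small LR for fine-tuning

# Dummy batch standing in for fluorescence images and their binary tumor masks.
images = torch.randn(4, 3, 256, 256)
masks = torch.randint(0, 2, (4, 1, 256, 256)).float()

model.train()
for step in range(10):          # in practice: loop over the ABFM/DTHP training split
    optimizer.zero_grad()
    logits = model(images)      # shape (B, 1, H, W), raw logits
    loss = loss_fn(logits, masks)
    loss.backward()
    optimizer.step()
```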
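The reported numbers are labeled "segmentation accuracy" without the metric being defined in the abstract. Two common ways to score a predicted binary tumor mask against ground truth are the Dice coefficient and pixel-wise accuracy; the sketch below shows both as plausible examples, not the authors' exact definition.

```python
# Hedged example metrics; the paper's "segmentation accuracy" may be defined differently.
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|P ∩ T| / (|P| + |T|) on binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

def pixel_accuracy(pred: np.ndarray, target: np.ndarray) -> float:
    """Fraction of pixels labeled identically in prediction and ground truth."""
    return float((pred.astype(bool) == target.astype(bool)).mean())

# Toy masks: two overlapping squares.
pred = np.zeros((64, 64), dtype=np.uint8)
target = np.zeros((64, 64), dtype=np.uint8)
pred[16:48, 16:48] = 1
target[20:52, 20:52] = 1
print(f"Dice: {dice_coefficient(pred, target):.4f}")
print(f"Pixel accuracy: {pixel_accuracy(pred, target):.4f}")
```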
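The traditional level-set method used as a baseline is likewise not specified in detail. As an assumed stand-in, the sketch below runs the morphological Chan-Vese level set from scikit-image on a synthetic single-channel image; the actual baseline formulation and parameters may differ.

```python
# Hedged sketch of a "traditional level-set" baseline using scikit-image's
# morphological Chan-Vese variant; shown purely as an illustrative stand-in.
import numpy as np
from skimage.segmentation import morphological_chan_vese

# Synthetic stand-in for a single-channel fluorescence image: a bright blob on background.
rng = np.random.default_rng(0)
image = rng.normal(0.1, 0.05, size=(128, 128))
image[40:90, 40:90] += 0.8

# Evolve the level set for a fixed number of iterations; the result is a binary mask.
mask = morphological_chan_vese(image, 60, init_level_set="checkerboard", smoothing=2)
print("Foreground pixels:", int(mask.sum()))
```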
Source Journal
Surgical Innovation (Medicine – Surgery)
CiteScore: 2.90
Self-citation rate: 0.00%
Annual articles: 72
Review time: 6-12 weeks
About the journal: Surgical Innovation (SRI) is a peer-reviewed bimonthly journal focusing on minimally invasive surgical techniques, new instruments such as laparoscopes and endoscopes, and new technologies. SRI prepares surgeons to think and work in "the operating room of the future" by learning new techniques, understanding and adapting to new technologies, maintaining surgical competencies, and applying surgical outcomes data to their practices. This journal is a member of the Committee on Publication Ethics (COPE).