SwinD-Net: a lightweight segmentation network for laparoscopic liver segmentation.

IF 1.5 | Medicine, Tier 4 | Q3 SURGERY
Computer Assisted Surgery | Pub Date: 2024-12-01 | Epub Date: 2024-03-20 | DOI: 10.1080/24699322.2024.2329675
Shuiming Ouyang, Baochun He, Huoling Luo, Fucang Jia
{"title":"SwinD-Net:用于腹腔镜肝脏分割的轻量级分割网络。","authors":"Shuiming Ouyang, Baochun He, Huoling Luo, Fucang Jia","doi":"10.1080/24699322.2024.2329675","DOIUrl":null,"url":null,"abstract":"<p><p>The real-time requirement for image segmentation in laparoscopic surgical assistance systems is extremely high. Although traditional deep learning models can ensure high segmentation accuracy, they suffer from a large computational burden. In the practical setting of most hospitals, where powerful computing resources are lacking, these models cannot meet the real-time computational demands. We propose a novel network SwinD-Net based on Skip connections, incorporating Depthwise separable convolutions and Swin Transformer Blocks. To reduce computational overhead, we eliminate the skip connection in the first layer and reduce the number of channels in shallow feature maps. Additionally, we introduce Swin Transformer Blocks, which have a larger computational and parameter footprint, to extract global information and capture high-level semantic features. Through these modifications, our network achieves desirable performance while maintaining a lightweight design. We conduct experiments on the CholecSeg8k dataset to validate the effectiveness of our approach. Compared to other models, our approach achieves high accuracy while significantly reducing computational and parameter overhead. Specifically, our model requires only 98.82 M floating-point operations (FLOPs) and 0.52 M parameters, with an inference time of 47.49 ms per image on a CPU. Compared to the recently proposed lightweight segmentation network UNeXt, our model not only outperforms it in terms of the Dice metric but also has only 1/3 of the parameters and 1/22 of the FLOPs. In addition, our model achieves a 2.4 times faster inference speed than UNeXt, demonstrating comprehensive improvements in both accuracy and speed. Our model effectively reduces parameter count and computational complexity, improving the inference speed while maintaining comparable accuracy. The source code will be available at https://github.com/ouyangshuiming/SwinDNet.</p>","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":null,"pages":null},"PeriodicalIF":1.5000,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"SwinD-Net: a lightweight segmentation network for laparoscopic liver segmentation.\",\"authors\":\"Shuiming Ouyang, Baochun He, Huoling Luo, Fucang Jia\",\"doi\":\"10.1080/24699322.2024.2329675\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>The real-time requirement for image segmentation in laparoscopic surgical assistance systems is extremely high. Although traditional deep learning models can ensure high segmentation accuracy, they suffer from a large computational burden. In the practical setting of most hospitals, where powerful computing resources are lacking, these models cannot meet the real-time computational demands. We propose a novel network SwinD-Net based on Skip connections, incorporating Depthwise separable convolutions and Swin Transformer Blocks. To reduce computational overhead, we eliminate the skip connection in the first layer and reduce the number of channels in shallow feature maps. Additionally, we introduce Swin Transformer Blocks, which have a larger computational and parameter footprint, to extract global information and capture high-level semantic features. 
Through these modifications, our network achieves desirable performance while maintaining a lightweight design. We conduct experiments on the CholecSeg8k dataset to validate the effectiveness of our approach. Compared to other models, our approach achieves high accuracy while significantly reducing computational and parameter overhead. Specifically, our model requires only 98.82 M floating-point operations (FLOPs) and 0.52 M parameters, with an inference time of 47.49 ms per image on a CPU. Compared to the recently proposed lightweight segmentation network UNeXt, our model not only outperforms it in terms of the Dice metric but also has only 1/3 of the parameters and 1/22 of the FLOPs. In addition, our model achieves a 2.4 times faster inference speed than UNeXt, demonstrating comprehensive improvements in both accuracy and speed. Our model effectively reduces parameter count and computational complexity, improving the inference speed while maintaining comparable accuracy. The source code will be available at https://github.com/ouyangshuiming/SwinDNet.</p>\",\"PeriodicalId\":56051,\"journal\":{\"name\":\"Computer Assisted Surgery\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.5000,\"publicationDate\":\"2024-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computer Assisted Surgery\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1080/24699322.2024.2329675\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/3/20 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q3\",\"JCRName\":\"SURGERY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Assisted Surgery","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1080/24699322.2024.2329675","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/3/20 0:00:00","PubModel":"Epub","JCR":"Q3","JCRName":"SURGERY","Score":null,"Total":0}
Citations: 0

Abstract


The real-time requirement for image segmentation in laparoscopic surgical assistance systems is extremely high. Although traditional deep learning models can ensure high segmentation accuracy, they suffer from a large computational burden. In the practical setting of most hospitals, where powerful computing resources are lacking, these models cannot meet real-time computational demands. We propose a novel network, SwinD-Net, based on skip connections, incorporating depthwise separable convolutions and Swin Transformer blocks. To reduce computational overhead, we eliminate the skip connection in the first layer and reduce the number of channels in the shallow feature maps. Additionally, we introduce Swin Transformer blocks, which carry a larger computational and parameter footprint, to extract global information and capture high-level semantic features. Through these modifications, our network achieves desirable performance while maintaining a lightweight design. We conduct experiments on the CholecSeg8k dataset to validate the effectiveness of our approach. Compared to other models, our approach achieves high accuracy while significantly reducing computational and parameter overhead. Specifically, our model requires only 98.82 M floating-point operations (FLOPs) and 0.52 M parameters, with an inference time of 47.49 ms per image on a CPU. Compared to the recently proposed lightweight segmentation network UNeXt, our model not only outperforms it on the Dice metric but also has only 1/3 of the parameters and 1/22 of the FLOPs. In addition, our model achieves an inference speed 2.4 times faster than UNeXt, demonstrating comprehensive improvements in both accuracy and speed. Our model effectively reduces parameter count and computational complexity, improving inference speed while maintaining comparable accuracy. The source code will be available at https://github.com/ouyangshuiming/SwinDNet.
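The authors' official implementation is promised at the GitHub link above; the snippet below is only a minimal PyTorch sketch of the two building blocks the abstract names: a depthwise separable convolution and a Swin-Transformer-style block with self-attention inside non-overlapping windows. The module names, channel width (64), window size (8), and head count (4) are illustrative assumptions, not values from the paper, and shifted windows and relative position bias are omitted.

```python
# Illustrative sketch only: layer names, channel widths, and window size are
# assumptions, not the published SwinD-Net configuration.
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """3x3 depthwise convolution followed by a 1x1 pointwise convolution."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))


class WindowAttentionBlock(nn.Module):
    """Simplified Swin-style block: multi-head self-attention inside
    non-overlapping windows, then an MLP, each with a residual connection.
    Shifted windows and relative position bias are omitted for brevity."""

    def __init__(self, dim: int, window: int = 8, heads: int = 4):
        super().__init__()
        self.window = window
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        ws = self.window
        assert h % ws == 0 and w % ws == 0, "feature map must be divisible by the window size"
        # Partition the feature map into (ws x ws) windows: (B * num_windows, ws*ws, C).
        x = x.reshape(b, c, h // ws, ws, w // ws, ws)
        x = x.permute(0, 2, 4, 3, 5, 1).reshape(-1, ws * ws, c)
        y = self.norm1(x)
        x = x + self.attn(y, y, y, need_weights=False)[0]
        x = x + self.mlp(self.norm2(x))
        # Merge the windows back into a (B, C, H, W) feature map.
        x = x.reshape(b, h // ws, w // ws, ws, ws, c)
        return x.permute(0, 5, 1, 3, 2, 4).reshape(b, c, h, w)


if __name__ == "__main__":
    feat = torch.randn(1, 64, 32, 32)            # hypothetical deep feature map
    feat = DepthwiseSeparableConv(64, 64)(feat)  # local features at low cost
    feat = WindowAttentionBlock(64, window=8)(feat)  # global context within windows
    print(feat.shape)                            # torch.Size([1, 64, 32, 32])
```

Most of the efficiency comes from the first block: a 3x3 depthwise separable convolution needs roughly 1/C_out + 1/9 of the multiply-adds of a standard 3x3 convolution. For scale, the ratios quoted in the abstract (1/3 of the parameters, 1/22 of the FLOPs, 2.4x the speed of UNeXt) imply UNeXt at roughly 1.6 M parameters, 2.2 G FLOPs, and about 114 ms per image under the same CPU measurement setup.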

Source journal
Computer Assisted Surgery (Medicine-Surgery)
CiteScore: 2.30
Self-citation rate: 0.00%
Articles published: 13
Review time: 10 weeks
Journal description: Computer Assisted Surgery aims to improve patient care by advancing the utilization of computers during treatment; to evaluate the benefits and risks associated with the integration of advanced digital technologies into surgical practice; to disseminate clinical and basic research relevant to stereotactic surgery, minimal access surgery, endoscopy, and surgical robotics; to encourage interdisciplinary collaboration between engineers and physicians in developing new concepts and applications; to educate clinicians about the principles and techniques of computer assisted surgery and therapeutics; and to serve the international scientific community as a medium for the transfer of new information relating to theory, research, and practice in biomedical imaging and the surgical specialties. The scope of Computer Assisted Surgery encompasses all fields within surgery, as well as biomedical imaging and instrumentation, and digital technology employed as an adjunct to imaging in diagnosis, therapeutics, and surgery. Topics featured include frameless as well as conventional stereotactic procedures, surgery guided by intraoperative ultrasound or magnetic resonance imaging, image guided focused irradiation, robotic surgery, and any therapeutic interventions performed with the use of digital imaging technology.