A streamlined U-Net convolution network for medical image processing.

Quantitative Imaging in Medicine and Surgery 15(1):455-472. IF 2.9, JCR Q2, CAS Region 2 (Medicine), Radiology, Nuclear Medicine & Medical Imaging.
Published: 2025-01-02 (Epub: 2024-12-20). DOI: 10.21037/qims-24-1429
Ching-Hsue Cheng, Jun-He Yang, Yu-Chen Hsu
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11744110/pdf/
Citations: 0

Abstract

Background: Image segmentation is crucial in medical diagnosis, helping to identify diseased areas in images for more accurate diagnoses. The U-Net model, a convolutional neural network (CNN) widely used for medical image segmentation, has limitations in extracting global features and handling multi-scale pathological information. This study aims to address these challenges by proposing a novel model that enhances segmentation performance while reducing computational demands.

Methods: We introduce the LUNeXt model, which integrates Vision Transformers (ViT) with a redesigned convolution block structure. This model employs depthwise separable convolutions to capture global features with fewer parameters. Comprehensive experiments were conducted on four diverse medical image datasets to evaluate the model's performance.
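The parameter saving from depthwise separable convolutions can be illustrated with a quick back-of-the-envelope count. This is a sketch of the general technique, not the paper's exact layer configuration; the channel sizes below are illustrative assumptions.

```python
# Parameter count of a standard conv layer vs. a depthwise separable
# replacement (biases omitted). Channel sizes are illustrative only.

def standard_conv_params(c_in, c_out, k):
    # Each of the c_out output channels has its own k x k x c_in kernel.
    return c_out * c_in * k * k

def depthwise_separable_params(c_in, c_out, k):
    # Depthwise step: one k x k filter per input channel.
    # Pointwise step: a 1x1 convolution that mixes channels.
    return c_in * k * k + c_in * c_out

c_in, c_out, k = 64, 128, 3          # assumed example sizes
std = standard_conv_params(c_in, c_out, k)        # 73728
sep = depthwise_separable_params(c_in, c_out, k)  # 8768
print(std, sep, round(std / sep, 1))  # roughly an 8x reduction here
```

For a 3x3 kernel the factorization cuts parameters by close to an order of magnitude, which is why it is a common choice when shrinking U-Net-style encoders.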

Results: The LUNeXt model demonstrated competitive segmentation performance with a significant reduction in parameters and floating-point operations (FLOPs) compared to traditional U-Net models. The application of explainable AI techniques provided clear visualization of segmentation results, highlighting the model's effectiveness for efficient medical image segmentation.
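The FLOP reduction follows the same factorization logic as the parameter count. The sketch below estimates per-layer FLOPs for one feature map; the map and channel sizes are assumptions for illustration, not figures reported in the paper.

```python
# Rough per-layer FLOP estimate (one multiply-accumulate = 2 FLOPs)
# on an H x W feature map. All sizes below are illustrative assumptions.

def conv_flops(h, w, c_in, c_out, k):
    # Standard convolution: every output pixel needs c_in * k * k MACs
    # per output channel.
    return 2 * h * w * c_in * c_out * k * k

def sep_conv_flops(h, w, c_in, c_out, k):
    depthwise = 2 * h * w * c_in * k * k      # spatial filtering
    pointwise = 2 * h * w * c_in * c_out      # 1x1 channel mixing
    return depthwise + pointwise

h = w = 128
c_in, c_out, k = 64, 128, 3
print(conv_flops(h, w, c_in, c_out, k), sep_conv_flops(h, w, c_in, c_out, k))
```

Because both counts scale with H x W, the relative saving holds at every resolution in the encoder-decoder, which is what makes such models practical on standard hardware.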

Conclusions: LUNeXt facilitates efficient medical image segmentation on standard hardware, reducing the learning curve and making advanced techniques more accessible to practitioners. The model balances complexity against parameter count, offering a promising solution for enhancing the accuracy of pathological feature extraction in medical images.
