G-SAM: GMM-based segment anything model for medical image classification and segmentation

Xiaoxiao Liu, Yan Zhao, Shigang Wang, Jian Wei
{"title":"G-SAM:基于 GMM 的医学影像分类和分段模型","authors":"Xiaoxiao Liu, Yan Zhao, Shigang Wang, Jian Wei","doi":"10.1007/s10586-024-04679-x","DOIUrl":null,"url":null,"abstract":"<p>In medical imaging, the classification and segmentation of lesions have always been significant topics in clinical research. Different categories of lesions require different treatment strategies, and accurate segmentation helps to assist in improving the effect of the clinical treatment. The Segment anything model (SAM) is an image segmentation model trained on a large-scale dataset with strong prompt segmentation capability, but it cannot be directly applied to the classification and segmentation tasks of medical images due to insufficient training on medical image data. In this paper, we propose a deep learning method for the classification and segmentation of lesions, called GMM-based segment anything model (G-SAM). Prompt-tuning is utilized in the model with the LoRA strategy, and the lesion feature extraction (GFE) module based on the Gaussian mixture model (GMM), is designed to effectively improve the effect of lesion classification and segmentation on the basis of the SAM. Notably, G-SAM exhibits greater sensitivity to early stage of the lesions, aiding in tumor detection and prevention, which holds important clinical value. G-SAM overcomes the limitation that SAM is not suitable for the medical image classification and segmentation tasks due to insufficient training data with minimal cost. Moreover, it enhances classification accuracy and segmentation precision compared to traditional Gaussian model-based methods. The effectiveness of G-SAM in classifying and segmenting lesions is validated on the LIDC dataset, demonstrating advantages over state-of-the-art (SOTA) methods. The study further validates the applicability of G-SAM on large publicly available datasets across three different image modalities, achieving superior performance.</p>","PeriodicalId":501576,"journal":{"name":"Cluster Computing","volume":"35 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"G-SAM: GMM-based segment anything model for medical image classification and segmentation\",\"authors\":\"Xiaoxiao Liu, Yan Zhao, Shigang Wang, Jian Wei\",\"doi\":\"10.1007/s10586-024-04679-x\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>In medical imaging, the classification and segmentation of lesions have always been significant topics in clinical research. Different categories of lesions require different treatment strategies, and accurate segmentation helps to assist in improving the effect of the clinical treatment. The Segment anything model (SAM) is an image segmentation model trained on a large-scale dataset with strong prompt segmentation capability, but it cannot be directly applied to the classification and segmentation tasks of medical images due to insufficient training on medical image data. In this paper, we propose a deep learning method for the classification and segmentation of lesions, called GMM-based segment anything model (G-SAM). Prompt-tuning is utilized in the model with the LoRA strategy, and the lesion feature extraction (GFE) module based on the Gaussian mixture model (GMM), is designed to effectively improve the effect of lesion classification and segmentation on the basis of the SAM. 
Notably, G-SAM exhibits greater sensitivity to early stage of the lesions, aiding in tumor detection and prevention, which holds important clinical value. G-SAM overcomes the limitation that SAM is not suitable for the medical image classification and segmentation tasks due to insufficient training data with minimal cost. Moreover, it enhances classification accuracy and segmentation precision compared to traditional Gaussian model-based methods. The effectiveness of G-SAM in classifying and segmenting lesions is validated on the LIDC dataset, demonstrating advantages over state-of-the-art (SOTA) methods. The study further validates the applicability of G-SAM on large publicly available datasets across three different image modalities, achieving superior performance.</p>\",\"PeriodicalId\":501576,\"journal\":{\"name\":\"Cluster Computing\",\"volume\":\"35 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-07-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Cluster Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/s10586-024-04679-x\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cluster Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s10586-024-04679-x","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

In medical imaging, the classification and segmentation of lesions have long been significant topics in clinical research. Different categories of lesions require different treatment strategies, and accurate segmentation helps improve the effect of clinical treatment. The Segment Anything Model (SAM) is an image segmentation model trained on a large-scale dataset with strong prompt-based segmentation capability, but it cannot be directly applied to medical image classification and segmentation tasks because it has not been sufficiently trained on medical image data. In this paper, we propose a deep learning method for the classification and segmentation of lesions, called the GMM-based segment anything model (G-SAM). Prompt-tuning is applied to the model with the LoRA strategy, and a lesion feature extraction (GFE) module based on the Gaussian mixture model (GMM) is designed to effectively improve lesion classification and segmentation on the basis of SAM. Notably, G-SAM exhibits greater sensitivity to early-stage lesions, aiding in tumor detection and prevention, which holds important clinical value. G-SAM overcomes, at minimal cost, the limitation that SAM is unsuitable for medical image classification and segmentation tasks due to insufficient training data. Moreover, it achieves higher classification accuracy and segmentation precision than traditional Gaussian-model-based methods. The effectiveness of G-SAM in classifying and segmenting lesions is validated on the LIDC dataset, demonstrating advantages over state-of-the-art (SOTA) methods. The study further validates the applicability of G-SAM on large publicly available datasets across three different image modalities, achieving superior performance.
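
The abstract names two building blocks: LoRA-style prompt-tuning of a frozen SAM backbone and a GMM-based lesion feature extraction (GFE) module. The sketch below is not the authors' implementation; it is a minimal illustration of those two ideas, assuming PyTorch and scikit-learn are available, and all names in it (LoRALinear, fit_patch_gmm, the 256-dimensional toy embeddings, the choice of 3 mixture components) are hypothetical.

```python
# Minimal sketch of (a) LoRA-style low-rank adaptation of a frozen linear layer
# and (b) Gaussian-mixture modelling of patch features. Illustrative only; not
# the G-SAM implementation from the paper.
import torch
import torch.nn as nn
from sklearn.mixture import GaussianMixture


class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a trainable low-rank residual (W x + B A x)."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep the pretrained weights frozen
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen projection plus the trainable low-rank update.
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)


def fit_patch_gmm(patch_features: torch.Tensor, n_components: int = 3) -> GaussianMixture:
    """Fit a GMM over per-patch embeddings so lesion-like patches can fall into their own component."""
    feats = patch_features.detach().cpu().numpy().reshape(-1, patch_features.shape[-1])
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag", random_state=0)
    gmm.fit(feats)
    return gmm


if __name__ == "__main__":
    # Toy usage: adapt a 256-d projection and cluster 196 random patch embeddings.
    layer = LoRALinear(nn.Linear(256, 256), rank=4)
    patches = layer(torch.randn(1, 196, 256))
    gmm = fit_patch_gmm(patches, n_components=3)
    print(gmm.predict(patches.detach().numpy().reshape(-1, 256))[:10])
```

Freezing the pretrained weights and training only the small low-rank matrices is what keeps adaptation cheap, which is consistent with the abstract's claim that SAM is adapted to medical data at minimal cost; in the paper itself the GMM operates on lesion features extracted from medical images rather than on random toy embeddings.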
