Self-supervised pre-training based hybrid network for deep gray matter nuclei segmentation

IF 1.7
Magnetic Resonance Letters Pub Date : 2026-02-01 Epub Date: 2025-07-12 DOI:10.1016/j.mrl.2025.200226
Yang Deng , Jiaxiu Xi , Zhong Chen , Lijun Bao
Magnetic Resonance Letters, Volume 6, Issue 1, Article 200226.
Citations: 0

Abstract


Accurate segmentation of the deep gray matter nuclei is critical for neuropathological research, disease diagnosis, and treatment. Existing methods rely on supervised training, which requires large labeled datasets that are challenging and time-consuming to obtain for medical image analysis. In addition, methods based on convolutional neural networks (CNNs) achieve only suboptimal performance due to the locality of convolutional operations, whereas Vision Transformers (ViTs) efficiently model long-range dependencies and thus have the potential to outperform them in segmentation tasks. To address these issues, we propose a novel hybrid network based on self-supervised pre-training for deep gray matter nuclei segmentation. Specifically, we present a CNN-Transformer hybrid network (CTNet), whose encoder combines a 3D CNN and a ViT to learn fine-grained local spatial features and global semantic information. A self-supervised learning (SSL) approach that integrates rotation prediction and masked feature reconstruction is proposed to pre-train CTNet, enabling the model to learn useful visual representations from unlabeled data. We evaluate the method on 3T and 7T human brain MRI datasets. The results show that CTNet outperforms the comparison models and that our pre-training strategy surpasses other advanced self-supervised methods. When the training set contains only one labeled sample, the pre-trained CTNet improves the Dice similarity coefficient (DSC) by 8.4% over a randomly initialized CTNet.
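The two pretext tasks named in the abstract, rotation prediction and masked feature reconstruction, can be illustrated on a toy 2D slice. The sketch below covers the data preparation only and is our own plain-Python illustration, not the paper's implementation (CTNet operates on 3D volumes, and its masking targets encoder features); the function names and the `patch`/`ratio` parameters are assumptions.

```python
import random

def rotate90_slice(sl, k):
    """Rotate a 2D slice (list of rows) by k * 90 degrees counter-clockwise."""
    for _ in range(k % 4):
        sl = [list(row) for row in zip(*sl)][::-1]
    return sl

def make_rotation_sample(sl, rng=random):
    """Rotation-prediction pretext task: rotate the input by a random
    multiple of 90 degrees; the class label to predict is k in {0, 1, 2, 3}."""
    k = rng.randrange(4)
    return rotate90_slice(sl, k), k

def mask_patches(sl, patch=2, ratio=0.5, rng=random):
    """Masked-reconstruction pretext task: zero out a random subset of
    non-overlapping patch x patch blocks; the model is trained to
    reconstruct the hidden content. Returns the masked copy and the
    top-left corners of the masked blocks."""
    h, w = len(sl), len(sl[0])
    masked = [list(row) for row in sl]
    blocks = [(i, j) for i in range(0, h, patch) for j in range(0, w, patch)]
    chosen = rng.sample(blocks, int(len(blocks) * ratio))
    for i, j in chosen:
        for di in range(min(patch, h - i)):
            for dj in range(min(patch, w - j)):
                masked[i + di][j + dj] = 0
    return masked, chosen
```

In the paper's setting the two objectives are combined into a single pre-training loss, so the shared encoder must both classify the applied rotation and support reconstruction of the masked content.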
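The Dice similarity coefficient (DSC) used for evaluation is a standard overlap metric between a predicted and a reference mask. A minimal sketch of its computation on binary masks, as plain-Python illustration rather than the paper's evaluation code:

```python
def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks.

    pred, target: flat sequences of 0/1 labels of equal length.
    DSC = 2 * |A ∩ B| / (|A| + |B|); eps guards the empty-mask case.
    """
    assert len(pred) == len(target)
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return (2.0 * intersection + eps) / (total + eps)

# Example: two 8-voxel masks of 4 foreground voxels each, overlapping in 3.
pred   = [1, 1, 1, 1, 0, 0, 0, 0]
target = [0, 1, 1, 1, 1, 0, 0, 0]
print(round(dice_coefficient(pred, target), 3))  # prints 0.75
```

A DSC of 1.0 means perfect overlap, 0.0 means none; the reported 8.4% gain is an improvement in this score.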