IEEE Transactions on Medical Imaging: Latest Articles

Self-navigated 3D diffusion MRI using an optimized CAIPI sampling and structured low-rank reconstruction estimated navigator.
IEEE transactions on medical imaging Pub Date : 2024-09-06 DOI: 10.1109/TMI.2024.3454994
Ziyu Li, Karla L Miller, Xi Chen, Mark Chiew, Wenchuan Wu
Abstract: 3D multi-slab acquisitions are an appealing approach for diffusion MRI because they are compatible with the imaging regime delivering optimal SNR efficiency. In conventional 3D multi-slab imaging, shot-to-shot phase variations caused by motion pose challenges due to the use of multi-shot k-space acquisition. Navigator acquisition after each imaging echo is typically employed to correct phase variations, which prolongs scan time and increases the specific absorption rate (SAR). The aim of this study is to develop a highly efficient, self-navigated method to correct for phase variations in 3D multi-slab diffusion MRI without explicitly acquiring navigators. The sampling of each shot is carefully designed to intersect with the central kz=0 plane of each slab, and the multi-shot sampling is optimized for self-navigation performance while retaining decent reconstruction quality. The kz=0 intersections from all shots are jointly used to reconstruct a 2D phase map for each shot using a structured low-rank constrained reconstruction that leverages the redundancy in shot and coil dimensions. The phase maps are used to eliminate the shot-to-shot phase inconsistency in the final 3D multi-shot reconstruction. We demonstrate the method's efficacy using retrospective simulations and prospectively acquired in-vivo experiments at 1.22 mm and 1.09 mm isotropic resolutions. Compared to conventional navigated 3D multi-slab imaging, the proposed self-navigated method achieves comparable image quality while shortening the scan time by 31.7% and improving the SNR efficiency by 15.5%. The proposed method produces comparable quality of DTI and white matter tractography to conventional navigated 3D multi-slab acquisition with a much shorter scan time.

Citations: 0
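The phase-correction principle behind self-navigation can be illustrated with a toy example (this is not the paper's structured low-rank method): each shot carries its own smooth, motion-induced phase, and combining shots without removing it causes signal cancellation. The sketch below estimates each shot's phase from the k-space centre, playing the role of the navigator, and demodulates it before combining; the image size, smooth-phase model, and window width are illustrative assumptions.

```python
import numpy as np

n = 64
# Toy "anatomy": a smooth magnitude image.
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
mag = np.exp(-(x**2 + y**2) / 0.3)

def low_res_phase(img, keep=8):
    """Estimate a smooth phase map from the k-space centre (the 'navigator')."""
    k = np.fft.fftshift(np.fft.fft2(img))
    mask = np.zeros_like(k)
    c = n // 2
    mask[c - keep:c + keep, c - keep:c + keep] = 1
    return np.angle(np.fft.ifft2(np.fft.ifftshift(k * mask)))

# Two shots of the same object with different motion-induced smooth phases.
shots = [mag * np.exp(1j * 0.0 * x), mag * np.exp(1j * (3.0 * x + 1.5 * y))]

naive = np.abs(sum(shots) / 2)  # inconsistent phases partially cancel
corrected = np.abs(sum(s * np.exp(-1j * low_res_phase(s)) for s in shots) / 2)

err_naive = np.linalg.norm(naive - mag)
err_corr = np.linalg.norm(corrected - mag)
print(err_naive, err_corr)  # correction reduces the error substantially
```

The same cancellation argument is what makes per-shot 2D phase maps sufficient in the multi-slab setting, since the motion-induced phase is smooth in-plane.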
Cohort-Individual Cooperative Learning for Multimodal Cancer Survival Analysis.
IEEE transactions on medical imaging Pub Date : 2024-09-06 DOI: 10.1109/TMI.2024.3455931
Huajun Zhou, Fengtao Zhou, Hao Chen
Abstract: Recently, we have witnessed impressive achievements in cancer survival analysis by integrating multimodal data, e.g., pathology images and genomic profiles. However, the heterogeneity and high dimensionality of these modalities pose significant challenges for extracting discriminative representations while maintaining good generalization. In this paper, we propose a Cohort-Individual Cooperative Learning (CCL) framework to advance cancer survival analysis by combining knowledge decomposition and cohort guidance. First, we propose a Multimodal Knowledge Decomposition (MKD) module to explicitly decompose multimodal knowledge into four distinct components: the redundancy, the synergy, and the uniqueness of each of the two modalities. Such a comprehensive decomposition can alert models to easily overlooked yet important information, facilitating effective multimodal fusion. Second, we propose a Cohort Guidance Modeling (CGM) module to mitigate the risk of overfitting task-irrelevant information. It promotes a more comprehensive and robust understanding of the underlying multimodal data while avoiding the pitfalls of overfitting and enhancing the generalization ability of the model. By combining the knowledge decomposition and cohort guidance methods, we develop a robust multimodal survival analysis model with enhanced discrimination and generalization abilities. Extensive experimental results on five cancer datasets demonstrate the effectiveness of our model in integrating multimodal data for survival analysis. The code will be publicly available soon.

Citations: 0
Low-dose CT image super-resolution with noise suppression based on prior degradation estimator and self-guidance mechanism.
IEEE transactions on medical imaging Pub Date : 2024-09-04 DOI: 10.1109/TMI.2024.3454268
Jianning Chi, Zhiyi Sun, Liuyi Meng, Siqi Wang, Xiaosheng Yu, Xiaolin Wei, Bin Yang
Abstract: Anatomical structures in low-dose computed tomography (LDCT) images are often distorted when zoomed in for observation because of the small number of photons acquired. Super-resolution (SR) methods have been proposed to enhance the quality of LDCT images as a post-processing step, without increasing radiation dose to patients, but they suffer from incorrect prediction of degradation information and incomplete use of the internal connections within the 3D CT volume, resulting in an imbalance between noise removal and detail sharpening in the super-resolution results. In this paper, we propose a novel LDCT SR network in which the degradation information self-parsed from the LDCT slice and the 3D anatomical information captured from the LDCT volume are integrated to guide the backbone network. A prior degradation estimator (PDE) is proposed, following a contrastive learning strategy, to estimate the degradation features in LDCT images without paired low-/normal-dose CT images. A self-guidance fusion module (SGFM) is designed to capture anatomical features with internal 3D consistency between the squashed images along the coronal, sagittal, and axial views of the CT volume. Finally, the features representing degradation and anatomical structures are integrated to recover CT images at higher resolution. We apply the proposed method to the 2016 NIH-AAPM Mayo Clinic LDCT Grand Challenge dataset and our own collected LDCT dataset to evaluate its ability to recover LDCT images. Experimental results illustrate the superiority of our network in terms of quantitative metrics and qualitative observations, demonstrating its potential for recovering detail-sharp, noise-free CT images at higher resolution from practical LDCT images.

Citations: 0
LOQUAT: Low-Rank Quaternion Reconstruction for Photon-Counting CT.
IEEE transactions on medical imaging Pub Date : 2024-09-03 DOI: 10.1109/TMI.2024.3454174
Zefan Lin, Guotao Quan, Haixian Qu, Yanfeng Du, Jun Zhao
Abstract: Photon-counting computed tomography (PCCT) may dramatically benefit clinical practice due to its versatility, such as dose reduction and material characterization. However, the limited number of photons detected in each individual energy bin can induce severe noise contamination in the reconstructed image. Fortunately, the notable low-rank prior inherent in the PCCT image can guide the reconstruction to a denoised outcome. To fully excavate and leverage the intrinsic low-rankness, we propose a novel reconstruction algorithm based on quaternion representation (QR), called low-rank quaternion reconstruction (LOQUAT). First, we organize a group of nonlocal similar patches into a quaternion matrix. Then, an adjusted weighted Schatten-p norm (AWSN) is introduced and imposed on the matrix to enforce its low-rank nature. Subsequently, we formulate an AWSN-regularized model and devise an alternating direction method of multipliers (ADMM) framework to solve it. Experiments on simulated and real-world data substantiate the superiority of the LOQUAT technique over several state-of-the-art competitors in terms of both visual inspection and quantitative metrics. Moreover, our QR-based method exhibits lower computational complexity than some popular tensor representation (TR) based counterparts. Besides, the global convergence of LOQUAT is theoretically established under a mild condition. These properties bolster the robustness and practicality of LOQUAT, facilitating its application in PCCT clinical scenarios. The source code will be available at https://github.com/linzf23/LOQUAT.

Citations: 0
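The low-rank step in algorithms of this family reduces to shrinking singular values. As a rough, real-valued stand-in for LOQUAT's quaternion AWSN proximal operator (the actual operator, weighting rule, and quaternion algebra are defined in the paper), the sketch below applies weighted singular value soft-thresholding, the p=1 special case of a weighted Schatten-p proximal step, to a noisy group of "similar patches"; the inverse-magnitude weighting and the threshold value are hypothetical choices for illustration only.

```python
import numpy as np

def weighted_svt(Y, tau, weights):
    """Weighted singular value soft-thresholding: the proximal operator
    of a weighted nuclear norm (Schatten-p norm with p = 1). Each
    singular value is shrunk by its own threshold tau * w_i."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - tau * weights, 0.0)
    return (U * s_shrunk) @ Vt

rng = np.random.default_rng(1)
# Rank-3 "patch group" (stacked nonlocal similar patches) plus noise.
L = rng.normal(size=(40, 3)) @ rng.normal(size=(3, 30))
Y = L + 0.5 * rng.normal(size=(40, 30))

# Hypothetical weighting: large singular values (signal) get small
# weights and survive; small ones (noise) get large weights and vanish.
s_Y = np.linalg.svd(Y, compute_uv=False)
weights = 1.0 / (s_Y + 1e-6)
X = weighted_svt(Y, tau=60.0, weights=weights)

print(np.linalg.norm(X - L), np.linalg.norm(Y - L))  # denoised is closer to L
```

In the full algorithm this shrinkage is one ADMM subproblem, alternated with a data-fidelity update against the measured sinogram.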
Shapley Values-enabled Progressive Pseudo Bag Augmentation for Whole-Slide Image Classification.
IEEE transactions on medical imaging Pub Date : 2024-09-02 DOI: 10.1109/TMI.2024.3453386
Renao Yan, Qiehe Sun, Cheng Jin, Yiqing Liu, Yonghong He, Tian Guan, Hao Chen
Abstract: In computational pathology, whole-slide image (WSI) classification presents a formidable challenge due to its gigapixel resolution and limited fine-grained annotations. Multiple-instance learning (MIL) offers a weakly supervised solution, yet refining instance-level information from bag-level labels remains challenging. Most conventional MIL methods use attention scores to estimate instance importance scores (IIS), which contribute to the prediction of slide labels, but these often lead to skewed attention distributions and inaccuracies in identifying crucial instances. To address these issues, we propose a new approach inspired by cooperative game theory: employing Shapley values to assess each instance's contribution, thereby improving IIS estimation. The computation of the Shapley value is accelerated using attention while retaining the enhanced instance identification and prioritization. We further introduce a framework for the progressive assignment of pseudo bags based on the estimated IIS, encouraging more balanced attention distributions in MIL models. Our extensive experiments on the CAMELYON-16, BRACS, TCGA-LUNG, and TCGA-BRCA datasets show our method's superiority over existing state-of-the-art approaches, offering enhanced interpretability and class-wise insights. We will release the code upon acceptance.

Citations: 0
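Exact Shapley values require evaluating all 2^n instance subsets, which is why the paper accelerates them with attention. A generic Monte Carlo estimator (not the paper's accelerated scheme) illustrates the underlying definition; here a toy max-pooling bag score stands in for a MIL classifier:

```python
import numpy as np

def shapley_mc(instances, value_fn, n_perm=200, seed=0):
    """Monte Carlo Shapley estimate: average each instance's marginal
    contribution to the bag value over random orderings."""
    rng = np.random.default_rng(seed)
    n = len(instances)
    phi = np.zeros(n)
    for _ in range(n_perm):
        order = rng.permutation(n)
        prev = value_fn([])
        chosen = []
        for i in order:
            chosen.append(instances[i])
            cur = value_fn(chosen)
            phi[i] += cur - prev  # marginal contribution of instance i
            prev = cur
    return phi / n_perm

# Toy bag: scalar instance scores; bag score = max (max-pooling MIL).
scores = [0.1, 0.9, 0.3, 0.2]
value = lambda subset: max(subset) if subset else 0.0
phi = shapley_mc(scores, value)
print(phi)  # the 0.9 instance receives by far the largest value
```

By the efficiency property, the estimates sum exactly to the full-bag score (0.9 here), since each permutation's marginal contributions telescope.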
GenSelfDiff-HIS: Generative Self-Supervision Using Diffusion for Histopathological Image Segmentation.
IEEE transactions on medical imaging Pub Date : 2024-09-02 DOI: 10.1109/TMI.2024.3453492
Vishnuvardhan Purma, Suhas Srinath, Seshan Srirangarajan, Aanchal Kakkar, A P Prathosh
Abstract: Histopathological image segmentation is a laborious and time-intensive task, often requiring analysis by experienced pathologists for accurate examination. To reduce this burden, supervised machine-learning approaches using large-scale annotated datasets have been adopted for histopathological image analysis. However, in several scenarios, the availability of large-scale annotated data is a bottleneck when training such models. Self-supervised learning (SSL) is an alternative paradigm that offers some respite by constructing models from unannotated data alone, which is often abundant. The basic idea of SSL is to train a network to perform one or more pseudo or pretext tasks on unannotated data and then use it as the basis for a variety of downstream tasks. The success of SSL depends critically on the choice of pretext task. While there have been many efforts to design pretext tasks for classification problems, there have been few attempts at SSL for histopathological image segmentation. Motivated by this, we propose an SSL approach for segmenting histopathological images via generative diffusion models. Our method is based on the observation that diffusion models effectively solve an image-to-image translation task akin to a segmentation task. Hence, we propose generative diffusion as the pretext task for histopathological image segmentation. We also employ multi-loss-function-based fine-tuning for the downstream task. We validate our method using several metrics on two publicly available datasets along with a newly proposed head and neck (HN) cancer dataset containing Hematoxylin and Eosin (H&E) stained images with annotations.

Citations: 0
Knowledge-aware Multisite Adaptive Graph Transformer for Brain Disorder Diagnosis.
IEEE transactions on medical imaging Pub Date : 2024-09-02 DOI: 10.1109/TMI.2024.3453419
Xuegang Song, Kaixiang Shu, Peng Yang, Cheng Zhao, Feng Zhou, Alejandro F Frangi, Xiaohua Xiao, Lei Dong, Tianfu Wang, Shuqiang Wang, Baiying Lei
Abstract: Brain disorder diagnosis via resting-state functional magnetic resonance imaging (rs-fMRI) is usually limited by the complexity of imaging features and the sample size. For brain disorder diagnosis, the graph convolutional network (GCN) has achieved remarkable success by capturing interactions between individuals and the population. However, there are three main limitations: 1) previous GCN approaches consider non-imaging information in edge construction but ignore how sensitive different features are to that non-imaging information; 2) previous GCN approaches focus solely on establishing interactions between subjects (i.e., individuals and the population), disregarding the essential relationships between features; and 3) multisite data increase the sample size and thereby help classifier training, but inter-site heterogeneity limits performance to some extent. This paper proposes a knowledge-aware multisite adaptive graph Transformer to address these problems. First, we evaluate the sensitivity of features to each piece of non-imaging information and construct feature-sensitive and feature-insensitive subgraphs. Second, after fusing these subgraphs, we integrate a Transformer module to capture the intrinsic relationships between features. Third, we design a domain-adaptive GCN with multiple loss terms to mitigate data heterogeneity and produce the final classification results. Finally, the proposed framework is validated on two brain disorder diagnostic tasks. Experimental results show that it achieves state-of-the-art performance.

Citations: 0
Spatiotemporal Microstate Dynamics of Spike-free Scalp EEG Offer a Potential Biomarker for Refractory Temporal Lobe Epilepsy.
IEEE transactions on medical imaging Pub Date : 2024-09-02 DOI: 10.1109/TMI.2024.3453377
Rui Feng, Jingwen Yang, Hao Huang, Zelin Chen, Ruiyan Feng, N U Farrukh Hameed, Xudong Zhang, Jie Hu, Liang Chen, Shuo Lu
Abstract: Refractory temporal lobe epilepsy (TLE) is one of the most frequently observed subtypes of epilepsy, a disease that endangers more than 50 million people worldwide. Although electroencephalography (EEG) has long been recognized as a classic tool for screening and diagnosing epilepsy, for many years it relied heavily on identifying epileptic discharges and localizing the epileptogenic zone, which limits the understanding of refractory epilepsy given the network nature of the disease. This work hypothesizes that microstate dynamics derived from resting-state scalp EEG can offer an additional, network-level depiction of the disease and provide a potential complementary evaluation tool for TLE, even when no epileptic discharges are detectable on EEG. We propose a novel machine-learning framework for EEG microstate spatiotemporal dynamics (EEG-MiSTD) analysis to comprehensively model millisecond-scale changes in whole-brain network dynamics. With only 100 seconds of resting-state EEG, even without epileptic discharges, this approach successfully distinguishes TLE patients from healthy controls and is related to the lateralization of the epileptic focus. Moreover, microstate temporal and spatial features are found to be broadly related to clinical parameters, further demonstrating that TLE is a network disease. A preliminary exploration suggests that the spatial topography is sensitive to subsequent surgical outcomes. From this new perspective, our results suggest that spatiotemporal microstate dynamics are a potential biomarker of the disease. The developed EEG-MiSTD framework may also serve as a general, user-friendly tool to examine dynamic brain network disruption in other types of epilepsy.

Citations: 0
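Microstate analysis conventionally clusters EEG topographies with a polarity-invariant ("modified") k-means, since a map and its sign-flipped version count as the same microstate. The sketch below is a minimal, generic version of that idea on synthetic 4-channel topographies; it is not the EEG-MiSTD pipeline, and the initialization, channel count, and toy data are illustrative assumptions.

```python
import numpy as np

def microstate_kmeans(topos, k=2, n_iter=20):
    """Polarity-invariant k-means used in EEG microstate analysis:
    topographies are assigned by squared spatial correlation, so a map
    and its sign-flipped copy land in the same cluster."""
    X = topos / np.linalg.norm(topos, axis=1, keepdims=True)
    maps = X[:k].copy()  # simple deterministic initialization
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        corr = X @ maps.T                      # cosine similarity
        labels = np.argmax(corr ** 2, axis=1)  # sign ignored
        for j in range(k):
            members = X[labels == j]
            if len(members):
                # Template = dominant eigenvector of the member
                # covariance; invariant to each member's polarity.
                _, v = np.linalg.eigh(members.T @ members)
                maps[j] = v[:, -1]
    return maps, labels

# Toy data: two orthogonal maps, each observed with random polarity.
rng = np.random.default_rng(1)
a = np.array([1.0, 0.0, 0.0, 0.0])
b = np.array([0.0, 0.0, 0.0, 1.0])
topos = np.array([rng.choice([-1, 1]) * (a if i % 2 == 0 else b)
                  + 0.05 * rng.normal(size=4) for i in range(100)])
maps, labels = microstate_kmeans(topos)
```

Temporal features such as microstate duration, occurrence, and transition probabilities are then computed from the resulting label sequence.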
Multi-Modal Federated Learning for Cancer Staging over Non-IID Datasets with Unbalanced Modalities.
IEEE transactions on medical imaging Pub Date : 2024-08-28 DOI: 10.1109/TMI.2024.3450855
Kasra Borazjani, Naji Khosravan, Leslie Ying, Seyyedali Hosseinalipour
Abstract: The use of machine learning (ML) for cancer staging through medical image analysis has gained substantial interest across medical disciplines. When combined with the federated learning (FL) framework, ML techniques can further overcome privacy concerns related to patient data exposure. Given the frequent presence of diverse data modalities within patient records, leveraging FL in a multi-modal learning framework holds considerable promise for cancer staging. However, existing works on multi-modal FL often presume that all data-collecting institutions have access to all data modalities. This oversimplified approach neglects institutions that have access to only a portion of the data modalities within the system. In this work, we introduce a novel FL architecture designed to accommodate not only the heterogeneity of data samples but also the inherent heterogeneity/non-uniformity of data modalities across institutions. We shed light on the challenges associated with the varying convergence speeds observed across different data modalities within our FL system. Subsequently, we propose a solution to tackle these challenges by devising a distributed gradient blending and proximity-aware client weighting strategy tailored for multi-modal FL. To show the superiority of our method, we conduct experiments on The Cancer Genome Atlas (TCGA) data lake, considering different cancer types and three data modalities: mRNA sequences, histopathological images, and clinical information. Our results further unveil the impact and severity of class-based versus type-based heterogeneity across institutions on model performance, which broadens the notion of data heterogeneity in the multi-modal FL literature.

Citations: 0
Emulating Low-Dose PCCT Image Pairs with Independent Noise for Self-Supervised Spectral Image Denoising.
IEEE transactions on medical imaging Pub Date : 2024-08-28 DOI: 10.1109/TMI.2024.3449817
Sen Wang, Yirong Yang, Grant M Stevens, Zhye Yin, Adam S Wang
Abstract: Photon-counting CT (PCCT) acquires spectral measurements and enables the generation of material decomposition (MD) images that provide distinct advantages in various clinical situations. However, noise amplification is observed in MD images, and denoising is typically applied. Clean or high-quality references are rare in clinical scans, often making supervised learning (Noise2Clean) impractical. Noise2Noise is a self-supervised counterpart that uses noisy images paired with noisy references carrying zero-mean, independent noise. PCCT counts transmitted photons separately, and the raw measurements are assumed to follow a Poisson distribution in each energy bin, making it possible to create noise-independent pairs: binomial selection splits the counts into two low-dose scans with independent noise. We prove through noise-propagation analysis that the reconstructed spectral images inherit this noise independence from the counts domain, and we validate it in numerical simulations and experimental phantom scans. The method offers the flexibility to split measurements into desired dose levels while ensuring that the reconstructed images share identical underlying features, thereby strengthening the model's robustness to input dose levels and its capability to preserve fine details. In both numerical simulations and experimental phantom scans, we demonstrate that Noise2Noise with binomial selection outperforms other common self-supervised learning methods based on different assumptions.

Citations: 0
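The binomial-selection idea rests on a standard property of Poisson counts: independently assigning each detected photon to split A with probability p yields two independent Poisson variables with means p·λ and (1-p)·λ. A minimal numerical check (simulated counts in one energy bin; the values of λ and p here are arbitrary choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated photon counts in one energy bin (Poisson, mean = full dose).
full_dose = rng.poisson(lam=200.0, size=100_000)

# Binomial selection: each photon goes to split A with probability p.
p = 0.5
split_a = rng.binomial(full_dose, p)
split_b = full_dose - split_a

print(split_a.mean(), split_b.mean())       # ≈ 100 each (p*lam, (1-p)*lam)
print(np.corrcoef(split_a, split_b)[0, 1])  # ≈ 0: the splits are independent
```

Because the splits share the same underlying object but carry independent noise, reconstructions of the two halves form a valid Noise2Noise training pair.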