IEEE Transactions on Medical Imaging: Latest Articles

Unsupervised Brain Lesion Segmentation Using Posterior Distributions Learned by Subspace-based Generative Model
IF 10.6 · Q1 (Medicine)
IEEE Transactions on Medical Imaging · Pub Date: 2025-08-08 · DOI: 10.1109/tmi.2025.3597080
Huixiang Zhuang, Yue Guan, Yi Ding, Chang Xu, Zijun Cheng, Yuhao Ma, Ruihao Liu, Ziyu Meng, Cao Li, Yao Li, Zhi-Pei Liang
Abstract: Unsupervised brain lesion segmentation, which learns normative distributions from images of healthy subjects, is less dependent on lesion-labeled data and therefore generalizes better. A fundamental challenge in learning normative image distributions is the high dimensionality that arises when image pixels are treated as correlated random variables to capture spatial dependence. In this study, we propose a subspace-based deep generative model to learn posterior normal distributions. Specifically, we use probabilistic subspace models to capture the spatial-intensity and spatial-structure distributions of brain images from healthy subjects. These models capture prior spatial-intensity and spatial-structure variations effectively by treating the subspace coefficients as random variables, with basis functions being the eigen-images and eigen-density functions learned from the training data. For a given image, these prior distributions are then converted into posterior distributions, including both posterior normal and posterior lesion distributions, using the subspace-based generative model and subspace-assisted Bayesian analysis, respectively. Finally, an unsupervised fusion classifier combines the posterior and likelihood features for lesion segmentation. The proposed method has been evaluated on simulated and real lesion data, including tumor, multiple sclerosis, and stroke, demonstrating superior segmentation accuracy and robustness over state-of-the-art methods. Our proposed method holds promise for enhancing unsupervised brain lesion delineation in clinical applications.
Citations: 0
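As a rough illustration of the subspace idea, the sketch below treats healthy images as samples from a low-dimensional linear-Gaussian model: eigen-images are learned by PCA, subspace coefficients are shrunk toward their prior to produce a posterior "normal-appearing" reconstruction, and the residual flags lesion-like deviations. All data, dimensions, and parameters are synthetic toys, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "healthy" training images: 200 samples, 64 pixels each.
n, d, k = 200, 64, 5
true_basis = np.linalg.qr(rng.normal(size=(d, k)))[0]          # orthonormal eigen-images
train = rng.normal(size=(n, k)) @ true_basis.T + 0.05 * rng.normal(size=(n, d))

mean = train.mean(axis=0)
# Learn eigen-images and coefficient variances by PCA (SVD of centered data).
_, s, Vt = np.linalg.svd(train - mean, full_matrices=False)
eigen_images = Vt[:k]                                          # (k, d)
coeff_var = (s[:k] ** 2) / n                                   # prior variance of each coefficient

def posterior_normal(image, sigma2=0.05 ** 2):
    """Posterior mean of subspace coefficients under a linear-Gaussian model,
    then the reconstructed 'normal-appearing' image."""
    proj = eigen_images @ (image - mean)                       # (k,)
    shrink = coeff_var / (coeff_var + sigma2)                  # Wiener-style shrinkage
    return mean + eigen_images.T @ (shrink * proj)

# A 'lesion' is a localized deviation the normative model cannot explain.
healthy = mean + true_basis @ rng.normal(size=k)
lesioned = healthy.copy()
lesioned[10:15] += 3.0                                         # simulated lesion
residual = np.abs(lesioned - posterior_normal(lesioned))
print(residual[10:15].mean() > residual[20:].mean())           # lesion pixels stand out
```

The residual map plays the role of lesion evidence here; the paper's method additionally models posterior lesion distributions and fuses posterior and likelihood features, which this toy omits.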
An Anisotropic Cross-View Texture Transfer with Multi-Reference Non-Local Attention for CT Slice Interpolation
IF 10.6 · Q1 (Medicine)
IEEE Transactions on Medical Imaging · Pub Date: 2025-08-08 · DOI: 10.1109/tmi.2025.3596957
Kwang-Hyun Uhm, Hyunjun Cho, Sung-Hoo Hong, Seung-Won Jung
Abstract: Computed tomography (CT) is one of the most widely used non-invasive imaging modalities for medical diagnosis. In clinical practice, CT images are usually acquired with large slice thickness due to the high cost of memory storage and operation time, resulting in an anisotropic CT volume with much lower inter-slice resolution than in-plane resolution. Since such inconsistent resolution may complicate disease diagnosis, deep learning-based volumetric super-resolution methods have been developed to improve inter-slice resolution. Most existing methods conduct single-image super-resolution on the through-plane or synthesize intermediate slices from adjacent slices; however, the anisotropic characteristic of 3D CT volumes has not been well explored. In this paper, we propose a novel cross-view texture transfer approach for CT slice interpolation that fully exploits the anisotropic nature of 3D CT volumes. Specifically, we design a unique framework that takes high-resolution in-plane texture details as a reference and transfers them to low-resolution through-plane images. To this end, we introduce a multi-reference non-local attention module that extracts meaningful features from multiple in-plane images for reconstructing through-plane high-frequency details. Through extensive experiments, we demonstrate that our method performs significantly better in CT slice interpolation than existing competing methods on public CT datasets, including a real-paired benchmark, verifying the effectiveness of the proposed framework. The source code of this work is available at https://github.com/khuhm/ACVTT.
Citations: 0
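The multi-reference non-local attention idea (pooling features from several high-resolution in-plane references and attending over all their positions at once) can be sketched as scaled dot-product attention. In this sketch the keys double as values for brevity; the actual module presumably learns separate projections:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_reference_attention(query, references):
    """Aggregate texture features from several reference feature maps via
    non-local (dot-product) attention.
    Shapes: query (Nq, C); each reference (Nr, C)."""
    keys = np.concatenate(references, axis=0)        # pool all reference positions
    scores = query @ keys.T / np.sqrt(query.shape[1])
    weights = softmax(scores, axis=1)                # (Nq, Nr_total)
    return weights @ keys, weights

rng = np.random.default_rng(1)
q = rng.normal(size=(4, 8))                          # 4 through-plane query positions, 8 channels
refs = [rng.normal(size=(6, 8)) for _ in range(3)]   # 3 in-plane reference slices
out, w = multi_reference_attention(q, refs)
print(out.shape, np.allclose(w.sum(axis=1), 1.0))    # (4, 8) True
```

Each query position thus draws high-frequency detail from whichever reference positions match it best, across all references jointly rather than one at a time.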
JustRAIGS: Justified Referral in AI Glaucoma Screening Challenge
IF 10.6 · Q1 (Medicine)
IEEE Transactions on Medical Imaging · Pub Date: 2025-08-07 · DOI: 10.1109/tmi.2025.3596874
Yeganeh Madadi, Hina Raja, Koenraad A Vermeer, Hans G Lemij, Xiaoqin Huang, Eunjin Kim, Seunghoon Lee, Gitaek Kwon, Hyunwoo Kim, Jaeyoung Kim, Adrian Galdran, Miguel A Gonzalez Ballester, Dan Presil, Kristhian Aguilar, Victor Cavalcante, Celso Carvalho, Waldir Sabino, Mateus Oliveira, Hui Lin, Charilaos Apostolidis, Aggelos K Katsaggelos, Tomasz Kubrak, A Casado-Garcia, J Heras, M Ortega, L Ramos, Philippe Zhang, Yihao Li, Jing Zhang, Weili Jiang, Pierre-Henri Conze, Mathieu Lamard, Gwenole Quellec, Mostafa El Habib Daho, Madukuri Shaurya, Anumeha Varma, Monika Agrawal, Siamak Yousefi
Abstract: Glaucoma is a major contributor to permanent vision loss. Early diagnosis is crucial for preventing glaucoma-related vision loss, making screening essential. A more affordable approach to glaucoma screening can be achieved by applying artificial intelligence to evaluate color fundus photographs (CFPs). We present the Justified Referral in AI Glaucoma Screening (JustRAIGS) challenge to further develop AI algorithms for glaucoma screening and to assess their efficacy. To support this challenge, we assembled a large dataset containing more than 110,000 meticulously labeled CFPs obtained from approximately 60,000 patients and 500 distinct screening centers in the USA. Our objective is to assess the practicality of creating advanced and dependable AI systems that take a CFP as input and produce the probability of referable glaucoma, as well as outputs justifying the referral, by integrating binary and multi-label classification tasks. This paper presents the evaluation of solutions provided by nine teams, recognizing the team with the highest performance. The best sensitivity at a specificity level of 95% was 85%, and the best average Hamming loss was 0.13. Additionally, we tested the top three participants' algorithms on an external dataset to validate the performance and generalization of these models. The outcomes of this research offer valuable insights into the development of intelligent systems for detecting glaucoma and can aid in the early detection and treatment of glaucoma patients, decreasing preventable vision impairment and blindness caused by glaucoma.
Citations: 0
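The two headline metrics, sensitivity at 95% specificity for the binary referral task and average Hamming loss for the multi-label justification task, can be computed as below. The scores are synthetic, and the thresholding convention is one common choice rather than necessarily the challenge's exact protocol:

```python
import numpy as np

def sensitivity_at_specificity(scores_pos, scores_neg, specificity=0.95):
    """Sensitivity when the decision threshold is set so that the given
    fraction of negatives is correctly rejected."""
    thresh = np.quantile(scores_neg, specificity)    # 95% of negatives fall below
    return float(np.mean(scores_pos > thresh))

def hamming_loss(y_true, y_pred):
    """Fraction of label positions that disagree (multi-label task)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(y_true != y_pred))

rng = np.random.default_rng(2)
neg = rng.normal(0.0, 1.0, 1000)                     # non-referable eyes
pos = rng.normal(2.0, 1.0, 1000)                     # referable glaucoma
print(round(sensitivity_at_specificity(pos, neg), 2))

# Two eyes, three justification labels each; one label position disagrees.
print(hamming_loss([[1, 0, 1], [0, 0, 1]], [[1, 1, 1], [0, 0, 1]]))
```

Lower Hamming loss is better (0 means every justification label matches), which is why 0.13 was the best reported value.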
Improving Learning of New Diseases through Knowledge-Enhanced Initialization for Federated Adapter Tuning
IF 10.6 · Q1 (Medicine)
IEEE Transactions on Medical Imaging · Pub Date: 2025-08-07 · DOI: 10.1109/tmi.2025.3596835
Danni Peng, Yuan Wang, Kangning Cai, Peiyan Ning, Jiming Xu, Yong Liu, Rick Siow Mong Goh, Qingsong Wei, Huazhu Fu
Abstract: In healthcare, federated learning (FL) is a widely adopted framework that enables privacy-preserving collaboration among medical institutions. With large foundation models (FMs) demonstrating impressive capabilities, using FMs in FL through cost-efficient adapter tuning has become a popular approach. Given the rapidly evolving healthcare environment, it is crucial for individual clients to quickly adapt to new tasks or diseases by tuning adapters while drawing on past experience. In this work, we introduce Federated Knowledge-Enhanced Initialization (FedKEI), a novel framework that leverages cross-client and cross-task transfer of past knowledge to generate informed initializations for learning new tasks with adapters. FedKEI begins with a global clustering process at the server to generalize knowledge across tasks, followed by optimization of the aggregation weights across clusters (inter-cluster weights) and within each cluster (intra-cluster weights) to personalize knowledge transfer for each new task. To learn the inter- and intra-cluster weights more effectively, we adopt a bi-level optimization scheme that collaboratively learns the global intra-cluster weights across clients and optimizes the local inter-cluster weights toward each client's task objective. Extensive experiments on three benchmark datasets of different modalities, including dermatology, chest X-rays, and retinal OCT, demonstrate FedKEI's advantage in adapting to new diseases compared to state-of-the-art methods.
Citations: 0
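The two-level aggregation behind FedKEI's initialization can be caricatured as a weighted average of past adapter weights: intra-cluster weights first combine adapters inside each cluster into a representative, then inter-cluster weights combine the representatives. The vectors, weight values, and function name below are entirely hypothetical; the actual framework learns these weights via bi-level optimization rather than taking them as given:

```python
import numpy as np

def knowledge_enhanced_init(past_adapters, cluster_ids, intra_w, inter_w):
    """Two-level weighted average of past adapter weight vectors:
    intra-cluster weights build one representative per cluster,
    inter-cluster weights blend the representatives into an initialization."""
    clusters = sorted(set(cluster_ids))
    reps = []
    for c in clusters:
        members = [a for a, cid in zip(past_adapters, cluster_ids) if cid == c]
        w = np.asarray(intra_w[c], dtype=float)
        w = w / w.sum()                              # normalize within the cluster
        reps.append(sum(wi * m for wi, m in zip(w, members)))
    iw = np.asarray([inter_w[c] for c in clusters], dtype=float)
    iw = iw / iw.sum()                               # normalize across clusters
    return sum(wi * r for wi, r in zip(iw, reps))

# Toy: 4 past adapter weight vectors grouped into 2 clusters.
adapters = [np.array([1.0, 0.0]), np.array([0.0, 1.0]),
            np.array([4.0, 4.0]), np.array([2.0, 2.0])]
init = knowledge_enhanced_init(adapters, [0, 0, 1, 1],
                               intra_w={0: [0.5, 0.5], 1: [0.75, 0.25]},
                               inter_w={0: 0.5, 1: 0.5})
print(init)                                          # [2. 2.]
```

A new task would start adapter tuning from `init` instead of a random or zero initialization, which is the sense in which past knowledge is transferred.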
GM-ABS: Promptable Generalist Model Drives Active Barely Supervised Training in Specialist Model for 3D Medical Image Segmentation
IF 10.6 · Q1 (Medicine)
IEEE Transactions on Medical Imaging · Pub Date: 2025-08-07 · DOI: 10.1109/tmi.2025.3596850
Zhe Xu, Cheng Chen, Donghuan Lu, Jinghan Sun, Dong Wei, Yefeng Zheng, Quanzheng Li, Raymond Kai-Yu Tong
Abstract: Semi-supervised learning (SSL) has greatly advanced 3D medical image segmentation by alleviating the need for intensive labeling by radiologists. While previous efforts focused on model-centric advances, the emergence of foundational generalist models such as the Segment Anything Model (SAM) is expected to reshape the SSL landscape. Although these generalists usually show performance gaps relative to medical-imaging specialists, they possess impressive zero-shot segmentation abilities given manual prompts. This capability can therefore serve as a "free lunch" for training specialists, offering future SSL a promising data-centric perspective, particularly by revolutionizing both pseudo-labeling and expert-labeling strategies to enhance the data pool. In this regard, we propose the Generalist Model-driven Active Barely Supervised (GM-ABS) learning paradigm for developing specialized 3D segmentation models under extremely limited (barely supervised) annotation budgets, e.g., merely cross-labeling three slices per selected scan. Specifically, building on a basic mean-teacher SSL framework, GM-ABS modernizes the SSL paradigm with two key data-centric designs: (i) specialist-generalist collaboration, where the in-training specialist leverages class-specific positional prompts derived from class prototypes to interact with the frozen class-agnostic generalist across multiple views, achieving noisy-yet-effective label augmentation; the specialist then robustly assimilates the augmented knowledge via noise-tolerant collaborative learning; and (ii) expert-model collaboration, which promotes active cross-labeling with notably low labeling effort. This design progressively furnishes the specialist with informative and efficient supervision in a human-in-the-loop manner, which in turn improves the quality of the class-specific prompts. Extensive experiments on three benchmark datasets highlight the promising performance of GM-ABS over recent SSL approaches under extremely constrained labeling resources.
Citations: 0
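A minimal sketch of deriving a class-specific positional prompt from a class prototype, assuming per-pixel features and a few labeled pixels: average the labeled pixels' features into a prototype, then propose the most similar unlabeled position as a point prompt for the generalist. The feature model and selection rule here are illustrative, not the paper's exact design:

```python
import numpy as np

def prototype_prompt(features, labeled_mask):
    """Compute a class prototype from labeled pixels, then pick the unlabeled
    position most similar to it as a positional prompt."""
    proto = features[labeled_mask].mean(axis=0)             # class prototype (C,)
    sims = features @ proto / (
        np.linalg.norm(features, axis=1) * np.linalg.norm(proto) + 1e-8)
    sims[labeled_mask] = -np.inf                            # only propose new positions
    return int(np.argmax(sims))

rng = np.random.default_rng(3)
feats = rng.normal(size=(100, 16))                          # 100 pixels, 16-dim features
feats[40:50] += 5.0                                         # a coherent 'organ' cluster
labeled = np.zeros(100, dtype=bool)
labeled[40:43] = True                                       # three labeled pixels
prompt_idx = prototype_prompt(feats, labeled)
print(40 <= prompt_idx < 50)                                # prompt lands in the cluster
```

In GM-ABS such prompts let the frozen class-agnostic generalist produce class-specific masks, which the specialist then consumes as noisy augmented labels.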
Collaborative Learning of Augmentation and Disentanglement for Semi-Supervised Domain Generalized Medical Image Segmentation
IF 10.6 · Q1 (Medicine)
IEEE Transactions on Medical Imaging · Pub Date: 2025-08-06 · DOI: 10.1109/tmi.2025.3596247
Zhiqiang Shen, Peng Cao, Qinghua Zhou, Jinzhu Yang, Osmar R Zaiane
Abstract: This paper explores a challenging yet realistic scenario: semi-supervised domain generalization (SSDG), which involves both label scarcity and domain shift. We pinpoint two limitations of previous SSDG methods: 1) they neglect the difference between domain shifts within a training dataset (intra-domain shift, IDS) and those between training and testing datasets (cross-domain shift, CDS); and 2) they overlook the interplay between label scarcity and domain shift, merely stitching together semi-supervised learning (SSL) and domain generalization (DG) techniques. Considering these limitations, we propose a novel perspective that decomposes SSDG into a combination of unsupervised domain adaptation (UDA) and DG problems. To this end, we design a causal augmentation and disentanglement framework (CausalAD) for semi-supervised domain generalized medical image segmentation. Concretely, CausalAD involves two collaborative processes: an augmentation process, which utilizes disentangled style factors to perform style augmentation for UDA, and a disentanglement process, which decouples domain-invariant (content) and domain-variant (noise and style) features for DG. Furthermore, we propose a proxy-based self-paced training strategy (ProSPT) that guides the training of CausalAD by gradually selecting unlabeled image pixels with high-quality pseudo-labels in a self-paced manner. Finally, we introduce a hierarchical structural causal model (HSCM) to explain the intuition behind our method. Extensive experiments in cross-sequence, cross-site, and cross-modality semi-supervised domain generalized medical image segmentation settings show the effectiveness of CausalAD and its superiority over the state of the art. The code is available at https://github.com/Senyh/CausalAD.
Citations: 0
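Style augmentation from disentangled style factors can be approximated, in its simplest first-order form, by swapping intensity statistics between domains while keeping content structure. The actual CausalAD augmentation is learned; this mean/std swap is only a crude stand-in to show the mechanism:

```python
import numpy as np

def style_swap(content_img, style_img):
    """Transfer first-order style statistics (mean/std) from style_img onto
    content_img while preserving the content's normalized structure."""
    c_mu, c_std = content_img.mean(), content_img.std()
    s_mu, s_std = style_img.mean(), style_img.std()
    return (content_img - c_mu) / (c_std + 1e-8) * s_std + s_mu

rng = np.random.default_rng(4)
domain_a = rng.normal(0.2, 0.1, size=(32, 32))   # e.g. one MR sequence's intensity profile
domain_b = rng.normal(0.7, 0.3, size=(32, 32))   # another sequence's intensity profile
aug = style_swap(domain_a, domain_b)
print(np.isclose(aug.mean(), domain_b.mean()),
      np.isclose(aug.std(), domain_b.std()))     # True True
```

Training a segmenter on such style-shifted copies exposes it to intra-domain shift explicitly, which is the UDA half of the decomposition described above.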
Enhancing Brain Source Reconstruction by Initializing 3D Neural Networks with Physical Inverse Solutions
IF 10.6 · Q1 (Medicine)
IEEE Transactions on Medical Imaging · Pub Date: 2025-08-04 · DOI: 10.1109/tmi.2025.3594724
Marco Morik, Ali Hashemi, Klaus-Robert Muller, Stefan Haufe, Shinichi Nakajima
Abstract: Reconstructing brain sources is a fundamental challenge in neuroscience, crucial for understanding brain function and dysfunction. Electroencephalography (EEG) signals have high temporal resolution, but identifying the correct spatial location of brain sources from these signals remains difficult due to the ill-posed nature of the problem. Traditional methods rely predominantly on manually crafted priors, missing the flexibility of data-driven learning, while recent deep learning approaches focus on end-to-end learning, typically using the physical information of the forward model only to generate training data. We propose 3D-PIUNet, a novel hybrid method for EEG source localization that effectively integrates the strengths of traditional and deep learning techniques. 3D-PIUNet starts from an initial physics-informed estimate, using the pseudo-inverse to map from measurements to source space. Then, viewing the brain as a 3D volume, it uses a 3D convolutional U-Net to capture spatial dependencies and refine the solution according to a learned data prior. Training relies on simulated pseudo-realistic brain source data covering different source distributions. Trained on this data, our model significantly improves spatial accuracy, demonstrating superior performance over both traditional and end-to-end data-driven methods. Additionally, we validate our findings with real EEG data from a visual task, where 3D-PIUNet successfully identifies the visual cortex and reconstructs the expected temporal behavior, showcasing its practical applicability.
Citations: 0
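The physics-informed initialization step, mapping sensor measurements back to source space with the pseudo-inverse of the forward (lead-field) matrix, can be sketched as below. The lead-field here is a random stand-in for a real head model, and the actual pipeline reshapes this blurry estimate into a 3D volume for U-Net refinement:

```python
import numpy as np

rng = np.random.default_rng(5)
n_sensors, n_sources = 32, 500
L = rng.normal(size=(n_sensors, n_sources))       # hypothetical lead-field (forward) matrix

# Ground-truth sparse source activity and its (noisy) sensor measurements.
x_true = np.zeros(n_sources)
x_true[123] = 1.0                                 # one active source
y = L @ x_true + 0.01 * rng.normal(size=n_sensors)

# Physics-informed initial estimate: minimum-norm solution via the pseudo-inverse.
x_init = np.linalg.pinv(L) @ y
print(x_init.shape)                               # (500,)
```

The estimate `x_init` spreads the true source's energy across many locations (the problem is heavily underdetermined: 32 sensors for 500 sources), which is exactly why a learned 3D refinement stage on top of it can help.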
VLM-CPL: Consensus Pseudo-Labels from Vision-Language Models for Annotation-Free Pathological Image Classification
IF 10.6 · Q1 (Medicine)
IEEE Transactions on Medical Imaging · Pub Date: 2025-08-04 · DOI: 10.1109/tmi.2025.3595111
Lanfeng Zhong, Zongyao Huang, Yang Liu, Wenjun Liao, Shichuan Zhang, Guotai Wang, Shaoting Zhang
Citations: 0
In vivo 4D x-ray dark-field lung imaging in mice
IF 10.6 · Q1 (Medicine)
IEEE Transactions on Medical Imaging · Pub Date: 2025-08-04 · DOI: 10.1109/tmi.2025.3595666
Ying Ying How, Nicole Reyne, Michelle K. Croughan, Patricia Cmielewski, Daniel Batey, Lucy F. Costello, Ronan Smith, Jannis N. Ahlers, Marian Cholewa, Magdalena Kolodziej, Julia Duerr, Marcus A. Mall, Marcus J. Kitchen, Marie-Liesse Asselin-Labat, David M. Paganin, Martin Donnelley, Kaye S. Morgan
Citations: 0
A Wasserstein Space Based Framework for Processing Fiber Orientation Geometry in Diffusion MRI
IF 10.6 · Q1 (Medicine)
IEEE Transactions on Medical Imaging · Pub Date: 2025-08-04 · DOI: 10.1109/tmi.2025.3595367
Xinyu Nie, Yonggang Shi
Abstract: The fiber orientation distribution (FOD) function is an advanced model for high angular resolution diffusion MRI, capable of representing complex crossing or fanning fiber geometries. However, the intricate mathematical structure of FOD functions poses significant challenges for data processing and analysis. Current frameworks often fail to consider fiber-bundle rotation among FOD peaks, leading to improper processing, such as inaccurate FOD interpolation and, consequently, anatomically incorrect fiber tracking. This paper presents a novel Wasserstein space based framework for processing and analyzing FOD functions that systematically accounts for fiber-bundle-specific geometry. Our approach begins with a spherical deconvolution method to accurately detect and decompose FOD functions into single-peak lobes. These single-peak lobes are then embedded into the Wasserstein space, where a new metric for FOD functions is defined that can handle rotations among peak lobes. We introduce a geometry-aware clustering method to regroup the single-peak lobes for further bundle-specific FOD processing. The proposed framework is applied to the essential task of FOD interpolation, computed as the barycenter under the new metric, with a fast approximation method for efficient computation. Experiments on synthetic data, as well as datasets from the Human Connectome Project (HCP) and the Alzheimer's Disease Neuroimaging Initiative (ADNI), demonstrate that our framework effectively handles complex fiber geometries, provides anatomically meaningful FOD interpolations, and significantly enhances the performance of FOD-based tractography.
Citations: 0
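The Wasserstein barycenter used for interpolation can be illustrated in one dimension, where displacement interpolation amounts to averaging quantile functions: unlike a linear average of densities (which is bimodal), the W2 barycenter moves a single lobe smoothly between the two endpoints. The paper works with lobes on the sphere, so this 1-D sketch is only an analogy for the geometry:

```python
import numpy as np

def wasserstein_barycenter_1d(samples_a, samples_b, t=0.5):
    """Displacement interpolation between two equal-size 1-D empirical
    distributions: average their sorted samples (matched quantiles)."""
    qa = np.sort(samples_a)
    qb = np.sort(samples_b)
    return (1 - t) * qa + t * qb

a = np.random.default_rng(6).normal(-2.0, 0.5, 1000)   # one lobe's orientation samples
b = np.random.default_rng(7).normal(3.0, 0.5, 1000)    # a rotated lobe
mid = wasserstein_barycenter_1d(a, b, t=0.5)

# The barycenter is a single lobe roughly halfway between the endpoints,
# rather than a mixture of the two.
print(round(float(np.mean(mid)), 1))                   # ≈ 0.5
```

This is why a Wasserstein-based interpolation rotates an FOD peak toward its target instead of fading one peak out while fading another in.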