IEEE Transactions on Medical Imaging: Latest Articles

Building a Synthetic Vascular Model: Evaluation in an Intracranial Aneurysms Detection Scenario.
IEEE Transactions on Medical Imaging Pub Date: 2024-11-06 DOI: 10.1109/TMI.2024.3492313
Rafic Nader, Florent Autrusseau, Vincent L'Allinec, Romain Bourcier
{"title":"Building a Synthetic Vascular Model: Evaluation in an Intracranial Aneurysms Detection Scenario.","authors":"Rafic Nader, Florent Autrusseau, Vincent L'Allinec, Romain Bourcier","doi":"10.1109/TMI.2024.3492313","DOIUrl":"https://doi.org/10.1109/TMI.2024.3492313","url":null,"abstract":"<p><p>We hereby present a full synthetic model, able to mimic the various constituents of the cerebral vascular tree, including the cerebral arteries, bifurcations and intracranial aneurysms. This model intends to provide a substantial dataset of brain arteries which could be used by a 3D convolutional neural network to efficiently detect Intra-Cranial Aneurysms. The cerebral aneurysms most often occur on a particular structure of the vascular tree named the Circle of Willis. Various studies have been conducted to detect and monitor the aneurysms and those based on Deep Learning achieve the best performance. Specifically, in this work, we propose a full synthetic 3D model able to mimic the brain vasculature as acquired by Magnetic Resonance Angiography, Time Of Flight principle. Among the various MRI modalities, this latter allows for a good rendering of the blood vessels and is non-invasive. Our model has been designed to simultaneously mimic the arteries' geometry, the aneurysm shape, and the background noise. The vascular tree geometry is modeled thanks to an interpolation with 3D Spline functions, and the statistical properties of the background noise is collected from angiography acquisitions and reproduced within the model. In this work, we thoroughly describe the synthetic vasculature model, we build up a neural network designed for aneurysm segmentation and detection, finally, we carry out an in-depth evaluation of the performance gap gained thanks to the synthetic model data augmentation.</p>","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142591694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
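The vascular geometry above is built by interpolating centerline points with 3D spline functions. As a minimal sketch of that idea (not the authors' code; the control points below are hypothetical), one can fit and densely resample a cubic B-spline centerline with SciPy:

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical 3D control points along an artery centerline (in mm).
points = np.array([
    [0.0, 0.0, 0.0],
    [2.0, 1.5, 0.5],
    [4.0, 2.0, 1.8],
    [6.0, 1.2, 3.0],
    [8.0, 0.0, 3.5],
])

# Fit an interpolating cubic B-spline through the control points.
tck, _ = splprep(points.T, s=0.0, k=3)

# Densely resample a smooth centerline.
u = np.linspace(0.0, 1.0, 200)
centerline = np.stack(splev(u, tck), axis=1)            # (200, 3)

# First derivatives give local tangents, which would orient circular
# cross-sections when voxelizing the vessel lumen.
tangents = np.stack(splev(u, tck, der=1), axis=1)
tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)

print(centerline.shape, tangents.shape)                 # (200, 3) (200, 3)
```

Aneurysm sacs and background noise with the measured TOF-MRA statistics would then be added on top of the rendered tree.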
FAMF-Net: Feature Alignment Mutual Attention Fusion with Region Awareness for Breast Cancer Diagnosis via Imbalanced Data.
IEEE Transactions on Medical Imaging Pub Date: 2024-11-05 DOI: 10.1109/TMI.2024.3485612
Yiyao Liu, Jinyao Li, Cheng Zhao, Yongtao Zhang, Qian Chen, Jing Qin, Lei Dong, Tianfu Wang, Wei Jiang, Baiying Lei
{"title":"FAMF-Net: Feature Alignment Mutual Attention Fusion with Region Awareness for Breast Cancer Diagnosis via Imbalanced Data.","authors":"Yiyao Liu, Jinyao Li, Cheng Zhao, Yongtao Zhang, Qian Chen, Jing Qin, Lei Dong, Tianfu Wang, Wei Jiang, Baiying Lei","doi":"10.1109/TMI.2024.3485612","DOIUrl":"https://doi.org/10.1109/TMI.2024.3485612","url":null,"abstract":"<p><p>Automatic and accurate classification of breast cancer in multimodal ultrasound images is crucial to improve patients' diagnosis and treatment effect and save medical resources. Methodologically, the fusion of multimodal ultrasound images often encounters challenges such as misalignment, limited utilization of complementary information, poor interpretability in feature fusion, and imbalances in sample categories. To solve these problems, we propose a feature alignment mutual attention fusion method (FAMF-Net), which consists of a region awareness alignment (RAA) block, a mutual attention fusion (MAF) block, and a reinforcement learning-based dynamic optimization strategy(RDO). Specifically, RAA achieves region awareness through class activation mapping and performs translation transformation to achieve feature alignment. When MAF utilizes a mutual attention mechanism for feature interaction fusion, it mines edge and color features separately in B-mode and shear wave elastography images, enhancing the complementarity of features and improving interpretability. Finally, RDO uses the distribution of samples and prediction probabilities during training as the state of reinforcement learning to dynamically optimize the weights of the loss function, thereby solving the problem of class imbalance. The experimental results based on our clinically obtained dataset demonstrate the effectiveness of the proposed method. Our code will be available at: https://github.com/Magnety/Multi_modal_Image.</p>","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142585413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
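The core of the MAF block, two modality streams attending to each other, can be pictured with a generic mutual cross-attention module. This is an illustrative sketch of the mechanism, not the published FAMF-Net code; the layer choices and sizes are assumptions:

```python
import torch
import torch.nn as nn

class MutualAttentionFusion(nn.Module):
    """Generic mutual cross-attention between two aligned modality features.

    Each modality queries the other, so complementary cues (e.g., edges in
    B-mode, stiffness/color in elastography) flow in both directions.
    Illustrative only; layer choices are assumptions.
    """
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn_ab = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_ba = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor):
        # feat_a, feat_b: (B, C, H, W) spatially aligned feature maps.
        B, C, H, W = feat_a.shape
        a = feat_a.flatten(2).transpose(1, 2)            # (B, HW, C)
        b = feat_b.flatten(2).transpose(1, 2)
        a2, _ = self.attn_ab(query=a, key=b, value=b)    # A attends to B
        b2, _ = self.attn_ba(query=b, key=a, value=a)    # B attends to A
        fused = self.proj(torch.cat([a + a2, b + b2], dim=-1))
        return fused.transpose(1, 2).reshape(B, C, H, W)

fusion = MutualAttentionFusion(dim=64)
out = fusion(torch.randn(2, 64, 16, 16), torch.randn(2, 64, 16, 16))
print(out.shape)    # torch.Size([2, 64, 16, 16])
```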
Corrections to “Contrastive Graph Pooling for Explainable Classification of Brain Networks”
IEEE Transactions on Medical Imaging Pub Date: 2024-11-04 DOI: 10.1109/TMI.2024.3465968
Jiaxing Xu;Qingtian Bian;Xinhang Li;Aihu Zhang;Yiping Ke;Miao Qiao;Wei Zhang;Wei Khang Jeremy Sim;Balázs Gulyás
{"title":"Corrections to “Contrastive Graph Pooling for Explainable Classification of Brain Networks”","authors":"Jiaxing Xu;Qingtian Bian;Xinhang Li;Aihu Zhang;Yiping Ke;Miao Qiao;Wei Zhang;Wei Khang Jeremy Sim;Balázs Gulyás","doi":"10.1109/TMI.2024.3465968","DOIUrl":"10.1109/TMI.2024.3465968","url":null,"abstract":"","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"43 11","pages":"4075-4075"},"PeriodicalIF":0.0,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10741900","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142577333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-Center Fetal Brain Tissue Annotation (FeTA) Challenge 2022 Results.
IEEE Transactions on Medical Imaging Pub Date: 2024-10-30 DOI: 10.1109/TMI.2024.3485554
Kelly Payette, Celine Steger, Roxane Licandro, Priscille De Dumast, Hongwei Bran Li, Matthew Barkovich, Liu Li, Maik Dannecker, Chen Chen, Cheng Ouyang, Niccolo McConnell, Alina Miron, Yongmin Li, Alena Uus, Irina Grigorescu, Paula Ramirez Gilliland, Md Mahfuzur Rahman Siddiquee, Daguang Xu, Andriy Myronenko, Haoyu Wang, Ziyan Huang, Jin Ye, Mireia Alenya, Valentin Comte, Oscar Camara, Jean-Baptiste Masson, Astrid Nilsson, Charlotte Godard, Moona Mazher, Abdul Qayyum, Yibo Gao, Hangqi Zhou, Shangqi Gao, Jia Fu, Guiming Dong, Guotai Wang, ZunHyan Rieu, HyeonSik Yang, Minwoo Lee, Szymon Plotka, Michal K Grzeszczyk, Arkadiusz Sitek, Luisa Vargas Daza, Santiago Usma, Pablo Arbelaez, Wenying Lu, Wenhao Zhang, Jing Liang, Romain Valabregue, Anand A Joshi, Krishna N Nayak, Richard M Leahy, Luca Wilhelmi, Aline Dandliker, Hui Ji, Antonio G Gennari, Anton Jakovcic, Melita Klaic, Ana Adzic, Pavel Markovic, Gracia Grabaric, Gregor Kasprian, Gregor Dovjak, Milan Rados, Lana Vasung, Meritxell Bach Cuadra, Andras Jakab
{"title":"Multi-Center Fetal Brain Tissue Annotation (FeTA) Challenge 2022 Results.","authors":"Kelly Payette, Celine Steger, Roxane Licandro, Priscille De Dumast, Hongwei Bran Li, Matthew Barkovich, Liu Li, Maik Dannecker, Chen Chen, Cheng Ouyang, Niccolo McConnell, Alina Miron, Yongmin Li, Alena Uus, Irina Grigorescu, Paula Ramirez Gilliland, Md Mahfuzur Rahman Siddiquee, Daguang Xu, Andriy Myronenko, Haoyu Wang, Ziyan Huang, Jin Ye, Mireia Alenya, Valentin Comte, Oscar Camara, Jean-Baptiste Masson, Astrid Nilsson, Charlotte Godard, Moona Mazher, Abdul Qayyum, Yibo Gao, Hangqi Zhou, Shangqi Gao, Jia Fu, Guiming Dong, Guotai Wang, ZunHyan Rieu, HyeonSik Yang, Minwoo Lee, Szymon Plotka, Michal K Grzeszczyk, Arkadiusz Sitek, Luisa Vargas Daza, Santiago Usma, Pablo Arbelaez, Wenying Lu, Wenhao Zhang, Jing Liang, Romain Valabregue, Anand A Joshi, Krishna N Nayak, Richard M Leahy, Luca Wilhelmi, Aline Dandliker, Hui Ji, Antonio G Gennari, Anton Jakovcic, Melita Klaic, Ana Adzic, Pavel Markovic, Gracia Grabaric, Gregor Kasprian, Gregor Dovjak, Milan Rados, Lana Vasung, Meritxell Bach Cuadra, Andras Jakab","doi":"10.1109/TMI.2024.3485554","DOIUrl":"https://doi.org/10.1109/TMI.2024.3485554","url":null,"abstract":"<p><p>Segmentation is a critical step in analyzing the developing human fetal brain. There have been vast improvements in automatic segmentation methods in the past several years, and the Fetal Brain Tissue Annotation (FeTA) Challenge 2021 helped to establish an excellent standard of fetal brain segmentation. However, FeTA 2021 was a single center study, limiting real-world clinical applicability and acceptance. The multi-center FeTA Challenge 2022 focused on advancing the generalizability of fetal brain segmentation algorithms for magnetic resonance imaging (MRI). In FeTA 2022, the training dataset contained images and corresponding manually annotated multi-class labels from two imaging centers, and the testing data contained images from these two centers as well as two additional unseen centers. The multi-center data included different MR scanners, imaging parameters, and fetal brain super-resolution algorithms applied. 16 teams participated and 17 algorithms were evaluated. Here, the challenge results are presented, focusing on the generalizability of the submissions. Both in- and out-of-domain, the white matter and ventricles were segmented with the highest accuracy (Top Dice scores: 0.89, 0.87 respectively), while the most challenging structure remains the grey matter (Top Dice score: 0.75) due to anatomical complexity. The top 5 average Dices scores ranged from 0.81-0.82, the top 5 average 95<sup>th</sup> percentile Hausdorff distance values ranged from 2.3-2.5mm, and the top 5 volumetric similarity scores ranged from 0.90-0.92. 
The FeTA Challenge 2022 was able to successfully evaluate and advance generalizability of multi-class fetal brain tissue segmentation algorithms for MRI and it continues to benchmark new algorithms.</p>","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142549774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
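For reference, the per-class Dice score used to rank submissions can be computed as in the following minimal sketch (toy random label maps stand in for real segmentations; the challenge additionally reports the 95th-percentile Hausdorff distance and volumetric similarity):

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray, label: int) -> float:
    """Dice overlap for one tissue label in a multi-class segmentation."""
    p, g = (pred == label), (gt == label)
    denom = p.sum() + g.sum()
    return 2.0 * np.logical_and(p, g).sum() / denom if denom else 1.0

# Toy 3D label maps; in FeTA the labels cover multiple fetal brain tissues.
rng = np.random.default_rng(0)
pred = rng.integers(0, 4, size=(32, 32, 32))
gt = rng.integers(0, 4, size=(32, 32, 32))
for name, lab in [("white matter", 1), ("ventricles", 2), ("grey matter", 3)]:
    print(f"{name}: Dice = {dice_score(pred, gt, lab):.3f}")
```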
CQformer: Learning Dynamics Across Slices in Medical Image Segmentation
IEEE Transactions on Medical Imaging Pub Date: 2024-10-10 DOI: 10.1109/TMI.2024.3477555
Shengjie Zhang;Xin Shen;Xiang Chen;Ziqi Yu;Bohan Ren;Haibo Yang;Xiao-Yong Zhang;Yuan Zhou
{"title":"CQformer: Learning Dynamics Across Slices in Medical Image Segmentation","authors":"Shengjie Zhang;Xin Shen;Xiang Chen;Ziqi Yu;Bohan Ren;Haibo Yang;Xiao-Yong Zhang;Yuan Zhou","doi":"10.1109/TMI.2024.3477555","DOIUrl":"10.1109/TMI.2024.3477555","url":null,"abstract":"Prevalent studies on deep learning-based 3D medical image segmentation capture the continuous variation across 2D slices mainly via convolution, Transformer, inter-slice interaction, and time series models. In this work, via modeling this variation by an ordinary differential equation (ODE), we propose a cross instance query-guided Transformer architecture (CQformer) that leverages features from preceding slices to improve the segmentation performance of subsequent slices. Its key components include a cross-attention mechanism in an ODE formulation, which bridges the features of contiguous 2D slices of the 3D volumetric data. In addition, a regression head is employed to shorten the gap between the bottleneck and the prediction layer. Extensive experiments on 7 datasets with various modalities (CT, MRI) and tasks (organ, tissue, and lesion) demonstrate that CQformer outperforms previous state-of-the-art segmentation algorithms on 6 datasets by 0.44%–2.45%, and achieves the second highest performance of 88.30% on the BTCV dataset. The code is available at <uri>https://github.com/qbmizsj/CQformer</uri>.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 2","pages":"1043-1057"},"PeriodicalIF":0.0,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142402485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
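The ODE view of inter-slice variation can be made concrete with an explicit-Euler step whose "derivative" is a cross-attention over the current slice's features. This is a schematic reading of the abstract, not the released CQformer code; the dimensions and step size are assumptions:

```python
import torch
import torch.nn as nn

class SliceODEStep(nn.Module):
    """One explicit-Euler step of an ODE over the slice axis.

    The carried state for slice k+1 is h_k plus a step of a cross-attention
    "derivative" that queries the current slice's token features.
    """
    def __init__(self, dim: int, heads: int = 4, dt: float = 0.1):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.dt = dt

    def forward(self, h: torch.Tensor, slice_feats: torch.Tensor):
        # h, slice_feats: (B, N, C) token features.
        dh, _ = self.attn(query=h, key=slice_feats, value=slice_feats)
        return h + self.dt * dh          # h_{k+1} = h_k + dt * f(h_k, x_k)

step = SliceODEStep(dim=64)
h = torch.zeros(1, 196, 64)              # initial state
slices = torch.randn(8, 1, 196, 64)      # token features for 8 slices
for k in range(slices.shape[0]):         # integrate across the slice axis
    h = step(h, slices[k])
print(h.shape)                           # torch.Size([1, 196, 64])
```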
Non-Invasive Deep-Brain Imaging With 3D Integrated Photoacoustic Tomography and Ultrasound Localization Microscopy (3D-PAULM)
IEEE Transactions on Medical Imaging Pub Date: 2024-10-09 DOI: 10.1109/TMI.2024.3477317
Yuqi Tang;Nanchao Wang;Zhijie Dong;Matthew Lowerison;Angela del Aguila;Natalie Johnston;Tri Vu;Chenshuo Ma;Yirui Xu;Wei Yang;Pengfei Song;Junjie Yao
{"title":"Non-Invasive Deep-Brain Imaging With 3D Integrated Photoacoustic Tomography and Ultrasound Localization Microscopy (3D-PAULM)","authors":"Yuqi Tang;Nanchao Wang;Zhijie Dong;Matthew Lowerison;Angela del Aguila;Natalie Johnston;Tri Vu;Chenshuo Ma;Yirui Xu;Wei Yang;Pengfei Song;Junjie Yao","doi":"10.1109/TMI.2024.3477317","DOIUrl":"10.1109/TMI.2024.3477317","url":null,"abstract":"Photoacoustic computed tomography (PACT) is a proven technology for imaging hemodynamics in deep brain of small animal models. PACT is inherently compatible with ultrasound (US) imaging, providing complementary contrast mechanisms. While PACT can quantify the brain’s oxygen saturation of hemoglobin (sO<inline-formula> <tex-math>$_{{2}}text {)}$ </tex-math></inline-formula>, US imaging can probe the blood flow based on the Doppler effect. Further, by tracking gas-filled microbubbles, ultrasound localization microscopy (ULM) can map the blood flow velocity with sub-diffraction spatial resolution. In this work, we present a 3D deep-brain imaging system that seamlessly integrates PACT and ULM into a single device, 3D-PAULM. Using a low ultrasound frequency of 4 MHz, 3D-PAULM is capable of imaging the brain hemodynamic functions with intact scalp and skull in a totally non-invasive manner. Using 3D-PAULM, we studied the mouse brain functions with ischemic stroke. Multi-spectral PACT, US B-mode imaging, microbubble-enhanced power Doppler (PD), and ULM were performed on the same mouse brain with intrinsic image co-registration. From the multi-modality measurements, we further quantified blood perfusion, sO2, vessel density, and flow velocity of the mouse brain, showing stroke-induced ischemia, hypoxia, and reduced blood flow. We expect that 3D-PAULM can find broad applications in studying deep brain functions on small animal models.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 2","pages":"994-1004"},"PeriodicalIF":0.0,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142396305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
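Quantifying sO2 from multi-spectral PACT typically relies on linear spectral unmixing of oxy- and deoxy-hemoglobin. The sketch below shows that standard computation under stated assumptions; the wavelength pair is hypothetical and the extinction coefficients are illustrative placeholders to be swapped for tabulated spectra. It is not the authors' processing pipeline:

```python
import numpy as np

# Per-voxel PA amplitude at wavelength L is modeled as
#   PA(L) ~ eps_HbO2(L) * C_HbO2 + eps_Hb(L) * C_Hb,
# so two or more wavelengths let us solve for the two concentrations.
E = np.array([[518.0, 1405.0],           # 750 nm: [eps_HbO2, eps_Hb] (placeholder)
              [1058.0, 691.0]])          # 850 nm (placeholder)

def unmix_so2(pa: np.ndarray) -> np.ndarray:
    """pa: (n_wavelengths, n_voxels) amplitudes -> sO2 in [0, 1] per voxel."""
    conc, *_ = np.linalg.lstsq(E, pa, rcond=None)       # (2, n_voxels)
    conc = np.clip(conc, 0.0, None)                     # physical constraint
    total = conc.sum(axis=0)
    return np.divide(conc[0], total, out=np.zeros_like(total),
                     where=total > 0)

# Toy check: synthesize signals for a voxel with true sO2 = 0.8.
pa = E @ np.array([[0.8], [0.2]])
print(unmix_so2(pa))                     # ~[0.8]
```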
GlandSAM: Injecting Morphology Knowledge Into Segment Anything Model for Label-Free Gland Segmentation
IEEE Transactions on Medical Imaging Pub Date: 2024-10-08 DOI: 10.1109/TMI.2024.3476176
Qixiang Zhang;Yi Li;Cheng Xue;Haonan Wang;Xiaomeng Li
{"title":"GlandSAM: Injecting Morphology Knowledge Into Segment Anything Model for Label-Free Gland Segmentation","authors":"Qixiang Zhang;Yi Li;Cheng Xue;Haonan Wang;Xiaomeng Li","doi":"10.1109/TMI.2024.3476176","DOIUrl":"10.1109/TMI.2024.3476176","url":null,"abstract":"This paper presents a label-free gland segmentation, GlandSAM, which achieves comparable performance with supervised methods while no label is required during its training or inference phase. We observe that the Segment Anything model produces sub-optimal results on gland dataset: It either over-segments a gland into many fractions or under-segments the gland regions by confusing many of them with the background, due to the complex morphology of glands and lack of sufficient labels. To address this challenge, our GlandSAM innovatively injects two clues about gland morphology into SAM to guide the segmentation process: (1) Heterogeneity within glands and (2) Similarity with the background. Initially, we leverage the clues to decompose the intricate glands by selectively extracting a proposal for each gland sub-region of heterogeneous appearances. Then, we inject the morphology clues into SAM in a fine-tuning manner with a novel morphology-aware semantic grouping module that explicitly groups the high-level semantics of gland sub-regions. In this way, our GlandSAM could capture comprehensive knowledge about gland morphology, and produce well-delineated and complete segmentation results. Extensive experiments conducted on the GlaS dataset and the CRAG dataset reveal that GlandSAM outperforms state-of-the-art label-free methods by a significant margin. Notably, our GlandSAM even surpasses several fully-supervised methods that require pixel-wise labels for training, which highlights the remarkable performance and potential of GlandSAM in the realm of gland segmentation.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 2","pages":"1070-1082"},"PeriodicalIF":0.0,"publicationDate":"2024-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142385483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
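To make the two clues tangible, the toy sketch below greedily clusters sub-region proposal embeddings: proposals resembling a background embedding are dropped (the similarity-with-background clue), and the rest are merged when mutually similar, since one gland of heterogeneous appearance may yield several proposals (the heterogeneity clue). This is a loose schematic, not GlandSAM's semantic grouping module; all thresholds and inputs are hypothetical:

```python
import numpy as np

def group_proposals(embs, bg_emb, sim_merge=0.8, sim_bg=0.9):
    """Greedily group gland sub-region proposals by embedding similarity.

    Proposals resembling the background embedding are dropped; the rest
    join a group whenever they are similar to one of its members.
    """
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

    keep = [i for i, e in enumerate(embs) if cos(e, bg_emb) < sim_bg]
    groups = []
    for i in keep:
        for g in groups:
            if any(cos(embs[i], embs[j]) > sim_merge for j in g):
                g.append(i)
                break
        else:                            # no similar group found
            groups.append([i])
    return groups

rng = np.random.default_rng(1)
embs = rng.normal(size=(6, 32))
embs[1] = embs[0] + 0.05 * rng.normal(size=32)   # one gland, two proposals
print(group_proposals(list(embs), bg_emb=rng.normal(size=32)))
```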
Unleash the Power of State Space Model for Whole Slide Image With Local Aware Scanning and Importance Resampling
IEEE Transactions on Medical Imaging Pub Date: 2024-10-07 DOI: 10.1109/TMI.2024.3475587
Yanyan Huang;Weiqin Zhao;Yu Fu;Lingting Zhu;Lequan Yu
{"title":"Unleash the Power of State Space Model for Whole Slide Image With Local Aware Scanning and Importance Resampling","authors":"Yanyan Huang;Weiqin Zhao;Yu Fu;Lingting Zhu;Lequan Yu","doi":"10.1109/TMI.2024.3475587","DOIUrl":"10.1109/TMI.2024.3475587","url":null,"abstract":"Whole slide image (WSI) analysis is gaining prominence within the medical imaging field. However, previous methods often fall short of efficiently processing entire WSIs due to their gigapixel size. Inspired by recent developments in state space models, this paper introduces a new Pathology Mamba (PAM) for more accurate and robust WSI analysis. PAM includes three carefully designed components to tackle the challenges of enormous image size, the utilization of local and hierarchical information, and the mismatch between the feature distributions of training and testing during WSI analysis. Specifically, we design a Bi-directional Mamba Encoder to process the extensive patches present in WSIs effectively and efficiently, which can handle large-scale pathological images while achieving high performance and accuracy. To further harness the local information and inherent hierarchical structure of WSI, we introduce a novel Local-aware Scanning module, which employs a local-aware mechanism alongside hierarchical scanning to adeptly capture both the local information and the overarching structure within WSIs. Moreover, to alleviate the patch feature distribution misalignment between training and testing, we propose a Test-time Importance Resampling module to conduct testing patch resampling to ensure consistency of feature distribution between the training and testing phases, and thus enhance model prediction. Extensive evaluation on nine WSI datasets with cancer subtyping and survival prediction tasks demonstrates that PAM outperforms current state-of-the-art methods and also its enhanced capability in modeling discriminative areas within WSIs. The source code is available at <uri>https://github.com/HKU-MedAI/PAM</uri>.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 2","pages":"1032-1042"},"PeriodicalIF":0.0,"publicationDate":"2024-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142384456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
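Local-aware scanning can be pictured as choosing a 1D visiting order that keeps spatially neighboring patches contiguous in the token sequence a state space model consumes. The sketch below implements one simple window-wise order; it is an assumption-laden illustration, not PAM's released module (which also scans hierarchically):

```python
import numpy as np

def local_aware_order(grid_h: int, grid_w: int, win: int = 4) -> np.ndarray:
    """1D scan order over a grid_h x grid_w patch grid, visited window by
    window, so spatially neighboring patches stay contiguous in the
    token sequence."""
    order = []
    for wy in range(0, grid_h, win):
        for wx in range(0, grid_w, win):
            for y in range(wy, min(wy + win, grid_h)):
                for x in range(wx, min(wx + win, grid_w)):
                    order.append(y * grid_w + x)
    return np.array(order)

order = local_aware_order(8, 8, win=4)
print(order[:16])            # the first 4x4 window comes out contiguously
# tokens = tokens[:, order]  # reorder (B, N, C) patch tokens before the SSM
```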
Morphology-Based Non-Rigid Registration of Coronary Computed Tomography and Intravascular Images Through Virtual Catheter Path Optimization
IEEE Transactions on Medical Imaging Pub Date: 2024-10-07 DOI: 10.1109/TMI.2024.3474053
Karim Kadry;Max L. Olender;Andreas Schuh;Abhishek Karmakar;Kersten Petersen;Michiel Schaap;David Marlevi;Adam UpdePac;Takuya Mizukami;Charles Taylor;Elazer R. Edelman;Farhad R. Nezami
{"title":"Morphology-Based Non-Rigid Registration of Coronary Computed Tomography and Intravascular Images Through Virtual Catheter Path Optimization","authors":"Karim Kadry;Max L. Olender;Andreas Schuh;Abhishek Karmakar;Kersten Petersen;Michiel Schaap;David Marlevi;Adam UpdePac;Takuya Mizukami;Charles Taylor;Elazer R. Edelman;Farhad R. Nezami","doi":"10.1109/TMI.2024.3474053","DOIUrl":"10.1109/TMI.2024.3474053","url":null,"abstract":"Coronary computed tomography angiography (CCTA) provides 3D information on obstructive coronary artery disease, but cannot fully visualize high-resolution features within the vessel wall. Intravascular imaging, in contrast, can spatially resolve atherosclerotic in cross sectional slices, but is limited in capturing 3D relationships between each slice. Co-registering CCTA and intravascular images enables a variety of clinical research applications but is time consuming and user-dependent. This is due to intravascular images suffering from non-rigid distortions arising from irregularities in the imaging catheter path. To address these issues, we present a morphology-based framework for the rigid and non-rigid matching of intravascular images to CCTA images. To do this, we find the optimal virtual catheter path that samples the coronary artery in CCTA image space to recapitulate the coronary artery morphology observed in the intravascular image. We validate our framework on a multi-center cohort of 40 patients using bifurcation landmarks as ground truth for longitudinal and rotational registration. Our registration approach significantly outperforms other approaches for bifurcation alignment. By providing a differentiable framework for multi-modal vascular co-registration, our framework reduces the manual effort required to conduct large-scale multi-modal clinical studies and enables the development of machine learning-based co-registration approaches.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 2","pages":"880-890"},"PeriodicalIF":0.0,"publicationDate":"2024-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142384455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
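Because the framework is differentiable, the virtual-catheter-path idea lends itself to gradient-based optimization: per-frame longitudinal offsets along the CCTA centerline are adjusted so that sampled lumen morphology matches the intravascular pullback, with a smoothness penalty standing in for catheter-path regularity. The following is a fully synthetic sketch of such a loop, not the paper's framework; the area profile and pullback values are fabricated for illustration:

```python
import torch

# Synthetic stand-ins: a smooth lumen-area profile replaces differentiable
# sampling of the CCTA segmentation, and the "pullback" is that profile
# shifted by an unknown 0.3 mm per-frame distortion.
def ccta_area(s):
    return 6.0 + 2.0 * torch.sin(s) - 1.5 * torch.exp(-(s - 3.0) ** 2)

base = torch.linspace(0.5, 5.5, 40)                  # nominal frame positions
ivus_area = ccta_area(base + 0.3)                    # observed pullback areas

offsets = torch.zeros(40, requires_grad=True)        # non-rigid per-frame shifts
opt = torch.optim.Adam([offsets], lr=0.05)

for _ in range(300):
    opt.zero_grad()
    pred = ccta_area(base + offsets)
    loss = ((pred - ivus_area) ** 2).mean() \
         + 1e-2 * ((offsets[1:] - offsets[:-1]) ** 2).mean()  # smooth path
    loss.backward()
    opt.step()

print(f"mean recovered offset: {offsets.mean().item():.3f} (true 0.3)")
```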
GobletNet: Wavelet-Based High-Frequency Fusion Network for Semantic Segmentation of Electron Microscopy Images
IEEE Transactions on Medical Imaging Pub Date: 2024-10-04 DOI: 10.1109/TMI.2024.3474028
Yanfeng Zhou;Lingrui Li;Chenlong Wang;Le Song;Ge Yang
{"title":"GobletNet: Wavelet-Based High-Frequency Fusion Network for Semantic Segmentation of Electron Microscopy Images","authors":"Yanfeng Zhou;Lingrui Li;Chenlong Wang;Le Song;Ge Yang","doi":"10.1109/TMI.2024.3474028","DOIUrl":"10.1109/TMI.2024.3474028","url":null,"abstract":"Semantic segmentation of electron microscopy (EM) images is crucial for nanoscale analysis. With the development of deep neural networks (DNNs), semantic segmentation of EM images has achieved remarkable success. However, current EM image segmentation models are usually extensions or adaptations of natural or biomedical models. They lack the full exploration and utilization of the intrinsic characteristics of EM images. Furthermore, they are often designed only for several specific segmentation objects and lack versatility. In this study, we quantitatively analyze the characteristics of EM images compared with those of natural and other biomedical images via the wavelet transform. To better utilize these characteristics, we design a high-frequency (HF) fusion network, GobletNet, which outperforms state-of-the-art models by a large margin in the semantic segmentation of EM images. We use the wavelet transform to generate HF images as extra inputs and use an extra encoding branch to extract HF information. Furthermore, we introduce a fusion-attention module (FAM) into GobletNet to facilitate better absorption and fusion of information from raw images and HF images. Extensive benchmarking on seven public EM datasets (EPFL, CREMI, SNEMI3D, UroCell, MitoEM, Nanowire and BetaSeg) demonstrates the effectiveness of our model. The code is available at <uri>https://github.com/Yanfeng-Zhou/GobletNet</uri>.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 2","pages":"1058-1069"},"PeriodicalIF":0.0,"publicationDate":"2024-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142376487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
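An HF extra input of this kind can be produced with a single-level 2D discrete wavelet transform, keeping only the detail sub-bands. A minimal sketch using PyWavelets follows; the wavelet choice and the random stand-in image are assumptions, not the paper's exact preprocessing:

```python
import numpy as np
import pywt

# Build an HF companion image with a single-level 2D DWT: keep the detail
# sub-bands (LH, HL, HH), zero out the approximation (LL).
em = np.random.rand(256, 256).astype(np.float32)     # stand-in EM image

LL, (LH, HL, HH) = pywt.dwt2(em, wavelet="haar")
hf = pywt.idwt2((np.zeros_like(LL), (LH, HL, HH)), wavelet="haar")

print(hf.shape)   # (256, 256): same size as the input, edges/texture only
```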