Simulation and synthesis in medical imaging : ... International Workshop, SASHIMI ..., held in conjunction with MICCAI ..., proceedings. SASHIMI (Workshop): Latest Publications

Super-resolution segmentation network for inner-ear tissue segmentation.
Ziteng Liu, Yubo Fan, Ange Lou, Jack H Noble
Abstract: Cochlear implants (CIs) are considered the standard-of-care treatment for profound sensory-based hearing loss. Several groups have proposed computational models of the cochlea in order to study neural activation patterns in response to CI stimulation. However, most current implementations rely either on high-resolution histological images that cannot be customized for CI users or on CT images that lack the spatial resolution to show cochlear structures. In this work, we propose a deep learning-based method to obtain μCT-level tissue labels from patient CT images. Experiments showed that the proposed super-resolution segmentation architecture performed well on inner-ear tissue segmentation: our best-performing model (mean Dice score 0.871) outperformed UNet (0.746), VNet (0.853), nnUNet (0.861), TransUNet (0.848), and SRGAN (0.780).

DOI: 10.1007/978-3-031-44689-4_2
Published: 2023-10-01 (open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10979466/pdf/)
Citations: 0
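The mean Dice scores quoted above measure volumetric overlap between a predicted label map and a reference label map. As a rough illustration of the metric itself (not the paper's evaluation code), Dice for a single binary label can be computed as:

```python
import numpy as np

def dice_score(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom

# Toy example: two 4x4 masks of 4 voxels each, overlapping in 2 voxels.
a = np.zeros((4, 4)); a[1:3, 1:3] = 1
b = np.zeros((4, 4)); b[1:3, 2:4] = 1
# dice = 2*2 / (4 + 4) = 0.5
```

A mean Dice of 0.871, as reported for the best model, averages this per-label overlap across the segmented tissue classes.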
TAI-GAN: Temporally and Anatomically Informed GAN for Early-to-Late Frame Conversion in Dynamic Cardiac PET Motion Correction.
Xueqi Guo, Luyao Shi, Xiongchao Chen, Bo Zhou, Qiong Liu, Huidong Xie, Yi-Hwa Liu, Richard Palyo, Edward J Miller, Albert J Sinusas, Bruce Spottiswoode, Chi Liu, Nicha C Dvornek
Abstract: The rapid tracer kinetics of rubidium-82 (82Rb) and the high variation of cross-frame distribution in dynamic cardiac positron emission tomography (PET) raise significant challenges for inter-frame motion correction, particularly for the early frames, where conventional intensity-based image registration techniques are not applicable. Alternatively, a promising approach utilizes generative methods to handle the tracer distribution changes and assist existing registration methods. To improve frame-wise registration and parametric quantification, we propose a Temporally and Anatomically Informed Generative Adversarial Network (TAI-GAN) to transform the early frames into the late reference frame using an all-to-one mapping. Specifically, a feature-wise linear modulation layer encodes channel-wise parameters generated from temporal tracer kinetics information, and rough cardiac segmentations with local shifts serve as the anatomical information. We validated our proposed method on a clinical 82Rb PET dataset and found that TAI-GAN can produce converted early frames with high image quality, comparable to the real reference frames. After TAI-GAN conversion, motion estimation accuracy and clinical myocardial blood flow (MBF) quantification were improved compared to using the original frames. Our code is published at https://github.com/gxq1998/TAI-GAN.

DOI: 10.1007/978-3-031-44689-4_7
Published: 2023-10-01 (open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10923183/pdf/)
Citations: 0
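The feature-wise linear modulation (FiLM) layer mentioned in the abstract conditions convolutional features on side information by applying a learned per-channel scale and shift. A minimal numpy sketch of the operation itself (not the authors' implementation; in TAI-GAN the gamma/beta parameters would come from a network encoding temporal tracer-kinetics information):

```python
import numpy as np

def film(features: np.ndarray, gamma: np.ndarray, beta: np.ndarray) -> np.ndarray:
    """Feature-wise linear modulation: scale and shift each channel.

    features: (C, H, W) feature maps; gamma, beta: (C,) channel-wise
    parameters predicted from the conditioning signal.
    """
    return gamma[:, None, None] * features + beta[:, None, None]

# Identity modulation (gamma = 1, beta = 0) leaves the features unchanged.
x = np.random.default_rng(0).normal(size=(3, 4, 4))
assert np.allclose(film(x, np.ones(3), np.zeros(3)), x)
```

Because the modulation is per-channel, a single small conditioning network can steer an entire convolutional generator without changing its spatial structure.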
Brain Lesion Synthesis via Progressive Adversarial Variational Auto-Encoder.
Jiayu Huo, Vejay Vakharia, Chengyuan Wu, Ashwini Sharan, Andrew Ko, Sébastien Ourselin, Rachel Sparks
Abstract: Laser interstitial thermal therapy (LITT) is a novel minimally invasive treatment used to ablate intracranial structures to treat mesial temporal lobe epilepsy (MTLE). Region of interest (ROI) segmentation before and after LITT would enable automated lesion quantification to objectively assess treatment efficacy. Deep learning techniques such as convolutional neural networks (CNNs) are state-of-the-art solutions for ROI segmentation but require large amounts of annotated data during training. However, collecting large datasets from emerging treatments such as LITT is impractical. In this paper, we propose a progressive brain lesion synthesis framework (PAVAE) to expand both the quantity and the diversity of the training dataset. Concretely, our framework consists of two sequential networks: a mask synthesis network and a mask-guided lesion synthesis network. To better employ extrinsic information and provide additional supervision during network training, we design a condition embedding block (CEB) and a mask embedding block (MEB) to encode the inherent conditions of masks in the feature space. Finally, a segmentation network is trained using raw and synthetic lesion images to evaluate the effectiveness of the proposed framework. Experimental results show that our method achieves realistic synthetic results and boosts the performance of downstream segmentation tasks beyond traditional data augmentation techniques.

DOI: 10.1007/978-3-031-16980-9_10
Published: 2022-09-21 (open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7616255/pdf/)
Citations: 0
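The two-stage design above (synthesize a mask, then synthesize a lesion guided by that mask) ultimately produces image/mask pairs for augmenting a segmentation training set. A deliberately crude stand-in for the second stage, only to illustrate how a synthetic mask can drive lesion placement (this is not PAVAE's network, just a hypothetical paste-style sketch):

```python
import numpy as np

def paste_lesion(image: np.ndarray, mask: np.ndarray,
                 intensity: float = 0.8, seed: int = 0) -> np.ndarray:
    """Fill the mask region with noisy 'lesion' intensity.

    image, mask: (H, W) arrays; mask is binary. Returns an augmented image
    whose lesion location is known exactly, so the pair (image, mask) can
    be added to a segmentation training set.
    """
    rng = np.random.default_rng(seed)
    out = image.copy()
    region = mask.astype(bool)
    out[region] = intensity + 0.05 * rng.normal(size=int(region.sum()))
    return out

img = np.zeros((8, 8))
msk = np.zeros((8, 8)); msk[2:4, 2:4] = 1
aug = paste_lesion(img, msk)
```

A learned generator replaces this paste step with realistic lesion texture, which is precisely what lets the synthetic pairs outperform traditional augmentation.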
Bi-directional Synthesis of Pre- and Post-contrast MRI via Guided Feature Disentanglement.
Yuan Xue, Blake E Dewey, Lianrui Zuo, Shuo Han, Aaron Carass, Peiyu Duan, Samuel W Remedios, Dzung L Pham, Shiv Saidha, Peter A Calabresi, Jerry L Prince
Abstract: Magnetic resonance imaging (MRI) with gadolinium contrast is widely used for tissue enhancement and better identification of active lesions and tumors. Recent studies have shown that gadolinium deposits can accumulate in tissues including the brain, which raises safety concerns. Prior works have tried to synthesize post-contrast T1-weighted MRIs from pre-contrast MRIs to avoid the use of gadolinium. However, contrast and image representations are often entangled during the synthesis process, resulting in synthetic post-contrast MRIs with undesirable contrast enhancements. Moreover, the synthesis of pre-contrast MRIs from post-contrast MRIs, which can be useful for volumetric analysis, is rarely investigated in the literature. To tackle pre- and post-contrast MRI synthesis, we propose a BI-directional Contrast Enhancement Prediction and Synthesis (BICEPS) network that enables disentanglement of contrast and image representations via a bi-directional image-to-image translation (I2I) model. Our proposed model can perform both pre-to-post and post-to-pre contrast synthesis, and provides an interpretable synthesis process by predicting contrast enhancement maps from the learned contrast embedding. Extensive experiments on a multiple sclerosis dataset demonstrate the feasibility of applying our bidirectional synthesis and show that BICEPS outperforms current methods.

DOI: 10.1007/978-3-031-16980-9_6
Published: 2022-09-01 (open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9623769/pdf/nihms-1845155.pdf)
Citations: 0
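A "contrast enhancement map" is, at its simplest, the signed voxel-wise difference between a post-contrast and a pre-contrast image. The sketch below shows only that simplification; BICEPS predicts such a map from a learned contrast embedding rather than computing it from an available image pair:

```python
import numpy as np

def enhancement_map(pre: np.ndarray, post: np.ndarray) -> np.ndarray:
    """Voxel-wise enhancement: positive where contrast brightens tissue."""
    return post.astype(float) - pre.astype(float)

pre = np.array([[1.0, 2.0],
                [3.0, 4.0]])
post = np.array([[1.5, 2.0],
                 [3.0, 6.0]])
m = enhancement_map(pre, post)
# m == [[0.5, 0.0], [0.0, 2.0]]: only two voxels enhance
```

Predicting this map explicitly is what makes the synthesis interpretable: the model exposes where it believes gadolinium would (or would not) enhance tissue.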
Simulation and Synthesis in Medical Imaging: 7th International Workshop, SASHIMI 2022, Held in Conjunction with MICCAI 2022, Singapore, September 18, 2022, Proceedings
DOI: 10.1007/978-3-031-16980-9
Published: 2022-01-01
Citations: 4
Joint Image and Label Self-super-Resolution
Samuel W. Remedios, Shuo Han, B. Dewey, D. Pham, Jerry L Prince, A. Carass
DOI: 10.1007/978-3-030-87592-3_2
Published: 2021-09-01
Citations: 2
Synth-by-Reg (SbR): Contrastive learning for synthesis-based registration of paired images.
Adrià Casamitjana, Matteo Mancini, Juan Eugenio Iglesias
Abstract: Nonlinear inter-modality registration is often challenging due to the lack of objective functions that are good proxies for alignment. Here we propose a synthesis-by-registration method to convert this problem into an easier intra-modality task. We introduce a registration loss for weakly supervised image translation between domains that does not require perfectly aligned training data. This loss capitalises on a registration U-Net with frozen weights to drive a synthesis CNN towards the desired translation. We complement this loss with a structure-preserving constraint based on contrastive learning, which prevents blurring and content shifts due to overfitting. We apply this method to the registration of histological sections to MRI slices, a key step in 3D histology reconstruction. Results on two public datasets show improvements over registration based on mutual information (13% reduction in landmark error) and synthesis-based algorithms such as CycleGAN (11% reduction), and are comparable to registration with label supervision. Code and data are publicly available at https://github.com/acasamitjana/SynthByReg.

DOI: 10.1007/978-3-030-87592-3_5
Published: 2021-09-01 (open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8582976/pdf/nihms-1753298.pdf)
Citations: 9
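The structure-preserving constraint rests on a contrastive objective: features of corresponding patches in the input and the synthesized image should agree, while features of non-corresponding patches should not. A generic InfoNCE-style sketch of that idea for a single query patch (a simplified illustration, not the paper's loss):

```python
import numpy as np

def info_nce(query: np.ndarray, positive: np.ndarray,
             negatives: np.ndarray, tau: float = 0.07) -> float:
    """InfoNCE-style contrastive loss for one query patch feature.

    query, positive: (D,) feature vectors from corresponding patches of
    the input and synthesized images; negatives: (N, D) features from
    other, non-corresponding patches. The loss is small when the positive
    is far more similar to the query than any negative.
    """
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    logits = np.array([cos(query, positive)] +
                      [cos(query, n) for n in negatives]) / tau
    logits -= logits.max()                    # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()
    return float(-np.log(p[0]))               # positive must win

q = np.array([1.0, 0.0])
negs = np.array([[0.0, 1.0], [-1.0, 0.2]])
# Matching patch features give a near-zero loss; mismatched ones do not.
```

Penalizing content shifts at the patch level is what keeps the frozen-registration-network loss from letting the synthesis CNN "cheat" by moving or blurring structures.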
Simulation and Synthesis in Medical Imaging: 6th International Workshop, SASHIMI 2021, Held in Conjunction with MICCAI 2021, Strasbourg, France, September 27, 2021, Proceedings
DOI: 10.1007/978-3-030-87592-3
Published: 2021-01-01
Citations: 0
Auditory Nerve Fiber Health Estimation Using Patient Specific Cochlear Implant Stimulation Models.
Ziteng Liu, Ahmet Cakir, Jack H Noble
Abstract: Cochlear implants (CIs) restore hearing using an array of electrodes implanted in the cochlea to directly stimulate auditory nerve fibers (ANFs). Hearing outcomes with CIs depend on the health of the ANFs. In this research, we developed an approach to estimate the health of ANFs using patient-customized, image-based computational models of CI stimulation. Our stimulation models build on a previous model-based solution that estimates the intra-cochlear electric field (EF) created by the CI. Herein, we propose to use the estimated EF to drive ANF models representing 75 nerve bundles along the length of the cochlea. We estimate the neural health of the ANF models by optimizing neural health parameters to minimize the sum of squared differences between simulated responses and the physiological measurements available via patients' CIs. The resulting parameters provide an estimate of the health of ANF bundles. Experiments with 8 subjects show promising model prediction accuracy, with excellent agreement between neural stimulation responses that are clinically measured and those predicted by our parameter-optimized models. These results suggest our modeling approach may provide an accurate estimation of ANF health for CI users.

DOI: 10.1007/978-3-030-59520-3_19
Published: 2020-10-01 (open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8054972/pdf/nihms-1683800.pdf)
Citations: 5
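The fitting step minimizes the sum of squared differences between simulated and measured responses. As a generic illustration of that least-squares idea, assuming for simplicity that the simulated response is linear in the health parameters (the actual ANF models are not linear, so this is only a sketch):

```python
import numpy as np

def fit_health_params(sim_basis: np.ndarray, measured: np.ndarray) -> np.ndarray:
    """Least-squares estimate of per-bundle health weights.

    sim_basis: (M, B) simulated contribution of each of B nerve bundles
    at M measurement conditions; measured: (M,) physiological measurements
    from the patient's CI. Returns the (B,) weight vector minimizing
    ||sim_basis @ w - measured||^2.
    """
    w, *_ = np.linalg.lstsq(sim_basis, measured, rcond=None)
    return w

# Recover known weights from noise-free synthetic data.
rng = np.random.default_rng(1)
A = rng.normal(size=(12, 3))          # 12 measurements, 3 bundles
true_w = np.array([0.2, 0.9, 0.5])
w = fit_health_params(A, A @ true_w)
assert np.allclose(w, true_w)
```

In the paper's setting the per-bundle parameters play the role of these weights, and the optimizer searches them so the patient-specific model reproduces the clinically measured stimulation responses.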
Simulation and Synthesis in Medical Imaging: 5th International Workshop, SASHIMI 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, October 4, 2020, Proceedings
Ninon Burgos, D. Svoboda, J. Wolterink, Can Zhao
DOI: 10.1007/978-3-030-59520-3
Published: 2020-01-01
Citations: 4