Synthesizing 3D Multi-Contrast Brain Tumor MRIs Using Tumor Mask Conditioning.

Nghi C D Truong, Chandan Ganesh Bangalore Yogananda, Benjamin C Wagner, James M Holcomb, Divya Reddy, Niloufar Saadat, Kimmo J Hatanpaa, Toral R Patel, Baowei Fei, Matthew D Lee, Rajan Jain, Richard J Bruce, Marco C Pinho, Ananth J Madhuranthakam, Joseph A Maldjian
Proceedings of SPIE--the International Society for Optical Engineering, vol. 12931, February 2024 (Epub April 2, 2024). DOI: 10.1117/12.3009331. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11075745/pdf/

Abstract


Data scarcity and data imbalance are two major challenges in training deep learning models on medical images, such as brain tumor MRI data. Recent advances in generative artificial intelligence have opened new possibilities for synthetically generating MRI data, including brain tumor MRI scans, offering a potential way to mitigate data scarcity and enhance training data availability. This work focused on adapting 2D latent diffusion models to generate 3D multi-contrast brain tumor MRI data with a tumor mask as the condition. The framework comprises two components: a 3D autoencoder for perceptual compression and a conditional 3D Diffusion Probabilistic Model (DPM) that generates high-quality, diverse multi-contrast brain tumor MRI samples guided by a tumor mask. Unlike existing works that generate either 2D multi-contrast or 3D single-contrast MRI samples, our models generate 3D multi-contrast samples. We also integrated a conditioning module within the UNet backbone of the DPM to capture the semantic, class-dependent data distribution driven by the provided tumor mask, so that samples are generated for a specific brain tumor mask. We trained our models on two brain tumor datasets: The Cancer Genome Atlas (TCGA) public dataset and an internal dataset from the University of Texas Southwestern Medical Center (UTSW). The models generated high-quality 3D multi-contrast brain tumor MRI samples with the tumor location aligned to the input condition mask. The quality of the generated images was evaluated using the Fréchet Inception Distance (FID) score. This work has the potential to mitigate the scarcity of brain tumor data and improve the performance of deep learning models that use brain tumor MRI data.
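The FID score mentioned above compares Gaussians fitted to feature embeddings (classically from Inception-v3) of real and generated images: FID = ||μ_r − μ_f||² + Tr(Σ_r + Σ_f − 2(Σ_r Σ_f)^{1/2}). As an illustrative sketch only (not the authors' evaluation pipeline), the one-dimensional special case of this Fréchet distance has a simple closed form:

```python
import math
from statistics import mean, pvariance

def frechet_distance_1d(real, fake):
    """Fréchet distance between two univariate Gaussians fitted to samples.

    The full FID compares multivariate Gaussians fitted to deep feature
    embeddings: ||mu_r - mu_f||^2 + Tr(C_r + C_f - 2 (C_r C_f)^{1/2}).
    In one dimension the covariance term reduces to the expression below.
    """
    mu_r, var_r = mean(real), pvariance(real)
    mu_f, var_f = mean(fake), pvariance(fake)
    return (mu_r - mu_f) ** 2 + var_r + var_f - 2.0 * math.sqrt(var_r * var_f)

# Identical sample sets give a distance of zero.
assert frechet_distance_1d([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]) == 0.0
```

In practice the multivariate form requires a matrix square root of the covariance product (e.g. `scipy.linalg.sqrtm`), and for volumetric MRI the feature extractor is typically adapted to 3D inputs; the scalar case above is only meant to show what the metric measures.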
