Inpainting Cropped Diffusion MRI using Deep Generative Models.

Rafi Ayub, Qingyu Zhao, M J Meloy, Edith V Sullivan, Adolf Pfefferbaum, Ehsan Adeli, Kilian M Pohl
DOI: 10.1007/978-3-030-59354-4_9
Journal: PRedictive Intelligence in MEdicine. PRIME (Workshop)
Volume: 12329, Pages: 91-100
Publication date: 2020-10-01
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8123091/pdf/nihms-1698575.pdf
Citations: 3

Abstract

Minor artifacts introduced during image acquisition are often negligible to the human eye, such as a confined field of view resulting in MRI missing the top of the head. This cropping artifact, however, can cause suboptimal processing of the MRI, resulting in data omission or decreased power of subsequent analyses. We propose to avoid data or quality loss by restoring these missing regions of the head via variational autoencoders (VAE), a deep generative model that has previously been applied to high-resolution image reconstruction. Based on diffusion weighted images (DWI) acquired by the National Consortium on Alcohol and Neurodevelopment in Adolescence (NCANDA), we evaluate the accuracy of inpainting the top of the head by common autoencoder models (U-Net, VQVAE, and VAE-GAN) and a custom model proposed herein called U-VQVAE. Our results show that U-VQVAE not only achieved the highest accuracy, but also resulted in MRI processing producing lower fractional anisotropy (FA) in the supplementary motor area than FA derived from the original MRIs. Lower FA implies that inpainting reduces noise in processing DWI and thus increases the quality of the generated results. The code is available at https://github.com/RdoubleA/DWIinpainting.
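The abstract compares several autoencoder variants, two of which (VQVAE and the proposed U-VQVAE) rest on vector quantization: the encoder's continuous latent vectors are snapped to the nearest entries of a learned codebook before decoding. The following is a minimal, hypothetical sketch of that quantization step only (not the authors' released implementation, which is at the repository above); the array shapes and the toy codebook are illustrative assumptions.

```python
import numpy as np

def vector_quantize(z, codebook):
    """Snap each latent vector in z (shape N x D) to its nearest
    codebook entry (shape K x D), as in a VQ-VAE bottleneck.

    Returns the quantized latents and the chosen codebook indices.
    """
    # Squared Euclidean distance between every latent and every code entry,
    # computed via broadcasting: result has shape (N, K).
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d.argmin(axis=1)      # index of the nearest code per latent
    return codebook[idx], idx   # quantized latents, code indices

# Toy example: 2-D latents against a 3-entry codebook (illustrative values).
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 2.0]])
z = np.array([[0.1, -0.2], [0.9, 1.2]])
zq, idx = vector_quantize(z, codebook)
```

In a full VQ-VAE the codebook is learned jointly with the encoder and decoder, and a straight-through estimator passes gradients across the non-differentiable argmin; the sketch above shows only the forward discretization.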
