Imagenation - A DCGAN based method for Image Reconstruction from fMRI

K. Bhargav, S. Ambika, S. Deepak, S. Sudha
DOI: 10.1109/ICRCICN50933.2020.9296192
Published in: 2020 Fifth International Conference on Research in Computational Intelligence and Communication Networks (ICRCICN)
Publication date: 2020-11-26

Abstract

We propose a method to reconstruct natural grayscale images and handwritten characters from functional Magnetic Resonance Imaging (fMRI) data, achieving a high degree of similarity to the original stimulus images. The approach uses a pre-trained Deep Convolutional Generative Adversarial Network (DCGAN) to reconstruct images and provide visual confirmation of the resemblance between the reconstructed and original images. A linear regressor extracts information from the fMRI data and estimates a latent-space representation for the previously trained generative model. The regressor is trained with a composite loss function combining the Perceptual and Multi-Scale Structural Similarity Index (MS-SSIM) losses. The two losses are complementary: the Perceptual loss captures semantic information, while the MS-SSIM loss carries structural information about objects in a scene. With this loss function, we were able to reconstruct human figures in the stimuli with reasonable accuracy. The reconstructions were then validated using the Scale Invariant Feature Transform (SIFT) to count the features matched between the original and recreated images. The SSIM scores of the reconstructed images are higher than those of state-of-the-art methods. Noting that the distortions in the reconstructed images resemble those in photographs taken underwater, we apply Contrast Limited Adaptive Histogram Equalization (CLAHE), an image enhancement technique, to the reconstructed images and observe a sharp increase in the number of SIFT features matched.
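The composite training objective described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: a single-scale global SSIM term (computed from the standard SSIM definition) stands in for the multi-scale MS-SSIM loss, a plain feature-space L2 distance stands in for the VGG-style Perceptual loss, and the mixing weight `alpha` is an assumed hyperparameter.

```python
import numpy as np

def ssim(x, y, c1=0.01**2, c2=0.03**2):
    """Global SSIM between two images with pixel values in [0, 1].

    Uses the standard SSIM definition with the usual stabilizing
    constants C1 = (0.01*L)^2 and C2 = (0.03*L)^2 for dynamic range L = 1.
    """
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx**2 + my**2 + c1) * (vx + vy + c2)
    )

def composite_loss(reconstructed, target, features, alpha=0.5):
    """Weighted sum of a feature-space ('perceptual') distance and a
    structural (1 - SSIM) term, mirroring the Perceptual + MS-SSIM mix.

    `features` maps an image to a feature array (e.g. activations of a
    pre-trained network); here any callable with that shape will do.
    """
    perceptual = np.mean((features(reconstructed) - features(target)) ** 2)
    structural = 1.0 - ssim(reconstructed, target)
    return alpha * perceptual + (1 - alpha) * structural
```

Identical images give a loss of zero (SSIM of an image with itself is 1), and the loss grows as the reconstruction drifts from the target either semantically (feature distance) or structurally (SSIM).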