CCX-rayNet: A Class Conditioned Convolutional Neural Network For Biplanar X-Rays to CT Volume

Md. Aminur Rab Ratul, Kun Yuan, Won-Sook Lee
{"title":"CCX-rayNet:一类双平面x射线到CT体积的条件卷积神经网络","authors":"Md. Aminur Rab Ratul, Kun Yuan, Won-Sook Lee","doi":"10.1109/ISBI48211.2021.9433870","DOIUrl":null,"url":null,"abstract":"Despite the advancement of the deep neural network, the 3D CT reconstruction from its correspondence 2D X-ray is still a challenging task in computer vision. To tackle this issue here, we proposed a new class-conditioned network, namely CCX-rayNet, which is proficient in recapturing the shapes and textures with prior semantic information in the resulting CT volume. Firstly, we propose a Deep Feature Transform (DFT) module to modulate the 2D feature maps of semantic segmentation spatially by generating the affine transformation parameters. Secondly, by bridging 2D and 3D features (Depth-Aware Connection), we heighten the feature representation of the X-ray image. Particularly, we approximate a 3D attention mask to be employed on the enlarged 3D feature map, where the contextual association is emphasized. Furthermore, in the biplanar view model, we incorporate the Adaptive Feature Fusion (AFF) module to relieve the registration problem that occurs with unrestrained input data by using the similarity matrix. As far as we are aware, this is the first study to utilize prior semantic knowledge in the 3D CT reconstruction. Both qualitative and quantitative analyses manifest that our proposed CCX-rayNet outperforms the baseline method.","PeriodicalId":372939,"journal":{"name":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","volume":"284 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":"{\"title\":\"CCX-rayNet: A Class Conditioned Convolutional Neural Network For Biplanar X-Rays to CT Volume\",\"authors\":\"Md. Aminur Rab Ratul, Kun Yuan, Won-Sook Lee\",\"doi\":\"10.1109/ISBI48211.2021.9433870\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Despite the advancement of the deep neural network, the 3D CT reconstruction from its correspondence 2D X-ray is still a challenging task in computer vision. To tackle this issue here, we proposed a new class-conditioned network, namely CCX-rayNet, which is proficient in recapturing the shapes and textures with prior semantic information in the resulting CT volume. Firstly, we propose a Deep Feature Transform (DFT) module to modulate the 2D feature maps of semantic segmentation spatially by generating the affine transformation parameters. Secondly, by bridging 2D and 3D features (Depth-Aware Connection), we heighten the feature representation of the X-ray image. Particularly, we approximate a 3D attention mask to be employed on the enlarged 3D feature map, where the contextual association is emphasized. Furthermore, in the biplanar view model, we incorporate the Adaptive Feature Fusion (AFF) module to relieve the registration problem that occurs with unrestrained input data by using the similarity matrix. As far as we are aware, this is the first study to utilize prior semantic knowledge in the 3D CT reconstruction. 
Both qualitative and quantitative analyses manifest that our proposed CCX-rayNet outperforms the baseline method.\",\"PeriodicalId\":372939,\"journal\":{\"name\":\"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)\",\"volume\":\"284 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-04-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"7\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISBI48211.2021.9433870\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISBI48211.2021.9433870","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 7

Abstract

Despite advances in deep neural networks, reconstructing a 3D CT volume from its corresponding 2D X-rays remains a challenging task in computer vision. To tackle this issue, we propose a new class-conditioned network, CCX-rayNet, which recaptures shapes and textures in the resulting CT volume using prior semantic information. First, we propose a Deep Feature Transform (DFT) module that spatially modulates the 2D feature maps of semantic segmentation by generating affine transformation parameters. Second, by bridging 2D and 3D features (Depth-Aware Connection), we strengthen the feature representation of the X-ray image; in particular, we approximate a 3D attention mask that is applied to the enlarged 3D feature map, emphasizing contextual associations. Furthermore, in the biplanar-view model, we incorporate an Adaptive Feature Fusion (AFF) module that uses a similarity matrix to relieve the registration problem arising from unconstrained input data. To the best of our knowledge, this is the first study to utilize prior semantic knowledge in 3D CT reconstruction. Both qualitative and quantitative analyses show that the proposed CCX-rayNet outperforms the baseline method.
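The abstract gives no implementation details, but the DFT module as described (generating affine parameters to spatially modulate 2D feature maps from a semantic-segmentation prior) reads like a FiLM/spatial-feature-transform style conditioning layer. Below is a minimal PyTorch sketch under that assumption; the class name, layer sizes, and input conventions (`seg_channels`, `feat_channels`) are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn

class DeepFeatureTransform(nn.Module):
    """Hypothetical sketch of a DFT-style module: predict per-pixel affine
    parameters (gamma, beta) from a semantic-segmentation prior and use them
    to spatially modulate a 2D feature map (FiLM/SFT-style conditioning)."""

    def __init__(self, seg_channels: int, feat_channels: int, hidden: int = 64):
        super().__init__()
        # Shared trunk over the segmentation prior (assumed input).
        self.trunk = nn.Sequential(
            nn.Conv2d(seg_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Separate heads produce the scale (gamma) and shift (beta) maps.
        self.to_gamma = nn.Conv2d(hidden, feat_channels, kernel_size=3, padding=1)
        self.to_beta = nn.Conv2d(hidden, feat_channels, kernel_size=3, padding=1)

    def forward(self, feat: torch.Tensor, seg_prior: torch.Tensor) -> torch.Tensor:
        # feat:      (B, feat_channels, H, W) X-ray feature map
        # seg_prior: (B, seg_channels, H, W) class-conditioned semantic map
        h = self.trunk(seg_prior)
        gamma = self.to_gamma(h)
        beta = self.to_beta(h)
        # Spatially varying affine modulation of the feature map.
        return gamma * feat + beta


if __name__ == "__main__":
    dft = DeepFeatureTransform(seg_channels=5, feat_channels=32)
    feat = torch.randn(2, 32, 64, 64)
    seg = torch.randn(2, 5, 64, 64)
    print(dft(feat, seg).shape)  # torch.Size([2, 32, 64, 64])
```

The key property this sketch captures is that the affine parameters vary per spatial location, so the semantic prior can reshape different anatomical regions of the feature map differently.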
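Similarly, the Adaptive Feature Fusion module is described only as using a similarity matrix to relieve misregistration between the two views. One plausible reading is a cross-view attention block in which a similarity matrix between the views' spatial positions re-aligns one view's features to the other before fusion; the sketch below illustrates that reading and should not be taken as the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveFeatureFusion(nn.Module):
    """Hypothetical sketch of an AFF-style block: a similarity (affinity)
    matrix between the two views' features re-aligns the lateral-view
    features to the frontal view before fusing them (cross-view attention)."""

    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels, kernel_size=1)
        self.key = nn.Conv2d(channels, channels, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, frontal: torch.Tensor, lateral: torch.Tensor) -> torch.Tensor:
        # frontal, lateral: (B, C, H, W) features from the two X-ray views.
        b, c, h, w = frontal.shape
        q = self.query(frontal).flatten(2)   # (B, C, HW)
        k = self.key(lateral).flatten(2)     # (B, C, HW)
        v = self.value(lateral).flatten(2)   # (B, C, HW)
        # Similarity matrix between every frontal and lateral position.
        sim = torch.bmm(q.transpose(1, 2), k) / (c ** 0.5)  # (B, HW, HW)
        attn = F.softmax(sim, dim=-1)
        # Lateral features re-sampled into the frontal view's layout.
        aligned = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        # Fuse frontal features with the re-aligned lateral features.
        return self.fuse(torch.cat([frontal, aligned], dim=1))


if __name__ == "__main__":
    aff = AdaptiveFeatureFusion(channels=16)
    f = torch.randn(1, 16, 32, 32)
    l = torch.randn(1, 16, 32, 32)
    print(aff(f, l).shape)  # torch.Size([1, 16, 32, 32])
```

Because the alignment is learned from feature similarity rather than assumed from geometry, a block of this kind can tolerate imperfectly registered biplanar inputs, which matches the motivation stated in the abstract.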