SRT: Shape Reconstruction Transformer for 3D Reconstruction of Point Cloud from 2D MRI

Bowen Hu, Yanyan Shen, Guocheng Wu, Shuqiang Wang
DOI: 10.1145/3529836.3529902
Published in: 2022 14th International Conference on Machine Learning and Computing (ICMLC)
Publication date: 2022-02-18
Citations: 3

Abstract

There has been some work on 3D reconstruction of organ shapes from medical images in minimally invasive surgery, aiming to overcome the visualization limitations of procedures with poor visual environments. However, existing models are often based on deep convolutional neural networks and on complex, hard-to-train generative adversarial networks; their problems with stability and real-time performance hinder further development of the technique. In this paper, we propose the Shape Reconstruction Transformer (SRT), which combines the self-attention mechanism with an up-down-up generative structure to build fast and accurate 3D brain reconstruction models using only fully connected layers. Point clouds serve as the model's 3D representation. Given the constraints of the surgical scene, the model's input is restricted to a single 2D image. Qualitative demonstrations and quantitative experiments on multiple metrics show the generative capability of the proposed model and demonstrate its advantages over other state-of-the-art models.
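The abstract names self-attention as the core mechanism of SRT but does not reproduce the architecture. As a point of reference only, the standard single-head scaled dot-product self-attention that transformer models of this kind build on can be sketched in a few lines of NumPy; the dimensions and random projection matrices below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention over a set of features.

    x: (n, d) matrix of n feature vectors (e.g. per-point features);
    w_q, w_k, w_v: (d, d) query/key/value projection matrices.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])            # (n, n) pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax over keys
    return weights @ v                                 # (n, d) attended features

rng = np.random.default_rng(0)
d = 8
x = rng.standard_normal((5, d))                        # 5 toy feature vectors
w_q, w_k, w_v = (rng.standard_normal((d, d)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # prints (5, 8)
```

Because the operation is permutation-equivariant over the input set, it is a natural fit for unordered point-cloud representations like the one the paper uses.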