{"title":"SRT: Shape Reconstruction Transformer for 3D Reconstruction of Point Cloud from 2D MRI","authors":"Bowen Hu, Yanyan Shen, Guocheng Wu, Shuqiang Wang","doi":"10.1145/3529836.3529902","DOIUrl":null,"url":null,"abstract":"There has been some work on 3D reconstruction of organ shapes based on medical images in minimally invasive surgeries. They aim to help lift visualization limitations for procedures with poor visual environments. However, extant models are often based on deep convolutional neural networks and complex, hard-to-train generative adversarial networks; their problems about stability and real-time plague the further development of the technique. In this paper, we propose the Shape Reconstruction Transformer (SRT) based on the self-attentive mechanism and up-down-up generative structure to design fast and accurate 3D brain reconstruction models through fully connected layer networks only. Point clouds are used as the 3D representation of the model. Considering the specificity of the surgical scene, a single 2D image is limited as the input to the model. 
Qualitative demonstrations and quantitative experiments based on multiple metrics show the generative capability of the proposed model and demonstrate the advantages of the proposed model over other state-of-the-art models.","PeriodicalId":285191,"journal":{"name":"2022 14th International Conference on Machine Learning and Computing (ICMLC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 14th International Conference on Machine Learning and Computing (ICMLC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3529836.3529902","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3
Abstract
There has been some work on 3D reconstruction of organ shapes from medical images in minimally invasive surgery, aiming to overcome the visualization limitations of procedures with poor visual environments. However, existing models are often built on deep convolutional neural networks or on complex, hard-to-train generative adversarial networks, and their problems with stability and real-time performance hinder further development of the technique. In this paper, we propose the Shape Reconstruction Transformer (SRT), based on the self-attention mechanism and an up-down-up generative structure, to build fast and accurate 3D brain reconstruction models using only fully connected layers. Point clouds serve as the model's 3D representation. Given the constraints of the surgical scene, the model's input is limited to a single 2D image. Qualitative demonstrations and quantitative experiments on multiple metrics show the generative capability of the proposed model and demonstrate its advantages over other state-of-the-art models.
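To make the described architecture concrete, the following is a minimal NumPy sketch of the general idea the abstract names: a global feature from a single 2D image is expanded ("up") into a set of point tokens, mixed with self-attention, and projected down to 3D coordinates to form a point cloud. All function names, dimensions, and weights here are illustrative assumptions for exposition, not the paper's actual SRT implementation.

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over a set of tokens.

    x: (n, d) token matrix; wq/wk/wv: (d, d) projection weights.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # Numerically stable softmax over each row.
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def reconstruct_points(image_feat, n_points=1024, seed=0):
    """Hypothetical up-down-up sketch: expand one image feature into
    point tokens, attend over them, and project each token to (x, y, z).
    """
    d = image_feat.shape[0]
    rng = np.random.default_rng(seed)
    # "Up": broadcast the global image feature to n_points tokens,
    # with small per-token noise so tokens can diverge under attention.
    tokens = np.tile(image_feat, (n_points, 1))
    tokens += 0.01 * rng.standard_normal((n_points, d))
    # Random (untrained) attention weights, purely for illustration.
    wq = rng.standard_normal((d, d)) / np.sqrt(d)
    wk = rng.standard_normal((d, d)) / np.sqrt(d)
    wv = rng.standard_normal((d, d)) / np.sqrt(d)
    tokens = self_attention(tokens, wq, wk, wv)
    # "Down": linear projection of each token to a 3D coordinate.
    w_out = rng.standard_normal((d, 3)) / np.sqrt(d)
    return tokens @ w_out  # (n_points, 3) point cloud

# Example: a dummy 64-dim image feature yields a 1024-point cloud.
feat = 0.1 * np.ones(64)
pc = reconstruct_points(feat)
```

Note that this sketch uses only matrix multiplications and fully connected projections, consistent with the abstract's claim that the model avoids convolutional and adversarial components; a trained version would learn the projection weights rather than sampling them randomly.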