Foundation Model-Guided Gaussian Splatting for 4D Reconstruction of Deformable Tissues

Yifan Liu; Chenxin Li; Hengyu Liu; Chen Yang; Yixuan Yuan
IEEE Transactions on Medical Imaging, vol. 44, no. 6, pp. 2672-2682
Published: 2025-02-25 (Journal Article)
DOI: 10.1109/TMI.2025.3545183
https://ieeexplore.ieee.org/document/10902412/
Reconstructing deformable anatomical structures from endoscopic videos is a pivotal and promising research topic that can enable advanced surgical applications and improve patient outcomes. While existing surgical scene reconstruction methods have made notable progress, they often suffer from slow rendering speeds due to their use of neural radiance fields, limiting their practical viability in real-world applications. To overcome this bottleneck, we propose EndoGaussian, a framework that integrates the strengths of 3D Gaussian Splatting representations, allowing for high-fidelity tissue reconstruction, efficient training, and real-time rendering. Specifically, we design a Foundation Model-driven Initialization (FMI) module, which distills 3D cues from multiple vision foundation models (VFMs) to swiftly construct the preliminary scene structure for Gaussian initialization. Then, a Spatio-temporal Gaussian Tracking (SGT) module is designed, which efficiently models scene dynamics using a multi-scale HexPlane with spatio-temporal priors. Furthermore, to improve dynamics modeling for scenes with large deformations, EndoGaussian integrates Motion-aware Frame Synthesis (MFS) to adaptively synthesize new frames as extra training constraints. Experimental results on public datasets demonstrate EndoGaussian's advantages over prior state-of-the-art methods, including superior rendering speed (168 FPS, real-time), enhanced rendering quality (38.555 dB PSNR), and reduced training overhead (under 2 min per scene). These results underscore EndoGaussian's potential to significantly advance intraoperative surgical applications, paving the way for more accurate and efficient real-time surgical guidance and decision-making in clinical scenarios. Code is available at: https://github.com/CUHK-AIM-Group/EndoGaussian.
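The HexPlane representation mentioned for Spatio-temporal Gaussian Tracking factorizes the 4D spatio-temporal volume into six learnable 2D feature planes, one per coordinate pair of (x, y, z, t). The sketch below is purely illustrative and not taken from the paper's implementation (resolutions, channel counts, and the element-wise-product fusion are our own assumptions; see the linked repository for the actual code): it shows how a per-Gaussian feature could be queried from such a factorized grid before being decoded into a deformation offset.

```python
import numpy as np

def bilerp(plane, u, v):
    """Bilinearly interpolate a feature plane of shape (H, W, C)
    at normalized coordinates u, v in [0, 1]."""
    H, W, _ = plane.shape
    x, y = u * (W - 1), v * (H - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * plane[y0, x0] + fx * (1 - fy) * plane[y0, x1]
            + (1 - fx) * fy * plane[y1, x0] + fx * fy * plane[y1, x1])

class HexPlaneField:
    """Factorizes a 4D (x, y, z, t) field into six 2D feature planes."""
    # Coordinate pairs: (x,y), (x,z), (y,z), (x,t), (y,t), (z,t)
    PAIRS = [(0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3)]

    def __init__(self, res=16, channels=8, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = [rng.normal(scale=0.1, size=(res, res, channels))
                       for _ in self.PAIRS]

    def features(self, xyzt):
        """Fuse the six interpolated plane features by element-wise product."""
        f = np.ones(self.planes[0].shape[-1])
        for plane, (a, b) in zip(self.planes, self.PAIRS):
            f = f * bilerp(plane, xyzt[a], xyzt[b])
        return f

field = HexPlaneField()
p = np.array([0.5, 0.2, 0.8, 0.1])  # a Gaussian centre plus a timestamp, normalized
feat = field.features(p)  # shape (channels,); a decoder head would map this
                          # to a per-Gaussian deformation offset
```

In a full pipeline the resulting feature vector would be passed through a small MLP to predict the deformation applied to each canonical Gaussian at time t; the product fusion keeps the representation low-rank, which is what makes grid-factorized dynamic fields memory-efficient compared to a dense 4D grid.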