Real-Time, Free-Viewpoint Holographic Patient Rendering for Telerehabilitation via a Single Camera: A Data-driven Approach with 3D Gaussian Splatting for Real-World Adaptation.
{"title":"Real-Time, Free-Viewpoint Holographic Patient Rendering for Telerehabilitation via a Single Camera: A Data-driven Approach with 3D Gaussian Splatting for Real-World Adaptation.","authors":"Shengting Cao, Jiamiao Zhao, Fei Hu, Yu Gan","doi":"10.1109/TVCG.2025.3544297","DOIUrl":null,"url":null,"abstract":"<p><p>Telerehabilitation is a cost-effective alternative to in-clinic rehabilitation. Although convenient, it lacks immersive and free-viewpoint patient visualization. Current research explores two solutions to this issue. Mesh-based methods use 3D models and motion capture for AR visualization. However, they are labor-intensive and less photorealistic than 2D images. Microsoft's Holoportation generates photorealistic 3D models with eight RGBD cameras in real time. However, it requires complex setups, high GPU power, and high-speed communication infrastructure, making deployment challenging. This paper presents a Real-Time Free-Viewpoint Holographic Patient Rendering (RT-FVHP) system for telerehabilitation. Unlike traditional methods that require manually crafted assets such as 3D meshes, texture maps, and skeletal rigging, our data-driven approach eliminates the need for explicit asset definitions. Inspired by the HumanNeRF framework, we retarget dynamic human poses to a canonical pose and leverage 3D Gaussian Splatting to train a neural network in canonical space for patient representation. The trained model generates 2D RGB outputs via Gaussian Splatting rasterization, guided by camera parameters and human pose inputs. Compatible with HoloLens 2 and web-based platforms, RT-FVHP operates effectively under real-world conditions, including handling occlusions caused by treadmills. Occlusion handling is accomplished using our Shape-Enforced Gaussian Density Control (SGDC), which initializes and densifies 3D Gaussians in occluded regions using estimated SMPL human body priors. This approach minimizes manual intervention while ensuring complete body reconstruction. With efficient Gaussian rasterization, the model delivers real-time performance of up to 400 FPS at 1080p resolution on a dedicated RTX6000 GPU.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2025-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on visualization and computer graphics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TVCG.2025.3544297","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Telerehabilitation is a cost-effective alternative to in-clinic rehabilitation. Although convenient, it lacks immersive, free-viewpoint patient visualization. Current research explores two solutions to this issue. Mesh-based methods use 3D models and motion capture for AR visualization; however, they are labor-intensive and less photorealistic than 2D images. Microsoft's Holoportation generates photorealistic 3D models in real time from eight RGBD cameras, but it requires a complex setup, high GPU power, and high-speed communication infrastructure, making deployment challenging. This paper presents a Real-Time Free-Viewpoint Holographic Patient Rendering (RT-FVHP) system for telerehabilitation. Unlike traditional methods that require manually crafted assets such as 3D meshes, texture maps, and skeletal rigging, our data-driven approach eliminates the need for explicit asset definitions. Inspired by the HumanNeRF framework, we retarget dynamic human poses to a canonical pose and leverage 3D Gaussian Splatting to train a neural network in canonical space for patient representation. The trained model generates 2D RGB outputs via Gaussian Splatting rasterization, guided by camera parameters and human pose inputs. Compatible with HoloLens 2 and web-based platforms, RT-FVHP operates effectively under real-world conditions, including handling occlusions caused by treadmills. Occlusion handling is accomplished with our Shape-Enforced Gaussian Density Control (SGDC), which initializes and densifies 3D Gaussians in occluded regions using estimated SMPL human body priors. This approach minimizes manual intervention while ensuring complete body reconstruction. With efficient Gaussian rasterization, the model delivers real-time performance of up to 400 FPS at 1080p resolution on a dedicated RTX 6000 GPU.
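To make the canonical-space idea concrete, the sketch below illustrates one common way such a pipeline can re-pose canonical 3D Gaussians with SMPL-style linear blend skinning before handing them to a splatting rasterizer. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name `lbs_warp_gaussians`, the per-Gaussian skinning weights, and the random placeholder data are hypothetical, and the final rasterization step is only indicated in a comment.

```python
# Illustrative sketch (not the paper's code): warp canonical-space Gaussians
# into an observed body pose via SMPL-style linear blend skinning (LBS).
import numpy as np

def lbs_warp_gaussians(means_c, covs_c, skin_weights, bone_transforms):
    """Warp canonical Gaussians into the observed pose.

    means_c:         (N, 3)    Gaussian centers in canonical space
    covs_c:          (N, 3, 3) Gaussian covariances in canonical space
    skin_weights:    (N, J)    per-Gaussian skinning weights (rows sum to 1)
    bone_transforms: (J, 4, 4) rigid transforms of the J body joints
    """
    # Blend the per-joint rigid transforms into one 4x4 matrix per Gaussian.
    T = np.einsum('nj,jab->nab', skin_weights, bone_transforms)   # (N, 4, 4)

    # Transform Gaussian centers in homogeneous coordinates.
    ones = np.ones((means_c.shape[0], 1))
    means_h = np.concatenate([means_c, ones], axis=1)             # (N, 4)
    means_o = np.einsum('nab,nb->na', T, means_h)[:, :3]

    # Rotate covariances with the linear part of each blended transform: R @ Sigma @ R^T.
    R = T[:, :3, :3]
    covs_o = np.einsum('nab,nbc,ndc->nad', R, covs_c, R)
    return means_o, covs_o

# Example with placeholder data; in practice the canonical Gaussians come from
# the trained model and bone_transforms from an estimated SMPL pose.
rng = np.random.default_rng(0)
N, J = 1000, 24                         # SMPL uses 24 body joints
means = rng.normal(size=(N, 3))
covs = np.repeat(np.eye(3)[None] * 1e-4, N, axis=0)
weights = rng.dirichlet(np.ones(J), size=N)
bones = np.repeat(np.eye(4)[None], J, axis=0)   # identity pose as a stand-in

posed_means, posed_covs = lbs_warp_gaussians(means, covs, weights, bones)
# posed_means / posed_covs would then be passed, together with the camera
# intrinsics and extrinsics, to an off-the-shelf 3D Gaussian Splatting
# rasterizer to produce the 2D RGB output described in the abstract.
```

The key property this sketch captures is that only the per-frame pose (the bone transforms) changes at render time, while the learned canonical Gaussians stay fixed, which is what makes fast free-viewpoint playback from a single trained model possible.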