{"title":"LiteAT:用于远程教育的数据轻量级和用户自适应VR远程呈现系统。","authors":"Yuxin Shen, Wei Liang, Jianzhu Ma","doi":"10.1109/TVCG.2025.3616747","DOIUrl":null,"url":null,"abstract":"<p><p>In educators' ongoing pursuit of enriching remote education, Virtual Reality (VR)-based telepresence has shown significant promise due to its immersive and interactive nature. Existing approaches often rely on point cloud or NeRF-based techniques to deliver realistic representations of teachers and classrooms to remote students. However, achieving low latency is non-trivial, and maintaining high-fidelity rendering under such constraints poses an even greater challenge. This paper introduces LiteAT, a data-lightweight and user-adaptive VR telepresence system, to enable real-time, immersive learning experiences. LiteAT employs a Gaussian Splatting-based reconstruction pipeline that integrates an SMPL-X-driven dynamic human model with a static classroom, supporting lightweight data transmission and high-quality rendering. To enable efficient and personalized exploration in the virtual classroom, we propose a user-adaptive viewpoint recommendation framework that dynamically suggests high-quality viewpoints tailored to user preferences. Candidate viewpoints are evaluated based on multiple visual quality factors and are continuously optimized based on recent user behavior and scene dynamics. Quantitative experiments and user studies validate the effectiveness of LiteAT across multiple evaluation metrics. 
LiteAT establishes a versatile and scalable foundation for immersive telepresence, potentially supporting real-time scenarios such as procedural teaching, multimodal instruction, and collaborative learning.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":6.5000,"publicationDate":"2025-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"LiteAT: A Data-Lightweight and User-Adaptive VR Telepresence System for Remote Education.\",\"authors\":\"Yuxin Shen, Wei Liang, Jianzhu Ma\",\"doi\":\"10.1109/TVCG.2025.3616747\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>In educators' ongoing pursuit of enriching remote education, Virtual Reality (VR)-based telepresence has shown significant promise due to its immersive and interactive nature. Existing approaches often rely on point cloud or NeRF-based techniques to deliver realistic representations of teachers and classrooms to remote students. However, achieving low latency is non-trivial, and maintaining high-fidelity rendering under such constraints poses an even greater challenge. This paper introduces LiteAT, a data-lightweight and user-adaptive VR telepresence system, to enable real-time, immersive learning experiences. LiteAT employs a Gaussian Splatting-based reconstruction pipeline that integrates an SMPL-X-driven dynamic human model with a static classroom, supporting lightweight data transmission and high-quality rendering. To enable efficient and personalized exploration in the virtual classroom, we propose a user-adaptive viewpoint recommendation framework that dynamically suggests high-quality viewpoints tailored to user preferences. Candidate viewpoints are evaluated based on multiple visual quality factors and are continuously optimized based on recent user behavior and scene dynamics. 
Quantitative experiments and user studies validate the effectiveness of LiteAT across multiple evaluation metrics. LiteAT establishes a versatile and scalable foundation for immersive telepresence, potentially supporting real-time scenarios such as procedural teaching, multimodal instruction, and collaborative learning.</p>\",\"PeriodicalId\":94035,\"journal\":{\"name\":\"IEEE transactions on visualization and computer graphics\",\"volume\":\"PP \",\"pages\":\"\"},\"PeriodicalIF\":6.5000,\"publicationDate\":\"2025-10-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on visualization and computer graphics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/TVCG.2025.3616747\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on visualization and computer graphics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TVCG.2025.3616747","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
In educators' ongoing pursuit of enriching remote education, Virtual Reality (VR)-based telepresence has shown significant promise due to its immersive and interactive nature. Existing approaches often rely on point-cloud- or NeRF-based techniques to deliver realistic representations of teachers and classrooms to remote students. However, achieving low latency is non-trivial, and maintaining high-fidelity rendering under such constraints poses an even greater challenge. This paper introduces LiteAT, a data-lightweight and user-adaptive VR telepresence system that enables real-time, immersive learning experiences. LiteAT employs a Gaussian Splatting-based reconstruction pipeline that integrates an SMPL-X-driven dynamic human model with a static classroom, supporting lightweight data transmission and high-quality rendering. To enable efficient, personalized exploration of the virtual classroom, we propose a user-adaptive viewpoint recommendation framework that dynamically suggests high-quality viewpoints tailored to user preferences. Candidate viewpoints are scored on multiple visual-quality factors and continuously refined according to recent user behavior and scene dynamics. Quantitative experiments and user studies validate the effectiveness of LiteAT across multiple evaluation metrics. LiteAT establishes a versatile and scalable foundation for immersive telepresence, potentially supporting real-time scenarios such as procedural teaching, multimodal instruction, and collaborative learning.
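The abstract does not specify how candidate viewpoints are scored or how user preferences feed back into the ranking. As a purely illustrative sketch (the factor names, weights, and update rule below are hypothetical, not taken from the paper), a viewpoint recommender of the kind described might combine several visual-quality factors into a weighted score and nudge the weights toward the factor profile of viewpoints the user actually selects:

```python
# Hypothetical sketch of user-adaptive viewpoint scoring (NOT the paper's method).
# Each candidate viewpoint carries per-factor quality scores in [0, 1]; the
# weights drift toward the factors of viewpoints the user chooses, so future
# recommendations reflect recent behavior.
from dataclasses import dataclass


@dataclass
class Viewpoint:
    name: str
    quality: dict[str, float]  # visual-quality factor -> score in [0, 1]


def score(vp: Viewpoint, weights: dict[str, float]) -> float:
    """Weighted sum of the viewpoint's visual-quality factors."""
    return sum(w * vp.quality.get(k, 0.0) for k, w in weights.items())


def update_weights(weights: dict[str, float], chosen: Viewpoint,
                   lr: float = 0.2) -> dict[str, float]:
    """Blend weights toward the chosen viewpoint's factor profile, then
    renormalize so the weights still sum to 1."""
    raw = {k: (1.0 - lr) * w + lr * chosen.quality.get(k, 0.0)
           for k, w in weights.items()}
    total = sum(raw.values())
    return {k: v / total for k, v in raw.items()}


# Hypothetical factors for a virtual-classroom scene.
weights = {"teacher_visibility": 0.4, "occlusion_free": 0.3, "board_legibility": 0.3}
candidates = [
    Viewpoint("front_row", {"teacher_visibility": 0.9, "occlusion_free": 0.6,
                            "board_legibility": 0.8}),
    Viewpoint("side_aisle", {"teacher_visibility": 0.5, "occlusion_free": 0.9,
                             "board_legibility": 0.4}),
]

best = max(candidates, key=lambda vp: score(vp, weights))
weights = update_weights(weights, best)  # adapt to the user's implicit choice
```

In this toy run, `front_row` scores 0.78 against `side_aisle`'s 0.59, so it is recommended, and the weights shift slightly toward its factor profile. The real system would also fold in scene dynamics (e.g., where the teacher currently is), which this sketch omits.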