Pseudo-Immersive Real-Time Display of 3D Scenes on Mobile Devices

Ming Li, A. Schmitz, L. Kobbelt
2011 International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission
Published 2011-05-16 · DOI: 10.1109/3DIMPVT.2011.15

Abstract

The real-time display of complex 3D scenes on mobile devices is difficult due to insufficient data throughput and relatively weak graphics performance. Hence, we propose a client-server system in which the processing of the complex scene is performed on a server and the resulting data is streamed to the mobile device. To cope with low transmission bit rates, the server sends new data at a rate of only about 2 Hz. However, instead of sending plain frame buffers, the server decomposes the scene geometry, represented by the current view's depth profile, into a small set of textured polygons. This processing requires no knowledge of the objects or structures in the scene, so the output of time-of-flight cameras can be handled as well. The 2.5D representation of the current frame allows the mobile device to render plausibly distorted views of the scene at high frame rates, as long as the viewing position does not change too much before the next frame arrives from the server. To further augment the visual experience, we use the mobile device's built-in camera or gyroscope to detect the spatial relation between the user's face and the device, so that the camera view can be adapted accordingly. This produces a pseudo-immersive visual effect. Besides the design of the overall system with a render server, a 3D display client, and real-time face/pose detection, our main technical contribution is a highly efficient algorithm that decomposes a frame buffer with per-pixel depth and normal information into a small set of planar regions which can be textured with the current frame. This representation is simple enough for real-time display on today's mobile devices.
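The core server-side step — segmenting a frame buffer with per-pixel depth and normal information into a few near-planar regions — can be approximated with a greedy flood fill that grows a region while neighbouring pixels share the seed's normal direction and lie close to the seed plane. This is a minimal sketch, not the authors' algorithm: the function name, the thresholds, and the assumption that per-pixel 3D positions (`points`) and unit normals are already available from the depth buffer are all illustrative.

```python
import numpy as np
from collections import deque

def decompose_planar_regions(points, normals,
                             normal_thresh=0.95, dist_thresh=0.05):
    """Greedy flood-fill segmentation of a frame buffer into near-planar
    regions.

    points  : (h, w, 3) per-pixel 3D positions (back-projected depth)
    normals : (h, w, 3) per-pixel unit normals
    Returns an (h, w) integer label map; each label is one planar region.
    """
    h, w = normals.shape[:2]
    labels = np.full((h, w), -1, dtype=np.int32)
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx] != -1:
                continue
            # Start a new region: the plane through the seed point
            # with the seed pixel's normal.
            n0 = normals[sy, sx]
            p0 = points[sy, sx]
            labels[sy, sx] = next_label
            queue = deque([(sy, sx)])
            while queue:
                y, x = queue.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < h and 0 <= nx < w):
                        continue
                    if labels[ny, nx] != -1:
                        continue
                    # Accept a neighbour if its normal agrees with the
                    # seed normal and it lies close to the seed plane.
                    if (np.dot(normals[ny, nx], n0) >= normal_thresh and
                            abs(np.dot(points[ny, nx] - p0, n0)) <= dist_thresh):
                        labels[ny, nx] = next_label
                        queue.append((ny, nx))
            next_label += 1
    return labels
```

Each resulting region can then be fitted with a single textured polygon, which is what keeps the streamed representation small enough for a 2 Hz update rate.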
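The client-side view adaptation can be illustrated with a simple pinhole reprojection in which the virtual camera is translated opposite the tracked head offset, so that near geometry shifts more on screen than far geometry (motion parallax — the "window" effect the abstract describes). This is a hedged sketch under assumed intrinsics (`f`, `cx`, `cy` are made-up values); the paper does not specify this exact formulation.

```python
import numpy as np

def reproject_vertex(p_world, face_offset, f=800.0, cx=320.0, cy=240.0):
    """Project a 3D point into the head-adapted view.

    p_world     : (x, y, z) point in camera coordinates, z > 0
    face_offset : (dx, dy) displacement of the user's face relative to
                  the screen centre, in scene units (assumed detected by
                  the front camera or gyroscope)
    Returns pixel coordinates (u, v) under an assumed pinhole model.
    """
    # Translate the virtual camera with the head offset so the scene
    # appears fixed behind the screen; near points move more on screen.
    p = np.asarray(p_world, dtype=float) - np.array(
        [face_offset[0], face_offset[1], 0.0])
    u = cx + f * p[0] / p[2]
    v = cy + f * p[1] / p[2]
    return u, v
```

Because the on-screen shift is proportional to 1/z, the textured 2.5D polygons at different depths slide against each other as the head moves, which is exactly what makes the low-rate 2 Hz stream look plausibly three-dimensional between server updates.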