{"title":"Pseudo-Immersive Real-Time Display of 3D Scenes on Mobile Devices","authors":"Ming Li, A. Schmitz, L. Kobbelt","doi":"10.1109/3DIMPVT.2011.15","DOIUrl":"https://doi.org/10.1109/3DIMPVT.2011.15","url":null,"abstract":"The real-time display of complex 3D scenes on mobile devices is difficult due to insufficient data throughput and relatively weak graphics performance. Hence, we propose a client-server system in which the processing of the complex scene is performed on a server and the resulting data is streamed to the mobile device. To cope with low transmission bit rates, the server sends new data at a frame rate of only about 2 Hz. However, instead of sending plain frame buffers, the server decomposes the scene geometry, represented by the current view's depth profile, into a small set of textured polygons. This processing does not require knowledge of objects or structures in the scene, i.e., the output of time-of-flight cameras can be handled as well. The 2.5D representation of the current frame allows the mobile device to render plausibly distorted views of the scene at high frame rates, as long as the viewing position does not change too much before the next frame arrives from the server. To further augment the visual experience, we use the mobile device's built-in camera or gyroscope to detect the spatial relation between the user's face and the device, so that the camera view can be adapted accordingly. This produces a pseudo-immersive visual effect. Besides designing the overall system with a render server, a 3D display client, and real-time face/pose detection, our main technical contribution is a highly efficient algorithm that decomposes a frame buffer with per-pixel depth and normal information into a small set of planar regions which can be textured with the current frame. This representation is simple enough for real-time display on today's mobile devices.","PeriodicalId":330003,"journal":{"name":"2011 International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121687586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D Reconstruction of Urban Areas","authors":"Charalambos (Charis) Poullis, Suya You","doi":"10.1109/3DIMPVT.2011.14","DOIUrl":"https://doi.org/10.1109/3DIMPVT.2011.14","url":null,"abstract":"Virtual representations of real-world areas are increasingly being employed in a variety of applications such as urban planning, personnel training, and simulations. Despite the increasing demand for such realistic 3D representations, creating them remains a hard and often manual process. In this paper, we address the problem of creating photorealistic 3D scene models for large-scale areas and present a complete system. The proposed system comprises two main components: (1) a reconstruction pipeline that employs a fully automatic technique for extracting and producing high-fidelity geometric models directly from Light Detection and Ranging (LiDAR) data, and (2) a flexible texture-blending technique for generating high-quality photorealistic textures by fusing information from multiple optical sensors. The result is a photorealistic 3D representation of large-scale (city-size) areas of the real world. We have tested the proposed system extensively with many city-size datasets, and the results confirm the validity and robustness of the approach. They verify that the system is a consistent workflow that allows non-experts and non-artists to rapidly fuse aerial LiDAR and imagery into photorealistic 3D scene models.","PeriodicalId":330003,"journal":{"name":"2011 International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127378298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Higher Order CRF for Surface Reconstruction from Multi-view Data Sets","authors":"R. Song, Yonghuai Liu, Ralph Robert Martin, Paul L. Rosin","doi":"10.1109/3DIMPVT.2011.27","DOIUrl":"https://doi.org/10.1109/3DIMPVT.2011.27","url":null,"abstract":"We propose a novel method based on a higher-order Conditional Random Field (CRF) for reconstructing surface models from multi-view data sets. The method is automatic and robust to the scanning noise and registration errors inevitably introduced during data acquisition and registration. By incorporating the information within the input data sets into the energy function more fully than existing methods, it more effectively captures spatial relations between 3D points, making the reconstructed surface both topologically and geometrically consistent with the data sources. We employ a state-of-the-art belief propagation algorithm to infer this higher-order CRF, exploiting the sparseness of the CRF labeling to reduce the computational complexity. Experiments show that the proposed approach yields improved surface reconstructions.","PeriodicalId":330003,"journal":{"name":"2011 International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission","volume":"98 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125806076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}