2011 International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission: Latest Publications

Pseudo-Immersive Real-Time Display of 3D Scenes on Mobile Devices
Ming Li, A. Schmitz, L. Kobbelt
DOI: 10.1109/3DIMPVT.2011.15 (https://doi.org/10.1109/3DIMPVT.2011.15)
Published: 2011-05-16
Abstract: Displaying complex 3D scenes in real time on mobile devices is difficult due to insufficient data throughput and relatively weak graphics performance. We therefore propose a client-server system in which the complex scene is processed on a server and the resulting data is streamed to the mobile device. To cope with low transmission bit rates, the server sends new data at a frame rate of only about 2 Hz. However, instead of sending plain frame buffers, the server decomposes the scene geometry, represented by the current view's depth profile, into a small set of textured polygons. This processing requires no knowledge of objects or structures in the scene, so the output of time-of-flight cameras can be handled as well. The 2.5D representation of the current frame allows the mobile device to render plausibly distorted views of the scene at high frame rates, as long as the viewing position does not change too much before the next frame arrives from the server. To further augment the visual experience, we use the mobile device's built-in camera or gyroscope to detect the spatial relation between the user's face and the device, so that the camera view can be adapted accordingly. This produces a pseudo-immersive visual effect. Besides designing the overall system with a render server, a 3D display client, and real-time face/pose detection, our main technical contribution is a highly efficient algorithm that decomposes a frame buffer with per-pixel depth and normal information into a small set of planar regions which can be textured with the current frame. This representation is simple enough for real-time display on today's mobile devices.
Citations: 8
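The abstract above hinges on grouping a frame buffer with per-pixel depth and normals into a few roughly planar, texturable regions. The sketch below is only a minimal illustration of that idea via greedy region growing; the function name, thresholds, and 4-neighbour flood fill are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np
from collections import deque

def planar_regions(depth, normals, depth_tol=0.05, normal_tol=0.95):
    """Greedily label pixels into regions whose normals agree with the seed
    and whose depth varies smoothly between neighbours.
    depth: (H, W) float array; normals: (H, W, 3) unit normals."""
    h, w = depth.shape
    labels = -np.ones((h, w), dtype=int)   # -1 = unassigned
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx] != -1:
                continue
            seed_n = normals[sy, sx]
            labels[sy, sx] = next_label
            queue = deque([(sy, sx)])
            while queue:                    # flood fill over the 4-neighbourhood
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == -1:
                        if (abs(depth[ny, nx] - depth[y, x]) < depth_tol and
                                np.dot(normals[ny, nx], seed_n) > normal_tol):
                            labels[ny, nx] = next_label
                            queue.append((ny, nx))
            next_label += 1
    return labels, next_label
```

Each resulting region could then be approximated by a textured polygon and streamed to the client, in the spirit of the 2.5D representation the abstract describes.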
3D Reconstruction of Urban Areas
Charalambos (Charis) Poullis, Suya You
DOI: 10.1109/3DIMPVT.2011.14 (https://doi.org/10.1109/3DIMPVT.2011.14)
Published: 2011-05-01
Abstract: Virtual representations of real-world areas are increasingly employed in a variety of applications such as urban planning, personnel training, and simulation. Despite the increasing demand for such realistic 3D representations, creating them remains a hard and often manual process. In this paper, we address the problem of creating photo-realistic 3D scene models for large-scale areas and present a complete system. The proposed system comprises two main components: (1) a reconstruction pipeline which employs a fully automatic technique for extracting and producing high-fidelity geometric models directly from Light Detection and Ranging (LiDAR) data, and (2) a flexible texture-blending technique for generating high-quality photo-realistic textures by fusing information from multiple optical sensor resources. The result is a photo-realistic 3D representation of large-scale (city-size) areas of the real world. We have tested the proposed system extensively with many city-size datasets, which confirms the validity and robustness of the approach. The reported results verify that the system is a consistent workflow that allows non-experts and non-artists to rapidly fuse aerial LiDAR and imagery to construct photo-realistic 3D scene models.
Citations: 24
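The second component described above fuses imagery from multiple optical sensors into one texture. As a rough illustration only (the paper's blending technique is more sophisticated than this), the sketch below averages co-registered images per pixel with caller-supplied weights; the function name and the weighting scheme are assumptions.

```python
import numpy as np

def blend_textures(images, weights):
    """Fuse several co-registered images into one texture by weighted averaging.
    images: list of (H, W, 3) float arrays; weights: list of (H, W) float arrays
    (weights could, for instance, encode viewing angle or ground resolution)."""
    acc = np.zeros_like(images[0], dtype=np.float64)
    wsum = np.zeros(images[0].shape[:2], dtype=np.float64)
    for img, w in zip(images, weights):
        acc += img * w[..., None]
        wsum += w
    wsum = np.maximum(wsum, 1e-8)  # avoid division by zero where no image covers a pixel
    return acc / wsum[..., None]
```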
Higher Order CRF for Surface Reconstruction from Multi-view Data Sets
R. Song, Yonghuai Liu, Ralph Robert Martin, Paul L. Rosin
DOI: 10.1109/3DIMPVT.2011.27 (https://doi.org/10.1109/3DIMPVT.2011.27)
Published: 2011-05-01
Abstract: We propose a novel method based on a higher-order Conditional Random Field (CRF) for reconstructing surface models from multi-view data sets. The method is automatic and robust to the scanning noise and registration errors that are inevitable during data acquisition and registration. By incorporating the information within the input data sets into the energy function more fully than existing methods, it more effectively captures spatial relations between 3D points, making the reconstructed surface both topologically and geometrically consistent with the data sources. We employ a state-of-the-art belief propagation algorithm to infer this higher-order CRF while exploiting the sparseness of the CRF labeling to reduce computational complexity. Experiments show that the proposed approach provides improved surface reconstruction.
Citations: 2
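The abstract above centres on a higher-order CRF energy that belief propagation then minimises. The sketch below only illustrates the assumed general form of such an energy (unary plus pairwise plus higher-order clique terms); the data structures and the function name are hypothetical and not taken from the paper.

```python
def crf_energy(labels, unary, pairwise_edges, higher_order_cliques):
    """Evaluate E(x) = sum_i u_i(x_i) + sum_{ij} p_ij(x_i, x_j) + sum_c h_c(x_c).
    labels: dict node -> label
    unary: dict node -> dict label -> cost
    pairwise_edges: list of ((i, j), cost_fn(label_i, label_j))
    higher_order_cliques: list of (nodes, cost_fn(tuple_of_labels))"""
    energy = sum(unary[i][labels[i]] for i in labels)                       # data terms
    energy += sum(cost(labels[i], labels[j]) for (i, j), cost in pairwise_edges)
    energy += sum(cost(tuple(labels[n] for n in nodes))                     # clique terms
                  for nodes, cost in higher_order_cliques)
    return energy
```

Inference then amounts to searching for the labeling that minimises such an energy; per the abstract, the sparseness of the feasible labels is what keeps belief propagation over the higher-order cliques computationally manageable.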