Proceedings of the 11th European Conference on Visual Media Production: Latest Publications

Rerendering landscape photographs
Proceedings of the 11th European Conference on Visual Media Production. Pub date: 2014-11-13. DOI: 10.1145/2668904.2668942
Pu Wang, Diana Bicazan, A. Ghosh
Abstract: We present a practical approach for realistic rerendering of landscape photographs. We extract a view-dependent depth map from a single input landscape image by examining global and local pixel color distributions, and demonstrate applications of depth-dependent rendering such as novel viewpoints, digital refocusing, and dehazing. We also present a simple approach to relighting the input landscape photograph under novel sky illumination. Here, we assume diffuse reflectance and relight landscapes by estimating the irradiance due to the sky in the input photograph. Finally, we also take into account specular reflections on water surfaces, which are common in landscape photography, and demonstrate a semi-automatic process for relighting scenes with still water.
Citations: 0
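The diffuse relighting step this abstract describes can be sketched as follows. Under the Lambertian assumption, an observed pixel is albedo times sky irradiance, so the albedo cancels when scaling by the ratio of the estimated old and new irradiances. The function name and per-channel irradiance inputs are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def relight_diffuse(image, irradiance_old, irradiance_new):
    """Relight a diffuse (Lambertian) landscape under a new sky.
    observed = albedo * irradiance, so scaling by the per-channel
    irradiance ratio relights without recovering the albedo itself.
    `image` is linear RGB with shape (H, W, 3)."""
    ratio = np.asarray(irradiance_new, dtype=float) / np.asarray(irradiance_old, dtype=float)
    return np.clip(image * ratio, 0.0, 1.0)
```

A real system would estimate the sky irradiance from the input photograph, as the paper does; here both irradiances are simply supplied by the caller.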
Bullet time using multi-viewpoint robotic camera system
Proceedings of the 11th European Conference on Visual Media Production. Pub date: 2014-11-13. DOI: 10.1145/2668904.2668932
Kensuke Ikeya, K. Hisatomi, Miwa Katayama, T. Mishina, Y. Iwadate
Abstract: The main purpose of our research was to generate bullet time of dynamically moving subjects in 3D space, or multiple shots of subjects within 3D space. We also aimed to create a practical and generic bullet-time system that requires little advance preparation and generates bullet time in semi-real time after subjects have been captured, enabling replays in sports broadcasting. To achieve this, we developed a multi-viewpoint robotic camera system. In our system, a camera operator controls the multi-viewpoint robotic cameras to simultaneously focus on subjects in 3D space and captures multi-viewpoint videos. Bullet time is generated from these videos in semi-real time by correcting directional control errors, caused by operator error or by mechanical control errors of the robotic cameras, using directional control of virtual cameras based on projective transformation. Experimental results showed that our system generated bullet time for a dynamically moving player in 3D space, or multiple shots of players within 3D space, in volleyball, gymnastics, and basketball in about a minute. System preparation, including camera calibration, was finished in about five minutes. Our system was used in the "ISU Grand Prix of Figure Skating 2013/2014, NHK Trophy" live sports program in November 2013: the bullet time of a dynamically moving skater on a large skating rink was generated in semi-real time and broadcast in a replay just after the competition. These results confirm that our bullet-time system is practical and generic.
Citations: 7
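The projective-transformation correction mentioned in the abstract rests on a standard fact: an image taken by a camera rotated about its optical centre is related to the original by the homography H = K R K⁻¹, so small pointing errors can be corrected by a 2D warp without knowing scene depth. The sketch below shows only this core relation; the paper's full virtual-camera control is more involved.

```python
import numpy as np

def rotation_homography(K, R):
    """Homography that re-renders an image as if the camera had been
    rotated by R about its optical centre: H = K @ R @ inv(K).
    K is the 3x3 intrinsic matrix, R a 3x3 rotation."""
    return K @ R @ np.linalg.inv(K)

def warp_point(H, x, y):
    """Apply homography H to pixel (x, y) in homogeneous coordinates."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

Note that a rotation about the optical axis leaves the principal point fixed, which is a convenient sanity check on the warp.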
Web-based visualisation of on-set point cloud data
Proceedings of the 11th European Conference on Visual Media Production. Pub date: 2014-11-13. DOI: 10.1145/2668904.2668937
A. Evans, J. Agenjo, J. Blat
Abstract: In this paper we present a system for progressive encoding, storage, transmission, and web-based visualisation of large point cloud datasets. Point cloud data is typically recorded on-set during a film production and is later used to assist with various stages of the post-production process. Remote visualisation of this data (on- or off-set, via desktop or mobile device) can be difficult, as the volume of data can take a long time to transfer and can easily overwhelm the memory of a typical 3D web or mobile client. Yet web-based visualisation of this data opens up many possibilities for remote and collaborative workflow models. To facilitate this workflow, we present a system that progressively transfers point cloud data to a WebGL-based client, updating the visualisation as more information is downloaded and maintaining a coherent structure at lower resolutions. Existing work on progressive transfer of 3D assets has focused on well-formed triangle meshes and is thus unsuitable for use with raw LIDAR data. Our work addresses this challenge directly; the principal contribution is the first published method for progressive visualisation of point cloud data via the web.
Citations: 16
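The key property of a progressive stream, "maintaining a coherent structure at lower resolutions", can be illustrated by reordering points coarse-to-fine: at each successively finer grid level, emit one representative per newly occupied cell, so any prefix of the stream is already a usable low-resolution version of the cloud. This is a toy stand-in for the paper's actual encoding, which the abstract does not specify in detail.

```python
import numpy as np

def progressive_order(points, levels=8):
    """Reorder a point cloud coarse-to-fine. At grid level l the bounding
    box is split into 2^l cells per axis; one point is emitted for each
    cell not already represented by an earlier point. Returns a
    permutation of the point indices."""
    pts = np.asarray(points, dtype=float)
    lo = pts.min(axis=0)
    span = np.ptp(pts, axis=0) + 1e-9          # avoid division by zero
    emitted = np.zeros(len(pts), dtype=bool)
    order = []
    for level in range(levels):
        keys = [tuple(k) for k in
                np.floor((pts - lo) / span * (2 ** level)).astype(int)]
        # cells already represented by previously emitted points
        occupied = {k for k, e in zip(keys, emitted) if e}
        for i, key in enumerate(keys):
            if not emitted[i] and key not in occupied:
                occupied.add(key)
                order.append(i)
                emitted[i] = True
    # points never separated from their neighbours at the finest level
    order.extend(int(i) for i in np.flatnonzero(~emitted))
    return order
```

A client downloading this stream can render the first N points at any time and see a spatially even subsample rather than one dense corner of the scene.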
Frequency-based controls for terrain editing
Proceedings of the 11th European Conference on Visual Media Production. Pub date: 2014-11-13. DOI: 10.1145/2668904.2668944
Gwyneth Bradbury, I. Choi, C. Amati, Kenny Mitchell, T. Weyrich
Abstract: Authoring virtual terrains can be a challenging task. Procedural and stochastic methods for automated terrain generation produce plausible results but lack intuitive control of the terrain features, while data-driven methods offer more creative control at the cost of a limited feature set, higher storage requirements, and blending artefacts. Moreover, artists often prefer a workflow involving varied reference material such as photographs, concept art, elevation maps, and satellite images, for the incorporation of which there is little support from commercial content-creation tools. We present a sketch-based toolset for asset-guided creation and intuitive editing of virtual terrains, allowing the manipulation of both elevation maps and 3D meshes, and exploiting a layer-based interface. We employ a frequency-band subdivision of elevation maps to allow using the appropriate editing tool for each level of detail. Using our system, we show that a user can start from various input types (storyboard sketches, photographs, or height maps) to easily develop and customise a virtual terrain.
Citations: 6
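The frequency-band subdivision the abstract relies on can be realised as a difference-of-Gaussians decomposition of the elevation map: each band holds one scale of detail, edits to one band leave the other scales untouched, and summing the bands plus the coarsest residual reconstructs the map exactly. The particular filter sizes below are illustrative assumptions, not the paper's.

```python
import numpy as np

def blur(img, sigma):
    """Separable Gaussian low-pass filter with edge padding (NumPy only)."""
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    pad = np.pad(np.asarray(img, dtype=float), r, mode='edge')
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, 'valid'), 0, pad)
    return np.apply_along_axis(lambda m: np.convolve(m, k, 'valid'), 1, tmp)

def split_bands(height, sigmas=(1.0, 4.0, 16.0)):
    """Split an elevation map into detail bands plus a coarse base:
    band[i] = blur(h, sigma[i-1]) - blur(h, sigma[i]). The telescoping
    sum guarantees exact reconstruction: sum(bands) + base == height."""
    levels = [np.asarray(height, dtype=float)]
    for s in sigmas:
        levels.append(blur(levels[0], s))
    bands = [levels[i] - levels[i + 1] for i in range(len(sigmas))]
    return bands, levels[-1]
```

An editor built on this would route a fine brush to the first band and large landform strokes to the base, then re-sum for display.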
Line-preserving hole-filling for 2D-to-3D conversion
Proceedings of the 11th European Conference on Visual Media Production. Pub date: 2014-11-13. DOI: 10.1145/2668904.2668931
Nils Plath, Lutz Goldmann, A. Nitsch, S. Knorr, T. Sikora
Abstract: Many 2D-to-3D conversion techniques rely on image-based rendering methods in order to synthesize 3D views from monoscopic images. This leads to holes in the generated views due to previously occluded objects becoming visible for which no texture information is available. Approaches attempting to alleviate the effects of these artifacts are referred to as hole-filling. This paper proposes a method which determines a non-uniform deformation of the stereoscopic view such that no holes are visible. Additionally, an energy term is devised which prevents straight lines in the input image from being bent by the non-uniform image warp. This is achieved by constructing a triangle mesh which approximates the depth map of the input image and by integrating a set of detected lines into it. The line information is incorporated into the underlying optimization problem in order to prevent bending of the lines. The evaluation of the proposed algorithm on a comprehensive dataset with a variety of scenes shows that holes are efficiently filled without obvious background distortions.
Citations: 3
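The idea behind a line-preserving energy term can be illustrated in isolation: sample a detected line, and penalise each warped interior sample's deviation from the straight segment joining its neighbours. This standalone penalty is only an illustration of the principle; the paper's actual term is embedded in a triangle-mesh warp optimization and may differ in form.

```python
import numpy as np

def line_bend_energy(warped):
    """Bending penalty for a warped polyline sampled from a straight line:
    sum of squared distances of each interior vertex from the midpoint of
    its two neighbours. Zero only when the warped samples remain collinear
    and evenly spaced; any bend (or stretching) raises the energy."""
    p = np.asarray(warped, dtype=float)
    mids = 0.5 * (p[:-2] + p[2:])
    return float(np.sum((p[1:-1] - mids) ** 2))
```

In a full system this term would be added, with a weight, to the data and smoothness terms of the mesh-warp objective, steering the solver away from deformations that visibly bend architecture or horizons.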
A comparison of night vision simulation methods for video
Proceedings of the 11th European Conference on Visual Media Production. Pub date: 2014-11-13. DOI: 10.1145/2668904.2668945
R. Wanat, Rafał K. Mantiuk
Abstract: The properties of human vision change depending on the absolute luminance of the perceived scene. The change is most noticeable at night, when cones lose their sensitivity and rods activate. This change is imitated in video footage using various tricks and filters. In this study, we compared four algorithms that can realistically simulate the appearance of night scenes on a standard display. We conducted a subjective evaluation study comparing the results of night vision simulation with reference footage dimmed using a photographic filter, to determine which algorithm offers the greatest accuracy. The results of our study can be used in computer graphics rendering, to apply the most realistic simulation of night vision to rendered night scenes, or in photography, to reproduce photographs taken at night as closely as possible to how the human eye would see them.
Citations: 2
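The rod/cone transition the abstract describes is commonly imitated by blending the full-colour (photopic) image toward a desaturated, slightly blue-shifted rod-only image as the adaptation luminance falls through the mesopic range. The sketch below shows this generic ingredient only; the rod weights, tint, and luminance thresholds are assumptions, and it is not necessarily any of the four algorithms the paper compares.

```python
import numpy as np

def simulate_night(rgb, adaptation_lum):
    """Toy mesopic blend. As adaptation luminance falls from ~3 cd/m^2
    (cone-dominated) to ~0.01 cd/m^2 (rod-only), mix the colour image
    toward a desaturated, blue-tinted rod image. Weights and tint are
    illustrative assumptions. `rgb` is linear, shape (H, W, 3)."""
    rod = np.asarray(rgb, dtype=float) @ np.array([0.1, 0.6, 0.3])   # assumed rod response
    rod_img = rod[..., None] * np.array([0.6, 0.8, 1.0])             # assumed blue shift
    t = np.clip((np.log10(adaptation_lum) - np.log10(0.01)) /
                (np.log10(3.0) - np.log10(0.01)), 0.0, 1.0)          # 1 = cones, 0 = rods
    return t * np.asarray(rgb, dtype=float) + (1 - t) * rod_img
```

At daylight luminances the blend factor saturates at 1 and the image passes through unchanged, which is the behaviour a correct simulation must have.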
Proceedings of the 11th European Conference on Visual Media Production (front matter)
DOI: 10.1145/2668904
Citations: 0