{"title":"Rerendering landscape photographs","authors":"Pu Wang, Diana Bicazan, A. Ghosh","doi":"10.1145/2668904.2668942","DOIUrl":"https://doi.org/10.1145/2668904.2668942","url":null,"abstract":"We present a practical approach for realistic rerendering of landscape photographs. We extract a view dependent depth map from single input landscape images by examining global and local pixel color distributions and demonstrate applications of depth dependent rendering such as novel viewpoints, digital refocusing and dehazing. We also present a simple approach to relight the input landscape photograph under novel sky illumination. Here, we assume diffuse reflectance and relight landscapes by estimating the irradiance due the sky in the input photograph. Finally, we also take into account specular reflections on water surfaces which are common in landscape photography and demonstrate a semiautomatic process for relighting scenes with still water.","PeriodicalId":401915,"journal":{"name":"Proceedings of the 11th European Conference on Visual Media Production","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133289653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bullet time using multi-viewpoint robotic camera system","authors":"Kensuke Ikeya, K. Hisatomi, Miwa Katayama, T. Mishina, Y. Iwadate","doi":"10.1145/2668904.2668932","DOIUrl":"https://doi.org/10.1145/2668904.2668932","url":null,"abstract":"The main purpose of our research was to generate the bullet time of dynamically moving subjects in 3D space or multiple shots of subjects within 3D space. In addition, we wanted to create a practical and generic bullet time system that required less time for advance preparation and generated bullet time in semi-real time after subjects had been captured that enabled sports broadcasting to be replayed. We developed a multi-viewpoint robotic camera system to achieve our purpose. A cameraman controls multi-viewpoint robotic cameras to simultaneously focus on subjects in 3D space in our system, and captures multi-viewpoint videos. Bullet time is generated from these videos in semi-real time by correcting directional control errors due to operating errors by the cameraman or mechanical control errors by robotic cameras using directional control of virtual cameras based on projective transformation. The experimental results revealed our system was able to generate bullet time for a dynamically moving player in 3D space or multiple shots of players within 3D space in volleyball, gymnastics, and basketball in just about a minute. System preparation in calibrating the cameras in advance was finished in just about five minutes. Our system was utilized in the \"ISU Grand Prix of Figure Skating 2013/2014, NHK Trophy\" live sports program in November 2013. The bullet time of a dynamically moving skater on a large skating rink was generated in semi-real time using our system and broadcast in a replay just after the competition. 
Thus, we confirmed our bullet time system was more practical and generic.","PeriodicalId":401915,"journal":{"name":"Proceedings of the 11th European Conference on Visual Media Production","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114870618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Web-based visualisation of on-set point cloud data","authors":"A. Evans, J. Agenjo, J. Blat","doi":"10.1145/2668904.2668937","DOIUrl":"https://doi.org/10.1145/2668904.2668937","url":null,"abstract":"In this paper we present a system for progressive encoding, storage, transmission, and web based visualization of large point cloud datasets. Point cloud data is typically recorded on-set during a film production, and is later used to assist with various stages of the post-production process. The remote visualization of this data (on or off-set, either via desktop or mobile device) can be difficult, as the volume of data can take a long time to be transferred, and can easily overwhelm the memory of a typical 3D web or mobile client. Yet web-based visualization of this data opens up many possibilities for remote and collaborative workflow models. In order to facilitate this workflow, we present a system to progressively transfer point cloud data to a WebGL based client, updating the visualisation as more information is downloaded and maintaining a coherent structure at lower resolutions. Existing work on progressive transfer of 3D assets has focused on well-formed triangle meshes, and thus is unsuitable for use with raw LIDAR data. 
Our work addresses this challenge directly, and as such the principal contribution is that it is the first published method of progressive visualization of point cloud data via the web.","PeriodicalId":401915,"journal":{"name":"Proceedings of the 11th European Conference on Visual Media Production","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117308102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Frequency-based controls for terrain editing","authors":"Gwyneth Bradbury, I. Choi, C. Amati, Kenny Mitchell, T. Weyrich","doi":"10.1145/2668904.2668944","DOIUrl":"https://doi.org/10.1145/2668904.2668944","url":null,"abstract":"Authoring virtual terrains can be a challenging task. Procedural and stochastic methods for automated terrain generation produce plausible results but lack intuitive control of the terrain features, while data driven methods offer more creative control at the cost of a limited feature set, higher storage requirements and blending artefacts. Moreover, artists often prefer a workflow involving varied reference material such as photographs, concept art, elevation maps and satellite images, for the incorporation of which there is little support from commercial content-creation tools. We present a sketch-based toolset for asset-guided creation and intuitive editing of virtual terrains, allowing the manipulation of both elevation maps and 3D meshes, and exploiting a layer-based interface. We employ a frequency-band subdivision of elevation maps to allow using the appropriate editing tool for each level of detail. Using our system, we show that a user can start from various input types: storyboard sketches, photographs or height maps to easily develop and customise a virtual terrain.","PeriodicalId":401915,"journal":{"name":"Proceedings of the 11th European Conference on Visual Media Production","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134644880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Line-preserving hole-filling for 2D-to-3D conversion","authors":"Nils Plath, Lutz Goldmann, A. Nitsch, S. Knorr, T. Sikora","doi":"10.1145/2668904.2668931","DOIUrl":"https://doi.org/10.1145/2668904.2668931","url":null,"abstract":"Many 2D-to-3D conversion techniques rely on image-based rendering methods in order to synthesize 3D views from monoscopic images. This leads to holes in the generated views due to previously occluded objects becoming visible for which no texture information is available. Approaches attempting to alleviate the effects of these artifacts are referred to as hole-filling. This paper proposes a method which determines a non-uniform deformation of the stereoscopic view such that no holes are visible. Additionally, an energy term is devised, which prevents straight lines in the input image from being bent due to the non-uniform image warp. This is achieved by constructing a triangle mesh, which approximates the depth map of the input image and by integrating a set of detected lines into it. The line information is incorporated into the underlying optimization problem in order to prevent bending of the lines. The evaluation of the proposed algorithm on a comprehensive dataset with a variety of scenes shows that holes are efficiently filled without obvious background distortions.","PeriodicalId":401915,"journal":{"name":"Proceedings of the 11th European Conference on Visual Media Production","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126343355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A comparison of night vision simulation methods for video","authors":"R. Wanat, Rafał K. Mantiuk","doi":"10.1145/2668904.2668945","DOIUrl":"https://doi.org/10.1145/2668904.2668945","url":null,"abstract":"The properties of the human vision change depending on the absolute luminance of the perceived scene. The change is most noticeable at night, when cones lose their sensitivity and rods activate. This change is imitated in video footage using various tricks and filters. In this study, we compared 4 algorithms that can realistically simulate the appearance of night scenes on a standard display. We conducted a subjective evaluation study to compare the results of night vision simulation with a reference footage dimmed using a photographic filter to determine which algorithm offers the greatest accuracy. The results of our study can be used in computer graphics rendering to apply the most realistic simulation of night vision to the rendered night scenes or in photography to reproduce photographs taken at night as similar as possible to how the human eye would see them.","PeriodicalId":401915,"journal":{"name":"Proceedings of the 11th European Conference on Visual Media Production","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128577432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Proceedings of the 11th European Conference on Visual Media Production","authors":"","doi":"10.1145/2668904","DOIUrl":"https://doi.org/10.1145/2668904","url":null,"abstract":"","PeriodicalId":401915,"journal":{"name":"Proceedings of the 11th European Conference on Visual Media Production","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127464889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}