{"title":"Should a movie have two different soundtracks for its stereoscopic and non-stereoscopic versions? A study on the front/rear balance","authors":"Etienne Hendrickx, M. Paquier, Vincent Koehl","doi":"10.1109/IC3D.2013.6732079","DOIUrl":"https://doi.org/10.1109/IC3D.2013.6732079","url":null,"abstract":"Few psychoacoustic studies have been made on the influence of stereoscopy on the sound mixing of movies. Yet very different opinions can be found among scientific, esthetical or technical communities. Some argue that sound needs to be mixed differently for stereoscopic movies, whereas others pretend that image has actually caught up with sound, that was already “three-dimensional” and should not therefore be affected by stereoscopy. In the present experiment, expert subjects were asked to achieve surround sound ambiance mixings for eleven short sequences presented in both stereoscopic and non-stereoscopic versions. The results suggest that the influence of stereoscopy on the front/rear balance strongly depends on the content of the sequence and only appears in a few specific situations.","PeriodicalId":252498,"journal":{"name":"2013 International Conference on 3D Imaging","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124930059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A new validated method for improving the audiovisual spatial congruence in the case of stereoscopic-3D video and wave field synthesis","authors":"Cédric R. André, E. Corteel, J. Embrechts, J. Verly, B. Katz","doi":"10.1109/IC3D.2013.6732081","DOIUrl":"https://doi.org/10.1109/IC3D.2013.6732081","url":null,"abstract":"While 3D cinema is becoming increasingly established, little effort has focused on the general problem of producing a 3D sound scene spatially coherent with the visual content of a stereoscopic-3D (s-3D) movie. The perceptual relevance of such spatial audiovisual coherence is of significant interest. In this paper, we explain why the combination of accurate sound positioning and stereoscopic-3D images can lead to an incongruence between the sound and the image for multiple spectators. Then, we adapt to s-3D viewing a method originally proposed for 2D images in the literature to reduce this error. Finally, a subjective experiment is carried out to prove the efficiency of the method.","PeriodicalId":252498,"journal":{"name":"2013 International Conference on 3D Imaging","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122583285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On uniqueness in triangulation based pattern for structured light reconstruction","authors":"Rémi Slysz, L. Giraud-Moreau, H. Borouchaki","doi":"10.1109/IC3D.2013.6732093","DOIUrl":"https://doi.org/10.1109/IC3D.2013.6732093","url":null,"abstract":"In this paper, a new light pattern is proposed for three-dimensional reconstruction. This pattern has several interesting properties and is highly robust to deformation due to its topology. Moreover, the proposed pattern is based on triangulation meshes which allow us to construct arbitrarily a high number of unique key points. Furthermore, the graph permits to have information on connectivity between keys as it is a graph, and thus to work in areas where the key is not completely defined because of discontinuity in the surface of the scene.","PeriodicalId":252498,"journal":{"name":"2013 International Conference on 3D Imaging","volume":"60 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114023769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust homography for real-time image un-distortion","authors":"Jianhui Chen, Karim Benzeroual, R. Allison","doi":"10.1109/IC3D.2013.6732075","DOIUrl":"https://doi.org/10.1109/IC3D.2013.6732075","url":null,"abstract":"Stereoscopic 3D film production has increased the need for efficient and robust camera calibration and tracking. Many of these tasks involve making planar correspondence and thus accurate fast homography estimation is essential. However, homography estimation may fail with distorted images since the planar projected corners may be distorted far away from the “perfect” locations. On the other hand, precisely estimating lens distortion from a single image is still a challenge, especially in real-time applications. In this paper, we drop the assumption that the image distortion is negligible in homography estimation. We propose robust homography as a simple and efficient approach which combines homography mapping and image distortion estimation in a least square constraint. Our method can simultaneously estimate homography and image distortion from a single image in real-time. Compared with previous methods, it has two advantages: first, un-distortion can be achieved with little overhead due to the need for only a single calibration image and the real-time homography mapping of easy to track corners; second, due to the use of precise calibration targets the accuracy of our method is comparable to the multiple image calibration methods. In an experimental evaluation, we show that our method can accurately estimate image distortion parameters in both synthetic and real images. We also present its applications in close range un-distortion and robust corner detection.","PeriodicalId":252498,"journal":{"name":"2013 International Conference on 3D Imaging","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124061510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Handheld light field camera for intergral imaging with pixel mapping algorithm","authors":"Youngmo Jeong, Jonghyun Kim, J. Yeom, Keehoon Hong, Byoungho Lee","doi":"10.1109/IC3D.2013.6732094","DOIUrl":"https://doi.org/10.1109/IC3D.2013.6732094","url":null,"abstract":"We propose a handheld light field camera for integral imaging with pixel mapping algorithm to generate an elemental image. We implement the light field camera which includes image sensor, imaging lens, and micro lens array. An elemental image can be generated by pixel mapping algorithm from captured image taken by light field camera. Experimental setup and resultant images are presented to verify feasibility of the proposed method.","PeriodicalId":252498,"journal":{"name":"2013 International Conference on 3D Imaging","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130366924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assessment of plant leaf area measurement by using stereo-vision","authors":"V. Leemans, B. Dumont, M. Destain","doi":"10.1109/IC3D.2013.6732085","DOIUrl":"https://doi.org/10.1109/IC3D.2013.6732085","url":null,"abstract":"The aim of this study is to develop an alternative measurement for the leaf area index (LAI), an important agronomic parameter for plant growth assessment. A 3D stereo-vision technique was developed to measure both leaf area and corresponding ground area. The leaf area was based on pixel related measurements while the ground area was based on the mean distance from the leaves to the camera. Laboratory and field experiments were undertaken to estimate the accuracy and the precision of the technique. Result showed that, though the leaves-camera distance had to be estimated precisely in order to have accurate measurement, the precision of the LAI evaluation, after regression, was equivalent to the reference measurements, that is to say around 10% of the estimated value. This shows the potential of the 3D measurements compared with tedious reference measurements.","PeriodicalId":252498,"journal":{"name":"2013 International Conference on 3D Imaging","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133109658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Objective bitstream quality metric for 3D-HD video","authors":"Dawid Juszka, L. Janowski, Z. Papir","doi":"10.1109/IC3D.2013.6732098","DOIUrl":"https://doi.org/10.1109/IC3D.2013.6732098","url":null,"abstract":"Now growing interest in 3D video services inevitably brings a greater need to measure their quality. From a financial point of view alone, it is crucial for providers to offer users the best experience possible. Such measurements are necessary in order to optimize the performance of telecommunication networks or to plan new investment. Unfortunately, the nature of this kind of service makes it almost impossible to use existing and verified 2D quality metrics. Therefore, suitable 3D quality metrics are urgently needed. In this paper, we propose one such solution - an objective, bitstream quality metric for stereoscopic high definition video affected by compression and packet loss in a network.","PeriodicalId":252498,"journal":{"name":"2013 International Conference on 3D Imaging","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123710096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Distributed pixel mapping for refining dark area in parallax barriers based holoscopic 3D Display","authors":"M. Swash, A. Aggoun, O. Fatah, J. C. Fernandez, E. Alazawi, Emmanuel Tsekleves","doi":"10.1109/IC3D.2013.6732101","DOIUrl":"https://doi.org/10.1109/IC3D.2013.6732101","url":null,"abstract":"Autostereoscopic 3D Display is robustly developed and available in the market for both home and professional users. However 3D resolution with acceptable 3D image quality remains a great challenge. This paper proposes a novel pixel mapping method for refining dark areas between two pinholes by distributing it into 3 times smaller dark areas and creating micro-pinholes in parallax barriers based holoscopic 3D displays. The proposed method allows to project RED, GREEN, BLUE subpixels separately from 3 different pinholes and it distributes the dark spaces into 3 times smaller dark spaces, which become unnoticeable and improves quality of the constructed holoscopic 3D scene significantly. Parallax barrier technology refers to a pinhole sheet or device placed in front or back of a liquid crystal display, allowing to project viewpoint pixels into space that reconstructs a holoscopic 3D scene in space. The holoscopic technology mimics the imaging system of insects, such as the fly, utilizing a single camera, equipped with a large number of micro-lenses or pinholes, to capture a scene, offering rich parallax information and enhanced 3D feeling without the need of wearing specific eyewear.","PeriodicalId":252498,"journal":{"name":"2013 International Conference on 3D Imaging","volume":"131 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127374642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bringing 3D vision to the web: Acquiring motion parallax using commodity cameras and WebGL","authors":"Patrik Goorts, S. Maesen, Dimitri Scarlino, P. Bekaert","doi":"10.1109/IC3D.2013.6732092","DOIUrl":"https://doi.org/10.1109/IC3D.2013.6732092","url":null,"abstract":"In this paper, we present a system to acquire 3D vision by motion parallax on web-based platforms using head tracking. We employ the camshift algorithm to perform color-based head tracking. Using the position of the head, we can render a 3D scene from the viewpoint of the viewer, thus acquiring motion parallax, a strong cue for 3D vision. We employed web technologies to allow the adoption of our method to any modern device, including mobile devices. WebGL is used for rendering and head tracking, and WebRTC is used for camera input. No software installation or plugins are required. We demonstrated the effectiveness of our method on a variety of devices, such as desktop computers, laptops, and tablets.","PeriodicalId":252498,"journal":{"name":"2013 International Conference on 3D Imaging","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116641990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The relationship of pixel parallax to audience emotional response","authors":"Bruce Fitter","doi":"10.1109/IC3D.2013.6732076","DOIUrl":"https://doi.org/10.1109/IC3D.2013.6732076","url":null,"abstract":"This paper addresses questions surrounding the placement of characters and the conversion point in stereoscopic film scenes based on proxemics as described by E.T. Hall. “Man's evolution has been marked by the development of the \"distance receptors\"-sight and hearing. Thus he has been able to develop the arts which employ these two senses to the virtual exclusion of all the others.” [1] “The portrait, he says, is distinguished from any other sort of painting by psychological nearness, which \"depends directly on the actual physical interval- the distance in feet and inches between the model and the painter.\"”[1] quoting from [2] There is a distinct relationship between the distance between people and their emotional response to each other, a relationship that is also felt in art. As opposed to two dimensional films, stereoscopic films work in all three directions and as such a study into proxemics reveals the emotional consequences of character placement decisions. By being informed through proxemics it is expected that stereographers and directors will be guided to creating stereoscopic productions that best exploit these innate aspects of our humanness.","PeriodicalId":252498,"journal":{"name":"2013 International Conference on 3D Imaging","volume":"119 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125467078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}