{"title":"Off-axis complex hologram encoding method for holographic display with amplitude-only modulation","authors":"Chi-Young Hwang, Beom-Ryeol Lee, Joonku Hahn","doi":"10.1109/IC3D.2013.6732096","DOIUrl":"https://doi.org/10.1109/IC3D.2013.6732096","url":null,"abstract":"We investigate an off-axis complex hologram encoding method for amplitude-only modulation on the basis of the principle of off-axis holography. The encoding method is analytically derived and formulated in the Fresnel approximation. The hologram is reconstructed by using a holographic display with a narrow viewing window, and we experimentally verify the encoding method.","PeriodicalId":252498,"journal":{"name":"2013 International Conference on 3D Imaging","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126855806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Table-top display using integral floating display","authors":"Jong-Young Hong, J. Yeom, Youngmo Jeong, Jonghyun Kim, Soon-gi Park, Keehoon Hong, Byoungho Lee","doi":"10.1109/IC3D.2013.6732077","DOIUrl":"https://doi.org/10.1109/IC3D.2013.6732077","url":null,"abstract":"In this paper, we present a table-top 3D display using integral floating technology to make a static-type display system. Because the proposed system does not use rotating devices, it has several advantages over conventional integral floating displays, such as stability and its static nature. We analyze the viewing characteristics and pickup process of the proposed system by ray optics and make an experimental setup to verify our idea.","PeriodicalId":252498,"journal":{"name":"2013 International Conference on 3D Imaging","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116615371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D visualization and mapping of choroid thickness based on optical coherence tomography: A step-by-step geometric approach","authors":"K. Vupparaboina, T. R. Chandra, S. Jana, A. Richhariya, J. Chhablani","doi":"10.1109/IC3D.2013.6732082","DOIUrl":"https://doi.org/10.1109/IC3D.2013.6732082","url":null,"abstract":"Although bodily organs are inherently 3D, medical diagnosis often relies on their 2D representation. For instance, sectional images of the eye (especially of its posterior part) based on optical coherence tomography (OCT) provide internal views, from which the ophthalmologist makes medical decisions about 3D eye structures. In the course, the physician is forced to mentally synthesize the underlying 3D context, which can be both time consuming and stressful. Against this backdrop, can such 2D sections be arranged and presented in the natural 3D form for faster and stress-free diagnosis? In this paper, we consider ailments affecting choroid thickness and address the aforementioned question at two levels: in terms of 3D visualization and 3D mapping. In particular, we exploit the spherical geometry of the eye, align OCT sections on a nominal sphere, and extract the choroid by peeling off inner and outer layers. At each step, we render our intermediate results on a 3D lightfield display, which provides a natural visual representation. Finally, the thickness variation of the extracted choroid is spatially mapped and observed on a lightfield display as well as using 3D visualization software on a regular 2D terminal. Consequently, we identified choroid depletion around the optic disc based on the test OCT images. We believe that the proposed technique would provide ophthalmologists with a tool for making faster diagnostic decisions with less stress.","PeriodicalId":252498,"journal":{"name":"2013 International Conference on 3D Imaging","volume":"125 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124605162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Complete processing chain for 3D video generation using Kinect sensor","authors":"Michal Joachimiak, M. Hannuksela, M. Gabbouj","doi":"10.1109/IC3D.2013.6732088","DOIUrl":"https://doi.org/10.1109/IC3D.2013.6732088","url":null,"abstract":"The multiview-video-plus-depth (MVD) format selected for 3D video standardization describes a 3D scene by video and associated depth, and it enables the generation of virtual views through the depth-image-based rendering (DIBR) process. The Kinect™ sensor is equipped with a red-green-blue (RGB) camera and a depth sensor, making it suitable for capturing synchronous video and depth streams. However, these sensors do not produce data with the pixel-wise correspondence that is required by the DIBR process.","PeriodicalId":252498,"journal":{"name":"2013 International Conference on 3D Imaging","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134467418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Object motion description in stereoscopic videos","authors":"Theodoris Theodoridis, K. Papachristou, N. Nikolaidis, I. Pitas","doi":"10.1109/IC3D.2013.6732097","DOIUrl":"https://doi.org/10.1109/IC3D.2013.6732097","url":null,"abstract":"The efficient search and retrieval of the increasing volume of stereoscopic videos drives the need for the semantic description of their content. The derivation of disparity (depth) information from stereoscopic content allows the extraction of semantic information that is inherent to 3D. The purpose of this paper is to propose algorithms for semantically characterizing the motion of an object or groups of objects along any of the X, Y, Z axes. Experimental results are also provided.","PeriodicalId":252498,"journal":{"name":"2013 International Conference on 3D Imaging","volume":"122 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114092101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance optimizations for PatchMatch-based pixel-level multiview inpainting","authors":"Shao-Ping Lu, B. Ceulemans, A. Munteanu, P. Schelkens","doi":"10.1109/IC3D.2013.6732089","DOIUrl":"https://doi.org/10.1109/IC3D.2013.6732089","url":null,"abstract":"As 3D content is becoming ubiquitous in today's media landscape, there is a rising interest in 3D displays that do not demand wearing special headgear to experience the 3D effect. Autostereoscopic displays realize this by providing multiple different views of the same scene. It is, however, infeasible to record, store, or transmit the amount of data that such displays require. Therefore, there is a strong need for real-time solutions that can generate multiple extra viewpoints from a limited set of originally recorded views. The main difficulty in current solutions is that the synthesized views contain disocclusion holes where the pixel values are unknown. In order to seamlessly fill in these holes, inpainting techniques are used. In this work, we consider a depth-based pixel-level inpainting system for multiview video. The employed technique operates in a multi-scale fashion, fills in the disocclusion holes on a pixel-per-pixel basis, and computes approximate Nearest Neighbor Fields (NNF) to identify pixel correspondences. To this end, we employ a multi-scale variation on the well-known PatchMatch algorithm, followed by a refinement step to escape from local minima in the matching-cost function. In this paper, we analyze the performance of different cost functions and search methods within our existing inpainting framework.","PeriodicalId":252498,"journal":{"name":"2013 International Conference on 3D Imaging","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116255015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An investigation of the effectiveness and persuasiveness of stereoscopic 3D advertising","authors":"Stefan Rudolf Sonntag, N. Xing","doi":"10.1109/IC3D.2013.6732080","DOIUrl":"https://doi.org/10.1109/IC3D.2013.6732080","url":null,"abstract":"The rising popularity of stereoscopic 3D (three-dimensional) moving images in cinemas and on TV has created a new opportunity for advertisers: 3D ads. Advertising research has yet to fully determine whether 3D advertising effectively delivers commercial messages. To understand the cognitive and behavioural effects of 3D advertising, this study focuses on the use and effectiveness of 3D images in television commercials. Previous advertising research has examined the market, consumers, advertising media, and the effectiveness of advertisements. This study investigates people's perceptions of 3D images in advertising. Five variables are tested in this study: memory, aesthetics, brand awareness, persuasiveness, and immersion.","PeriodicalId":252498,"journal":{"name":"2013 International Conference on 3D Imaging","volume":"132 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116942574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The range of fusible horizontal disparities around the empirical horopters","authors":"P. M. Grove, Ashleigh L. Harrold","doi":"10.1109/IC3D.2013.6732084","DOIUrl":"https://doi.org/10.1109/IC3D.2013.6732084","url":null,"abstract":"The range of horizontal disparities for which single vision is experienced, referred to as Panum's fusional range, increases in the left and right periphery and is approximately symmetrical around the empirical horizontal horopter. No corresponding data have been published specifying the range and locus of symmetry for the range of fusible horizontal disparities at locations above and below the horizontal plane of regard. We mapped the empirical horizontal and vertical horopters using a minimum motion paradigm and then measured the fusional volume in the horizontal plane of regard and at locations in the median plane above and below the fixation point. We show that the fusional range of horizontal disparities increases to the left and right, and above and below fixation, and is symmetrical about the empirical horizontal and vertical horopters.","PeriodicalId":252498,"journal":{"name":"2013 International Conference on 3D Imaging","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121299751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}