{"title":"A prediction model on 3D model compression and its printed quality based on subjective study","authors":"T. Yamasaki, Y. Nakano, K. Aizawa","doi":"10.1145/2787626.2792639","DOIUrl":"https://doi.org/10.1145/2787626.2792639","url":null,"abstract":"3D printing is becoming a more common technology and has a growing number of applications. Although 3D compression algorithms have been studied in the computer graphics (CG) community for decades, the quality of compressed 3D models is discussed only in the CG space. In this paper, we discuss the relationship between the PSNR of compressed 3D models and human perception of the printed objects. We conducted a subjective evaluation with 13 participants and found a clear linear relationship between the two. Such a quality perception model is useful for estimating the printing quality of compressed 3D models and for choosing reasonable compression parameters.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132284003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Continuous and automatic registration of live RGBD video streams with partial overlapping views","authors":"Afsaneh Rafighi, S. Seifi, Oscar E. Meruvia Pastor","doi":"10.1145/2787626.2792640","DOIUrl":"https://doi.org/10.1145/2787626.2792640","url":null,"abstract":"This paper presents a novel method for automatic registration of video streams originating from two depth-sensing cameras. The system consists of a sender and a receiver: the sender obtains the streams from two RGBD sensors placed arbitrarily around a room and produces a unified scene as a registered point cloud. The conventional way to support a multi-depth-sensor system is through calibration. However, calibration methods are time consuming and require the use of external markers prior to streaming, and if the cameras are moved, calibration has to be repeated. The motivation of this work is to facilitate the use of RGBD sensors for non-expert users, so that cameras need not be calibrated, and if cameras are moved, the system automatically recovers the alignment of the video streams. DeReEs [Seifi et al. 2014], a new registration algorithm, is used, since it is fast and successful in registering scenes with small overlapping sections.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113988297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mid-air plus: a 2.5 D cross-sectional mid-air display with transparency control","authors":"Hiroki Yamamoto, H. Kajita, Hanyuool Kim, Naoya Koizumi, T. Naemura","doi":"10.1145/2787626.2792605","DOIUrl":"https://doi.org/10.1145/2787626.2792605","url":null,"abstract":"In the design process and in medical visualization, e.g. CT/MRI cross-sectional images, exterior and interior images can help users understand the overall shape of volumetric objects. For this purpose, displays need to provide both vertical and horizontal images at the same time. To display cross-sectional images, an LCD display [Cassinelli et al. 2009] and image projection [Nagakura et al. 2006] have been proposed. Although these displays can show internal images of volumetric objects, seamless crossing of internal and external images cannot be realized, since the images are confined to physical displays.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122080884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"atmoRefractor: spatial display by controlling heat haze","authors":"Toru Kawanabe, Tomoko Hashida","doi":"10.1145/2787626.2792611","DOIUrl":"https://doi.org/10.1145/2787626.2792611","url":null,"abstract":"In recent years, there has been rapid development of techniques for superimposing virtual information on real-world scenes and changing the appearance of actual scenes in arbitrary ways. We are particularly interested in means of arbitrarily changing the appearance of real-world scenes without the use of physical interfaces such as glasses or other devices worn by the user. In this paper, we refer to such means as spatial displays. Typical examples of spatial displays include a system that can change the transparency or physical properties of buildings [Rekimoto, 2012] and a system that projects video images [Raskar, 2001]. However, those systems have restrictions such as requiring some kind of physical interface between the user and the scene or not being usable in a well-lit environment. Taking a different approach, we turned our attention to a natural phenomenon referred to as heat haze, in which the appearance of objects is altered by changes in the refractive index of air caused by differences in temperature distribution. We propose the atmoRefractor, a system that can generate and control heat haze on a small scale without an additional physical interface such as lenses. This locally controllable heat haze effect can be used to direct attention by changing the appearance of certain parts of a scene.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125746820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Image based relighting using room lighting basis","authors":"Antoine Toisoul, A. Ghosh","doi":"10.1145/2787626.2792618","DOIUrl":"https://doi.org/10.1145/2787626.2792618","url":null,"abstract":"We present a novel approach for image based relighting using the lighting controls available in a regular room. We employ individual light sources available in the room such as windows and house lights as basis lighting conditions. We further optimize the projection of a desired lighting environment into the sparse room lighting basis in order to closely approximate the target lighting environment with the given lighting basis. We achieve plausible relit results that compare favourably with ground truth relighting with dense sampling of the reflectance field.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122778705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Extraction of a smooth surface from voxels preserving sharp creases","authors":"Kazutaka Nakashima, T. Igarashi","doi":"10.1145/2787626.2792635","DOIUrl":"https://doi.org/10.1145/2787626.2792635","url":null,"abstract":"Construction of a free-form 3D surface model is still difficult. However, from our point of view, construction of a simple voxel model is relatively easy because it can be built with blocks; even small children can build a voxel model. We present a method to convert a voxel model into a free-form surface model in order to facilitate the construction of surface models.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130842079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Virtual headcam: pan/tilt mirror-based facial performance tracking","authors":"Xueming Yu, Shanhe Wang, Jay Busch, Thai-Binh Phan, Tracy McSheery, M. Bolas, P. Debevec","doi":"10.1145/2787626.2792625","DOIUrl":"https://doi.org/10.1145/2787626.2792625","url":null,"abstract":"High-end facial performance capture solutions typically use head-mounted camera systems which provide one or more close-up video streams of each actor's performance. These provide clear views of each actor's performance, but can be bulky and uncomfortable, get in the way of sight lines, and prevent actors from getting close to each other. To address this, we propose a virtual head-mounted camera system: an array of cameras placed around the performance capture volume which automatically track zoomed-in, sharply focused, high-resolution views of each actor's face from a multitude of directions. The resulting imagery can be used in conjunction with body motion capture data to derive nuanced facial performances without head-mounted cameras.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114410602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"WAOH: virtual automotive HMI evaluation tool","authors":"Seunghyun Woo, Daeyun An, Jong-Gab Oh, G. Hong","doi":"10.1145/2787626.2792602","DOIUrl":"https://doi.org/10.1145/2787626.2792602","url":null,"abstract":"Due to the various features now available in the vehicle, such as multimedia, the dashboard has become rather complicated. Therefore, an increased need for HMI (Human Machine Interface) research has arisen in the design creation process. However, design changes still occur even after a design is selected, because the initial evaluation is too simple to cover all of the requirements: designers do not carefully consider HMI during the sketching phase, and design issues are discovered too far along in the process. This study proposes a projection-based HMI simulation tool for pre-evaluating an HMI, through virtual function implementation, before specifications are selected. The system evaluates each function of the center fascia against quantitative criteria such as performance time and distraction time. The objective of the system is thus to quickly analyze and validate designs through virtual means and to find interface issues with a quantitative method.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124199722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Wobble strings: spatially divided stroboscopic effect for augmenting wobbly motion of stringed instruments","authors":"S. Fukushima, T. Naemura","doi":"10.1145/2787626.2792603","DOIUrl":"https://doi.org/10.1145/2787626.2792603","url":null,"abstract":"When we capture strings being played with a CMOS camera, the strings seem to vibrate in a wobbly, slow-motion pattern. Because a CMOS sensor scans each line of video in sequence, fast-moving objects are distorted during the scanning sequence. This morphing and distortion is called the rolling shutter effect, which is considered an artistic photographic technique like strip photography and slit-scan photography. However, the effect can only be seen on a camera viewfinder or a PC screen; the guitar player and audience are quite unlikely to notice it with the naked eye.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121900219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Feature extraction on digital snow microstructures","authors":"Jérémy Levallois, D. Coeurjolly, J. Lachaud","doi":"10.1145/2787626.2792637","DOIUrl":"https://doi.org/10.1145/2787626.2792637","url":null,"abstract":"During a snowfall, snow crystals accumulate on the ground and gradually form a complex porous medium constituted of air, water vapour, ice and sometimes liquid water. This ground-lying snow transforms over time, depending on the physical parameters of the environment. The main purpose of the digitalSnow project is to provide efficient computational tools to study the metamorphism of real snow microstructures from 3D images acquired using X-ray tomography. We design 3D image-based numerical models that can simulate the shape evolution of the snow microstructure during its metamorphism. As a key measurement, the (mean) curvature of the snow microstructure boundary plays a crucial role in metamorphism equations (mostly driven by mean curvature flow). In our previous work, we proposed robust 2D curvature and 3D mean and principal curvature estimators using integral invariants; in short, curvature quantities are estimated using a spherical convolution kernel of given radius R applied to point surfaces [Coeurjolly et al. 2014]. The specific aspect of these estimators is that they are defined on (isothetic) digital surfaces (boundaries of shapes in Z3). Tailored to this digital model, these estimators allow us to mathematically prove their multigrid convergence, i.e. for a class of mathematical shapes (e.g. with C3-boundary and bounded positive curvature), the estimated quantity converges to the underlying Euclidean one when shapes are digitized on grids with gridstep tending to zero. In this work, we propose to use the radius R of our curvature estimators as a scale-space parameter to extract features on digital shapes. Many feature estimators exist in the literature, either on point clouds or meshes (\"ridge-valley\", thresholds on principal curvatures, spectral analysis from Laplacian matrix eigenvalues, . . . ). In the context of objects in Z3, and using our robust curvature estimator, we define a new feature extraction approach for which theoretical results can be proven in the multigrid framework.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122861294","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}