{"title":"Real-time 3D rendering using depth-based geometry reconstruction and view-dependent texture mapping","authors":"Chih-Fan Chen, M. Bolas, Evan A. Suma","doi":"10.1145/2945078.2945162","DOIUrl":"https://doi.org/10.1145/2945078.2945162","url":null,"abstract":"With the recent proliferation of high-fidelity head-mounted displays (HMDs), there is increasing demand for realistic 3D content that can be integrated into virtual reality environments. However, creating photorealistic models is not only difficult but also time consuming. A simpler alternative involves scanning objects in the real world and rendering their digitized counterpart in the virtual world. Capturing objects can be achieved by performing a 3D scan using widely available consumer-grade RGB-D cameras. This process involves reconstructing the geometric model from depth images generated using a structured light or time-of-flight sensor. The colormap is determined by fusing data from multiple color images captured during the scan. Existing methods compute the color of each vertex by averaging the colors from all these images. Blending colors in this manner creates low-fidelity models that appear blurry. (Figure 1 right). Furthermore, this approach also yields textures with fixed lighting that is baked on the model. This limitation becomes more apparent when viewed in head-tracked virtual reality, as the illumination (e.g. specular reflections) does not change appropriately based on the user's viewpoint.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122010113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PlenoGap: panorama light field viewing for HMD with focusing on gazing point","authors":"Fuko Takano, T. Koike","doi":"10.1145/2945078.2945079","DOIUrl":"https://doi.org/10.1145/2945078.2945079","url":null,"abstract":"We propose a walk through imaging method for head-mounted display (HMD) named 'PlenoGap'. The method always displays a refocused image on a HMD. The refocused image is generated from a trimmed panorama light field image which is 360° cylindrical and always focused on center of HMD. In addition, we realized walkthrough experience by making some intermediate images between three panorama light field images. User can roam around small area using controller.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132625540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Haptic wheelchair","authors":"Mike Lambeta, Matt Dridger, Paul J. White, J. Janssen, A. Byagowi","doi":"10.1145/2945078.2945168","DOIUrl":"https://doi.org/10.1145/2945078.2945168","url":null,"abstract":"Virtual reality aims to provide an immersive experience to a user, with the help of a virtual environment. This immersive experience requires two key components; one for capturing inputs from the real world, and the other for synthesizing real world outputs based on interactions with the virtual environment. However, a user in a real world environment experiences a greater set of feedback from real world inputs which relate directly to auditory, visual, and force feedback. As such, in a virtual environment, a dissociation is introduced between the user's inputs and the feedback from the virtual environment. This dissociation relates to the discomfort the user experiences with real world interaction. Our team has introduced a novel way of receiving synthesized feedback from the virtual environment through the use of a haptic wheelchair.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"171 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133073592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic dance generation system considering sign language information","authors":"Wakana Asahina, Naoya Iwamoto, Hubert P. H. Shum, S. Morishima","doi":"10.1145/2945078.2945101","DOIUrl":"https://doi.org/10.1145/2945078.2945101","url":null,"abstract":"In recent years, thanks to the development of 3DCG animation editing tools (e.g. MikuMikuDance), a lot of 3D character dance animation movies are created by amateur users. However it is very difficult to create choreography from scratch without any technical knowledge. Shiratori et al. [2006] produced the dance automatic generation system considering rhythm and intensity of dance motions. However each segment is selected randomly from database, so the generated dance motion has no linguistic or emotional meanings. Takano et al. [2010] produced a human motion generation system considering motion labels. However they use simple motion labels like \"running\" or \"jump\", so they cannot generate motions that express emotions. In reality, professional dancers make choreography based on music features or lyrics in music, and express emotion or how they feel in music. In our work, we aim at generating more emotional dance motion easily. Therefore, we use linguistic information in lyrics, and generate dance motion.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134630205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Motion compensated automatic image compositing for GoPro videos","authors":"Ryan Lustig, Balu Adsumilli, David Newman","doi":"10.1145/2945078.2945090","DOIUrl":"https://doi.org/10.1145/2945078.2945090","url":null,"abstract":"Image composition for GoPro videos captured in the presence of significant camera motion is a manual and time consuming process. Existing techniques typically fail to automate this process due to the wide-capture field of view and high camera motion of such videos. The proposed method seeks to solve these problems by developing an image registration algorithm for fisheye images without expensive pixel warping or loss of field of view. Background subtraction is performed to extract moving foreground objects, which are noise corrected and then layered on a reference image to build the final composite. The results show marked improvements in accuracy and efficiency for automating image composition.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"121 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121460091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimized mobile rendering techniques based on local cubemaps","authors":"Roberto Lopez Mendez, Sylwester Bala","doi":"10.1145/2945078.2945113","DOIUrl":"https://doi.org/10.1145/2945078.2945113","url":null,"abstract":"Local cubemaps (LC) were introduced for the first time more than ten years ago for rendering reflections [Bjorke 2004]. Nevertheless it is only in recent years that major game engines have incorporated this technique. In this paper we introduce a generalized concept of LC and present two new LC applications for rendering shadows and refractions. We show that limitations associated with the static nature of LC can be overcome by combining this technique with other well-known runtime techniques for reflections and shadows. Rendering techniques based on LC allow high quality shadows, reflections and refractions to be rendered very efficiently which makes them ideally suited to mobile devices where runtime resources must be carefully balanced [Ice Cave Demo 2015].","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115798651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Parallel 3D printing based on skeletal remeshing","authors":"Kuo-Wei Chen, Chih-Yuan Yao, You-En Lin, Yu-Chi Lai","doi":"10.1145/2945078.2945126","DOIUrl":"https://doi.org/10.1145/2945078.2945126","url":null,"abstract":"Although 3D printing is becoming more popular, but there are two major problem. The first is the slowness of the process because of requirement of processing information of an extra axis comparing to tradition 2D printers. The second is the printable dimension of 3D printers. Generally, the larger the model is printed, the larger a 3D printer has to be and the more expensive it is. Furthermore, it would also require a large amount of extra inflation materials. With the entrance of cheap 3D printers, such as OLO 3D printers [Inc. 2016], parallel printing with multiple cheap printers can possibly be the solution. In order to parallel print a 3D model, we must decompose a 3D model into smaller components. After printing out all the components, we assemble them together by attaching them to the skeleton through supporters and joints to form the final result. As shown in our results, our designed shell-and-bone-based model printing can not only save the printing time but also use lesser material than the original whole model printing.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123578323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Coded skeleton: programmable bodies for shape changing user interfaces","authors":"Miyu Iwafune, T. Ohshima, Yoichi Ochiai","doi":"10.1145/2945078.2945096","DOIUrl":"https://doi.org/10.1145/2945078.2945096","url":null,"abstract":"We propose novel design method to fabricate user interfaces with mechanical metamaterial called Coded Skeleton. The Coded Skeleton is combination of shape memory alloys (SMA) and 3-D printed bodies, and it has computationally designed structure that is flexible in one deformation mode but is stiff in the other modes. This property helps to realize materials that automatically deform by a small and lightweight actuator such as SMA. Also it enables to sense user inputs with the resistance value of SMA. In this paper, we propose shape-changing user interfaces by integrating sensors and actuators as Coded Skeleton. The deformation and stiffness of this structure is computationally designed and also controllable. Further, we propose interactions and applications with user interfaces fabricated using our design method.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124973613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A tabletop stereoscopic 3DCG system with motion parallax for two users","authors":"S. Mizuno","doi":"10.1145/2945078.2945152","DOIUrl":"https://doi.org/10.1145/2945078.2945152","url":null,"abstract":"In this paper, I improve a tabletop stereoscopic 3DCG system with motion parallax so as to use it with two users and share a stereoscopic 3DCG scene together. I develop a method to calculate two users' viewpoints simultaneously by using depth images. I use a 3D-enabled projector to superimpose two 3DCG images for each user, and use active shutter glasses to separate them into individual images for each user. The improved system would be useable for cooperative works and match type games.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126795297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Light field completion using focal stack propagation","authors":"Terence Broad, M. Grierson","doi":"10.1145/2945078.2945132","DOIUrl":"https://doi.org/10.1145/2945078.2945132","url":null,"abstract":"Both light field photography and focal stack photography are rapidly becoming more accessible with Lytro's commercial light field cameras and the ever increasing processing power of mobile devices. Light field photography offers the ability of post capturing perspective changes and digital refocusing, but little is available in the way of post-production editing of light field images. We present a first approach for interactive content aware completion of light fields and focal stacks, allowing for the removal of foreground or background elements from a scene.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125967646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}