{"title":"Augmented dynamic shape for live high quality rendering","authors":"Tony Tung","doi":"10.1145/2787626.2787643","DOIUrl":"https://doi.org/10.1145/2787626.2787643","url":null,"abstract":"Consumer RGBD sensors are becoming ubiquitous and can be found in many devices such as laptops (e.g., Intel's RealSense) or tablets (e.g., Google Tango, Structure, etc.). They have become popular in graphics, vision, and HCI communities as they enable numerous applications such as 3D capture, gesture recognition, virtual fitting, etc. Nowadays, common sensors can deliver a stream of color images and depth maps in VGA resolution at 30 fps. While the color image is usually of sufficient quality for visualization, depth information (represented as a point cloud) is usually too sparse and noisy for readable rendering.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121125591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fracture in augmented reality","authors":"Nazim Haouchine, A. Bilger, Jérémie Dequidt, S. Cotin","doi":"10.1145/2787626.2792636","DOIUrl":"https://doi.org/10.1145/2787626.2792636","url":null,"abstract":"The considerable advances in Computer Vision for hand and finger tracking made it possible to have several sorts of interactions in Augmented Reality systems (AR), such as object grasping, object translation or surface deformation [Chun and Höllerer 2013]. However, no method has yet considered interaction than involves topological changes of the augmented model (like mesh cutting).","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115017306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Component segmentation of sketches used in 3D model retrieval","authors":"Yang Kang, Chi Xu, Shujin Lin, Songhua Xu, Xiaonan Luo, Qiang Chen","doi":"10.1145/2787626.2792655","DOIUrl":"https://doi.org/10.1145/2787626.2792655","url":null,"abstract":"Sketching is a natural human practice. With the popularity of multi-touch tablets and styluses, sketching has become a more popular means of human-computer interaction. However, accurately recognizing sketches is rather challenging, especially when they are drawn by non-professionals. Therefore, automatic sketch understanding has attracted much research attention. To tackle the problem, we propose to segment sketch drawings before analyzing the semantic meanings of sketches for the purpose of developing a sketch-based 3D model retrieval system.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116542473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Z-drawing: a flying agent system for computer-assisted drawing","authors":"Sang-won Leigh, Harshit Agrawal, P. Maes","doi":"10.1145/2787626.2787652","DOIUrl":"https://doi.org/10.1145/2787626.2787652","url":null,"abstract":"We present a drone-based drawing system where a user's sketch on a desk is transformed across scale and time, and transferred onto a larger canvas at a distance in real-time. Various spatio-temporal transformations like scaling, mirroring, time stretching, recording and playing back over time, and simultaneously drawing at multiple locations allow for creating various artistic effects. The unrestricted motion of the drone promises scalability and a huge potential as an artistic medium.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122355716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mobile haptic system design to evoke relaxation through paced breathing","authors":"A. Bumatay, J. Seo","doi":"10.1145/2787626.2792627","DOIUrl":"https://doi.org/10.1145/2787626.2792627","url":null,"abstract":"Stress is physical response that affects everyone in varying degrees. Throughout history, people have developed various practices to help cope with stress. Many of these practices focus on bringing awareness to the body and breath. Studies have shown that mindfulness meditation and paced breathing are effective tools for stress management [Brown, 2005].","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114283049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic realistic lip animation using a limited number of control points","authors":"Slim Ouni, Guillaume Gris","doi":"10.1145/2787626.2787628","DOIUrl":"https://doi.org/10.1145/2787626.2787628","url":null,"abstract":"One main concern of audiovisual speech research is the intelligibility of audiovisual speech (i.e., talking head). In fact, lip reading is crucial for challenged population as hard of hearing people. For audiovisual synthesis and animation, this suggests that one should pay careful attention to modeling the region of the face that participates actively during speech. Above all, a facial animation system needs extremely good representations of lip motion and deformation in order to achieve realism and effective communication.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129772402","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time rendering of atmospheric glories","authors":"Ari Rapkin Blenkhorn","doi":"10.1145/2787626.2787632","DOIUrl":"https://doi.org/10.1145/2787626.2787632","url":null,"abstract":"The glory is a colorful atmospheric phenomenon which resembles a small circular rainbow on the front surface of a cloudbank. It is most frequently seen from aircraft when the observer is directly between the sun and the clouds. Glories are also sometimes seen by skydivers looking down through thin cloud layers. They are always centered around the shadow of the observer's head (or camera).","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128502957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Synthesizing close combat using sequential Monte Carlo","authors":"I. Chiang, Po-Han Lin, Yuan-Hung Chang, M. Ouhyoung","doi":"10.1145/2787626.2787638","DOIUrl":"https://doi.org/10.1145/2787626.2787638","url":null,"abstract":"Synthesizing competitive interactions between two avatars in a physics-based simulation remains challenging. Most previous works rely on reusing motion capture data. They also need an offline preprocessing step to either build motion graphs or perform motion analysis. On the other hand, an online motion synthesis algorithm [Hämäläinen et al. 2014] can produce physically plausible motions including balance recovery and dodge projectiles without prior data. They use a kd-tree sequential Monte Carlo sampler to optimize the joint angle trajectories. We extend their approach and propose a new objective function to create two-character animations in a close-range combat. The principles of attack and defense are designed according to fundamental theory of Chinese martial arts. Instead of following a series of fixed Kung Fu forms, our method gives 3D avatars the freedom to explore diverse movements and through pruning can finally evolve an optimal way for fighting.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129486623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Twech: a mobile platform to search and share visuo-tactile experiences","authors":"Nobuhisa Hanamitsu, Kanata Nakamura, M. Y. Saraiji, K. Minamizawa, S. Tachi","doi":"10.1145/2787626.2792628","DOIUrl":"https://doi.org/10.1145/2787626.2792628","url":null,"abstract":"Twech is a mobile platform that enables users to share visuo-tactile experience and search other experiences for tactile data. User can record and share visuo-tactile experiences by using a visuo-tactile recording and displaying attachment for smartphone, allows the user to instantly such as tweet, and re-experience shared data such as visuo-motor coupling. Further, Twech's search engine finds similar other experiences, which were scratched material surfaces, communicated with animals or other experiences, for uploaded tactile data by using search engine is based on deep learning that ware expanded for recognizing tactile materials. Twech provides a sharing and finding haptic experiences and users re-experience uploaded visual-tactile data from cloud server.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133013573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reducing geometry-processing overhead for novel viewpoint creation","authors":"Francisco Inácio, J. P. Springer","doi":"10.1145/2787626.2792599","DOIUrl":"https://doi.org/10.1145/2787626.2792599","url":null,"abstract":"Maintaining a high steady frame rate is an important aspect in interactive real-time graphics. It is mainly influenced by the number of objects and the number of lights to be processed for a 3d scene. The upper-bound effort for rendering a scene is then defined by the number of objects times the number of lights, i. e. O(NO · NL). Deferred shading reduces this upper bound to the number of objects plus the number of lights, i. e. O(NO + NL), by separating the rendering process into two phases: geometry processing and lighting evaluation. The geometry processing rasterizes all objects but only retains visible fragments in a G-Buffer for the current viewpoint. The lighting evaluation then only needs to process those surviving fragments to compute the final image (for the current viewpoint). Unfortunately, this approach not only trades computational effort for memory but also requires the re-creation of the G-Buffer every time the viewpoint changes. Additionally, transparent objects cannot be encoded into a G-Buffer and must be separately processed. Post-rendering 3d warping [Mark et al. 1997] is one particular technique that allows to create images from G-Buffer information for new viewpoints. However, this only works with sufficient fragment information. Objects not encoded in the G-Buffer, because they were not visible from the original viewpoint, will create visual artifacts at discontinuities between objects. We propose fragment-history volumes (FHV) to create novel viewpoints from a discrete representation of the entire scene using current graphics hardware and present an initial performance comparison.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133117488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}