{"title":"Tracking water droplets under descent and deformation","authors":"Caleb Brose, M. Thuo, J. Sheaffer","doi":"10.1145/2787626.2787651","DOIUrl":"https://doi.org/10.1145/2787626.2787651","url":null,"abstract":"We present a system for tracking the movement and deformation of drops of water in free fall and collision. Our data comes from a high-speed camera which records 60,000 frames per second. The data is noisy, and is compromised by an unfortunate camera angle and poor lighting which contribute to caustics, reflections, and shadows in the image. Given an input video, we apply techniques from image processing, computer vision and computational geometry to track the the droplet's position and shape. While our tool could monitor the movement of transparent fluids in a more general environment, our data specifically depicts water colliding with hydrophobic materials. The output of our processing is used by materials scientists to better our understanding of the interactions between water and hydrophobic surfaces. These interactions have direct application in the materials engineering of next generation printing technologies.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123264729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Retargeting 3D objects and scenes","authors":"Chun-Kai Huang, Yi-Ling Chen, I-Chao Shen, Bing-Yu Chen","doi":"10.1145/2787626.2787655","DOIUrl":"https://doi.org/10.1145/2787626.2787655","url":null,"abstract":"We introduce an interactive method suitable for retargeting both 3D objects and scenes under a general framework. Initially, an input object or scene is decomposed into a collection of constituent components embraced by corresponding control bounding volumes which capture the intra-structures of the object or the semantic groupings of the objects in the scene. The overall retargeting is accomplished through a constrained optimization by manipulating the control bounding volumes. Without inferring the intricate dependencies between the components, we define a minimal set of constraints that maintain the spatial arrangement and connectivity between the components to regularize valid retargeting results. The default retargeting behavior can then be easily altered by additional semantic constraints imposed by users.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125558737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Decomposition of 32 bpp into 16 bpp textures with alpha","authors":"Nobuki Yoda, T. Igarashi","doi":"10.1145/2787626.2792610","DOIUrl":"https://doi.org/10.1145/2787626.2792610","url":null,"abstract":"In 2D game graphics, textures are packed into a single texture called a sprite sheet in order to achieve efficient rendering. The sprite sheet can be compressed to save memory by using various compression methods such as block-based compressions and 16 bpp (bits per pixel) tone reduction. These methods are not without some problems, though. Block-based compressions are GPU-dependent, and high-quality compressions such as ASTC [Nystad et al. 2012] are often unavailable on mobile devices. 16 bpp tone reduction--often used with dithering--can create undesirable noise when it is scaled up (Figure 1c).","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115970100","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"VISTouch","authors":"Masasuke Yasumoto, Takehiro Teraoka","doi":"10.1145/2787626.2787636","DOIUrl":"https://doi.org/10.1145/2787626.2787636","url":null,"abstract":"Various studies have been done on the combined use of mobile devices. Ohta's Pinch [Ohta and Tanaka 2012] and Leigh's THAW [Leigh et al. 2014] are representative studies. However, they have certain limitations; Pinch cannot dynamically correspond to the positional relations of the devices, and THAW cannot recognize the devices' spatial positional relations. We constructed VISTouch so that it does not require a particular kind of external sensor, and it enables multiple mobile devices to dynamically obtain other devices' relative positions in real time. We summarize VISTouch in this paper.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132099788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Texture preserving garment transfer","authors":"Fumiya Narita, Shunsuke Saito, Takuya Kato, Tsukasa Fukusato, S. Morishima","doi":"10.1145/2787626.2792622","DOIUrl":"https://doi.org/10.1145/2787626.2792622","url":null,"abstract":"Dressing virtual characters is necessary for many applications, while modeling clothing is a significant bottleneck. Therefore, it has been proposed that the idea of Garment Transfer for transfer-ring clothing model from one character to another character [Brouet et al. 2012]. In recent years, this idea has been extended to be applicable between characters in various poses and shapes [Narita et al. 2014]. However, texture design of clothing is not preserved in their method since they deform the source clothing model to fit the target body (see Figure 1(a)(c)).","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134054292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Crowd-powered parameter analysis for computational design exploration","authors":"Yuki Koyama, Daisuke Sakamoto, T. Igarashi","doi":"10.1145/2787626.2792620","DOIUrl":"https://doi.org/10.1145/2787626.2792620","url":null,"abstract":"Exploring various visual designs by tweaking parameters is a common practice when designing digital content. For example, if we want to clean up a photo for use at the top of a web page, we adjust the design parameters---brightness, contrast, saturation, etc.---to explore which combination of parameters provides the best result. Similar situations can be found anywhere in computer graphics applications, such as tweaking shader parameters for game development.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133428099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Photometric compensation for practical and complex textures","authors":"N. Hashimoto, K. Kosaka","doi":"10.1145/2787626.2787647","DOIUrl":"https://doi.org/10.1145/2787626.2787647","url":null,"abstract":"We propose a photometric compensation for projecting arbitrary images on practical surfaces of our everyday life. Although many previous proposals have achieved fine compensation at their experimental environments [Nayar et al. 2003], they cannot support practical targets including high-contrast texture. In order to adapt to such situation, we need a time-consuming iterative processing with camera feedback. Even though the iterative processing is applied, we cannot obtain fine compensation because no camera pixels of a projector-camera system (procam) correspond perfectly to the pixels of the projector [Mihara et al. 2014].","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132672884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sketch dance stage","authors":"S. Mizuno, Marino Isoda, Rei Ito, Mei Okamoto, Momoko Kondo, Saya Sugiura, Yuki Nakatani, M. Hirose","doi":"10.1145/2787626.2792646","DOIUrl":"https://doi.org/10.1145/2787626.2792646","url":null,"abstract":"Drawing on a sketchbook is one of the most familiar arts and people of all ages can enjoy it. Thus a lot of CG applications on which a user can create 2D and 3DCG images with drawing operations have been developed [Kondo et al. 2013]. On the other hand, dancing is also familiar to many people. Thus a digital content that is a mixture of drawing and dancing could be very attractive.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132672896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Phygital field: integrated field with visible images and robot swarm controlled by invisible images","authors":"T. Hiraki, Issei Takahashi, Shotaro Goto, S. Fukushima, T. Naemura","doi":"10.1145/2787626.2792604","DOIUrl":"https://doi.org/10.1145/2787626.2792604","url":null,"abstract":"Forming images by using a swarm of mobile robots has emerged as a new platform for computer entertainment. Each robot has colored lighting, and the swarm represents various abstract patterns by using the lighting and the locomotion.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132872108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hands-free gesture operation for maintenance work using finger-mounted acceleration sensor","authors":"Toshiaki Nakasu, T. Ike, Kazunori Imoto, Yasunobu Yamauchi","doi":"10.1145/2787626.2792609","DOIUrl":"https://doi.org/10.1145/2787626.2792609","url":null,"abstract":"In maintenance of electric power control panels, a worker has to do a lot of manual work such as pushing buttons and turning on/off selector switches. Therefore, a hands-free gesture operating system is needed. Tsukada [Tsukada et al. 2002] proposed a gesture operating system using an acceleration sensor and switches. Although it is a simple task to control a home appliance by gesture, users have to use both gesture and switch on/off to perform more complicated tasks such as controlling and recording documents in maintenance work. Therefore, the system becomes complicated. We propose a novel switch-less assist system for maintenance work with a simple structure that recognizes gesture using only an acceleration sensor. Ike [Ike et al. 2014] proposed a hand gesture operating system that enables users to control a TV remotely by adopting \"Tapping\" as a click signal. The system recognizes tapping by detecting a pulse-like acceleration pattern corresponding to a micro collision generated by tapping. However, it is difficult to recognize tapping because maintenance work includes many micro collisions generated by touching things. We adopt \"Tapping & Finger up\", i.e., tapping fingers and turning up a finger, gestures that rarely occur in maintenance work, and design a gesture system enabling users to perform maintenance tasks and gesture operation seamlessly. Our system helps users do maintenance work easily and intuitively without interrupting work.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130806517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}