{"title":"Waving tentacles 8×8: controlling a SMA actuator by optical flow","authors":"Akira Nakayasu","doi":"10.1145/2820926.2820931","DOIUrl":"https://doi.org/10.1145/2820926.2820931","url":null,"abstract":"When we see the wriggling movement and shape of tentacles, like those of a sea anemone under the sea, we sense the presence of primitive life. The goal of this research is to realize kinetic or interactive artworks that express the waving tentacles of sea anemones. Soft actuators that bend in multiple directions have been developed, but each has a complex structure or is expensive. Expressing waving tentacles requires a large number of actuators, so we developed a low-cost actuator with a simple structure. Previously, we introduced three motion patterns for controlling an SMA actuator that can bend in three directions, along with an experimental system of 9 actuators [Nakayasu 2014]. In this paper, we introduce an experimental system with 64 actuators that react to hand movement via an optical flow algorithm.","PeriodicalId":432851,"journal":{"name":"SIGGRAPH Asia 2015 Posters","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122376424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using unity for immersive natural hazards visualization","authors":"F. Woolard, M. Bolger","doi":"10.1145/2820926.2820970","DOIUrl":"https://doi.org/10.1145/2820926.2820970","url":null,"abstract":"Maps exist as two-dimensional representations of spatial information, generally designed for a single specific purpose. Our work focuses on the representation of data relevant to natural hazards scenarios. Although visualization choices can be made on maps, their fundamental representation is recognizably the same as it was hundreds of years ago. Video representations can improve on this by incorporating temporal information about disasters in a linear manner. Video still has restrictions, though, as it requires predetermined decisions about viewpoint and about what information is presented at any point in the narrative. The current work aims to incorporate the strengths of these methods and expand on their impact. We create a highly customizable visualization tool that combines the Unity 3D game engine with scientific layers of information about natural hazards. Here we discuss the development of proof-of-concept work in the bushfire hazard domain.","PeriodicalId":432851,"journal":{"name":"SIGGRAPH Asia 2015 Posters","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122907241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic facial animation generation system of dancing characters considering emotion in dance and music","authors":"Wakana Asahina, N. Okada, Naoya Iwamoto, Taro Masuda, Tsukasa Fukusato, S. Morishima","doi":"10.1145/2820926.2820935","DOIUrl":"https://doi.org/10.1145/2820926.2820935","url":null,"abstract":"In recent years, many 3D character dance animations have been created by amateur users with 3DCG animation editing tools (e.g., MikuMikuDance). However, most of them are created manually, so an automatic facial animation system for dancing characters would be useful for creating dance movies and visualizing impressions effectively. We therefore address the challenging problem of estimating a dancing character's emotion (which we call \"dance emotion\"). In previous work considering music features, DiPaola et al. [2006] proposed a music-driven emotionally expressive face system. To detect the mood of the input music, they used a hierarchical framework (the Thayer model) and generated facial animation that matches the music's emotion. However, their model cannot express subtleties between two emotions, because the input music is divided sharply into a few moods using a Gaussian mixture model. In addition, they determine more detailed moods based on psychological rules that use score information, so they require MIDI data. In this paper, we propose a \"dance emotion model\" that visualizes a dancing character's emotion as facial expression. Our model is built from frame-by-frame coordinates in an emotional space obtained through perceptual experiments on a music and dance motion database, without MIDI data. Moreover, by considering displacement in the emotional space, we can express not only a single emotion but also subtleties between emotions. As a result, our system achieved higher accuracy than the previous work. The facial expression result can be produced immediately by inputting audio data and synchronized motion. Figure 1 shows its utility through a comparison with the previous work.","PeriodicalId":432851,"journal":{"name":"SIGGRAPH Asia 2015 Posters","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128569225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Timeline visualization of semantic content","authors":"Douglas J. Mason","doi":"10.1145/2820926.2820974","DOIUrl":"https://doi.org/10.1145/2820926.2820974","url":null,"abstract":"People interact with large corpora of documents every day, whether Googling the internet, reading a book, or checking their email. Much of this content has a temporal component: a website was published on a particular date, your email arrived yesterday, and Chapter 2 comes after Chapter 1. As we read this content, we create an internal map that correlates what we read with its place in time and with other parts we have read. The quality of this map is critical for understanding the structure of any large corpus and for locating salient information.","PeriodicalId":432851,"journal":{"name":"SIGGRAPH Asia 2015 Posters","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133458922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Guided path tracing using clustered virtual point lights","authors":"Binh-Son Hua, Kok-Lim Low","doi":"10.1145/2820926.2820955","DOIUrl":"https://doi.org/10.1145/2820926.2820955","url":null,"abstract":"Monte Carlo path tracing has recently become increasingly popular in movie production. It is a general and unbiased rendering technique that can easily handle diffuse and glossy surfaces. To trace light paths, most existing path tracers rely on surface BRDFs for directional sampling. This works well for glossy appearance, but tends to be ineffective for diffuse surfaces because in such cases the rendering integral is mostly driven by the incoming radiance distribution, not the BRDFs. Therefore, with the same number of samples, sampling the incoming radiance distribution is more effective for diffuse scenes. [Vorba et al. 2014] addressed this sampling problem by using photons to estimate incoming radiance distributions, which can then be compactly represented using Gaussian mixture functions.","PeriodicalId":432851,"journal":{"name":"SIGGRAPH Asia 2015 Posters","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128280155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An interactive 3D social media browsing system in a tech-art gallery","authors":"Shih-Wei Sun, Jheng-Wei Peng, Wei-Chih Lin, Ying-Ting Chen, Wen-Huang Cheng, K. Hua","doi":"10.1145/2820926.2820953","DOIUrl":"https://doi.org/10.1145/2820926.2820953","url":null,"abstract":"Capturing photos with mobile devices is a very common behavior in our daily life. With so many photos captured by the members of a social network, [Yin et al. 2014] proposed utilizing social context from mobile devices, e.g., geo-tags from the GPS sensor, to help a user capture better photos with a mobile device. Using the geo-tags of photos together with analysis of their image content to construct a 3D model of a scene has been developed since the Photo Tourism project [Snavely et al. 2006]. The scene reconstruction scheme proposed by [Snavely et al. 2008] can visualize photos collected from social members in a 3D environment. In addition, [Szeliski et al. 2013] indicated that navigating images from social media sites in a 3D geo-located context is natural. Therefore, for multimedia visualization with a natural and immersive 3D user experience in a tech-art gallery, we propose a 3D social media browsing system that allows users to interactively navigate social photos with motion-sensing devices in a virtual 3D scene constructed from a real physical space.","PeriodicalId":432851,"journal":{"name":"SIGGRAPH Asia 2015 Posters","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115744959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Biped control using multi-segment foot model based on the human feet","authors":"Seokjae Lee, Jehee Lee","doi":"10.1145/2820926.2820943","DOIUrl":"https://doi.org/10.1145/2820926.2820943","url":null,"abstract":"Physical simulation has developed rapidly, and recent work achieves natural-looking simulated motion through motion capture data and robust adaptation to external perturbations using manually designed balance controllers. However, developing a general controller that can simulate unpredictable or complex motion is still challenging.","PeriodicalId":432851,"journal":{"name":"SIGGRAPH Asia 2015 Posters","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127052234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Intuitive 3D cubic style modeling system","authors":"Chen-Chi Hu, Tze-Hsiang Wei, Yu-Sheng Chen, Yi-Chieh Wu, Ming-Te Chi","doi":"10.1145/2820926.2820956","DOIUrl":"https://doi.org/10.1145/2820926.2820956","url":null,"abstract":"Modeling is a key application in 3D fabrication. Although numerous powerful 3D-modeling software packages exist, few people can freely build their desired model because of insufficient background knowledge in geometry and difficulty manipulating the complexities of the modeling interface; the learning curve is steep for most people. For this study, we chose a cubic model (a model assembled from small cubes) to reduce the learning curve of modeling. We propose an intuitive modeling system designed for elementary school students. Users sketch a rough 2D contour, and the system then enables them to generate the thickness and shape of a 3D cubic model.","PeriodicalId":432851,"journal":{"name":"SIGGRAPH Asia 2015 Posters","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128595097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D visualization of aurora from optional viewpoint at optional time","authors":"Akira Takeuchi, Hiromitsu Fujii, A. Yamashita, Masayuki Tanaka, R. Kataoka, Y. Miyoshi, M. Okutomi, H. Asama","doi":"10.1145/2820926.2820967","DOIUrl":"https://doi.org/10.1145/2820926.2820967","url":null,"abstract":"Three-dimensional analysis of the aurora is significant because the shape of the aurora depends on the solar wind, which influences electrical equipment such as satellites. Our research group set up two fish-eye cameras in Alaska, U.S.A., and reconstructed the aurora's shape from a pair of stereo images [Fujii et al. 2014]. However, that feature-based matching method cannot accurately detect sufficiently dense feature points, since they are hard to detect in aurora images, most parts of which have low contrast. In this paper, we both increase the number of detected feature points and improve their accuracy. Applying this method, the 3D shape of the aurora can be visualized from an arbitrary viewpoint at an arbitrary time.","PeriodicalId":432851,"journal":{"name":"SIGGRAPH Asia 2015 Posters","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134475497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generating face ink portrait from face photograph","authors":"P. Chiang, Kuo-Hao Chang, Tung-Ju Hsieh","doi":"10.1145/2820926.2820933","DOIUrl":"https://doi.org/10.1145/2820926.2820933","url":null,"abstract":"Chinese ink portraiture requires sophisticated skills, and training in Chinese ink painting takes a long time. In this research, a Chinese ink portrait generation system is proposed that allows the user to convert face photographs into Chinese ink portraits. We search the image using an Active Shape Model (ASM) and extract facial features from the input face image. As a result, a feature-preserving ink-diffused image is generated. To produce a feature-preserving Chinese ink portrait, we use artistic ink brush strokes to enhance the face contour constructed from the facial features. The generated portraits can be used to replace faces in an ink painting.","PeriodicalId":432851,"journal":{"name":"SIGGRAPH Asia 2015 Posters","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127795064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}