{"title":"Automatic generation of 3D typography","authors":"Suzi Kim, Sunghee Choi","doi":"10.1145/2945078.2945099","DOIUrl":"https://doi.org/10.1145/2945078.2945099","url":null,"abstract":"Three-dimensional typography (3D typography) refers to the arrangement of text in three-dimensional space. It injects vitality into letters, giving the viewer a strong impression that is hard to forget. Today, 3D typography plays an important role in daily life beyond artistic design: it is commonly seen in 3D virtual spaces such as movies and games, and it is used frequently in signboard and furniture design. Despite these noticeable strengths, most 3D typography is generated by a simple extrusion of flat 2D typography. Compared with 2D typography, 3D typography is more difficult to generate in a short time because of its high complexity.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114781737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptable game experience through procedural content generation and brain computer interface","authors":"Henry Fernández, Koji Mikami, K. Kondo","doi":"10.1145/2945078.2945124","DOIUrl":"https://doi.org/10.1145/2945078.2945124","url":null,"abstract":"For highly skilled players an easy game can become boring, while for less skilled players a difficult game can become frustrating. This research's goal is to offer players a personalized experience adapted to their performance and level of attention. We created a simple side-scrolling 2D platform game using Procedural Content Generation, Dynamic Difficulty Adjustment techniques, and brain-computer data obtained from players in real time using an electroencephalography device. We conducted a series of experiments with different players, and the results confirm that our method adjusts each level according to performance and attention.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129750325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Farewell to dawn: a mixed reality dance performance in a virtual space","authors":"F. Caputo, Victoria McGowen, Joe Geigel, Steven Cerqueira, Q. Williams, M. Schweppe, Zhongyuan Fa, Anastasia Pembrook, Heather Roffe","doi":"10.1145/2945078.2945127","DOIUrl":"https://doi.org/10.1145/2945078.2945127","url":null,"abstract":"Farewell to Dawn is a mixed reality dance performance which explores two dancers' voyage from a physical space to a virtual stage and back, as the day passes before them.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124443285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing character posing by a sketch-based interaction","authors":"Simone Barbieri, Nicola Garau, Wenyu Hu, Zhidong Xiao, Xiaosong Yang","doi":"10.1145/2945078.2945134","DOIUrl":"https://doi.org/10.1145/2945078.2945134","url":null,"abstract":"Sketching, as one of the most intuitive and powerful 2D design methods, has been used by artists for decades. However, it is not fully integrated into the current 3D animation pipeline because of the difficulty of interpreting 2D line drawings in 3D. Several successful approaches to character posing from sketches have been presented in the past few years, such as the Line of Action [Guay et al. 2013] and Sketch Abstractions [Hahn et al. 2015]. However, both methods require animators to provide some manual initial setup to solve the corresponding problems. In this paper, we propose a new sketch-based character posing system that is more flexible and efficient. It requires less input from the user than the system of [Hahn et al. 2015], and the character can be easily posed whether the sketch represents a skeleton structure or shape contours.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125745411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Non-humanoid creature performance from human acting","authors":"Gustavo Eggert Boehs, M. Vieira","doi":"10.1145/2945078.2945080","DOIUrl":"https://doi.org/10.1145/2945078.2945080","url":null,"abstract":"We propose a framework for using human acting as input for the animation of non-humanoid creatures: captured motion is classified using machine learning techniques, and a combination of preexisting clips and motion retargeting is used to synthesize new motions. This should lead to a broader use of motion capture.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133638995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rusting and corroding simulation taking into account chemical reaction processes","authors":"Tomokazu Ishikawa, Kousaku Kamata, Yuriko Takeshima, Masanori Kakimoto","doi":"10.1145/2945078.2945143","DOIUrl":"https://doi.org/10.1145/2945078.2945143","url":null,"abstract":"In recent years, technologically advanced computer graphics have made near-realistic expression possible. Secular change and weathering are important factors in creating realistic computer graphics images. Metal rust is an important form of secular change, and there is much research on rust [Kanazawa et al. 2015]. Although rust-forming processes vary with the coating rainwater and seawater, their dissolved oxygen content, and the effects of flowing water, to our knowledge no rust-forming method has considered both the geometry of the object model and the chemical reaction processes. Our proposed method calculates water flowing on 3D models to reproduce the corrosion process, which advances from the surface regions coated with water. Our corrosion simulation model takes into account the quantity of coating water and the chemical reaction processes. As a result, we confirm that images close to rust formed in reality can be obtained.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130555730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Synesthesia suit: the full body immersive experience","authors":"Yukari Konishi, Nobuhisa Hanamitsu, K. Minamizawa, Ayahiko Sato, Tetsuya Mizuguchi","doi":"10.1145/2945078.2945149","DOIUrl":"https://doi.org/10.1145/2945078.2945149","url":null,"abstract":"The Synesthesia Suit provides an immersive embodied experience in a virtual reality environment through vibro-tactile sensations across the entire body. Rather than the simple vibrations of a traditional game controller, each vibro-tactile actuator delivers haptic sensations designed using the haptic design method we developed in the TECHTILE technology [Minamizawa et al. 2012]. In haptics research using multi-channel vibro-tactile feedback, Surround Haptics [Israr et al. 2012] proposed moving tactile strokes using multiple vibrators spaced on a gaming chair; the same group later proposed Po2 [Israr et al. 2015], which creates illusory tactile sensations for gesture-based games by providing vibrations on the hand based on a psychophysical study.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"337 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134094603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Escaping chair: furniture-shaped device art","authors":"Takeshi Oozu, Aki Yamada, Yuki Enzaki, Hiroo Iwata","doi":"10.1145/2945078.2945086","DOIUrl":"https://doi.org/10.1145/2945078.2945086","url":null,"abstract":"A furniture-device is a device that has the appearance of furniture together with physical input and output functions. The Escaping Chair is a furniture-device that interacts physically and dynamically with a user, letting them perceive the intent of their action and personifying the furniture. The Escaping Chair interacts with bystanders by trying to move away from nearby people. In doing so, the device tries to make the person fail to sit on it and stimulates their perception of sitting. The idea of a furniture-shaped device was extended from one of my previous artworks, which used furniture as an input mechanism. I exhibited the chair and observed the interaction it produced with exhibition visitors. As planned, it succeeded in making people wonder during the interaction, and it further made them chase the chair, which indicates a new capability of the device. There were some challenges regarding load tolerance, detection latency, and failure, for which I have proposed improvements.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130839534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"VisLoiter: a system to visualize loiterers discovered from surveillance videos","authors":"Jianquan Liu, Shoji Nishimura, Takuya Araki","doi":"10.1145/2945078.2945125","DOIUrl":"https://doi.org/10.1145/2945078.2945125","url":null,"abstract":"This paper presents a system for visualizing the results of loitering discovery in surveillance videos. Since loitering is a suspicious behaviour that often leads to abnormal situations, such as pickpocketing, its analysis attracts attention from researchers [Bird et al. 2005; Ke et al. 2013; A. et al. 2015]. Most of this work focuses on detecting or identifying loitering individuals using human tracking techniques. A robust approach [Nam 2015] is one of the state-of-the-art methods for detecting loitering persons in crowded scenes, using pedestrian tracking based on spatio-temporal changes. However, such tracking-based methods are quite time-consuming, so it is hard to apply loitering detection across multiple cameras over long periods, or to visualize loiterers at a glance. To solve this problem, we propose a system, named VisLoiter (Figure 1), which enables efficient loitering discovery based on face features extracted from long videos across multiple cameras, instead of a tracking-based approach. Taking advantage of this efficiency, VisLoiter realizes the visualization of loiterers at a glance. The visualization consists of three display components: (1) the appearance patterns of loitering individuals, (2) a frequency ranking of loiterers' faces, and (3) lightweight playback of video clips in which a discovered loiterer frequently appeared (see Figure 1 (b) and (c)).","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123646275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic frame rate: a study on viewer perception of changes in frame rate within an animated movie sequence","authors":"K. Chuang","doi":"10.1145/2945078.2945159","DOIUrl":"https://doi.org/10.1145/2945078.2945159","url":null,"abstract":"Dynamic Frame Rate (DFR) is the change of a movie sequence's frame rate in real time as the sequence plays. For most of the past century, since the introduction of sound in films, frame rates have been standardized at 24 frames per second despite technological advancement [Salmon et al. 2011]. In the past decade, the spatial resolution of display systems has been increasing while the temporal resolution, the frame rate, has not changed. Because of this, researchers and filmmakers stress that motion judder and blurriness are much more apparent, and they propose that high frame rates will solve the issue [Emoto et al. 2014] [Turnock 2013]. Some industry experts and critics, however, oppose the use of high frame rates [Wilcox 2015]. Despite all the research on and attempts at using high frame rates, the idea of using dynamic frame rate in digital cinema has not been explored in depth, so there is very limited information on how people perceive DFR and how it actually works. Understanding DFR and how viewers perceive changes in frame rate will help us adopt new techniques in the creation of cinema: we can use high frame rates in sequences that benefit from them while keeping the rest at the standard frame rate. This thesis aims to understand the basics of DFR, how different implementations of DFR change viewer perception, and how people perceive a change of frame rate in a displayed animated movie sequence.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125133721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}