{"title":"Automatic synthesis of eye and head animation according to duration and point of gaze","authors":"Hiroki Kagiyama, Masahide Kawai, Daiki Kuwahara, Takuya Kato, S. Morishima","doi":"10.1145/2787626.2792607","DOIUrl":"https://doi.org/10.1145/2787626.2792607","url":null,"abstract":"In movie and video game production, synthesizing the subtle eye and corresponding head movements of a CG character is essential to making content dramatic and impressive. However, completing these movements costs a great deal of time and labor because they often have to be created manually by skilled artists.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129942236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AffectiveWear: toward recognizing facial expression","authors":"Katsutoshi Masai, Yuta Sugiura, Masa Ogata, Katsuhiro Suzuki, Fumihiko Nakamura, S. Shimamura, K. Kunze, M. Inami, M. Sugimoto","doi":"10.1145/2787626.2792632","DOIUrl":"https://doi.org/10.1145/2787626.2792632","url":null,"abstract":"Facial expressions are a powerful way for us to exchange information nonverbally. They can give us insights into how people feel and think. There are a number of works related to facial expression detection in computer vision. However, most focus on camera-based systems installed in the environment. With this approach, it is difficult to track the user's face if the user moves constantly. Moreover, the user's facial expression can be recognized only in a limited set of places.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":"7 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128729638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Shadow shooter: 360-degree all-around virtual 3d interactive content","authors":"Masasuke Yasumoto, Takehiro Teraoka","doi":"10.1145/2787626.2787637","DOIUrl":"https://doi.org/10.1145/2787626.2787637","url":null,"abstract":"\"Shadow Shooter\" is a VR shooter game that uses the \"e-Yumi 3D\" bow interface and real physical interactive content that changes a 360-degree all-around view in a room into a virtual game space (Figure 1). This system was constructed by developing our previous interactive \"Light Shooter\" content based on \"The Electric Bow Interface\" [Yasumoto and Ohta 2013]. Shadow Shooter expands the virtual game space to all the walls in a room, just as in Jones's \"RoomAlive\" [Jones et al. 2014]; however, it does not require large-scale equipment such as multiple projectors. It only requires the e-Yumi 3D device, which consists of a real bow's components added to Willis's interface with a mobile projector [Willis et al. 2013]. Thus, we constructed a unique device for Shadow Shooter that easily changes the 360-degree all-around view into a virtual game space.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125987851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance and precision: mobile solutions for high quality engineering drawings","authors":"R. Krishnaswamy","doi":"10.1145/2787626.2792615","DOIUrl":"https://doi.org/10.1145/2787626.2792615","url":null,"abstract":"Engineering documents, e.g. 'blueprints', are one of the traditional forms of paper-based information moving increasingly to the digital realm. With mobile devices and the evolution of mobile GPUs, there are tremendous opportunities for applications that view and interact with engineering documents.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121190983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Augmented reality for cryoablation procedures","authors":"Hugo Talbot, Frédérick Roy, S. Cotin","doi":"10.1145/2787626.2792649","DOIUrl":"https://doi.org/10.1145/2787626.2792649","url":null,"abstract":"Cryotherapy is a rapidly growing minimally invasive technique for the treatment of different kinds of tumors, such as breast, renal, and prostate cancer. Several hollow needles are percutaneously inserted in the target area under image guidance, and a gas (usually argon) is then decompressed inside the needles. Following the Joule-Thomson effect, the temperature drops and a ball of ice crystals forms around the tip of each needle. Radiologists rely on the geometry of this iceball (273 K), visible on computed tomographic (CT) or magnetic resonance (MR) images, to assess the status of the ablation. However, cellular death only occurs when the temperature falls below 233 K. The complexity of the procedure therefore resides in planning the optimal number, position, and orientation of the needles required to treat the tumor, while avoiding any damage to the surrounding healthy tissues.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114699358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Color perception difference: white and gold, or black and blue?","authors":"Hisashi Watanabe, Toshiya Fujii, Tatsuya Nakamura, Tsuguhiro Korenaga","doi":"10.1145/2787626.2787630","DOIUrl":"https://doi.org/10.1145/2787626.2787630","url":null,"abstract":"It is a common philosophical question whether your blue is the same as my blue. The two-tone striped dress shown in Figure 1, which attracted a lot of attention on the Internet, gave us a clear answer: \"No.\" Some people see the dress as blue and black, whereas others insist it is white and gold. So your blue can be my white. Why is it that people looking at the same picture perceive totally different color combinations?","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127456606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Half frame forwarding: frame-rate up conversion for tiled rendering GPU","authors":"Jinhong Park, Minkyu Kim, Sunho Ki, Youngduke Seo, Chulho Shin","doi":"10.1145/2787626.2787634","DOIUrl":"https://doi.org/10.1145/2787626.2787634","url":null,"abstract":"Although the mobile industry has recently begun trending towards high quality graphics content, it is still difficult to satisfy this trend due to the performance, power, and thermal issues of the GPU/CPU in mobile application processors.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123080713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hue extraction and tone match: generating a theme color to enhance the emotional quality of an image","authors":"Eunjin Kim, Hyeon‐Jeong Suk","doi":"10.1145/2787626.2787657","DOIUrl":"https://doi.org/10.1145/2787626.2787657","url":null,"abstract":"In editorial design, a harmonious match between a picture and a solid color is often essential to achieving high quality in a graphic artwork. Color is a compelling cue for eliciting emotional responses and thus can enhance the emotional quality of an image. Tools and methods have been developed to automate the color selection process, and noticeable progress has been made in extracting the perceptually dominant colors of an image. However, little attention has been paid to the emotional characteristics of the selected colors, and the process has relied heavily on color designers' manual judgments. In this study, we propose a computational method that creates a color enhancing both the aesthetic and affective quality of an image, which we call a theme color.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":"123 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131026776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Contour guided surface deformation for volumetric segmentation","authors":"M. Holloway, T. Ju, C. Grimm","doi":"10.1145/2787626.2792638","DOIUrl":"https://doi.org/10.1145/2787626.2792638","url":null,"abstract":"In clinical practice, when a subject is imaged (e.g. CT scan or MRI), the result is a 3D image of volumetric data. In order to study the organ, bone, or other object of interest, this data needs to be segmented to obtain a 3D model that can be used in any number of downstream applications. When used for treatment planning, these segmentations need to be not only accurate but also produced quickly to avoid health risks. Automatic segmentation methods are becoming more reliable, but many experts in the scientific community still rely on time-consuming manual segmentation.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131256121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interactive tree illustration generation system","authors":"Azusa Mama, Yuki Morimoto, K. Nakajima","doi":"10.1145/2787626.2792652","DOIUrl":"https://doi.org/10.1145/2787626.2792652","url":null,"abstract":"Modeling 3D trees is a major theme in the field of computer graphics [Steven et al. 2012]. However, there has been little research on generating illustrations of trees [Yu-Sheng et al. 2012]. One way to generate them is to render their 3D models. However, it is difficult to obtain the characteristic flat representation of illustrations because of the concentration of foliage in the central part of the tree. We present a system that generates a wide variety of tree illustrations by controlling the density of branches, the shape of the canopy, and the overlap of flowers and leaves (Fig. 1).","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132012337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}