{"title":"An image query-based approach for urban modeling","authors":"K. Kanazawa, K. Urabe, Tomoaki Moriya, Tokiichiro Takahashi","doi":"10.1145/1899950.1899964","DOIUrl":"https://doi.org/10.1145/1899950.1899964","url":null,"abstract":"Effective 3D modeling for urban landscape reconstruction is one of the most important fields in 3D Computer Graphics, and many modeling techniques have been proposed. The procedural modeling method [Müller et al. 2006] is powerful and effective; however, it reconstructs buildings precisely, one by one, and each building has to be specified as a procedure over various kinds of building construction parts. Therefore, it takes a long time to reconstruct urban landscapes.","PeriodicalId":354911,"journal":{"name":"ACM SIGGRAPH ASIA 2010 Sketches","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129303836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Displaced subdivision surfaces of animated meshes","authors":"Hyunjung Lee, Minsu Ahn, Seungyong Lee","doi":"10.1145/1899950.1899987","DOIUrl":"https://doi.org/10.1145/1899950.1899987","url":null,"abstract":"We propose a novel technique for extracting a series of displaced subdivision surfaces sharing the same topology and the same displacement map from a given animated mesh. Our motion-based mesh simplification method creates control meshes with a small number of vertices while keeping the motion information. The extracted control meshes are simpler than the original meshes, and are therefore easier to edit and require less storage. Our method uses only one displacement map for all frames, which greatly reduces the amount of data.","PeriodicalId":354911,"journal":{"name":"ACM SIGGRAPH ASIA 2010 Sketches","volume":"8 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115592752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A hybrid approach to facial rigging","authors":"D. Komorowski, Vinod Melapudi, Darren Mortillaro, Gene S. Lee","doi":"10.1145/1899950.1899992","DOIUrl":"https://doi.org/10.1145/1899950.1899992","url":null,"abstract":"In production environments, facial rigging is commonly done by either geometric deformations or blendshapes. Geometric deformations are driven by simulated muscle actions, which are loosely based upon the dynamics of facial tissue [Magnenat-Thalmann et al. 1988]. Blendshapes interpolate a large number of sculpted shapes [Bergeron and Lachapelle 1985]. The former approach is intuitive, yet slow and less precise. The latter is fast, yet memory intensive and sensitive to model changes. Conventional implementations of both approaches are difficult to generalize in order to build rigs quickly and retarget animation efficiently.","PeriodicalId":354911,"journal":{"name":"ACM SIGGRAPH ASIA 2010 Sketches","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122610189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time controlled metamorphosis of animated meshes using polygonal-functional hybrids","authors":"Denis Kravtsov, O. Fryazinov, V. Adzhiev, A. Pasko, P. Comninos","doi":"10.1145/1899950.1899986","DOIUrl":"https://doi.org/10.1145/1899950.1899986","url":null,"abstract":"Polygonal models are widely used in computer animation. Static polygonal models are commonly animated using an underlying skeleton controlling the deformation of the mesh. This technique, known as skeletal animation, allows the artist to produce complex animation sequences in a relatively easy way. However, performing complex transitions between arbitrary animated meshes remains a challenging problem. There is a set of established techniques for performing metamorphosis (3D morphing) between static 3D meshes [Lazarus and Verroust 1998], but most of these cannot easily be applied to animated meshes. The approach presented in this poster allows us to easily produce metamorphosing transitions between animated meshes of arbitrary topology using polygonal-functional hybrids [Kravtsov et al. 2010a]. Our technique uses the meshes of the objects as well as their skeleton animations. As a result, we are able to generate metamorphosis animations of time-varying meshes of arbitrary topologies in near real-time.","PeriodicalId":354911,"journal":{"name":"ACM SIGGRAPH ASIA 2010 Sketches","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123169209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Face reality: investigating the Uncanny Valley for virtual faces","authors":"R. Mcdonnell, M. Breidt","doi":"10.1145/1899950.1899991","DOIUrl":"https://doi.org/10.1145/1899950.1899991","url":null,"abstract":"The Uncanny Valley (UV) has become a standard term for the theory that near-photorealistic virtual humans often appear unintentionally eerie or creepy. The UV theory was first hypothesized by robotics professor Masahiro Mori in the 1970s [Mori 1970] but is still taken seriously today by movie and game developers, as it can prevent audiences from feeling emotionally engaged in their stories or games. It has been speculated that this is due to audiences feeling a lack of empathy towards the characters. With the increase in popularity of interactive drama video games (such as L.A. Noire or Heavy Rain), delivering realistic conversing virtual characters has become very important in the real-time domain. Video game rendering techniques have advanced to a very high quality; however, most games still use linear blend skinning due to its speed of computation. This causes a mismatch between the realism of the appearance and the animation, which can result in an uncanny character. Many game developers opt for a stylised rendering (such as cel-shading) to avoid the uncanny effect [Thompson 2004]. 
In this preliminary work, we begin to study the complex interaction between rendering style and perceived trust, in order to provide guidelines for developers for creating plausible virtual characters.","PeriodicalId":354911,"journal":{"name":"ACM SIGGRAPH ASIA 2010 Sketches","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132409140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Direct ray tracing of Phong Tessellation","authors":"Shinji Ogaki","doi":"10.1145/1899950.1899969","DOIUrl":"https://doi.org/10.1145/1899950.1899969","url":null,"abstract":"There are two major ways of calculating ray and parametric surface intersections in rendering. The first is through the use of micropolygons, and the second is to use parametric surfaces such as NURBS surfaces together with numerical methods such as Newton-Raphson. Both methods are computationally expensive and complicated to implement. In this paper, we introduce a direct ray tracing method for Phong Tessellation. Our method gives analytic solutions that can be readily derived by hand and enables rendering smooth surfaces in a computationally inexpensive yet robust way.","PeriodicalId":354911,"journal":{"name":"ACM SIGGRAPH ASIA 2010 Sketches","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132017418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"NinjaEdit: simultaneous and consistent editing of an unorganized set of photographs (Copyright restrictions prevent ACM from providing the full text for this article)","authors":"K. Honda, T. Igarashi","doi":"10.1145/1899950.1899997","DOIUrl":"https://doi.org/10.1145/1899950.1899997","url":null,"abstract":"","PeriodicalId":354911,"journal":{"name":"ACM SIGGRAPH ASIA 2010 Sketches","volume":"232 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131382614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Texture transfer based on continuous structure of texture patches for design of artistic Shodo fonts","authors":"Yutaka Goda, Tsuyoshi Nakamura, M. Kanoh","doi":"10.1145/1899950.1899968","DOIUrl":"https://doi.org/10.1145/1899950.1899968","url":null,"abstract":"This paper presents a texture transfer that considers the continuous structure of texture patches in the target and source images. Previous texture transfer methods focus on local features, such as color distance, standard deviation and directional factors. In addition to these local factors, in some cases it can be important and useful to exploit the continuous structure of texture patches in the images. For instance, we aim at the design of \"Shodo\" fonts based on examples. Shodo, Japanese calligraphy, is a form of artistic writing used for writing the Japanese and Chinese languages. This study proposes and develops a method of designing Japanese calligraphic fonts which contain artistic representations similar to actual Shodo art. Figure 1c illustrates our result, which preserves the flow of salient brushwork of the source image (Figure 1a). On the other hand, Figure 1d, which is generated by the Texture-by-numbers algorithm [Hertzmann et al. 2001], does not express the flow of brushwork of the source. Our approach treats the flow of brushwork as the continuous structure of texture patches. 
Our results preserve the continuous structure of the source texture patches and express the artistic effect of the brushwork.","PeriodicalId":354911,"journal":{"name":"ACM SIGGRAPH ASIA 2010 Sketches","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123193330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Solar system and gravity using visual metaphors and simulations","authors":"Sriranjan Rasakatla","doi":"10.1145/1899950.1899961","DOIUrl":"https://doi.org/10.1145/1899950.1899961","url":null,"abstract":"Explaining basic concepts in physics, the arts and the social sciences to young children is quite a challenge. The concepts laid down in a child's mind between the ages of 7 and 11 go a long way in shaping their interest in these subjects and the grasp they achieve when exploring a particular subject in greater depth. If a concept is taught incorrectly and gets implanted in the wrong way, it takes a long time and a lot of effort from both the child and the teacher to rectify it. Holding children's attention (as they are generally known to be hyperactive and highly enthusiastic at this age) can also be quite a challenge. However, most children like to see color and animation and are attracted to interactive games that are fun to play, so simulation and graphical visualization can be used to ease and further improve their understanding of a subject. I also feel that children relate what they learn to playing with toys and observing their surroundings (nature), and carry this over into their later activities. Thus, visual metaphors can make their learning easy, interactive simulations and games can make it fun, and the use of computer graphics helps sustain their interest throughout the class. In this paper I have used some real-life visual metaphors and have developed simulations based on them. 
The learning proceeds in a stepwise, incremental fashion.","PeriodicalId":354911,"journal":{"name":"ACM SIGGRAPH ASIA 2010 Sketches","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124896722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perceptual evaluation of human animation timewarping","authors":"M. Prazák, R. Mcdonnell, C. O'Sullivan","doi":"10.1145/1899950.1899980","DOIUrl":"https://doi.org/10.1145/1899950.1899980","url":null,"abstract":"Understanding the perception of humanoid character motion can provide insights that will enable realism, accuracy, computational cost and data storage space to be optimally balanced. In this sketch we describe a preliminary perceptual evaluation of human motion timewarping, a common editing method for motion capture data. During the experiment, participants were shown pairs of walking motion clips, both timewarped and at their original speed, and asked to identify the real animation. We found a statistically significant difference between speeding up and slowing down, which shows that displaying clips at higher speeds produces obvious artifacts, whereas even significant reductions in speed were perceptually acceptable.","PeriodicalId":354911,"journal":{"name":"ACM SIGGRAPH ASIA 2010 Sketches","volume":"238 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127534734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}