{"title":"Pose-invariant 3D face reconstruction from a single image","authors":"A. Niswar, E. Ong, Zhiyong Huang","doi":"10.1145/1899950.1899963","DOIUrl":"https://doi.org/10.1145/1899950.1899963","url":null,"abstract":"This technical sketch presents a novel method for reconstructing a 3D face model from only a single image. Unlike other methods, ours does not require the face in the image to be in a specific pose. The method deforms a generic 3D face model to fit the shape of the face in the image; the reconstructed 3D face model is then textured using the image. This method has many practical applications. For example, it provides a photo-editing tool for changing the face pose in a picture as required. It can also be used by the police to investigate a suspect's photograph when only one picture is available and views from other poses are needed. Another possible application is entertainment: the 3D face model can be used to personalize a character in 3D games.","PeriodicalId":354911,"journal":{"name":"ACM SIGGRAPH ASIA 2010 Sketches","volume":"183 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126781527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Synchronized partial-body motion graphs","authors":"W. Ng, C. Choy, D. Lun, Lap-Pui Chau","doi":"10.1145/1899950.1899978","DOIUrl":"https://doi.org/10.1145/1899950.1899978","url":null,"abstract":"Motion graphs are regarded as a promising technique for interactive applications. However, such graphs are generated based on a whole-body distance metric, which produces a limited set of possible transitions. In this paper, we present an automatic method to construct a new data structure that specifies transitions and correlations between partial-body motions, called Synchronized Partial-body Motion Graphs (SPbMGs). We exploit the similarity between lower-body motions to create synchronization conditions with upper-body motions. Under these conditions, we generate all possible transitions between partial-body motions. The proposed graph representation not only maximizes the reusability of motion data, but also increases the connectivity of motion graphs while retaining the quality of motion.","PeriodicalId":354911,"journal":{"name":"ACM SIGGRAPH ASIA 2010 Sketches","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114493844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PETICA: an interactive painting tool with 3D geometrical brushes","authors":"Kazuki Kumagai, Tokiichiro Takahashi","doi":"10.1145/1899950.1899985","DOIUrl":"https://doi.org/10.1145/1899950.1899985","url":null,"abstract":"When painting pictures, painters sometimes use tools such as chopsticks and cloths, which are less familiar than the brush and pencil, either alone or in combination with other tools [1][2] (Fig. 1). This paper presents a painting tool, PETICA (Fig. 2), which we developed with the aim of reproducing the strokes made by these various tools.","PeriodicalId":354911,"journal":{"name":"ACM SIGGRAPH ASIA 2010 Sketches","volume":"14 41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124746695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Layer-based single image dehazing by per-pixel haze detection","authors":"C. Ancuti, Cosmin Ancuti, C. Hermans, P. Bekaert","doi":"10.1145/1899950.1899995","DOIUrl":"https://doi.org/10.1145/1899950.1899995","url":null,"abstract":"In outdoor environments, light reflected from object surfaces is commonly scattered due to aerosol impurities or the presence of atmospheric phenomena such as fog and haze. Aside from scattering, absorption is another important factor that attenuates the reflected light of distant objects before it reaches the camera lens. As a result, images taken in bad weather conditions (or, similarly, underwater and aerial photographs) are characterized by poor contrast, lower saturation, and additional noise.","PeriodicalId":354911,"journal":{"name":"ACM SIGGRAPH ASIA 2010 Sketches","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121607690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A correspondence matching technique of dense checkerboard pattern for one-shot geometry acquisition","authors":"Vinh Ninh Dao, Masanori Sugimoto","doi":"10.1145/1899950.1899965","DOIUrl":"https://doi.org/10.1145/1899950.1899965","url":null,"abstract":"This paper presents a correspondence matching technique for a dense checkerboard pattern displayed by a projector-camera system, for one-shot geometry acquisition. It requires neither color coding nor complicated spatial encoding techniques to encode the corresponding positions of corners, and it can find corresponding positions even for an incomplete checkerboard pattern. We introduce a combination of epipolar geometry and topology constraints in the checkerboard pattern to resolve correspondence ambiguities. To verify the feasibility of the technique, we have created a prototype scanning system that can construct the 3D geometry of a scene for each image frame. The results of our experiments show that the technique can identify correspondences for a checkerboard pattern displayed on discontinuous surfaces and can reconstruct 3D geometry structures in real time.","PeriodicalId":354911,"journal":{"name":"ACM SIGGRAPH ASIA 2010 Sketches","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117061528","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"EmoCoW: an interface for real-time facial animation","authors":"Clemens K. H. Sielaff","doi":"10.1145/1899950.1899990","DOIUrl":"https://doi.org/10.1145/1899950.1899990","url":null,"abstract":"We present an original graphical interface approach for animating a virtual character's expressions in real time. The Emotional Color Wheel (EmoCoW) is an innovative and, to the best of our knowledge, unprecedented approach to tangibly controlled real-time facial animation. The system is programmed in C++ using the Qt framework and is compiled as a plug-in for Frapper, which serves as the testing and development platform. The tangible device in use is a 3DConnexion Space Navigator, a commercially available six-axis mouse costing less than 90€ / 100$. For the interface, two sets of expressions are each mapped onto a circle, with one circle nested inside the other. The artist controls the rotation of both circles as well as a vertical blend slider cutting across the two topmost positions, thereby blending between two shapes on each circle and between the inner and outer circles, respectively. With a proper setup, it is possible to blend between any two emotions in the model without triggering unwanted ones in the process.","PeriodicalId":354911,"journal":{"name":"ACM SIGGRAPH ASIA 2010 Sketches","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125986300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"OtoMushi: touching sound","authors":"Alexis Andre","doi":"10.1145/1899950.1899983","DOIUrl":"https://doi.org/10.1145/1899950.1899983","url":null,"abstract":"OtoMushi ('sound insect' in Japanese) is a new platform for interacting with sound samples. Each sample is represented as an insect whose body shows the waveform of the sample: stroking a part of the body plays the corresponding part of the sample back, at the specified speed and direction. With living insects as a representation of sounds, operations such as mixing and cutting have natural equivalents: when two insects mate, they give birth to a new sound-insect that represents a mix of the two parents. Similarly, a sound can be trimmed by slashing the body of the insect. We applied this concept to three different systems: a tabletop surface designed to record everyday sound environments, a phone system that lets the insects speak on your behalf, and a portable application for collecting interesting sounds on the go. Finally, we discuss the relevant interface issues.","PeriodicalId":354911,"journal":{"name":"ACM SIGGRAPH ASIA 2010 Sketches","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128947152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Derivation of dance similarity from balance characteristics","authors":"T. Miura, K. Mitobe, Takaaki Kaiga, Takashi Yukawa, K. Tajima, H. Tamamoto","doi":"10.1145/1899950.1899981","DOIUrl":"https://doi.org/10.1145/1899950.1899981","url":null,"abstract":"It has been recognized that the design of similarity measures to compare multiple motion-capture (mocap) data streams is one of the major tasks in human motion analysis [Müller et al. 2008]. The information of similarity is utilized for many applications such as the retrieval of mocap data streams.","PeriodicalId":354911,"journal":{"name":"ACM SIGGRAPH ASIA 2010 Sketches","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125904139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Space-time blending with improved user control in real-time","authors":"G. Pasko, Denis Kravtsov, A. Pasko","doi":"10.1145/1899950.1899988","DOIUrl":"https://doi.org/10.1145/1899950.1899988","url":null,"abstract":"While most existing metamorphosis methods are based on interpolation schemes, space-time blending is a geometric operation of bounded blending performed in a higher-dimensional space. It provides transformations between shapes of different topology without necessarily establishing their alignment or correspondence. The original formulation of space-time blending had several problems: fast, uncontrolled transitions between shapes within the given time interval, the generation of disconnected components, and a lack of intuitive user control over the transformation process. We improve the original technique to provide the user with a set of more intuitive controls. The problem of fast transition between the shapes is solved by introducing additional controllable affine transformations applied to the initial objects in space-time. The approach is further extended with an additional non-linear deformation operation. The proposed techniques have been implemented and tested within an industrial computer animation system. We have also implemented our method on the GPU so that it can be employed in real-time applications.","PeriodicalId":354911,"journal":{"name":"ACM SIGGRAPH ASIA 2010 Sketches","volume":"2021 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127593642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"High-frequency aware PIC/FLIP in liquid animation","authors":"R. Ando, R. Tsuruno","doi":"10.1145/1899950.1899975","DOIUrl":"https://doi.org/10.1145/1899950.1899975","url":null,"abstract":"We present a simple extension to PIC/FLIP (Particle-in-Cell/Fluid-Implicit-Particle) for animating liquid with enhanced behaviors such as pushing or eddying (Figure 1), which we call HFA/PIC/FLIP (High-Frequency-Aware PIC/FLIP). As a fundamental approach, we use PIC/FLIP [Brackbill and Ruppel 1986] and compute approximate low-frequency and high-frequency parts of the particle velocities. The low-frequency velocities are fully projected onto a divergence-free field, while the high-frequency field is only partially projected, to achieve realistic liquid animation. In contrast to the PIC/FLIP proposed by Zhu and Bridson [Zhu and Bridson 2005], our approach facilitates \"pushing\" or \"curly\" features, whereas their approach disperses the momentum in noisy directions. A similar approach has recently been taken with coarse-to-fine mesh grids [Lentine et al. 2010]; we refine directly on the particles. We produced several pieces of liquid animation footage and compared them with competitive alternatives to show the benefits of the proposed algorithm.","PeriodicalId":354911,"journal":{"name":"ACM SIGGRAPH ASIA 2010 Sketches","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130112547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}