{"title":"3D reconstruction from drawings with straight and curved edges","authors":"F. Fang, Yong Tsui Lee","doi":"10.1145/2542355.2542357","DOIUrl":"https://doi.org/10.1145/2542355.2542357","url":null,"abstract":"In this paper, we present a method to recover solid objects from 2D drawings with both straight and curved edges. Straight edges are recovered by recovering their end points; curves are recovered by recovering their control points. An input curve is approximated first as two polylines, such that we have a drawing with only straight edges. An effective established method is used to recover the object in this drawing as a planar polyhedron. We then reconstruct the 3D curved edges by recovering their control points using the planar 3D geometry, and then fit curved surfaces over face loops with curved bounding edges. The results of our implementation show that the recovered objects correspond to the human perception of what they should be. However, work remains in producing a measure on the goodness of the result and providing handles to allow the control of the final outcome, as curved surfaces recovered are not unique.","PeriodicalId":232593,"journal":{"name":"SIGGRAPH Asia 2013 Technical Briefs","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125870933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dense isocontour imaging","authors":"V. Matvienko, J. Krüger","doi":"10.1145/2542355.2542375","DOIUrl":"https://doi.org/10.1145/2542355.2542375","url":null,"abstract":"We present an imaging technique intended to explore multi-scale image structures, represented by isophotes (lines of constant brightness) for photo images or in general case isolines. The cornerstone of the discussed technique is a view dependent periodic transfer function with the period depending on the gradient magnitude of the underlying scalar function such as to create a dense visualization independent of the gradient magnitude. We demonstrate that our approach is easy to implement, computationally efficient, and suitable for the fields that have reasonably structured isolines, i.e., that are sufficiently smooth.","PeriodicalId":232593,"journal":{"name":"SIGGRAPH Asia 2013 Technical Briefs","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122188083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fast multi-scale detail decomposition via accelerated iterative shrinkage","authors":"Hicham Badri, H. Yahia, D. Aboutajdine","doi":"10.1145/2542355.2542397","DOIUrl":"https://doi.org/10.1145/2542355.2542397","url":null,"abstract":"We present a fast solution for performing multi-scale detail decomposition. The proposed method is based on an accelerated iterative shrinkage algorithm, able to process high definition color images in real-time on modern GPUs. Our strategy to accelerate the smoothing process is based on the use of first order proximal operators. We use the approximation to both designing suitable shrinkage operators as well as deriving a proper warm-start solution. The method supports full color filtering and can be implemented efficiently and easily on both the CPU and the GPU. We demonstrate the performance of the proposed approach on fast multi-scale detail manipulation of low and high dynamic range images and show that we get good quality results with reduced processing time.","PeriodicalId":232593,"journal":{"name":"SIGGRAPH Asia 2013 Technical Briefs","volume":"442 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122735583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Style-based tone mapping for HDR images","authors":"A. Akyüz, Kerem Hadimli, Merve Aydinlilar, Christian Bloch","doi":"10.1145/2542355.2542384","DOIUrl":"https://doi.org/10.1145/2542355.2542384","url":null,"abstract":"In this paper we propose a different approach to high dynamic range (HDR) image tone mapping. We put away the assumption that there is a single optimal solution to tone mapping. We argue that tone mapping is inherently a personal process that is guided by the taste and preferences of the artist; different artists can produce different depictions of the same scene. However, most existing tone mapping operators (TMOs) compel the artists to produce similar renderings. Operators that give more freedom to artists require adjustment of many parameters which turns tone mapping into a laborious process. In contrast to these, we propose an algorithm which learns the taste and preferences of an artist from a small set of calibration images. Any new image is then tone mapped to convey the appearance that would be desired by the artist.","PeriodicalId":232593,"journal":{"name":"SIGGRAPH Asia 2013 Technical Briefs","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116863293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Crowdsourcing facial expressions using popular gameplay","authors":"Chek Tien Tan, Daniel Rosser, Natalie Harrold","doi":"10.1145/2542355.2542388","DOIUrl":"https://doi.org/10.1145/2542355.2542388","url":null,"abstract":"Facial expression analysis systems often employ machine learning algorithms that depend a lot on the quality of the face database they are trained on. Unfortunately, generating high quality face databases is a major challenge that is rather time consuming. We have developed BeFaced, a tile-matching casual tablet game to enable massive crowdsourcing of facial expressions for the purpose of such machine learning algorithms. Based on the popular tile-matching gameplay mechanic, players are required to make facial expressions shown on matched tiles in order to clear them and advance in the game. Dynamic difficulty adjustment of the recognition accuracy is employed in the game in order to increase engagement and hence increase the quantity of varied facial expressions obtained. Each facial expression is automatically captured, labelled and sent to our online face database. At a more abstract level, BeFaced investigates a novel method of using popular game mechanics to aid the advancement of computer vision algorithms.","PeriodicalId":232593,"journal":{"name":"SIGGRAPH Asia 2013 Technical Briefs","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134402707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Information-geometric lenses for multiple foci+contexts interfaces","authors":"R. Nock, F. Nielsen","doi":"10.1145/2542355.2542378","DOIUrl":"https://doi.org/10.1145/2542355.2542378","url":null,"abstract":"We present a new set of 2D/3D modeling and visualization techniques that build upon recent information geometric works, with desirable properties like seamless multiple foci+contexts abilities, several keeping of meaningful topological features and tangible shapes, and a very good Euclidean approximation near the focus, which make them reliable candidates to display (geographic) maps or pictures. We show that a slight modification of a popular fisheye view, namely Sarkar-Brown's, belongs to this set. We report on two experiments on 2D and 3D interfaces against contenders from hyperbolic geometry. It is a browsing task involving a real-world virtual library, whose map is a manifold learned from the traces of 60k+ users, and consisting of approximately 10k books. Observations and users' feedback suggest that information geometry makes a sound alternative to hyperbolic geometric approaches, and may help to craft appealing geometric focus+context interfaces tailored to specific displays or domains.","PeriodicalId":232593,"journal":{"name":"SIGGRAPH Asia 2013 Technical Briefs","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124776284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interactive art the qi of calligraphy: dance and imprint","authors":"He-Lin Luo, Y. Hung, I-Chun Chen","doi":"10.1145/2542355.2542389","DOIUrl":"https://doi.org/10.1145/2542355.2542389","url":null,"abstract":"In Chinese calligraphy, the concept of promoting the circulation of qi enables Chinese characters to express the essence, qi, and spirit of the calligrapher through the proper use of force and speed. As a result, the emotions and soul of the calligrapher are recorded in a work of Chinese calligraphy. In Cursive, a performance by internationally renowned Taiwanese dance company, Cloud Gate Dance Theater, the dancers try to figure out the copybook works of famous ancient Chinese calligraphers. Through dance, they attempt to reproduce the essence, qi, and spirit of these calligraphers along with the qi and rhythm within Chinese calligraphy, and re-integrate them with the force and beauty of modern dance. Focusing on reproducing the essence, qi, and spirit of a calligrapher, the interactive mechanical installation work, The Qi of Calligraphy, utilizes clear imagery of copybook works by famous ancient Chinese calligraphers. Operated by a robotic arm and a brush-like extension, a spotlight moves and shifts to reproduce the works of a copybook. With the form and rhythm of a trajectory, it depicts the copybook, jing (strength). Playing with the works game-like features, viewers utilize both hands and their bodies to chase after the trajectory of the spotlight of the mechanical arm, resulting in a dance along with the rhythm of the copybook. And, when viewers are able to touch the spotlight, the image of their bodies are imprinted onto the luminous wall, enabling the points in Chinese character to be displayed on the wall. In the end, the relationship between viewers bodies and the machine is presented along with an intertwining of their bodies with the copybook. As a result, body and qi are merged together.","PeriodicalId":232593,"journal":{"name":"SIGGRAPH Asia 2013 Technical Briefs","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124903623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Motion indexing of different emotional states using LMA components","authors":"A. Aristidou, Y. Chrysanthou","doi":"10.1145/2542355.2542381","DOIUrl":"https://doi.org/10.1145/2542355.2542381","url":null,"abstract":"Recently, there has been an increasing use of pre-recorded motion capture data, making motion indexing and classification essential for animating virtual characters and synthesising different actions. In this paper, we use a variety of features that encode characteristics of motion using the Body, Effort, Shape and Space components of Laban Movement Analysis (LMA), to explore the motion quality from acted dance performances. Using Principal Component Analysis (PCA), we evaluate the importance of the proposed features - with regards to their ability to separate the performer's emotional state - indicating the weight of each feature in motion classification. PCA has been also used for dimensionality reduction, laying the foundation for the qualitative and quantitative classification of movements based on their LMA characteristics. Early results show that the proposed features provide a representative space for indexing and classification of dance movements with regards to the emotion, which can be used for synthesis and composition purposes.","PeriodicalId":232593,"journal":{"name":"SIGGRAPH Asia 2013 Technical Briefs","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124118389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ambient obscurance baking on the GPU","authors":"Peter-Pike J. Sloan, Jason Tranchida, Hao-Ming Chen, L. Kavan","doi":"10.1145/2542355.2542395","DOIUrl":"https://doi.org/10.1145/2542355.2542395","url":null,"abstract":"Ambient Occlusion and Ambient Obscurance are coarse approximations to global illumination from ambient lighting, commonly used in film and games. This paper describes a system that computes Ambient Obscurance over the vertices of complex polygon meshes. Novel contributions include pre-processing necessary for \"triangle soup\" scene representations to minimize artifacts, a compact model for different classes of instanced decorator objects such as trees and shrubs, a compact model for pre-computed visibility to be used on dynamically placed objects, and an approximation to model the occlusion of small decorator objects when ray tracing.","PeriodicalId":232593,"journal":{"name":"SIGGRAPH Asia 2013 Technical Briefs","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130291278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A study on the degrees of freedom in touchless interaction","authors":"Luigi Gallo","doi":"10.1145/2542355.2542390","DOIUrl":"https://doi.org/10.1145/2542355.2542390","url":null,"abstract":"During the last few years, we have been witnessing a widespread adoption of touchless technologies in the context of surgical procedures. Touchless interfaces are advantageous in that they can preserve sterility around the patient, allowing surgeons to visualize medical images without having to physically touch any control or to rely on a proxy. Such interfaces have been tailored to interact with 2D medical images but not with 3D reconstructions of anatomical data, since such an interaction requires at least three degrees of freedom. In this paper, we discuss the results of a user study in which a mouse-based interface has been compared with two Kinect-based touchless interfaces which allow users to interact with 3D data with up to nine degrees of freedom. The experimental results show that there is a significant relation between the number of degrees of freedom simultaneously controlled by the user and the number of degrees of freedom required to perform, in a touchless way, an accurate manipulation task.","PeriodicalId":232593,"journal":{"name":"SIGGRAPH Asia 2013 Technical Briefs","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133882044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}