{"title":"Uncompressed 4K2K and HD live transmission on global internet","authors":"Masatoshi Kakiuchi, A. Yutani, A. Inomata, K. Fujikawa, Keishi Kandori","doi":"10.1145/1666778.1666806","DOIUrl":"https://doi.org/10.1145/1666778.1666806","url":null,"abstract":"There are some researches in transmission of high definition image using Internet Protocol (IP) before. Materials for TV stations require lossless transmission by uncompressed real-time transmission. Also high performance camera and display require transmission methods for 4K2K (3,840 x 2,160 pixels) image over HD (High Definition; 1,920 x 1,080 pixels). Especially, projection to huge screen such planetarium requires at least 4K2K resolution. However, uncompressed transmission of both HD and 4K2K require such high bandwidth network as 1.6 Gbit/s and 6.4 Gbit/s, then we prepared dedicated networks such SONET or wide-area VLAN service. Therefore we spent a lot of procedures and costs.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127712265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Designing cinematic lighting by relighting in MR-based pre-visualization","authors":"Ryosuke Ichikari, Ryohei Hatano, Toshikazu Oshima, F. Shibata, H. Tamura","doi":"10.1145/1666778.1666813","DOIUrl":"https://doi.org/10.1145/1666778.1666813","url":null,"abstract":"This paper describes a relighting method of designing cinematic lighting for filmmaking. The relighting method enables mixed reality based pre-visualization called MR-PreViz to change conditions of illumination. The method allows the MR-PreViz to have additional virtual lighting and the removal of actual illumination in designing cinematic lighting. The effects of lighting are applied correctly to both real objects and virtual objects.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131434975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Volume-preserving LSM deformations","authors":"K. Takamatsu, T. Kanai","doi":"10.1145/1667146.1667165","DOIUrl":"https://doi.org/10.1145/1667146.1667165","url":null,"abstract":"Surface deformations based on physically-based simulations are used to represent elastic motions such as human skins or clothes in the field of 3DCG applications. LSM (Lattice Shape Matching) [Rivers and James 2007] has particularly attracted attention as a fast and robust method which achieves elastic-like motions. However, the original LSM deformation method generates far from realistic motions especially when stretching an object, because volume is not preserved.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131814525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Open real ensemble","authors":"E. Wada, Kimitoshi Sato, Keitaro Kuno, Haruka Yoshida","doi":"10.1145/1665137.1665175","DOIUrl":"https://doi.org/10.1145/1665137.1665175","url":null,"abstract":"Open Reel Ensemble is performance art by remodeled reel-to-reel tape recorders.\u0000 Ivan Illich once said: \"Convivial tools are those which give each person who uses them the greatest opportunity to enrich the environment with the fruits of his or her vision.\" Convivial is originally a French word that means \"live together with joy\", and by \"convivial tool\", Illich indicates \"an instrument (technology) used differently from the usage of industrial value\".","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129360756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Transformers: Revenge of the Fallen: opening cinematic","authors":"Nathan Maddams, Dane Maddams","doi":"10.1145/1665208.1665235","DOIUrl":"https://doi.org/10.1145/1665208.1665235","url":null,"abstract":"The opening cinematic to the Transformers: Revenge of The Fallen videogame by Activision/Luxoflux. It features Autobots and Decepticons fighting it out on a city street.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129362432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A procedural modeling of woven textiles with fuzz","authors":"Kaisei Sakurai, K. Matsufuji","doi":"10.1145/1666778.1666836","DOIUrl":"https://doi.org/10.1145/1666778.1666836","url":null,"abstract":"The purpose of this article is to propose a procedure for generating woven textiles with fuzz through the modeling of surface staples. The procedure guarantees to control the appearance of fuzz. In woven textiles, fuzz is the result of staples untwisted from the yarns, as shown in Fig. 1(a). Staple is of average length in natural fiber (e.g. wool and cotton), but in silk or chemical fiber its length is shorter. Our modeling procedure also takes into account the arbitrary woven design which creates a quadrilateral mesh formed by warps and wefts, with the exception of gauze and leno weaves (twisting adjacent warps), as shown in Fig. 1 (b) and (c). The procedure improves the representation of woven textile.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114611550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Qualitative evaluation of quantitative dance motion data","authors":"T. Miura, K. Mitobe, Takaaki Kaiga, Takashi Yukawa, T. Taniguchi, H. Tamamoto","doi":"10.1145/1666778.1666787","DOIUrl":"https://doi.org/10.1145/1666778.1666787","url":null,"abstract":"In the field of dance motion analysis, the development of qualitative evaluation technique for the analysis of body motions described in the form of quantitative data is needed [Nakamura et al. 2008]; it makes dance motion data acquired by motion capture systems intuitively interpretable. In this study, the authors propose a method to automatically summarize the qualitative trend in a group of quantitative dance motion data; the motion features shown in all the dances are first quantitatively extracted by statistical analysis and then qualitatively categorized by cluster analysis.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"176 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114077949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Warmth through the night","authors":"Jonathan Elliott","doi":"10.1145/1665137.1665146","DOIUrl":"https://doi.org/10.1145/1665137.1665146","url":null,"abstract":"I investigate and utilize the imagery and symbolism of technological ideology and mythology, and how these images and symbols reinforce a sense of dominance over the environment and the rest of humanity. In recent work, I have forced together elements of this imagery with images of their unacceptable consequences. These are skeptical paintings, depicting mounds of old and obsolete computers and televisions rupturing the crisp, wire-frame façade of virtualesque scenes. Computers and televisions (these amalgams of plastic, heavy metals, and other toxic wastes, these transmitters of fantasy, ideology, identity, and creators of virtual worlds) are depicted as accumulating waste in the process of becoming toxic nightmares. Seen in the act of transmission, their screens flicker on and off to display scenes of pride and shame, glory and disgust, myth tainted with visions of what we wish to ignore or conceal about ourselves and our history.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121087590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Special habitation","authors":"Gyuwan Choe, Jin Wan Park, Seonhee Park, Eunsun Jang, Hoyeon Jang","doi":"10.1145/1665137.1665145","DOIUrl":"https://doi.org/10.1145/1665137.1665145","url":null,"abstract":"A new urban development in the area north of the Han river raises many complex questions. How much living space will be provided for residents? What will happen to the current residents of the area? How does the new development fit into the national housing plan? Who will profit from the development? The residents? The politicians? The real estate developers?","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115241810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient acquisition of light transport based on separation of direct and global components","authors":"K. Ochiai, N. Tsumura, T. Nakaguchi, Y. Miyake","doi":"10.1145/1666778.1666816","DOIUrl":"https://doi.org/10.1145/1666778.1666816","url":null,"abstract":"Photorealistic image synthesis is a challenging topic in computer graphics. Image-based techniques for capturing and reproducing the appearance of real scenes have received a great deal of attention. A long measurement time and a large amount of memory are required in order to acquire an image-based relightable dataset, i.e., light transport or reflectance field. Several approaches have been proposed with the goal of efficiently acquiring light transport [Sen et al. 2005; Fuchs et al. 2007]. However, since, with the exception of the recently proposed compressive sensing method [Peers et al. 2009], most previous studies have focused on scene adaptive sampling algorithms, conventional methods cannot perform efficiently in the case of a scene that has significant global illumination. In this paper, we present a non-adaptive sampling method for measuring light transport of a scene based on separation of the direct and global illumination components.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115245115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}