Latest publications: ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia

Human head modeling based on fast-automatic mesh completion
Akinobu Maejima, S. Morishima
DOI: 10.1145/1666778.1666831 · Published: 2009-12-16 · Citations: 2
Abstract: The need to rapidly create 3D human head models is still an important issue in game and film production. Blanz et al. have developed a morphable model which can semi-automatically reconstruct the facial appearance (3D shape and texture) and simulated hairstyles of "new" faces (faces not yet scanned into an existing database) using photographs taken from the front or other angles [Blanz et al. 2004]. However, this method still requires manual marker specification and approximately 4 minutes of computational time. Moreover, the facial reconstruction produced by this system is not accurate unless a database containing a large variety of facial models is available. We have developed a system that can rapidly generate human head models using only frontal facial range scan data. Where it is impossible to measure the 3D geometry accurately (as with hair regions), the missing data is complemented using the 3D geometry of the template mesh (TM). Our main contribution is to achieve fast mesh completion for head modeling based on "Automatic Marker Setting" and the "Optimized Local Affine Transform (OLAT)". The proposed system generates a head model in approximately 8 seconds. Therefore, if users utilize a range scanner which can quickly produce range data, it is possible to generate a complete 3D head model in one minute using our system on a PC.
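The abstract does not spell out the OLAT formulation, but the core operation it names, fitting a local affine transform that maps template-mesh points onto scanned points, can be sketched as a least-squares fit. The function names and the plain least-squares solve below are illustrative assumptions, not the paper's actual optimization:

```python
import numpy as np

def fit_local_affine(src, dst):
    """Least-squares affine transform (A, t) mapping src points onto dst.

    src, dst: (n, 3) arrays of corresponding 3D points (n >= 4).
    Returns A (3x3 linear part) and t (translation) minimizing
    sum ||A @ src_i + t - dst_i||^2.
    """
    n = src.shape[0]
    # Homogeneous source coordinates [x y z 1], so translation is fitted too
    X = np.hstack([src, np.ones((n, 1))])
    # Solve X @ M ~= dst for the 4x3 parameter matrix M
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)
    A, t = M[:3].T, M[3]
    return A, t

def apply_affine(A, t, pts):
    """Apply the fitted transform to an (n, 3) array of points."""
    return pts @ A.T + t
```

In a mesh-completion setting one such transform would be fitted per local neighborhood of markers, then used to carry template geometry into regions (e.g. hair) where the scan has no data.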
Hybrid cursor control for precise and fast positioning without clutching
M. Schlattmann, R. Klein
DOI: 10.1145/1667146.1667161 · Published: 2009-12-16 · Citations: 2
Abstract: In virtual environments, selection is typically solved by moving a cursor above a virtual item/object and issuing a selection command. In the context of hand tracking, the cursor movement is controlled by a certain mapping of the hand pose to the virtual cursor position, allowing the cursor to reach any place in the virtual working space. If the virtual working space is bounded, a linear mapping can be used. This is called proportional control.
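Proportional control as described above, a linear map from a bounded hand working volume to cursor coordinates, can be sketched in a few lines. This is a generic illustration of the concept (2D case, with clamping added as an assumption), not the paper's hybrid technique:

```python
def proportional_control(hand_pos, hand_min, hand_max, screen_w, screen_h):
    """Linearly map a 2D hand position inside a bounded working volume
    to cursor coordinates (proportional control).

    hand_pos, hand_min, hand_max: (x, y) tuples in tracker units.
    Returns the cursor position in pixels.
    """
    nx = (hand_pos[0] - hand_min[0]) / (hand_max[0] - hand_min[0])
    ny = (hand_pos[1] - hand_min[1]) / (hand_max[1] - hand_min[1])
    # Clamp so the cursor stays on screen even if tracking overshoots
    nx = min(max(nx, 0.0), 1.0)
    ny = min(max(ny, 0.0), 1.0)
    return nx * screen_w, ny * screen_h
```

Because the mapping is absolute, no clutching (repeated repositioning of the hand, as with lifting a mouse) is needed; the trade-off is that precision is limited by the ratio of screen size to working-volume size, which is what motivates hybrid schemes.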
Happy wear
Camille Scherrer, Julien Pilet
DOI: 10.1145/1665137.1665170 · Published: 2009-12-16 · Citations: 1
Abstract: Look at yourself in our mirror, and you might see a paper fox behind you. Strange hands might open your stomach, or you could find a cat asleep in your bag.
Efficient multi-pass welding training with haptic guide
Yongwan Kim, Ungyeon Yang, Dongsik Jo, Gun A. Lee, J. Choi, Jinah Park
DOI: 10.1145/1666778.1666810 · Published: 2009-12-16 · Citations: 3
Abstract: Recent progress in computer graphics and interaction technologies has brought virtual training to many applications. Virtual training is very effective for dangerous or costly work; a representative example is welding training in the automobile, shipbuilding, and construction-equipment industries. Welding is defined as a joining process that produces coalescence of metallic materials by heating them. Key factors for effective welding training are realistic welding modeling and a training method that responds to users' torch motions. Several welding training systems, such as CS WAVE, ARC+ by 123Certification, and SimWelder by VRSim, support either only single-pass or inaccurate multi-pass simulation, since the multi-pass welding process involves considerable complexity or enormous bead DB sets. In addition, these welding simulators utilize only graphical metaphors to teach welding motions. However, welding training using graphical metaphors is still insufficient for training precise welding motions, because users cannot fully perceive graphical guide information in 3D space, even in a stereoscopic environment.
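The haptic guide named in the title typically means a restoring force that pulls the trainee's torch toward a reference path. As a minimal sketch of that idea, here is a capped spring-force guide; the stiffness, cap, and function name are assumptions for illustration, not values from the paper:

```python
def haptic_guide_force(tool_pos, path_point, stiffness=200.0, max_force=5.0):
    """Illustrative spring-like guide force pulling the tool toward the
    nearest point on a reference path.

    tool_pos, path_point: (x, y, z) in meters.
    stiffness: spring constant in N/m; max_force: safety cap in N.
    Returns the force vector (Fx, Fy, Fz) in newtons.
    """
    dx = [p - t for p, t in zip(path_point, tool_pos)]
    mag = sum(d * d for d in dx) ** 0.5
    if mag == 0.0:
        return (0.0, 0.0, 0.0)
    # Hooke's law toward the path, capped so the device never jerks the hand
    f = min(stiffness * mag, max_force)
    return tuple(f * d / mag for d in dx)
```

A force cap like this is standard practice in haptic rendering: it keeps the guide corrective rather than coercive, which matters for motor-skill training.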
A tone reproduction operator accounting for mesopic vision
M. Mikamo, M. Slomp, Toru Tamaki, K. Kaneda
DOI: 10.1145/1666778.1666819 · Published: 2009-12-16 · Citations: 4
Abstract: High dynamic range (HDR) imaging provides more physically accurate measurements of pixel intensities, but displaying them may require tone mapping, as the dynamic range of the image and the display device can differ. Most tone-mapping operators (TMOs) focus on luminance compression, ignoring chromatic effects. The human visual system (HVS), however, alters color perception according to the level of luminosity. Under photopic conditions color perception is accurate, and as conditions shift toward scotopic, color perception diminishes. Mesopic vision is the range in between, where colors are perceived but in a distorted way: responses to red intensities fade faster, producing a blue-shift known as the Purkinje effect.
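The photopic-to-scotopic transition the abstract describes is often modeled as a luminance-dependent blend between the original color and a desaturated, blue-tinted rod response. The sketch below is a generic illustration of such a blend, not the authors' operator; the threshold luminances and the rod-color weights are assumptions chosen for demonstration:

```python
import math

def mesopic_blend(rgb, luminance, photopic_l=3.0, scotopic_l=0.01):
    """Illustrative blue-shift blend for mesopic luminances.

    rgb: linear (r, g, b) color; luminance: adaptation level in cd/m^2.
    Blends the original color toward a blue-tinted achromatic rod
    response as luminance drops (Purkinje effect).
    """
    # Blend weight: 1 = fully photopic, 0 = fully scotopic (log-luminance ramp)
    if luminance >= photopic_l:
        w = 1.0
    elif luminance <= scotopic_l:
        w = 0.0
    else:
        w = (math.log10(luminance) - math.log10(scotopic_l)) / (
            math.log10(photopic_l) - math.log10(scotopic_l))
    r, g, b = rgb
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b   # Rec. 709 luminance
    night = (0.0, 0.3 * y, 1.05 * y)           # illustrative blue-tinted rod color
    return tuple(w * c + (1.0 - w) * n for c, n in zip(rgb, night))
```

Note how a pure red input keeps its hue at daylight levels but collapses to a dim bluish gray at night-time luminances, which is exactly the "red fades faster" behavior the abstract attributes to the Purkinje effect.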
A smart agent for taking pictures
Hyunsang Ahn, Manjai Lee, I. Jeong, Jihwan Park
DOI: 10.1145/1666778.1666800 · Published: 2009-12-16 · Citations: 1
Abstract: This research proposes a novel photo-taking system that can interact with people. The goal is to make the system act like a human photographer. The system recognizes when people wave their hands, moves toward them, and takes pictures with designated compositions and user-chosen styles. For image composition, the user can also adjust the composition arbitrarily, according to personal preference, by looking through the screen attached to the system. From the resulting shots, the user can select the picture he or she wants.
Interactive work for feeling time by compositing multi-vision and generating sounds
Yi-Hsiu Chen, W. Chou
DOI: 10.1145/1666778.1666781 · Published: 2009-12-16 · Citations: 0
Abstract: With the progress of computer technology, interaction not only breaks down the barrier between the audience and the artwork but also reveals a new digital-aesthetic view. The proposed interactive work uses multiple webcams to capture multi-view images of the user and generates sounds by synthesizing sonic tones according to the dynamic moments of the images. The main concept is to use the interactive installation to let users experience and feel the flow of time by catching sight of vision in the interstices of time, as if reproducing Nude Descending a Staircase, No. 2 by Dada artist Marcel Duchamp. The work flattens people's dynamic movements into a brief, condensed audio-visual field. In addition, users can create a composite memory of themselves with the sound and video.
An esthetics rule-based ranking system for amateur photos
C. Yeh, Wai-Seng Ng, B. Barsky, M. Ouhyoung
DOI: 10.1145/1667146.1667177 · Published: 2009-12-16 · Citations: 7
Abstract: With the current widespread use of digital cameras, the process of selecting and maintaining personal photos is becoming an onerous task. To our knowledge, there has been little research on photo evaluation based on computational esthetics. Photographers around the world have established some general rules for taking good photos. Building upon artistic theories and human visual perception is difficult, since the results tend to be subjective. Although automatically ranking award-winning professional photos may not be a sensible pursuit, such an approach may be reasonable for photos taken by amateurs. In the next section, we introduce rules for such a system.
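The abstract defers its specific rules to a later section. As a generic illustration of what a computational composition rule can look like (not the authors' system), here is a score for one of the classical rules photographers use, the rule of thirds, which rewards placing the subject near an intersection of the third lines; the normalization scheme is an assumption:

```python
import math

def rule_of_thirds_score(subject_xy, image_w, image_h):
    """Score in [0, 1]: how close the subject center is to the nearest
    rule-of-thirds power point (an intersection of the third lines)."""
    px, py = subject_xy
    # The four power points of a w x h frame
    points = [(image_w * i / 3, image_h * j / 3)
              for i in (1, 2) for j in (1, 2)]
    d = min(math.hypot(px - x, py - y) for x, y in points)
    # Normalize by the largest possible distance from a corner to a power point
    d_max = math.hypot(image_w * 2 / 3, image_h * 2 / 3)
    return 1.0 - d / d_max
```

A full ranking system would combine several such rule scores (composition, sharpness of the subject, exposure) into a weighted total.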
Simulation-based in-between creation for CACAni system
Eiji Sugisaki, S. H. Soon, Fumihito Kyota, M. Nakajima
DOI: 10.1145/1667146.1667156 · Published: 2009-12-16 · Citations: 5
Abstract: In-between creation in traditional cel animation, based on hand-drawn key-frames, is a fundamental element of actual production and plays a symbolic role in the artistic interpretation of a scene. To create impressive in-betweens, however, animators need to be skilled at hair animation. In traditional cel animation, hair motion is generally used to express a character's affective change or to show environmental conditions. Despite this usefulness and importance, hair motion is drawn relatively simply or is not animated at all, because of the lack of skilled animators and the time constraints of cel animation production. To assist this production process, P. Noble and W. Tang [Noble and Tang 2004] and Sugisaki et al. [Sugisaki et al. 2006] introduced ways to create hair motion for cartoon animations. Both created the hair motion based on 3D simulation applied to a prepared 3D character model. In this paper, we introduce an in-between creation method specialized for hair, based on dynamic simulation, which does not need any 3D character model. Animators can create in-between frames for hair motion by setting a few parameters, and our method then automatically selects the best in-between frames based on the frame number specified by the animator. The advantage of our method is that it creates in-between frames for hair motion by applying a simulation model to key-frames. Obviously, the key-frame images do not have any depth. In fact, our method can directly utilize the hand-drawn key-frames drawn by animators in the CACAni (Computer-Assisted Cel Animation) system [CACAni Website].
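For context on what "in-between creation" means computationally: the baseline the paper improves on is interpolation between two key poses. A minimal sketch of linear in-betweening over corresponding 2D stroke points is below (the paper's simulation-driven method is far more sophisticated; this only shows the underlying concept):

```python
def inbetween(key_a, key_b, t):
    """Linear in-between of two key poses.

    key_a, key_b: lists of corresponding (x, y) points on two key-frames.
    t: interpolation parameter in [0, 1]; t=0 gives key_a, t=1 gives key_b.
    """
    return [((1.0 - t) * ax + t * bx, (1.0 - t) * ay + t * by)
            for (ax, ay), (bx, by) in zip(key_a, key_b)]
```

Plain linear interpolation produces stiff, physically implausible motion for secondary elements like hair, which is exactly the gap a simulation-based in-betweener addresses.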
Direct 3D manipulation for volume segmentation using mixed reality
Takehiro Tawara, K. Ono
DOI: 10.1145/1666778.1666811 · Published: 2009-12-16 · Citations: 2
Abstract: We propose a novel two-handed direct manipulation system that achieves complex volume segmentation of CT/MRI data in real 3D space with a remote controller that has a motion-tracking cube attached. At the same time, the segmented data is displayed by direct volume rendering on a programmable GPU. Our system achieves real-time visualization of modifications to the volume data with complex shading, including transparency control by changing transfer functions, display of any cross-section, and rendering of multiple materials using a local illumination model.
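The transfer functions mentioned in the abstract are the mappings from scalar CT/MRI samples to color and opacity that drive direct volume rendering. A minimal sketch of one is below; the linear ramp and color choice are illustrative assumptions (real systems use artist-editable lookup tables):

```python
def transfer_function(v, v_min=0.0, v_max=255.0):
    """Map a scalar volume sample to (r, g, b, a).

    v: sample intensity; v_min/v_max: the intensity window.
    Low intensities come out blue and transparent, high intensities
    warm and opaque, so dense structures dominate the rendering.
    """
    t = min(max((v - v_min) / (v_max - v_min), 0.0), 1.0)
    return (t, t * 0.5, 1.0 - t, t)
```

Changing this mapping at runtime is what lets a user fade soft tissue to transparency and leave bone visible without re-segmenting the data.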