Latest publications: ACM SIGGRAPH 2015 Posters

Paint-like compositing based on RYB color model
ACM SIGGRAPH 2015 Posters Pub Date : 2015-07-31 DOI: 10.1145/2787626.2792648
Junichi Sugita, Tokiichiro Takahashi
{"title":"Paint-like compositing based on RYB color model","authors":"Junichi Sugita, Tokiichiro Takahashi","doi":"10.1145/2787626.2792648","DOIUrl":"https://doi.org/10.1145/2787626.2792648","url":null,"abstract":"Many people have been familiar with subtractive color model based on pigment color compositing since their early childhood. However, the RGB color space is not comprehensible for children due to additive color compositing. In the RGB color space, the resulting mixture color is often different from colors viewer expected. CMYK is a well-known subtractive color space, but its three primal colors are not familiar. Kubelka-Munk model (KM model in short) simulates pigment compositing as well as paint-like appearance by physically-based simulation. However, it is difficult to use KM model because of many simulation parameters.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125052667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Spherical light field environment capture for virtual reality using a motorized pan/tilt head and offset camera
ACM SIGGRAPH 2015 Posters Pub Date : 2015-07-31 DOI: 10.1145/2787626.2787648
P. Debevec, G. Downing, M. Bolas, Hsuen-Yueh Peng, Jules Urbach
{"title":"Spherical light field environment capture for virtual reality using a motorized pan/tilt head and offset camera","authors":"P. Debevec, G. Downing, M. Bolas, Hsuen-Yueh Peng, Jules Urbach","doi":"10.1145/2787626.2787648","DOIUrl":"https://doi.org/10.1145/2787626.2787648","url":null,"abstract":"Todays most compelling virtual reality experiences shift the users viewpoint within the virtual environment based on input from a head-tracking system, giving a compelling sense of motion parallax. While this is straightforward for computer generated scenes, photographic VR content generally does not provide motion parallax in response to head motion. Even 360° stereo panoramas, which offer separated left and right views, fail to allow the vantage point to change in response to head motion.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128435881","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
Enhancing time and space efficiency of kd-tree for ray-tracing static scenes
ACM SIGGRAPH 2015 Posters Pub Date : 2015-07-31 DOI: 10.1145/2787626.2787658
Byeongjun Choi, Woong Seo, I. Ihm
{"title":"Enhancing time and space efficiency of kd-tree for ray-tracing static scenes","authors":"Byeongjun Choi, Woong Seo, I. Ihm","doi":"10.1145/2787626.2787658","DOIUrl":"https://doi.org/10.1145/2787626.2787658","url":null,"abstract":"In the ray-tracing community, the surface-area heuristic (SAH) has been employed as a de facto standard strategy for building a high-quality kd-tree. Aiming to improve both time and space efficiency of the conventional SAH-based kd-tree in ray tracing, we propose to use an extended kd-tree representation for which an effective tree-construction algorithm is provided. Our experiments with several test scenes revealed that the presented kd-tree scheme significantly reduced the memory requirement for representing the tree structure, while also increasing the overall frame rate for rendering.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129587479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Rigid fluid
ACM SIGGRAPH 2015 Posters Pub Date : 2015-07-31 DOI: 10.1145/2787626.2787654
Yu Wang, M. Olano
{"title":"Rigid fluid","authors":"Yu Wang, M. Olano","doi":"10.1145/2787626.2787654","DOIUrl":"https://doi.org/10.1145/2787626.2787654","url":null,"abstract":"We present a framework for modeling solid-fluid phase change. Our framework is physically-motivated, with geometric constraints applied to define rigid dynamics using shape matching. In each simulation step, particle positions are updated using an extended SPH solver where they are treated as fluid. Then a geometric constraint is computed based on current particle configuration, which consists of an optimal translation and an optimal rotation. Our approach differs from methods such as [Carlson et al. 2004] in that we solve rigid dynamics by using a stable geometric constraint [Müller et al. 2005] embedded in a fluid simulator.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130160212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
First-person view animation editing utilizing video see-through augmented reality
ACM SIGGRAPH 2015 Posters Pub Date : 2015-07-31 DOI: 10.1145/2787626.2787656
Liang-Chen Wu, Jia-Ye Li, Yu-Hsuan Huang, M. Ouhyoung
{"title":"First-person view animation editing utilizing video see-through augmented reality","authors":"Liang-Chen Wu, Jia-Ye Li, Yu-Hsuan Huang, M. Ouhyoung","doi":"10.1145/2787626.2787656","DOIUrl":"https://doi.org/10.1145/2787626.2787656","url":null,"abstract":"In making 3D animation with traditional method, we usually edit 3D objects in 3-dimension space on the screen; therefore, we have to use input devices to edit and to observe 3D models. However, those processes can be improved. With the improvement in gesture recognition nowadays, virtual information operations are no longer confined to the mouse and keyboard. We can use the recognized gestures to apply to difficult operations in editing model motion. And for observing 3D model, we would use head tracking from external devices to improve it. It would be easy to observe the interactive results without complicated operation because the system will accurately map the real world head movements.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132976718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Creating near-field VR using stop motion characters and a touch of light-field rendering
ACM SIGGRAPH 2015 Posters Pub Date : 2015-07-31 DOI: 10.1145/2787626.2787640
M. Bolas, Ashok Kuruvilla, Shravani Chintalapudi, Fernando Rabelo, V. Lympouridis, Christine Barron, Evan A. Suma, Catalina Matamoros, Cristina Brous, Alicja Jasina, Yawen Zheng, Andrew Jones, P. Debevec, D. Krum
{"title":"Creating near-field VR using stop motion characters and a touch of light-field rendering","authors":"M. Bolas, Ashok Kuruvilla, Shravani Chintalapudi, Fernando Rabelo, V. Lympouridis, Christine Barron, Evan A. Suma, Catalina Matamoros, Cristina Brous, Alicja Jasina, Yawen Zheng, Andrew Jones, P. Debevec, D. Krum","doi":"10.1145/2787626.2787640","DOIUrl":"https://doi.org/10.1145/2787626.2787640","url":null,"abstract":"There is rapidly growing interest in the creation of rendered environments and content for tracked head-mounted stereoscopic displays for virtual reality. Currently, the most popular approaches include polygonal environments created with game engines, as well as 360 degree spherical cameras used to capture live action video. These tools were not originally designed to leverage the more complex visual cues available in VR when users laterally shift viewpoints, manually interact with models, and employ stereoscopic vision. There is a need for a fresh look at graphics techniques that can capitalize upon the unique affordances that make VR so compelling.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132021064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
The XML3D architecture
ACM SIGGRAPH 2015 Posters Pub Date : 2015-07-31 DOI: 10.1145/2787626.2792623
K. Sons, F. Klein, Jan Sutter, P. Slusallek
{"title":"The XML3D architecture","authors":"K. Sons, F. Klein, Jan Sutter, P. Slusallek","doi":"10.1145/2787626.2792623","DOIUrl":"https://doi.org/10.1145/2787626.2792623","url":null,"abstract":"Graphics hardware has become ubiquitous: Integrated into CPUs and into mobile devices and recently even embedded into cars. With the advent of WebGL, accelerated graphics is finally accessible from within the web browser. However, still the capabilities of GPUs are almost exclusively exploited by the video game industry, where experts produce specialized content for game engines.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126364139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Dynamic fur on mobile using textured offset surfaces
ACM SIGGRAPH 2015 Posters Pub Date : 2015-07-31 DOI: 10.1145/2787626.2787649
Shaohui Jiao, Xiaofeng Tong, Eric Li, Wenlong Li
{"title":"Dynamic fur on mobile using textured offset surfaces","authors":"Shaohui Jiao, Xiaofeng Tong, Eric Li, Wenlong Li","doi":"10.1145/2787626.2787649","DOIUrl":"https://doi.org/10.1145/2787626.2787649","url":null,"abstract":"Fur simulation is crucial in many graphic applications since it can greatly enhance the realistic visual effect of virtual objects, e.g. animal avatars. However, due to its high computational cost of massive fur strands processing and motion complexity, dynamic fur is regarded as a challenging task, especially on the mobile platforms with low computing power. In order to support real-time fur rendering in mobile applications, we propose a novel method called textured offset surfaces (TOS). In particular, the furry surface is represented by a set of offset surfaces, as shown in Figure 1(a). The offset surfaces are shifted outwards from the original mesh. Each offset surface is textured with scattering density (red rectangles in Figure 1(a)) to implicitly represent the fur geometry, whose value can be changed by texture warping to simulate the fur animation. In order to achieve high quality anisotropic illumination result, as shown in Figure 1(b), Kajiya/Banks lighting model is employed in the rendering phase.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113946401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
BGMaker: example-based anime background image creation from a photograph
ACM SIGGRAPH 2015 Posters Pub Date : 2015-07-31 DOI: 10.1145/2787626.2787646
Shugo Yamaguchi, Chie Furusawa, Takuya Kato, Tsukasa Fukusato, S. Morishima
{"title":"BGMaker: example-based anime background image creation from a photograph","authors":"Shugo Yamaguchi, Chie Furusawa, Takuya Kato, Tsukasa Fukusato, S. Morishima","doi":"10.1145/2787626.2787646","DOIUrl":"https://doi.org/10.1145/2787626.2787646","url":null,"abstract":"Anime designers often paint actual sceneries to serve as background images based on photographs to complement characters. As painting background scenery is time consuming and cost ineffective, there is a high demand for techniques that can convert photographs into anime styled graphics. Previous approaches for this purpose, such as Image Quilting [Efros and Freeman 2001] transferred a source texture onto a target photograph. These methods synthesized corresponding source patches with the target elements in a photograph, and correspondence was achieved through nearest-neighbor search such as PatchMatch [Barnes et al. 2009]. However, the nearest-neighbor patch is not always the most suitable patch for anime transfer because photographs and anime background images differ in color and texture. For example, real-world color need to be converted into specific colors for anime; further, the type of brushwork required to realize an anime effect, is different for different photograph elements (e.g. sky, mountain, grass). Thus, to get the most suitable patch, we propose a method, wherein we establish global region correspondence before local patch match. In our proposed method, BGMaker, (1) we divide the real and anime images into regions; (2) then, we automatically acquire correspondence between each region on the basis of color and texture features, and (3) search and synthesize the most suitable patch within the corresponding region. Our primary contribution in this paper is a method for automatically acquiring correspondence between target regions and source regions of different color and texture, which allows us to generate an anime background image while preserving the details of the source image.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131280824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
FlexAR: anatomy education through kinetic tangible augmented reality
ACM SIGGRAPH 2015 Posters Pub Date : 2015-07-31 DOI: 10.1145/2787626.2792629
M. Saenz, J. Strunk, Kelly Maset, J. Seo, E. Malone
{"title":"FlexAR: anatomy education through kinetic tangible augmented reality","authors":"M. Saenz, J. Strunk, Kelly Maset, J. Seo, E. Malone","doi":"10.1145/2787626.2792629","DOIUrl":"https://doi.org/10.1145/2787626.2792629","url":null,"abstract":"We present FlexAR, a kinetic tangible augmented reality [Billinghurst,2008] application for anatomy education. Anatomy has been taught traditionally in two dimensions, particularly for those in non-medical fields such as artists. Medical students gain hands-on experience through cadaver dissection [[Winkelmann, 2007]. However, with dissection becoming less practical, researchers have begun evaluating techniques for teaching anatomy through technology.","PeriodicalId":269034,"journal":{"name":"ACM SIGGRAPH 2015 Posters","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127685529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6