ACM SIGGRAPH 2015 Posters: Latest Publications

A music video authoring system synchronizing climax of video clips and music via rearrangement of musical bars
ACM SIGGRAPH 2015 Posters, Pub Date: 2015-07-31, DOI: 10.1145/2787626.2792608
Authors: Haruki Sato, T. Hirai, Tomoyasu Nakano, Masataka Goto, S. Morishima
Abstract: This paper presents a system that can automatically add a soundtrack to a video clip by replacing and concatenating an existing song's musical bars according to a user's preference. Since a soundtrack makes a video clip attractive, adding a soundtrack is one of the most important processes in video editing. To make a clip more attractive, an editor tends to add a soundtrack with its timing and climax in mind; for example, editors often place chorus sections at the climax of the clip by replacing and concatenating musical bars of an existing song. In doing so, editors must also take the naturalness of the rearranged soundtrack into account, so they have to decide how to replace musical bars while considering timing, climax, and naturalness simultaneously. Editors are therefore required to optimize the soundtrack by repeatedly listening to the rearranged result and checking its naturalness and its synchronization with the video clip, which is time-consuming. [Feng et al. 2010] proposed an automatic soundtrack-addition method, but because it adds the soundtrack with a data-driven approach, it cannot take the timing and climax that a user prefers into account.
Citations: 6
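As an illustration of the kind of bar-rearrangement problem the abstract describes (choosing source bars so chorus material lands on the clip's climax while penalizing unnatural transitions), here is a minimal dynamic-programming sketch. It is not the authors' system; the cost terms, weights, and the `transition_cost` matrix are assumptions made for illustration only.

```python
import numpy as np

def rearrange_bars(n_slots, n_bars, climax_slots, chorus_bars,
                   transition_cost, w_climax=1.0, w_natural=1.0):
    """Pick one source bar per output slot via dynamic programming (illustrative).

    n_bars          -- number of bars in the source song
    climax_slots    -- set of output slots that coincide with the clip's climax
    chorus_bars     -- set of source-bar indices belonging to the chorus
    transition_cost -- (n_bars, n_bars) array; cost of playing bar j right after
                       bar i (0 for originally adjacent bars, larger for jumps)
    """
    INF = float("inf")

    # unary cost: reward chorus bars at climax slots, non-chorus bars elsewhere
    def unary(slot, bar):
        return 0.0 if (slot in climax_slots) == (bar in chorus_bars) else w_climax

    # dp[t, b] = best cost of filling slots 0..t with slot t assigned bar b
    dp = np.full((n_slots, n_bars), INF)
    back = np.zeros((n_slots, n_bars), dtype=int)
    for b in range(n_bars):
        dp[0, b] = unary(0, b)
    for t in range(1, n_slots):
        for b in range(n_bars):
            costs = dp[t - 1] + w_natural * transition_cost[:, b]
            prev = int(np.argmin(costs))
            dp[t, b] = costs[prev] + unary(t, b)
            back[t, b] = prev

    # backtrack the cheapest sequence of source bars, one per output slot
    seq = [int(np.argmin(dp[-1]))]
    for t in range(n_slots - 1, 0, -1):
        seq.append(int(back[t, seq[-1]]))
    return seq[::-1]
```

A dynamic program is used here only because the naturalness term couples consecutive slots; the real system additionally has to respect musical structure when concatenating the chosen bars.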
Mobile collaborative augmented reality with real-time AR/VR switching
ACM SIGGRAPH 2015 Posters, Pub Date: 2015-07-31, DOI: 10.1145/2787626.2792662
Authors: Prashanth Bollam, Eesha Gothwal, G. B. C. S. T. Vinnakota, Shailesh Kumar, Soumyajit Deb
Abstract: The recent boom in the computing capabilities of mobile devices has led to the introduction of Virtual Reality into the mobile ecosystem. We demonstrate a framework for the Samsung Gear VR headset that allows developers to create a fully immersive AR & VR experience with no need to interface with external devices or cables, making it a truly autonomous mobile VR experience. The significant benefits of this system over existing ones are a fully hands-free experience in which the hands can be used for gesture-based input, the ability to use the Head Mounted Display (HMD) sensor for improved head and positional tracking, and automatic peer-to-peer network creation for communication between phones. The most important factor in our system is providing an intuitive way to interact with virtual objects in AR and VR, and users should be able to switch between the AR and VR worlds seamlessly.
Citations: 2
V3: an interactive real-time visualization of vocal vibrations
ACM SIGGRAPH 2015 Posters, Pub Date: 2015-07-31, DOI: 10.1145/2787626.2792624
Authors: Rébecca Kleinberger
Abstract: Our voice is an important part of our individuality, but the relationship we have with our own voice is not obvious. We do not hear it the same way others do, and our brain treats it differently from any other sound we hear [Houde et al. 2002]. Yet its sonority is highly linked to our body and mind, and deeply connected with how we are perceived by society and how we see ourselves. The V3 system (Vocal Vibrations Visualization) offers an interactive visualization of vocal vibration patterns. We developed the hexauscultation mask, a headset sensor that measures bioacoustic signals from the voice at six points on the face and throat. These signals are sent and processed to give a real-time visualization of the relative vibration intensities at the six measured points. The system can be used in various situations such as vocal training, tool design for the deaf community, and the design of HCI for speech-disorder treatment and prosody acquisition, but also simply for personal vocal exploration.
Citations: 3
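The abstract describes visualizing the relative vibration intensities of six sensor channels in real time. A minimal sketch of that per-frame computation is given below; the windowed-RMS measure and the array layout are assumptions, not details taken from the poster.

```python
import numpy as np

def relative_vibration_intensities(signals, frame_len=1024):
    """Per-sensor intensity for the latest frame, normalized across the six sensors.

    signals -- array of shape (6, n_samples): bioacoustic signals from the six
               measurement points on the face and throat (hypothetical layout).
    Returns six values in [0, 1] suitable for driving a real-time visualization.
    """
    frame = signals[:, -frame_len:]             # most recent analysis window
    rms = np.sqrt(np.mean(frame ** 2, axis=1))  # per-sensor RMS energy
    total = rms.sum()
    return rms / total if total > 0 else np.zeros_like(rms)
```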
Inferring gaze shifts from captured body motion
ACM SIGGRAPH 2015 Posters, Pub Date: 2015-07-31, DOI: 10.1145/2787626.2787663
Authors: D. Rakita, T. Pejsa, Bilge Mutlu, Michael Gleicher
Abstract: Motion-captured performances seldom include eye gaze, because capturing this motion requires eye-tracking technology that is not typically part of a motion capture setup. Yet having eye-gaze information is important, as it tells us what the actor was attending to during capture, and it adds to the expressivity of their performance.
Citations: 0
Increasing realism of animated grass in real-time game environments
ACM SIGGRAPH 2015 Posters, Pub Date: 2015-07-31, DOI: 10.1145/2787626.2787660
Authors: Benjamin Knowles, O. Fryazinov
Abstract: With the increasing quality of real-time graphics, it is vital to make sure assets move in a convincing manner, otherwise the player's immersion can be broken. Grass is an important area, as it can move substantially and often takes up a large portion of screen space in games. Animation of grass is a subject of academic research [Fernando 2004; Perbet and Cani 2001] as well as a technology implemented in a number of video games, including Far Cry 4, Battlefield 4, Dear Esther, and Unigine Valley. Comparing video-game assets with reality, current methods have a number of problems that decrease the realism of the resulting grass animation: 1) the visibly planar nature of the grass geometry, and 2) problems with the grass movement, including over-connectivity of grass blades with respect to their neighbours, no obvious wind direction, and exaggerated swaying motions. In this paper we propose to increase the realism of the grass by focusing on its movement. The main contributions of this work are: 1) distinguishing ambient and directional components of the wind, and 2) a method for calculating directional wind using a grayscale map and a wind vector. The grass was implemented with vertex shaders, in line with the majority of methods described in academic literature (e.g. [Fernando 2004]) and implemented in modern games.
Citations: 3
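To make the ambient/directional split concrete, the sketch below computes a per-blade tip offset from a small direction-less sway plus a gust term driven by a grayscale map scrolled along the wind vector, in the spirit of the abstract. The actual poster implements this in vertex shaders; this CPU-side Python version, and all of its constants and parameter names, are illustrative assumptions.

```python
import numpy as np

def grass_tip_offset(tip_pos, t, wind_dir, wind_strength, gust_map,
                     map_scale=0.05, ambient_amp=0.02, ambient_freq=1.7):
    """Horizontal (x, z) offset for a grass-blade tip vertex at time t (illustrative).

    tip_pos  -- blade tip position (x, y, z) in world space
    wind_dir -- unit 2D vector (x, z) giving the dominant wind direction
    gust_map -- 2D grayscale array sampled in world space; bright areas gust harder
    """
    # ambient component: small, direction-less sway so the field never looks frozen;
    # the phase is de-correlated per blade by its world position
    phase = tip_pos[0] * 0.9 + tip_pos[2] * 1.3
    ambient = ambient_amp * np.array([np.sin(ambient_freq * t + phase),
                                      np.cos(ambient_freq * t + phase * 0.7)])

    # directional component: scroll the grayscale map along the wind direction so
    # gust fronts visibly travel across the field
    h, w = gust_map.shape
    u = int(((tip_pos[0] * map_scale + wind_dir[0] * t * 0.1) % 1.0) * (w - 1))
    v = int(((tip_pos[2] * map_scale + wind_dir[1] * t * 0.1) % 1.0) * (h - 1))
    gust = gust_map[v, u]                       # 0..1 gust intensity at this blade
    directional = wind_strength * gust * np.asarray(wind_dir)

    return ambient + directional
```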
Display of diamond dispersion using wavelength-division rendering and integral photography
ACM SIGGRAPH 2015 Posters, Pub Date: 2015-07-31, DOI: 10.1145/2787626.2792642
Authors: Nahomi Maki, K. Yanaka
Abstract: Various colors, as in a prism, are observed in a properly cut diamond even under white light because of dispersion. A properly cut diamond produces scintillation when the viewing angle is changed, because total internal reflection occurs frequently inside the diamond due to its large refractive index. Moreover, strong rainbow colors are seen because of its high dispersion.
Citations: 2
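A common way to realize wavelength-division rendering of dispersion is to refract each ray separately per sampled wavelength, using a wavelength-dependent refractive index, and recombine the results into RGB. The sketch below shows that per-wavelength refraction step with an approximate Cauchy fit for diamond; it is a generic illustration of the idea in the title, not necessarily the authors' exact formulation.

```python
import numpy as np

def diamond_ior(wavelength_um):
    """Approximate wavelength-dependent refractive index of diamond (Cauchy fit)."""
    A, B = 2.3818, 0.0121        # approximate Cauchy coefficients, lambda in micrometres
    return A + B / wavelength_um ** 2

def refract(incident, normal, n1, n2):
    """Vector-form Snell refraction; returns None on total internal reflection."""
    i = incident / np.linalg.norm(incident)
    n = normal / np.linalg.norm(normal)
    eta = n1 / n2
    cos_i = -np.dot(n, i)
    k = 1.0 - eta ** 2 * (1.0 - cos_i ** 2)
    if k < 0.0:
        return None              # total internal reflection
    return eta * i + (eta * cos_i - np.sqrt(k)) * n

# Wavelength-division refraction of a single ray entering the diamond: each sampled
# wavelength bends by a slightly different amount, which is what spreads white light
# into the rainbow colors described in the abstract.
incident = np.array([0.0, -1.0, 0.3])
normal = np.array([0.0, 1.0, 0.0])
for wl in (0.45, 0.55, 0.65):    # blue, green, red samples in micrometres
    d = refract(incident, normal, 1.0, diamond_ior(wl))
    print(f"{wl * 1000:.0f} nm -> refracted direction {d}")
```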
Fully automatic ID mattes with support for motion blur and transparency
ACM SIGGRAPH 2015 Posters, Pub Date: 2015-07-31, DOI: 10.1145/2787626.2787629
Authors: J. Friedman, Andrew C. Jones
Abstract: In 3D production for commercials, television, and film, ID mattes are commonly used to modify rendered images without re-rendering. ID mattes are bitmap images used to isolate specific objects, or multiple objects, such as all of the buttons on a shirt. Many 3D pipelines are built to provide compositors with ID mattes in addition to beauty renders to allow flexibility.
Citations: 8
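For readers unfamiliar with ID mattes, the sketch below shows the basic idea in its simplest form: extracting a binary matte for a set of objects from an integer ID pass. The array names are hypothetical, and a hard per-pixel test like this cannot represent motion blur or transparency, which is precisely the limitation the poster above addresses.

```python
import numpy as np

def extract_matte(id_pass, object_ids):
    """Binary matte (1.0 inside, 0.0 outside) for the given object IDs.

    id_pass    -- 2D integer array, one object ID per pixel (an aliased ID AOV)
    object_ids -- iterable of IDs to isolate, e.g. all of the buttons on a shirt
    """
    return np.isin(id_pass, list(object_ids)).astype(np.float32)

# usage (hypothetical render dictionary): matte = extract_matte(render["object_id"], {12, 13, 14})
```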
Jigsaw: multi-modal big data management in digital film production
ACM SIGGRAPH 2015 Posters, Pub Date: 2015-07-31, DOI: 10.1145/2787626.2792617
Authors: S. Pabst, Hansung Kim, L. Polok, V. Ila, Ted Waine, A. Hilton, J. Clifford
Abstract: Modern digital film production uses large quantities of data captured on set, such as videos, digital photographs, LIDAR scans, spherical photography, and many other sources, to create the final film frames. The processing and management of this massive amount of heterogeneous data consumes enormous resources. We propose an integrated pipeline for 2D/3D data registration aimed at film production, built around the prototype application Jigsaw. It allows users to efficiently manage and process various data types, from digital photographs to 3D point clouds. A key step in the use of multi-modal 2D/3D data for content production is registration into a common coordinate frame (match moving). 3D geometric information is reconstructed from 2D data and registered to the reference 3D models using 3D feature matching [Kim and Hilton 2014]. We present several highly efficient and robust approaches to this problem. Additionally, we have developed and integrated a fast algorithm for incremental marginal covariance calculation [Ila et al. 2015], which allows us to estimate and visualize the 3D reconstruction error directly on set, where insufficient coverage or other problems can be addressed right away. We describe the fast hybrid multi-core and GPU-accelerated techniques that let us run these algorithms on a laptop. Jigsaw has been used and evaluated in several major digital film productions and has significantly reduced the time and work required to manage and process on-set data.
Citations: 2
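The registration step mentioned in the abstract brings reconstructed geometry and reference models into a common coordinate frame. Once 3D feature correspondences are available, a rigid transform can be recovered in closed form; the sketch below uses the standard Kabsch/Procrustes solution as a generic illustration of that alignment step, not the specific method of [Kim and Hilton 2014].

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst points.

    src, dst -- (N, 3) arrays of corresponding 3D feature points.
    Returns R (3x3 rotation) and t (3-vector) such that dst ~= src @ R.T + t.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)           # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    # correct for a possible reflection so R is a proper rotation
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t
```

In practice such a closed-form solve is wrapped in a robust estimator (e.g. RANSAC over the feature matches) before any error visualization of the kind the poster describes.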
Art directed rendering & shading using control images
ACM SIGGRAPH 2015 Posters, Pub Date: 2015-07-31, DOI: 10.1145/2787626.2792612
Authors: E. Akleman, Siran Liu, D. House
Abstract: In this work, we present a simple mathematical approach to art-directed shader development. We have tested this approach over two semesters in an introductory-level graduate rendering & shading class at Texas A&M University. The students in the class each chose an artist's style to mimic, and then easily created rendered images strongly resembling that style (see Figure 1). The method provides shader developers with an intuitive process, giving them a high level of visual control in the creation of stylized depictions.
Citations: 0
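One simple way to drive shading from a control image is to compute conventional shading terms per pixel and use them as lookup coordinates into an artist-painted image, so the painted gradient fully determines the final colors. The sketch below illustrates that general idea; the specific parameterization (diffuse term on one axis, a facing term on the other) is an assumption for illustration and not necessarily the formulation used in the poster.

```python
import numpy as np

def control_image_shade(normals, light_dir, control_img):
    """Stylized color per pixel by looking shading up in an artist-painted control image.

    normals     -- (H, W, 3) unit surface normals
    light_dir   -- unit 3-vector toward the light
    control_img -- (Hc, Wc, 3) artist-painted control image
    """
    diffuse = np.clip(normals @ light_dir, 0.0, 1.0)   # n . l in [0, 1]
    facing = np.clip(normals[..., 2], 0.0, 1.0)        # n . view for a +z camera
    hc, wc, _ = control_img.shape
    u = (diffuse * (wc - 1)).astype(int)               # horizontal lookup coordinate
    v = ((1.0 - facing) * (hc - 1)).astype(int)        # vertical lookup coordinate
    return control_img[v, u]                           # (H, W, 3) stylized colors
```

Because the artist edits only the control image, restyling a render amounts to repainting that image rather than rewriting the shader.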
Fractured 3D object restoration and completion
ACM SIGGRAPH 2015 Posters, Pub Date: 2015-07-31, DOI: 10.1145/2787626.2792633
Authors: Anthousis Andreadis, Robert Gregor, I. Sipiran, P. Mavridis, Georgios Papaioannou, T. Schreck
Abstract: The problem of object restoration from eroded fragments, where large parts may be missing, is of high relevance in archaeology. Manual restoration is possible and common in practice, but it is a tedious and error-prone process that does not scale well. Solutions for specific parts of the problem have been proposed, but a complete reassembly and repair pipeline is absent from the literature. We propose a shape-restoration pipeline consisting of appropriate methods for automatic fragment reassembly and shape completion. We demonstrate the effectiveness of our approach using real-world fractured objects.
Citations: 6