Latest Publications from SIGGRAPH Asia 2020 Posters

Construction of Virtual Large-Scale Road Environment for Developing Control Algorithms for Autonomous and Electric Vehicles
SIGGRAPH Asia 2020 Posters · Pub Date: 2020-12-04 · DOI: 10.1145/3415264.3425459
Yasuo Kawai, M. Ogasawara, Takehisa Kaito, Keita Nagao
Abstract: Control algorithms developed for autonomous and electric vehicles undergo only limited trials because testing with actual vehicles is expensive. This study constructs a low-cost virtual environment for developing such control algorithms using open data and a game engine. Specifically, a large-scale three-dimensional urban road model with elevation differences is generated and connected to a hardware-in-the-loop simulation (HILS) of the running vehicle to realize a practical system. The study also reveals the need to automate the model development process in light of its high cost.
Citations: 0
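The abstract couples a game-engine road model with HILS, meaning the rendered environment exchanges vehicle state and control signals with real controller hardware in a closed loop. The following is a minimal sketch of one way such an exchange could look; the UDP transport, addresses, packet layout, and toy kinematics are all illustrative assumptions, not details from the poster.

```python
# Minimal sketch of a game-engine <-> HIL controller exchange. The UDP
# transport, addresses, packet layout, and toy kinematics below are
# assumptions for illustration only.
import math
import socket
import struct

HIL_ADDR = ("192.168.0.10", 5005)  # assumed address of the controller rig
SIM_PORT = 5006                    # assumed port the simulator listens on
DT = 0.016                         # one 60 Hz simulation step

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", SIM_PORT))
sock.settimeout(0.05)              # do not stall the render loop

x = y = heading = speed = 0.0
for step in range(1000):
    # Send the vehicle state sampled from the virtual road model.
    sock.sendto(struct.pack("<4f", x, y, heading, speed), HIL_ADDR)
    try:
        # Receive steering/throttle computed by the real control hardware.
        data, _ = sock.recvfrom(8)
        steer, throttle = struct.unpack("<2f", data)
    except socket.timeout:
        steer, throttle = 0.0, 0.0  # hold safe defaults on packet loss
    # Toy kinematic update; the actual system would step the game
    # engine's vehicle physics instead.
    speed += throttle * DT
    heading += steer * DT
    x += speed * math.cos(heading) * DT
    y += speed * math.sin(heading) * DT
```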
Colored Cast Shadows for Improved Visibility in Optical See-Through AR
SIGGRAPH Asia 2020 Posters · Pub Date: 2020-12-04 · DOI: 10.1145/3415264.3425442
Chun Wei Ooi, J. Dingliana
Citations: 2
A Robust Low-cost Mocap System with Sparse Sensors
SIGGRAPH Asia 2020 Posters · Pub Date: 2020-12-04 · DOI: 10.1145/3415264.3425463
S. Kim, Hanyoung Jang, Jongmin Kim
Abstract: In this paper, we propose a robust, low-cost motion-capture (mocap) system with sparse sensors. Although a sensor combining an accelerometer, magnetometer, and gyroscope is cost-effective and provides measured positions and rotations, it suffers from noise, drift, and signal loss over time, so the character reconstructed by a sensor-based low-cost mocap system is generally unsatisfactory. We address these issues with a novel deep learning framework consisting of two networks: a motion estimator and a sensor data generator. When the aforementioned issues occur, the motion estimator is fed sensor data newly synthesized by the sensor data generator from the measured and predicted data until the issues are resolved; otherwise, the motion estimator receives the measured sensor data and accurately, continuously reconstructs new character poses. Our examples show that the system outperforms a previous approach without the sensor data generator, and we believe it can serve as a handy and robust mocap system.
Citations: 0
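The core mechanism in this abstract is a per-frame switch between measured and synthesized sensor streams. A minimal sketch of that control flow follows; the validity heuristic, the predict interfaces, and all data shapes are assumptions for illustration, since the poster does not specify them.

```python
# Minimal sketch of the measured-vs-synthesized fallback described in the
# abstract. The validity heuristic, the .predict() interfaces, and the data
# shapes are assumptions, not the authors' implementation.
import numpy as np

def sensor_is_valid(sample: np.ndarray) -> bool:
    # Assumed heuristic: treat non-finite or saturated readings as dropouts.
    return bool(np.all(np.isfinite(sample)) and np.max(np.abs(sample)) < 50.0)

def reconstruct(frames, motion_estimator, sensor_generator):
    """Feed measured data while it is trustworthy; substitute data
    synthesized by the generator until a dropout or drift episode ends."""
    poses = []
    prev = np.zeros_like(frames[0])  # assumed neutral history for frame 0
    for measured in frames:
        if sensor_is_valid(measured):
            sensor_input = measured
        else:
            # Synthesize a plausible reading from past (measured or
            # predicted) data, as the abstract describes.
            sensor_input = sensor_generator.predict(prev)
        poses.append(motion_estimator.predict(sensor_input))
        prev = sensor_input
    return poses
```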
Exploring Social Interactions for Live Performance in Virtual Reality
SIGGRAPH Asia 2020 Posters · Pub Date: 2020-12-04 · DOI: 10.1145/3415264.3425466
Shuo Yan, Xuning Yan, Xukun Shen
Abstract: Social interaction is important when an audience participates in a live performance. With strong immersion and a high degree of freedom (DoF), virtual reality (VR) is often used to present performance events such as virtual concerts. A live performance is also a space with the potential for shared social realities, which remote VR can hardly emulate. This paper proposes new approaches to designing social interactions that raise social awareness among the audience in a VR performance. These interactions can be implemented in three different modes, covering both individual expression and group effects. The project opens new possibilities for future VR performance experiences.
Citations: 4
Indoor Scene Semantic Modeling for Virtual Reality
SIGGRAPH Asia 2020 Posters · Pub Date: 2020-12-04 · DOI: 10.1145/3415264.3425446
Sheng-Han Wu, Hsin-Wei Yu, Cheng-Wei Lin, Ping-Hsuan Han, Kuan-Wen Chen
Citations: 0
Refining SfM Reconstructed Models of Indian Heritage Sites
SIGGRAPH Asia 2020 Posters · Pub Date: 2020-12-04 · DOI: 10.1145/3415264.3425472
T. Santoshkumar, Deepti Hegde, R. Tabib, U. Mudenagudi
Abstract: In this paper, we propose a method to refine sparse point clouds of complex structures generated by Structure from Motion (SfM) in order to achieve improved visual fidelity for models of ancient Indian heritage sites. We compare our results with state-of-the-art upsampling networks.
Citations: 4
Creating Virtual Reality Cartoons from Traditional Cartoons
SIGGRAPH Asia 2020 Posters · Pub Date: 2020-12-04 · DOI: 10.1145/3415264.3425445
Kyungjin Han, Jieun Hwang, Jong-Weon Lee
Citations: 0
Spatial and Photometric Consistent Matching for Structure-from-Motion in Highly Ambiguous Scenes
SIGGRAPH Asia 2020 Posters · Pub Date: 2020-12-04 · DOI: 10.1145/3415264.3425444
Szu-Chun Su, Ze-Yiou Chen, Kuan-Wen Chen
Citations: 1
PaperTracker: A Gamified Music & Tech Teaching Tool
SIGGRAPH Asia 2020 Posters · Pub Date: 2020-12-04 · DOI: 10.1145/3415264.3425440
Tristan Bunn, Jon He, A. Murnieks, R. Rudnicki
Abstract: PaperTracker is an interactive educational platform that engages audiences of all ages with music and technology. The focus is on providing fun and inexpensive challenges that promote creative problem solving, collaborative work, and programming using a system of placeable tiles.
Citations: 0
Deferred Neural Rendering for View Extrapolation
SIGGRAPH Asia 2020 Posters · Pub Date: 2020-12-04 · DOI: 10.1145/3415264.3425441
Tobias Bertel, Yusuke Tomoto, Srinivas Rao, Rodrigo Ortiz Cayon, Stefan Holzer, Christian Richardt
Abstract: Image-based rendering methods that support visually pleasing specular surface reflections require accurate surface geometry and a large number of input images. Recent advances in neural scene representations achieve excellent visual quality while requiring only imperfect mesh proxies, or no surface-based proxies at all. Although they provide state-of-the-art visual quality, the inference time of learned models is usually too slow for interactive applications. Using a casually captured circular video sweep as input, we extend Deferred Neural Rendering to extrapolate smooth viewpoints around specular objects such as a car.
Citations: 2
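For context, the Deferred Neural Rendering approach this poster extends (Thies et al. 2019) samples a learned "neural texture" through UV coordinates rasterized from a mesh proxy, then decodes the sampled features to RGB with a small network. The following is a minimal sketch of that idea, assuming illustrative texture resolution, channel counts, and a toy decoder; it is not the authors' implementation.

```python
# Minimal sketch of the deferred-neural-rendering idea: a learnable neural
# texture sampled via proxy-rasterized UVs, decoded to RGB by a CNN.
# Texture size, channel counts, and the decoder are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeferredNeuralRenderer(nn.Module):
    def __init__(self, tex_res=512, tex_channels=16):
        super().__init__()
        # Learnable neural texture: feature vectors instead of RGB texels.
        self.neural_texture = nn.Parameter(
            torch.randn(1, tex_channels, tex_res, tex_res) * 0.01)
        # Toy decoder; this family of methods typically uses a U-Net here.
        self.decoder = nn.Sequential(
            nn.Conv2d(tex_channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1))

    def forward(self, uv):
        # uv: (B, H, W, 2) per-pixel texture coordinates in [-1, 1],
        # rasterized from the mesh proxy for the target viewpoint.
        B = uv.shape[0]
        feats = F.grid_sample(
            self.neural_texture.expand(B, -1, -1, -1), uv,
            mode="bilinear", align_corners=False)
        return torch.sigmoid(self.decoder(feats))  # (B, 3, H, W) image

# Training fits texture and decoder to the captured video sweep with a
# photometric loss; extrapolated views are rendered by feeding UV maps
# rasterized from novel camera poses.
```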