{"title":"Construction of Virtual Large-Scale Road Environment for Developing Control Algorithms for Autonomous and Electric Vehicles","authors":"Yasuo Kawai, M. Ogasawara, Takehisa Kaito, Keita Nagao","doi":"10.1145/3415264.3425459","DOIUrl":"https://doi.org/10.1145/3415264.3425459","url":null,"abstract":"Control algorithms developed for autonomous and electric vehicles undergo limited trials because of the high cost of using actual vehicles. This study constructs a low-cost virtual environment for developing such control algorithms using open data and a game engine. Specifically, a large-scale three-dimensional urban road model with elevation differences is generated. This model is connected with hardware-in-the-loop simulations (HILS) as a vehicle running model to realize a practical system. This study reveals the need to automate the model development process in light of its high cost.","PeriodicalId":372541,"journal":{"name":"SIGGRAPH Asia 2020 Posters","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116704660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Colored Cast Shadows for Improved Visibility in Optical See-Through AR","authors":"Chun Wei Ooi, J. Dingliana","doi":"10.1145/3415264.3425442","DOIUrl":"https://doi.org/10.1145/3415264.3425442","url":null,"abstract":"","PeriodicalId":372541,"journal":{"name":"SIGGRAPH Asia 2020 Posters","volume":"18 10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125628620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Robust Low-cost Mocap System with Sparse Sensors","authors":"S. Kim, Hanyoung Jang, Jongmin Kim","doi":"10.1145/3415264.3425463","DOIUrl":"https://doi.org/10.1145/3415264.3425463","url":null,"abstract":"In this paper, we propose a robust low-cost motion-capture (mocap) system with sparse sensors. Although a sensor combining an accelerometer, magnetometer, and gyroscope is cost-effective and provides measured positions and rotations, it potentially suffers from noise, drift, and signal-loss issues over time. The character reconstructed from such a sensor-based low-cost mocap system is thus generally not satisfactory. We address these issues with a novel deep learning framework consisting of two networks: a motion estimator and a sensor data generator. When the aforementioned issues occur, the motion estimator is fed newly synthesized sensor data, produced by the sensor data generator from the measured and predicted data, until the issues are resolved. Otherwise, the motion estimator receives the measured sensor data directly to accurately and continuously reconstruct new character poses. In our examples, we show that our system outperforms the previous approach without the sensor data generator, and we believe it can be considered a handy and robust mocap system.","PeriodicalId":372541,"journal":{"name":"SIGGRAPH Asia 2020 Posters","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123455018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring Social Interactions for Live Performance in Virtual Reality","authors":"Shuo Yan, Xuning Yan, Xukun Shen","doi":"10.1145/3415264.3425466","DOIUrl":"https://doi.org/10.1145/3415264.3425466","url":null,"abstract":"Social interaction is important when an audience participates in a live performance. With strong immersion and a high degree of freedom (DoF), virtual reality (VR) is often used to present performance events such as virtual concerts. A live performance is also a space with the potential for shared social realities, which can hardly be emulated by remote VR. This paper proposes new approaches to designing social interactions that arouse social awareness among the audience in a VR performance. These interactions can be implemented in three different modes, encompassing both individual expression and group effects. The project opens new possibilities for future VR performance experiences.","PeriodicalId":372541,"journal":{"name":"SIGGRAPH Asia 2020 Posters","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114459202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Refining SfM Reconstructed Models of Indian Heritage Sites","authors":"T. Santoshkumar, Deepti Hegde, R. Tabib, U. Mudenagudi","doi":"10.1145/3415264.3425472","DOIUrl":"https://doi.org/10.1145/3415264.3425472","url":null,"abstract":"In this paper, we propose a method to refine sparse point clouds of complex structures generated by Structure from Motion in order to achieve improved visual fidelity of ancient Indian heritage sites. We compare our results with the state-of-the-art upsampling networks.","PeriodicalId":372541,"journal":{"name":"SIGGRAPH Asia 2020 Posters","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125645747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spatial and Photometric Consistent Matching for Structure-from-Motion in Highly Ambiguous Scenes","authors":"Szu-Chun Su, Ze-Yiou Chen, Kuan-Wen Chen","doi":"10.1145/3415264.3425444","DOIUrl":"https://doi.org/10.1145/3415264.3425444","url":null,"abstract":"","PeriodicalId":372541,"journal":{"name":"SIGGRAPH Asia 2020 Posters","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122310631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PaperTracker: A Gamified Music & Tech Teaching Tool","authors":"Tristan Bunn, Jon He, A. Murnieks, R. Rudnicki","doi":"10.1145/3415264.3425440","DOIUrl":"https://doi.org/10.1145/3415264.3425440","url":null,"abstract":"PaperTracker is an interactive educational platform that engages audiences of all ages with music and technology. The focus is on providing fun and inexpensive challenges that promote creative problem solving, collaborative work, and programming using a system of placeable tiles.","PeriodicalId":372541,"journal":{"name":"SIGGRAPH Asia 2020 Posters","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121109394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deferred Neural Rendering for View Extrapolation","authors":"Tobias Bertel, Yusuke Tomoto, Srinivas Rao, Rodrigo Ortiz Cayon, Stefan Holzer, Christian Richardt","doi":"10.1145/3415264.3425441","DOIUrl":"https://doi.org/10.1145/3415264.3425441","url":null,"abstract":"Image-based rendering methods that support visually pleasing specular surface reflections require accurate surface geometry and a large number of input images. Recent advances in neural scene representations show excellent visual quality while requiring only imperfect mesh proxies or no surface-based proxies at all. However, although these learned models provide state-of-the-art visual quality, their inference time is usually too slow for interactive applications. Using a casually captured circular video sweep as input, we extend Deferred Neural Rendering to extrapolate smooth viewpoints around specular objects such as a car.","PeriodicalId":372541,"journal":{"name":"SIGGRAPH Asia 2020 Posters","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129378427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}