{"title":"Digital Twin of the Australian Square Kilometre Array (ASKAP)","authors":"T. Bednarz, D. Branchaud, Florence Wang, Justin Baker, M. Marquarding","doi":"10.1145/3415264.3425462","DOIUrl":"https://doi.org/10.1145/3415264.3425462","url":null,"abstract":"In this work, we present the Digital Twin of the Australian Square Kilometre Array Pathfinder (ASKAP) - an extended reality framework for telescope monitoring. Currently, most of the immersive visualisation tools developed in astronomy primarily focus on educational aspects of astronomical data or concepts. We extend this paradigm, allowing complex operational network controls with the aim of combining telescope monitoring, processing and observational data into the same framework.","PeriodicalId":372541,"journal":{"name":"SIGGRAPH Asia 2020 Posters","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123118450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"To touch or not to touch? Comparing Touch, mid-air gesture, mid-air haptics for public display in post COVID-19 society","authors":"Shaoyan Huang, Sakthi P B Ranganathan, Isaac Parsons","doi":"10.1145/3415264.3425438","DOIUrl":"https://doi.org/10.1145/3415264.3425438","url":null,"abstract":"We developed a mid-air touch Mixed Reality application that combines hand-tracking sensing and haptic feedback on a desktop display. We evaluated three hand interaction techniques, 1) Touch, 2) Mid Air Gesture Touch, and 3) Mid Air Haptic Touch, through preliminary user testing with ten adults. Results suggest that users’ willingness to use self-service devices in public places increases in the post COVID-19 world, while their concerns about the possibility of contracting the virus decrease. However, before large-scale deployment of this technology, accuracy and user experience design need to be improved.","PeriodicalId":372541,"journal":{"name":"SIGGRAPH Asia 2020 Posters","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114542257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Immersive 3D Body Painting System","authors":"Yoon-Seok Choi, Soonchul Jung, Jin-Seo Kim","doi":"10.1145/3415264.3425467","DOIUrl":"https://doi.org/10.1145/3415264.3425467","url":null,"abstract":"In recent virtual reality systems, users can experience various types of content through precise interaction with the virtual world: wearing an HMD, their actions in the real world are projected into the virtual world, going beyond audiovisual viewing. Virtual reality technology is also being actively applied in the arts. This paper proposes a novel immersive virtual 3D body painting system that provides the drawing tools and paint effects used in conventional body painting, supporting the production of high-quality works at both the concept-design and pre-production stages. We analyzed the drawing effects of the airbrush and painting brush in collaboration with body painting experts, and we deliver these effects through GPU-based real-time rendering. Our system also provides users with the management functions, such as save/load and undo, that they need to create works in virtual reality.","PeriodicalId":372541,"journal":{"name":"SIGGRAPH Asia 2020 Posters","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131340904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HEY!: Exploring Virtual Character Interaction for Immersive Storytelling via Electroencephalography","authors":"Yi-Hsuan Tseng, Tian-Jyun Lin, Tzu-Hsuan Yang, Ping-Hsuan Han, Saiau-Yue Tsau","doi":"10.1145/3415264.3425447","DOIUrl":"https://doi.org/10.1145/3415264.3425447","url":null,"abstract":"The Virtual Reality (VR) headset has become promising equipment for immersive storytelling. However, we know little about users while they are experiencing VR content. Sometimes, users miss the narration because they are looking around, which makes designing a compelling VR story a challenge. With the advancement of electroencephalography (EEG) in VR, a story’s rhythm or structure could dynamically change based on the audience’s brain waves to create a personal dramatic moment. In this paper, we conduct a preliminary study to investigate the potential use of a consumer-level brainwave headset and attempt to explore virtual character interaction to enhance immersive storytelling.","PeriodicalId":372541,"journal":{"name":"SIGGRAPH Asia 2020 Posters","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128826988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generation of Origami Folding Animations from 3D Point Cloud Using Latent Space Interpolation","authors":"Chiaki Nakagaito, Takanori Nishino, K. Takeda","doi":"10.1145/3415264.3425450","DOIUrl":"https://doi.org/10.1145/3415264.3425450","url":null,"abstract":"","PeriodicalId":372541,"journal":{"name":"SIGGRAPH Asia 2020 Posters","volume":"201 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123032746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Specular Highlight Removal for Single Real-world Image","authors":"Zhongqi Wu, Chuanqing Zhuang, Jian Shi, Jun Xiao, Jianwei Guo","doi":"10.1145/3415264.3425454","DOIUrl":"https://doi.org/10.1145/3415264.3425454","url":null,"abstract":"Specular highlight removal is a challenging task. We present a novel data-driven approach for automatic specular highlight removal from a single image. To this end, we build a new dataset of real-world images for specular highlight removal with corresponding ground-truth diffuse images. Based on this dataset, we also present a specular highlight removal network that uses detected specular reflection information as guidance. Experimental evaluations indicate that the proposed approach outperforms recent state-of-the-art methods.","PeriodicalId":372541,"journal":{"name":"SIGGRAPH Asia 2020 Posters","volume":"227 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120958572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Induced Finger Movements Effect","authors":"Agata Marta Soccini","doi":"10.1145/3415264.3425448","DOIUrl":"https://doi.org/10.1145/3415264.3425448","url":null,"abstract":"The Sense of Embodiment in Virtual Reality is one of the key components in providing users with a convincing experience. Our contribution to a better understanding of the phenomenon focuses on the analysis of users’ motor reactions to an alien finger movement. We assess quantitatively that the view of an alien movement (i.e. a movement of the self-avatar caused by an alien will) induces a finger posture variation, which we refer to as the Induced Finger Movements Effect. This occurs only in the case of embodiment, while in a disembodied setup the effect disappears. The principle of this investigation is being tested as a basis for neuro-rehabilitation, built on the concept of inducing movements in post-stroke hemiplegic patients.","PeriodicalId":372541,"journal":{"name":"SIGGRAPH Asia 2020 Posters","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125794793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sound Reactive Bio-Inspired Snake Robot Simulation","authors":"Sriranjan Rasakatla, I. Mizuuchi, B. Indurkhya","doi":"10.1145/3415264.3425439","DOIUrl":"https://doi.org/10.1145/3415264.3425439","url":null,"abstract":"We present a hardware and software framework in which the direction of a sound source is used to interact with the simulation of a snake robot. We present a gamification idea (similar to hide and seek) of how the direction of sound can be used to develop interactive simulations in robotics, and in particular how the bio-inspired idea of a snake's reactive locomotion to sound can be exploited. We use multiple microphones to calculate the direction of the incoming sound in near real-time and make the simulation respond to it. Since a biological snake moves away from a sound source when it senses vibrations, we bio-mimic this behavior in a simulated snake robot. This idea can be used to develop games that react to multiple people interacting with a computer, based on sound-direction input. To our knowledge, the interface presented in this paper is novel and the first of its kind.","PeriodicalId":372541,"journal":{"name":"SIGGRAPH Asia 2020 Posters","volume":"127 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131650451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FeatureNet: Upsampling of Point Cloud and its Associated Features","authors":"Shanthika Naik, U. Mudenagudi, R. Tabib, Adarsh Jamadandi","doi":"10.1145/3415264.3425471","DOIUrl":"https://doi.org/10.1145/3415264.3425471","url":null,"abstract":"In this paper, we address the problem of 3D point cloud upsampling: given a set of points, the objective is to obtain a denser point cloud representation. We achieve this by proposing a deep learning architecture that, along with consuming point clouds directly, also accepts associated auxiliary information such as normals and colors and consequently upsamples them. We design a novel feature loss function to train this model. We demonstrate our work on the ModelNet dataset and show consistent improvements over existing methods.","PeriodicalId":372541,"journal":{"name":"SIGGRAPH Asia 2020 Posters","volume":"293 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133818787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}