{"title":"SEER","authors":"Takayuki Todo","doi":"10.1145/3214907.3214921","DOIUrl":"https://doi.org/10.1145/3214907.3214921","url":null,"abstract":"SEER (Simulative Emotional Expression Robot) is an animatronic humanoid robot that generates gaze and emotional facial expressions to improve animacy, lifelikeness, and impressiveness through the integrated design of modeling, mechanism, materials, and computing. The robot can simulate a user's movement, gaze, and facial expressions detected by a camera sensor. This system can be applied to puppetry, telepresence avatars, and interactive automation.","PeriodicalId":370990,"journal":{"name":"ACM SIGGRAPH 2018 Emerging Technologies","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132869200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A. Grinshpoon, S. Sadri, Gabrielle J. Loeb, Carmine Elvezio, S. Siu, Steven K. Feiner
{"title":"Hands-free augmented reality for vascular interventions","authors":"A. Grinshpoon, S. Sadri, Gabrielle J. Loeb, Carmine Elvezio, S. Siu, Steven K. Feiner","doi":"10.1145/3214907.3236462","DOIUrl":"https://doi.org/10.1145/3214907.3236462","url":null,"abstract":"During a vascular intervention (a type of minimally invasive surgical procedure), physicians maneuver catheters and wires through a patient's blood vessels to reach a desired location in the body. Since the relevant anatomy is typically not directly visible in these procedures, virtual reality and augmented reality systems have been developed to assist in 3D navigation. Because both of a physician's hands may already be occupied, we developed an augmented reality system supporting hands-free interaction techniques that use voice and head tracking to enable the physician to interact with 3D virtual content on a head-worn display while leaving both hands available intraoperatively. We demonstrate how a virtual 3D anatomical model can be rotated and scaled using small head rotations through first-order (rate) control, and can be rigidly coupled to the head for combined translation and rotation through zero-order control. This enables easy manipulation of a model while it stays close to the center of the physician's field of view.","PeriodicalId":370990,"journal":{"name":"ACM SIGGRAPH 2018 Emerging Technologies","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131925225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tomoya Sasaki, R. S. Hartanto, Kao-Hua Liu, Keitaro Tsuchiya, Atsushi Hiyama, Masahiko Inami
{"title":"LevioPole","authors":"Tomoya Sasaki, R. S. Hartanto, Kao-Hua Liu, Keitaro Tsuchiya, Atsushi Hiyama, Masahiko Inami","doi":"10.1145/3214907.3214913","DOIUrl":"https://doi.org/10.1145/3214907.3214913","url":null,"abstract":"We present LevioPole, a rod-like device that provides mid-air haptic feedback for full-body interaction in virtual reality, augmented reality, and other daily activities. The device is built from two rotor units, each comprising propellers, motors, speed controllers, batteries, and sensors, making it portable and easy to use. With one rotor unit at each end of the pole, the rotors generate both rotational and linear forces that can be driven according to the target application. In this paper, we introduce example applications in both VR and physical environments: embodied gaming with haptic feedback, and walking navigation in a specific direction.","PeriodicalId":370990,"journal":{"name":"ACM SIGGRAPH 2018 Emerging Technologies","volume":"160 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121019841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Matthew O'Toole, David B. Lindell, Gordon Wetzstein
{"title":"Real-time non-line-of-sight imaging","authors":"Matthew O'Toole, David B. Lindell, Gordon Wetzstein","doi":"10.1145/3214907.3214920","DOIUrl":"https://doi.org/10.1145/3214907.3214920","url":null,"abstract":"Non-line-of-sight (NLOS) imaging aims at recovering the shape of objects hidden outside the direct line of sight of a camera. In this work, we report on a new approach for acquiring time-resolved measurements that are suitable for NLOS imaging. The system uses a confocalized single-photon detector and pulsed laser. Unlike previously proposed NLOS imaging systems, our setup closely resembles the LIDAR systems used for autonomous vehicles, and it facilitates a closed-form solution of the associated inverse problem, which we derive in this work. This algorithm, dubbed the Light Cone Transform, is three orders of magnitude faster and more memory efficient than existing methods. We demonstrate experimental results for indoor and outdoor scenes captured and reconstructed with the proposed confocal NLOS imaging system.","PeriodicalId":370990,"journal":{"name":"ACM SIGGRAPH 2018 Emerging Technologies","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123335958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FairLift","authors":"Yuji Matsuura, Naoya Koizumi","doi":"10.1145/3214907.3214919","DOIUrl":"https://doi.org/10.1145/3214907.3214919","url":null,"abstract":"FairLift is an interaction system involving mid-air images that are visible to the naked eye under and on a water surface. In this system, the water surface reflects light from micro-mirror array plates, and a mid-air image appears. The system enables a user to interact with the mid-air image by adjusting the image position on a light-source display according to the water level measured with an ultrasonic sensor. The contributions of this system are enriching interaction with mid-air images and addressing the limitations of conventional water-display systems.","PeriodicalId":370990,"journal":{"name":"ACM SIGGRAPH 2018 Emerging Technologies","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117346546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pierre-Yves Laffont, Ali Hasnain, Pierre-Yves Guillemet, Samuel Wirajaya, Joe Khoo, D. Teng, Jean-Charles Bazin
{"title":"Verifocal","authors":"Pierre-Yves Laffont, Ali Hasnain, Pierre-Yves Guillemet, Samuel Wirajaya, Joe Khoo, D. Teng, Jean-Charles Bazin","doi":"10.1145/3214907.3214925","DOIUrl":"https://doi.org/10.1145/3214907.3214925","url":null,"abstract":"The vergence-accommodation conflict is a fundamental cause of discomfort in today's Virtual and Augmented Reality (VR/AR). We present a novel software platform and hardware for varifocal head-mounted displays (HMDs) that generate consistent accommodation cues and account for the user's prescription. We investigate multiple varifocal optical systems and propose the world's first varifocal mobile HMD based on Alvarez lenses. We also introduce a varifocal rendering pipeline, which corrects for distortion introduced by the optical focus adjustment, approximates retinal blur, incorporates eye tracking, and leverages rendered content to correct noisy eye-tracking results. We demonstrate the platform running in compact VR headsets and present initial results in video pass-through AR.","PeriodicalId":370990,"journal":{"name":"ACM SIGGRAPH 2018 Emerging Technologies","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130869399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Takashi Yamamoto, Tamaki Nishino, H. Kajima, M. Ohta, Koichi Ikeda
{"title":"Human support robot (HSR)","authors":"Takashi Yamamoto, Tamaki Nishino, H. Kajima, M. Ohta, Koichi Ikeda","doi":"10.1145/3214907.3233972","DOIUrl":"https://doi.org/10.1145/3214907.3233972","url":null,"abstract":"There is increasing worldwide interest in mobile manipulators capable of performing physical work in living spaces, driven by population aging and declining birth rates, together with the expectation of improving quality of life (QOL). Research and development in intelligent sensing and software that enable advanced recognition, judgment, and motion are essential to realizing household work by robots. To accelerate this research, we have developed a compact and safe research platform, the Human Support Robot (HSR), which can operate in an actual home environment. We expect overall R&D to accelerate through the use of a common robot platform among many researchers, since it enables them to share their research results. In this paper, we introduce the HSR design and its utilization.","PeriodicalId":370990,"journal":{"name":"ACM SIGGRAPH 2018 Emerging Technologies","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128426226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A full-color single-chip-DLP projector with an embedded 2400-fps homography warping engine","authors":"S. Kagami, K. Hashimoto","doi":"10.1145/3214907.3214927","DOIUrl":"https://doi.org/10.1145/3214907.3214927","url":null,"abstract":"We demonstrate a 24-bit full-color projector that achieves over 2400-fps motion adaptability to a fast-moving planar surface using single-chip DLP technology, which will be useful for projection-mapping applications in highly dynamic scenes. The projector can be interfaced with a host PC via standard HDMI and USB without imposing a high computational burden on the host.","PeriodicalId":370990,"journal":{"name":"ACM SIGGRAPH 2018 Emerging Technologies","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123140175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kishore Rathinavel, Praneeth Chakravarthula, K. Akşit, J. Spjut, Ben Boudaoud, T. Whitted, D. Luebke, H. Fuchs
{"title":"Steerable application-adaptive near eye displays","authors":"Kishore Rathinavel, Praneeth Chakravarthula, K. Akşit, J. Spjut, Ben Boudaoud, T. Whitted, D. Luebke, H. Fuchs","doi":"10.1145/3214907.3214911","DOIUrl":"https://doi.org/10.1145/3214907.3214911","url":null,"abstract":"The design challenges of see-through near-eye displays can be mitigated by specializing an augmented reality device for a particular application. We present a novel optical design for augmented reality near-eye displays that exploits 3D stereolithography printing techniques to achieve characteristics similar to progressive prescription binoculars. We propose manufacturing interchangeable optical components using 3D printing, leading to arbitrarily shaped static projection-screen surfaces that adapt to the targeted applications. We identify a computational optical design methodology to generate the corresponding optical components, leading to small compute and power demands. To this end, we introduce our augmented reality prototype with a moderate form factor and a large field of view. We also show that our prototype promises high resolutions through a foveation technique that uses a moving lens in front of the projection system. We believe our display technique provides a gateway to application-adaptive, easily replicable, customizable, and cost-effective near-eye display designs.","PeriodicalId":370990,"journal":{"name":"ACM SIGGRAPH 2018 Emerging Technologies","volume":"106 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114220277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}