Latest Publications from SIGGRAPH Asia 2020 Emerging Technologies

Dual Body: Method of Tele-Cooperative Avatar Robot with Passive Sensation Feedback to Reduce Latency Perception
SIGGRAPH Asia 2020 Emerging Technologies · Pub Date: 2020-12-04 · DOI: 10.1145/3415255.3422893
Vibol Yem, Kentaro Yamaoka, Gaku Sueta, Y. Ikei
Abstract: Dual Body was developed as a telexistence/telepresence system in which the user does not need to continuously operate an avatar robot but can still passively perceive feedback sensations when the robot performs actions. The system recognizes the user's speech commands, and the robot performs the task cooperatively. By combining passive sensation feedback with cooperation from the robot, the proposed system greatly reduces both perceived latency and fatigue, which increases the quality of experience and task efficiency. In the demo experience, participants command the robot from individual rooms via a URL and RoomID, and perceive sound and visual feedback from the robot as it travels, such as images and landscapes of the Tokyo Metropolitan University campus.
Citations: 0
OmniPhotos: Casual 360° VR Photography with Motion Parallax
Pub Date: 2020-12-04 · DOI: 10.1145/3415255.3422884
Tobias Bertel, Mingze Yuan, Reuben Lindroos, Christian Richardt
Abstract: Until now, immersive 360° VR panoramas could not be captured casually and reliably at the same time, as state-of-the-art approaches involve time-consuming or expensive capture processes that prevent the casual capture of real-world VR environments. Existing approaches are also often limited in their supported range of head motion. We introduce OmniPhotos, a novel approach for casually and reliably capturing high-quality 360° VR panoramas. Our approach requires only a single sweep of a consumer 360° video camera as input, which takes less than 3 seconds with a rotating selfie stick. The captured video is transformed into a hybrid scene representation consisting of a coarse scene-specific proxy geometry and optical flow between consecutive video frames, enabling 5-DoF real-world VR experiences. The large capture radius and 360° field of view significantly expand the range of head motion compared to previous approaches. Among all competing methods, ours is the simplest and, by an order of magnitude, the fastest. We have captured more than 50 OmniPhotos and show video results for a large variety of scenes. We will make our code and datasets publicly available.
Citations: 2
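The hybrid representation above stores optical flow between consecutive video frames so that views between captured camera positions can be synthesized. A minimal numpy sketch of that ingredient, flow-based view interpolation with nearest-neighbour lookup (the function name and simplifications are mine, not from the paper):

```python
import numpy as np

def warp_frame(frame, flow, t):
    """Backward-warp a frame along per-pixel optical flow scaled by
    t in [0, 1] (nearest-neighbour lookup). flow[..., 0] is the
    horizontal and flow[..., 1] the vertical displacement in pixels."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Look up source pixels displaced by a fraction t of the flow.
    src_x = np.clip(np.round(xs - t * flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys - t * flow[..., 1]).astype(int), 0, h - 1)
    return frame[src_y, src_x]
```

With t swept from 0 to 1, this produces intermediate views between two consecutive frames; the real system additionally reprojects through the proxy geometry.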
HaptoMapping: Visuo-Haptic AR System using Projection-based Control of Wearable Haptic Devices
Pub Date: 2020-12-04 · DOI: 10.1145/3415255.3422891
Yamato Miyatake, T. Hiraki, Tomosuke Maeda, D. Iwai, Kosuke Sato
Abstract: Visuo-haptic augmented reality (AR) systems that present visual and haptic sensations in a spatially and temporally consistent manner have the potential to improve the performance of AR applications. However, conventional systems suffer from issues such as enclosing the user's view with a display, restricting the workspace to a limited flat area, or altering the presented visual information. In this paper, we propose HaptoMapping, a novel projection-based AR system that can present consistent visuo-haptic sensations on a non-planar physical surface without requiring users to wear any visual display, while preserving the quality of the visual information. We implemented a prototype of HaptoMapping consisting of a projection system and a wearable haptic device, and we introduce three application scenarios in daily scenes.
Citations: 2
CoiLED Display: Make Everything Displayable
Pub Date: 2020-12-04 · DOI: 10.1145/3415255.3422889
Saya Suzunaga, Yuichi Itoh, Kazuyuki Fujita, Ryo Shirai, T. Onoye
Abstract: We propose CoiLED Display, a flexible and scalable display that transforms ordinary objects in our environment into displays simply by coiling the device around them. CoiLED Display consists of a strip-shaped display unit with a single row of attached LEDs, and it can represent information, after a calibration process, as it is wrapped onto a target object. The calibration required for fitting each object to the system is achieved by capturing the entire object from multiple angles with an RGB camera, which recognizes the relative positional relationship among the LEDs. The advantage of this approach is that the calibration is quite simple yet robust, even if the coiled strips are misaligned or overlap each other. We demonstrate a proof-of-concept prototype using strips with a 5-mm width and LEDs mounted at 2-mm intervals, and discuss various example applications of the proposed system.
Citations: 0
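Once calibration has recovered each LED's position on the object, displaying an image reduces to sampling that image at the calibrated coordinates. A small numpy sketch of this step under my own naming (the paper does not specify this interface):

```python
import numpy as np

def strip_colors(image, led_positions):
    """Sample the target image at each LED's calibrated (x, y) pixel
    position to obtain the value each LED of the coiled strip should
    show (nearest-neighbour sampling, positions clipped to bounds)."""
    h, w = image.shape[:2]
    pos = np.asarray(led_positions, dtype=float)
    xs = np.clip(np.round(pos[:, 0]).astype(int), 0, w - 1)
    ys = np.clip(np.round(pos[:, 1]).astype(int), 0, h - 1)
    return image[ys, xs]
```

Because sampling depends only on the calibrated positions, misaligned or overlapping coils need no special handling at render time.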
Realistic Volumetric 3D Display Using Physical Materials
Pub Date: 2020-12-04 · DOI: 10.1145/3415255.3422879
Ray Asahina, Takashi Nomoto, Takatoshi Yoshida, Yoshihiro Watanabe
Abstract: Conventional swept volumetric displays can provide accurate physical cues for depth perception. However, the quality of texture reproduction is not high because these displays use high-speed projectors with low bit depth and low resolution. In this study, to address this limitation of swept volumetric displays while retaining their advantages, a new swept volumetric three-dimensional (3D) display is designed using physical materials as screens. The physical materials are used directly to reproduce textures on a displayed 3D surface. Further, our system can achieve hidden-surface removal based on real-time viewpoint tracking.
Citations: 1
Interactive Minimal Latency Laser Graphics Pipeline
Pub Date: 2020-12-04 · DOI: 10.1145/3415255.3422885
Jayson Haebich, C. Sandor, Á. Cassinelli
Abstract: We present the design and implementation of a "Laser Graphics Processing Unit" (LGPU) featuring a proposed re-configurable graphics pipeline capable of minimal-latency interactive feedback without the need for communication with a host computer. This is a novel approach to creating interactive graphics in which a simple program describes the interaction on a vertex. Similar in design to a geometry or fragment shader on a GPU, these programs are uploaded at initialisation and do not require input from any external micro-controller while running. The interaction shader takes input from a light sensor and updates the vertex and fragment shaders, an operation that can be parallelised. Once loaded onto our prototype LGPU, the pipeline can create laser graphics that react within 4 ms of an interaction and run without input from a computer. The pipeline achieves this low latency by having the interaction shader communicate with the geometry and vertex shaders that also run on the LGPU. This enables low-latency displays such as car counters, musical instrument interfaces, and touchless projected widgets or buttons. In our tests, we achieved a reaction time of 4 ms at ranges of up to 15 m.
Citations: 2
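The pipeline's key idea is that sensor input flows straight into per-vertex programs with no host round-trip. A toy CPU sketch of one frame of such a pipeline (function names and shader signatures are illustrative assumptions, not the LGPU's actual interface):

```python
def run_frame(vertices, sensor_value, interaction_shader, vertex_shader):
    """One frame, sketched on the CPU: the interaction shader consumes
    the light-sensor reading and updates each vertex; the vertex shader
    then transforms the result. No host computer participates."""
    interacted = [interaction_shader(v, sensor_value) for v in vertices]
    return [vertex_shader(v) for v in interacted]
```

Each vertex is processed independently, which is what makes the real pipeline's parallelisation, and hence its 4 ms reaction time, possible.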
Dynamic Projection Mapping with Networked Multi-projectors Based on Pixel-parallel Intensity Control
Pub Date: 2020-12-04 · DOI: 10.1145/3415255.3422888
Takashi Nomoto, Wanlong Li, Hao-Lun Peng, Yoshihiro Watanabe
Abstract: We present a new method of mapping projections onto dynamic scenes by using multiple high-speed projectors. The proposed method controls the intensity in a pixel-parallel manner for each projector. As each projected image is updated in real time with low latency, adaptive shadow removal can be achieved for a projected image even in a complicated dynamic scene. Additionally, our pixel-parallel calculation method allows a distributed system configuration, so the number of projectors can be increased through networked connections for high scalability. We demonstrated seamless mapping onto dynamic scenes at 360 fps by using ten cameras and four projectors.
Citations: 7
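One way to read "pixel-parallel intensity control" with shadow removal is that each surface pixel's target intensity is divided among the projectors whose light currently reaches it, so a pixel shadowed for one projector is covered by the rest. A minimal numpy sketch under that assumption (the paper's actual control law may differ):

```python
import numpy as np

def split_intensity(target, visibility):
    """Divide each surface pixel's target intensity equally among the
    projectors that can reach it. visibility is a boolean stack of
    per-projector masks; every pixel is computed independently."""
    vis = visibility.astype(float)                # (n_proj, H, W)
    n_visible = np.maximum(vis.sum(axis=0), 1.0)  # guard fully shadowed pixels
    return vis * (target / n_visible)             # per-projector images
```

Because each pixel depends only on its own masks, the computation distributes naturally across networked nodes, matching the scalability claim above.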
Mid-air Thermal Display via High-intensity Ultrasound
Pub Date: 2020-12-04 · DOI: 10.1145/3415255.3422895
Takaaki Kamigaki, Shun Suzuki, H. Shinoda
Abstract: This paper proposes a mid-air system that provides both heating and cooling sensations to the hand via a high-intensity ultrasound spot (Fig. 1, left). We employ airborne ultrasound phased arrays (AUPAs), which can generate a high-intensity focal point at an arbitrary position in the air. By changing the position of the focal point relative to the hand, our system can provide each thermal sensation.
Citations: 5
Bubble Mirror: An Interactive Face Image Display Using Electrolysis Bubbles
Pub Date: 2020-12-04 · DOI: 10.1145/3415255.3422890
Ayaka Ishii, Namiki Tanaka, I. Siio
Citations: 0
High-Speed Human Arm Projection Mapping with Skin Deformation
Pub Date: 2020-12-04 · DOI: 10.1145/3415255.3422887
Hao-Lun Peng, Yoshihiro Watanabe
Abstract: Augmenting the human arm surface via projection mapping can have a great impact on our daily lives in entertainment, human-computer interaction, and education. However, conventional methods ignore skin deformation and have a high latency from motion to projection, which degrades the user experience. In this paper, we propose a projection mapping system that solves both problems. First, we combine a state-of-the-art parametric deformable surface model with an efficient regression-based accuracy-compensation method for skin deformation. The compensation method modifies the texture coordinates, using joint-tracking results, to achieve high-speed and highly accurate image generation for projection. Second, we develop a high-speed system that reduces the motion-to-projection latency to within 10 ms. Compared to conventional methods, this system provides more realistic experiences.
Citations: 3
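A regression-based compensation of the kind described above can be sketched as fitting a model from joint-tracking parameters to texture-coordinate offsets. Below, a deliberately simple linear least-squares version (the paper's regressor and feature set are not specified here, so treat this as a stand-in):

```python
import numpy as np

def fit_uv_compensation(joint_params, uv_offsets):
    """Fit a linear model (with bias) mapping joint-tracking parameters
    to texture-coordinate (du, dv) offsets via least squares."""
    X = np.hstack([joint_params, np.ones((len(joint_params), 1))])
    coeffs, *_ = np.linalg.lstsq(X, uv_offsets, rcond=None)
    return coeffs

def predict_uv_offset(coeffs, joint_param):
    """Predict the UV correction for one joint configuration."""
    return np.append(joint_param, 1.0) @ coeffs
```

At run time only the cheap matrix-vector product runs per frame, which is compatible with a motion-to-projection budget of a few milliseconds.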