2019 12th Asia Pacific Workshop on Mixed and Augmented Reality (APMAR): Latest Publications

The Impact of Sound Systems on the Perception of Cinematic Content in Immersive Audiovisual Productions
Pub Date: 2019-05-07 · DOI: 10.1109/APMAR.2019.8709163
Victoria Korshunova, G. Remijn, Synes Elischka, Catarina Mendonça
Abstract: With fast technological developments, traditional perceptual environments disappear and new ones emerge. These changes make the human senses adapt to new ways of perceptual understanding, for example, regarding the perceptual integration of sound and vision. Proceeding from the fact that hearing cooperates with visual attention processes, the aim of this study is to investigate the effect of different sound design conditions on the perception of cinematic content in immersive audiovisual reproductions. Here we introduce the results of a visual selective attention task (counting objects) performed by participants watching a 270-degree immersive audiovisual display, on which a movie ("Ego Cure") was shown. Four sound conditions were used, employing an increasing number of loudspeakers: mono, stereo, 5.1, and 7.1.4. Eye tracking was used to record the participants' eye gaze during the task. The eye tracking data showed that an increased number of speakers and a wider spatial audio distribution diffused the participants' attention from the task-related part of the display to non-task-related directions. The number of participants looking at the task-irrelevant display in the 7.1.4 condition was significantly higher than in the mono audio condition. This implies that additional spatial cues in the auditory modality automatically influence human visual attention (involuntary eye movements) and human analysis of visual information. Sound engineers should consider this when mixing educational or any other information-oriented productions.
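At its core, the eye-tracking analysis described in this abstract reduces to counting how many gaze samples fall inside the task-relevant region of the display under each sound condition. A minimal sketch of that computation follows; the normalized coordinate convention, AOI bounds, and sample data are hypothetical illustrations, not taken from the paper:

```python
# Sketch (not the authors' code): estimate how often gaze falls inside the
# task-relevant region of a wide display, per sound condition.
# Gaze samples are (x, y) in normalized display coordinates [0, 1];
# the task-relevant area of interest (AOI) is a hypothetical x-range.

def fraction_in_aoi(gaze_samples, aoi_x=(0.35, 0.65)):
    """Fraction of gaze samples whose x falls inside the task AOI."""
    lo, hi = aoi_x
    inside = [1 for x, y in gaze_samples if lo <= x <= hi]
    return sum(inside) / len(gaze_samples) if gaze_samples else 0.0

# Hypothetical per-condition gaze logs.
conditions = {
    "mono":  [(0.50, 0.5), (0.52, 0.4), (0.48, 0.6), (0.60, 0.5)],
    "7.1.4": [(0.10, 0.5), (0.90, 0.4), (0.50, 0.5), (0.80, 0.6)],
}
for name, samples in conditions.items():
    print(name, fraction_in_aoi(samples))
```

Comparing this fraction across conditions is one simple way to quantify the attention diffusion the study reports.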
Citations: 2
3D Body and Background Reconstruction in a Large-scale Indoor Scene using Multiple Depth Cameras
Pub Date: 2019-05-07 · DOI: 10.1109/APMAR.2019.8709280
Daisuke Kobayashi, D. Thomas, Hideaki Uchiyama, R. Taniguchi
Abstract: 3D reconstruction of indoor scenes that contain a non-rigidly moving human body using depth cameras is a task of extraordinary difficulty. Despite intensive efforts from researchers in the 3D vision community, existing methods are still limited to reconstructing small-scale scenes. This is because of the difficulty of tracking the camera motion when a target person moves in a totally different direction. Due to the narrow field of view (FoV) of consumer-grade red-green-blue-depth (RGB-D) cameras, a target person (generally placed about 2–3 meters from the camera) covers most of the camera's FoV. Therefore, there are not enough features from the static background to track the motion of the camera. In this paper, we propose a system that reconstructs a moving human body and the background of an indoor scene using multiple depth cameras. Our system is composed of three Kinects that are approximately set in the same line and facing the same direction so that their FoVs do not overlap (to avoid interference). Owing to this setup, we capture images of a person moving in a large-scale indoor scene. The three Kinect cameras are calibrated with a robust method that uses three large non-parallel planes. A moving person is detected using human skeleton information and is reconstructed separately from the static background. By separating the human body and the background, static 3D reconstruction can be adopted for the static background area, while a method specialized for the human body area can be used to reconstruct the 3D model of the moving person. Experimental results show the performance of the proposed system for human body reconstruction in a large-scale indoor scene.
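The multi-camera fusion this abstract describes rests on two standard operations: back-projecting each depth pixel into camera coordinates, then moving the result into a shared world frame via that camera's calibrated extrinsics. A minimal sketch assuming a pinhole model with hypothetical Kinect-like intrinsics (not the authors' code):

```python
import numpy as np

# Sketch (assumed pinhole model, not the paper's code): back-project a depth
# pixel to a 3D point in camera coordinates, then transform it into a shared
# world frame using that camera's extrinsics, the basic step when fusing
# several calibrated depth cameras into one large-scale reconstruction.

def backproject(u, v, depth, fx, fy, cx, cy):
    """Pixel (u, v) with depth in meters -> 3D point in camera coordinates."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def to_world(p_cam, R, t):
    """Apply extrinsics (rotation R, translation t) to reach the world frame."""
    return R @ p_cam + t

# Hypothetical Kinect-like intrinsics and an identity extrinsic.
fx = fy = 525.0
cx, cy = 319.5, 239.5
p = backproject(320, 240, 2.0, fx, fy, cx, cy)
world = to_world(p, np.eye(3), np.zeros(3))
```

With three cameras, each point cloud would be transformed by its own (R, t), recovered here by the paper's plane-based calibration, into a common frame before merging.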
Citations: 0
How the Multimodal Media in Augmented Reality Affects Museum Learning Experience
Pub Date: 2019-03-28 · DOI: 10.1109/APMAR.2019.8709286
Wei-Jane Lin, W. Lo, Hsiu-Ping Yueh
Abstract: As AR is envisioned to become the leading yet paradigm-shifting user interface metaphor for situated computing, it is becoming essential to understand how users process and perceive AR and its enabling technologies, since the effect of AR relies heavily on users' perception to integrate digital information with the real world. From the perspective of situated cognition and learning, this evaluation study investigated how integrative multimodal representations in AR affect visitors' museum experiences. A genuine exhibit of citrus fruits in an experimental gallery was constructed using marker-based AR and video see-through displays. A between-subject experiment with 48 college students was conducted to evaluate the effects of integrative multimodal representations on users' flow experiences and interaction behaviors. Preliminary findings suggested that users perceived the integrative multimodal representations in AR as more engaging than the static exhibit. Participants reported having clearer goals when visiting the AR exhibit, and they appreciated the immediate feedback provided by AR, which allowed them to confirm their actions.
Citations: 3
Simultaneous 3D Tracking and Reconstruction of Multiple Moving Rigid Objects
Pub Date: 2019-03-28 · DOI: 10.1109/APMAR.2019.8709158
Takehiro Ozawa, Yoshikatsu Nakajima, H. Saito
Abstract: Most SLAM methods are based on the assumption of a static scene, so localization of the camera and mapping of the scene can fail or lose accuracy when the scene includes moving objects. This paper presents a method for simultaneous mapping of moving objects in the target scene and localization of the moving camera, based on geometric segmentation of each temporal frame. By segmenting the target scene using only its geometric structure, our method can estimate the relative pose of the camera and of every geometrically segmented area, even without recognizing each object. To confirm the effectiveness of the proposed method, we experimentally show that our method can estimate relative poses for all segmented areas in the scene, achieving SLAM for a scene that includes multiple moving objects.
Citations: 2
Semantic Segmentation of 3D Point Cloud to Virtually Manipulate Real Living Space
Pub Date: 2019-03-28 · DOI: 10.1109/APMAR.2019.8709156
Yuki Ishikawa, Ryo Hachiuma, Naoto Ienaga, W. Kuno, Yuta Sugiura, H. Saito
Abstract: This paper presents a method for the virtual manipulation of real living space using semantic segmentation of a 3D point cloud captured in the real world. We applied PointNet to segment each piece of furniture from the point cloud of a real indoor environment captured by moving an RGB-D camera. For semantic segmentation, we focused on local geometric information not used in PointNet, and we propose a method to refine the class probability of the labels attached to each point in PointNet's output. The effectiveness of our method was experimentally confirmed. We then created 3D models of real-world furniture using the point cloud with corrected labels, and we virtually manipulated real living space using Dollhouse VR, a layout system.
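The refinement step this abstract mentions exploits local geometry to make per-point labels consistent. One generic way to do this (a sketch under assumed inputs, not the paper's exact method) is to smooth each point's class-probability vector over its k nearest neighbors before taking the argmax:

```python
import numpy as np

# Generic sketch (not the paper's exact method): refine per-point class
# probabilities from a segmentation network by averaging each point's
# probability vector with those of its k nearest neighbors, so labels
# become locally consistent across the point cloud.

def refine_probs(points, probs, k=3):
    """points: (N, 3) xyz; probs: (N, C) class probabilities. Returns (N, C)."""
    n = len(points)
    refined = np.empty_like(probs)
    for i in range(n):
        # Brute-force nearest neighbors; a k-d tree would scale better.
        d = np.linalg.norm(points - points[i], axis=1)
        nn = np.argsort(d)[:k]          # includes the point itself
        refined[i] = probs[nn].mean(axis=0)
    return refined
```

Final labels would then be `refine_probs(points, probs).argmax(axis=1)`; an isolated mislabeled point surrounded by consistently labeled neighbors gets pulled toward the local majority.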
Citations: 9
Analysis and Evaluation of Behavior of R-V Dynamics Illusion in Various Conditions
Pub Date: 2019-03-28 · DOI: 10.1109/APMAR.2019.8709273
Yuta Kataoka, Kaiki Ban, Tsubasa Fujimitsu, Taiki Yamada, Satoshi Hashiguchi, F. Shibata, Asako Kimura
Abstract: This study examines the R-V Dynamics Illusion caused by different motion states of real and virtual objects. We discovered that various perceptual changes occur when a CG image imitating a liquid is superimposed onto a real object. The real object was perceived to be lighter when it was swung and the CG liquid moved, compared to when the liquid did not move, and the amount of muscle activity was found to decrease. In this research, the influence of the R-V Dynamics Illusion was analyzed by measuring the acceleration of the real object and the muscle fatigue of the subject. The experimental results showed that, when the real object was swung and the liquid moved, the object was swung at a low acceleration and the subjects' muscles tended to be fatigued.
Citations: 0
Development of Multi-View Video Browsing Interface Specialized for Developmental Child Training
Pub Date: 2019-03-28 · DOI: 10.1109/APMAR.2019.8709275
Nobuyuki Kitamura, Hidehiko Shishido, Takuya Enomoto, Y. Kameda, Jun-ichi Yamamoto, I. Kitahara
Abstract: Efforts to support the skills of developmental child training/education workers are implemented using visual information captured from nursery teachers', therapists', and children's intervention work. This paper proposes a highly operable multi-view video browsing interface utilizing a multi-touch input method, suitable for users who are unfamiliar with devices that require complicated operations. We conducted questionnaires and request surveys at an actual developmental child training site and propose an interface based on the results. The proposed interface improves operability and visibility in two ways. First, we differentiate between operation methods to reduce the operation errors that occur during image browsing. Second, we introduce image processing to promote understanding of the presented image. In experimental evaluations, we investigate the video presentation of scenes that users can easily understand and verify the operability improvement of the proposed interface through a comparative experiment using our pilot system.
Citations: 2
Study of 3D Target Replacement in AR Based On Target Tracking
Pub Date: 2019-03-28 · DOI: 10.1109/APMAR.2019.8709287
Jiahui Bai, Guangyu Nie, Weitao Song, Yue Liu, Yongtian Wang
Abstract: Augmented reality applications face the problem of 3D target replacement for better mixing effects; however, existing methods suffer from heavy computation and high hardware requirements. Inspired by the development of deep learning in target detection and target tracking, this paper introduces a neural network and trains a detector to identify the target in binocular images and generate its three-dimensional position. Using the difference between the target's positions in the two images and the camera parameters, the depth calculation formula yields the position of the target. Experimental results show that our method can generate the 3D position of the target, which provides a new idea for solving object replacement in augmented reality systems.
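The depth calculation this abstract alludes to is, in the standard rectified-stereo case, the relation Z = f * B / d, where f is the focal length in pixels, B the baseline, and d the disparity between the two views. A minimal sketch with hypothetical parameters (not the paper's code):

```python
# Sketch of the standard rectified-stereo depth relation (hypothetical
# parameters, not the paper's code): depth Z = f * B / d, where f is the
# focal length in pixels, B the baseline in meters, and d the horizontal
# disparity in pixels of the same point between left and right images.

def stereo_depth(x_left, x_right, focal_px, baseline_m):
    """Depth in meters of a point seen at x_left / x_right in rectified images."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return focal_px * baseline_m / disparity

# Example: f = 700 px, baseline = 0.1 m, disparity = 35 px -> depth 2.0 m.
depth = stereo_depth(400.0, 365.0, 700.0, 0.1)
```

Given the depth and the intrinsics, the full 3D position follows by back-projecting the detected image coordinates, which is presumably how the paper turns its per-image detections into a 3D target position.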
Citations: 2
[Copyright notice]
Pub Date: 2019-03-01 · DOI: 10.1109/apmar.2019.8709150
Citations: 0
APMAR 2019 Keynote Speaker
Pub Date: 2019-03-01 · DOI: 10.1109/apmar.2019.8709152
Citations: 0