2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR): Latest Publications

FLASH: Video-Embeddable AR Anchors for Live Events
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) Pub Date : 2021-10-01 DOI: 10.1109/ismar52148.2021.00066
E. Lu, John Miller, Nuno Pereira, Anthony G. Rowe
{"title":"FLASH: Video-Embeddable AR Anchors for Live Events","authors":"E. Lu, John Miller, Nuno Pereira, Anthony G. Rowe","doi":"10.1109/ismar52148.2021.00066","DOIUrl":"https://doi.org/10.1109/ismar52148.2021.00066","url":null,"abstract":"Public spaces like concert stadiums and sporting arenas are ideal venues for AR content delivery to crowds of mobile phone users. Unfortunately, these environments tend to be some of the most challenging in terms of lighting and dynamic staging for vision-based relocalization. In this paper, we introduce FLASH1, a system for delivering AR content within challenging lighting environments that uses active tags (i.e., blinking) with detectable features from passive tags (quads) for marking regions of interest and determining pose. This combination allows the tags to be detectable from long distances with significantly less computational overhead per frame, making it possible to embed tags in existing video displays like large jumbotrons. To aid in pose acquisition, we implement a gravity-assisted pose solver that removes the ambiguous solutions that are often encountered when trying to localize using standard passive tags. We show that our technique outperforms similarly sized passive tags in terms of range by 20-30% and is fast enough to run at 30 FPS even within a mobile web browser on a smartphone.","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133704056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
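The gravity-assisted pose solver mentioned in the abstract above targets the two-fold ambiguity that planar-tag pose solvers (e.g., IPPE-style solvers) typically return. Below is a minimal sketch of that general idea, assuming the device IMU supplies the gravity direction in the camera frame and that the tag's up axis in its own frame is known; the function and variable names are illustrative, not the paper's implementation.

```python
# Minimal sketch of gravity-assisted disambiguation between the two pose
# solutions that planar-tag solvers usually produce.
# Assumptions (not from the paper): the tag's "up" direction in its own
# frame is +Y, and the IMU provides the gravity direction in camera coords.
import numpy as np

def pick_pose_with_gravity(R_candidates, gravity_cam, tag_up_tag=(0.0, 1.0, 0.0)):
    """Return the candidate rotation whose tag-up axis, mapped into the
    camera frame, points most nearly opposite to the measured gravity."""
    up_tag = np.asarray(tag_up_tag, dtype=float)
    g = np.asarray(gravity_cam, dtype=float)
    g = g / np.linalg.norm(g)                      # unit gravity (points "down")
    best_R, best_score = None, -np.inf
    for R in R_candidates:                         # usually exactly two candidates
        up_cam = R @ up_tag                        # tag-up expressed in camera frame
        score = np.dot(up_cam, -g)                 # alignment with "up" (= -gravity)
        if score > best_score:
            best_R, best_score = R, score
    return best_R

# Example: two hypothetical candidate rotations and an IMU gravity reading.
R1 = np.eye(3)
R2 = np.diag([1.0, -1.0, -1.0])                    # flipped (ambiguous) solution
gravity = np.array([0.05, -0.98, 0.15])            # gravity in camera coordinates
print(pick_pose_with_gravity([R1, R2], gravity))   # selects R1
```

The same scoring idea generalizes to more than two candidates; only the alignment test against the IMU gravity vector matters.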
Measuring the Perceived Three-Dimensional Location of Virtual Objects in Optical See-Through Augmented Reality
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) Pub Date : 2021-10-01 DOI: 10.1109/ismar52148.2021.00025
Farzana Alam Khan, V. V. R. M. K. Muvva, Dennis Wu, M. S. Arefin, Nate Phillips, J. Swan
{"title":"Measuring the Perceived Three-Dimensional Location of Virtual Objects in Optical See-Through Augmented Reality","authors":"Farzana Alam Khan, V. V. R. M. K. Muvva, Dennis Wu, M. S. Arefin, Nate Phillips, J. Swan","doi":"10.1109/ismar52148.2021.00025","DOIUrl":"https://doi.org/10.1109/ismar52148.2021.00025","url":null,"abstract":"For optical see-through augmented reality (AR), a new method for measuring the perceived three-dimensional location of virtual objects is presented, where participants verbally report a virtual object’s location relative to both a vertical and horizontal grid. The method is tested with a small (1.95 × 1.95 × 1.95 cm) virtual object at distances of 50 to 80 cm, viewed through a Microsoft HoloLens 1st generation AR display. Two experiments examine two different virtual object designs, whether turning in a circle between reported object locations disrupts HoloLens tracking, and whether accuracy errors, including a rightward bias and underestimated depth, might be due to systematic errors that are restricted to a particular display. Turning in a circle did not disrupt HoloLens tracking, and testing with a second display did not suggest systematic errors restricted to a particular display. Instead, the experiments are consistent with the hypothesis that, when looking downwards at a horizontal plane, HoloLens 1st generation displays exhibit a systematic rightward perceptual bias. Precision analysis suggests that the method could measure the perceived location of a virtual object within an accuracy of less than 1 mm.","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124611441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
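The accuracy and precision analysis described in the abstract above amounts to comparing reported object locations against true locations and separating bias from spread. A minimal sketch of that comparison follows; the numbers and the axis convention (x right, y up, z depth, all in centimetres) are made up for illustration.

```python
# Minimal sketch: accuracy (bias) and precision (spread) of reported
# 3D object locations versus true locations. Data values are placeholders.
import numpy as np

true_pos = np.array([[0.0, 0.0, 60.0],
                     [0.0, 0.0, 70.0],
                     [0.0, 0.0, 80.0]])            # true locations (cm)
reported = np.array([[0.4, -0.1, 58.9],
                     [0.5,  0.0, 68.5],
                     [0.6, -0.2, 78.1]])           # verbally reported, read off grids

error = reported - true_pos
accuracy = error.mean(axis=0)           # mean signed error per axis (bias)
precision = error.std(axis=0, ddof=1)   # spread of repeated judgments per axis

print("bias (x, y, z) cm:", accuracy)       # e.g. rightward (+x) bias, underestimated depth
print("precision (x, y, z) cm:", precision)
```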
VR Collaborative Object Manipulation Based on Viewpoint Quality
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) Pub Date : 2021-10-01 DOI: 10.1109/ismar52148.2021.00020
Lili Wang, Xiaolong Liu, Xiangyu Li
{"title":"VR Collaborative Object Manipulation Based on Viewpoint Quality","authors":"Lili Wang, Xiaolong Liu, Xiangyu Li","doi":"10.1109/ismar52148.2021.00020","DOIUrl":"https://doi.org/10.1109/ismar52148.2021.00020","url":null,"abstract":"We introduce a collaborative manipulation method to improve the efficiency and accuracy of object manipulation in virtual reality applications with multiple users. When multiple users manipulate an object in collaboration, a certain user may have a better perspective than other users at a certain moment, and can clearly observe the object to be manipulated and the target position, and it is more efficient and accurate for him to manipulate the object. We construct a viewpoint quality function and evaluate the viewpoints of multiple users by calculating its three components: the visibility of the object need to be manipulated, the visibility of target, the depth and distance combined of the target. By comparing the viewpoint quality of multiple users, the user with the highest viewpoint quality is determined as the dominant manipulator, who can manipulate the object at the moment. A temporal filter is proposed to filter the dominant sequence generated by the previous frames and the current frame, which reduces the dominant manipulator jumping back and forth between multiple users in a short time slice, making the determination of the dominant manipulator more stable. We have designed a user study and tested our method with three multi-user collaborative manipulation tasks. Compared to the previous methods, our method showed significant improvement in task completion time, rotation accuracy, user participation and task load.","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128188667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
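The abstract above names three viewpoint-quality components and a temporal filter that stabilizes the choice of dominant manipulator. Below is a minimal sketch of that pipeline; the weighted-sum form of the quality function, the weights, and the mode-over-a-window filter are illustrative assumptions rather than the paper's exact formulation.

```python
# Minimal sketch: per-user viewpoint quality plus a temporal filter that
# keeps the dominant manipulator from flickering between users.
from collections import Counter, deque

def viewpoint_quality(obj_visibility, target_visibility, depth_distance_score,
                      w=(0.4, 0.4, 0.2)):
    """Weighted sum of the three components named in the abstract (each in [0, 1])."""
    return w[0] * obj_visibility + w[1] * target_visibility + w[2] * depth_distance_score

class DominantManipulatorFilter:
    """Stabilize the dominant user by voting over the last N frames."""
    def __init__(self, window=30):
        self.history = deque(maxlen=window)

    def update(self, per_user_quality):
        # per_user_quality: dict user_id -> viewpoint quality for this frame
        best_user = max(per_user_quality, key=per_user_quality.get)
        self.history.append(best_user)
        return Counter(self.history).most_common(1)[0][0]

# Example: user "B" briefly scores higher, but "A" stays dominant until
# "B" wins consistently over the voting window.
filt = DominantManipulatorFilter(window=5)
frames = [{"A": 0.8, "B": 0.5}, {"A": 0.7, "B": 0.9}, {"A": 0.8, "B": 0.6},
          {"A": 0.4, "B": 0.9}, {"A": 0.3, "B": 0.9}, {"A": 0.3, "B": 0.9}]
for q in frames:
    print(filt.update(q))   # A, A, A, A, B, B
```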
ISMAR 2021 Paper Reviewers for Conference Papers
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) Pub Date : 2021-10-01 DOI: 10.1109/ismar52148.2021.00009
{"title":"ISMAR 2021 Paper Reviewers for Conference Papers","authors":"","doi":"10.1109/ismar52148.2021.00009","DOIUrl":"https://doi.org/10.1109/ismar52148.2021.00009","url":null,"abstract":"","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115352033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Neural Cameras: Learning Camera Characteristics for Coherent Mixed Reality Rendering
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) Pub Date : 2021-10-01 DOI: 10.1109/ismar52148.2021.00068
D. Mandl, P. Roth, T. Langlotz, Christoph Ebner, Shohei Mori, S. Zollmann, Peter Mohr, Denis Kalkofen
{"title":"Neural Cameras: Learning Camera Characteristics for Coherent Mixed Reality Rendering","authors":"D. Mandl, P. Roth, T. Langlotz, Christoph Ebner, Shohei Mori, S. Zollmann, Peter Mohr, Denis Kalkofen","doi":"10.1109/ismar52148.2021.00068","DOIUrl":"https://doi.org/10.1109/ismar52148.2021.00068","url":null,"abstract":"Coherent rendering is important for generating plausible Mixed Reality presentations of virtual objects within a user’s real-world environment. Besides photo-realistic rendering and correct lighting, visual coherence requires simulating the imaging system that is used to capture the real environment. While existing approaches either focus on a specific camera or a specific component of the imaging system, we introduce Neural Cameras, the first approach that jointly simulates all major components of an arbitrary modern camera using neural networks. Our system allows for adding new cameras to the framework by learning the visual properties from a database of images that has been captured using the physical camera. We present qualitative and quantitative results and discuss future direction for research that emerge from using Neural Cameras.","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"139 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126058573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
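The abstract above describes learning a camera's visual properties from images captured with that physical camera. As a rough illustration of the general idea (not the paper's architecture), the sketch below trains a tiny image-to-image network, assumed here to be written in PyTorch, to map an "ideal" rendering toward the look of the target camera using paired images; all sizes and names are placeholders.

```python
# Toy stand-in for learning a camera-characteristic image mapping from
# (rendered, captured) image pairs. Not the Neural Cameras architecture.
import torch
import torch.nn as nn

class ToyCameraSimulator(nn.Module):
    """Small CNN mapping a rendered RGB image to a camera-styled RGB image."""
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, rendered):
        return self.net(rendered)

# Training-loop sketch on placeholder image pairs from one camera.
model = ToyCameraSimulator()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
rendered = torch.rand(4, 3, 64, 64)       # placeholder rendered batch
captured = torch.rand(4, 3, 64, 64)       # placeholder captured ground truth
for _ in range(10):
    optim.zero_grad()
    loss = nn.functional.l1_loss(model(rendered), captured)
    loss.backward()
    optim.step()
```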
Supporting Iterative Virtual Reality Analytics Design and Evaluation by Systematic Generation of Surrogate Clustered Datasets
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) Pub Date : 2021-10-01 DOI: 10.1109/ismar52148.2021.00054
S. Tadeja, P. Langdon, P. Kristensson
{"title":"Supporting Iterative Virtual Reality Analytics Design and Evaluation by Systematic Generation of Surrogate Clustered Datasets","authors":"S. Tadeja, P. Langdon, P. Kristensson","doi":"10.1109/ismar52148.2021.00054","DOIUrl":"https://doi.org/10.1109/ismar52148.2021.00054","url":null,"abstract":"Virtual Reality (VR) is a promising technology platform for immersive visual analytics. However, the design space of VR analytics interface design is vast and difficult to explore using traditional A/B comparisons in formal or informal controlled experiments— a fundamental part of an iterative design process. A key factor that complicates such comparisons is the dataset. Exposing participants to the same dataset in all conditions introduces an unavoidable learning effect. On the other hand, using different datasets for all experimental conditions introduces the dataset itself as an uncontrolled variable, which reduces internal validity to an unacceptable degree. In this paper, we propose to rectify this problem by introducing a generative process for synthesizing clustered datasets for VR analytics experiments. This process generates datasets that are distinct while simultaneously allowing systematic comparisons in experiments. A key advantage is that these datasets can then be used in iterative design processes. In a two-part experiment, we show the validity of the generative process and demonstrate how new insights in VR-based visual analytics can be gained using synthetic datasets.","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133952363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
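The core idea in the abstract above is generating datasets that differ across conditions while sharing controlled structural parameters. A generic sketch of that idea follows, using scikit-learn's cluster generator with fixed cluster count, size and spread but condition-specific seeds; this is an illustration, not the paper's generative process.

```python
# Minimal sketch: surrogate clustered datasets that are distinct per
# condition (different seeds) but structurally comparable (same parameters).
import numpy as np
from sklearn.datasets import make_blobs

def surrogate_dataset(seed, n_clusters=5, points_per_cluster=200, spread=0.6,
                      bounds=(-10.0, 10.0), dims=3):
    """One 3D clustered dataset with fixed structure but seed-specific layout."""
    X, labels = make_blobs(
        n_samples=n_clusters * points_per_cluster,
        n_features=dims,
        centers=n_clusters,
        cluster_std=spread,
        center_box=bounds,
        random_state=seed,
    )
    return X, labels

# Two structurally comparable but visually distinct datasets for conditions A and B.
X_a, y_a = surrogate_dataset(seed=1)
X_b, y_b = surrogate_dataset(seed=2)
print(X_a.shape, X_b.shape, np.unique(y_a).size)   # (1000, 3) (1000, 3) 5
```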
ISMAR 2021 Science and Technology Program Committee for Conference Papers
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) Pub Date : 2021-10-01 DOI: 10.1109/ismar52148.2021.00008
{"title":"ISMAR 2021 Science and Technology Program Committee for Conference Papers","authors":"","doi":"10.1109/ismar52148.2021.00008","DOIUrl":"https://doi.org/10.1109/ismar52148.2021.00008","url":null,"abstract":"","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131459411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Message from the ISMAR 2021 Science and Technology Conference Paper Program Chairs
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) Pub Date : 2021-10-01 DOI: 10.1109/ismar52148.2021.00006
{"title":"Message from the ISMAR 2021 Science and Technology Conference Paper Program Chairs","authors":"","doi":"10.1109/ismar52148.2021.00006","DOIUrl":"https://doi.org/10.1109/ismar52148.2021.00006","url":null,"abstract":"","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133714611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SAR: Spatial-Aware Regression for 3D Hand Pose and Mesh Reconstruction from a Monocular RGB Image
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) Pub Date : 2021-10-01 DOI: 10.1109/ismar52148.2021.00024
Xiaozheng Zheng, Pengfei Ren, Haifeng Sun, Jingyu Wang, Q. Qi, J. Liao
{"title":"SAR: Spatial-Aware Regression for 3D Hand Pose and Mesh Reconstruction from a Monocular RGB Image","authors":"Xiaozheng Zheng, Pengfei Ren, Haifeng Sun, Jingyu Wang, Q. Qi, J. Liao","doi":"10.1109/ismar52148.2021.00024","DOIUrl":"https://doi.org/10.1109/ismar52148.2021.00024","url":null,"abstract":"3D hand reconstruction is a popular research topic in recent years, which has great potential for VR/AR applications. However, due to the limited computational resource of VR/AR equipment, the reconstruction algorithm must balance accuracy and efficiency to make the users have a good experience. Nevertheless, current methods are not doing well in balancing accuracy and efficiency. Therefore, this paper proposes a novel framework that can achieve a fast and accurate 3D hand reconstruction. Our framework relies on three essential modules, including spatial-aware initial graph building (SAIGB), graph convolutional network (GCN) based belief maps regression (GBBMR), and pose-guided refinement (PGR). At first, given image feature maps extracted by convolutional neural networks, SAIGB builds a spatial-aware and compact initial feature graph. Each node in this graph represents a vertex of the mesh and has vertex-specific spatial information that is helpful for accurate and efficient regression. After that, GBBMR first utilizes adaptive-GCN to introduce interactions between vertices to capture short-range and long-range dependencies between vertices efficiently and flexibly. Then, it maps vertices’ features to belief maps that can model the uncertainty of predictions for more accurate predictions. Finally, we apply PGR to compress the redundant vertices’ belief maps to compact-joints’ belief maps with the pose guidance and use these joints’ belief maps to refine previous predictions better to obtain more accurate and robust reconstruction results. Our method achieves state-of-the-art performance on four public benchmarks, FreiHAND, HO-3D, RHD, and STB. Moreover, our method can run at a speed of two to three times that of previous state-of-the-art methods. Our code is available at https://github.com/zxz267/SAR.","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129308682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
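The abstract above mentions an adaptive GCN that learns interactions between mesh vertices. A minimal sketch of that general mechanism follows, assumed here to be written in PyTorch, with a learnable adjacency matrix standing in for the learned vertex interactions; the layer and the 778-vertex, 64-dimensional example are illustrative, not the SAR architecture.

```python
# Toy "adaptive" graph convolution: the adjacency over mesh vertices is a
# learnable parameter, so vertex-to-vertex interactions are learned rather
# than fixed. Illustration only, not the paper's module.
import torch
import torch.nn as nn

class AdaptiveGraphConv(nn.Module):
    def __init__(self, num_nodes, in_dim, out_dim):
        super().__init__()
        self.adj = nn.Parameter(torch.eye(num_nodes))    # learnable adjacency
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x):                 # x: (batch, num_nodes, in_dim)
        a = torch.softmax(self.adj, dim=-1)               # normalize each row
        return torch.relu(self.proj(a @ x))               # aggregate, then project

# Example: a hand mesh with 778 vertices and 64-d features per vertex.
layer = AdaptiveGraphConv(num_nodes=778, in_dim=64, out_dim=64)
feats = torch.rand(2, 778, 64)
print(layer(feats).shape)                # torch.Size([2, 778, 64])
```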
The Object at Hand: Automated Editing for Mixed Reality Video Guidance from Hand-Object Interactions
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) Pub Date : 2021-09-29 DOI: 10.1109/ismar52148.2021.00023
Yao Lu, Walterio W. Mayol-Cuevas
{"title":"The Object at Hand: Automated Editing for Mixed Reality Video Guidance from Hand-Object Interactions","authors":"Yao Lu, Walterio W. Mayol-Cuevas","doi":"10.1109/ismar52148.2021.00023","DOIUrl":"https://doi.org/10.1109/ismar52148.2021.00023","url":null,"abstract":"In this paper, we concern with the problem of how to automatically extract the steps that compose real-life hand activities. This is a key competence towards processing, monitoring and providing video guidance in Mixed Reality systems. We use egocentric vision to observe hand-object interactions in real-world tasks and automatically decompose a video into its constituent steps. Our approach combines hand-object interaction (HOI) detection, object similarity measurement and a finite state machine (FSM) representation to automatically edit videos into steps. We use a combination of Convolutional Neural Networks (CNNs) and the FSM to discover, edit cuts and merge segments while observing real hand activities. We evaluate quantitatively and qualitatively our algorithm on two datasets: the GTEA [19], and a new dataset we introduce for Chinese Tea making. Results show our method is able to segment hand-object interaction videos into key step segments with high levels of precision.","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130284814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
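The abstract above uses an FSM over per-frame hand-object interaction detections to cut a video into step segments. The sketch below shows the general shape of such a segmentation; the two-state machine, the label format, and the minimum-length debounce are illustrative assumptions, not the paper's exact FSM.

```python
# Minimal sketch: segment a sequence of per-frame HOI detections into
# step spans using a two-state machine with a length debounce.
def segment_steps(hoi_per_frame, min_len=5):
    """hoi_per_frame: list of object labels ('' when no interaction).
    Returns (start, end, label) spans of sustained interaction."""
    segments, state, start, current = [], "IDLE", 0, ""
    for i, label in enumerate(hoi_per_frame + [""]):    # sentinel flushes the last segment
        if state == "IDLE" and label:
            state, start, current = "INTERACTING", i, label
        elif state == "INTERACTING" and label != current:
            if i - start >= min_len:                     # drop very short segments
                segments.append((start, i - 1, current))
            state, start, current = ("INTERACTING", i, label) if label else ("IDLE", i, "")
    return segments

# Example: frames of detected interaction with a kettle, then a teapot.
frames = ["kettle"] * 12 + [""] * 3 + ["teapot"] * 9
print(segment_steps(frames))   # [(0, 11, 'kettle'), (15, 23, 'teapot')]
```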