2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR): Latest Publications

A Comparison of the Fatigue Progression of Eye-Tracked and Motion-Controlled Interaction in Immersive Space
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) | Pub Date: 2021-10-01 | DOI: 10.1109/ismar52148.2021.00063
Lukas Maximilian Masopust, David Bauer, Siyuan Yao, Kwan-Liu Ma
{"title":"A Comparison of the Fatigue Progression of Eye-Tracked and Motion-Controlled Interaction in Immersive Space","authors":"Lukas Maximilian Masopust, David Bauer, Siyuan Yao, Kwan-Liu Ma","doi":"10.1109/ismar52148.2021.00063","DOIUrl":"https://doi.org/10.1109/ismar52148.2021.00063","url":null,"abstract":"Eye-tracking enabled virtual reality (VR) headsets have recently become more widely available. This opens up opportunities to incorporate eye gaze interaction methods in VR applications. However, studies on the fatigue-induced performance fluctuations of these new input modalities are scarce and rarely provide a direct comparison with established interaction methods. We conduct a study to compare the selection-interaction performance between commonly used handheld motion control devices and emerging eye interaction technology in VR. We investigate each interaction’s unique fatigue progression pattern in study sessions with ten minutes of continuous engagement. The results support and extend previous findings regarding the progression of fatigue in eye-tracked interaction over prolonged periods. By directly comparing gaze-with motion-controlled interaction, we put the emerging eye-trackers into perspective with the state-of-the-art interaction method for immersive space. We then discuss potential implications for future extended reality (XR) interaction design based on our findings.","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124788502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
DVIO: Depth-Aided Visual Inertial Odometry for RGBD Sensors
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) | Pub Date: 2021-10-01 | DOI: 10.1109/ismar52148.2021.00034
Abhishek Tyagi, Yangwen Liang, Shuangquan Wang, Dongwoon Bai
{"title":"DVIO: Depth-Aided Visual Inertial Odometry for RGBD Sensors","authors":"Abhishek Tyagi, Yangwen Liang, Shuangquan Wang, Dongwoon Bai","doi":"10.1109/ismar52148.2021.00034","DOIUrl":"https://doi.org/10.1109/ismar52148.2021.00034","url":null,"abstract":"In past few years we have observed an increase in the usage of RGBD sensors in mobile devices. These sensors provide a good estimate of the depth map for the camera frame, which can be used in numerous augmented reality applications. This paper presents a new visual inertial odometry (VIO) system, which uses measurements from a RGBD sensor and an inertial measurement unit (IMU) sensor for estimating the motion state of the mobile device. The resulting system is called the depth-aided VIO (DVIO) system. In this system we add the depth measurement as part of the nonlinear optimization process. Specifically, we propose methods to use the depth measurement using one-dimensional (1D) feature parameterization as well as three-dimensional (3D) feature parameterization. In addition, we propose to utilize the depth measurement for estimating time offset between the unsynchronized IMU and the RGBD sensors. Last but not least, we propose a novel block-based marginalization approach to speed up the marginalization processes and maintain the real-time performance of the overall system. Experimental results validate that the proposed DVIO system outperforms the other state-of-the-art VIO systems in terms of trajectory accuracy as well as processing time.","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"22 8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116858066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 4
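DVIO's core addition is treating the RGBD depth value as one more residual inside the VIO least-squares problem. Below is a minimal, hypothetical sketch of that idea for a single feature point: a pinhole reprojection residual stacked with a 1D depth residual, refined with SciPy's generic solver. The intrinsics, noise weight, and function names are illustrative assumptions, not the authors' implementation.

import numpy as np
from scipy.optimize import least_squares

def reprojection_residual(p_cam, uv_obs, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    # Pinhole reprojection error (pixels) of a 3D point in the camera frame.
    u = fx * p_cam[0] / p_cam[2] + cx
    v = fy * p_cam[1] / p_cam[2] + cy
    return np.array([u - uv_obs[0], v - uv_obs[1]])

def depth_residual(p_cam, depth_meas, sigma_depth=0.02):
    # 1D residual anchoring the feature's depth to the RGBD measurement.
    return np.array([(p_cam[2] - depth_meas) / sigma_depth])

def stacked_residual(p_cam, uv_obs, depth_meas):
    # Stacking both terms lets one nonlinear least-squares solve trade off
    # visual and depth information, as DVIO does inside its optimizer.
    return np.concatenate([reprojection_residual(p_cam, uv_obs),
                           depth_residual(p_cam, depth_meas)])

# Toy usage: refine a single feature point (meters, camera frame).
p0 = np.array([0.10, -0.05, 2.50])   # initial guess
uv = np.array([340.0, 230.0])        # observed pixel
z = 2.42                             # RGBD depth sample for that pixel
sol = least_squares(stacked_residual, p0, args=(uv, z))
print("refined point:", sol.x)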
Varying user agency and interaction opportunities in a home mobile augmented virtuality story
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) | Pub Date: 2021-10-01 | DOI: 10.1109/ismar52148.2021.00051
Gideon Raeburn, L. Tokarchuk
{"title":"Varying user agency and interaction opportunities in a home mobile augmented virtuality story","authors":"Gideon Raeburn, L. Tokarchuk","doi":"10.1109/ismar52148.2021.00051","DOIUrl":"https://doi.org/10.1109/ismar52148.2021.00051","url":null,"abstract":"New opportunities for immersive storytelling experiences have arrived through the technology in mobile phones, including the ability to overlay or register digital content on a user’s real world surroundings, to greater immerse the user in the world of the story. This raises questions around the methods and freedom to interact with the digital elements, that will lead to a more immersive and engaging experience. To investigate these areas the Augmented Virtuality (AV) mobile phone application Home Story was developed for iOS devices. It allows a user to move and interact with objects in a virtual environment displayed on their phone, by physically moving in the real world, completing particular actions to progress a story. A mixed methods study with Home Story either guided participants to the next interaction, or offered them increased agency to choose what object to interact with next. Virtual objects could also be interacted with in one of three ways; imagining the interaction, an embodied interaction using the user’s free hand, or a virtual interaction performed on the phone’s touchscreen. Similar levels of immersion were recorded across both study conditions suggesting both can be effective, though highlighting different issues in each case. The embodied free hand interactions proved particularly memorable, though further work is required to improve their implementation, arising from their novelty and lack of familiarity.","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"101 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123244550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
TransforMR: Pose-Aware Object Substitution for Composing Alternate Mixed Realities
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) | Pub Date: 2021-10-01 | DOI: 10.1109/ismar52148.2021.00021
Mohamed Kari, T. Große-Puppendahl, Luis Falconeri Coelho, A. Fender, David Bethge, Reinhard Schütte, Christian Holz
{"title":"TransforMR: Pose-Aware Object Substitution for Composing Alternate Mixed Realities","authors":"Mohamed Kari, T. Große-Puppendahl, Luis Falconeri Coelho, A. Fender, David Bethge, Reinhard Schütte, Christian Holz","doi":"10.1109/ismar52148.2021.00021","DOIUrl":"https://doi.org/10.1109/ismar52148.2021.00021","url":null,"abstract":"Despite the advances in machine perception, semantic scene understanding is still a limiting factor in mixed reality scene composition. In this paper, we present TransforMR, a video see-through mixed reality system for mobile devices that performs 3D-pose-aware object substitution to create meaningful mixed reality scenes. In real-time and for previously unseen and unprepared real-world environments, TransforMR composes mixed reality scenes so that virtual objects assume behavioral and environment-contextual properties of replaced real-world objects. This yields meaningful, coherent, and humaninterpretable scenes, not yet demonstrated by today’s augmentation techniques. TransforMR creates these experiences through our novel pose-aware object substitution method building on different 3D object pose estimators, instance segmentation, video inpainting, and pose-aware object rendering. TransforMR is designed for use in the real-world, supporting the substitution of humans and vehicles in everyday scenes, and runs on mobile devices using just their monocular RGB camera feed as input. We evaluated TransforMR with eight participants in an uncontrolled city environment employing different transformation themes. Applications of TransforMR include real-time character animation analogous to motion capturing in professional film making, however without the need for preparation of either the scene or the actor, as well as narrative-driven experiences that allow users to explore fictional parallel universes in mixed reality. We make all of our source code and assets available1.1TransforMR code release: https://github.com/MohamedKari/transformr","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123279859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 15
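The abstract names a fixed per-frame stage order: pose estimation, instance segmentation, video inpainting, then pose-aware rendering of the substitute object. The sketch below wires those stages together as interchangeable callables with dummy implementations so the data flow runs end to end; the interfaces and the Detection fields are assumptions for illustration, not the authors' code.

from dataclasses import dataclass
from typing import Callable, List
import numpy as np

@dataclass
class Detection:
    class_name: str       # e.g., "person" or "car"
    pose_6d: np.ndarray   # rotation + translation of the real object
    mask: np.ndarray      # instance segmentation mask (H x W bool)

Frame = np.ndarray  # H x W x 3 RGB image

def substitute(frame: Frame,
               estimate_poses: Callable[[Frame], List[Detection]],
               inpaint: Callable[[Frame, np.ndarray], Frame],
               render_proxy: Callable[[Frame, Detection], Frame]) -> Frame:
    # Replace each detected real object with a virtual proxy in its pose.
    for det in estimate_poses(frame):
        frame = inpaint(frame, det.mask)   # erase the real object
        frame = render_proxy(frame, det)   # draw the substitute, pose-aligned
    return frame

# Dummy stages so the sketch runs end to end without any models.
detect = lambda f: [Detection("person", np.zeros(6), np.zeros(f.shape[:2], bool))]
inpaint = lambda f, m: f
render = lambda f, d: f
out = substitute(np.zeros((480, 640, 3), np.uint8), detect, inpaint, render)
print(out.shape)  # (480, 640, 3)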
Perception-Driven Hybrid Foveated Depth of Field Rendering for Head-Mounted Displays
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) | Pub Date: 2021-10-01 | DOI: 10.1109/ismar52148.2021.00014
Jingyu Liu, Claire Mantel, Søren Forchhammer
{"title":"Perception-Driven Hybrid Foveated Depth of Field Rendering for Head-Mounted Displays","authors":"Jingyu Liu, Claire Mantel, Søren Forchhammer","doi":"10.1109/ismar52148.2021.00014","DOIUrl":"https://doi.org/10.1109/ismar52148.2021.00014","url":null,"abstract":"In this paper, we present a novel perception-driven hybrid rendering method leveraging the limitation of the human visual system (HVS). Features accounted in our model include: foveation from the visual acuity eccentricity (VAE), depth of field (DOF) from vergence & accommodation, and longitudinal chromatic aberration (LCA) from color vision. To allocate computational workload efficiently, first we apply a gaze-contingent geometry simplification. Then we convert the coordinates from screen space to polar space with a scaling strategy coherent with VAE. Upon that, we apply a stochastic sampling based on DOF. Finally, we post-process the Bokeh for DOF, which can at the same time achieve LCA and anti-aliasing. A virtual reality (VR) experiment on 6 Unity scenes with a head-mounted display (HMD) HTC VIVE Pro Eye yields frame rates range from 25.2 to 48.7 fps. Objective evaluation with FovVideoVDP - a perceptual based visible difference metric - suggests that the proposed method gives satisfactory just-objectionable-difference (JOD) scores across 6 scenes from 7.61 to 8.69 (in a 10 unit scheme). Our method achieves better performance compared with the existing methods while having the same or better level of quality scores.","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129589030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
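One building block of any VAE-driven renderer is a falloff model that converts a pixel's angular distance from the gaze point into a sampling budget. The sketch below uses the common linear minimum-angle-of-resolution (MAR) acuity model for this; the constants and the mapping to samples per pixel are illustrative assumptions, not the paper's exact parameters.

import numpy as np

def eccentricity_deg(px, gaze_px, px_per_deg=30.0):
    # Angular distance of a pixel from the gaze point (small-angle approximation).
    return np.linalg.norm(np.asarray(px, float) - np.asarray(gaze_px, float)) / px_per_deg

def relative_acuity(ecc_deg, e2=2.3):
    # Linear MAR model: resolvable detail falls off as 1 / (1 + e / e2).
    return 1.0 / (1.0 + ecc_deg / e2)

def samples_per_pixel(ecc_deg, max_spp=8, min_spp=1):
    # Spend shading/DOF samples in proportion to local acuity.
    return max(min_spp, int(round(max_spp * relative_acuity(ecc_deg))))

gaze = (640, 360)
for p in [(640, 360), (760, 360), (1100, 360)]:
    e = eccentricity_deg(p, gaze)
    print(f"pixel {p}: eccentricity {e:5.1f} deg -> {samples_per_pixel(e)} spp")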
Now I’m Not Afraid: Reducing Fear of Missing Out in 360° Videos on a Head-Mounted Display using a Panoramic Thumbnail
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) | Pub Date: 2021-10-01 | DOI: 10.1109/ismar52148.2021.00032
Shoma Yamaguchi, Nami Ogawa, Takuji Narumi
{"title":"Now I’m Not Afraid: Reducing Fear of Missing Out in 360° Videos on a Head-Mounted Display using a Panoramic Thumbnail","authors":"Shoma Yamaguchi, Nami Ogawa, Takuji Narumi","doi":"10.1109/ismar52148.2021.00032","DOIUrl":"https://doi.org/10.1109/ismar52148.2021.00032","url":null,"abstract":"Cinematic virtual reality, or 360° video, provides viewers with an immersive experience, allowing them to enjoy a video while moving their head to watch in any direction. However, there is an inevitable problem of feeling fear of missing out (FOMO) when viewing a 360° video, as only a part of the video is visible to the viewer at any given time. To solve this problem, we developed a technique to present a panoramic thumbnail of a full 360° video to users through a head-mounted display. With this technique, the user can grasp the overall view of the video as needed. We conducted an experiment to evaluate the FOMO, presence, and quality of viewing experience while using this technique compared to normal viewing without it. The results of the experiment show that the proposed technique relieved FOMO, the quality of viewing experience was improved, and there was no difference in presence. We also investigated how users interacted with this new interface based on eye tracking and head tracking data during viewing, which suggested that users used the panoramic thumbnail to actively explore outside their field of view.","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128877259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
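Highlighting the currently visible region on a panoramic thumbnail requires mapping the HMD's view direction onto equirectangular image coordinates. The sketch below shows that standard mapping; the function name and the thumbnail-highlighting use are assumptions about how such an interface could be built, not the authors' implementation.

import math

def direction_to_equirect(yaw_rad, pitch_rad, width, height):
    # Yaw in [-pi, pi] (0 = panorama center), pitch in [-pi/2, pi/2] (up positive).
    u = (yaw_rad / (2.0 * math.pi) + 0.5) * width
    v = (0.5 - pitch_rad / math.pi) * height
    return int(u) % width, min(max(int(v), 0), height - 1)

print(direction_to_equirect(0.0, 0.0, 1024, 512))          # (512, 256): center
print(direction_to_equirect(math.pi / 2, 0.3, 1024, 512))  # looking right and up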
A Taxonomy of Interaction Techniques for Immersive Augmented Reality based on an Iterative Literature Review
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) | Pub Date: 2021-10-01 | DOI: 10.1109/ismar52148.2021.00060
Julia Hertel, Sukran Karaosmanoglu, S. Schmidt, Julia Bräker, Martin Semmann, Frank Steinicke
{"title":"A Taxonomy of Interaction Techniques for Immersive Augmented Reality based on an Iterative Literature Review","authors":"Julia Hertel, Sukran Karaosmanoglu, S. Schmidt, Julia Bräker, Martin Semmann, Frank Steinicke","doi":"10.1109/ismar52148.2021.00060","DOIUrl":"https://doi.org/10.1109/ismar52148.2021.00060","url":null,"abstract":"Developers of interactive systems have a variety of interaction techniques to choose from, each with individual strengths and limitations in terms of the considered task, context, and users. While there are taxonomies for desktop, mobile, and virtual reality applications, augmented reality (AR) taxonomies have not been established yet. However, recent advances in immersive AR technology (i.e., head-worn or projection-based AR), such as the emergence of untethered headsets with integrated gesture and speech sensors, have enabled the inclusion of additional input modalities and, therefore, novel multimodal interaction methods have been introduced. To provide an overview of interaction techniques for current immersive AR systems, we conducted a literature review of publications between 2016 and 2021. Based on 44 relevant papers, we developed a comprehensive taxonomy focusing on two identified dimensions – task and modality. We further present an adaptation of an iterative taxonomy development method to the field of human-computer interaction. Finally, we discuss observed trends and implications for future work.","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122810800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 15
Investigating Textual Visual Sound Effects in a Virtual Environment and their impacts on Object Perception and Sound Perception
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) | Pub Date: 2021-10-01 | DOI: 10.1109/ismar52148.2021.00048
Thibault Fabre, Adrien Verhulst, A. Balandra, M. Sugimoto, M. Inami
{"title":"Investigating Textual Visual Sound Effects in a Virtual Environment and their impacts on Object Perception and Sound Perception","authors":"Thibault Fabre, Adrien Verhulst, A. Balandra, M. Sugimoto, M. Inami","doi":"10.1109/ismar52148.2021.00048","DOIUrl":"https://doi.org/10.1109/ismar52148.2021.00048","url":null,"abstract":"In comics, Textual Sound Effects (TE) can describe sounds, but also actions, events, etc. TE could be used in Virtual Environment to efficiently create an easily recognizable scene and add more information to objects at a relatively low design cost. We investigate the impact of TE in a Virtual Environment on objects’ material perception (on category and properties) and on sound perception (on volume [dB] and spatial position). Participants (N=13, repeated measures) categorized metallic and wooden spheres and significantly changed their reaction time depending on the TE congruence with the spheres’ material/sound. They then rated a sphere’s properties (i.e., wetness, warmness, softness, smoothness, and dullness) and significantly changed their rating depending on the TE. When comparing 2 sound volumes, they perceived a sound associated with a shrinking TE as less loud and a sound associated with a growing TE as louder. When locating an audio source location, they located it significantly closer to a TE.","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122326405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
Selective Foveated Ray Tracing for Head-Mounted Displays
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) | Pub Date: 2021-10-01 | DOI: 10.1109/ismar52148.2021.00058
Youngwook Kim, Yunmin Ko, I. Ihm
{"title":"Selective Foveated Ray Tracing for Head-Mounted Displays","authors":"Youngwook Kim, Yunmin Ko, I. Ihm","doi":"10.1109/ismar52148.2021.00058","DOIUrl":"https://doi.org/10.1109/ismar52148.2021.00058","url":null,"abstract":"Although ray tracing produces significantly more realistic images than traditional rasterization techniques, it is still considered computationally burdensome when implemented on a head-mounted display (HMD) system that demands both wide field of view and high rendering rate. A further challenge is that to present high-quality images on an HMD screen, a sufficient number of ray samples should be taken per pixel for effective antialiasing to reduce visually annoying artifacts. In this paper, we present a novel foveated real-time rendering framework that realizes classic Whitted-style ray tracing on an HMD system. In particular, our method proposes combining the selective supersampling technique by Jin et al. [8] with the foveated rendering scheme, resulting in perceptually highly efficient pixel sampling suitable for HMD ray tracing. We show that further enhanced by foveated temporal antialiasing, our ray tracer renders nontrivial 3D scenes in real time on commodity GPUs at high sampling rates as effective as up to 36 samples per pixel (spp) in the foveal area, gradually reducing to at least 1 spp in the periphery.","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123013112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 5
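The combination described above can be read as a two-part gate on ray count: an eccentricity-dependent budget (36 spp in the fovea down to 1 spp in the periphery) and a selective test that spends that budget only where aliasing is likely. The toy sketch below illustrates this; the falloff curve and the contrast threshold are assumptions, and the real selective test follows Jin et al. rather than this simple min/max heuristic.

import numpy as np

def foveal_budget(ecc_deg, fovea_deg=5.0, max_spp=36):
    # Ray budget: full 36 spp inside the fovea, falling toward 1 spp outside.
    if ecc_deg <= fovea_deg:
        return max_spp
    return max(1, int(max_spp * fovea_deg / ecc_deg))

def needs_supersampling(neighborhood, threshold=0.1):
    # Selective test: supersample only where local luminance contrast is high.
    return float(neighborhood.max() - neighborhood.min()) > threshold

def rays_for_pixel(ecc_deg, neighborhood):
    return foveal_budget(ecc_deg) if needs_supersampling(neighborhood) else 1

edge = np.array([[0.1, 0.9], [0.2, 0.8]])   # high-contrast patch
flat = np.full((2, 2), 0.5)                 # flat patch
print(rays_for_pixel(2.0, edge))    # foveal edge -> 36 rays
print(rays_for_pixel(30.0, edge))   # peripheral edge -> 6 rays
print(rays_for_pixel(2.0, flat))    # flat foveal region -> 1 ray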
ARENA: The Augmented Reality Edge Networking Architecture
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) | Pub Date: 2021-10-01 | DOI: 10.1109/ismar52148.2021.00065
Nuno Pereira, Anthony Rowe, Michael W. Farb, Ivan Liang, Edward Lu, E. Riebling
{"title":"ARENA: The Augmented Reality Edge Networking Architecture","authors":"Nuno Pereira, Anthony Rowe, Michael W. Farb, Ivan Liang, Edward Lu, E. Riebling","doi":"10.1109/ismar52148.2021.00065","DOIUrl":"https://doi.org/10.1109/ismar52148.2021.00065","url":null,"abstract":"Many have predicted the future of the Web to be the integration of Web content with the real-world through technologies such as Augmented Reality (AR). This has led to the rise of Extended Reality (XR) Web Browsers used to shorten the long AR application development and deployment cycle of native applications especially across different platforms. As XR Browsers mature, we face new challenges related to collaborative and multi-user applications that span users, devices, and machines. These collaborative XR applications require: (1) networking support for scaling to many users, (2) mechanisms for content access control and application isolation, and (3) the ability to host application logic near clients or data sources to reduce application latency. In this paper, we present the design and evaluation of the AR Edge Networking Architecture (ARENA) which is a platform that simplifies building and hosting collaborative XR applications on WebXR capable browsers. ARENA provides a number of critical components including: a hierarchical geospatial directory service that connects users to nearby servers and content, a token-based authentication system for controlling user access to content, and an application/service runtime supervisor that can dispatch programs across any network connected device. All of the content within ARENA exists as endpoints in a PubSub scene graph model that is synchronized across all users. We evaluate ARENA in terms of client performance as well as benchmark end-to-end response-time as load on the system scales. We show the ability to horizontally scale the system to Internet-scale with scenes containing hundreds of users and latencies on the order of tens of milliseconds. Finally, we highlight projects built using ARENA and showcase how our approach dramatically simplifies collaborative multi-user XR development compared to monolithic approaches.","PeriodicalId":395413,"journal":{"name":"2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)","volume":"2015 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127324349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 18
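The key architectural idea is that the shared scene graph is just a stream of PubSub messages that every client applies in the same way. The sketch below illustrates this with an in-memory stand-in for the broker; the topic layout and message fields are invented for illustration, whereas the real system runs over a networked broker with authentication.

import json
from collections import defaultdict

class Bus:
    # Minimal in-process stand-in for a pub/sub message broker.
    def __init__(self):
        self.subs = defaultdict(list)
    def subscribe(self, topic, handler):
        self.subs[topic].append(handler)
    def publish(self, topic, payload):
        for handler in self.subs[topic]:
            handler(payload)

class SceneClient:
    # Each client mirrors the scene by applying every message on the topic.
    def __init__(self, bus, scene_topic):
        self.scene = {}  # object_id -> attribute dict
        self.bus, self.topic = bus, scene_topic
        bus.subscribe(scene_topic, self._on_message)
    def update_object(self, object_id, **attrs):
        self.bus.publish(self.topic, json.dumps({"id": object_id, "attrs": attrs}))
    def _on_message(self, payload):
        msg = json.loads(payload)
        self.scene.setdefault(msg["id"], {}).update(msg["attrs"])

bus = Bus()
alice = SceneClient(bus, "realm/s/demo")  # topic name is invented
bob = SceneClient(bus, "realm/s/demo")
alice.update_object("box1", position=[0, 1.5, -2], color="#7f00ff")
print(bob.scene)  # bob converges to the same scene state as alice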