IEEE Transactions on Visualization and Computer Graphics: Latest Articles

Interactions between Vibroacoustic Discomfort and Visual Stimuli: Comparison of Real, 3D and 360 Environments.
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2025-03-07 DOI: 10.1109/TVCG.2025.3549158
Charlotte Scarpa, Toinon Vigier, Gwenaelle Haese, Patrick Le Callet
{"title":"Interactions between Vibroacoustic Discomfort and Visual Stimuli: Comparison of Real, 3D and 360 Environments.","authors":"Charlotte Scarpa, Toinon Vigier, Gwenaelle Haese, Patrick Le Callet","doi":"10.1109/TVCG.2025.3549158","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549158","url":null,"abstract":"<p><p>The building industry and the design of interior environments are increasingly focusing on the user experience, incorporating sensory analysis to reconsider how office environments can be optimized. New immersive technologies offer significant opportunities for sensory science, enhancing our understanding of human perception and enabling the collection of multi-sensory data under controlled laboratory conditions. While the potential of Virtual Reality (VR) for these types of studies is well recognized, certain limitations still need to be addressed, including the lack of standardized research practices and the challenge of ensuring the simulated environment closely mirrors the real world. In this study, we compare 360° and 3D formats, to real-life settings in order to determine which format offers greater ecological validity for visual perception and immersion. Additionally, we examine the effects of vibroacoustic stimuli with different levels of intensity on perception and cognition of 30 participants. Subjective, physiological and cognitive data was collected throughout the test to tackle the participant's experience. This preliminary study introduces an immersive methodology that leverages advanced techniques to gain deeper insights into multisensory user experience in VR, marking a significant step forward in the optimization of VR for building evaluation.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143576050","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Measuring the Impact of Objects' Physicalization, Avatar Appearance, and Their Consistency on Pick-and-Place Performance in Augmented Reality.
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2025-03-07 DOI: 10.1109/TVCG.2025.3549151
Antonin Cheymol, Jacob Wallace, Juri Yoneyama, Rebecca Fribourg, Jean-Marie Normand, Ferran Argelaguet
{"title":"Measuring the Impact of Objects' Physicalization, Avatar Appearance, and Their Consistency on Pick-and-Place Performance in Augmented Reality.","authors":"Antonin Cheymol, Jacob Wallace, Juri Yoneyama, Rebecca Fribourg, Jean-Marie Normand, Ferran Argelaguet","doi":"10.1109/TVCG.2025.3549151","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549151","url":null,"abstract":"<p><p>Augmented Reality (AR) is a growing technology that enables interaction with both virtual and real objects. However, in order to support the future development of efficient and usable AR interactions, there is still a lack of systematic knowledge establishing basic interaction performance across different conditions. Therefore, in this paper, we report a user study measuring the impact of objects' physicalization (object's set composed of (i) virtual, (ii) real, or (iii) a composite mix of real and virtual objects) and hand appearance (hand's appearance displayed as (i) the real hand, (ii) an avatar, or (iii) dynamically adapting to the surrounding objects' physicalization) on the speed performance of a pick-and-place task. Overall, our results reveal that objects' physicalization plays a significant role in interaction performance, with the more real objects in a set the better the performance. Moreover, our results also suggest that pick-and-place interaction performances are mostly unaffected by the hand appearance. Interestingly, we also observed that interactions with real objects were less efficient as the object condition required the user to alternate between interactions with virtual and real objects (object condition (iii)), which provides novel insights into an important - mostly AR-specific - factor to consider for designing future AR interactions. Taken together, our results provide a rich characterization of different factors influencing different phases of a pick-and-place interaction, which could be employed to improve the design of future AR applications.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143576059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Order up! Multimodal Interaction Techniques for Notifications in Augmented Reality.
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2025-03-07 DOI: 10.1109/TVCG.2025.3549186
Lucas Plabst, Florian Niebling, Sebastian Oberdorfer, Francisco Ortega
{"title":"Order up! Multimodal Interaction Techniques for Notifications in Augmented Reality.","authors":"Lucas Plabst, Florian Niebling, Sebastian Oberdorfer, Francisco Ortega","doi":"10.1109/TVCG.2025.3549186","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3549186","url":null,"abstract":"<p><p>As augmented reality (AR) headsets become increasingly integrated into professional and social settings, a critical challenge emerges: how can users effectively manage and interact with the frequent notifications they receive? With adults receiving nearly 200 notifications daily on their smartphones, which serve as primary computing devices for many, translating this interaction to AR systems is paramount. Unlike traditional devices, AR systems augment the physical world, requiring interaction techniques that blend seamlessly with real-world behaviors. This study explores the complexities of multimodal interaction with notifications in AR. We investigated user preferences, usability, workload, and performance during a virtual cooking task, where participants managed customer orders while interacting with notifications. Various interaction techniques were tested: Point and Pinch, Gaze and Pinch, Point and Voice, Gaze and Voice, and Touch. Our findings reveal significant impacts on workload, performance, and usability based on the interaction method used. We identify key issues in multimodal interaction and offer guidance for optimizing these techniques in AR environments.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143576066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Real-Time Neural Homogeneous Translucent Material Rendering Using Diffusion Blocks.
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2025-03-06 DOI: 10.1109/TVCG.2025.3548442
Di An, Liangfu Kang, Kun Xu
{"title":"Real-Time Neural Homogeneous Translucent Material Rendering Using Diffusion Blocks.","authors":"Di An, Liangfu Kang, Kun Xu","doi":"10.1109/TVCG.2025.3548442","DOIUrl":"10.1109/TVCG.2025.3548442","url":null,"abstract":"<p><p>Rendering realistic appearances of homogeneous translucent materials, such as milk and marble, poses challenges due to the complexity of subsurface scattering. In this paper, we present a neural method for real-time rendering of homogeneous translucent objects. Based on the observation that light propagation inside a highly scattered media is like a diffusion process [1], we propose a neural data structure named diffusion block to mimic the behavior of the diffusion process. The diffusion block is built upon a recent network structure named DiffusionNet [2] with a few modifications to adapt to our problem of translucent rendering. Our network is lightweight and efficient, leading to a real-time rendering method. Furthermore, our method supports dynamic material properties and diverse lighting conditions. Comparisons with state-of-the-art real-time translucent rendering methods demonstrate the superiority of our method in rendering quality.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143574755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Retraction Notice: iMetaTown: A Metaverse System with Multiple Interactive Functions Based on Virtual Reality.
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2025-03-05 DOI: 10.1109/TVCG.2025.3546144
Zhihan Lyu, Mikael Fridenfalk
{"title":"Retraction Notice: iMetaTown: A Metaverse System with Multiple Interactive Functions Based on Virtual Reality.","authors":"Zhihan Lyu, Mikael Fridenfalk","doi":"10.1109/TVCG.2025.3546144","DOIUrl":"10.1109/TVCG.2025.3546144","url":null,"abstract":"<p><p>This work aims to pioneer the development of a real-time interactive and immersive Metaverse Human-Computer Interaction (HCI) system leveraging Virtual Reality (VR). The system incorporates a three-dimensional (3D) face reconstruction method, grounded in weakly supervised learning, to enhance player-player interactions within the Metaverse. The proposed method, two-dimensional (2D) face images, are effectively employed in a 2D Self-Supervised Learning (2DASL) approach, significantly optimizing 3D model learning outcomes and improving the quality of 3D face alignment in HCI systems. The work outlines the functional modules of the system, encompassing user interactions such as hugs and handshakes and communication through voice and text via blockchain. Solutions for managing multiple simultaneous online users are presented. Performance evaluation of the HCI system in a 3D reconstruction scene indicates that the 2DASL face reconstruction method achieves noteworthy results, enhancing the system's interaction capabilities by aiding 3D face modeling through 2D face images. The experimental system achieves a maximum processing speed of 18 frames of image data on a personal computer, meeting real-time processing requirements. User feedback regarding social acceptance, action interaction usability, emotions, and satisfaction with the VR interactive system reveals consistently high scores. The designed VR HCI system exhibits outstanding performance across diverse applications.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143545417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Unsupervised Non-Rigid Human Point Cloud Registration Based on Deformation Field Fusion.
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2025-03-05 DOI: 10.1109/TVCG.2025.3547778
Yinghao Li, Yue Liu, Zhiyuan Dong, Linjun Jiang, Yusong Lin
{"title":"Unsupervised Non-Rigid Human Point Cloud Registration Based on Deformation Field Fusion.","authors":"Yinghao Li, Yue Liu, Zhiyuan Dong, Linjun Jiang, Yusong Lin","doi":"10.1109/TVCG.2025.3547778","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3547778","url":null,"abstract":"<p><p>Human point cloud registration is a critical problem in the fields of computer vision and computer graphics applications. Currently, due to the presence of joint hinges and limb occlusions in human point clouds, point cloud alignment is challenging. To address these two limits, this paper proposes an unsupervised non-rigid human point cloud registration method based on deformation field fusion. The method mainly consists of the deep dynamic link deformation field estimation module and the probabilistic alignment deformation field estimation module. The deep dynamic link deformation field estimation module uses a time series network to convert non-rigid deformation into multiple rigid deformations. Then, feature extraction is performed to estimate the deformation field based on the rigid deformations. The probabilistic alignment deformation field estimation module builds on a Gaussian mixture model and adds local and global constraint conditions for deformation field estimation. Finally, the two deformation fields are fused into the total deformed field by aligning them, which enhances the sensitivity to both global and local feature information. The experimental results on public datasets and real private datasets demonstrate that the proposed method has higher accuracy and better robustness under joint hinges and limb adhesion conditions.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143568982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The Role of Sensorimotor Contingencies and Eye Scanpath Entropy in Presence in Virtual Reality: a Reinforcement Learning Paradigm.
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2025-03-04 DOI: 10.1109/TVCG.2025.3547241
Esen Kucuktutuncu, Francisco Macia-Varela, Joan Llobera, Mel Slater
{"title":"The Role of Sensorimotor Contingencies and Eye Scanpath Entropy in Presence in Virtual Reality: a Reinforcement Learning Paradigm.","authors":"Esen Kucuktutuncu, Francisco Macia-Varela, Joan Llobera, Mel Slater","doi":"10.1109/TVCG.2025.3547241","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3547241","url":null,"abstract":"<p><p>Sensorimotor contingencies (SC) refer to the rules by which we use our body to perceive. It has been argued that to the extent that a virtual reality (VR) application affords natural SC so the greater likelihood that participants will experience Place Illusion (PI), the illusion of 'being there' (a component of presence) in the virtual environment. However, notwithstanding numerous studies this only has anecdotal support. Here we used a reinforcement learning (RL) paradigm where 26 participants experienced a VR scenario where the RL agent could sequentially propose changes to 5 binary factors: mono or stereo vision, 3 or 6 degrees of freedom head tracking, mono or spatialised sound, low or high display resolution, or one of two color schemes. The first 4 are SC, whereas the last is not. Participants could reject or accept each change proposed by the RL, until convergence. Participants were more likely to accept changes from low to high SC than changes to the color. Additionally, theory suggests that increased PI should be associated with lower eye scanpath entropy. Our results show that mean entropy did decrease over time and the final level of entropy was negatively correlated with a post exposure questionnaire-based assessment of PI.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143560449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep Point Cloud Edge Reconstruction Via Surface Patch Segmentation.
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2025-03-03 DOI: 10.1109/TVCG.2025.3547411
Yuanqi Li, Hongshen Wang, Yansong Liu, Jingcheng Huang, Shun Liu, Chenyu Huang, Jianwei Guo, Jie Guo, Yanwen Guo
{"title":"Deep Point Cloud Edge Reconstruction Via Surface Patch Segmentation.","authors":"Yuanqi Li, Hongshen Wang, Yansong Liu, Jingcheng Huang, Shun Liu, Chenyu Huang, Jianwei Guo, Jie Guo, Yanwen Guo","doi":"10.1109/TVCG.2025.3547411","DOIUrl":"10.1109/TVCG.2025.3547411","url":null,"abstract":"<p><p>Parametric edge reconstruction for point cloud data is a fundamental problem in computer graphics. Existing methods first classify points as either edge points (including corners) or non-edge points, and then fit parametric edges to the edge points. However, few points are exactly sampled on edges in practical scenarios, leading to significant fitting errors in the reconstructed edges. Prominent deep learning-based methods also primarily emphasize edge points, overlooking the potential of non-edge areas. Given that sparse and non-uniform edge points cannot provide adequate information, we address this challenge by leveraging neighboring segmented patches to supply additional cues. We introduce a novel two-stage framework that reconstructs edges precisely and completely via surface patch segmentation. First, we propose PCER-Net, a Point Cloud Edge Reconstruction Network that segments surface patches, detects edge points, and predicts normals simultaneously. Second, a joint optimization module is designed to reconstruct a complete and precise 3D wireframe by fully utilizing the predicted results of the network. Concretely, the segmented patches enable accurate fitting of parametric edges, even when sparse points are not precisely distributed along the model's edges. Corners can also be naturally detected from the segmented patches. Benefiting from fitted edges and detected corners, a complete and precise 3D wireframe model with topology connections can be reconstructed by geometric optimization. Finally, we present a versatile patch-edge dataset, including CAD and everyday models (furniture), to generalize our method. Extensive experiments and comparisons against previous methods demonstrate our effectiveness and superiority. We will release the code and dataset to facilitate future research.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143545406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
GlossyGS: Inverse Rendering of Glossy Objects With 3D Gaussian Splatting.
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2025-03-03 DOI: 10.1109/TVCG.2025.3547063
Shuichang Lai, Letian Huang, Jie Guo, Kai Cheng, Bowen Pan, Xiaoxiao Long, Jiangjing Lyu, Chengfei Lv, Yanwen Guo
{"title":"GlossyGS: Inverse Rendering of Glossy Objects With 3D Gaussian Splatting.","authors":"Shuichang Lai, Letian Huang, Jie Guo, Kai Cheng, Bowen Pan, Xiaoxiao Long, Jiangjing Lyu, Chengfei Lv, Yanwen Guo","doi":"10.1109/TVCG.2025.3547063","DOIUrl":"10.1109/TVCG.2025.3547063","url":null,"abstract":"<p><p>Reconstructing objects from posed images is a crucial and complex task in computer graphics and computer vision. While NeRF-based neural reconstruction methods have exhibited impressive reconstruction ability, they tend to be time-comsuming. Recent strategies have adopted 3D Gaussian Splatting (3D-GS) for inverse rendering, which have led to quick and effective outcomes. However, these techniques generally have difficulty in producing believable geometries and materials for glossy objects, a challenge that stems from the inherent ambiguities of inverse rendering. To address this, we introduce GlossyGS, an innovative 3D-GS-based inverse rendering framework that aims to precisely reconstruct the geometry and materials of glossy objects by integrating material priors. The key idea is the use of micro-facet geometry segmentation prior, which helps to reduce the intrinsic ambiguities and improve the decomposition of geometries and materials. Additionally, we introduce a normal map prefiltering strategy to more accurately simulate the normal distribution of reflective surfaces. These strategies are integrated into a hybrid geometry and material representation that employs both explicit and implicit methods to depict glossy objects. We demonstrate through quantitative analysis and qualitative visualization that the proposed method is effective to reconstruct high-fidelity geometries and materials of glossy objects, and performs favorably against state-of-the-arts.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143545119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
AttributionScanner: A Visual Analytics System for Model Validation with Metadata-Free Slice Finding.
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2025-02-28 DOI: 10.1109/TVCG.2025.3546644
Xiwei Xuan, Jorge Piazentin Ono, Liang Gou, Kwan-Liu Ma, Liu Ren
{"title":"AttributionScanner: A Visual Analytics System for Model Validation with Metadata-Free Slice Finding.","authors":"Xiwei Xuan, Jorge Piazentin Ono, Liang Gou, Kwan-Liu Ma, Liu Ren","doi":"10.1109/TVCG.2025.3546644","DOIUrl":"https://doi.org/10.1109/TVCG.2025.3546644","url":null,"abstract":"<p><p>Data slice finding is an emerging technique for validating machine learning (ML) models by identifying and analyzing subgroups in a dataset that exhibit poor performance, often characterized by distinct feature sets or descriptive metadata. However, in the context of validating vision models involving unstructured image data, this approach faces significant challenges, including the laborious and costly requirement for additional metadata and the complex task of interpreting the root causes of underperformance. To address these challenges, we introduce AttributionScanner, an innovative human-in-the-loop Visual Analytics (VA) system, designed for metadata-free data slice finding. Our system identifies interpretable data slices that involve common model behaviors and visualizes these patterns through an Attribution Mosaic design. Our interactive interface provides straightforward guidance for users to detect, interpret, and annotate predominant model issues, such as spurious correlations (model biases) and mislabeled data, with minimal effort. Additionally, it employs a cutting-edge model regularization technique to mitigate the detected issues and enhance the model's performance. The efficacy of AttributionScanner is demonstrated through use cases involving two benchmark datasets, with qualitative and quantitative evaluations showcasing its substantial effectiveness in vision model validation, ultimately leading to more reliable and accurate models.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143545387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0