IEEE Transactions on Visualization and Computer Graphics: Latest Publications

Shape It Up: An Empirically Grounded Approach for Designing Shape Palettes
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2024-09-16 | DOI: 10.1109/TVCG.2024.3456385
Authors: Chin Tseng; Arran Zeyu Wang; Ghulam Jilani Quadri; Danielle Albers Szafir
Abstract: Shape is commonly used to distinguish between categories in multi-class scatterplots. However, existing guidelines for choosing effective shape palettes rely largely on intuition and do not consider how these needs may change as the number of categories increases. Unlike color, shapes cannot be represented by a numerical space, making it difficult to propose general guidelines or design heuristics for using shape effectively. This paper presents a series of four experiments evaluating the efficiency of 39 shapes across three tasks: relative mean judgment tasks, expert preference, and correlation estimation. Our results show that conventional means for reasoning about shapes, such as filled versus unfilled, are insufficient to inform effective palette design. Further, even expert palettes vary significantly in their use of shape and corresponding effectiveness. To support effective shape palette design, we developed a model based on pairwise relations between shapes in our experiments and the number of shapes required for a given design. We embed this model in a palette design tool to give designers agency over shape selection while incorporating empirical elements of perceptual performance captured in our study. Our model advances understanding of shape perception in visualization contexts and provides practical design guidelines that can help improve categorical data encodings.
Vol. 31, no. 1, pp. 349-359.
Citations: 0
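The abstract describes a palette model built on pairwise relations between shapes. As a hedged illustration of how such a model might drive selection (the paper's actual scoring and tool are not reproduced here; the score matrix below is a random stand-in for empirically measured distinguishability), this sketch greedily builds a palette by always adding the shape whose weakest pairing with the current palette is strongest:

```python
import numpy as np

def greedy_shape_palette(pairwise: np.ndarray, k: int) -> list[int]:
    """Pick k shape indices; pairwise[i, j] = distinguishability of shapes i and j."""
    n = pairwise.shape[0]
    i, j = np.unravel_index(np.argmax(pairwise), pairwise.shape)  # seed: best pair
    palette = [int(i), int(j)]
    while len(palette) < k:
        rest = [s for s in range(n) if s not in palette]
        # Add the shape whose weakest pairing with the palette is strongest.
        best = max(rest, key=lambda s: min(pairwise[s, p] for p in palette))
        palette.append(best)
    return palette

rng = np.random.default_rng(0)
scores = rng.random((39, 39))          # hypothetical scores for 39 shapes
scores = (scores + scores.T) / 2       # distinguishability is symmetric
np.fill_diagonal(scores, 0.0)          # a shape is not distinct from itself
print(greedy_shape_palette(scores, k=5))
```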
DracoGPT: Extracting Visualization Design Preferences from Large Language Models
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2024-09-16 | DOI: 10.1109/TVCG.2024.3456350
Authors: Huichen Will Wang; Mitchell Gordon; Leilani Battle; Jeffrey Heer
Abstract: Trained on vast corpora, Large Language Models (LLMs) have the potential to encode visualization design knowledge and best practices. However, if they fail to do so, they might provide unreliable visualization recommendations. What visualization design preferences, then, have LLMs learned? We contribute DracoGPT, a method for extracting, modeling, and assessing visualization design preferences from LLMs. To assess varied tasks, we develop two pipelines, DracoGPT-Rank and DracoGPT-Recommend, to model LLMs prompted to either rank or recommend visual encoding specifications. We use Draco as a shared knowledge base in which to represent LLM design preferences and compare them to best practices from empirical research. We demonstrate that DracoGPT can accurately model the preferences expressed by LLMs, enabling analysis in terms of Draco design constraints. Across a suite of backing LLMs, we find that DracoGPT-Rank and DracoGPT-Recommend moderately agree with each other, but both substantially diverge from guidelines drawn from human subjects experiments. Future work can build on our approach to expand Draco's knowledge base to model a richer set of preferences and to provide a robust and cost-effective stand-in for LLMs.
Vol. 31, no. 1, pp. 710-720.
Citations: 0
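A rough sketch of what a rank-style pipeline could look like (the paper's actual prompts, spec format, and Draco integration differ; `call_llm` is a placeholder for any chat-completion client): prompt the LLM with spec pairs, tally its preferences, and those tallies can then be related to Draco-style soft-constraint violations to fit preference weights.

```python
def call_llm(prompt: str) -> str:
    """Placeholder: plug in any chat-completion client here."""
    raise NotImplementedError

def rank_pair(spec_a: str, spec_b: str) -> str:
    """Ask the LLM which of two visual encoding specs it prefers."""
    prompt = (
        "You are a visualization design expert. Which chart specification "
        f"is more effective?\nA: {spec_a}\nB: {spec_b}\n"
        "Answer with exactly 'A' or 'B'."
    )
    answer = call_llm(prompt).strip().upper()
    return spec_a if answer.startswith("A") else spec_b

def preference_counts(pairs):
    """Tally LLM preferences over spec pairs; counts like these could be
    regressed against constraint-violation vectors to recover weights."""
    wins = {}
    for a, b in pairs:
        winner = rank_pair(a, b)
        wins[winner] = wins.get(winner, 0) + 1
    return wins
```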
HaptoFloater: Visuo-Haptic Augmented Reality by Embedding Imperceptible Color Vibration Signals for Tactile Display Control in a Mid-Air Image
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2024-09-16 | DOI: 10.1109/TVCG.2024.3456175
Authors: Rina Nagano; Takahiro Kinoshita; Shingo Hattori; Yuichi Hiroi; Yuta Itoh; Takefumi Hiraki
Abstract: We propose HaptoFloater, a low-latency mid-air visuo-haptic augmented reality (VHAR) system that utilizes imperceptible color vibrations. When adding tactile stimuli to the visual information of a mid-air image, the user should not perceive the latency between the tactile and visual information. However, conventional tactile presentation methods for mid-air images, based on camera-detected fingertip positioning, introduce latency due to image processing and communication. To mitigate this latency, we use a color vibration technique; humans cannot perceive the vibration when the display alternates between two different color stimuli at a frequency of 25 Hz or higher. In our system, we embed this imperceptible color vibration into the mid-air image formed by a micromirror array plate, and a photodiode on the fingertip device directly detects this color vibration to provide tactile stimulation. Thus, our system allows for the tactile perception of multiple patterns on a mid-air image in 59.5 ms. In addition, we evaluate the visual-haptic delay tolerance on a mid-air display using our VHAR system and a tactile actuator with a single pattern and faster response time. The results of our user study indicate a visual-haptic delay tolerance of 110.6 ms, which is considerably larger than the latency associated with systems using multiple tactile patterns.
Vol. 30, no. 11, pp. 7463-7472.
Citations: 0
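The core mechanism, alternating two colors around a base color at 25 Hz or higher so they fuse perceptually while a photodiode can still read the alternation, can be sketched as below. The color offset and the 30 Hz rate are illustrative values, not the paper's parameters:

```python
def vibration_pair(base_rgb, delta=(0, 0, 12)):
    """Split a base color into two frame colors whose average equals the base."""
    hi = tuple(min(255, c + d) for c, d in zip(base_rgb, delta))
    lo = tuple(max(0, c - d) for c, d in zip(base_rgb, delta))
    return hi, lo

def frame_color(t_seconds, pair, freq_hz=30.0):
    """Square-wave alternation: which of the two colors to show at time t.
    At >= 25 Hz the viewer sees only the average color; a photodiode sees
    the alternation and can decode it as a tactile-control signal."""
    phase = int(t_seconds * freq_hz * 2) % 2   # two color flips per cycle
    return pair[phase]

hi, lo = vibration_pair((120, 60, 200))
print(hi, lo, frame_color(0.016, (hi, lo)))
```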
PREVis: Perceived Readability Evaluation for Visualizations
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2024-09-16 | DOI: 10.1109/TVCG.2024.3456318
Authors: Anne-Flore Cabouat; Tingying He; Petra Isenberg; Tobias Isenberg
Abstract: We developed and validated an instrument to measure the perceived readability in data visualization: PREVis. Researchers and practitioners can easily use this instrument as part of their evaluations to compare the perceived readability of different visual data representations. Our instrument can complement results from controlled experiments on user task performance or provide additional data during in-depth qualitative work such as design iterations when developing a new technique. Although readability is recognized as an essential quality of data visualizations, so far there has not been a unified definition of the construct in the context of visual representations. As a result, researchers often lack guidance for determining how to ask people to rate their perceived readability of a visualization. To address this issue, we engaged in a rigorous process to develop the first validated instrument targeted at the subjective readability of visual data representations. Our final instrument consists of 11 items across 4 dimensions: understandability, layout clarity, readability of data values, and readability of data patterns. We provide the questionnaire as a document with implementation guidelines on osf.io/9cg8j. Beyond this instrument, we contribute a discussion of how researchers have previously assessed visualization readability, and an analysis of the factors underlying perceived readability in visual data representations.
Vol. 31, no. 1, pp. 1083-1093.
Citations: 0
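A hedged scoring sketch for a PREVis-style instrument: average item ratings within each of the four dimensions. The item IDs, the items-per-dimension split, and the 7-point scale here are placeholders; the validated questionnaire and its implementation guidelines are the ones published at osf.io/9cg8j.

```python
from statistics import mean

# Placeholder mapping: 11 items across the 4 dimensions named in the abstract.
DIMENSIONS = {
    "understandability":         ["u1", "u2", "u3"],
    "layout_clarity":            ["l1", "l2", "l3"],
    "readability_data_values":   ["v1", "v2"],
    "readability_data_patterns": ["p1", "p2", "p3"],
}

def previs_scores(ratings: dict[str, int]) -> dict[str, float]:
    """ratings maps item id -> rating (e.g., 1-7); returns per-dimension means."""
    return {dim: mean(ratings[i] for i in items)
            for dim, items in DIMENSIONS.items()}

print(previs_scores({"u1": 6, "u2": 7, "u3": 6, "l1": 5, "l2": 6, "l3": 5,
                     "v1": 4, "v2": 5, "p1": 6, "p2": 6, "p3": 5}))
```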
AdversaFlow: Visual Red Teaming for Large Language Models with Multi-Level Adversarial Flow
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2024-09-16 | DOI: 10.1109/TVCG.2024.3456150
Authors: Dazhen Deng; Chuhan Zhang; Huawei Zheng; Yuwen Pu; Shouling Ji; Yingcai Wu
Abstract: Large Language Models (LLMs) are powerful but also raise significant security concerns, particularly regarding the harm they can cause, such as generating fake news that manipulates public opinion on social media and providing responses to unethical activities. Traditional red teaming approaches for identifying AI vulnerabilities rely on manual prompt construction and expertise. This paper introduces AdversaFlow, a novel visual analytics system designed to enhance LLM security against adversarial attacks through human-AI collaboration. AdversaFlow involves adversarial training between a target model and a red model, featuring unique multi-level adversarial flow and fluctuation path visualizations. These features provide insights into adversarial dynamics and LLM robustness, enabling experts to identify and mitigate vulnerabilities effectively. We present quantitative evaluations and case studies validating our system's utility and offering insights for future AI security solutions. Our method can enhance LLM security, supporting downstream scenarios like social media regulation by enabling more effective detection, monitoring, and mitigation of harmful content and behaviors.
Vol. 31, no. 1, pp. 492-502.
Citations: 0
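A minimal red-teaming loop in the spirit of the target-model/red-model setup the abstract describes (the paper's adversarial training and multi-level flow visualizations are far richer; all three model functions below are placeholders, not the authors' components):

```python
def red_model(seed: str) -> str:
    return seed + " (adversarially rephrased)"   # placeholder perturbation

def target_model(prompt: str) -> str:
    return "I can't help with that."             # placeholder model under test

def harm_score(response: str) -> float:
    return 0.0                                   # placeholder safety classifier

def red_team_round(seeds, threshold=0.5):
    """One round: attack, query, score; keep attacks that elicited harm,
    sorted so experts can inspect the worst cases first."""
    findings = []
    for seed in seeds:
        attack = red_model(seed)
        response = target_model(attack)
        score = harm_score(response)
        if score >= threshold:
            findings.append({"attack": attack, "response": response, "score": score})
    return sorted(findings, key=lambda f: -f["score"])

print(red_team_round(["example seed prompt"]))
```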
Does This Have a Particular Meaning? Interactive Pattern Explanation for Network Visualizations
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2024-09-16 | DOI: 10.1109/TVCG.2024.3456192
Authors: Xinhuan Shu; Alexis Pister; Junxiu Tang; Fanny Chevalier; Benjamin Bach
Abstract: This paper presents an interactive technique to explain visual patterns in network visualizations to analysts who do not understand these visualizations and who are learning to read them. Learning a visualization requires mastering its visual grammar and decoding information presented through visual marks, graphical encodings, and spatial configurations. To help people learn network visualization designs and extract meaningful information, we introduce the concept of interactive pattern explanation that allows viewers to select an arbitrary area in a visualization, then automatically mines the underlying data patterns, and explains both visual and data patterns present in the viewer's selection. In a qualitative and a quantitative user study with a total of 32 participants, we compare interactive pattern explanations to textual-only and visual-only (cheatsheets) explanations. Our results show that interactive explanations increase learning of i) unfamiliar visualizations, ii) patterns in network science, and iii) the respective network terminology.
Vol. 31, no. 1, pp. 677-687.
Citations: 0
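A hedged sketch of the select-then-explain idea (the paper mines a much richer set of visual and data patterns): gather the nodes inside a selection rectangle and emit a simple textual explanation based on the density of the induced subgraph.

```python
import networkx as nx

def explain_selection(G, pos, rect):
    """pos maps node -> (x, y); rect is ((x0, y0), (x1, y1))."""
    (x0, y0), (x1, y1) = rect
    picked = [n for n, (x, y) in pos.items() if x0 <= x <= x1 and y0 <= y <= y1]
    sub = G.subgraph(picked)
    if len(picked) >= 3 and nx.density(sub) == 1.0:
        return f"The {len(picked)} selected nodes form a clique: every pair is connected."
    if len(picked) >= 3 and nx.density(sub) >= 0.7:
        return f"The selection is a dense cluster (density {nx.density(sub):.2f})."
    return "No salient pattern detected in the selection."

G = nx.complete_graph(4)
pos = {0: (0, 0), 1: (1, 0), 2: (0, 1), 3: (1, 1)}
print(explain_selection(G, pos, ((-0.5, -0.5), (1.5, 1.5))))
```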
How Aligned are Human Chart Takeaways and LLM Predictions? A Case Study on Bar Charts with Varying Layouts
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2024-09-16 | DOI: 10.1109/TVCG.2024.3456378
Authors: Huichen Will Wang; Jane Hoffswell; Sao Myat Thazin Thane; Victor S. Bursztyn; Cindy Xiong Bearfield
Abstract: Large Language Models (LLMs) have been adopted for a variety of visualization tasks, but how far are we from perceptually aware LLMs that can predict human takeaways? Graphical perception literature has shown that human chart takeaways are sensitive to visualization design choices, such as spatial layouts. In this work, we examine the extent to which LLMs exhibit such sensitivity when generating takeaways, using bar charts with varying spatial layouts as a case study. We conducted three experiments and tested four common bar chart layouts: vertically juxtaposed, horizontally juxtaposed, overlaid, and stacked. In Experiment 1, we identified the optimal configurations to generate meaningful chart takeaways by testing four LLMs, two temperature settings, nine chart specifications, and two prompting strategies. We found that even state-of-the-art LLMs struggled to generate semantically diverse and factually accurate takeaways. In Experiment 2, we used the optimal configurations to generate 30 chart takeaways each for eight visualizations across four layouts and two datasets in both zero-shot and one-shot settings. Compared to human takeaways, we found that the takeaways LLMs generated often did not match the types of comparisons made by humans. In Experiment 3, we examined the effect of chart context and data on LLM takeaways. We found that LLMs, unlike humans, exhibited variation in takeaway comparison types for different bar charts using the same bar layout. Overall, our case study evaluates the ability of LLMs to emulate human interpretations of data and points to challenges and opportunities in using LLMs to predict human chart takeaways.
Vol. 31, no. 1, pp. 536-546.
Citations: 0
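A hedged sketch of the prompting setup such a study requires (the spec format and prompt wording below are illustrative, not the study's): serialize a bar chart specification into a prompt and request a takeaway, optionally prepending a one-shot example.

```python
import json

def takeaway_prompt(chart_spec: dict, one_shot=None) -> str:
    """Build a zero-shot or one-shot prompt asking an LLM for a chart takeaway."""
    parts = []
    if one_shot:
        example_spec, example_takeaway = one_shot
        parts.append(f"Example chart: {example_spec}\n"
                     f"Example takeaway: {example_takeaway}")
    parts.append(
        "Chart specification:\n" + json.dumps(chart_spec, indent=2) +
        "\nState the main takeaway a reader would draw from this chart."
    )
    return "\n\n".join(parts)

spec = {"mark": "bar", "layout": "stacked",
        "data": [{"group": "A", "x": "2020", "y": 10},
                 {"group": "B", "x": "2020", "y": 14}]}
print(takeaway_prompt(spec))
```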
Discursive Patinas: Anchoring Discussions in Data Visualizations
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2024-09-13 | DOI: 10.1109/TVCG.2024.3456334
Authors: Tobias Kauer; Derya Akbaba; Marian Dörk; Benjamin Bach
Abstract: This paper presents discursive patinas, a technique to visualize discussions onto data visualizations, inspired by how people leave traces in the physical world. While data visualizations are widely discussed in online communities and social media, comments tend to be displayed separately from the visualization and we lack ways to relate these discussions back to the content of the visualization, e.g., to situate comments, explain visual patterns, or question assumptions. In our visualization annotation interface, users can designate areas within the visualization. Discursive patinas are made of overlaid visual marks (anchors), attached to textual comments with category labels, likes, and replies. By coloring and styling the anchors, a meta visualization emerges, showing what and where people comment and annotate the visualization. These patinas show regions of heavy discussions, recent commenting activity, and the distribution of questions, suggestions, or personal stories. We ran workshops with 90 students, domain experts, and visualization researchers to study how people use anchors to discuss visualizations and how patinas influence people's understanding of the discussion. Our results show that discursive patinas improve the ability to navigate discussions and guide people to comments that help understand, contextualize, or scrutinize the visualization. We discuss the potential of anchors and patinas to support discursive engagements, including critical readings of visualizations, design feedback, and feminist approaches to data visualization.
Vol. 31, no. 1, pp. 1246-1256.
Citations: 0
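A minimal data-structure sketch for anchors as the abstract describes them, a positioned mark tied to a comment with a category label, likes, and replies (the paper's interface and the patina rendering are not reproduced; field names here are illustrative). Aggregating anchors by category is the first step toward the meta visualization:

```python
from dataclasses import dataclass, field

@dataclass
class Anchor:
    x: float                          # position on the visualization
    y: float
    comment: str
    category: str                     # e.g., "question", "suggestion", "story"
    likes: int = 0
    replies: list[str] = field(default_factory=list)

anchors = [Anchor(0.2, 0.8, "Why the spike here?", "question", likes=3),
           Anchor(0.6, 0.4, "Axis should start at 0.", "suggestion")]

by_category = {}
for a in anchors:                     # aggregate anchors for a patina overlay
    by_category.setdefault(a.category, []).append(a)
print({c: len(v) for c, v in by_category.items()})
```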
DeLVE into Earth's Past: A Visualization-Based Exhibit Deployed Across Multiple Museum Contexts
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2024-09-13 | DOI: 10.1109/TVCG.2024.3456174
Authors: Mara Solen; Nigar Sultana; Laura Lukes; Tamara Munzner
Abstract: While previous work has found success in deploying visualizations as museum exhibits, it has not investigated whether museum context impacts visitor behaviour with these exhibits. We present an interactive Deep-time Literacy Visualization Exhibit (DeLVE) to help museum visitors understand deep time (lengths of extremely long geological processes) by improving proportional reasoning skills through comparison of different time periods. DeLVE uses a new visualization idiom, Connected Multi-Tier Ranges, to visualize curated datasets of past events across multiple scales of time, relating extreme scales with concrete scales that have more familiar magnitudes and units. Museum staff at three separate museums approved the deployment of DeLVE as a digital kiosk, and devoted time to curating a unique dataset in each of them. We collect data from two sources, an observational study and system trace logs. We discuss the importance of context: similar museum exhibits in different contexts were received very differently by visitors. We additionally discuss differences in our process from Sedlmair et al.'s design study methodology, which is focused on design studies triggered by connection with collaborators rather than the discovery of a concept to communicate. Supplemental materials are available at: https://osf.io/z53dq/
Vol. 31, no. 1, pp. 952-961.
Citations: 0
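A small worked example of the proportional reasoning that deep-time exhibits rely on, relating extreme scales to familiar ones (this is a generic illustration, the classic "Earth's history as one calendar year" analogy, not DeLVE's Connected Multi-Tier Ranges idiom):

```python
EARTH_AGE_YEARS = 4.54e9              # widely cited age of the Earth
YEAR_MINUTES = 365 * 24 * 60          # minutes in one calendar year

def minutes_before_midnight_dec31(years_ago: float) -> float:
    """Where an event lands if all of Earth's history were one year."""
    return years_ago / EARTH_AGE_YEARS * YEAR_MINUTES

for label, age in [("Dinosaur extinction", 66e6),
                   ("Earliest Homo sapiens", 3e5)]:
    m = minutes_before_midnight_dec31(age)
    print(f"{label}: {m:,.1f} minutes before midnight, Dec 31")
```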
SpreadLine: Visualizing Egocentric Dynamic Influence
IEEE Transactions on Visualization and Computer Graphics | Pub Date: 2024-09-13 | DOI: 10.1109/TVCG.2024.3456373
Authors: Yun-Hsin Kuo; Dongyu Liu; Kwan-Liu Ma
Abstract: Egocentric networks, often visualized as node-link diagrams, portray the complex relationship (link) dynamics between an entity (node) and others. However, common analytics tasks are multifaceted, encompassing interactions among four key aspects: strength, function, structure, and content. Current node-link visualization designs may fall short, focusing narrowly on certain aspects and neglecting the holistic, dynamic nature of egocentric networks. To bridge this gap, we introduce SpreadLine, a novel visualization framework designed to enable the visual exploration of egocentric networks from these four aspects at the microscopic level. Leveraging the intuitive appeal of storyline visualizations, SpreadLine adopts a storyline-based design to represent entities and their evolving relationships. We further encode essential topological information in the layout and condense the contextual information in a metro map metaphor, allowing for a more engaging and effective way to explore temporal and attribute-based information. To guide our work, with a thorough review of pertinent literature, we have distilled a task taxonomy that addresses the analytical needs specific to egocentric network exploration. Acknowledging the diverse analytical requirements of users, SpreadLine offers customizable encodings to enable users to tailor the framework for their tasks. We demonstrate the efficacy and general applicability of SpreadLine through three diverse real-world case studies (disease surveillance, social media trends, and academic career evolution) and a usability study.
Vol. 31, no. 1, pp. 1050-1060.
Citations: 0
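A hedged sketch of one ingredient any storyline-based layout needs (SpreadLine's actual layout also encodes topology and context, which this omits): keep each entity's line steady across timesteps by carrying over its vertical order from the previous step, a simple heuristic for reducing line crossings.

```python
def carry_over_order(timesteps: list[list[str]]) -> list[list[str]]:
    """Order entities at each timestep by their slot in the previous one."""
    ordered = [list(timesteps[0])]
    for entities in timesteps[1:]:
        prev_pos = {e: i for i, e in enumerate(ordered[-1])}
        # Entities seen before keep their old slot; new ones sink to the bottom.
        ordered.append(sorted(entities, key=lambda e: prev_pos.get(e, len(prev_pos))))
    return ordered

# Each inner list: entities present at that timestep, around the ego.
steps = [["ego", "a", "b"], ["b", "ego", "c"], ["c", "a", "ego"]]
print(carry_over_order(steps))   # lines stay stable: ego holds the top slot
```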