Latest publications in Visual Informatics

ClayVolume: A progressive refinement interaction system for immersive visualization
IF 3.8 · CAS Tier 3 (Computer Science)
Visual Informatics · Pub Date: 2025-03-01 · DOI: 10.1016/j.visinf.2025.01.003
Zhenyuan Wang, Qing Zhao, Yue Zhang, Jinhui Zhang, Guihua Shan, Xiao Zhou, Dong Tian
Abstract: Immersive visualization has become an important tool for discovering hidden patterns and obtaining insights from data. Target acquisition in immersive visualization is a fundamental step in visual analysis. However, limited visual encoding attributes and the presence of stacking and occlusion in immersive environments pose challenges in discovering valuable targets and making unambiguous selections. In this paper, we present ClayVolume, an interactive system designed for immersive visualization. It comprises metaphorical tools for customizing regions of interest (ROIs) and multiple views that serve as interactive and analytical mediums. ClayVolume empowers analysts to efficiently acquire valuable targets through a progressive refinement of interaction methods, enabling further extraction of insights. We evaluate ClayVolume in the scenario of immersive visualization of network data and perform a comparative analysis of its performance against other techniques in target selection tasks. The results indicate that ClayVolume enables flexible target selection in immersive visualization and provides fast target discovery and localization capabilities.
Visual Informatics, vol. 9, no. 1, pp. 71–83
Citations: 0
What about thematic information? An analysis of the multidimensional visualization of individual mobility
IF 3.8 · CAS Tier 3 (Computer Science)
Visual Informatics · Pub Date: 2025-03-01 · DOI: 10.1016/j.visinf.2025.02.002
Aline Menin, Clément Quere, Jorge Wagner, Sonia Chardonnel, Paule-Annick Davoine, Wolfgang Stuerzlinger, Carla Maria Dal Sasso Freitas, Luciana Nedel, Marco Winckler
Abstract: This paper reviews the literature on the visualization of individual mobility data, with a focus on thematic integration. It emphasizes the importance of visualization in understanding mobility patterns within a population and how it helps mobility experts address domain-specific questions. We analyze 38 papers published between 2010 and 2024 in GIS and VIS venues that describe visualizations of multidimensional data related to individual movements in urban environments, concentrating on individual mobility rather than traffic data. Our primary aim is to report advances in interactive visualization for individual mobility analysis, particularly regarding the representation of thematic information about people's motivations for mobility. Our findings indicate that the thematic dimension is only partially represented in the literature, despite its critical significance in transportation. This gap often stems from the difficulty of identifying data sources that inherently provide this information, which requires visualization designers and developers to navigate multiple, heterogeneous data sources. We identify the strengths and limitations of existing visualizations and suggest potential research directions for the field.
Visual Informatics, vol. 9, no. 1, pp. 99–115
Citations: 0
EmotionLens: Interactive visual exploration of the circumplex emotion space in literary works via affective word clouds
IF 3.8 · CAS Tier 3 (Computer Science)
Visual Informatics · Pub Date: 2025-03-01 · DOI: 10.1016/j.visinf.2025.02.003
Bingyuan Wang, Qing Shi, Xiaohan Wang, You Zhou, Wei Zeng, Zeyu Wang
Abstract: Emotion (e.g., valence and arousal) is an important factor in literature (e.g., poetry and prose), and has rich value for depicting the lives and knowledge of historical figures and for appreciating the aesthetics of literary works. Currently, digital humanities and computational literature apply data statistics extensively in emotion analysis but lack visual analytics for efficient exploration. To fill the gap, we propose a user-centric approach that integrates advanced machine learning models and intuitive visualization for emotion analysis in literature. We make three main contributions. First, we consolidate a new emotion dataset of literary works from different periods, literary genres, and language contexts, augmented with fine-grained valence and arousal labels. Next, we design an interactive visual analytic system named EmotionLens, which allows users to perform multi-granularity (e.g., individual, group, society) and multi-faceted (e.g., distribution, chronology, correlation) analyses of literary emotions, supporting both exploratory and confirmatory approaches in digital humanities. Specifically, we introduce a novel affective word cloud with augmented word weight, position, and color to facilitate literary text analysis from an emotional perspective. To validate the usability and effectiveness of EmotionLens, we provide two consecutive case studies, two user studies, and interviews with experts from different domains. Our results show that EmotionLens bridges literary text, emotion, and various other attributes, enables efficient knowledge discovery in massive data, and facilitates raising and validating domain-specific hypotheses in literature.
Visual Informatics, vol. 9, no. 1, pp. 84–98
Citations: 0
Leveraging personality as a proxy of perceived transparency in hierarchical visualizations
IF 3.8 · CAS Tier 3 (Computer Science)
Visual Informatics · Pub Date: 2025-02-22 · DOI: 10.1016/j.visinf.2025.01.002
Tomás Alves, Carlota Dias, Daniel Gonçalves, Sandra Gama
Abstract: Understanding which factors affect information visualization transparency continues to be one of the most relevant challenges in current research, especially since trust shapes how users build on the presented knowledge and use it. This work extends the current body of research by studying users' subjective evaluation of the visualization transparency of hierarchical charts through the clarity, coverage, and look-and-feel dimensions. Additionally, we extend the user profile to better understand whether personality facets have a biasing effect on the trust-building process. Our results show that the data encodings do not affect how users perceive visualization transparency while controlling for personality factors. Regarding personality, the propensity to trust affects how users judge the clarity of a hierarchical chart. Our findings provide new insights into the research challenges of measuring trust and understanding the transparency of information visualization. Specifically, we explore how personality factors manifest in this trust-building relationship and in user interaction within visualization systems.
Visual Informatics, vol. 9, no. 1, pp. 43–57
Citations: 0
Visual comparative analytics of multimodal transportation
IF 3.8 · CAS Tier 3 (Computer Science)
Visual Informatics · Pub Date: 2025-01-16 · DOI: 10.1016/j.visinf.2025.01.001
Zikun Deng, Haoming Chen, Qing-Long Lu, Zicheng Su, Tobias Schreck, Jie Bao, Yi Cai
Abstract: Contemporary urban transportation systems frequently depend on a variety of modes to provide residents with travel services. Understanding a multimodal transportation system is pivotal for devising well-informed planning; however, it is also inherently challenging for traffic analysts and planners. This challenge stems from the necessity of evaluating and contrasting the quality of transportation services across multiple modes. Existing methods offer only limited insights into the system, primarily because the multimodal traffic data needed for fair comparisons is inadequate and because they fail to equip analysts and planners with the means for exploration and reasoned analysis within the urban spatial context. To this end, we first acquire sufficient multimodal trips by leveraging well-established navigation platforms that can estimate the routes with the least travel time for a given origin and destination (an OD pair). We then propose TraDyssey, a visual analytics system that enables analysts and planners to evaluate and compare multiple modes by exploring the acquired massive multimodal trips. TraDyssey follows a streamlined query-and-explore workflow supported by user-friendly and effective interactive visualizations. Specifically, a revisited difference-aware parallel coordinate plot (PCP) is designed for overall mode comparisons based on multimodal trips. Trip groups can be flexibly queried on the PCP based on differential features across modes. The queried trips are then organized and presented on a geographic map by OD pairs, forming a group-OD-trip hierarchy of visual exploration. Domain experts gained valuable insights into transportation planning through real-world case studies using TraDyssey.
Visual Informatics, vol. 9, no. 1, pp. 18–30
Citations: 0
Out-of-focus artifacts mitigation and autofocus methods for 3D displays
IF 3.8 · CAS Tier 3 (Computer Science)
Visual Informatics · Pub Date: 2024-12-20 · DOI: 10.1016/j.visinf.2024.12.001
T. Chlubna, T. Milet, P. Zemčík
Abstract: This paper proposes a novel content-aware method for automatically focusing a scene on a 3D display. The method addresses the common problem that visualized content is often out of focus, which adversely affects the perceived 3D content. The proposed method outperforms the existing focusing method, reducing the error by almost 30%. Both the existing and the novel focusing are extended with a depth-of-field enhancement of the scene to mitigate out-of-focus artifacts. The relation between the total depth range of the scene and the visual quality of the result is discussed and evaluated in human perception experiments. A space-warping method for synthetic scenes is proposed to reduce out-of-focus artifacts while maintaining the scene appearance. A user study was conducted to evaluate the proposed methods and to identify the crucial parameters of the scene-focusing process on the 3D stereoscopic display by Looking Glass Factory. The study confirmed the efficiency of the proposals and showed that the depth-of-field artifact mitigation might not be suitable for all scenes, despite theoretical hypotheses. Overall, this paper contributes a set of methods that can be used to produce the best user experience with an arbitrary scene displayed on a 3D display.
Visual Informatics, vol. 9, no. 1, pp. 31–42
Citations: 0
Transforming cinematography lighting education in the metaverse
IF 3.8 · CAS Tier 3 (Computer Science)
Visual Informatics · Pub Date: 2024-12-05 · DOI: 10.1016/j.visinf.2024.11.003
Xian Xu, Wai Tong, Zheng Wei, Meng Xia, Lik-Hang Lee, Huamin Qu
Abstract: Lighting education is a foundational component of cinematography education. However, many art schools do not have expensive soundstages for traditional cinematography lessons. Migrating physical setups to virtual experiences is a potential solution driven by metaverse initiatives, yet there is still a lack of knowledge on the design of a VR system for teaching cinematography. We first analyzed the educational needs for cinematography lighting education by conducting interviews with six cinematography professionals from academia and industry. Accordingly, we present Art Mirror, a VR soundstage for teachers and students to emulate cinematography lighting in virtual scenarios. We evaluated Art Mirror in terms of usability, realism, presence, sense of agency, and collaboration. Sixteen participants were invited to take a cinematography lighting course and assess the design elements of Art Mirror. Our results demonstrate that Art Mirror is usable and useful for cinematography lighting education, which sheds light on the design of VR cinematography education.
Visual Informatics, vol. 9, no. 1, pp. 1–17
Citations: 0
ArtEyer: Enriching GPT-based agents with contextual data visualizations for fine art authentication
IF 3.8 · CAS Tier 3 (Computer Science)
Visual Informatics · Pub Date: 2024-12-01 · DOI: 10.1016/j.visinf.2024.11.001
Tan Tang, Yanhong Wu, Junming Gao, Kejia Ruan, Yanjie Zhang, Shuainan Ye, Yingcai Wu, Xiaojiao Chen
Abstract: Fine art authentication plays a significant role in protecting cultural heritage and ensuring the integrity of artworks. Traditional authentication methods require professionals to collect many reference materials and conduct detailed analyses. To ease this difficulty, we collaborate with domain experts to develop a GPT-based agent, namely ArtEyer, that offers accurate attributions, determines origin and authorship, and executes visual analytics. Despite the convenience of the conversational user interface, novice users may still face challenges due to the hallucination issue and the steep learning curve associated with prompting. To address these obstacles, we propose a novel solution that places interactive data visualizations into the conversations. We create contextual visualizations from an external domain-dependent database to ensure data trustworthiness, and we allow users to provide precise instructions to the agent by interacting directly with these visualizations, thus overcoming the vagueness inherent in natural-language prompting. We evaluate ArtEyer through an in-lab user study and demonstrate its usage with a real-world case.
Visual Informatics, vol. 8, no. 4, pp. 48–59
Citations: 0
Computer Vision in Augmented, Virtual, Mixed and Extended Reality environments—A bibliometric review
IF 3.8 · CAS Tier 3 (Computer Science)
Visual Informatics · Pub Date: 2024-12-01 · DOI: 10.1016/j.visinf.2024.11.002
Júlio Castro Lopes, Rui Pedro Lopes
Abstract: This work describes a bibliometric analysis of the literature on the use of computer vision algorithms in Augmented Reality (AR), Virtual Reality (VR), Mixed Reality (MR), and Extended Reality (XR) environments. The analysis aims to highlight the evolution, trends, and effects of research in this field. The review provides an overview of immersive technologies and their applications, as well as the role of computer vision algorithms in enabling these technologies and the potential benefits of using such algorithms. The study identifies important authors, institutions, and research themes by using bibliometric indicators such as citation counts, co-citation analysis, and network analysis. It also identifies gaps and opportunities for additional research in this area and provides a critical assessment of the quality and relevance of the publications.
Visual Informatics, vol. 8, no. 4, pp. 13–22
Citations: 0
ChemNav: An interactive visual tool to navigate in the latent space for chemical molecules discovery
IF 3.8 · CAS Tier 3 (Computer Science)
Visual Informatics · Pub Date: 2024-12-01 · DOI: 10.1016/j.visinf.2024.10.002
Yang Zhang, Jie Li, Xu Chao
Abstract: In recent years, AI-driven drug development has emerged as a prominent research topic in computational chemistry. A key focus is the application of generative models to molecule synthesis, which create extensive virtual libraries of chemical molecules based on latent spaces. However, locating molecules with desirable properties within these vast latent spaces remains a significant challenge: large regions of invalid samples, called "dead zones", impede exploration efficiency, and the process is time-consuming and repetitive. We therefore propose a visualization system to help experts identify potential molecules with desirable properties as they navigate the latent space. Specifically, we conducted a literature survey on the application of generative networks in drug synthesis to summarize the tasks, followed by expert interviews to determine their requirements. Based on these requirements, we introduce ChemNav, an interactive visual tool for navigating the latent space in search of desirable molecules. ChemNav incorporates a heuristic latent-space interpolation path search algorithm to enhance the efficiency of valid molecule generation, and a similar-sample search algorithm to accelerate the discovery of similar molecules. Evaluations of ChemNav through two case studies, a user study, and experiments demonstrated its effectiveness in inspiring researchers to explore the latent space for chemical molecule discovery.
Visual Informatics, vol. 8, no. 4, pp. 60–70
Citations: 0