Visual Informatics — Latest Articles

TPA-Vis: Visual analytics for Systematic Teaching Pattern Analysis in online learning
IF 3.8 | CAS Tier 3 | Computer Science
Visual Informatics | Pub Date: 2026-03-01 | Epub Date: 2025-12-26 | DOI: 10.1016/j.visinf.2025.100302
Lei Wang, Li Ye, Yuhua Liu, Jingfang Mao, Zhiguang Zhou
Citations: 0
MetaCineMoji: Visualizing film set communication in an interactive interface for collaboration in virtual LED production
IF 3.8 | CAS Tier 3 | Computer Science
Visual Informatics | Pub Date: 2026-03-01 | Epub Date: 2025-09-25 | DOI: 10.1016/j.visinf.2025.100284
Zheng Wei, Shan Jin, Wai Tong, Pan Hui, Lik-Hang Lee, Xian Xu
LED virtual production (LED-VP) is now mainstream in high-end studios, yet the capital cost of real-time volume stages makes training with this technology prohibitively expensive; most film institutions simply cannot afford to build an LED-VP set for their students. This paper explores an alternative approach to LED-VP training by developing a virtual counterpart, although simulating a virtual environment in which students can operate an LED-VP stage raises significant challenges. Our study proposes a virtual LED-VP environment as a virtual reality collaborative learning (VRCL) system. We developed an interface featuring visual symbols, MetaCineMoji, for film operation to facilitate smooth communication and learning in LED-VP workflows. MetaCineMoji demonstrates the feasibility of multi-person collaborative learning in a virtual filming studio, translating film-set communication into visual symbols for lighting design, scene construction, and collaborative work with key stakeholders such as directors, cinematographers, and gaffers. We explore the impact of film-operation interfaces containing visual symbols on social-interaction factors within the virtual studio. In an evaluation with 24 participants, our findings show that the system equipped with the visual-symbol film-operation interface significantly enhances social interaction among learners and yields significantly higher learning outcomes than systems without the interface.
Citations: 0
ConTopic: Human-in-the-loop neural topic modeling with constraint loss for topic quality improvement
IF 3.8 | CAS Tier 3 | Computer Science
Visual Informatics | Pub Date: 2026-03-01 | Epub Date: 2025-10-16 | DOI: 10.1016/j.visinf.2025.100286
Qiuchen Fan, Yue Shen, Jie Li
Existing neural topic models often produce semantically ambiguous or low-quality topics, limiting their effectiveness in real-world applications. To address this, we propose ConTopic, a human-in-the-loop framework that integrates user-defined "must-link" and "cannot-link" constraints to improve topic quality. Our method employs an autoencoder-based neural network to jointly embed words, documents, and topics into a unified semantic space, enabling constraint-guided optimization via a dedicated loss function. We also introduce an interactive editing tool with three visualization strategies that help users assess topic quality, explore semantic relations, and refine topics with minimal cognitive effort. Experiments on real-world datasets, supported by quantitative evaluations and user studies, confirm the effectiveness and usability of ConTopic in enhancing topic-modeling workflows.
Citations: 0
CateSift: An interactive steering approach for classifying large scale text
IF 3.8 | CAS Tier 3 | Computer Science
Visual Informatics | Pub Date: 2026-03-01 | Epub Date: 2025-09-26 | DOI: 10.1016/j.visinf.2025.100273
Chundong Wang, Yuhan Tian, Xumeng Wang, Yixuan Song, Haotian Zhang, Yongxin Zhao
Concept management for large-scale text data is critical in domains such as healthcare informatics, digital libraries, and news classification. However, the variability of concept structures and the diversity of application requirements challenge existing automated methods, which often lack the flexibility to accommodate customized needs, while manual classification remains resource-intensive and inefficient. To address this issue, we propose CateSift, an interactive approach that integrates public knowledge to streamline the classification process and incorporates expert knowledge to formulate classification models. The main contributions of this work are: (1) a visualization interface, CateSift, that helps users construct and refine classification models for large-scale data, and (2) a prompt-based model that integrates expert knowledge to iteratively refine hierarchical classification structures. Specifically, CateSift provides users with a hierarchical concept tree that highlights concepts with uncertain classifications and invites users to optimize the classification models by injecting knowledge. To cope with large-scale data, CateSift allows users to steer the classification model by adjusting the classification tree or annotating classifications. Case studies indicate that the proposed approach supports classification of large-scale data effectively and efficiently.
Citations: 0
CustMatcher: Enhancing preference-driven people-to-people recommendation
IF 3.8 | CAS Tier 3 | Computer Science
Visual Informatics | Pub Date: 2026-03-01 | Epub Date: 2025-12-16 | DOI: 10.1016/j.visinf.2025.100298
Ji Ma, Jiachen Wang, Xiao Xie, Zheng Zhou, Hui Zhang, Yingcai Wu
People-to-people recommendation involves suggesting connections or relationships between individuals based on shared interests, skills, or other relevant factors, such as preferred field of study or geographical location. It is a common matching task across various domains, notably in education, where it assists in forming study groups, mentorship programs, and collaborative projects. However, generating people-to-people recommendations that satisfy users' preferences is laborious, requiring extensive profile analysis and thus prompting the need for interactive visualization systems. In this work, we collaborated with experts from various education domains and developed CustMatcher, which coordinates automatic matching algorithms and visualizations to enable efficient people-to-people recommendations. We first propose a steerable matching framework that balances flexibility and efficiency. A constraint space is defined to let users express their explicit and implicit preferences about the matching. Visualizations and interactions designed on the framework and constraint space help users generate an initial matching result, resolve conflicts between preferences, and improve the matching progressively. We evaluate the effectiveness and usability of the system with a user study and a case study.
Citations: 0
Knowledge and multi-detail enhanced GAN for human-driven text-to-image synthesis
IF 3.8 | CAS Tier 3 | Computer Science
Visual Informatics | Pub Date: 2026-03-01 | Epub Date: 2025-09-30 | DOI: 10.1016/j.visinf.2025.100283
Ning Xu, Zhewen Shen, Hongshuo Tian, Bolun Zheng, Chenggang Yan, Jinbo Cao, Rongbao Kang, An-An Liu
Human-driven text-to-image synthesis aims to create controllable images that not only adhere to the semantics of a given text but also incorporate the visual characteristics of a given human. For example, given "a man on the beach" (text) along with a photo of a person, the model aims to generate an image depicting that person on the beach. Although current diffusion-based methods have shown promise in this task, they face two major limitations: (1) the generated images can appear stiff and unnatural, almost like collages of human and background; (2) the details of the human in the generated image are inconsistent with those in the input, losing the original identity. To address these issues, we present the Knowledge and Multi-Detail Enhanced GAN for human-driven text-to-image synthesis. It employs external knowledge as references to improve the harmony between human and background, and uses CLIP's multi-layer features to intensify human details. First, we search a database to retrieve external images similar to the given text, which serve as our knowledge. Second, to preserve human details, we present the Multi-Detail Enhancer, which uses the image encoder of CLIP to extract human representations at multiple levels. Third, to enhance human-background naturalness, we present the Knowledge Attention Enhancer, which seamlessly blends human, text, and knowledge by attentively retaining useful information and filtering out noise from the knowledge. Finally, we introduce dual discriminators to guide the entire network, facilitating accurate capture of human details and image generation. Extensive experiments demonstrate the superiority of our method in efficiency and computational cost: it is about 300 times faster than diffusion-based models, uses only 5% of the parameters, and completes training in just two days on three V100 GPUs.
Citations: 0
Data visualization for improving financial literacy: A systematic review
IF 3.8 | CAS Tier 3 | Computer Science
Visual Informatics | Pub Date: 2026-03-01 | Epub Date: 2025-09-01 | DOI: 10.1016/j.visinf.2025.100272
Meng Du, Robert Amor, Kwan-Liu Ma, Burkhard C. Wünsche
Financial literacy empowers individuals to make informed and effective financial decisions, improving their overall financial well-being and security. However, for many people, understanding financial concepts can be daunting, and only half of US adults are considered financially literate. Data visualization simplifies these concepts, making them accessible and engaging for learners of all ages. This systematic review analyzes 37 research papers exploring the use of data visualization and visual analytics in financial education and literacy enhancement. We classify these studies into five key areas: (1) the evolution of visualization use across time and space, (2) motivations for using visualization tools, (3) the financial topics addressed and instructional approaches used, (4) the types of tools and technologies applied, and (5) how the effectiveness of teaching interventions was evaluated. Furthermore, we identify research gaps and highlight opportunities for advancing financial literacy. Our findings offer practical insights for educators and professionals to effectively utilize or design visual tools for financial literacy.
Citations: 0
How well will LLMs perform for graph layout tasks?
IF 3.8 | CAS Tier 3 | Computer Science
Visual Informatics | Pub Date: 2026-03-01 | Epub Date: 2025-09-29 | DOI: 10.1016/j.visinf.2025.100285
Yilun Fan, Xianglei Lyu, Lei Wang, Ying Zhao, Fangfang Zhou, Yong Wang
Large Language Models (LLMs) have demonstrated impressive capabilities in various applications, motivating visualization researchers to explore their use for visualization tasks such as automated visualization recommendation, code generation, and misleading-visualization detection. However, it remains unclear how well LLMs perform at graph layout, a classic and fundamental research question in visualization. To fill this gap, this paper presents a systematic evaluation of three state-of-the-art LLMs (GPT-4o, Gemini 2.0, and DeepSeek-V3) on three key dimensions of graph layout: graph data understanding, layout generation, and layout evaluation. Our experiments cover five representative types of graphs, two graph scales, and two widely used graph representation formats. Our results provide insightful findings on the capabilities of LLMs on each dimension. First, LLMs exhibit strong performance in fundamental graph-understanding tasks when code generation is permitted, but their structural reasoning declines significantly in purely text-based scenarios. Second, LLMs can produce promising layouts, though they occasionally generate poor results. Third, visual input generally enhances their ability to evaluate layout quality, while text-only prompts may yield unreliable assessments of graph layout quality. These findings provide valuable insights for advancing future research on leveraging LLMs for graph layout.
Citations: 0
PorceVis: An interactive visual analytics system for exploring the history and culture of ancient Chinese porcelain
IF 3.8 | CAS Tier 3 | Computer Science
Visual Informatics | Pub Date: 2026-03-01 | Epub Date: 2025-10-08 | DOI: 10.1016/j.visinf.2025.100281
Ling Chen, Peng Jiang, Yi Shen, Xinjie Yang, Xiaojie Pan, Jinhui Chu, Jian Liu, Guodao Sun, Ronghua Liang
Porcelain, a significant component of traditional Chinese culture, carries a profound historical legacy and rich cultural connotations; its study involves a complex knowledge system spanning multiple dynasties and regions. Traditional research methods often rely on documentary analysis and artifact examination, which may not fully reveal the artistic and cultural characteristics of porcelain. In recent years, the rapid development of digital technologies has presented new opportunities for the research, preservation, and dissemination of cultural heritage. This paper therefore leverages image-processing techniques and large language models to conduct a multidimensional quantitative analysis of the artistic features of porcelain, employing scientific methods to investigate its artistic value. Additionally, we developed an interactive visualization system that enables users to comprehend the development of porcelain from a spatiotemporal perspective and to interactively explore its artistic features at both macro and micro levels. Case studies and user evaluations demonstrate the system's high usability and efficiency, providing a novel academic perspective and tools for in-depth research and the digital dissemination of Chinese cultural heritage.
Citations: 0
HuGe: Towards Human-controllable image Generation in autonomous driving
IF 3.8 | CAS Tier 3 | Computer Science
Visual Informatics | Pub Date: 2025-12-01 | Epub Date: 2025-08-09 | DOI: 10.1016/j.visinf.2025.100262
Yuanzhi Zeng, Shiwei Chen, Yutian Zhang, Dong Sun, Yong Wang, Haipeng Zeng
The rapid advancement of autonomous driving technology has reshaped the automotive industry, highlighting the need for diverse, high-quality image data. Existing image datasets for training and improving autonomous driving systems lack rare scenarios such as extreme weather, limiting those systems' effectiveness and reliability. One way to expand dataset coverage is to augment existing datasets with artificial images, but this still faces challenges such as limited controllability and unclear corner-case boundaries. To address these challenges, we design and develop an interactive visual analysis system, HuGe, for efficient, semi-automatic, controllable image generation. HuGe incorporates weather-transformation models and a novel semi-automatic, knowledge-based controllable object-insertion method that leverages the controllability of convex optimization and the variability of diffusion models. We formulate the design requirements, propose an effective framework, and design four coordinated views to support controllable image generation, multidimensional dataset analysis, and evaluation of the generated samples. Two case studies, a metric-based evaluation, and interviews with domain experts demonstrate the practicality and effectiveness of HuGe for controllable image generation in autonomous driving.
Citations: 0