Visual Informatics: Latest Publications

Exploring and visualizing temporal relations in multivariate time series
IF 3.0 · Tier 3 · Computer Science
Visual Informatics · Pub Date: 2023-12-01 · DOI: 10.1016/j.visinf.2023.09.001
Gota Shirato, Natalia Andrienko, Gennady Andrienko
Volume 7, Issue 4, Pages 57-72

Abstract: This paper introduces an approach to analyzing multivariate time series (MVTS) data through progressive temporal abstraction of the data into patterns characterizing the behavior of the studied dynamic phenomenon. The paper focuses on two core challenges: identifying basic behavior patterns of individual attributes and examining the temporal relations between these patterns across the range of attributes to derive higher-level abstractions of multi-attribute behavior. The proposed approach combines existing methods for univariate pattern extraction, computation of temporal relations according to Allen's time interval algebra, visual displays of the temporal relations, and interactive query operations into a cohesive visual analytics workflow. The paper describes the application of the approach to real-world examples of population mobility data during the COVID-19 pandemic and characteristics of episodes in a football match, illustrating its versatility and effectiveness in understanding composite patterns of interrelated attribute behaviors in MVTS data.

Citations: 0

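The workflow above relates pattern intervals across attributes via Allen's interval algebra. A minimal sketch of classifying the Allen relation between two closed intervals is given below; the function name and the assumption of numeric, inclusive endpoints are illustrative, not taken from the paper.

```python
def allen_relation(a_start, a_end, b_start, b_end):
    """Classify the Allen interval relation of interval A relative to B."""
    if a_end < b_start:
        return "before"
    if a_start > b_end:
        return "after"
    if a_end == b_start:
        return "meets"
    if a_start == b_end:
        return "met-by"
    if a_start == b_start and a_end == b_end:
        return "equals"
    if a_start == b_start:
        return "starts" if a_end < b_end else "started-by"
    if a_end == b_end:
        return "finishes" if a_start > b_start else "finished-by"
    if b_start < a_start and a_end < b_end:
        return "during"
    if a_start < b_start and b_end < a_end:
        return "contains"
    # Remaining case: proper overlap with distinct endpoints
    return "overlaps" if a_start < b_start else "overlapped-by"

# Example: a pattern active over [2, 5] overlaps one active over [4, 9]
print(allen_relation(2, 5, 4, 9))  # -> "overlaps"
```
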
Desirable molecule discovery via generative latent space exploration
IF 3.0 · Tier 3 · Computer Science
Visual Informatics · Pub Date: 2023-12-01 · DOI: 10.1016/j.visinf.2023.10.002
Wanjie Zheng, Jie Li, Yang Zhang
Volume 7, Issue 4, Pages 13-21

Abstract: Drug molecule design is a classic research topic. Drug experts traditionally design molecules relying on their experience. Manual drug design is time-consuming and may produce low-efficacy and off-target molecules. With the popularity of deep learning, drug experts are beginning to use generative models to design drug molecules. A well-trained generative model can learn the distribution of training samples and generate an unlimited number of drug-like molecules similar to the training samples. The automated process improves design efficiency. However, most existing methods focus on proposing and optimizing generative models; discovering ideal molecules from massive numbers of candidates remains an unresolved challenge. We propose a visualization system to discover ideal drug molecules generated by generative models. In this paper, we investigated the requirements and issues of drug design experts when using generative models, namely generating molecular structures with specific constraints and finding other molecular structures similar to potential drug molecular structures. We formalized the first problem as an optimization problem and proposed using a genetic algorithm to solve it. For the second problem, we proposed a neighborhood sampling algorithm based on the continuity of the latent space. We integrated the proposed algorithms into a visualization tool; a case study on discovering potential drug molecules for KOR agonists and accompanying experiments demonstrated the utility of our approach.

Citations: 0

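The second sub-problem above, finding molecules similar to a promising candidate, rests on the continuity of the generative model's latent space. A minimal sketch of Gaussian neighborhood sampling around a latent vector follows; `decode`, `similarity`, the sampling radius, and the ranking scheme are hypothetical placeholders, not the paper's actual model or parameters.

```python
import numpy as np

def sample_neighbors(z, decode, similarity, n_samples=100, radius=0.1, seed=0):
    """Perturb latent vector z, decode each neighbor, and rank by similarity.

    z          : 1-D NumPy latent vector of a reference molecule
    decode     : callable mapping a latent vector to a molecule representation
    similarity : callable scoring two molecule representations (higher = closer)
    """
    rng = np.random.default_rng(seed)
    reference = decode(z)
    candidates = []
    for _ in range(n_samples):
        z_new = z + rng.normal(scale=radius, size=z.shape)  # small local step
        molecule = decode(z_new)
        candidates.append((similarity(reference, molecule), molecule))
    # Most similar candidates first
    return sorted(candidates, key=lambda pair: pair[0], reverse=True)
```
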
IVMS: An immersive virtual meteorological sandbox based on WYSIWYG
IF 3.0 · Tier 3 · Computer Science
Visual Informatics · Pub Date: 2023-12-01 · DOI: 10.1016/j.visinf.2023.08.001
Hao Hu, Song Wang, Yonghui Chen
Volume 7, Issue 4, Pages 100-109

Abstract: A novel approach to visually representing meteorological data has emerged with the maturation of Immersive Analytics (IA). We propose an immersive virtual meteorological sandbox as a solution to the limitations of 2D analysis in expressing and perceiving data. This visual method enables users to interact directly with data through non-contact aerial gestures (NCAG). Following the "what you see is what you get" (WYSIWYG) concept in scientific visualization, the approach aims to immerse users in the analysis process of meteorological data. We hope this approach can also inspire immersive visualization techniques for other types of geographic data. Finally, we conducted a user questionnaire to evaluate our system. The evaluation results demonstrate that our system effectively reduces cognitive burden, alleviates mental workload, and enhances users' retention of analysis findings.

Citations: 0

A survey of immersive visualization: Focus on perception and interaction
IF 3.0 · Tier 3 · Computer Science
Visual Informatics · Pub Date: 2023-12-01 · DOI: 10.1016/j.visinf.2023.10.003
Yue Zhang, Zhenyuan Wang, Jinhui Zhang, Guihua Shan, Dong Tian
Volume 7, Issue 4, Pages 22-35

Abstract: Immersive visualization utilizes virtual reality, mixed reality devices, and other interactive devices to create a novel visual environment that integrates multimodal perception and interaction. This technology has been maturing in recent years and has found broad application in various fields. Based on the latest research advances in visualization, this paper summarizes state-of-the-art work in immersive visualization from the perspectives of multimodal perception and interaction in immersive environments, and additionally discusses the current hardware foundations of immersive setups. By examining the design patterns and research approaches of previous immersive methods, the paper reveals the design factors for multimodal perception and interaction in current immersive environments. Furthermore, the challenges and development trends of immersive multimodal perception and interaction techniques are discussed, and potential areas of growth in immersive visualization design directions are explored.

Citations: 0

TopicBubbler: An interactive visual analytics system for cross-level fine-grained exploration of social media data
IF 3.0 · Tier 3 · Computer Science
Visual Informatics · Pub Date: 2023-12-01 · DOI: 10.1016/j.visinf.2023.08.002
Jielin Feng, Kehao Wu, Siming Chen
Volume 7, Issue 4, Pages 41-56

Abstract: Exploring fine-grained yet meaningful information in massive amounts of social media data is critical but challenging. To address this challenge, we propose TopicBubbler, a visual analytics system that supports cross-level fine-grained exploration of social media data. To achieve this goal, we propose a new workflow. Following the workflow, we construct a fine-grained exploration view designed around bubble-based word clouds. Each bubble contains two rings that display information at different levels and recommends six keywords computed by different algorithms. The view supports users in collecting information at different levels and performing fine-grained selection and exploration across levels based on keyword recommendations. To enable users to explore temporal information and hierarchical structure, we also construct a Temporal View and a Hierarchical View, which allow users to examine cross-level dynamic trends and an overview of the hierarchical structure. In addition, we use a storyline metaphor that enables users to consolidate fragmented information extracted across levels and topics and ultimately present it as a complete story. Case studies on real-world data confirm the capability of TopicBubbler from different perspectives, including event mining across levels and topics, and fine-grained mining of specific topics to capture events hidden beneath the surface.

Citations: 0

Perspectives on point cloud-based 3D scene modeling and XR presentation within the cloud-edge-client architecture
IF 3.0 · Tier 3 · Computer Science
Visual Informatics · Pub Date: 2023-09-01 · DOI: 10.1016/j.visinf.2023.06.007
Hongjia Wu, Hongxin Zhang, Jiang Cheng, Jianwei Guo, Wei Chen
Volume 7, Issue 3, Pages 59-64

Abstract: With the support of edge computing, the synergy and collaboration among the central cloud, edge cloud, and terminal devices form an integrated computing ecosystem known as the cloud-edge-client architecture. This integration unlocks the value of data and computational power, presenting significant opportunities for large-scale 3D scene modeling and XR presentation. In this paper, we explore the perspectives and highlight new challenges in point cloud-based 3D scene modeling and XR presentation within the cloud-edge-client integrated architecture. We also propose a novel cloud-edge-client integrated technology framework and a demonstration of a municipal governance application to address these challenges.

Citations: 2

Multi-scale visual analysis of cycle characteristics in spatially-embedded graphs
IF 3.0 · Tier 3 · Computer Science
Visual Informatics · Pub Date: 2023-09-01 · DOI: 10.1016/j.visinf.2023.06.005
Farhan Rasheed, Talha Bin Masood, Tejas G. Murthy, Vijay Natarajan, Ingrid Hotz
Volume 7, Issue 3, Pages 49-58

Abstract: We present a visual analysis environment based on a multi-scale partitioning of a 2D domain into regions bounded by cycles in weighted planar embedded graphs. The work has been inspired by an application in granular materials research, where the question of scale plays a fundamental role in the analysis of material properties. We propose an efficient algorithm to extract the hierarchical cycle structure using persistent homology. The core of the algorithm is a filtration on a dual graph exploiting Alexander's duality. The resulting partitioning is the basis for the derivation of statistical properties that can be explored in a visual environment. We demonstrate the proposed pipeline on a few synthetic datasets and one real-world dataset.

Citations: 0

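The abstract describes a filtration over a dual graph whose persistence captures the hierarchical cycle structure. The sketch below is a generic union-find pass that merges dual-graph regions in order of edge weight and records each merge event; it illustrates the general idea of such a filtration only and is not the authors' implementation.

```python
def merge_regions(num_regions, dual_edges):
    """dual_edges: iterable of (weight, region_a, region_b) in the dual graph."""
    parent = list(range(num_regions))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    merge_events = []
    for weight, a, b in sorted(dual_edges):
        root_a, root_b = find(a), find(b)
        if root_a != root_b:
            parent[root_b] = root_a
            merge_events.append((weight, root_a, root_b))  # two regions merge here
    return merge_events

# Example: three regions connected by weighted dual edges
print(merge_regions(3, [(0.8, 0, 1), (0.3, 1, 2), (0.5, 0, 2)]))
```
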
Visualizing ordered bivariate data on node-link diagrams
IF 3.0 · Tier 3 · Computer Science
Visual Informatics · Pub Date: 2023-09-01 · DOI: 10.1016/j.visinf.2023.06.003
Osman Akbulut, Lucy McLaughlin, Tong Xin, Matthew Forshaw, Nicolas S. Holliman
Volume 7, Issue 3, Pages 22-36

Abstract: Node-link visual representation is a widely used tool that allows decision-makers to see details about a network through an appropriate choice of visual metaphor. However, existing visualization methods are not always effective and efficient in representing bivariate graph-based data. This study proposes a novel node-link visual model, the visual entropy (Vizent) graph, to effectively represent both primary and secondary values, such as uncertainty, on the edges simultaneously. We performed two user studies to demonstrate the efficiency and effectiveness of our approach in the context of static node-link diagrams. In the first experiment, we evaluated the performance of the Vizent design to determine whether it performed as well as or better than existing alternatives in terms of response time and accuracy. Three static visual encodings that use two visual cues were selected from the literature for comparison: Width-Lightness, Saturation-Transparency, and Numerical values. We compared the Vizent design to the selected visual encodings on graphs ranging in complexity from 5 to 25 edges for three different tasks. Participants achieved higher response accuracy using Vizent and Numerical values, whereas Width-Lightness and Saturation-Transparency did not perform equally well across all tasks. Our results suggest that increasing graph size has no impact on Vizent in terms of response time and accuracy. The performance of the Vizent graph was then compared to the Numerical values visualization. The Wilcoxon signed-rank test revealed that mean response time in seconds was significantly lower when the Vizent graphs were presented, while no significant difference in accuracy was found. The results from the experiments are encouraging, and we believe they justify using the Vizent graph as a good alternative to traditional methods for representing bivariate data in node-link diagrams.

Citations: 0

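The paired comparison reported above can be computed directly from per-task response times with a Wilcoxon signed-rank test. A minimal SciPy sketch follows; the timing values are made up purely to show the call and are not the study's data.

```python
from scipy.stats import wilcoxon

# Per-task mean response times (seconds) under the two encodings; the
# values below are illustrative, not measurements from the paper.
vizent_times    = [4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.1, 5.0]
numerical_times = [5.0, 5.8, 4.7, 6.4, 5.2, 6.1, 4.6, 6.0]

statistic, p_value = wilcoxon(vizent_times, numerical_times)
print(f"W = {statistic:.1f}, p = {p_value:.4f}")
```
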
PubExplorer: An interactive analytical system for visualizing publication data
IF 3.0 · Tier 3 · Computer Science
Visual Informatics · Pub Date: 2023-09-01 · DOI: 10.1016/j.visinf.2023.07.001
Minzhu Yu, Yang Wang, Xiaomin Yu, Guihua Shan, Zhong Jin
Volume 7, Issue 3, Pages 65-74

Abstract: With the intersection and convergence of multiple disciplines and technologies, more and more researchers are actively exploring interdisciplinary cooperation outside their main research fields. When facing a new research field, researchers often hope to quickly learn what is being studied in the field, which research points are receiving high attention, and which researchers are studying these research points, and then consider the possibility of collaborating with core researchers on them. In addition, students preparing for further academic study usually investigate prospective mentors and the mentors' research platforms, including academic connections, employment opportunities, and so on. To satisfy these requirements, we (1) design a research point state map based on a science map to help researchers and students understand the development state of a new research field; (2) design a bar-link author-affiliation information graph to help researchers and students clarify scholars' academic networks and find suitable collaborators or mentors; and (3) design a citation pattern histogram to quickly discover research achievements of high value, such as Sleeping Beauty papers, recently hot papers, and classic papers. Finally, an interactive analytical system named PubExplorer was implemented with IEEE VIS publication data, and its effectiveness was verified through case studies.

Citations: 0

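One citation pattern mentioned above, the Sleeping Beauty paper, can be flagged heuristically from a per-year citation histogram: few citations for several years after publication, followed by a late surge. The sketch below uses illustrative thresholds and a made-up function name; these are assumptions, not the criteria implemented in PubExplorer.

```python
def looks_like_sleeping_beauty(citations_per_year, sleep_years=5,
                               sleep_max=2, awake_min=10):
    """citations_per_year: list of counts indexed by years since publication."""
    if len(citations_per_year) <= sleep_years:
        return False
    early = citations_per_year[:sleep_years]   # the "sleeping" period
    late = citations_per_year[sleep_years:]    # the potential "awakening"
    return max(early, default=0) <= sleep_max and max(late, default=0) >= awake_min

# Quiet for five years, then a burst of attention
print(looks_like_sleeping_beauty([0, 1, 0, 2, 1, 3, 12, 25, 18]))  # -> True
```
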
MEinVR: Multimodal interaction techniques in immersive exploration
IF 3.0 · Tier 3 · Computer Science
Visual Informatics · Pub Date: 2023-09-01 · DOI: 10.1016/j.visinf.2023.06.001
Ziyue Yuan, Shuqi He, Yu Liu, Lingyun Yu
Volume 7, Issue 3, Pages 37-48

Abstract: Immersive environments have become increasingly popular for visualizing and exploring large-scale, complex scientific data because of their key features: immersion, engagement, and awareness. Virtual reality offers numerous new interaction possibilities, including tactile and tangible interactions, gestures, and voice commands. However, it is crucial to determine the most effective combination of these techniques for a more natural interaction experience. In this paper, we present MEinVR, a novel multimodal interaction technique for exploring 3D molecular data in virtual reality. MEinVR combines VR controller and voice input to provide a more intuitive way for users to manipulate data in immersive environments. By using the VR controller to select locations and regions of interest and voice commands to perform tasks, users can efficiently carry out complex data exploration tasks. Our findings provide suggestions for the design of multimodal interaction techniques for 3D data exploration in virtual reality.

Citations: 0

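The division of labor described above, controller input for selecting a region and voice input for issuing a task, can be sketched as a simple dispatch over (selection, command) pairs. The data types, command vocabulary, and function names below are illustrative assumptions, not MEinVR's actual design.

```python
from dataclasses import dataclass

@dataclass
class Selection:
    region_id: str   # e.g. the molecular region hit by the controller ray
    atoms: list      # atoms inside the selected region

def dispatch(selection: Selection, voice_command: str) -> str:
    """Route a (controller selection, voice command) pair to an action."""
    command = voice_command.strip().lower()
    if command.startswith("zoom"):
        return f"zoom to region {selection.region_id}"
    if command.startswith("hide"):
        return f"hide {len(selection.atoms)} atoms in {selection.region_id}"
    if command.startswith("measure"):
        return f"measure distances within {selection.region_id}"
    return "unrecognized command; keep current view"

print(dispatch(Selection("pocket-A", ["C1", "N2", "O3"]), "Zoom in"))
```
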