Computers & Graphics (UK): Latest Articles

Example-based authoring of expressive space curves
IF 2.5, CAS Tier 4, Computer Science
Computers & Graphics (UK), Pub Date: 2025-05-24, DOI: 10.1016/j.cag.2025.104249
Jiří Minarčík, Jakub Fišer, Daniel Sýkora
Abstract: In this paper we present a novel example-based stylization method for 3D space curves. Inspired by image-based arbitrary style transfer (Gatys et al., 2016), we introduce a workflow that allows artists to transfer the stylistic characteristics of a short exemplar curve to a longer target curve in 3D, a problem that is, to the best of our knowledge, previously unexplored. Our approach involves extracting the underlying, unstyled form of the exemplar curve using a novel smoothing flow. This unstyled representation is then aligned with the target curve using a modified Fréchet distance. To achieve precise matching with reduced computational cost, we employ a semi-discrete optimization scheme, which outperforms existing methods for similar curve alignment problems. Furthermore, our formulation provides intuitive controls for adjusting stylization strength and transfer temperature, enabling greater creative flexibility. Its versatility also allows for the simultaneous stylization of additional attributes along the curve, which is particularly valuable in 3D applications where curves may represent medial axes of complex structures. We demonstrate the effectiveness of our method through a variety of expressive stylizations across different application contexts.
Computers & Graphics, Volume 130, Article 104249.
Citations: 0
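The alignment step in the abstract above builds on the Fréchet distance between curves. As a point of reference only, here is a minimal sketch of the standard discrete Fréchet distance via the classic dynamic program; the paper's modified distance and semi-discrete optimization scheme are not reproduced here.

```python
import math

def discrete_frechet(P, Q):
    """Discrete Frechet distance between two polylines P and Q,
    each a list of (x, y, z) points, via the classic dynamic program."""
    n, m = len(P), len(Q)
    # ca[i][j] = Frechet distance between prefixes P[:i+1] and Q[:j+1]
    ca = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            cost = math.dist(P[i], Q[j])
            if i == 0 and j == 0:
                ca[i][j] = cost
            elif i == 0:
                ca[i][j] = max(ca[i][j - 1], cost)
            elif j == 0:
                ca[i][j] = max(ca[i - 1][j], cost)
            else:
                ca[i][j] = max(min(ca[i - 1][j], ca[i][j - 1],
                                   ca[i - 1][j - 1]), cost)
    return ca[-1][-1]
```

For two parallel unit segments at distance 1, the result is 1.0, matching the intuition of the "dog-leash" definition.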
DimenFix: A novel meta-strategy to preserve user-defined data values on dimensionality reduction layouts
IF 2.5, CAS Tier 4, Computer Science
Computers & Graphics (UK), Pub Date: 2025-05-22, DOI: 10.1016/j.cag.2025.104231
Zixuan Han, Diede van der Hoorn, Thomas Höllt, Qiaodan Luo, Leonardo Christino, Evangelos Milios, Fernando V. Paulovich
Abstract: Dimensionality Reduction (DR) methods have become essential tools for the data analysis toolbox. Typically, DR methods combine features of a multivariate dataset to produce dimensions in a reduced space, preserving some data properties, usually pairwise distances or local neighborhoods. Preserving such properties makes DR methods attractive, but it is also one of their weaknesses. When calculating the embedded dimensions, usually through non-linear strategies, the original feature values are lost and not explicitly represented in the spatialization of the produced layouts, making it challenging to interpret the results and understand the features' contributions to the attained representations. Some strategies have been proposed to tackle this issue, such as coloring the DR layouts or generating explanations. Still, they are post-processes, so specific features (values) are not guaranteed to be preserved or represented. This paper proposes DimenFix, a novel meta-DR strategy that explicitly preserves the values of a particular user-defined feature or external data (not used to generate a layout) in one of the embedded axes. DimenFix can be used to preserve ordinal (e.g., numerical measures) and nominal (e.g., labels) values and works with virtually any gradient-descent DR method. It requires minimum changes to the underlying DR technique, running in linear time considering the number of data instances. In our results, involving Force Scheme and t-SNE adaptations, DimenFix was capable of representing features without heavily impacting distance or neighborhood preservation, allowing for creating hybrid layouts that join characteristics of scatter plots and DR methods.
Computers & Graphics, Volume 130, Article 104231.
Citations: 0
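The core idea of pinning one embedded axis to a user-defined value while gradient descent optimizes the rest can be illustrated with a toy sketch. The following assumes a plain stress (distance-preservation) objective rather than the paper's Force Scheme or t-SNE adaptations: the y-coordinate of every point is fixed to the chosen feature and never updated, so that axis is preserved exactly.

```python
import math
import random

def dimenfix_layout(dist, feature, iters=300, lr=0.05, seed=0):
    """Toy DimenFix-style constrained layout: gradient descent on a
    stress objective sum_ij (|p_i - p_j| - dist[i][j])^2, with the
    y-coordinate of each point pinned to a user-defined feature value."""
    rng = random.Random(seed)
    n = len(feature)
    x = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    y = list(feature)                      # fixed axis: never updated
    for _ in range(iters):
        for i in range(n):
            g = 0.0
            for j in range(n):
                if i == j:
                    continue
                dx, dy = x[i] - x[j], y[i] - y[j]
                dij = math.hypot(dx, dy) or 1e-9
                # partial derivative of the (i, j) stress term w.r.t. x_i
                g += 2.0 * (dij - dist[i][j]) * dx / dij
            x[i] -= lr * g
    return list(zip(x, y))
```

Whatever the optimizer does to x, the returned y values are exactly the input feature, which is the guarantee the meta-strategy provides.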
Foundation model assisted visual analytics: Opportunities and Challenges
IF 2.5, CAS Tier 4, Computer Science
Computers & Graphics (UK), Pub Date: 2025-05-21, DOI: 10.1016/j.cag.2025.104246
Maeve Hutchinson, Radu Jianu, Aidan Slingsby, Pranava Madhyastha
Abstract: We explore the integration of foundation models, such as large language models (LLMs) and multimodal LLMs (MLLMs), into visual analytics (VA) systems through intuitive natural language interactions. We survey current research directions in this emerging field, examining how foundation models have already been integrated into key visualisation-related processes in VA: visual mapping, the creation of data visualisations; visualisation observation, the process of generating a finding through visualisation; and visualisation manipulation, changing the viewport or highlighting areas of interest within a visualisation. We also highlight new possibilities that foundation models bring to VA, in particular, the opportunities to use MLLMs to interpret visualisations directly, to integrate multimodal interactions, and to provide guidance to users. We finally conclude with a vision of future VA systems as collaborative partners in analysis and address the prominent challenges in realising this vision through foundation models. Our discussions in this paper aim to guide future researchers working on foundation model assisted VA systems and help them navigate common obstacles when developing these systems.
Computers & Graphics, Volume 130, Article 104246.
Citations: 0
Visually-supported topic modeling for understanding behavioral patterns from spatio-temporal events
IF 2.5, CAS Tier 4, Computer Science
Computers & Graphics (UK), Pub Date: 2025-05-20, DOI: 10.1016/j.cag.2025.104245
Laleh Moussavi, Gennady Andrienko, Natalia Andrienko, Aidan Slingsby
Abstract: Spatio-temporal event sequences consist of activities or occurrences involving various interconnected elements in space and time. We show how topic modeling, typically used in text analysis, can be adapted to abstract and conceptualize such data. We propose an overall analytical workflow that combines computational and visual analytics methods to support some tasks, enabling the transformation of raw event data into meaningful insights. We apply our workflow to football matches as an example of important yet under-explored spatio-temporal event data. A key step in topic modeling is determining the appropriate number of topics; to address this, we introduce a visual method that organizes multiple modeling runs into a similarity-based layout, helping analysts identify patterns that balance interpretability and granularity.

We demonstrate how our workflow, which integrates visual analytics, supports five core analysis tasks: identifying common behavioral patterns, tracking their distribution across individuals or groups, observing progression at different temporal scales, comparing behavior under varied conditions, and detecting deviations from typical behavior.

Using real-world football data, we illustrate how our end-to-end process enables deeper insights into both tactical details and broader trends, from single-match analyses to season-wide perspectives. While our case study focuses on football, the proposed workflow is domain-agnostic and can be readily applied to other spatio-temporal event datasets, offering a flexible foundation for extracting and interpreting complex behavioral patterns.
Computers & Graphics, Volume 129, Article 104245.
Citations: 0
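The key abstraction step, turning raw spatio-temporal events into the "documents" a topic model consumes, could look roughly like the sketch below. The zone grid, time-window length, and token format are illustrative assumptions, not the paper's parameters; the resulting bag-of-words documents could then be fed to any standard topic model implementation (e.g. LDA).

```python
from collections import Counter

def events_to_documents(events, n_zones_x=3, n_zones_y=3, window=300.0):
    """Turn raw events (t, x, y, action), with x and y normalized to
    [0, 1], into one bag-of-words 'document' per time window, using
    tokens like 'pass@zone_2_1' (action plus discretized pitch zone)."""
    docs = {}
    for t, x, y, action in events:
        zx = min(int(x * n_zones_x), n_zones_x - 1)
        zy = min(int(y * n_zones_y), n_zones_y - 1)
        token = f"{action}@zone_{zx}_{zy}"
        docs.setdefault(int(t // window), Counter())[token] += 1
    # return documents in temporal order
    return [docs[k] for k in sorted(docs)]
```

With five-minute windows, a match becomes a short corpus whose "topics" correspond to recurring spatial-behavioral patterns.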
Data meets creativity: Authentic learning through data art design and exhibition
IF 2.5, CAS Tier 4, Computer Science
Computers & Graphics (UK), Pub Date: 2025-05-20, DOI: 10.1016/j.cag.2025.104248
Jonathan C. Roberts
Abstract: We introduce an authentic learning task, where students create data art visualisations from selected datasets to be showcased in a public exhibition. Our vision is to explore how creativity and visualisation intersect and how combining these elements results in an authentic learning task for computing students. Run over two completed academic years, with a third cohort nearing completion, this initiative offered an active learning environment that fostered student engagement, creativity, and the application of practical skills. We detail the structured approach, outlining eight steps that students perform: topic selection and research, data analysis, researching artistic inspiration, conceptualising designs, proposing solutions, creating visualisations, reflection, and curating an exhibition. Our framework equips educators with detailed lectures and activities, enabling them to implement similar tasks in their own teaching. Finally, we present illustrative examples of student outcomes and share reflective insights, showcasing the impact of integrating authentic learning with public-facing creative projects. This approach enhances technical skills while connecting academic learning to real-world professional practice.
Computers & Graphics, Volume 129, Article 104248.
Citations: 0
NeFT-Net: N-window extended frequency transformer for rhythmic motion prediction
IF 2.5, CAS Tier 4, Computer Science
Computers & Graphics (UK), Pub Date: 2025-05-17, DOI: 10.1016/j.cag.2025.104244
Adeyemi Ademola, David Sinclair, Babis Koniaris, Samantha Hannah, Kenny Mitchell
Abstract: Advancements in prediction of human motion sequences are critical for enabling online virtual reality (VR) users to dance and move in ways that accurately mirror real-world actions, delivering a more immersive and connected experience. However, latency in networked motion tracking remains a significant challenge, disrupting engagement and necessitating predictive solutions to achieve real-time synchronization of remote motions. To address this issue, we propose a novel approach leveraging a synthetically generated dataset based on supervised foot anchor placement timings for rhythmic motions, ensuring periodicity and reducing prediction errors. Our model integrates a discrete cosine transform (DCT) to encode motion, refine high-frequency components, and smooth motion sequences, mitigating jittery artifacts. Additionally, we introduce a feed-forward attention mechanism designed to learn from N-window pairs of 3D key-point pose histories for precise future motion prediction. Quantitative and qualitative evaluations on the Human3.6M dataset highlight significant improvements in mean per joint position error (MPJPE) metrics, demonstrating the superiority of our technique over state-of-the-art approaches. We further introduce novel result pose visualizations through the use of generative AI methods.
Computers & Graphics, Volume 129, Article 104244.
Citations: 0
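The DCT-based smoothing idea the abstract describes can be illustrated in a few lines: transform a per-joint trajectory with the DCT, zero the high-frequency coefficients, and invert. This is a plain low-pass sketch of the underlying transform pair, not the paper's learned frequency refinement.

```python
import math

def dct2(x):
    """DCT-II of a real sequence: X_k = sum_i x_i cos(pi k (2i+1) / 2n)."""
    n = len(x)
    return [sum(x[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i in range(n)) for k in range(n)]

def idct2(X):
    """Inverse of dct2 (DCT-III with the matching 2/n scaling)."""
    n = len(X)
    return [(X[0] / 2 + sum(X[k] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                            for k in range(1, n))) * 2 / n for i in range(n)]

def smooth(signal, keep):
    """Low-pass a joint trajectory: keep the first `keep` DCT
    coefficients, zero the high-frequency rest, transform back."""
    X = dct2(signal)
    X = [c if k < keep else 0.0 for k, c in enumerate(X)]
    return idct2(X)
```

Keeping all coefficients reproduces the input exactly; keeping only the first few removes jitter while preserving the low-frequency rhythm.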
Scalable Class-Centric Visual Interactive Labeling
IF 2.5, CAS Tier 4, Computer Science
Computers & Graphics (UK), Pub Date: 2025-05-17, DOI: 10.1016/j.cag.2025.104240
Matthias Matt, Jana Sedlakova, Jürgen Bernard, Matthias Zeppelzauer, Manuela Waldner
Abstract: Large unlabeled datasets demand efficient and scalable data labeling solutions, in particular when the number of instances and classes is large. This leads to significant visual scalability challenges and imposes a high cognitive load on the users. Traditional instance-centric labeling methods, where (single) instances are labeled in each iteration, struggle to scale effectively in these scenarios. To address these challenges, we introduce cVIL, a Class-Centric Visual Interactive Labeling methodology designed for interactive visual data labeling. By shifting the paradigm from assigning-classes-to-instances to assigning-instances-to-classes, cVIL reduces labeling effort and enhances efficiency for annotators working with large, complex, and class-rich datasets. We propose a novel visual analytics labeling interface built on top of the conceptual cVIL workflow, enabling improved scalability over traditional visual labeling. In a user study, we demonstrate that cVIL can improve labeling efficiency and user satisfaction over instance-centric interfaces. The effectiveness of cVIL is further demonstrated through a usage scenario, showcasing its potential to alleviate cognitive load and support experts in managing extensive labeling tasks efficiently.
Computers & Graphics, Volume 129, Article 104240.
Citations: 0
Semantic-aware hierarchical clustering for inverse rendering in indoor scenes
IF 2.5, CAS Tier 4, Computer Science
Computers & Graphics (UK), Pub Date: 2025-05-17, DOI: 10.1016/j.cag.2025.104236
Xin Lv, Lijun Li, Zetao Chen
Abstract: Decomposing a scene into its material properties and illumination, given the geometry and multi-view HDR observations of an indoor environment, is a fundamental yet challenging problem in computer vision and graphics. Existing approaches, combined with neural rendering techniques, have shown promising results in object-specific scenarios but often struggle with inconsistencies in material estimation within complex indoor scenes. Besides, ambiguities frequently arise between lighting and material properties. To address these limitations, we propose an adaptive inverse rendering pipeline based on Factorized Inverse Path Tracing (FIPT) that incorporates a semantic-aware hierarchical clustering approach. This enhancement enables the disentanglement of lighting and material properties, facilitating more accurate and consistent estimations of albedo, roughness, and metallic characteristics. Additionally, we introduce a voxel grid filter to further reduce computational time. Experimental results on both synthetic and real-world room-scale scenes demonstrate that our method produces more accurate material estimations compared to state-of-the-art methods. Furthermore, we demonstrate the potential of our method through several applications, including novel view synthesis, object insertion, and relighting.
Computers & Graphics, Volume 129, Article 104236.
Citations: 0
MultiInv: Inverting multidimensional scaling projections and computing decision maps by multilateration
IF 2.5, CAS Tier 4, Computer Science
Computers & Graphics (UK), Pub Date: 2025-05-17, DOI: 10.1016/j.cag.2025.104234
Daniela Blumberg, Yu Wang, Alexandru Telea, Daniel A. Keim, Frederik L. Dennig
Abstract: Inverse projections enable a variety of tasks such as the exploration of classifier decision boundaries, creating counterfactual explanations, and generating synthetic data. Yet, many existing inverse projection methods are difficult to implement, challenging to predict, and sensitive to parameter settings. To address these, we propose to invert distance-preserving projections like Multidimensional Scaling (MDS) projections by using multilateration, a method used for geopositioning. Our approach finds data values for locations where no data point is projected, under the key assumption that a given projection technique preserves pairwise distances among data samples in the low-dimensional space. Being based on a geometrical relationship, our technique is more interpretable than comparable machine learning-based approaches and can invert 2-dimensional projections up to (D−1)-dimensional spaces if given at least D data points. We compare several strategies for multilateration point selection, show the application of our technique on three additional projection techniques apart from MDS, and use established quality metrics to evaluate its accuracy in comparison to existing inverse projections. We also show its application to computing decision maps for exploring the behavior of trained classification models. When the projection to invert captures data distances well, our inverse performs similarly to existing approaches while being interpretable and considerably simpler to compute.
Computers & Graphics, Volume 129, Article 104234.
Citations: 0
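Multilateration itself is straightforward to sketch. The toy below recovers a 2D point from its distances to known anchor points by linearizing the circle equations against the first anchor and solving the resulting 2x2 normal equations; in the paper's setting the same geometric relationship is applied in the high-dimensional data space, with projected data points acting as anchors, so this is a simplification of the actual setup.

```python
def multilaterate(anchors, dists):
    """Recover the 2D point whose distances to the known `anchors`
    are `dists`. Subtracting the first circle equation from each of
    the others yields a linear system A v = b, solved here in the
    least-squares sense via the 2x2 normal equations."""
    (x1, y1), r1 = anchors[0], dists[0]
    A, b = [], []
    for (xi, yi), ri in zip(anchors[1:], dists[1:]):
        # 2(xi - x1) x + 2(yi - y1) y = r1^2 - ri^2 + |pi|^2 - |p1|^2
        A.append((2 * (xi - x1), 2 * (yi - y1)))
        b.append(r1**2 - ri**2 + xi**2 + yi**2 - x1**2 - y1**2)
    s_aa = sum(a * a for a, _ in A)
    s_ab = sum(a * c for a, c in A)
    s_bb = sum(c * c for _, c in A)
    t_a = sum(a * v for (a, _), v in zip(A, b))
    t_b = sum(c * v for (_, c), v in zip(A, b))
    det = s_aa * s_bb - s_ab * s_ab   # nonzero for non-collinear anchors
    return ((s_bb * t_a - s_ab * t_b) / det,
            (s_aa * t_b - s_ab * t_a) / det)
```

With three non-collinear anchors and exact distances, the solution is unique; with more anchors, the least-squares form absorbs distance noise.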
Foreword to the special section on eXtended Reality for Industrial and Occupational Supports (XRIOS)
IF 2.5, CAS Tier 4, Computer Science
Computers & Graphics (UK), Pub Date: 2025-05-15, DOI: 10.1016/j.cag.2025.104242
Isaac Cho, Heejin Jeong, Kangsoo Kim, Hyungil Kim, Myounghoon Jeon
Computers & Graphics, Volume 130, Article 104242.
Citations: 0