Computers & Graphics-UK: Latest Publications

Global Recurrent Mask R-CNN: Marine ship instance segmentation
IF 2.5, CAS Quartile 4, Computer Science
Computers & Graphics-UK Pub Date: 2025-02-01 DOI: 10.1016/j.cag.2024.104112
Ming Yuan, Hao Meng, Junbao Wu, Shouwen Cai
{"title":"Global Recurrent Mask R-CNN: Marine ship instance segmentation","authors":"Ming Yuan,&nbsp;Hao Meng,&nbsp;Junbao Wu,&nbsp;Shouwen Cai","doi":"10.1016/j.cag.2024.104112","DOIUrl":"10.1016/j.cag.2024.104112","url":null,"abstract":"<div><div>In intelligent ship navigation, instance segmentation technology is considered an accurate and efficient tool for vision perception in marine scenarios. However, the complex sea surface background and the diversity of ship types and sizes in marine environments pose significant challenges for instance segmentation, especially for small-scale targets. Therefore, this paper presents an end-to-end Global Recurrent Mask R-CNN (GR R-CNN) algorithm designed to enhance the multi-scale segmentation performance of ship instances in marine settings. Initially, this method proposes the Recurrent Enhanced Feature Pyramid Network (RE-FPN) module, which uses a feature recurrence and bidirectional chaining fusion mechanism to deeply integrate both deep and shallow features of images, effectively extracting multi-scale features and semantic information. Subsequently, we propose the Fine-Grained Global Fusion Mask Head (FGFMH) module, utilizing a fine-grained multi-layer receptive field extraction mechanism to enhance the extraction of global and multi-scale features. These two modules collaborate to further improve the ship instance segmentation capability. Experiments conducted on the MS COCO test-dev, PASCAL VOC, and custom OVSD datasets demonstrate accuracy improvements of 1.8%, 3.29%, and 1.3%, respectively, compared to Mask R-CNN. Our method surpasses various advanced techniques and provides valuable insights for the research on multi-scale instance segmentation of ships in complex environments.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"126 ","pages":"Article 104112"},"PeriodicalIF":2.5,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143135600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
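The RE-FPN above chains top-down and bottom-up fusion of deep and shallow pyramid features. As a rough illustration of that general idea (not the authors' implementation; the level count and the pooling/upsampling choices below are assumptions), a bidirectional fusion pass over a feature pyramid might look like this in PyTorch:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BidirectionalFusion(nn.Module):
    """Illustrative bidirectional fusion over a 3-level feature pyramid.

    This is NOT the paper's RE-FPN; it only sketches the general idea of
    chaining a top-down pass (deep -> shallow) with a bottom-up pass.
    """

    def __init__(self, channels=256):
        super().__init__()
        self.smooth = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in range(3)]
        )

    def forward(self, feats):
        # feats: [p3, p4, p5], shallow (high-res) to deep (low-res), same channel count
        p3, p4, p5 = feats
        # Top-down: propagate deep semantics to shallow levels
        p4 = p4 + F.interpolate(p5, size=p4.shape[-2:], mode="nearest")
        p3 = p3 + F.interpolate(p4, size=p3.shape[-2:], mode="nearest")
        # Bottom-up: propagate shallow detail back to deep levels
        p4 = p4 + F.adaptive_max_pool2d(p3, p4.shape[-2:])
        p5 = p5 + F.adaptive_max_pool2d(p4, p5.shape[-2:])
        return [conv(p) for conv, p in zip(self.smooth, (p3, p4, p5))]
```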
Low-light image enhancement via illumination optimization and color correction
IF 2.5, CAS Quartile 4, Computer Science
Computers & Graphics-UK Pub Date: 2025-02-01 DOI: 10.1016/j.cag.2024.104138
Wenbo Zhang, Liang Xu, Jianjun Wu, Wei Huang, Xiaofan Shi, Yanli Li
{"title":"Low-light image enhancement via illumination optimization and color correction","authors":"Wenbo Zhang ,&nbsp;Liang Xu ,&nbsp;Jianjun Wu ,&nbsp;Wei Huang ,&nbsp;Xiaofan Shi ,&nbsp;Yanli Li","doi":"10.1016/j.cag.2024.104138","DOIUrl":"10.1016/j.cag.2024.104138","url":null,"abstract":"<div><div>The issue of low-light image enhancement is investigated in this paper. Specifically, a trainable low-light image enhancer based on illumination optimization and color correction, called LLOCNet, is proposed to enhance the visibility of such low-light image. First, an illumination correction network is designed, leveraging residual and encoding-decoding structure, to correct the illumination information of the <span><math><mi>V</mi></math></span>-channel for lighting up the low-light image. After that, the illumination difference map is derived by difference between before and after luminance correction. Furthermore, an illumination-guided color correction network based on illumination-guided multi-head attention is developed to fine-tune the <span><math><mrow><mi>H</mi><mi>S</mi></mrow></math></span> color channels. Finally, a feature fusion block with asymmetric parallel convolution operation is adopted to reconcile these enhanced features to obtain the desired high-quality image. Both qualitative and quantitative experimental results show that the proposed network favorably performs against other state-of-the-art low-light enhancement methods on both real-world and synthetic low-light image dataset.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"126 ","pages":"Article 104138"},"PeriodicalIF":2.5,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143135602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
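LLOCNet's pipeline operates in HSV space: correct the V (value) channel, take the difference between the luminance before and after correction, and use that map to guide correction of the H and S channels. The snippet below only illustrates this data flow; the learned illumination corrector is replaced by a simple gamma curve, so it is a hedged sketch rather than the paper's method:

```python
import cv2
import numpy as np

def illumination_difference(rgb, gamma=0.5):
    """Sketch of the HSV-based flow described in the abstract.

    The paper's illumination-correction network is approximated here by a
    plain gamma curve purely for illustration; only the data flow
    (V-channel correction -> illumination difference map) mirrors the text.
    """
    hsv = cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV).astype(np.float32)
    h, s, v = cv2.split(hsv)
    v_corrected = (np.power(v / 255.0, gamma) * 255.0).astype(np.float32)  # stand-in corrector
    diff_map = v_corrected - v                                             # illumination difference map
    hsv_out = cv2.merge([h, s, v_corrected]).astype(np.uint8)
    enhanced = cv2.cvtColor(hsv_out, cv2.COLOR_HSV2RGB)
    return enhanced, diff_map
```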
Exploring user reception of speech-controlled virtual reality environment for voice and public speaking training
IF 2.5, CAS Quartile 4, Computer Science
Computers & Graphics-UK Pub Date: 2025-02-01 DOI: 10.1016/j.cag.2024.104160
Patryk Bartyzel, Magdalena Igras-Cybulska, Daniela Hekiert, Magdalena Majdak, Grzegorz Łukawski, Thomas Bohné, Sławomir Tadeja
{"title":"Exploring user reception of speech-controlled virtual reality environment for voice and public speaking training","authors":"Patryk Bartyzel ,&nbsp;Magdalena Igras-Cybulska ,&nbsp;Daniela Hekiert ,&nbsp;Magdalena Majdak ,&nbsp;Grzegorz Łukawski ,&nbsp;Thomas Bohné ,&nbsp;Sławomir Tadeja","doi":"10.1016/j.cag.2024.104160","DOIUrl":"10.1016/j.cag.2024.104160","url":null,"abstract":"<div><div>In this paper, we explore the development and assessment of a virtual reality (VR) system designed to enhance public speaking and vocal skills among professional and non-professional speech users alike. The system’s foundation lies in a speech recordings corpus of 529 utterances given during presentations by a total of 15 students. From these data, we extracted voice parameters such as pitch, timbre, and speech rate using speech processing methods. We also asked six expert annotators to evaluate the stress levels present within each presentation. This multi-faceted analysis facilitated the selection of specific parameters for real-time animation control of virtual characters responding dynamically to the change in the speaker’s voice. Through these mechanics, we could cultivate user proficiency in voice modulation, thereby improving overall speaking abilities and confidence. Furthermore, the system fosters self-awareness of vocal quality, promoting proper utilization of the voice in professional settings. Our VR system offers a dual-mode environment that combines traditional public speaking scenarios in front of a virtual audience with a relaxing forest setting, where users can control weather conditions with their voice. To assess the system’s efficacy, we conducted a pilot study with five participants. Additionally, we provide preliminary design guidelines informed by our user study to support the development of future VR-based speech trainers.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"126 ","pages":"Article 104160"},"PeriodicalIF":2.5,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143096775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
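The corpus analysis above extracts voice parameters such as pitch, timbre, and speech rate from recorded utterances. A generic way to pull such features from an audio file with librosa is sketched below; it is not the authors' feature pipeline, and the onset-based speech-rate proxy is purely an assumption:

```python
import librosa
import numpy as np

def voice_parameters(path):
    """Generic voice-feature extraction sketch (not the paper's pipeline)."""
    y, sr = librosa.load(path, sr=16000)

    # Pitch contour via the YIN estimator (fundamental frequency in Hz)
    f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)
    mean_pitch = float(np.nanmean(f0))

    # MFCCs as a rough timbre descriptor
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

    # Crude speech-rate proxy: onset events per second (an assumption, not from the paper)
    onsets = librosa.onset.onset_detect(y=y, sr=sr)
    rate = len(onsets) / (len(y) / sr)

    return {"mean_pitch_hz": mean_pitch, "mfcc_mean": mfcc.mean(axis=1), "events_per_s": rate}
```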
Spatial Augmented Reality for Heavy Machinery Using Laser Projections
IF 2.5, CAS Quartile 4, Computer Science
Computers & Graphics-UK Pub Date: 2025-02-01 DOI: 10.1016/j.cag.2024.104161
Maximilian Tschulik, Thomas Kernbauer, Philipp Fleck, Clemens Arth
{"title":"Spatial Augmented Reality for Heavy Machinery Using Laser Projections","authors":"Maximilian Tschulik,&nbsp;Thomas Kernbauer,&nbsp;Philipp Fleck,&nbsp;Clemens Arth","doi":"10.1016/j.cag.2024.104161","DOIUrl":"10.1016/j.cag.2024.104161","url":null,"abstract":"<div><div>Operating heavy machinery is challenging and requires the full attention of the operator to perform several complex tasks simultaneously. Although commonly used augmented reality (AR) devices, such as head-mounted or head-up displays, can provide occupational support to operators, they can also cause problems. Particularly in off-highway scenarios, i.e., when driving machines in bumpy environments, the usefulness of current AR devices and the willingness of operators to wear them are limited. Therefore, we explore how laser-projection-based AR can help the operators facilitate their tasks under real-world outdoor conditions. For this, we present a compact hardware unit and introduce a flexible and declarative software system. Furthermore, we examine the calibration process to leverage a camera projector setup and outline a process for creating images suitable for display by a laser projector from a set of line segments. We showcase its ability to provide efficient instructions to operators and bystanders and propose concrete applications for our setup. Finally, we perform an accuracy evaluation and test our system hands-on in snow grooming.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"126 ","pages":"Article 104161"},"PeriodicalIF":2.5,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143096772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
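When the projection surface is locally planar, the camera-projector calibration mentioned above can be approximated by a homography estimated from matched point pairs, which then maps line-segment endpoints from camera coordinates to projector coordinates. The paper's actual calibration procedure is not described in this listing; the OpenCV snippet below is only a minimal sketch of that planar approximation with made-up correspondences:

```python
import cv2
import numpy as np

# Four or more corresponding points: where calibration markers appear in the
# camera image vs. the projector pixel commanded to hit them (values are made up).
cam_pts  = np.array([[100, 120], [620, 110], [640, 460], [ 90, 470]], dtype=np.float32)
proj_pts = np.array([[  0,   0], [800,   0], [800, 600], [  0, 600]], dtype=np.float32)

# Homography from camera coordinates to projector coordinates (planar assumption)
H, _ = cv2.findHomography(cam_pts, proj_pts)

# A line segment detected in the camera image, expressed by its two endpoints
segment_cam = np.array([[[200, 200]], [[500, 350]]], dtype=np.float32)
segment_proj = cv2.perspectiveTransform(segment_cam, H)
print(segment_proj.reshape(-1, 2))  # endpoints to send to the laser projector
```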
CtrlNeRF: The generative neural radiation fields for the controllable synthesis of high-fidelity 3D-aware images
IF 2.5, CAS Quartile 4, Computer Science
Computers & Graphics-UK Pub Date: 2025-02-01 DOI: 10.1016/j.cag.2025.104163
Jian Liu, Zhen Yu
{"title":"CtrlNeRF: The generative neural radiation fields for the controllable synthesis of high-fidelity 3D-aware images","authors":"Jian Liu ,&nbsp;Zhen Yu","doi":"10.1016/j.cag.2025.104163","DOIUrl":"10.1016/j.cag.2025.104163","url":null,"abstract":"<div><div>The neural radiance field (NERF) advocates learning the continuous representation of 3D geometry through a multilayer perceptron (MLP). By integrating this into a generative model, the generative neural radiance field (GRAF) is capable of producing images from random noise <span><math><mi>z</mi></math></span> without 3D supervision. In practice, the shape and appearance are modeled by <span><math><msub><mrow><mi>z</mi></mrow><mrow><mi>s</mi></mrow></msub></math></span> and <span><math><msub><mrow><mi>z</mi></mrow><mrow><mi>a</mi></mrow></msub></math></span>, respectively, to manipulate them separately during inference. However, it is challenging to represent multiple scenes using a solitary MLP and precisely control the generation of 3D geometry in terms of shape and appearance. In this paper, we introduce a controllable generative model (<span><math><mrow><mi>i</mi><mo>.</mo><mi>e</mi><mo>.</mo></mrow></math></span> <strong>CtrlNeRF</strong>) that uses a single MLP network to represent multiple scenes with shared weights. Consequently, we manipulated the shape and appearance codes to realize the controllable generation of high-fidelity images with 3D consistency. Moreover, the model enables the synthesis of novel views that do not exist in the training sets via camera pose alteration and feature interpolation. Extensive experiments were conducted to demonstrate its superiority in 3D-aware image generation compared to its counterparts.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"126 ","pages":"Article 104163"},"PeriodicalIF":2.5,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143096913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
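The abstract describes a single shared MLP conditioned on a shape code z_s and an appearance code z_a, in the spirit of GRAF. A toy PyTorch version of such a conditioned radiance field is sketched below; the layer widths and the concatenation-based conditioning are illustrative assumptions, not the CtrlNeRF architecture:

```python
import torch
import torch.nn as nn

class ConditionalRadianceField(nn.Module):
    """Toy GRAF-style field: density from (x, z_s), color from (features, d, z_a).

    Purely illustrative; layer sizes and conditioning scheme are assumptions.
    """

    def __init__(self, code_dim=64, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(3 + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(hidden, 1)
        self.color_head = nn.Sequential(
            nn.Linear(hidden + 3 + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, x, d, z_shape, z_app):
        # x: (N, 3) sample positions, d: (N, 3) view directions,
        # z_shape, z_app: (code_dim,) latent codes shared by all samples
        h = self.trunk(torch.cat([x, z_shape.expand(x.shape[0], -1)], dim=-1))
        sigma = torch.relu(self.sigma_head(h))
        rgb = self.color_head(torch.cat([h, d, z_app.expand(x.shape[0], -1)], dim=-1))
        return rgb, sigma
```

Keeping the density branch blind to z_a is one simple way to let appearance be edited without disturbing geometry, which matches the separation of shape and appearance codes described above.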
Dyn-E: Local appearance editing of dynamic neural radiance fields
IF 2.5, CAS Quartile 4, Computer Science
Computers & Graphics-UK Pub Date: 2025-02-01 DOI: 10.1016/j.cag.2024.104140
Yinji ShenTu, Shangzhan Zhang, Mingyue Xu, Qing Shuai, Tianrun Chen, Sida Peng, Xiaowei Zhou
{"title":"Dyn-E: Local appearance editing of dynamic neural radiance fields","authors":"Yinji ShenTu ,&nbsp;Shangzhan Zhang ,&nbsp;Mingyue Xu ,&nbsp;Qing Shuai ,&nbsp;Tianrun Chen ,&nbsp;Sida Peng ,&nbsp;Xiaowei Zhou","doi":"10.1016/j.cag.2024.104140","DOIUrl":"10.1016/j.cag.2024.104140","url":null,"abstract":"<div><div>Recently, the editing of neural radiance fields (NeRFs) has gained considerable attention, but most prior works focus on static scenes while research on the appearance editing of dynamic scenes is relatively lacking. In this paper, we propose a novel framework to edit the local appearance of dynamic NeRFs by manipulating pixels in a single frame of training video. Specifically, to locally edit the appearance of dynamic NeRFs while preserving unedited regions, we introduce a local surface representation of the edited region, which can be inserted into and rendered along with the original NeRF and warped to arbitrary other frames through a learned invertible motion representation network. By employing our method, users without professional expertise can easily add desired content to the appearance of a dynamic scene. We extensively evaluate our approach on various scenes and show that our approach achieves spatially and temporally consistent editing results. Notably, our approach is versatile and applicable to different variants of dynamic NeRF representations.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"126 ","pages":"Article 104140"},"PeriodicalIF":2.5,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143096774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Enhanced multi-scale feature adaptive fusion sparse convolutional network for large-scale scenes semantic segmentation
IF 2.5, CAS Quartile 4, Computer Science
Computers & Graphics-UK Pub Date: 2025-02-01 DOI: 10.1016/j.cag.2024.104105
Lingfeng Shen, Yanlong Cao, Wenbin Zhu, Kai Ren, Yejun Shou, Haocheng Wang, Zhijie Xu
{"title":"Enhanced multi-scale feature adaptive fusion sparse convolutional network for large-scale scenes semantic segmentation","authors":"Lingfeng Shen ,&nbsp;Yanlong Cao ,&nbsp;Wenbin Zhu ,&nbsp;Kai Ren ,&nbsp;Yejun Shou ,&nbsp;Haocheng Wang ,&nbsp;Zhijie Xu","doi":"10.1016/j.cag.2024.104105","DOIUrl":"10.1016/j.cag.2024.104105","url":null,"abstract":"<div><div>Semantic segmentation has made notable strides in analyzing homogeneous large-scale 3D scenes, yet its application to varied scenes with diverse characteristics poses considerable challenges. Traditional methods have been hampered by the dependence on resource-intensive neighborhood search algorithms, leading to elevated computational demands. To overcome these limitations, we introduce the MFAF-SCNet, a novel and computationally streamlined approach for voxel-based sparse convolutional. Our key innovation is the multi-scale feature adaptive fusion (MFAF) module, which intelligently applies a spectrum of convolution kernel sizes at the network’s entry point, enabling the extraction of multi-scale features. It adaptively calibrates the feature weighting to achieve optimal scale representation for different objects. Further augmenting our methodology is the LKSNet, an original sparse convolutional backbone designed to tackle the inherent inconsistencies in point cloud distribution. This is achieved by integrating inverted bottleneck structures with large kernel convolutions, significantly bolstering the network’s feature extraction and spatial correlation proficiency. The efficacy of MFAF-SCNet was rigorously tested against three large-scale benchmark datasets—ScanNet and S3DIS for indoor scenes, and SemanticKITTI for outdoor scenes. The experimental results underscore our method’s competitive edge, achieving high-performance benchmarks while ensuring computational efficiency.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"126 ","pages":"Article 104105"},"PeriodicalIF":2.5,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143096897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
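The MFAF module applies several convolution kernel sizes at the network entry and adaptively weights the resulting scales. Since only the high-level idea is given here, the PyTorch sketch below uses dense 2D convolutions with learned softmax branch weights; the paper itself operates on sparse voxel convolutions, so this is a structural illustration only:

```python
import torch
import torch.nn as nn

class MultiScaleAdaptiveFusion(nn.Module):
    """Illustrative multi-kernel fusion with learned per-branch weights.

    Uses dense 2D convolutions for simplicity; the paper's MFAF works on
    sparse voxelized point clouds, so this is only a structural sketch.
    """

    def __init__(self, in_ch, out_ch, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes]
        )
        self.branch_logits = nn.Parameter(torch.zeros(len(kernel_sizes)))

    def forward(self, x):
        w = torch.softmax(self.branch_logits, dim=0)      # adaptive scale weights
        feats = [conv(x) for conv in self.branches]       # one feature map per kernel size
        return sum(wi * fi for wi, fi in zip(w, feats))
```

A softmax over the branch logits keeps the fused output a convex combination of scales, which is one simple way to realize "adaptive" weighting of kernel sizes.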
An introduction to and survey of biological network visualization
IF 2.5, CAS Quartile 4, Computer Science
Computers & Graphics-UK Pub Date: 2025-02-01 DOI: 10.1016/j.cag.2024.104115
Henry Ehlers, Nicolas Brich, Michael Krone, Martin Nöllenburg, Jiacheng Yu, Hiroaki Natsukawa, Xiaoru Yuan, Hsiang-Yun Wu
{"title":"An introduction to and survey of biological network visualization","authors":"Henry Ehlers ,&nbsp;Nicolas Brich ,&nbsp;Michael Krone ,&nbsp;Martin Nöllenburg ,&nbsp;Jiacheng Yu ,&nbsp;Hiroaki Natsukawa ,&nbsp;Xiaoru Yuan ,&nbsp;Hsiang-Yun Wu","doi":"10.1016/j.cag.2024.104115","DOIUrl":"10.1016/j.cag.2024.104115","url":null,"abstract":"<div><div>Biological networks describe complex relationships in biological systems, which represent biological entities as vertices and their underlying connectivity as edges. Ideally, for a complete analysis of such systems, domain experts need to visually integrate multiple sources of heterogeneous data, and visually, as well as numerically, probe said data in order to explore or validate (mechanistic) hypotheses. Such visual analyses require the coming together of biological domain experts, bioinformaticians, as well as network scientists to create useful visualization tools. Owing to the underlying graph data becoming ever larger and more complex, the visual representation of such biological networks has become challenging in its own right. This introduction and survey aims to describe the current state of biological network visualization in order to identify scientific gaps for visualization experts, network scientists, bioinformaticians, and domain experts, such as biologists, or biochemists, alike. Specifically, we revisit the classic visualization pipeline, upon which we base this paper’s taxonomy and structure, which in turn forms the basis of our literature classification. This pipeline describes the process of visualizing data, starting with the raw data itself, through the construction of data tables, to the actual creation of visual structures and views, as a function of task-driven user interaction. Literature was systematically surveyed using API-driven querying where possible, and the collected papers were manually read and categorized based on the identified sub-components of this visualization pipeline’s individual steps. From this survey, we highlight a number of exemplary visualization tools from multiple biological sub-domains in order to explore how they adapt these discussed techniques and why. Additionally, this taxonomic classification of the collected set of papers allows us to identify existing gaps in biological network visualization practices. We finally conclude this report with a list of open challenges and potential research directions. Examples of such gaps include (i) the overabundance of visualization tools using schematic or straight-line node-link diagrams, despite the availability of powerful alternatives, or (ii) the lack of visualization tools that also integrate more advanced network analysis techniques beyond basic graph descriptive statistics.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"126 ","pages":"Article 104115"},"PeriodicalIF":2.5,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143097300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
GaussianAvatar: Human avatar Gaussian splatting from monocular videos
IF 2.5, CAS Quartile 4, Computer Science
Computers & Graphics-UK Pub Date: 2025-02-01 DOI: 10.1016/j.cag.2024.104155
Haian Lin, Yinwei Zhan
{"title":"GaussianAvatar: Human avatar Gaussian splatting from monocular videos","authors":"Haian Lin,&nbsp;Yinwei Zhan","doi":"10.1016/j.cag.2024.104155","DOIUrl":"10.1016/j.cag.2024.104155","url":null,"abstract":"<div><div>Many application fields including virtual reality and movie production demand reconstructing high-quality digital human avatars from monocular videos and real-time rendering. However, existing neural radiance field (NeRF)-based methods are costly to train and render. In this paper, we propose GaussianAvatar, a novel framework that extends 3D Gaussian to dynamic human scenes, enabling fast training and real-time rendering. The human 3D Gaussian in canonical space is initialized and transformed to posed space using Linear Blend Skinning (LBS), based on pose parameters, to learn the fine details of the human body at a very small computational cost. We design a pose parameter refinement module and a LBS weight optimization module to increase the accuracy of the pose parameter detection in the real dataset and introduce multi-resolution hash coding to accelerate the training speed. Experimental results demonstrate that our method outperforms existing methods in terms of training time, rendering speed, and reconstruction quality.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"126 ","pages":"Article 104155"},"PeriodicalIF":2.5,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143096904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
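Linear Blend Skinning, used above to move canonical Gaussians into posed space, transforms each point by a skinning-weighted sum of bone transforms, x' = sum_k w_k (R_k x + t_k). A small NumPy sketch of that transform applied to Gaussian centers (the weights and bone transforms here are placeholders):

```python
import numpy as np

def lbs_transform(points, weights, rotations, translations):
    """Apply Linear Blend Skinning to canonical points.

    points:       (N, 3) canonical Gaussian centers
    weights:      (N, K) skinning weights, rows sum to 1
    rotations:    (K, 3, 3) per-bone rotation matrices
    translations: (K, 3) per-bone translations
    """
    # Per-bone transformed positions: (K, N, 3)
    posed_per_bone = np.einsum('kij,nj->kni', rotations, points) + translations[:, None, :]
    # Blend with skinning weights: (N, 3)
    return np.einsum('nk,kni->ni', weights, posed_per_bone)
```

In practice the Gaussian covariances are rotated by the same blended transform; only the centers are shown here.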
Foreword to the special section on graphics interface 2023
IF 2.5, CAS Quartile 4, Computer Science
Computers & Graphics-UK Pub Date: 2025-02-01 DOI: 10.1016/j.cag.2025.104162
KangKang Yin, Paul Kry
{"title":"Foreword to the special section on graphics interface 2023","authors":"KangKang Yin ,&nbsp;Paul Kry","doi":"10.1016/j.cag.2025.104162","DOIUrl":"10.1016/j.cag.2025.104162","url":null,"abstract":"","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"126 ","pages":"Article 104162"},"PeriodicalIF":2.5,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143096773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0