Computers & Graphics-UK: Latest Publications

LightingFormer: Transformer-CNN hybrid network for low-light image enhancement
IF 2.5 · CAS Tier 4 · Computer Science
Computers & Graphics-UK Pub Date: 2024-09-18 DOI: 10.1016/j.cag.2024.104089
Cong Bi, Wenhua Qian, Jinde Cao, Xue Wang
{"title":"LightingFormer: Transformer-CNN hybrid network for low-light image enhancement","authors":"Cong Bi ,&nbsp;Wenhua Qian ,&nbsp;Jinde Cao ,&nbsp;Xue Wang","doi":"10.1016/j.cag.2024.104089","DOIUrl":"10.1016/j.cag.2024.104089","url":null,"abstract":"<div><div>Recent deep-learning methods have shown promising results in low-light image enhancement. However, current methods often suffer from noise and artifacts, and most are based on convolutional neural networks, which have limitations in capturing long-range dependencies resulting in insufficient recovery of extremely dark parts in low-light images. To tackle these issues, this paper proposes a novel Transformer-based low-light image enhancement network called LightingFormer. Specifically, we propose a novel Transformer-CNN hybrid block that captures global and local information via mixed attention. It combines the advantages of the Transformer in capturing long-range dependencies and the advantages of CNNs in extracting low-level features and enhancing locality to recover extremely dark parts and enhance local details in low-light images. Moreover, we adopt the U-Net discriminator to enhance different regions in low-light images adaptively, avoiding overexposure or underexposure, and suppressing noise and artifacts. Extensive experiments show that our method outperforms the state-of-the-art methods quantitatively and qualitatively. Furthermore, the application to object detection demonstrates the potential of our method in high-level vision tasks.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"124 ","pages":"Article 104089"},"PeriodicalIF":2.5,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142316195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
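The hybrid design described in the abstract lends itself to a compact illustration. The PyTorch sketch below shows one plausible form of a Transformer-CNN hybrid block: a global multi-head self-attention branch in parallel with a depthwise convolutional branch, fused by a 1×1 convolution. The module structure, fusion strategy, and all names are our assumptions for illustration, not LightingFormer's published architecture.

```python
import torch
import torch.nn as nn

class HybridBlock(nn.Module):
    """Sketch of a Transformer-CNN hybrid block with mixed (global + local) attention."""
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.local = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels),  # depthwise 3x3
            nn.GELU(),
        )
        self.fuse = nn.Conv2d(2 * channels, channels, 1)  # 1x1 fusion conv

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        tokens = self.norm(x.flatten(2).transpose(1, 2))   # (B, H*W, C)
        g, _ = self.attn(tokens, tokens, tokens)           # global branch: long-range dependencies
        g = g.transpose(1, 2).reshape(b, c, h, w)
        loc = self.local(x)                                # local branch: low-level detail
        return x + self.fuse(torch.cat([g, loc], dim=1))   # residual mixed-attention fusion

x = torch.randn(1, 32, 16, 16)
print(HybridBlock(32)(x).shape)  # torch.Size([1, 32, 16, 16])
```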
APE-GAN: A colorization method for focal areas of infrared images guided by an improved attention mask mechanism
IF 2.5 · CAS Tier 4 · Computer Science
Computers & Graphics-UK Pub Date: 2024-09-18 DOI: 10.1016/j.cag.2024.104086
Wenchao Ren, Liangfu Li, Shiyi Wen, Lingmei Ai
{"title":"APE-GAN: A colorization method for focal areas of infrared images guided by an improved attention mask mechanism","authors":"Wenchao Ren,&nbsp;Liangfu Li,&nbsp;Shiyi Wen,&nbsp;Lingmei Ai","doi":"10.1016/j.cag.2024.104086","DOIUrl":"10.1016/j.cag.2024.104086","url":null,"abstract":"<div><div>Due to their minimal susceptibility to environmental changes, infrared images are widely applicable across various fields, particularly in the realm of traffic. Nonetheless, a common drawback of infrared images lies in their limited chroma and detail information, posing challenges for clear information retrieval. While extensive research has been conducted on colorizing infrared images in recent years, existing methods primarily focus on overall translation without adequately addressing the foreground area containing crucial details. To address this issue, we propose a novel approach that distinguishes and colors the foreground content with important information and the background content with less significant details separately before fusing them into a colored image. Consequently, we introduce an enhanced generative adversarial network based on Attention mask to meticulously translate the foreground content containing vital information more comprehensively. Furthermore, we have carefully designed a new composite loss function to optimize high-level detail generation and improve image colorization at a finer granularity. Detailed testing on IRVI datasets validates the effectiveness of our proposed method in solving the problem of infrared image coloring.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"124 ","pages":"Article 104086"},"PeriodicalIF":2.5,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142319152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
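As a toy illustration of the mask-guided split described above, the sketch below blends a foreground and a background colorization branch with a learned soft attention mask. The single-convolution branches and mask head are placeholders of our own, not APE-GAN's actual generator.

```python
import torch
import torch.nn as nn

class MaskGuidedColorizer(nn.Module):
    """Sketch: colorize foreground and background separately, fuse via a soft mask."""
    def __init__(self):
        super().__init__()
        self.fg = nn.Conv2d(1, 3, 3, padding=1)   # foreground colorization branch (placeholder)
        self.bg = nn.Conv2d(1, 3, 3, padding=1)   # background colorization branch (placeholder)
        self.mask = nn.Sequential(                # soft attention mask in [0, 1]
            nn.Conv2d(1, 1, 3, padding=1), nn.Sigmoid()
        )

    def forward(self, ir: torch.Tensor) -> torch.Tensor:
        m = self.mask(ir)
        # m weights the detail-critical foreground, (1 - m) the background
        return m * self.fg(ir) + (1 - m) * self.bg(ir)

ir = torch.rand(1, 1, 64, 64)                    # single-channel infrared input
print(MaskGuidedColorizer()(ir).shape)           # torch.Size([1, 3, 64, 64])
```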
ST2SI: Image Style Transfer via Vision Transformer using Spatial Interaction
IF 2.5 · CAS Tier 4 · Computer Science
Computers & Graphics-UK Pub Date: 2024-09-16 DOI: 10.1016/j.cag.2024.104084
Wenshu Li, Yinliang Chen, Xiaoying Guo, Xiaoyu He
{"title":"ST2SI: Image Style Transfer via Vision Transformer using Spatial Interaction","authors":"Wenshu Li ,&nbsp;Yinliang Chen ,&nbsp;Xiaoying Guo ,&nbsp;Xiaoyu He","doi":"10.1016/j.cag.2024.104084","DOIUrl":"10.1016/j.cag.2024.104084","url":null,"abstract":"<div><div>While retaining the original content structure, image style transfer uses style image to render it to obtain stylized images with artistic features. Because the content image contains different detail units and the style image has various style patterns, it is easy to cause the distortion of the stylized image. We proposes a new Style Transfer based on Vision Transformer using Spatial Interaction (ST2SI), which takes advantage of Spatial Interactive Convolution (SIC) and Spatial Unit Attention (SUA) to further enhance the content and style representation, so that the encoder can not only better learn the features of the content domain and the style domain, but also maintain the structural integrity of the image content and the effective integration of style features. Concretely, the high-order spatial interaction ability of Spatial Interactive Convolution can capture complex style patterns, and Spatial Unit Attention can balance the content information of different detail units through the change of attention weight, thus solving the problem of image distortion. Comprehensive qualitative and quantitative experiments prove the efficacy of our approach.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"124 ","pages":"Article 104084"},"PeriodicalIF":2.5,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142312678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
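To make the idea of attention weights balancing detail units concrete, here is a minimal spatial-attention sketch: each location of a feature map is rescaled by a learned weight in [0, 1]. The single-convolution weight head is our simplification, not the paper's SUA module.

```python
import torch
import torch.nn as nn

class SpatialUnitAttention(nn.Module):
    """Sketch: per-location weights rebalance the detail units of a feature map."""
    def __init__(self, channels: int):
        super().__init__()
        # squeeze all channels into one spatial weight map
        self.weight = nn.Sequential(nn.Conv2d(channels, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        w = self.weight(feat)        # (B, 1, H, W) attention weights
        return feat * w              # spatially rebalanced features

feat = torch.randn(2, 64, 32, 32)
print(SpatialUnitAttention(64)(feat).shape)  # torch.Size([2, 64, 32, 32])
```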
Editorial Note: Computers & Graphics Issue 123
IF 2.5 · CAS Tier 4 · Computer Science
Computers & Graphics-UK Pub Date: 2024-09-13 DOI: 10.1016/j.cag.2024.104072
{"title":"Editorial Note Computers & Graphics Issue 123","authors":"","doi":"10.1016/j.cag.2024.104072","DOIUrl":"10.1016/j.cag.2024.104072","url":null,"abstract":"","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"123 ","pages":"Article 104072"},"PeriodicalIF":2.5,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142229895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SHAPE: A visual computing pipeline for interactive landmarking of 3D photograms and patient reporting for assessing craniosynostosis
IF 2.5 · CAS Tier 4 · Computer Science
Computers & Graphics-UK Pub Date: 2024-09-12 DOI: 10.1016/j.cag.2024.104056
Carsten Görg, Connor Elkhill, Jasmine Chaij, Kristin Royalty, Phuong D. Nguyen, Brooke French, Ines A. Cruz-Guerrero, Antonio R. Porras
{"title":"SHAPE: A visual computing pipeline for interactive landmarking of 3D photograms and patient reporting for assessing craniosynostosis","authors":"Carsten Görg ,&nbsp;Connor Elkhill ,&nbsp;Jasmine Chaij ,&nbsp;Kristin Royalty ,&nbsp;Phuong D. Nguyen ,&nbsp;Brooke French ,&nbsp;Ines A. Cruz-Guerrero ,&nbsp;Antonio R. Porras","doi":"10.1016/j.cag.2024.104056","DOIUrl":"10.1016/j.cag.2024.104056","url":null,"abstract":"<div><div>3D photogrammetry is a cost-effective, non-invasive imaging modality that does not require the use of ionizing radiation or sedation. Therefore, it is specifically valuable in pediatrics and is used to support the diagnosis and longitudinal study of craniofacial developmental pathologies such as craniosynostosis — the premature fusion of one or more cranial sutures resulting in local cranial growth restrictions and cranial malformations. Analysis of 3D photogrammetry requires the identification of craniofacial landmarks to segment the head surface and compute metrics to quantify anomalies. Unfortunately, commercial 3D photogrammetry software requires intensive manual landmark placements, which is time-consuming and prone to errors. We designed and implemented SHAPE, a System for Head-shape Analysis and Pediatric Evaluation. It integrates our previously developed automated landmarking method in a visual computing pipeline to evaluate a patient’s 3D photogram while allowing for manual confirmation and correction. It also automatically computes advanced metrics to quantify craniofacial anomalies and automatically creates a report that can be uploaded to the patient’s electronic health record. We conducted a user study with a professional clinical photographer to compare SHAPE to the existing clinical workflow. We found that SHAPE allows for the evaluation of a craniofacial 3D photogram more than three times faster than the current clinical workflow (<span><math><mrow><mn>3</mn><mo>.</mo><mn>85</mn><mo>±</mo><mn>0</mn><mo>.</mo><mn>99</mn></mrow></math></span> vs. <span><math><mrow><mn>13</mn><mo>.</mo><mn>07</mn><mo>±</mo><mn>5</mn><mo>.</mo><mn>29</mn></mrow></math></span> minutes, <span><math><mrow><mi>p</mi><mo>&lt;</mo><mn>0</mn><mo>.</mo><mn>001</mn></mrow></math></span>). Our qualitative study findings indicate that the SHAPE workflow is well aligned with the existing clinical workflow and that SHAPE has useful features and is easy to learn.</div></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"125 ","pages":"Article 104056"},"PeriodicalIF":2.5,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142525947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
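The abstract does not spell out which metrics SHAPE computes, but the flavor of landmark-based head-shape quantification can be shown with the classical cephalic index (maximum head width divided by maximum head length, times 100). The landmark names and coordinates below are hypothetical and purely illustrative.

```python
import numpy as np

landmarks = {  # hypothetical 3D landmark positions in mm
    "glabella":        np.array([0.0, 95.0, 0.0]),    # front of head
    "opisthocranion":  np.array([0.0, -90.0, 0.0]),   # back of head
    "eurion_left":     np.array([-75.0, 0.0, 0.0]),   # widest point, left
    "eurion_right":    np.array([76.0, 0.0, 0.0]),    # widest point, right
}

length = np.linalg.norm(landmarks["glabella"] - landmarks["opisthocranion"])
width = np.linalg.norm(landmarks["eurion_left"] - landmarks["eurion_right"])
cephalic_index = 100.0 * width / length
print(f"cephalic index: {cephalic_index:.1f}")  # ~81.6
```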
GIC-Flow: Appearance flow estimation via global information correlation for virtual try-on under large deformation
IF 2.5 · CAS Tier 4 · Computer Science
Computers & Graphics-UK Pub Date: 2024-09-12 DOI: 10.1016/j.cag.2024.104071
Peng Zhang, Jiamei Zhan, Kexin Sun, Jie Zhang, Meng Wei, Kexin Wang
{"title":"GIC-Flow: Appearance flow estimation via global information correlation for virtual try-on under large deformation","authors":"Peng Zhang ,&nbsp;Jiamei Zhan ,&nbsp;Kexin Sun ,&nbsp;Jie Zhang ,&nbsp;Meng Wei ,&nbsp;Kexin Wang","doi":"10.1016/j.cag.2024.104071","DOIUrl":"10.1016/j.cag.2024.104071","url":null,"abstract":"<div><p>The primary aim of image-based virtual try-on is to seamlessly deform the target garment image to align with the human body. Owing to the inherent non-rigid nature of garments, current methods prioritise flexible deformation through appearance flow with high degrees of freedom. However, existing appearance flow estimation methods solely focus on the correlation of local feature information. While this strategy successfully avoids the extensive computational effort associated with the direct computation of the global information correlation of feature maps, it leads to challenges in garments adapting to large deformation scenarios. To overcome these limitations, we propose the GIC-Flow framework, which obtains appearance flow by calculating the global information correlation while reducing computational regression. Specifically, our proposed global streak information matching module is designed to decompose the appearance flow into horizontal and vertical vectors, effectively propagating global information in both directions. This innovative approach considerably diminishes computational requirements, contributing to an enhanced and efficient process. In addition, to ensure the accurate deformation of local texture in garments, we propose the local aggregate information matching module to aggregate information from the nearest neighbours before computing the global correlation and to enhance weak semantic information. Comprehensive experiments conducted using our method on the VITON and VITON-HD datasets show that GIC-Flow outperforms existing state-of-the-art algorithms, particularly in cases involving complex garment deformation.</p></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"124 ","pages":"Article 104071"},"PeriodicalIF":2.5,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142229691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
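The horizontal/vertical decomposition can be illustrated with axial correlations: instead of a full (HW)×(HW) correlation volume, each position is correlated only with its own row and then its own column, propagating global information in two 1D passes. This einsum sketch is our reading of that strategy, not GIC-Flow's actual module.

```python
import torch

def axial_correlation(fa: torch.Tensor, fb: torch.Tensor):
    """fa, fb: (B, C, H, W) feature maps; returns row-wise and column-wise correlations."""
    b, c, h, w = fa.shape
    # horizontal: for each row h, correlate column positions w and v -> (B, H, W, W)
    horiz = torch.einsum("bchw,bchv->bhwv", fa, fb)
    # vertical: for each column w, correlate row positions h and g -> (B, W, H, H)
    vert = torch.einsum("bchw,bcgw->bwhg", fa, fb)
    return horiz / c ** 0.5, vert / c ** 0.5  # scaled dot-product correlations

fa, fb = torch.randn(1, 8, 16, 16), torch.randn(1, 8, 16, 16)
h_corr, v_corr = axial_correlation(fa, fb)
print(h_corr.shape, v_corr.shape)  # torch.Size([1, 16, 16, 16]) twice, since H = W here
```

The two axial passes cost O(H·W·(H + W)·C) rather than O((H·W)²·C) for the full correlation, which is the efficiency argument the abstract makes.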
MuSic-UDF: Learning Multi-Scale dynamic grid representation for high-fidelity surface reconstruction from point clouds
IF 2.5 · CAS Tier 4 · Computer Science
Computers & Graphics-UK Pub Date: 2024-09-10 DOI: 10.1016/j.cag.2024.104081
Chuan Jin, Tieru Wu, Yu-Shen Liu, Junsheng Zhou
{"title":"MuSic-UDF: Learning Multi-Scale dynamic grid representation for high-fidelity surface reconstruction from point clouds","authors":"Chuan Jin ,&nbsp;Tieru Wu ,&nbsp;Yu-Shen Liu ,&nbsp;Junsheng Zhou","doi":"10.1016/j.cag.2024.104081","DOIUrl":"10.1016/j.cag.2024.104081","url":null,"abstract":"<div><p>Surface reconstruction for point clouds is a central task in 3D modeling. Recently, the attractive approaches solve this problem by learning neural implicit representations, e.g., unsigned distance functions (UDFs), from point clouds, which have achieved good performance. However, the existing UDF-based methods still struggle to recover the local geometrical details. One of the difficulties arises from the used inflexible representations, which is hard to capture the local high-fidelity geometry details. In this paper, we propose a novel neural implicit representation, named MuSic-UDF, which leverages <strong>Mu</strong>lti-<strong>S</strong>cale dynam<strong>ic</strong> grids for high-fidelity and flexible surface reconstruction from raw point clouds with arbitrary typologies. Specifically, we initialize a hierarchical voxel grid where each grid point stores a learnable 3D coordinate. Then, we optimize these grids such that different levels of geometry structures can be captured adaptively. To further explore the geometry details, we introduce a frequency encoding strategy to hierarchically encode these coordinates. MuSic-UDF does not require any supervisions like ground truth distance values or point normals. We conduct comprehensive experiments under widely-used benchmarks, where the results demonstrate the superior performance of our proposed method compared to the state-of-the-art methods.</p></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"124 ","pages":"Article 104081"},"PeriodicalIF":2.5,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142241025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
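The frequency-encoding step resembles standard sin/cos positional encoding. The sketch below lifts (learnable) 3D grid coordinates into multi-frequency features; the number of bands is an arbitrary choice of ours, and the surrounding network is omitted.

```python
import math
import torch

def frequency_encode(coords: torch.Tensor, n_bands: int = 4) -> torch.Tensor:
    """coords: (N, 3) points; returns (N, 3 * 2 * n_bands) encoded features."""
    feats = []
    for k in range(n_bands):
        freq = (2.0 ** k) * math.pi          # doubling frequencies per band
        feats.append(torch.sin(freq * coords))
        feats.append(torch.cos(freq * coords))
    return torch.cat(feats, dim=-1)

grid_points = torch.rand(5, 3, requires_grad=True)  # learnable grid coordinates
print(frequency_encode(grid_points).shape)          # torch.Size([5, 24])
```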
Voice user interfaces for effortless navigation in medical virtual reality environments
IF 2.5 · CAS Tier 4 · Computer Science
Computers & Graphics-UK Pub Date: 2024-09-07 DOI: 10.1016/j.cag.2024.104069
Jan Hombeck, Henrik Voigt, Kai Lawonn
{"title":"Voice user interfaces for effortless navigation in medical virtual reality environments","authors":"Jan Hombeck,&nbsp;Henrik Voigt,&nbsp;Kai Lawonn","doi":"10.1016/j.cag.2024.104069","DOIUrl":"10.1016/j.cag.2024.104069","url":null,"abstract":"<div><p>In various situations, such as clinical environments with sterile conditions or when hands are occupied with multiple devices, traditional methods of navigation and scene adjustment are impractical or even impossible. We explore a new solution by using voice control to facilitate interaction in virtual worlds to avoid the use of additional controllers. Therefore, we investigate three scenarios: Object Orientation, Visualization Customization, and Analytical Tasks and evaluate whether natural language interaction is possible and promising in each of these scenarios. In our quantitative user study participants were able to control virtual environments effortlessly using verbal instructions. This resulted in rapid orientation adjustments, adaptive visual aids, and accurate data analysis. In addition, user satisfaction and usability surveys showed consistently high levels of acceptance and ease of use. In conclusion, our study shows that the use of natural language can be a promising alternative for the improvement of user interaction in virtual environments. It enables intuitive interactions in virtual spaces, especially in situations where traditional controls have limitations.</p></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"124 ","pages":"Article 104069"},"PeriodicalIF":2.5,"publicationDate":"2024-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0097849324002048/pdfft?md5=5dba80971d593332ff92694bfbd894e8&pid=1-s2.0-S0097849324002048-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142167137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
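The abstract does not describe the voice pipeline itself; purely as a toy, the sketch below shows a final dispatch step that maps recognized utterances to scene actions for the three scenarios named above. The command grammar and action names are entirely invented.

```python
import re

def dispatch(utterance: str) -> str:
    """Toy mapping of a recognized utterance to a (hypothetical) scene action."""
    u = utterance.lower()
    if m := re.search(r"rotate .*?(-?\d+) degrees", u):   # object orientation
        return f"scene.rotate(angle={m.group(1)})"
    if m := re.search(r"opacity to (\d+)", u):            # visualization customization
        return f"scene.set_opacity({int(m.group(1)) / 100})"
    if "distance between" in u:                           # analytical task
        return "scene.measure_distance(selection)"
    return "no-op"

print(dispatch("Rotate the model 45 degrees"))   # scene.rotate(angle=45)
print(dispatch("Set opacity to 30 percent"))     # scene.set_opacity(0.3)
```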
Synthetic surface mesh generation of aortic dissections using statistical shape modeling
IF 2.5 · CAS Tier 4 · Computer Science
Computers & Graphics-UK Pub Date: 2024-09-06 DOI: 10.1016/j.cag.2024.104070
Kai Ostendorf, Kathrin Bäumler, Domenico Mastrodicasa, Dominik Fleischmann, Bernhard Preim, Gabriel Mistelbauer
{"title":"Synthetic surface mesh generation of aortic dissections using statistical shape modeling","authors":"Kai Ostendorf ,&nbsp;Kathrin Bäumler ,&nbsp;Domenico Mastrodicasa ,&nbsp;Dominik Fleischmann ,&nbsp;Bernhard Preim ,&nbsp;Gabriel Mistelbauer","doi":"10.1016/j.cag.2024.104070","DOIUrl":"10.1016/j.cag.2024.104070","url":null,"abstract":"<div><p>Aortic dissection is a rare disease affecting the aortic wall layers splitting the aortic lumen into two flow channels: the true and false lumen. The rarity of the disease leads to a sparsity of available datasets resulting in a low amount of available training data for in-silico studies or the training of machine learning algorithms. To mitigate this issue, we use statistical shape modeling to create a database of Stanford type B dissection surface meshes. We account for the complex disease anatomy by modeling two separate flow channels in the aorta, the true and false lumen. Former approaches mainly modeled the aortic arch including its branches but not two separate flow channels inside the aorta. To our knowledge, our approach is the first to attempt generating synthetic aortic dissection surface meshes. For the statistical shape model, the aorta is parameterized using the centerlines of the respective lumen and the according ellipses describing the cross-section of the lumen while being aligned along the centerline employing rotation-minimizing frames. To evaluate our approach we introduce disease-specific quality criteria by investigating the torsion and twist of the true lumen.</p></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"124 ","pages":"Article 104070"},"PeriodicalIF":2.5,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S009784932400205X/pdfft?md5=f0b8f98a6ffb57b157863af63c74d980&pid=1-s2.0-S009784932400205X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142166764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
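Rotation-minimizing frames along a centerline are commonly computed with the double-reflection method of Wang et al. (2008). The abstract does not say which algorithm the authors use, so the generic NumPy implementation below is only an illustration of the concept, not their code.

```python
import numpy as np

def rotation_minimizing_frames(points: np.ndarray, tangents: np.ndarray,
                               r0: np.ndarray) -> np.ndarray:
    """Double-reflection method. points, tangents: (N, 3) with unit tangents;
    r0: initial normal, orthogonal to tangents[0]. Returns (N, 3) frame normals."""
    normals = [r0 / np.linalg.norm(r0)]
    for i in range(len(points) - 1):
        v1 = points[i + 1] - points[i]               # first reflection plane
        c1 = v1 @ v1
        r_l = normals[i] - (2 / c1) * (v1 @ normals[i]) * v1
        t_l = tangents[i] - (2 / c1) * (v1 @ tangents[i]) * v1
        v2 = tangents[i + 1] - t_l                   # second reflection plane
        c2 = v2 @ v2
        normals.append(r_l - (2 / c2) * (v2 @ r_l) * v2)
    return np.array(normals)

# usage on a toy helical centerline
s = np.linspace(0, 2 * np.pi, 50)
pts = np.stack([np.cos(s), np.sin(s), 0.2 * s], axis=1)
tan = np.gradient(pts, axis=0)
tan /= np.linalg.norm(tan, axis=1, keepdims=True)
frames = rotation_minimizing_frames(pts, tan, r0=np.array([1.0, 0.0, 0.0]))
print(frames.shape)  # (50, 3)
```

Sweeping the lumen's cross-sectional ellipses along such frames avoids the spurious twisting that a naive Frenet frame would introduce at low-curvature segments.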
A semantic edge-aware parameter efficient image filtering technique
IF 2.5 · CAS Tier 4 · Computer Science
Computers & Graphics-UK Pub Date: 2024-09-06 DOI: 10.1016/j.cag.2024.104068
Kunal Pradhan, Swarnajyoti Patra
{"title":"A semantic edge-aware parameter efficient image filtering technique","authors":"Kunal Pradhan ,&nbsp;Swarnajyoti Patra","doi":"10.1016/j.cag.2024.104068","DOIUrl":"10.1016/j.cag.2024.104068","url":null,"abstract":"<div><p>The success of a structure preserving filtering technique has relied on its capability to recognize structures and textures present in the input image. In this paper a novel structure preserving filtering technique is presented that first, generates an edge-map of the input image by exploiting semantic information. Then, an edge-aware adaptive recursive median filter is utilized to produce the filter image. The technique provides satisfactory results for a wide variety of images with minimal fine-tuning of its parameters. Moreover, along with the various computer graphics applications the proposed technique also shows its robustness to incorporate spatial information for spectral-spatial classification of hyperspectral images. A MATLAB implementation of the proposed technique is available at-<span><span>https://www.github.com/K-Pradhan/A-semantic-edge-aware-parameter-efficient-image-filtering-technique</span><svg><path></path></svg></span></p></div>","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"124 ","pages":"Article 104068"},"PeriodicalIF":2.5,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142173878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
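The paper's reference implementation is MATLAB (linked above); to keep one language across the sketches in this listing, the Python sketch below illustrates the edge-aware recursive median idea: pixels flagged by an edge map are left intact while the rest are median-filtered repeatedly. The gradient-threshold edge detector here is a crude stand-in for the paper's semantic edge map, and the parameters are arbitrary.

```python
import numpy as np
from scipy.ndimage import median_filter, sobel

def edge_aware_recursive_median(img: np.ndarray, iters: int = 5,
                                edge_thresh: float = 0.15) -> np.ndarray:
    """Sketch: smooth texture recursively while preserving edge pixels."""
    grad = np.hypot(sobel(img, axis=0), sobel(img, axis=1))
    edges = grad > edge_thresh * grad.max()    # stand-in for a semantic edge map
    out = img.copy()
    for _ in range(iters):                     # recursive: refilter previous output
        smoothed = median_filter(out, size=3)
        out = np.where(edges, out, smoothed)   # keep edge pixels, smooth the rest
    return out

img = np.random.rand(64, 64)
print(edge_aware_recursive_median(img).shape)  # (64, 64)
```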