IEEE Transactions on Image Processing: Latest Articles

Designing an Illumination-Aware Network for Deep Image Relighting
IF 10.6 · CAS Division 1 · Computer Science
IEEE Transactions on Image Processing Pub Date: 2022-07-21 DOI: 10.48550/arXiv.2207.10582
Zuo-Liang Zhu, Z. Li, Ruimao Zhang, Chunle Guo, Ming-Ming Cheng
{"title":"Designing an Illumination-Aware Network for Deep Image Relighting","authors":"Zuo-Liang Zhu, Z. Li, Ruimao Zhang, Chunle Guo, Ming-Ming Cheng","doi":"10.48550/arXiv.2207.10582","DOIUrl":"https://doi.org/10.48550/arXiv.2207.10582","url":null,"abstract":"Lighting is a determining factor in photography that affects the style, expression of emotion, and even quality of images. Creating or finding satisfying lighting conditions, in reality, is laborious and time-consuming, so it is of great value to develop a technology to manipulate illumination in an image as post-processing. Although previous works have explored techniques based on the physical viewpoint for relighting images, extensive supervisions and prior knowledge are necessary to generate reasonable images, restricting the generalization ability of these works. In contrast, we take the viewpoint of image-to-image translation and implicitly merge ideas of the conventional physical viewpoint. In this paper, we present an Illumination-Aware Network (IAN) which follows the guidance from hierarchical sampling to progressively relight a scene from a single image with high efficiency. In addition, an Illumination-Aware Residual Block (IARB) is designed to approximate the physical rendering process and to extract precise descriptors of light sources for further manipulations. We also introduce a depth-guided geometry encoder for acquiring valuable geometry- and structure-related representations once the depth information is available. Experimental results show that our proposed method produces better quantitative and qualitative relighting results than previous state-of-the-art methods. The code and models are publicly available on https://github.com/NK-CS-ZZL/IAN.","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"31 1","pages":"5396-5411"},"PeriodicalIF":10.6,"publicationDate":"2022-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49347222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Content-Aware Scalable Deep Compressed Sensing
IF 10.6 · CAS Division 1 · Computer Science
IEEE Transactions on Image Processing Pub Date: 2022-07-19 DOI: 10.48550/arXiv.2207.09313
Bin Chen, Jian Zhang
{"title":"Content-Aware Scalable Deep Compressed Sensing","authors":"Bin Chen, Jian Zhang","doi":"10.48550/arXiv.2207.09313","DOIUrl":"https://doi.org/10.48550/arXiv.2207.09313","url":null,"abstract":"To more efficiently address image compressed sensing (CS) problems, we present a novel content-aware scalable network dubbed CASNet which collectively achieves adaptive sampling rate allocation, fine granular scalability and high-quality reconstruction. We first adopt a data-driven saliency detector to evaluate the importance of different image regions and propose a saliency-based block ratio aggregation (BRA) strategy for sampling rate allocation. A unified learnable generating matrix is then developed to produce sampling matrix of any CS ratio with an ordered structure. Being equipped with the optimization-inspired recovery subnet guided by saliency information and a multi-block training scheme preventing blocking artifacts, CASNet jointly reconstructs the image blocks sampled at various sampling rates with one single model. To accelerate training convergence and improve network robustness, we propose an SVD-based initialization scheme and a random transformation enhancement (RTE) strategy, which are extensible without introducing extra parameters. All the CASNet components can be combined and learned end-to-end. We further provide a four-stage implementation for evaluation and practical deployments. Experiments demonstrate that CASNet outperforms other CS networks by a large margin, validating the collaboration and mutual supports among its components and strategies. Codes are available at https://github.com/Guaishou74851/CASNet.","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"31 1","pages":"5412-5426"},"PeriodicalIF":10.6,"publicationDate":"2022-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45018690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 17
Unsupervised High-Resolution Portrait Gaze Correction and Animation
IF 10.6 · CAS Division 1 · Computer Science
IEEE Transactions on Image Processing Pub Date: 2022-07-01 DOI: 10.48550/arXiv.2207.00256
Jichao Zhang, Jingjing Chen, Hao Tang, E. Sangineto, Peng Wu, Yan Yan, N. Sebe, Wei Wang
{"title":"Unsupervised High-Resolution Portrait Gaze Correction and Animation","authors":"Jichao Zhang, Jingjing Chen, Hao Tang, E. Sangineto, Peng Wu, Yan Yan, N. Sebe, Wei Wang","doi":"10.48550/arXiv.2207.00256","DOIUrl":"https://doi.org/10.48550/arXiv.2207.00256","url":null,"abstract":"This paper proposes a gaze correction and animation method for high-resolution, unconstrained portrait images, which can be trained without the gaze angle and the head pose annotations. Common gaze-correction methods usually require annotating training data with precise gaze, and head pose information. Solving this problem using an unsupervised method remains an open problem, especially for high-resolution face images in the wild, which are not easy to annotate with gaze and head pose labels. To address this issue, we first create two new portrait datasets: CelebGaze ( $256 times 256$ ) and high-resolution CelebHQGaze ( $512 times 512$ ). Second, we formulate the gaze correction task as an image inpainting problem, addressed using a Gaze Correction Module (GCM) and a Gaze Animation Module (GAM). Moreover, we propose an unsupervised training strategy, i.e., Synthesis-As-Training, to learn the correlation between the eye region features and the gaze angle. As a result, we can use the learned latent space for gaze animation with semantic interpolation in this space. Moreover, to alleviate both the memory and the computational costs in the training and the inference stage, we propose a Coarse-to-Fine Module (CFM) integrated with GCM and GAM. Extensive experiments validate the effectiveness of our method for both the gaze correction and the gaze animation tasks in both low and high-resolution face datasets in the wild and demonstrate the superiority of our method with respect to the state of the art.","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"31 1","pages":"5272-5286"},"PeriodicalIF":10.6,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47912929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Motion Feature Aggregation for Video-Based Person Re-Identification
IF 10.6 · CAS Division 1 · Computer Science
IEEE Transactions on Image Processing Pub Date: 2022-05-27 DOI: 10.1109/TIP.2022.3175593
Xinqian Gu, Hong Chang, Bingpeng Ma, S. Shan
{"title":"Motion Feature Aggregation for Video-Based Person Re-Identification","authors":"Xinqian Gu, Hong Chang, Bingpeng Ma, S. Shan","doi":"10.1109/TIP.2022.3175593","DOIUrl":"https://doi.org/10.1109/TIP.2022.3175593","url":null,"abstract":"Most video-based person re-identification (re-id) methods only focus on appearance features but neglect motion features. In fact, motion features can help to distinguish the target persons that are hard to be identified only by appearance features. However, most existing temporal information modeling methods cannot extract motion features effectively or efficiently for v ideo-based re-id. In this paper, we propose a more efficient Motion Feature Aggregation (MFA) method to model and aggregate motion information in the feature map level for video-based re-id. The proposed MFA consists of (i) a coarse-grained motion learning module, which extracts coarse-grained motion features based on the position changes of body parts over time, and (ii) a fine-grained motion learning module, which extracts fine-grained motion features based on the appearance changes of body parts over time. These two modules can model motion information from different granularities and are complementary to each other. It is easy to combine the proposed method with existing network architectures for end-to-end training. Extensive experiments on four widely used datasets demonstrate that the motion features extracted by MFA are crucial complements to appearance features for video-based re-id, especially for the scenario with large appearance changes. Besides, the results on LS-VID, the current largest publicly available video-based re-id dataset, surpass the state-of-the-art methods by a large margin. The code is available at: https://github.com/guxinqian/Simple-ReID.","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"31 1","pages":"3908-3919"},"PeriodicalIF":10.6,"publicationDate":"2022-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62591748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Data Augmentation Using a Bitplane Information Recombination Model
IF 10.6 · CAS Division 1 · Computer Science
IEEE Transactions on Image Processing Pub Date: 2022-05-20 DOI: 10.1109/TIP.2022.3175429
Huan Zhang, Zhiyi Xu, Xiaolin Han, Weidong Sun
{"title":"Data Augmentation Using Bitplane Information Recombination Model","authors":"Huan Zhang, Zhiyi Xu, Xiaolin Han, Weidong Sun","doi":"10.1109/TIP.2022.3175429","DOIUrl":"https://doi.org/10.1109/TIP.2022.3175429","url":null,"abstract":"The performance of deep learning heavily depend on the quantity and quality of training data. But in many fields, well-annotated data are so difficult to collect, which makes the data scale hard to meet the needs of network training. To deal with this issue, a novel data augmentation method using the bitplane information recombination model (termed as BIRD) is proposed in this paper. Considering each bitplane can provide different structural information at different levels of detail, this method divides the internal hierarchical structure of a given image into different bitplanes, and reorganizes them by bitplane extraction, bitplane selection and bitplane recombination, to form an augmented data with different image details. This method can generate up to 62 times of the training data, for a given 8-bits image. In addition, this generalized method is model free, parameter free and easy to combine with various neural networks, without changing the original annotated data. Taking the task of target detection for remotely sensed images and classification for natural images as an example, experimental results on DOTA dataset and CIFAR-100 dataset demonstrated that, our proposed method is not only effective for data augmentation, but also helpful to improve the accuracy of target detection and image classification.","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"12 1","pages":"3713-3725"},"PeriodicalIF":10.6,"publicationDate":"2022-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62591682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Real Image Denoising With a Locally-Adaptive Bitonic Filter
IF 10.6 · CAS Division 1 · Computer Science
IEEE Transactions on Image Processing Pub Date: 2021-09-08 DOI: 10.17863/CAM.75234
Graham M. Treece
{"title":"Real Image Denoising With a Locally-Adaptive Bitonic Filter","authors":"Graham M. Treece","doi":"10.17863/CAM.75234","DOIUrl":"https://doi.org/10.17863/CAM.75234","url":null,"abstract":"Image noise removal is a common problem with many proposed solutions. The current standard is set by learning-based approaches, however these are not appropriate in all scenarios, perhaps due to lack of training data or the need for predictability in novel circumstances. The bitonic filter is a non-learning-based filter for removing noise from signals, with a mathematical morphology (ranking) framework in which the signal is postulated to be locally bitonic (having only one minimum or maximum) over some domain of finite extent. A novel version of this filter is developed in this paper, with a domain that is locally-adaptive to the signal, and other adjustments to allow application to real image sensor noise. These lead to significant improvements in noise reduction performance at no cost to processing times. The new bitonic filter performs better than the block-matching 3D filter for high levels of additive white Gaussian noise. It also surpasses this and other more recent non-learning-based filters for two public data sets containing real image noise at various levels. This is despite an additional adjustment to the block-matching filter, which leads to significantly better performance than has previously been cited on these data sets. The new bitonic filter has a signal-to-noise ratio 2.4dB lower than the best learning-based techniques when they are optimally trained. However, the performance gap is closed completely when these techniques are trained on data sets not directly related to the benchmark data. This demonstrates what can be achieved with a predictable, explainable, entirely local technique, which makes no assumptions of repeating patterns either within an image or across images, and hence creates residual images which are well behaved even in very high noise. Since the filter does not require training, it can still be used in situations where training is either difficult or inappropriate.","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"31 1","pages":"3151-3165"},"PeriodicalIF":10.6,"publicationDate":"2021-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47479607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Fractional Super-Resolution of Voxelized Point Clouds
IF 10.6 · CAS Division 1 · Computer Science
IEEE Transactions on Image Processing Pub Date: 2021-01-15 DOI: 10.36227/techrxiv.15032052.v1
Tomás M. Borges, Diogo C. Garcia, R. Queiroz
{"title":"Fractional Super-Resolution of Voxelized Point Clouds","authors":"Tomás M. Borges, Diogo C. Garcia, R. Queiroz","doi":"10.36227/techrxiv.15032052.v1","DOIUrl":"https://doi.org/10.36227/techrxiv.15032052.v1","url":null,"abstract":"We present a method to super-resolve voxelized point clouds downsampled by a fractional factor, using lookup-tables (LUT) constructed from self-similarities from their own downsampled neighborhoods. The proposed method was developed to densify and to increase the precision of voxelized point clouds, and can be used, for example, as improve compression and rendering. We super-resolve the geometry, but we also interpolate texture by averaging colors from adjacent neighbors, for completeness. Our technique, as we understand, is the first specifically developed for intra-frame super-resolution of voxelized point clouds, for arbitrary resampling scale factors. We present extensive test results over different point clouds, showing the effectiveness of the proposed approach against baseline methods.","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":" ","pages":"1-1"},"PeriodicalIF":10.6,"publicationDate":"2021-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46834443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
Tone Mapping Beyond the Classical Receptive Field
IF 10.6 · CAS Division 1 · Computer Science
IEEE Transactions on Image Processing Pub Date: 2020-02-07 DOI: 10.1109/TIP.2020.2970541
Shaobing Gao, Min Tan, Zhen He, Yongjie Li
{"title":"Tone Mapping Beyond the Classical Receptive Field","authors":"Shaobing Gao, Min Tan, Zhen He, Yongjie Li","doi":"10.1109/TIP.2020.2970541","DOIUrl":"https://doi.org/10.1109/TIP.2020.2970541","url":null,"abstract":"Some neurons in the primary visual cortex (V1) of human visual system (HVS) conduct dynamic center-surround computation, which is thought to contribute to compress the high dynamic range (HDR) scene and preserve the details. We simulate this dynamic receptive field (RF) property of V1 neurons to solve the so-called tone mapping (TM) task in this paper. The novelties of our method are as follows. (1) Cortical processing mechanisms of HVS are modeled to build a local TM operation based on two Gaussian functions whose kernels and weights adapt according to the center-surround contrast, thus reducing halo artifacts and effectively enhancing the local details of bright and dark parts of image. (2) Our method uses an adaptive filter that follows the contrast levels of the image, which is computationally very efficient. (3) The local fusion between the center and surround responses returned by a cortical processing flow and the global signals returned by a sub-cortical processing flow according to the local contrast forms a dynamic mechanism that selectively enhances the details. Extensive experiments show that the proposed method can efficiently render the HDR scenes with good contrast, clear details, and high structural fidelity. In addition, the proposed method can also obtain promising performance when applied to enhance the low-light images. Furthermore, by modeling these biological solutions, our technique is simple and robust considering that our results were obtained using the same parameters for all the datasets (e.g., HDR images or low-light images), that is, mimicking how HVS operates.","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"29 1","pages":"4174-4187"},"PeriodicalIF":10.6,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TIP.2020.2970541","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48616807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Ring Difference Filter for Fast and Noise-Robust Depth From Focus
IF 10.6 · CAS Division 1 · Computer Science
IEEE Transactions on Image Processing Pub Date: 2020-01-01 DOI: 10.1109/TIP.2019.2937064
Hae-Gon Jeon, Jaeheung Surh, Sunghoon Im, I. Kweon
{"title":"Ring Difference Filter for Fast and Noise Robust Depth From Focus","authors":"Hae-Gon Jeon, Jaeheung Surh, Sunghoon Im, I. Kweon","doi":"10.1109/TIP.2019.2937064","DOIUrl":"https://doi.org/10.1109/TIP.2019.2937064","url":null,"abstract":"Depth from focus (DfF) is a method of estimating the depth of a scene by using information acquired through changes in the focus of a camera. Within the DfF framework of, the focus measure (FM) forms the foundation which determines the accuracy of the output. With the results from the FM, the role of a DfF pipeline is to determine and recalculate unreliable measurements while enhancing those that are reliable. In this paper, we propose a new FM, which we call the “ring difference filter” (RDF), that can more accurately and robustly measure focus. FMs can usually be categorized as confident local methods or noise robust non-local methods. The RDF’s unique ring-and-disk structure allows it to have the advantages of both local and non-local FMs. We then describe an efficient pipeline that utilizes the RDF’s properties. Part of this pipeline is our proposed RDF-based cost aggregation method, which is able to robustly refine the initial results in the presence of image noise. Our method is able to reproduce results that are on par with or even better than those of state-of-the-art methods, while spending less time in computation.","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"29 1","pages":"1045-1060"},"PeriodicalIF":10.6,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TIP.2019.2937064","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62585977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20
Semi-Linearized Proximal Alternating Minimization for a Discrete Mumford–Shah Model
IF 10.6 · CAS Division 1 · Computer Science
IEEE Transactions on Image Processing Pub Date: 2020-01-01 DOI: 10.1109/TIP.2019.2944561
Marion Foare, N. Pustelnik, Laurent Condat
{"title":"Semi-Linearized Proximal Alternating Minimization for a Discrete Mumford–Shah Model","authors":"Marion Foare, N. Pustelnik, Laurent Condat","doi":"10.1109/TIP.2019.2944561","DOIUrl":"https://doi.org/10.1109/TIP.2019.2944561","url":null,"abstract":"The Mumford–Shah model is a standard model in image segmentation, and due to its difficulty, many approximations have been proposed. The major interest of this functional is to enable joint image restoration and contour detection. In this work, we propose a general formulation of the discrete counterpart of the Mumford–Shah functional, adapted to nonsmooth penalizations, fitting the assumptions required by the Proximal Alternating Linearized Minimization (PALM), with convergence guarantees. A second contribution aims to relax some assumptions on the involved functionals and derive a novel Semi-Linearized Proximal Alternated Minimization (SL-PAM) algorithm, with proved convergence. We compare the performances of the algorithm with several nonsmooth penalizations, for Gaussian and Poisson denoising, image restoration and RGB-color denoising. We compare the results with state-of-the-art convex relaxations of the Mumford–Shah functional, and a discrete version of the Ambrosio–Tortorelli functional. We show that the SL-PAM algorithm is faster than the original PALM algorithm, and leads to competitive denoising, restoration and segmentation results.","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"29 1","pages":"2176-2189"},"PeriodicalIF":10.6,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TIP.2019.2944561","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62590102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 24