IEEE Transactions on Image Processing: Latest Articles

Collective Affinity Learning for Partial Cross-Modal Hashing
IF 10.6 · CAS Q1 (Computer Science)
IEEE Transactions on Image Processing · Pub Date: 2019-09-23 · DOI: 10.1109/TIP.2019.2941858
Jun Guo, Wenwu Zhu
In the past decade, various unsupervised hashing methods have been developed for cross-modal retrieval. In real-world applications, however, data are often incomplete: each modality may be missing some samples. Most existing works assume that every object appears in both modalities, so they may not work well on partial multi-modal data. To address this problem, we propose a novel Collective Affinity Learning Method (CALM), which collectively and adaptively learns an anchor graph for generating binary codes on partial multi-modal data. In CALM, we first construct modality-specific bipartite graphs collectively and derive a probabilistic model to infer complete data-to-anchor affinities for each modality. Theoretical analysis reveals its ability to recover missing adjacency information. Moreover, a robust model fuses these modality-specific affinities by adaptively learning a unified anchor graph, and the neighborhood information from the learned anchor graph acts as feedback that guides the preceding affinity-reconstruction procedure. To solve the formulated optimization problem, we further develop an effective algorithm with linear time complexity and fast convergence. Finally, Anchor Graph Hashing (AGH) is conducted on the fused affinities for cross-modal retrieval. Experimental results on benchmark datasets show that the proposed CALM consistently outperforms existing methods.

Citations: 0
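The data-to-anchor affinity construction that AGH-style methods build on can be sketched as follows. This is a generic Gaussian-kernel version with illustrative parameters, not the probabilistic model CALM actually derives:

```python
import numpy as np

def anchor_affinities(X, anchors, sigma=1.0, k=2):
    """Data-to-anchor affinity matrix Z (n x m): each sample keeps
    Gaussian-kernel weights to its k nearest anchors, row-normalized."""
    # squared Euclidean distances between samples and anchors
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
    Z = np.exp(-d2 / (2 * sigma ** 2))
    # zero out all but the k closest anchors per sample (sparse affinity)
    far_idx = np.argsort(d2, axis=1)[:, k:]
    np.put_along_axis(Z, far_idx, 0.0, axis=1)
    return Z / Z.sum(axis=1, keepdims=True)  # rows sum to 1

X = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
A = np.array([[0.0, 0.0], [5.0, 5.0]])   # two anchor points
Z = anchor_affinities(X, A, sigma=2.0, k=1)
```

With k=1 each sample attaches entirely to its nearest anchor, so the first two rows of `Z` concentrate on the first anchor and the last row on the second.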
Low-rank quaternion approximation for color image processing
IF 10.6 · CAS Q1 (Computer Science)
IEEE Transactions on Image Processing · Pub Date: 2019-09-19 · DOI: 10.1109/TIP.2019.2941319
Yongyong Chen, Xiaolin Xiao, Yicong Zhou
Low-rank matrix approximation (LRMA)-based methods have achieved great success in grayscale image processing. When handling color images, LRMA either restores each color channel independently (the monochromatic model) or processes the concatenation of the three color channels (the concatenation model). Neither scheme fully exploits the high correlation among the RGB channels. To address this issue, we propose a novel low-rank quaternion approximation (LRQA) model with two major components: first, instead of modeling a color-image pixel as a scalar, as in conventional sparse representation and LRMA-based methods, the color image is encoded as a pure quaternion matrix, so that the cross-channel correlation can be well exploited; second, LRQA imposes a low-rank constraint on the constructed quaternion matrix. To better estimate the singular values of the underlying low-rank quaternion matrix from its noisy observation, a general model for LRQA is proposed based on several nonconvex functions. Extensive evaluations on color image denoising and inpainting tasks verify that LRQA outperforms several state-of-the-art sparse representation and LRMA-based methods in both quantitative metrics and visual quality.

Citations: 0
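The pure-quaternion encoding step is easy to illustrate. The channel-wise truncated-SVD routine below is the "monochromatic model" baseline the paper argues against, shown for contrast; quaternion SVD itself is beyond this sketch:

```python
import numpy as np

def as_pure_quaternion(img_rgb):
    """Encode an RGB image (h, w, 3) as a pure quaternion matrix:
    each pixel q = r*i + g*j + b*k, stored as (h, w, 4) with real part 0."""
    h, w, _ = img_rgb.shape
    q = np.zeros((h, w, 4))
    q[..., 1:] = img_rgb  # imaginary parts i, j, k hold R, G, B
    return q

def truncated_svd_baseline(img_rgb, rank):
    """Channel-wise rank-r approximation (the monochromatic baseline,
    not LRQA itself): truncate the SVD of each channel separately."""
    out = np.empty_like(img_rgb, dtype=float)
    for c in range(3):
        U, s, Vt = np.linalg.svd(img_rgb[..., c], full_matrices=False)
        out[..., c] = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return out

rng = np.random.default_rng(0)
low = rng.standard_normal((32, 2)) @ rng.standard_normal((2, 32))
img = np.stack([low, low, low], axis=-1)  # rank-2 in every channel
q = as_pure_quaternion(img)
approx = truncated_svd_baseline(img, rank=2)
```

Since the synthetic image is exactly rank 2 per channel, the rank-2 truncation recovers it exactly; real images are only approximately low-rank.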
Noise-Robust Iterative Back-Projection
IF 10.6 · CAS Q1 (Computer Science)
IEEE Transactions on Image Processing · Pub Date: 2019-09-16 · DOI: 10.1109/TIP.2019.2940414
Jun-Sang Yoo, Jong-Ok Kim
Super-resolution (SR) of noisy images is significantly challenging because denoising smooths away detail. Iterative back-projection (IBP) can further enhance the reconstructed SR image, but no clean reference image is available. This paper proposes a novel back-projection algorithm for noisy-image SR whose main goal is to pursue consistency between the LR and SR images. We aim to estimate the clean reconstruction error to be back-projected, using the noisy and denoised reconstruction errors. We formulate a new cost function in the principal component analysis (PCA) transform domain to estimate the clean reconstruction error. In its data term, the noisy and denoised reconstruction errors are combined in a region-adaptive manner using texture probability; in addition, a sparsity constraint based on the Laplacian characteristics of the reconstruction error is incorporated into the regularization term. Finally, we propose an eigenvector estimation method to minimize the effect of noise. Experimental results demonstrate that the proposed method performs back-projection in a more noise-robust manner than conventional IBP and works harmoniously with any other SR method as a post-processing step.

Citations: 0
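Classical IBP, the baseline this paper hardens against noise, iterates as sketched below. The box-filter downsampling and nearest-neighbor upsampling are stand-in operators; the actual observation model and back-projection kernel vary by method:

```python
import numpy as np

def downscale(x, s):
    """Box-filter downsampling by integer factor s (stand-in for the
    LR observation model)."""
    h, w = x.shape
    return x[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean((1, 3))

def upscale(x, s):
    """Nearest-neighbor upsampling (stand-in back-projection kernel)."""
    return np.repeat(np.repeat(x, s, axis=0), s, axis=1)

def ibp(lr, sr0, s, n_iter=50, step=1.0):
    """Conventional iterative back-projection: repeatedly add the
    upsampled reconstruction error (lr - downscale(sr)) back into the
    SR estimate until the SR image is consistent with the LR input."""
    sr = sr0.copy()
    for _ in range(n_iter):
        err = lr - downscale(sr, s)   # reconstruction error in LR domain
        sr += step * upscale(err, s)  # back-project into SR domain
    return sr

lr = np.array([[1.0, 2.0], [3.0, 4.0]])
sr = ibp(lr, np.zeros((4, 4)), s=2)
```

With these particular operators the loop reaches LR-consistency (downscaling the SR output reproduces the LR input) after the first iteration.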
Learning Nonclassical Receptive Field Modulation for Contour Detection
IF 10.6 · CAS Q1 (Computer Science)
IEEE Transactions on Image Processing · Pub Date: 2019-09-16 · DOI: 10.1109/TIP.2019.2940690
Qiling Tang, Nong Sang, Haihua Liu
This work develops a biologically inspired neural network for contour detection in natural images by combining the nonclassical receptive field modulation mechanism with a deep learning framework. The input image is first convolved with local feature detectors to produce the classical receptive field responses; a corresponding modulatory kernel is then constructed for each feature map to model nonclassical receptive field modulation behaviors. The modulatory effects activate a larger cortical area and thus allow cortical neurons to integrate a broader range of visual information to recognize complex cases. Additionally, to characterize spatial structures at various scales, a multiresolution technique represents visual-field information from fine to coarse, and the responses at different scales are combined to estimate the contour probability. Our method achieves state-of-the-art results among biologically inspired contour detection models. This study provides a way to improve visual modeling of contour detection and inspires new ideas for integrating more brain cognitive mechanisms into deep neural networks.

Citations: 0
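A common textbook simplification of nonclassical-RF modulation is subtractive surround suppression: each classical response is inhibited by the average response in a larger neighborhood, so isolated contours survive while uniform texture is suppressed. The paper learns its modulatory kernels rather than fixing them like this 1-D sketch does:

```python
import numpy as np

def surround_modulate(response, k=5, w=0.5):
    """Subtractive surround modulation of a 1-D response profile:
    suppress each classical-RF response by w times the mean response
    in a k-wide surround, then half-wave rectify."""
    surround = np.convolve(response, np.ones(k) / k, mode='same')
    return np.clip(response - w * surround, 0.0, None)

# An isolated contour response (index 2) vs. a uniform texture (indices 5-9).
resp = np.array([0, 0, 1, 0, 0, 1, 1, 1, 1, 1], dtype=float)
mod = surround_modulate(resp)
```

After modulation the isolated response is retained far more strongly than the texture responses, the qualitative effect surround suppression is meant to capture.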
Efficient Single-Stage Pedestrian Detector by Asymptotic Localization Fitting and Multi-Scale Context Encoding
IF 10.6 · CAS Q1 (Computer Science)
IEEE Transactions on Image Processing · Pub Date: 2019-09-16 · DOI: 10.1109/TIP.2019.2938877
Wei Liu, Shengcai Liao, Weidong Hu
Although Faster R-CNN based two-stage detectors have brought significant gains in pedestrian detection accuracy, they are still too slow for practical applications. One solution is to simplify the workflow into a single-stage detector, but current single-stage detectors (e.g., SSD) have not delivered competitive accuracy on common pedestrian detection benchmarks. Accordingly, a structurally simple but effective module called Asymptotic Localization Fitting (ALF) is proposed, which stacks a series of predictors that evolve SSD's default anchor boxes step by step to improve detection results. Additionally, combining the advantages of residual learning and multi-scale context encoding, a bottleneck block is proposed to enhance the predictors' discriminative power. On top of these designs, an efficient single-stage detection architecture is built, yielding a pedestrian detector attractive in both accuracy and speed. Comprehensive experiments on two of the largest pedestrian detection datasets (CityPersons and Caltech) demonstrate the superiority of the proposed method over the state of the art on both benchmarks.

Citations: 0
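The "evolve anchor boxes step by step" idea amounts to applying a stack of SSD-style box regressions, each refining the output of the previous one. The offsets below are toy values; in ALF each step's offsets come from a learned predictor:

```python
import numpy as np

def alf_stack(anchor, steps):
    """Asymptotic localization sketch: apply a sequence of SSD-style
    box deltas (dx, dy, dw, dh) to an anchor box (cx, cy, w, h),
    each step refining the previous step's output."""
    box = np.asarray(anchor, dtype=float)
    for dx, dy, dw, dh in steps:
        box[:2] += box[2:] * [dx, dy]  # shift center, scaled by box size
        box[2:] *= np.exp([dw, dh])    # rescale width and height
    return box

# One refinement step that shifts the center half a box-width right.
refined = alf_stack([0.0, 0.0, 2.0, 2.0], [(0.5, 0.0, 0.0, 0.0)])
```

This encoding (center offsets scaled by box size, log-space size updates) is the standard SSD/Faster R-CNN box parameterization; stacking several such steps is what lets coarse default anchors converge onto tight pedestrian boxes.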
Scalable Deep Hashing for Large-scale Social Image Retrieval
IF 10.6 · CAS Q1 (Computer Science)
IEEE Transactions on Image Processing · Pub Date: 2019-09-16 · DOI: 10.1109/TIP.2019.2940693
Hui Cui, Lei Zhu, Jingjing Li, Yang Yang, Liqiang Nie
Recent years have witnessed the wide application of hashing to large-scale image retrieval, owing to its high computational efficiency and low storage cost. Benefiting from advances in deep learning, supervised deep hashing methods have greatly boosted retrieval performance under the strong supervision of large amounts of manually annotated semantic labels. However, their performance depends heavily on those labels, which significantly limits scalability. In contrast, unsupervised deep hashing is free of label dependence and therefore scales well; nevertheless, due to relaxed hash optimization and, more importantly, the lack of semantic guidance, existing methods suffer from limited retrieval performance. In this paper, we propose SCAlable Deep Hashing (SCADH) to learn enhanced hash codes for social image retrieval. We formulate a unified scalable deep hash learning framework that exploits the weak but free supervision of the discriminative user tags that commonly accompany social images. It jointly learns image representations and hash functions with deep neural networks, while enhancing the discriminative capability of the image hash codes with refined semantics from the accompanying social tags. Further, instead of simple relaxed hash optimization, we propose a discrete hash optimization method based on the Augmented Lagrangian Multiplier that solves for the hash codes directly and avoids binary quantization information loss. Experiments on two standard social image datasets demonstrate the superiority of the proposed approach over state-of-the-art shallow and deep hashing techniques.

Citations: 0
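For context, the "relaxed" optimization SCADH avoids is typically a real-valued embedding thresholded by sign() after training, with retrieval done by Hamming distance; the quantization step is exactly where information is lost:

```python
import numpy as np

def binarize(embeddings):
    """Naive post-hoc binarization: map real-valued embeddings to +/-1
    hash codes via sign(). (SCADH instead solves for discrete codes
    directly with an Augmented Lagrangian, avoiding this lossy step.)"""
    codes = np.sign(embeddings)
    codes[codes == 0] = 1  # break exact-zero ties deterministically
    return codes

def hamming_distance(a, b):
    """Hamming distance between two +/-1 code vectors."""
    return int(np.sum(a != b))

codes = binarize(np.array([0.3, -1.2, 0.0]))
d = hamming_distance(codes, np.array([1.0, 1.0, 1.0]))
```

Any two embeddings on the same side of zero collapse to the same bit regardless of magnitude, which is the quantization loss the discrete formulation is designed to sidestep.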
Domain-Transformable Sparse Representation for Anomaly Detection in Moving-Camera Videos
IF 10.6 · CAS Q1 (Computer Science)
IEEE Transactions on Image Processing · Pub Date: 2019-09-16 · DOI: 10.1109/TIP.2019.2940686
Eric Jardim, Lucas A Thomaz, Eduardo A B da Silva, Sergio L Netto
This paper presents a special matrix factorization based on sparse representation that detects anomalies in video sequences captured by moving cameras. The representation associates the frames of the target video (the sequence to be tested for anomalies) with the frames of an anomaly-free reference video (a previously validated sequence). The factorization is expressed through a sparse coefficient matrix, and any target-video anomaly is encapsulated in a residue term. To cope with camera trepidation, domain transformations are incorporated into the sparse representation process, and approximations of the transformed-domain optimization problem are introduced to turn it into a feasible iterative process. Results on a comprehensive video database acquired with moving cameras in a visually cluttered environment indicate that the proposed algorithm provides better geometric registration between the reference and target videos, greatly improving the overall performance of the anomaly-detection system.

Citations: 0
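The core residue idea can be shown in miniature: represent each target frame as a combination of reference frames and flag frames whose representation residue is large. Plain least squares stands in for the paper's sparse coding, and the domain transformations are omitted:

```python
import numpy as np

def anomaly_residue(reference, target):
    """Represent each target frame (a column of `target`) as a linear
    combination of reference frames (columns of `reference`) and return
    the residue norm per frame; a large residue suggests an anomaly.
    (Least squares here; the paper uses sparse coefficients plus
    domain transformations for registration.)"""
    coeffs, *_ = np.linalg.lstsq(reference, target, rcond=None)
    residue = target - reference @ coeffs
    return np.linalg.norm(residue, axis=0)

# Two tiny "frames" as columns: the first lies in the span of the
# reference frames (normal), the second does not (anomalous).
R = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
T = np.array([[2.0, 0.0], [3.0, 0.0], [0.0, 1.0]])
r = anomaly_residue(R, T)
```

Thresholding `r` then separates frames explainable by the reference video from those carrying unexplained (anomalous) content.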
Hazy Image Decolorization with Color Contrast Restoration
IF 10.6 · CAS Q1 (Computer Science)
IEEE Transactions on Image Processing · Pub Date: 2019-09-12 · DOI: 10.1109/TIP.2019.2939946
Wei Wang, Zhengguo Li, Shiqian Wu, Liangcai Zeng
Converting a hazy color image into a gray-scale image is challenging because the color contrast field of a hazy image is distorted. In this paper, a novel decolorization algorithm is proposed to transform a hazy image into a distortion-recovered gray-scale image. To recover the color contrast field, the relationship between the restored color contrast and its distorted input is formulated in the CIELab color space. Based on this restoration, a nonlinear optimization problem is posed to construct the resulting gray-scale image, and a new differentiable approximate solution based on an extension of the Huber loss function is introduced to solve it. Experimental results show that the proposed algorithm effectively preserves global luminance consistency while representing the original color contrast in gray scales very close to the corresponding ground-truth gray-scale image.

Citations: 0
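For reference, the base Huber loss that the paper extends is quadratic near zero and linear in the tails, which keeps the objective differentiable while limiting the influence of large contrast errors:

```python
import numpy as np

def huber(x, delta=1.0):
    """Standard Huber loss: 0.5*x^2 for |x| <= delta, and
    delta*(|x| - 0.5*delta) beyond, so the two pieces join smoothly.
    (The paper uses an extension of this function; this is the base form.)"""
    a = np.abs(x)
    return np.where(a <= delta, 0.5 * x ** 2, delta * (a - 0.5 * delta))
```

Inside the threshold the loss matches least squares; outside it grows only linearly, which is what makes Huber-type objectives robust to outlying contrast terms.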
Quality Measurement of Images on Mobile Streaming Interfaces Deployed at Scale
IF 10.6 · CAS Q1 (Computer Science)
IEEE Transactions on Image Processing · Pub Date: 2019-09-11 · DOI: 10.1109/TIP.2019.2939733
Zeina Sinno, Anush Moorthy, Jan De Cock, Zhi Li, Alan C Bovik
With the growing use of smart cellular devices for entertainment, audio and video streaming services now offer a wide variety of popular mobile applications that provide portable, accessible ways to consume content. The user interfaces of these applications have become increasingly visual and are commonly loaded with dense multimedia content such as thumbnail images, animated GIFs, and short videos. To render these efficiently and to speed their download to the client display, they must be compressed, scaled, and color subsampled. These operations introduce distortions that reduce the appeal of the application, so it is desirable to automatically monitor and govern the visual quality of these small images. However, while many high-performing image quality assessment (IQA) algorithms exist, none has been designed for this particular use case, whose content often has unique characteristics such as overlaid graphics, intentional brightness, gradients, text, and warping. We describe a study of the subjective and objective quality of images embedded in the displayed user interfaces of mobile streaming applications. We created a database of typical "billboard" and "thumbnail" images viewed on such services, used the collected data in a subjective study of the effects of compression, scaling, and chroma subsampling on perceived quality, and evaluated the performance of leading picture quality prediction models on the new database. We report some surprising results regarding algorithm performance and find that ample scope remains for future model development.

Citations: 0
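The simplest full-reference quality metric against which such models are usually compared is PSNR; the study evaluates far more advanced IQA models, but PSNR illustrates the basic reference-vs-distorted comparison:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    distorted version; higher is better, infinite for identical images."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

a = np.zeros((4, 4))
b = np.full((4, 4), 255.0)
```

PSNR correlates poorly with perception on UI-style content with graphics and text overlays, which is one reason purpose-built IQA models are worth benchmarking on such a database.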
Color Control Functions for Multiprimary Displays I: Robustness Analysis and Optimization Formulations
IF 10.6 · CAS Q1 (Computer Science)
IEEE Transactions on Image Processing · Pub Date: 2019-09-06 · DOI: 10.1109/TIP.2019.2937067
Carlos Eduardo Rodriguez-Pardo, Gaurav Sharma
Color management for a multiprimary display requires, as a fundamental step, determining a color control function (CCF) that specifies control values for reproducing each color in the display's gamut. Multiprimary displays offer alternative choices of control values for reproducing a color in the interior of the gamut, and accordingly alternative choices of CCFs. Under ideal conditions, alternative CCFs render colors identically; however, deviations in the spectral distributions of the primaries and the diversity of cone sensitivities among observers affect alternative CCFs differently and, in particular, make some CCFs prone to artifacts in rendered images. We develop a framework for analyzing the robustness of CCFs for multiprimary displays against primary and observer variations, incorporating a common model of human color perception. Using this framework, we propose analytical and numerical approaches for determining robust CCFs. First, via analytical development, we: (a) demonstrate that linearity of the CCF in tristimulus space endows it with resilience to variations; in particular, linearity can ensure invariance of the gray axis; (b) construct an axially linear CCF defined by the property of linearity over constant-chromaticity loci; and (c) obtain an analytical form for the axially linear CCF showing that it is continuous but does not have continuous derivatives. Second, to overcome this limitation, we motivate and develop two variational objective functions for optimizing multiprimary CCFs: the first aims to preserve color transitions in the presence of primary/observer variations, and the second combines this objective with desirable invariance along the gray axis by incorporating the axially linear CCF. A companion Part II paper presents an algorithmic approach for numerically computing optimal CCFs under the two variational objectives proposed here, with results comparing alternative CCFs for several different 4-, 5-, and 6-primary designs.

Citations: 0
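The extra degrees of freedom behind "alternative CCFs" are easy to see numerically: with more than three primaries, P a = t is underdetermined, and every rule for picking `a` is a different CCF. The minimum-norm pseudo-inverse solution below is one linear CCF; the primary matrix is made up for illustration, and real control values would additionally need clipping to the device range:

```python
import numpy as np

# Tristimulus matrix of a hypothetical 4-primary display: column k holds
# the XYZ tristimulus values of primary k (illustrative numbers only).
P = np.array([[0.6, 0.2, 0.1, 0.3],
              [0.3, 0.7, 0.1, 0.4],
              [0.0, 0.1, 0.8, 0.2]])

def linear_ccf(t):
    """One linear CCF: the minimum-norm control vector a with P @ a = t.
    Linearity in tristimulus space is the robustness property the paper
    analyzes; the pseudo-inverse is just one way to obtain a linear CCF."""
    return np.linalg.pinv(P) @ t

t = np.array([0.4, 0.5, 0.3])  # target tristimulus value inside the gamut
a = linear_ccf(t)
```

Because P has full row rank, `a` reproduces `t` exactly, yet infinitely many other control vectors (differing by elements of P's null space) reproduce the same color, which is exactly the design freedom the paper's robustness analysis exploits.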