2022 IEEE International Conference on Visual Communications and Image Processing (VCIP) — Latest Publications

A Large-scale Sports Tracking Dataset and Progressive Re-detection Based Sports Tracking
2022 IEEE International Conference on Visual Communications and Image Processing (VCIP). Pub Date: 2022-12-13. DOI: 10.1109/VCIP56404.2022.10008906
Han Wang, Xiaojun Zhou, Qinyu Xu, Huaqiang Ren, Rong Xie, Li Song
Abstract: Recent years have witnessed great progress in Visual Object Tracking (VOT), which aims to predict the position of an object in each video frame given only its initial appearance. However, even state-of-the-art methods suffer performance degradation, i.e., the tracker drift problem, in sports video scenes (e.g., soccer, basketball). Two main causes are responsible for tracker drift. First, the object of interest is often occluded by other objects that share a similar appearance; such severe occlusion prevents the model from distinguishing the correct object from distractors in subsequent frames. Second, objects in sports videos often move quickly from one place to another, which causes severe motion blur across consecutive frames. To address tracker drift, we treat VOT as a tracking-by-re-detection task. Specifically, we detect candidate objects within a search area (determined by the object location in the previous frame) in the current frame and develop a progressive algorithm to filter out distractors in that area, which proves robust to occlusion and tracker drift. Combining these advantages, the proposed framework is robust to motion blur and object occlusion and achieves state-of-the-art tracking results on our challenging dataset.
Citations: 0
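The abstract describes re-detection within a search window plus progressive distractor filtering. Below is a minimal Python sketch of that loop; the search-window scale, the staged similarity thresholds, and the `sim_fn` hook are hypothetical placeholders, not the authors' algorithm.

```python
import numpy as np

def search_area(prev_box, scale=2.5):
    """Expand the previous bounding box (x, y, w, h) into a search window."""
    x, y, w, h = prev_box
    cx, cy = x + w / 2, y + h / 2
    sw, sh = w * scale, h * scale
    return (cx - sw / 2, cy - sh / 2, sw, sh)

def cosine_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def progressive_filter(candidates, target_feat, sim_fn=cosine_sim,
                       stages=(0.3, 0.5, 0.7)):
    """Progressively discard distractors with increasingly strict
    appearance-similarity thresholds (hypothetical stand-in for the
    paper's progressive re-detection algorithm)."""
    kept = candidates
    for thr in stages:
        kept = [c for c in kept if sim_fn(c["feat"], target_feat) >= thr]
        if len(kept) <= 1:  # a single confident survivor -> stop early
            break
    if not kept:
        return None
    return max(kept, key=lambda c: sim_fn(c["feat"], target_feat))
```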
Space and Level Cooperation Framework for Pathological Cancer Grading
2022 IEEE International Conference on Visual Communications and Image Processing (VCIP). Pub Date: 2022-12-13. DOI: 10.1109/VCIP56404.2022.10008824
Xiaotian Yu, Zunlei Feng, Xiuming Zhang, Yuexuan Wang, Thomas Li
Abstract: Clinically, pathological images are intuitive for cancer diagnosis and are considered the "gold standard". Applying deep learning to pathological image analysis faces two challenges: ultra-large size and noisy annotations. A pathological image usually contains billions of pixels, which is unsuitable for standard classification models. Furthermore, the ultra-large size and mixed cancerous cells compel doctors to draw rough boundaries of the lesion area according to the cancerous level, which introduces two kinds of noisy labels: space noise (inaccurate annotation of the extent of the cancerous area) and level noise (inaccurate annotation of the cancerous level). Based on these findings, we propose the space and level cooperation framework, comprising a space-aware branch and a level-aware branch, for pathological cancer grading with noisy annotations. The space-aware branch first turns the ultra-large image into a Multilayer Superpixel (MS) graph, significantly reducing the size while preserving global features; a global-to-local rectifying strategy is then adopted to handle space noise. The level-aware branch adopts different grouped kernels and a novel grading loss function to handle level noise. Meanwhile, the two branches cooperate by complementing each other's missing features. Extensive experiments demonstrate that, with noisy annotations, the proposed framework achieves SOTA performance on our HCC dataset and two public datasets.
Citations: 0
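The MS-graph idea can be illustrated with off-the-shelf SLIC segmentation. In the sketch below, the per-layer segment counts and the centroid-based parent-child edge rule are illustrative assumptions, not the paper's construction.

```python
import numpy as np
from skimage.segmentation import slic

def multilayer_superpixel_graph(image, layers=(64, 256, 1024)):
    """Segment the image at several granularities and link each fine
    superpixel to the coarse superpixel containing its centroid.
    Returns per-layer label maps plus cross-layer (parent, child) edges."""
    label_maps = [slic(image, n_segments=n, start_label=0) for n in layers]
    edges = []
    for coarse, fine in zip(label_maps[:-1], label_maps[1:]):
        for sp in np.unique(fine):
            ys, xs = np.nonzero(fine == sp)
            cy, cx = int(ys.mean()), int(xs.mean())
            edges.append((coarse[cy, cx], sp))  # parent in coarser layer
    return label_maps, edges
```

In practice the graph nodes would carry pooled tile features rather than raw pixels, which is what makes a billion-pixel slide tractable for a graph model.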
SAD360: Spherical Viewport-Aware Dynamic Tiling for 360-Degree Video Streaming
2022 IEEE International Conference on Visual Communications and Image Processing (VCIP). Pub Date: 2022-12-13. DOI: 10.1109/VCIP56404.2022.10008862
Zhijun Li, Yumei Wang, Yu Liu
Abstract: As a medium that provides a strongly immersive experience, 360° video suffers greatly from pixel inefficiency, since the content is never fully viewed by users, leading to high bandwidth requirements for streaming. Tile-based streaming systems have become popular for lowering bandwidth usage. However, most of these systems inevitably treat non-viewport areas as viewport because a fixed tiling configuration fails to adapt to the viewport effectively. A finer-grained tiling configuration adapts better to the viewport but introduces significant encoding overhead. Recently proposed dynamic tiling systems address this issue by tiling chunks dynamically based on features of the projected 360° video; however, because projection inherently distorts the image, the results can be misleading. To overcome the viewport adaptation problem, we propose Spherical Viewport-Aware Dynamic Tiling for 360° Video Streaming (SAD360). Given that the popularity of different areas is reflected by viewers' collective behaviour, a dynamic tiling algorithm is proposed that finds the optimal tiling configuration for each chunk by analysing the available head-movement data directly on the sphere. The algorithm generates tiles as large as possible to reduce encoding overhead while still adapting to the viewport effectively. We also use Reinforcement Learning (RL) to solve the bitrate allocation problem for tiles of varying size. Experiments demonstrate that our system achieves a 14% average QoE gain over a fixed tiling configuration.
Citations: 0
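As a toy illustration of analysing head-movement data, the sketch below accumulates per-tile viewing popularity on an equirectangular grid from yaw/pitch traces. The grid size and angle conventions are assumptions; the actual SAD360 algorithm works on the sphere and additionally merges tiles into larger ones.

```python
import numpy as np

def tile_popularity(yaw_pitch, rows=6, cols=12):
    """Count how often each tile of an equirectangular grid sits at the
    viewport centre, given head-orientation samples in radians
    (yaw in [0, 2*pi), pitch in [-pi/2, pi/2]). Hypothetical pre-step
    to a popularity-driven tiling decision."""
    counts = np.zeros((rows, cols))
    for yaw, pitch in yaw_pitch:
        c = int((yaw % (2 * np.pi)) / (2 * np.pi) * cols) % cols
        r = int((pitch + np.pi / 2) / np.pi * rows)
        counts[np.clip(r, 0, rows - 1), c] += 1
    return counts / max(counts.sum(), 1)
```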
Single Image Super-Resolution Using ConvNeXt
2022 IEEE International Conference on Visual Communications and Image Processing (VCIP). Pub Date: 2022-12-13. DOI: 10.1109/VCIP56404.2022.10008798
Chenghui You, Chao-qun Hong, Lijuan Liu, Xuehan Lin
Abstract: In recent years, many deep convolutional neural networks have been successfully applied to single image super-resolution (SISR). Even with small convolution kernels, these methods still require a large number of parameters and much computation. To tackle this problem, we propose a novel framework that extracts features more efficiently. Inspired by depthwise separable convolution, we improve the standard residual block and propose the inverted bottleneck block (IBNB), which replaces small convolution kernels with large ones without introducing additional computation. The proposed IBNB demonstrates that large-kernel convolution is viable for SISR. Comprehensive experiments show that our method surpasses most methods by 0.10–0.32 dB in quantitative metrics with fewer parameters.
Citations: 0
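The IBNB is described as a residual block built around a large-kernel convolution, for which a ConvNeXt-style inverted bottleneck is the natural reading. The PyTorch sketch below assumes a 7×7 depthwise kernel and 4× channel expansion; the paper's exact layout may differ.

```python
import torch
import torch.nn as nn

class InvertedBottleneckBlock(nn.Module):
    """ConvNeXt-style inverted bottleneck: a large-kernel depthwise conv
    followed by a pointwise expand/project pair, with a residual skip.
    Kernel size 7 and the 4x expansion are assumptions, not the paper's
    confirmed IBNB configuration."""
    def __init__(self, dim, kernel_size=7, expansion=4):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size,
                                padding=kernel_size // 2, groups=dim)
        self.norm = nn.GroupNorm(1, dim)  # channel-wise LayerNorm stand-in
        self.pw1 = nn.Conv2d(dim, dim * expansion, 1)
        self.act = nn.GELU()
        self.pw2 = nn.Conv2d(dim * expansion, dim, 1)

    def forward(self, x):
        return x + self.pw2(self.act(self.pw1(self.norm(self.dwconv(x)))))

# Smoke test: shape is preserved, so blocks can be stacked in an SR trunk.
y = InvertedBottleneckBlock(64)(torch.randn(1, 64, 32, 32))
assert y.shape == (1, 64, 32, 32)
```

Because the 7×7 convolution is depthwise, its cost grows with the channel count rather than its square, which is how a large kernel can avoid extra computation relative to a dense 3×3 block.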
Face Super Resolution based on Contrastive Learning
2022 IEEE International Conference on Visual Communications and Image Processing (VCIP). Pub Date: 2022-12-13. DOI: 10.1109/VCIP56404.2022.10008836
Wenlin Zhang, Sumei Li, Liqin Huang
Abstract: Face super resolution (FSR) is a sub-field of super resolution (SR) that reconstructs low-resolution (LR) face images into high-resolution (HR) face images. Recently, FSR methods based on face priors have proved effective at higher upscaling factors. However, existing prior-guided methods mostly adopt supervised prior-extraction models trained with labels. The performance of supervised prior extraction depends mainly on label accuracy, so the implicit information in the data is not fully utilized; moreover, in practical applications, label acquisition is routine and laborious. To solve these problems, this paper proposes a novel contrastive learning (CL) based FSR method built on the iterative collaboration of an image reconstruction network and a contrastive learning network. In each iteration, the reconstruction network uses the priors generated by the contrastive learning network to assist image reconstruction and produces higher-quality SR images; the SR image is then fed into the contrastive learning network to obtain a more accurate prior. In addition, a new contrastive learning constraint function is designed to extract the representation of the augmented facial image as a prior by analysing the principal component information of the image. Quantitative and qualitative experimental results show that the proposed method is superior to the most advanced FSR methods in high-quality face super-resolution reconstruction.
Citations: 0
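The paper's contrastive constraint is not specified beyond operating on augmented facial images; a generic InfoNCE loss, shown below as a stand-in, captures the basic mechanism of pulling an anchor embedding toward its augmented view and away from other samples in the batch.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, temperature=0.07):
    """Standard InfoNCE over a batch: each anchor embedding should match
    its own positive (an augmented view of the same face) against all
    other positives. A generic stand-in for the paper's constraint,
    which additionally uses principal-component information."""
    a = F.normalize(anchor, dim=1)
    p = F.normalize(positive, dim=1)
    logits = a @ p.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)   # diagonal entries are positives
```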
PCGFormer: Lossy Point Cloud Geometry Compression via Local Self-Attention
2022 IEEE International Conference on Visual Communications and Image Processing (VCIP). Pub Date: 2022-12-13. DOI: 10.1109/VCIP56404.2022.10008892
Gexin Liu, Jianqiang Wang, Dandan Ding, Zhan Ma
Abstract: Although the multiscale sparse tensor built on stacked convolutions has attained noticeable gains for lossy compression of point cloud geometry (PCG), its capability suffers because convolutions with a fixed receptive field and weights fixed after training cannot aggregate sufficient information, given the extremely sparse and unevenly distributed nature of points. To best tackle this sparsity and adaptively exploit inter-point correlations, we apply local self-attention to the $k$ nearest neighbors (kNN) formed instantaneously for each point, with which the attention mechanism can effectively characterize and embed spatial information conditioned on the dynamic neighborhood. This kNN self-attention is implemented using the prevalent Transformer architecture and stacked with sparse convolutions to capture neighborhood information in a progressive re-sampling framework, referred to as the PCGFormer. Compared with the MPEG standard Geometry-based PCC (G-PCC) using the latest octree codec, the proposed PCGFormer provides more than 90% and 87% BD-rate (Bjøntegaard Delta Rate) reduction on average across three object point cloud datasets for the point-to-point (D1) and point-to-plane (D2) distortion measures, respectively. Compared with the state-of-the-art learning-based approach, the PCGFormer achieves 17.39% and 15.75% BD-rate gains on D1 and D2, respectively.
Citations: 3
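The core mechanism, self-attention restricted to each point's k nearest neighbours, can be sketched directly. The single head, the choice of k, and the dense pairwise-distance computation below are simplifications of the paper's sparse-tensor implementation.

```python
import torch
import torch.nn as nn

class KNNSelfAttention(nn.Module):
    """Single-head self-attention over each point's k nearest neighbours,
    the local mechanism the abstract describes. Head count, k, and the
    O(N^2) neighbour search are illustrative assumptions."""
    def __init__(self, dim, k=16):
        super().__init__()
        self.k = k
        self.qkv = nn.Linear(dim, dim * 3)

    def forward(self, xyz, feats):
        # xyz: (N, 3) coordinates, feats: (N, C) per-point features
        dist = torch.cdist(xyz, xyz)                    # (N, N) distances
        knn = dist.topk(self.k, largest=False).indices  # (N, k) neighbour ids
        q, k_, v = self.qkv(feats).chunk(3, dim=-1)
        k_n, v_n = k_[knn], v[knn]                      # gather: (N, k, C)
        attn = torch.softmax(
            (q.unsqueeze(1) * k_n).sum(-1) / q.size(-1) ** 0.5, dim=-1)
        return (attn.unsqueeze(-1) * v_n).sum(1)        # (N, C) output

out = KNNSelfAttention(32)(torch.randn(100, 3), torch.randn(100, 32))
assert out.shape == (100, 32)
```

Because the neighbourhood is rebuilt per point at inference time, the effective receptive field adapts to the local point density, which is exactly what fixed-kernel convolutions cannot do.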
Enhanced motion list reordering for video coding
2022 IEEE International Conference on Visual Communications and Image Processing (VCIP). Pub Date: 2022-12-13. DOI: 10.1109/VCIP56404.2022.10008886
Yang Wang, Kai Zhang, Na Zhang, Z. Deng, Li Zhang
Abstract: In video coding, motion information consisting of motion vectors and a reference index is typically involved in motion compensation. A motion list is widely used to compress the motion information efficiently: a motion index indicating the chosen motion information is signaled, and compression efficiency can be improved by template matching based motion list reordering. Besides, the motion information is further refined before being used in motion compensation by refinement processes such as decoder-side motion vector refinement and template matching. However, the original motion information of the motion list, rather than the refined motion information, is used in the motion list reordering, which limits coding performance. Therefore, this paper proposes an enhanced motion list reordering (EMLR) approach in which the refined motion information is used during reordering. To derive the refined motion information, a dedicated motion refinement with a simplified version of the refinement process is proposed. Furthermore, a simplified version of EMLR with two fast algorithms (EMLR-S) is proposed. Experimental results demonstrate that EMLR achieves 0.19% BD-rate saving on average, and EMLR-S achieves 0.1% BD-rate saving with negligible coding complexity change, compared to ECM-4.0 under the random access configuration.
Citations: 0
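Template matching based reordering ranks candidate motion vectors by how well an L-shaped template of already-reconstructed pixels matches at the displaced position in the reference frame. A simplified integer-pel sketch, with no refinement step and no frame-boundary handling:

```python
import numpy as np

def template_sad(ref, cur, block_xy, block_wh, mv, t=4):
    """SAD between the L-shaped template (t rows above, t columns left of
    the current block) and the same-shaped region displaced by mv in the
    reference frame. Simplified: integer MVs, boundaries assumed valid."""
    x, y = block_xy
    w, h = block_wh
    dx, dy = mv
    top = np.abs(cur[y - t:y, x:x + w].astype(int)
                 - ref[y - t + dy:y + dy, x + dx:x + w + dx].astype(int)).sum()
    left = np.abs(cur[y:y + h, x - t:x].astype(int)
                  - ref[y + dy:y + h + dy, x - t + dx:x + dx].astype(int)).sum()
    return top + left

def reorder_motion_list(candidates, ref, cur, block_xy, block_wh):
    """Sort candidate MVs by template cost; EMLR's change is to refine
    each candidate first and sort by the cost of the refined MV."""
    return sorted(candidates,
                  key=lambda mv: template_sad(ref, cur, block_xy, block_wh, mv))
```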
Learning from the NN-based Compressed Domain with Deep Feature Reconstruction Loss
2022 IEEE International Conference on Visual Communications and Image Processing (VCIP). Pub Date: 2022-12-13. DOI: 10.1109/VCIP56404.2022.10008841
Liuhong Chen, Heming Sun, Xiaoyang Zeng, Yibo Fan
Abstract: To speed up image classification, which conventionally takes reconstructed images as input, compressed-domain methods use the compressed representations, without decompression, as input; correspondingly, accuracy declines to some extent. Our goal in this paper is to raise the accuracy of compressed-domain classification using the compressed representations output by NN-based image compression networks. First, we design a hybrid objective loss function that contains the reconstruction loss of the deep feature map. Second, an image reconstruction layer is integrated into the classification network to up-sample the compressed representation. These methods greatly increase compressed-domain classification accuracy and incur no extra computational complexity. Experimental results on the ImageNet benchmark show that our design outperforms the latest work, ResNet-41, with a large gain of about 4.49% in top-1 classification accuracy, while the accuracy gap relative to methods using reconstructed images is reduced to 0.47%. Moreover, our classification network has the lowest computational and model complexity.
Citations: 2
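A plausible form of the hybrid objective, cross-entropy plus a reconstruction term on the deep feature map, is sketched below; the L2 distance and the weighting `alpha` are assumptions, since the abstract does not give the exact formulation.

```python
import torch
import torch.nn.functional as F

def hybrid_loss(logits, labels, feat_from_compressed, feat_from_pixels,
                alpha=1.0):
    """Hypothetical hybrid objective: cross-entropy on the compressed-
    domain prediction plus an L2 term pulling its deep feature map
    toward the feature map computed from reconstructed images."""
    ce = F.cross_entropy(logits, labels)
    feat_rec = F.mse_loss(feat_from_compressed, feat_from_pixels)
    return ce + alpha * feat_rec
```

The feature-reconstruction target is only needed during training, so inference keeps the speed advantage of skipping image decompression.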
On the Importance of Temporal Dependencies of Weight Updates in Communication Efficient Federated Learning
2022 IEEE International Conference on Visual Communications and Image Processing (VCIP). Pub Date: 2022-12-13. DOI: 10.1109/VCIP56404.2022.10008860
Homayun Afrabandpey, Rangu Goutham, Honglei Zhang, Francesco Criri, Emre B. Aksu, H. R. Tavakoli
Abstract: This paper studies the effect of exploiting the temporal dependency of successive weight updates on compressing communications in Federated Learning (FL). To this end, we propose residual coding for FL, which utilizes temporal dependencies by communicating compressed residuals of the weight updates whenever doing so saves bandwidth. We further consider Temporal Context Adaptation (TCA), which compares co-located elements of consecutive weight updates to select the optimal setting for compressing the bitstream in the DeepCABAC encoder. Following the experimental settings of the MPEG standard on Neural Network Compression (NNC), we demonstrate that both temporal-dependency-based techniques reduce communication overhead, with the maximum reduction obtained when both are used simultaneously.
Citations: 0
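Residual coding as described amounts to transmitting the residual of consecutive weight updates whenever it codes smaller than the update itself. A sketch with a toy size estimator standing in for DeepCABAC:

```python
import numpy as np

def encode_update(update, prev_update, bits_fn):
    """Send the residual of the current weight update w.r.t. the previous
    one whenever it compresses smaller; otherwise send the update itself.
    bits_fn estimates the coded size (DeepCABAC in the paper; any entropy
    coder works for this sketch)."""
    residual = update - prev_update
    if bits_fn(residual) < bits_fn(update):
        return "residual", residual
    return "direct", update

def toy_bits(x, step=0.01):
    """Crude size proxy: the count of nonzero quantized coefficients."""
    return int(np.count_nonzero(np.round(x / step)))

# Slowly drifting updates make the residual cheaper than the raw update.
prev = np.random.randn(1000)
cur = prev + 0.01 * np.random.randn(1000)
mode, payload = encode_update(cur, prev, toy_bits)
print(mode, toy_bits(payload), "vs direct", toy_bits(cur))
```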
MSCI: A Multi-Source Composite Image Database for Compression Distortion Quality Assessment
2022 IEEE International Conference on Visual Communications and Image Processing (VCIP). Pub Date: 2022-12-13. DOI: 10.1109/VCIP56404.2022.10008864
Xiaofang Zhang, Zhuowei Xu, Zhiheng Lin, Miaohui Wang
Abstract: With the rapid development of multi-sensor fusion technology across industrial fields, many composite images closely related to human life have been produced. To meet the rapidly growing needs of image-based applications, we establish the first multi-source composite image (MSCI) database for image quality assessment (IQA). The MSCI database contains 80 reference images and 1600 distorted images generated by four advanced compression standards at five distortion levels; these five levels are determined by the first five just noticeable difference (JND) levels. Moreover, we verify the IQA performance of several representative methods on the MSCI database. The experimental results show that the performance of existing methods on MSCI needs further improvement.
Citations: 1
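Benchmarking IQA methods on a database like this typically reports correlations between objective scores and subjective opinion scores. A standard SROCC/PLCC evaluation sketch, with placeholder data in place of real MSCI scores:

```python
import numpy as np
from scipy.stats import spearmanr, pearsonr

def evaluate_iqa(predicted_scores, mos):
    """Correlate an objective IQA metric's outputs with subjective mean
    opinion scores (MOS); the usual headline numbers for databases of
    this kind are SROCC (rank order) and PLCC (linear)."""
    srocc = spearmanr(predicted_scores, mos).correlation
    plcc = pearsonr(predicted_scores, mos)[0]
    return {"SROCC": srocc, "PLCC": plcc}

scores = np.random.rand(100)                 # hypothetical metric outputs
mos = scores + 0.1 * np.random.randn(100)    # hypothetical subjective scores
print(evaluate_iqa(scores, mos))
```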