Latest Publications: 2022 IEEE International Conference on Visual Communications and Image Processing (VCIP)

Human pose based video compression via forward-referencing using deep learning
2022 IEEE International Conference on Visual Communications and Image Processing (VCIP) Pub Date: 2022-12-13 DOI: 10.1109/VCIP56404.2022.10008897
S. Rajin, M. Murshed, M. Paul, S. Teng, Jiangang Ma
{"title":"Human pose based video compression via forward-referencing using deep learning","authors":"S. Rajin, M. Murshed, M. Paul, S. Teng, Jiangang Ma","doi":"10.1109/VCIP56404.2022.10008897","DOIUrl":"https://doi.org/10.1109/VCIP56404.2022.10008897","url":null,"abstract":"To exploit high temporal correlations in video frames of the same scene, the current frame is predicted from the already-encoded reference frames using block-based motion estimation and compensation techniques. While this approach can efficiently exploit the translation motion of the moving objects, it is susceptible to other types of affine motion and object occlusion/deocclusion. Recently, deep learning has been used to model the high-level structure of human pose in specific actions from short videos and then generate virtual frames in future time by predicting the pose using a generative adversarial network (GAN). Therefore, modelling the high-level structure of human pose is able to exploit semantic correlation by predicting human actions and determining its trajectory. Video surveillance applications will benefit as stored “big” surveillance data can be compressed by estimating human pose trajectories and generating future frames through semantic correlation. This paper explores a new way of video coding by modelling human pose from the already-encoded frames and using the generated frame at the current time as an additional forward-referencing frame. It is expected that the proposed approach can overcome the limitations of the traditional backward-referencing frames by predicting the blocks containing the moving objects with lower residuals. Our experimental results show that the proposed approach can achieve on average up to 2.83 dB PSNR gain and 25.93% bitrate savings for high motion video sequences compared to standard video coding.","PeriodicalId":269379,"journal":{"name":"2022 IEEE International Conference on Visual Communications and Image Processing (VCIP)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123662869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CNN-Based Post-Processing Filter for Video Compression with Multi-Scale Feature Representation
2022 IEEE International Conference on Visual Communications and Image Processing (VCIP) Pub Date: 2022-12-13 DOI: 10.1109/VCIP56404.2022.10008797
Zhanyuan Qi, Cheolkon Jung, Yang Liu, Ming Li
{"title":"CNN-Based Post-Processing Filter for Video Compression with Multi-Scale Feature Representation","authors":"Zhanyuan Qi, Cheolkon Jung, Yang Liu, Ming Li","doi":"10.1109/VCIP56404.2022.10008797","DOIUrl":"https://doi.org/10.1109/VCIP56404.2022.10008797","url":null,"abstract":"In this paper, we propose a convolutional neural network (CNN)-based post-processing filter for video compression with multi-scale feature representation. The discrete wavelet transform (DWT) decomposes an image into multi-frequency and multi-directional sub-bands, and can figure out artifacts caused by video compression with multi-scale feature representation. Thus, we combine DWT with CNN and construct two sub-networks: Step-like sub-band network (SLSB) and mixed enhancement network (ME). SLSB takes the wavelet subbands as input, and feeds them into the Res2Net group (R2NG) from high frequency to low frequency. R2NG consists of Res2Net modules and adopts spatial and channel attentions to adaptively enhance features. We combine the high frequency sub-band output with the low frequency sub-band in R2NG to capture multi-scale features. ME uses mixed convolution composed of dilated convolution and standard convolution as the basic block to expand the receptive field without blind spots in dilated convolution and further improve the reconstruction quality. Experimental results demonstrate that the proposed CNN filter achieves average 2.13%, 2.63%, 2.99%, 4.8%, 3.72% and 4.5% BD-rate reductions over VTM 11.0-NNVC anchor for Y channel on A1, A2, B, C, D and E classes of the common test conditions (CTC) in AI, RA and LDP configurations, respectively.","PeriodicalId":269379,"journal":{"name":"2022 IEEE International Conference on Visual Communications and Image Processing (VCIP)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117137861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Annotating Only at Definite Pixels: A Novel Weakly Supervised Semantic Segmentation Method for Sea Fog Recognition
2022 IEEE International Conference on Visual Communications and Image Processing (VCIP) Pub Date: 2022-12-13 DOI: 10.1109/VCIP56404.2022.10008863
Xun Zhu, Mengqiu Xu, Ming Wu, Chuang Zhang, Bin Zhang
{"title":"Annotating Only at Definite Pixels: A Novel Weakly Supervised Semantic Segmentation Method for Sea Fog Recognition","authors":"Xun Zhu, Mengqiu Xu, Ming Wu, Chuang Zhang, Bin Zhang","doi":"10.1109/VCIP56404.2022.10008863","DOIUrl":"https://doi.org/10.1109/VCIP56404.2022.10008863","url":null,"abstract":"Sea fog recognition is a challenging and significant semantic segmentation task in remote sensing images. The fully supervised learning method relies on the pixel-level label, which is labor-intensive and time-consuming. Moreover, it is impossible to accurately annotate all pixels of the sea fog region due to the limited ability of the human eye to distinguish between low clouds and sea fog. In this paper, we propose a novel approach of point-based annotation for weakly supervised semantic segmentation with the auxiliary information of International Comprehensive Ocean-Atmosphere Data Set (ICOADS) visibility data. It only needs several definite points for both foreground and background, which significantly reduces the annotation cost of manpower. We conduct extensive experiments on Himawari-8 satellite remote sensing images to demonstrate the effectiveness of our annotation method. The mean intersection over union (mIoU) and overall recognition accuracy of our annotation method reach 82.72% and 95.18 %, respectively. Compared with the fully supervised learning method, the accuracy and the recognition rate of sea fog area are improved with a maximum increase of 7.69% and 9.69 %, respectively.","PeriodicalId":269379,"journal":{"name":"2022 IEEE International Conference on Visual Communications and Image Processing (VCIP)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121765614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Near-lossless Point Cloud Geometry Compression Based on Adaptive Residual Compensation
2022 IEEE International Conference on Visual Communications and Image Processing (VCIP) Pub Date: 2022-12-13 DOI: 10.1109/VCIP56404.2022.10008796
Dingquan Li, Jing Wang, Ge Li
{"title":"Near-lossless Point Cloud Geometry Compression Based on Adaptive Residual Compensation","authors":"Dingquan Li, Jing Wang, Ge Li","doi":"10.1109/VCIP56404.2022.10008796","DOIUrl":"https://doi.org/10.1109/VCIP56404.2022.10008796","url":null,"abstract":"Point cloud compression (PCC) is a crucial enabler for immersive multimedia applications since point cloud is one of the most primitive forms for representing 3D scenes and objects. Recently, some approaches are proposed to improve the average reconstruction quality of octree-based Geometry-based Point Cloud Compression (G-PCC). However, it is noticed that these approaches suffer considerable loss in terms of point-to-point (D1) Hausdorff distance when compared to G-PCC (octree). Here we introduce a near-lossless point cloud geometry compression method based on adaptive residual compensation by adding and removing points with large errors. It allows controlling of D1 Hausdorff (D1h) distance and maintains a great improvement in average reconstruction performance over G-PCC. Experimental results verify the effectiveness of our method, where our method achieves an average of 78.5% D1 and 11.4% D1h Bjontegaard-delta bitrate savings over the octree-based G-PCC on solid point clouds of the MPEG Cat1A dataset.","PeriodicalId":269379,"journal":{"name":"2022 IEEE International Conference on Visual Communications and Image Processing (VCIP)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114662976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MRIQA: Subjective Method and Objective Model for Magnetic Resonance Image Quality Assessment
2022 IEEE International Conference on Visual Communications and Image Processing (VCIP) Pub Date: 2022-12-13 DOI: 10.1109/VCIP56404.2022.10008885
Qi Chen, F. Liu, Huiyu Duan, Yao Wang, Xiongkuo Min, Yan Zhou, Guangtao Zhai
{"title":"MRIQA: Subjective Method and Objective Model for Magnetic Resonance Image Quality Assessment","authors":"Qi Chen, F. Liu, Huiyu Duan, Yao Wang, Xiongkuo Min, Yan Zhou, Guangtao Zhai","doi":"10.1109/VCIP56404.2022.10008885","DOIUrl":"https://doi.org/10.1109/VCIP56404.2022.10008885","url":null,"abstract":"Magnetic Resonance Imaging (MRI) is widely used for medical diagnosis, staging and follow-up of disease. However, MRI images may have artifacts due to various reasons such as patient movement or machine distortion, which may be unintentionally introduced during the procedure of medical image acquisition, processing, etc. These artifacts may affect the effectiveness of diagnosis or even cause false diagnosis. To solve this problem, we propose a general medical image quality assessment (MIQA) methodology, including subjective MIQA procedures and objective MIQA algorithms. We further apply this methodology to MRI images in this paper due to its widespread use in practical applications. We first establish a magnetic resonance imaging quality assessment (MRIQA) database, which contains 3809 MRI images. Then a subjective image quality assessment experiment is conducted by expert doctors according to the diagnostic value of these images, which split all MRI images into 1285 low quality images and 2524 high quality images. We then conduct a baseline deep learning experiment, and propose an attention based MIQANet model to automatically separate MRI images into high quality and low quality based on their diagnosis value. Our proposed method achieves a great quality assessment accuracy of 96.59%. The constructed MRIQA database and proposed MIQA model will be public available to further promote medical IQA research.","PeriodicalId":269379,"journal":{"name":"2022 IEEE International Conference on Visual Communications and Image Processing (VCIP)","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127960384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Fast and Effective Framework for Camera Calibration in Sport Videos
2022 IEEE International Conference on Visual Communications and Image Processing (VCIP) Pub Date: 2022-12-13 DOI: 10.1109/VCIP56404.2022.10008882
Neng Zhang, E. Izquierdo
{"title":"A Fast and Effective Framework for Camera Calibration in Sport Videos","authors":"Neng Zhang, E. Izquierdo","doi":"10.1109/VCIP56404.2022.10008882","DOIUrl":"https://doi.org/10.1109/VCIP56404.2022.10008882","url":null,"abstract":"Computing the relative homography between the sports field template and the corresponding field in a video frame is an important task in camera calibration. In this paper, a fast and effective framework is proposed for addressing this task. The proposed framework has three processing modules. First, a semantic segmentation network is presented to obtain the segmented video frames. Second, a regression network is developed and combined with the direct linear transformation (DLT) algorithm to compute the homography. Third, the enhanced correlation coefficient (ECC) technique is leveraged to refine the estimated homography. The proposed framework is evaluated on 2014 World Cup dataset. The experimental results are compared to the state-of-the-art approaches. The experimental results demonstrate that the accuracy in the proposed framework is superior and the computation speed is competitive.","PeriodicalId":269379,"journal":{"name":"2022 IEEE International Conference on Visual Communications and Image Processing (VCIP)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134007029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Fast CU Partition Method Based on Extra Trees for VVC Intra Coding
2022 IEEE International Conference on Visual Communications and Image Processing (VCIP) Pub Date: 2022-12-13 DOI: 10.1109/VCIP56404.2022.10008800
Kaijie Wang, Hong Liang, Saiping Zhang, Fuzheng Yang
{"title":"Fast CU Partition Method Based on Extra Trees for VVC Intra Coding","authors":"Kaijie Wang, Hong Liang, Saiping Zhang, Fuzheng Yang","doi":"10.1109/VCIP56404.2022.10008800","DOIUrl":"https://doi.org/10.1109/VCIP56404.2022.10008800","url":null,"abstract":"In this paper, we propose a method to skip unnecessary CU encoding modes for VVC Intra coding based on the extra trees model. Two extra tree models with calculated features are used to simplify the encoding process, where the first model determines whether to early terminate the partition and the best partition direction and the second model selects the better partition mode between the binary and ternary partition modes. Experimental results show that our proposed method can save encoding time from 34.68% to 46.70% with only from 0.81% to 1.65% increase of BDBR compared to VVC reference software (VTM 10.0). Besides, the method gets a great tradeoff when applied on VVenC 1.0, an efficient encoder ofVVC, at both preset slower and preset medium.","PeriodicalId":269379,"journal":{"name":"2022 IEEE International Conference on Visual Communications and Image Processing (VCIP)","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130971422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
PickDet: A Detection Framework for Aerial-view Scene
2022 IEEE International Conference on Visual Communications and Image Processing (VCIP) Pub Date: 2022-12-13 DOI: 10.1109/VCIP56404.2022.10008902
Cheng Lyu, Xiao Deng, Shizun Wang, Ming Wu, Chuang Zhang
{"title":"PickDet: A Detection Framework for Aerial-view Scene","authors":"Cheng Lyu, Xiao Deng, Shizun Wang, Ming Wu, Chuang Zhang","doi":"10.1109/VCIP56404.2022.10008902","DOIUrl":"https://doi.org/10.1109/VCIP56404.2022.10008902","url":null,"abstract":"Detecting objects in the aerial-view scene is challenging for the objects usually have small scales relative to the image, making it hard to achieve high accuracy in full-image detection. Slice detection tries to overcome this by cutting the full image into slices before detecting them, but objects are sparsely distributed and usually clustered in local areas, a large number of background areas without objects can be ignored to improve detection efficiency. In this paper, we present PickDet, a framework for efficient and effective object detection in the aerial-view scene, which only chooses slices containing objects to conduct detection. The key components of PickDet include a lightweight convolutional network (PickNet), a screening strategy (SoftPick), and fine-tuned detectors. Given slices of aerial-view images, PickNet first outputs the probability of object existence. Then SoftPick conducts a double-threshold screening strategy to pick the slices which contain objects. Finally, all picked slices are fed into the detector in parallel and full-image detection is used as an auxiliary mean. Compared with previous methods, PickDet achieves higher accuracy and more efficiency in the aerial-view scene. We evaluate PickDet on Visdrone and Oiltank datasets, experiments show that PickDet can result in up to 28.0% AP improvement compared to full-image detection, and can result in up to 2.9% AP increase and up to 5 times inference speedup compared to slice detection.","PeriodicalId":269379,"journal":{"name":"2022 IEEE International Conference on Visual Communications and Image Processing (VCIP)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129561055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
One Shot Object Detection Via Hierarchical Adaptive Alignment
2022 IEEE International Conference on Visual Communications and Image Processing (VCIP) Pub Date: 2022-12-13 DOI: 10.1109/VCIP56404.2022.10008884
Enquan Zhang, Cheolkon Jung
{"title":"One Shot Object Detection Via Hierarchical Adaptive Alignment","authors":"Enquan Zhang, Cheolkon Jung","doi":"10.1109/VCIP56404.2022.10008884","DOIUrl":"https://doi.org/10.1109/VCIP56404.2022.10008884","url":null,"abstract":"Recently, deep learning based object detectors have achieved good performance with abundant labeled data. However, data labeling is often expensive and time-consuming in real life. Therefore, it is required to introduce one shot learning into object detection. In this paper, we propose one shot object detection based on hierarchical adaptive alignment to address the limited information of one shot in feature representation. We present a multi-adaptive alignment framework based on faster R-CNN to extract effective features from query patch and target image using siamese convolutional feature extraction, then generate a fused feature map by aggregating query and target features. We use the fused feature map in object classification and localization. The proposed framework adaptively adjusts feature representation through hierarchical and aggregated alignment so that it can learn correlation between the target image and the query patch. Experimental results demonstrate that the proposed method significantly improves the unseen-class object detection from 24.3 AP50 to 26.2 AP50 on the MS-COCO dataset.","PeriodicalId":269379,"journal":{"name":"2022 IEEE International Conference on Visual Communications and Image Processing (VCIP)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127892343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Fast Inter Prediction Mode Decision Method Based On Random Forest For H.266/VVC
2022 IEEE International Conference on Visual Communications and Image Processing (VCIP) Pub Date: 2022-12-13 DOI: 10.1109/VCIP56404.2022.10008840
Kundan Xie, Jianquan Zhou, Saiping Zhang, Fuzheng Yang
{"title":"Fast Inter Prediction Mode Decision Method Based On Random Forest For H.266/VVC","authors":"Kundan Xie, Jianquan Zhou, Saiping Zhang, Fuzheng Yang","doi":"10.1109/VCIP56404.2022.10008840","DOIUrl":"https://doi.org/10.1109/VCIP56404.2022.10008840","url":null,"abstract":"In H.266/VVC, many new tools are introduced in the inter prediction process. These new techniques enable the encoder in H.266/VVC to predict motion vectors more accurately, but inevitably increase the coding complexity. To solve this problem, in this paper, we propose an early termination algorithm for inter prediction based on random forest which is characterized by the information provided by the temporal co-located block and the spatial adjacent block of the current Coding Unit (CU). Specifically, the random forest is used to predict whether the inter prediction process of the current CU will be terminated in advance. Our proposed algorithm is implemented on Fraunhofer Versatile Video Encoder (VVenC). Experimental results have shown that, in the Random Access (RA) mode, the encoding time of VVenC is reduced by 7.71 % on average while Bjontegaard Delta Bit Rate (BDBR) increases by 1.48 %.","PeriodicalId":269379,"journal":{"name":"2022 IEEE International Conference on Visual Communications and Image Processing (VCIP)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127927990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0