2023 IEEE International Conference on Multimedia and Expo (ICME): Latest Publications

Inpainting of Remote Sensing Sea Surface Temperature Image with Multi-scale Physical Constraints
2023 IEEE International Conference on Multimedia and Expo (ICME) Pub Date : 2023-07-01 DOI: 10.1109/ICME55011.2023.00091
Qichen Wei, Zijie Zuo, Jie Nie, Jiahao Du, Yaning Diao, Min Ye, Xinyue Liang
{"title":"Inpainting of Remote Sensing Sea Surface Temperature image with Multi-scale Physical Constraints","authors":"Qichen Wei, Zijie Zuo, Jie Nie, Jiahao Du, Yaning Diao, Min Ye, Xinyue Liang","doi":"10.1109/ICME55011.2023.00091","DOIUrl":"https://doi.org/10.1109/ICME55011.2023.00091","url":null,"abstract":"Sea Surface Temperature (SST) is a significant environmental factor indicating marine revolutions, which is popularly applied in the meteorological forecasting and fishing industry. Due to the limited sensing ability and occlusion caused by clouds or ice, it is difficult to obtain complete SST data. Compared to traditional interpolation-based methods which refill missed data only referred to current SST data, inpainting-based methods have been carried out with the advantage of using historical SST images to train Generative adversarial Networks (GAN) by terms of considering SST data reconstruction task as an image inpainting task. However, different from common inpainting tasks constrained by semantics, the SST image is a scientific data visualization image without semantics but physical constraints. To address this problem, this paper proposes a multi-scale inpainting GAN-based neural networks to guarantee the physical constraint and realize reasonable SST image reconstruction. The proposed framework mainly contains two modules including the Average Estimation Module (AEM) to realize a global constraint so as not to generate excessive deviation, and the Multi-scale Anomaly Decouple Module (MSADM) to preserve data specificity of current SST image from well-designed multi-scale and decoupled perspectives. Finally, a post-fusion module concatenates the \"average\" and \"specificity\" features together to accomplish our multi-scale physical constraints SST image inpainting task. Sufficient experiments have been carried out to verify the effectiveness and physical consistency compared with prior SOTA methods applied to the public AVHRR Pathfinder SST dataset.","PeriodicalId":321830,"journal":{"name":"2023 IEEE International Conference on Multimedia and Expo (ICME)","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124168791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
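A minimal sketch of the kind of multi-scale physical constraint the abstract describes: a global-average term (in the spirit of AEM) plus multi-scale anomaly terms (in the spirit of MSADM) on top of a masked reconstruction loss. Function names, weights, and the climatology reference are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def physical_constraint_loss(pred, target, mask, climatology, scales=(1, 2, 4)):
    """pred/target: (B, 1, H, W) SST fields; mask: 1 where data is missing;
    climatology: (B, 1, H, W) long-term mean used as a global reference (assumed)."""
    # Global constraint: keep the reconstructed spatial mean close to the reference mean.
    global_term = (pred.mean(dim=(2, 3)) - climatology.mean(dim=(2, 3))).abs().mean()

    # Multi-scale anomaly constraint: compare anomalies (field minus climatology)
    # at several resolutions so both coarse and fine structure are supervised.
    anomaly_pred = pred - climatology
    anomaly_true = target - climatology
    ms_term = 0.0
    for s in scales:
        ap = F.avg_pool2d(anomaly_pred, kernel_size=s) if s > 1 else anomaly_pred
        at = F.avg_pool2d(anomaly_true, kernel_size=s) if s > 1 else anomaly_true
        ms_term = ms_term + F.l1_loss(ap, at)

    # Standard masked reconstruction term on the occluded region.
    rec_term = (F.l1_loss(pred, target, reduction="none") * mask).sum() / mask.sum().clamp(min=1)
    return rec_term + 0.1 * global_term + 0.1 * ms_term
```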
UFS-Net: Unsupervised Network For Fashion Style Editing And Generation
2023 IEEE International Conference on Multimedia and Expo (ICME) Pub Date : 2023-07-01 DOI: 10.1109/ICME55011.2023.00360
Wanqing Wu, Aihua Mao, W. Yan, Qing Liu
{"title":"UFS-Net: Unsupervised Network For Fashion Style Editing And Generation","authors":"Wanqing Wu, Aihua Mao, W. Yan, Qing Liu","doi":"10.1109/ICME55011.2023.00360","DOIUrl":"https://doi.org/10.1109/ICME55011.2023.00360","url":null,"abstract":"AI-aided fashion design has attracted growing interest because it eliminates tedious manual operations. However, existing methods are costly because they require abundant labeled data or paired images for training. In addition, they have low flexibility in attribute editing. To overcome these limitations, we propose UFS-Net, a new unsupervised network for fashion style editing and generation. Specifically, we initially design a coarse-to-fine embedding process to embed the user-defined sketch and the real clothing into the latent space of StyleGAN. Subsequently, we propose a feature fusion scheme to generate clothing with attributes provided by the sketch. In this way, our network requires neither labels nor sketches during the training but can perform flexible attribute editing and conditional generation. Extensive experiments reveal that our method significantly outperforms state-of-the-art approaches. In addition, we introduce a new dataset, Fashion-Top, to address the limitations in the existing fashion datasets.","PeriodicalId":321830,"journal":{"name":"2023 IEEE International Conference on Multimedia and Expo (ICME)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127643563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
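An illustrative sketch of the two generic ingredients the abstract mentions: optimization-based embedding of an image into a StyleGAN-like latent space with a frozen generator, followed by per-layer fusion of two latent codes. The generator interface, perceptual loss callable, and layer split are assumptions, not the paper's exact design.

```python
import torch
import torch.nn.functional as F

def embed(image, generator, perceptual, n_layers=18, steps=300, lr=0.05):
    """Optimize a w+ code (n_layers x 512) so the frozen generator reproduces `image`."""
    w = torch.zeros(1, n_layers, 512, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        recon = generator(w)                      # frozen generator taking a w+ code (assumed)
        loss = perceptual(recon, image) + F.mse_loss(recon, image)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()

def fuse_latents(w_sketch, w_real, coarse_layers=8):
    """Take structure (coarse layers) from the sketch code and appearance from the real code."""
    w = w_real.clone()
    w[:, :coarse_layers] = w_sketch[:, :coarse_layers]
    return w
```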
Feature Mixing and Disentangling for Occluded Person Re-Identification
2023 IEEE International Conference on Multimedia and Expo (ICME) Pub Date : 2023-07-01 DOI: 10.1109/ICME55011.2023.00410
Zepeng Wang, Ke Xu, Yuting Mou, Xinghao Jiang
{"title":"Feature Mixing and Disentangling for Occluded Person Re-Identification","authors":"Zepeng Wang, Ke Xu, Yuting Mou, Xinghao Jiang","doi":"10.1109/ICME55011.2023.00410","DOIUrl":"https://doi.org/10.1109/ICME55011.2023.00410","url":null,"abstract":"Occluded person re-identification (Re-ID) has recently attracted lots of attention for its applicability in practical scenarios. However, previous pose-based methods always neglect the non-target pedestrian (NTP) problem. In contrast, we propose a feature mixing and disentangling method to train a robust network for occluded person Re-ID without extra data. Based on ViT, we design our network as follows: 1) A multi-target patch mixing (MPM) module is proposed to generate complex multi-target images with refined labels in the training stage. 2) We propose an identity-based patch realignment (IPR) module in the decoder layer to disentangle local features from the multi-target sample. In contrast to pose-guided methods, our approach overcomes the difficulties of NTP. More importantly, our approach does not bring additional computational costs in the training and testing phases. Experimental results show that our method effectively on occluded person Re-ID. For example, our method performs 3.3%/3.2% better than the baseline on Occluded-Duke in terms of mAP/rank-1 and outperforms the previous state-of-the-art.","PeriodicalId":321830,"journal":{"name":"2023 IEEE International Conference on Multimedia and Expo (ICME)","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126454145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
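A minimal sketch of multi-target patch mixing in the spirit of the MPM module described above: randomly swap a subset of ViT patch tokens between two identities and mix the labels in proportion to the swapped area. Names and the mixing ratio are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def mix_patches(tokens_a, tokens_b, label_a, label_b, num_classes, ratio=0.3):
    """tokens_*: (B, N, D) patch tokens from two images; label_*: (B,) class indices."""
    B, N, _ = tokens_a.shape
    n_swap = int(N * ratio)
    idx = torch.randperm(N)[:n_swap]              # patch positions taken from image b
    mixed = tokens_a.clone()
    mixed[:, idx] = tokens_b[:, idx]
    # Refined (soft) labels proportional to how many patches come from each identity.
    y_a = F.one_hot(label_a, num_classes).float()
    y_b = F.one_hot(label_b, num_classes).float()
    soft_label = (1 - ratio) * y_a + ratio * y_b
    return mixed, soft_label
```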
Discriminative and Contrastive Consistency for Semi-supervised Domain Adaptive Image Classification
2023 IEEE International Conference on Multimedia and Expo (ICME) Pub Date : 2023-07-01 DOI: 10.1109/ICME55011.2023.00188
Yidan Fan, Wenhuan Lu, Yahong Han
{"title":"Discriminative and Contrastive Consistency for Semi-supervised Domain Adaptive Image Classification","authors":"Yidan Fan, Wenhuan Lu, Yahong Han","doi":"10.1109/ICME55011.2023.00188","DOIUrl":"https://doi.org/10.1109/ICME55011.2023.00188","url":null,"abstract":"With sufficient source and limited target supervised information, semi-supervised domain adaptation (SSDA) aims to perform well on unlabeled target domain. Although various strategies have been proposed in SSDA field, they fail to fully exploit limited target labels and adequately explore domain-invariant knowledge. In this study, we propose a framework that first introduces consistent processing of augmented training data based on contrastive learning. Specifically, supervised contrastive learning is introduced to assist the classical cross-entropy iteration to make full use of the limited target labels. Additionally, traditional unsupervised contrastive learning and pseudo-labeling are utilized to further minimize the intra-domain discrepancy. Besides, an adversarial loss is then combined with a sharpening function to acquire a more certain category center that is domain-invariant. Experimental results on DomainNet, Office-Home, and Office show the effectiveness of our method. Particularly, for 1-shot case of Office-Home with AlexNet as backbone, our method outperforms the previous state-of-the-art by 5.6% in terms of mean accuracy.","PeriodicalId":321830,"journal":{"name":"2023 IEEE International Conference on Multimedia and Expo (ICME)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128042036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
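An illustrative sketch of two ingredients mentioned in the abstract: a supervised contrastive term over the few labeled target samples, and confidence-thresholded pseudo-labeling for unlabeled ones (FixMatch-style). Hyper-parameters and names are assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive(features, labels, temperature=0.1):
    """features: (B, D) L2-normalized embeddings; labels: (B,). Positives share a label."""
    sim = features @ features.t() / temperature
    sim.fill_diagonal_(float("-inf"))                 # exclude self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)).float().fill_diagonal_(0)
    return -(log_prob * pos_mask).sum(1).div(pos_mask.sum(1).clamp(min=1)).mean()

def pseudo_label_loss(logits_weak, logits_strong, threshold=0.95):
    """Confident predictions on weakly augmented views supervise strongly augmented views."""
    probs = logits_weak.softmax(dim=1).detach()
    conf, pseudo = probs.max(dim=1)
    mask = (conf >= threshold).float()
    return (F.cross_entropy(logits_strong, pseudo, reduction="none") * mask).mean()
```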
RTMC: A Robust Trusted Multi-View Classification Framework
2023 IEEE International Conference on Multimedia and Expo (ICME) Pub Date : 2023-07-01 DOI: 10.1109/ICME55011.2023.00105
Hai Zhou, Zhe Xue, Ying Liu, Boang Li, Junping Du, M. Liang
{"title":"RTMC: A Rubost Trusted Multi-View Classification Framework","authors":"Hai Zhou, Zhe Xue, Ying Liu, Boang Li, Junping Du, M. Liang","doi":"10.1109/ICME55011.2023.00105","DOIUrl":"https://doi.org/10.1109/ICME55011.2023.00105","url":null,"abstract":"Multi-view learning aims to fully exploit the information from multiple sources to obtain better performance than using a single view. However, real-world data often contains a lot of noise, which can have a large impact on multi-view learning. Therefore, it is necessary to identify noise contained in multi-view data to achieve robust and trusted classification. In this paper, we propose a robust trusted multi-view classification framework, RTMC. Our framework uses multi-view affinity and repellence encoding to learn effective latent encodings of multi-view data. We also propose a trust-aware discriminator to estimate trust scores by identifying noise contained in the data. We adopt prototype queues, which store latent encodings of different classes, to accurately identify the noise. Finally, trusted multi-view classification is proposed to jointly predict the trust scores of classification and achieve robust classification results through a trusted fusion strategy. RTMC is validated on six challenging multi-view datasets and the experimental results demonstrate the robustness and effectiveness of our method.","PeriodicalId":321830,"journal":{"name":"2023 IEEE International Conference on Multimedia and Expo (ICME)","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125484459","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
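A minimal sketch of trust-weighted late fusion in the spirit of the trusted fusion strategy described above: each view produces class logits and a scalar trust score, and the fused prediction down-weights untrusted views. Architecture details and sizes are assumptions.

```python
import torch
import torch.nn as nn

class TrustedFusion(nn.Module):
    def __init__(self, view_dims, num_classes, hidden=256):
        super().__init__()
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(d, hidden), nn.ReLU(), nn.Linear(hidden, num_classes))
            for d in view_dims
        )
        # Trust head per view: a scalar in (0, 1) reflecting how noise-free the view looks.
        self.trust_heads = nn.ModuleList(
            nn.Sequential(nn.Linear(d, hidden), nn.ReLU(), nn.Linear(hidden, 1), nn.Sigmoid())
            for d in view_dims
        )

    def forward(self, views):
        """views: list of (B, d_v) tensors, one per view."""
        logits = [enc(v) for enc, v in zip(self.encoders, views)]
        trust = [head(v) for head, v in zip(self.trust_heads, views)]   # list of (B, 1)
        weights = torch.softmax(torch.cat(trust, dim=1), dim=1)         # (B, V)
        fused = sum(w.unsqueeze(-1) * l for w, l in zip(weights.unbind(1), logits))
        return fused, weights
```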
Fine-grained Learning for Visible-Infrared Person Re-identification
2023 IEEE International Conference on Multimedia and Expo (ICME) Pub Date : 2023-07-01 DOI: 10.1109/ICME55011.2023.00412
Mengzan Qi, Sixian Chan, Chen Hang, Guixu Zhang, Zhi Li
{"title":"Fine-grained Learning for Visible-Infrared Person Re-identification","authors":"Mengzan Qi, Sixian Chan, Chen Hang, Guixu Zhang, Zhi Li","doi":"10.1109/ICME55011.2023.00412","DOIUrl":"https://doi.org/10.1109/ICME55011.2023.00412","url":null,"abstract":"Visible-Infrared Person Re-identification aims to retrieve specific identities from different modalities. In order to relieve the modality discrepancy, previous works mainly concentrate on aligning the distribution of high-level features, while disregarding the exploration of fine-grained information. In this paper, we propose a novel Fine-grained Information Exploration Network (FIENet) to implement discriminative representation, further alleviating the modality discrepancy. Firstly, we propose a Progressive Feature Aggregation Module (PFAM) to progressively aggregate mid-level features, and a Multi-Perception Interaction Module (MPIM) to achieve the interaction with diverse perceptions. Additionally, combined with PFAM and MPIM, more fine-grained information can be extracted, which is beneficial for FIENet to focus on discriminative human parts in both modalities effectively. Secondly, in terms of the feature center, we introduce an Identity-Guided Center Loss (IGCL) to supervise identity representation with intra-identity and inter-identity information. Finally, extensive experiments are conducted to demonstrate that our method achieves state-of-the-art performance.","PeriodicalId":321830,"journal":{"name":"2023 IEEE International Conference on Multimedia and Expo (ICME)","volume":"128 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121917060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
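An illustrative sketch of an identity-guided center loss in the spirit of IGCL: pull features toward their own identity center (intra-identity) and push different identity centers apart by a margin (inter-identity). The margin and weighting are assumptions, not values from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IdentityCenterLoss(nn.Module):
    def __init__(self, num_ids, feat_dim, margin=0.3):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_ids, feat_dim))
        self.margin = margin

    def forward(self, feats, ids):
        """feats: (B, D) embeddings; ids: (B,) identity indices."""
        feats = F.normalize(feats, dim=1)
        centers = F.normalize(self.centers, dim=1)
        # Intra-identity term: each feature should be close to its own identity center.
        intra = (feats - centers[ids]).pow(2).sum(1).mean()
        # Inter-identity term: centers of identities in the batch should stay separated.
        uniq = torch.unique(ids)
        c = centers[uniq]
        dist = torch.cdist(c, c)                          # pairwise center distances
        off_diag = dist[~torch.eye(len(uniq), dtype=torch.bool)]
        inter = F.relu(self.margin - off_diag).mean() if off_diag.numel() > 0 else dist.sum() * 0
        return intra + inter
```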
Generalized Compressed Video Restoration by Multi-Scale Temporal Fusion and Hierarchical Quality Score Estimation
2023 IEEE International Conference on Multimedia and Expo (ICME) Pub Date : 2023-07-01 DOI: 10.1109/ICME55011.2023.00092
Zhijie Huang, Tianyi Sun, Xiaopeng Guo, Yanze Wang, Jun Sun
{"title":"Generalized Compressed Video Restoration by Multi-Scale Temporal Fusion and Hierarchical Quality Score Estimation","authors":"Zhijie Huang, Tianyi Sun, Xiaopeng Guo, Yanze Wang, Jun Sun","doi":"10.1109/ICME55011.2023.00092","DOIUrl":"https://doi.org/10.1109/ICME55011.2023.00092","url":null,"abstract":"Learning-based methods have achieved excellent performance for compressed video restoration (CVR) in recent years. However, existing networks aggregate multi-frame information inefficiently and are usually developed for specific quantization parameters (QPs), which are not convenient for practical usage. Moreover, current works only consider compressed video restoration in Constant QP (CQP) setting, but do not discuss the performance of the model in more realistic scenarios, e.g., Constant Rate Factor (CRF) and Constant Bitrate (CBR). In this paper, we propose a generalized quality-aware compressed video restoration network, namely QCRN. Specifically, to achieve multi-frame aggregation efficiently, we propose a multi-scale deformable temporal fusion. Meanwhile, QCRN decouples the global quality and local quality representations from input via the hierarchical quality score estimator, and then employs them to adjust the feature enhancement. Extensive experiments on compressed videos in various settings demonstrate that our proposed QCRN achieves favorable performance against state-of-the-art methods in terms of both quantitative metrics and visual quality.","PeriodicalId":321830,"journal":{"name":"2023 IEEE International Conference on Multimedia and Expo (ICME)","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121715069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
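A minimal sketch of quality-aware feature modulation in the spirit of the hierarchical quality score estimator described above: predict a per-pixel quality map and a global, channel-wise quality code from the decoded frame, then use them to scale and shift the restoration features. All module names and sizes are assumptions.

```python
import torch
import torch.nn as nn

class QualityModulation(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.local_head = nn.Sequential(nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
                                        nn.Conv2d(channels, 1, 3, padding=1), nn.Sigmoid())
        self.global_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Conv2d(3, channels, 1),
                                         nn.ReLU(), nn.Conv2d(channels, channels, 1))

    def forward(self, frame, feat):
        """frame: (B, 3, H, W) decoded frame; feat: (B, C, H, W) restoration features."""
        local_q = self.local_head(frame)     # (B, 1, H, W) per-pixel quality weight
        global_q = self.global_head(frame)   # (B, C, 1, 1) channel-wise shift
        return feat * local_q + global_q     # quality-modulated features
```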
Automated Software Vulnerability Detection via Curriculum Learning
2023 IEEE International Conference on Multimedia and Expo (ICME) Pub Date : 2023-07-01 DOI: 10.1109/ICME55011.2023.00485
Qianjin Du, Wei Kun, Xiaohui Kuang, Xiang Li, Gang Zhao
{"title":"Automated Software Vulnerability Detection via Curriculum Learning","authors":"Qianjin Du, Wei Kun, Xiaohui Kuang, Xiang Li, Gang Zhao","doi":"10.1109/ICME55011.2023.00485","DOIUrl":"https://doi.org/10.1109/ICME55011.2023.00485","url":null,"abstract":"With the development of deep learning, software vulnerability detection methods based on deep learning have achieved great success, which outperform traditional methods in efficiency and precision. At the training stage, all training samples are treated equally and presented in random order. However, in software vulnerability detection tasks, the detection difficulties of different samples vary greatly. Similar to the human learning mechanism following an easy-to-difficult curriculum learning procedure, vulnerability detection models can also benefit from the easy-to-hard curriculums. Motivated by this observation, we introduce curriculum learning for automated software vulnerability detection, which is capable of arranging easy-to-difficult training samples to learn better detection models without any human intervention. Experimental results show that our method achieves obvious performance improvements compared to baseline models.","PeriodicalId":321830,"journal":{"name":"2023 IEEE International Conference on Multimedia and Expo (ICME)","volume":"219 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132054660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
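An illustrative sketch of a loss-based curriculum schedule in the spirit of the abstract: score each sample's difficulty with the current model's loss, then train on a gradually growing easy-to-hard subset. The scoring rule and linear pacing function are assumptions, not the paper's exact method.

```python
import torch

def difficulty_scores(model, dataset, loss_fn):
    """Return per-sample losses (higher = harder) under the current model."""
    model.eval()
    scores = []
    with torch.no_grad():
        for x, y in dataset:
            scores.append(loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).item())
    return torch.tensor(scores)

def curriculum_subset(scores, epoch, total_epochs, start_frac=0.3):
    """Linear pacing: start with the easiest `start_frac`, end with the full dataset."""
    frac = min(1.0, start_frac + (1.0 - start_frac) * epoch / max(1, total_epochs - 1))
    k = max(1, int(len(scores) * frac))
    return torch.argsort(scores)[:k]          # indices of the k easiest samples
```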
ConCAP: Contrastive Context-Aware Prompt for Resource-hungry Action Recognition
2023 IEEE International Conference on Multimedia and Expo (ICME) Pub Date : 2023-07-01 DOI: 10.1109/ICME55011.2023.00137
Hailun Zhang, Ziyun Zeng, Qijun Zhao, Zhen Zhai
{"title":"ConCAP: Contrastive Context-Aware Prompt for Resource-hungry Action Recognition","authors":"Hailun Zhang, Ziyun Zeng, Qijun Zhao, Zhen Zhai","doi":"10.1109/ICME55011.2023.00137","DOIUrl":"https://doi.org/10.1109/ICME55011.2023.00137","url":null,"abstract":"Existing large-scale image-language pre-trained models, e.g., CLIP [1], have revealed strong spatial recognition capability on various vision tasks. However, they achieve inferior performance in action recognition due to lack of temporal reasoning ability. Moreover, fully tuning large models require expensive computational infrastructures, and state-of-the-art video models yield slow inference speed due to the high frame sampling rate. The above drawbacks make existing video action recognition works impractical to be applied in resource-hungry scenarios, which is common in the real world. In this work, we propose Contrastive Context-Aware Prompt (ConCAP) for resource-hungry action recognition. Specifically, we develop a lightweight PromptFormer to learn the spatio-temporal representations stacking on top of frozen frame-wise visual backbones, where learnable prompt tokens are plugged between frame tokens during self-attention. These prompt tokens are expected to auto-complete the contextual spatiotemporal information between frames and therefore enhance the model’s representation capability. To achieve this goal, we align the prompt-enhanced representation with both category-level textual representations and video representations from densely sampled frames. Extensive experiments on four video benchmarks show that we achieve state-of-the-art or competitive performance compared to existing methods with far fewer trainable parameters and faster inference speed with limited frames, demonstrating the superiority of ConCAP in resource-hungry scenarios.","PeriodicalId":321830,"journal":{"name":"2023 IEEE International Conference on Multimedia and Expo (ICME)","volume":"106 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134517306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
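A minimal sketch of the prompt-between-frames idea described above: frozen per-frame features are interleaved with learnable prompt tokens and passed through a small transformer, and the pooled output serves as the video representation. The sizes, pooling, and interleaving pattern are assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class PromptFormerSketch(nn.Module):
    def __init__(self, dim=512, num_prompts=7, depth=2, heads=8):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(num_prompts, dim) * 0.02)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, frame_feats):
        """frame_feats: (B, T, D) features from a frozen image backbone, one per frame."""
        B, T, D = frame_feats.shape
        prompts = self.prompts.unsqueeze(0).expand(B, -1, -1)      # (B, P, D)
        # Interleave: frame_1, prompt_1, frame_2, prompt_2, ... prompts fill the temporal gaps.
        tokens = []
        for t in range(T):
            tokens.append(frame_feats[:, t:t + 1])
            if t < prompts.shape[1]:
                tokens.append(prompts[:, t:t + 1])
        x = self.encoder(torch.cat(tokens, dim=1))
        return x.mean(dim=1)                                       # pooled video embedding
```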
Semantic Embedding Uncertainty Learning for Image and Text Matching
2023 IEEE International Conference on Multimedia and Expo (ICME) Pub Date : 2023-07-01 DOI: 10.1109/ICME55011.2023.00153
Yan Wang, Yunzhi Su, Wenhui Li, C. Yan, Bolun Zheng, Xuanya Li, Anjin Liu
{"title":"Semantic Embedding Uncertainty Learning for Image and Text Matching","authors":"Yan Wang, Yunzhi Su, Wenhui Li, C. Yan, Bolun Zheng, Xuanya Li, Anjin Liu","doi":"10.1109/ICME55011.2023.00153","DOIUrl":"https://doi.org/10.1109/ICME55011.2023.00153","url":null,"abstract":"Image and text matching measures the semantic similarity for cross-modal retrieval. The core of this task is semantic embedding, which mines the intrinsic characteristics of visual and textual for discriminative representation. However, cross-modal ambiguity of image and text (the existence of one-to-many associations) is prone to semantic diversity. The mainstream approaches utilized the fixed point embedding to represent semantics, which ignored the embedding uncertainty caused by semantic diversity leading to incorrect results. To address this issue, we propose a novel Semantic Embedding Uncertainty Learning (SEUL), which represents the embedding uncertainty of image and text as Gaussian distributions and simultaneously learns the salient embedding (mean) and uncertainty (variance) in the common space. We design semantic uncertainty embedding for facilitating the robustness of the representation in the semantic diversity context. A combined objective function is proposed, which optimizes the semantic uncertainty and maintains discriminability to enhance cross-modal associations. Extended experiments are performed on two datasets to demonstrate advanced performance.","PeriodicalId":321830,"journal":{"name":"2023 IEEE International Conference on Multimedia and Expo (ICME)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131599717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
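An illustrative sketch of a Gaussian (probabilistic) embedding head in the spirit of SEUL: each modality predicts a mean and a log-variance in the common space, matching can use the mean, and a KL term keeps the predicted uncertainty well-behaved. Names and the regularizer choice are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianEmbeddingHead(nn.Module):
    def __init__(self, in_dim, embed_dim=512):
        super().__init__()
        self.mu = nn.Linear(in_dim, embed_dim)        # salient embedding (mean)
        self.logvar = nn.Linear(in_dim, embed_dim)    # embedding uncertainty (log variance)

    def forward(self, x):
        mu = F.normalize(self.mu(x), dim=-1)
        logvar = self.logvar(x)
        return mu, logvar

def uncertainty_regularizer(mu, logvar):
    """KL divergence of N(mu, sigma^2) from a unit Gaussian, averaged over the batch."""
    return 0.5 * (logvar.exp() + mu.pow(2) - 1.0 - logvar).sum(dim=-1).mean()
```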