Latest Publications: 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)

Burst Reflection Removal using Reflection Motion Aggregation Cues
2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) · Pub Date: 2023-01-01 · DOI: 10.1109/WACV56688.2023.00032
B. H. P. Prasad, S. GreenRoshK., R. Lokesh, K. Mitra
Abstract: Single image reflection removal has attracted a lot of interest in the recent past, with data-driven approaches demonstrating significant improvements. However, deep-learning-based approaches for multi-image reflection removal remain relatively unexplored. Existing multi-image methods require input images captured at sufficiently different viewpoints with wide baselines, which is cumbersome for the user, who must capture the scene by moving the camera in multiple directions. A more convenient alternative is to capture a burst of images in a short time span without giving the user any specific instructions. A burst captured on a hand-held device provides crucial cues: the subtle hand shake during capture helps separate the reflection and transmission layers. In this paper, we propose a multi-stage deep-learning approach for burst reflection removal. In the first stage, we perform reflection suppression on the individual images. In the second stage, a novel reflection motion aggregation (RMA) cue is extracted that emphasizes the transmission layer over the reflection layer to aid layer separation. In the final stage, we use this RMA cue as a guide to remove reflections from the input. We provide the first real-world burst-image dataset with ground truth for reflection removal, enabling future benchmarking. We evaluate both qualitatively and quantitatively to demonstrate the superiority of the proposed approach. Our method achieves a ~2 dB improvement in PSNR over single-image methods and ~1 dB over multi-image methods.
Citations: 1
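The abstract does not spell out how the RMA cue is computed. As an illustration only, here is a minimal PyTorch sketch of one plausible aggregation, assuming the burst frames have already been aligned to a reference frame: after alignment the transmission layer stays consistent across frames while reflections shift with hand shake, so a per-pixel median emphasizes transmission.

```python
import torch

def rma_cue(aligned_burst: torch.Tensor) -> torch.Tensor:
    """Aggregate an aligned burst (N, C, H, W) into a transmission-emphasizing cue.

    Assumption (not from the paper): after alignment, the transmission layer is
    stable across frames while reflections move, so a per-pixel median across
    the burst suppresses the reflection component.
    """
    return aligned_burst.median(dim=0).values

# Hypothetical usage: cue = rma_cue(torch.stack(frames)) guides the final stage.
```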
Complementary Cues from Audio Help Combat Noise in Weakly-Supervised Object Detection
2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) · Pub Date: 2023-01-01 · DOI: 10.1109/WACV56688.2023.00222
Cagri Gungor, Adriana Kovashka
Abstract: We tackle the problem of learning object detectors in a noisy environment, one of the significant challenges for weakly-supervised learning. We use multimodal learning to help localize objects of interest, but unlike other methods, we treat audio as an auxiliary modality that helps tackle noise in detection from visual regions. First, we use the audio-visual model to generate new "ground-truth" labels for the training set, removing noise between the visual features and the noisy supervision. Second, we propose an "indirect path" between audio and class predictions, which combines the link between visual and audio regions with the link between visual features and predictions. Third, we propose a sound-based "attention path" that uses complementary audio cues to identify important visual regions. We use contrastive learning to perform region-based audio-visual instance discrimination, which serves as an intermediate task and benefits from the complementary audio cues to boost object classification and detection performance. We show that our methods, which update noisy ground truth and provide indirect and attention paths, greatly boost performance on the AudioSet and VGGSound datasets compared to single-modality predictions, even ones that use contrastive learning. Our method outperforms previous weakly-supervised detectors for object detection, reaching the state of the art on AudioSet, and our sound-localization module outperforms several state-of-the-art methods on AudioSet and MUSIC.
Citations: 1
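The region-based audio-visual instance discrimination step is a contrastive intermediate task. A minimal sketch of such an objective, assuming pooled visual-region embeddings and audio embeddings of the same dimension (the authors' exact formulation may differ):

```python
import torch
import torch.nn.functional as F

def audio_visual_nce(region_feats: torch.Tensor, audio_feats: torch.Tensor,
                     tau: float = 0.07) -> torch.Tensor:
    """InfoNCE-style audio-visual instance discrimination (a sketch).

    region_feats: (B, D) pooled visual-region embeddings.
    audio_feats:  (B, D) audio embeddings from the same B clips.
    The matching (visual, audio) pair of each clip is the positive; every
    other pairing in the batch serves as a negative.
    """
    v = F.normalize(region_feats, dim=-1)
    a = F.normalize(audio_feats, dim=-1)
    logits = v @ a.t() / tau                      # (B, B) similarity matrix
    targets = torch.arange(v.size(0), device=v.device)
    return F.cross_entropy(logits, targets)
```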
MRI Imputation based on Fused Index- and Intensity-Registration
2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) · Pub Date: 2023-01-01 · DOI: 10.1109/WACV56688.2023.00199
Jiyoon Shin, Jungwoo Lee
Abstract: 3D MRI is based on a number of imaging sequences such as T1, T2, T1ce, and Flair, each acquired as a group of two-dimensional scans. In practice, some scans are often missing, while many medical applications require a full set. An MRI imputation method is presented that synthesizes such missing scans. Its key components are index registration and intensity registration: index registration models anatomical differences between two scans in the same imaging sequence, while intensity registration reflects the contrast differences between two scans at the same index. The two registration fields are learned to be invariant and accordingly allow two estimates of a missing scan, one within the corresponding imaging sequence and another along the scan index; the two estimates are combined to yield the final synthesized scan. Experimental results highlight that the proposed method addresses prevalent limitations of previous synthesis methods, blending both structural and contrast aspects and capturing subtle parts of the brain. Quantitative results also show its superiority across various data sets, transitions, and measures.
Citations: 0
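The abstract describes two estimates of the missing scan, one warped within the imaging sequence (index registration) and one mapped from the same index in another sequence (intensity registration). A schematic sketch of the fusion step, with `warp_index` and `warp_intensity` as placeholder callables and simple averaging standing in for the paper's (unspecified) combination:

```python
import torch

def impute_scan(seq_neighbor: torch.Tensor, idx_neighbor: torch.Tensor,
                warp_index, warp_intensity) -> torch.Tensor:
    """Combine the two registration-based estimates of a missing scan.

    seq_neighbor: a scan from the same sequence at a nearby index.
    idx_neighbor: a scan from another sequence at the same index.
    warp_index / warp_intensity: hypothetical learned registration modules.
    """
    est_along_index = warp_index(seq_neighbor)          # anatomical alignment
    est_across_sequence = warp_intensity(idx_neighbor)  # contrast mapping
    return 0.5 * (est_along_index + est_across_sequence)  # naive fusion
```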
Real-time Concealed Weapon Detection on 3D Radar Images for Walk-through Screening System
2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) · Pub Date: 2023-01-01 · DOI: 10.1109/WACV56688.2023.00074
Nagma S. Khan, Kazumine Ogura, E. Cosatto, Masayuki Ariyoshi
Abstract: This paper presents a framework for real-time concealed weapon detection (CWD) on 3D radar images for walk-through screening systems. Such a system aims to ensure security in crowded areas by performing CWD on walking persons, so it requires an accurate, real-time detection approach. To ensure accuracy, a weapon must be detected irrespective of its 3D orientation, so we use 3D radar images as the detection input. To achieve real-time performance, we reformulate classic U-Net-based segmentation networks to perform 3D detection tasks. Our 3D segmentation network predicts a peak-shaped probability map instead of voxel-wise masks, enabling position inference through an elementary peak-detection operation on the predicted map; the peak marks the weapon's position, so the weapon-detection task translates to peak detection on the probability map. A Gaussian function is used to model weapons in the probability map. We experimentally validate our approach on realistic 3D radar images obtained from a walk-through weapon-screening system prototype. Extensive ablation studies verify the effectiveness of the proposed approach over existing conventional approaches. The experimental results demonstrate that our approach performs accurate, real-time CWD, making it suitable for practical walk-through screening.
Citations: 2
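The peak-shaped formulation is concrete enough to sketch: training targets are 3D Gaussians centered on annotated weapon positions, and inference reduces to a local-maximum search on the predicted map. A minimal NumPy/SciPy sketch (parameter values are illustrative, not from the paper):

```python
import numpy as np
from scipy.ndimage import maximum_filter

def gaussian_target(shape, center, sigma=2.0):
    """Peak-shaped training target: a 3D Gaussian at the weapon position."""
    zz, yy, xx = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    d2 = (zz - center[0])**2 + (yy - center[1])**2 + (xx - center[2])**2
    return np.exp(-d2 / (2.0 * sigma**2))

def detect_peaks(prob_map, thresh=0.5):
    """Elementary peak detection: local maxima above a confidence threshold."""
    local_max = maximum_filter(prob_map, size=3) == prob_map
    return np.argwhere(local_max & (prob_map > thresh))  # (K, 3) voxel coords
```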
Meta-Auxiliary Learning for Future Depth Prediction in Videos
2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) · Pub Date: 2023-01-01 · DOI: 10.1109/WACV56688.2023.00571
Huan Liu, Zhixiang Chi, Yuanhao Yu, Yang Wang, Jun Chen, Jingshan Tang
Abstract: We consider a new problem: future depth prediction in videos. Given a sequence of observed frames, the goal is to predict the depth map of a future frame that has not yet been observed. Depth estimation plays a vital role in scene understanding and decision-making for intelligent systems, and predicting future depth maps can help autonomous vehicles anticipate the behaviour of surrounding objects. Our model has a two-branch architecture: one branch performs the primary task of future depth prediction, while the other performs an auxiliary image-reconstruction task that acts as a regularizer. Inspired by recent work on test-time adaptation, we use the auxiliary task during testing to adapt the model to a specific test video. We also propose a novel meta-auxiliary learning scheme that trains the model specifically for effective test-time adaptation. Experimental results demonstrate that our approach outperforms the alternatives.
Citations: 5
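A minimal sketch of the test-time adaptation loop the abstract describes, assuming the model exposes a self-supervised `reconstruct` branch and a primary `predict_depth` branch (hypothetical names; the meta-learned initialization that makes this adaptation effective is not shown):

```python
import torch
import torch.nn.functional as F

def test_time_adapt(model, frames: torch.Tensor, steps: int = 5, lr: float = 1e-4):
    """Adapt to one test video via the auxiliary task, then predict depth.

    frames: (T, C, H, W) observed frames of a single test video.
    Only the self-supervised reconstruction loss is used at test time.
    """
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(steps):
        recon = model.reconstruct(frames)        # auxiliary branch only
        loss = F.l1_loss(recon, frames)
        opt.zero_grad()
        loss.backward()
        opt.step()
    model.eval()
    with torch.no_grad():
        return model.predict_depth(frames)       # future depth map
```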
Relaxing Contrastiveness in Multimodal Representation Learning
2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) · Pub Date: 2023-01-01 · DOI: 10.1109/WACV56688.2023.00226
Zudi Lin, Erhan Bas, Kunwar Yashraj Singh, Gurumurthy Swaminathan, Rahul Bhotika
Abstract: Multimodal representation learning for images paired with raw text can improve the usability and generality of the learned semantic concepts while significantly reducing annotation costs. In this paper, we explore the design space of loss functions in visual-linguistic pretraining frameworks and propose a novel Relaxed Contrastive (ReCo) objective, which acts as a drop-in replacement for the widely used InfoNCE loss. The key insight of ReCo is to allow a relaxed negative space by not penalizing unpaired multimodal samples (i.e., negative pairs) that are already orthogonal or negatively correlated. Unlike InfoNCE, which keeps repelling negative pairs as long as they are not anti-correlated, ReCo by design embraces more diversity and flexibility in the learned embeddings. We conduct extensive experiments using ReCo with state-of-the-art models, pretraining on the MIMIC-CXR dataset of chest radiographs and free-text radiology reports and evaluating on the CheXpert dataset for multimodal retrieval and disease classification. ReCo achieves an absolute improvement of 2.9% over the InfoNCE baseline on the CheXpert retrieval dataset in average retrieval precision, and reports better or comparable performance in linear evaluation and fine-tuning for classification. We further show that ReCo outperforms InfoNCE on the Flickr30K dataset by 1.7% in retrieval Recall@1, demonstrating the generalizability of our approach to natural images.
Citations: 0
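Reading the key insight literally, a ReCo-style relaxation can be sketched by clamping negative-pair similarities at zero before they enter the softmax denominator, so pairs that are already orthogonal or negatively correlated receive no repelling gradient. This is a sketch of the idea, not the published formulation:

```python
import torch
import torch.nn.functional as F

def relaxed_contrastive(img_emb, txt_emb, tau=0.07):
    """InfoNCE variant where negatives with cosine similarity <= 0 are inert."""
    v = F.normalize(img_emb, dim=-1)
    t = F.normalize(txt_emb, dim=-1)
    sim = v @ t.t()                                    # (B, B) cosine similarities
    pos = sim.diagonal() / tau                         # matched image-text pairs
    eye = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    relaxed = sim.clamp(min=0.0) / tau                 # sim <= 0: zero gradient
    relaxed = relaxed.masked_fill(eye, float("-inf"))  # exclude the positive
    denom = torch.logsumexp(torch.cat([pos.unsqueeze(1), relaxed], dim=1), dim=1)
    return (denom - pos).mean()
```

By contrast, plain InfoNCE would keep pushing those negatives toward -1, since their unclamped similarity still appears in the denominator.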
Marker-removal Networks to Collect Precise 3D Hand Data for RGB-based Estimation and its Application in Piano
2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) · Pub Date: 2023-01-01 · DOI: 10.1109/WACV56688.2023.00299
Erwin Wu, Hayato Nishioka, Shinichi Furuya, H. Koike
Abstract: Hand pose analysis is a key step toward understanding the dexterous hand movements involved in many high-level skills, such as playing the piano. Currently, the most accurate hand-tracking systems use fabric- or marker-based sensing, which can disturb the user's performance, while markerless computer-vision methods rely on precise bare-hand datasets for training, which are difficult to obtain. In this paper, we collect a large-scale, high-precision 3D hand pose dataset with little manual effort using a marker-removal network (MR-Net). The proposed MR-Net translates marked-hand images into realistic bare-hand images, and the corresponding 3D postures are captured by a motion-capture system, so few manual annotations are required. A baseline estimation network, PiaNet, is introduced, and we report its accuracy on various metrics together with a blind qualitative test to show the practical effect.
Citations: 0
Graph-Based Self-Learning for Robust Person Re-identification
2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) · Pub Date: 2023-01-01 · DOI: 10.1109/WACV56688.2023.00477
Yuqiao Xian, Jinrui Yang, Fufu Yu, Jun Zhang, Xing Sun
Abstract: Existing deep learning approaches for person re-identification (Re-ID) mostly rely on large-scale, well-annotated training data. However, human-annotated labels are prone to label noise in real-world applications. Previous person Re-ID work mainly focuses on random label noise, which does not properly reflect the characteristics of label noise in practical human annotation. In this work, we find that visual ambiguity noise is a more common and more reasonable noise assumption for person Re-ID annotation. To handle this kind of noise, we propose a simple and effective robust person Re-ID framework, Graph-Based Self-Learning (GBSL), which iteratively learns discriminative representations and rectifies noisy labels using the limited annotated samples for each identity. Considering the practical annotation process in person Re-ID, we further extend the visual ambiguity noise assumption and propose a more practical type of label noise for person Re-ID: tracklet-level label noise (TLN). Without modifying the network architecture or loss function, our approach significantly improves the Re-ID system's robustness to label noise. Our model obtains competitive performance with training data corrupted by various types of label noise and outperforms existing robust Re-ID methods on public benchmarks.
Citations: 1
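The abstract does not detail the graph-based rectification rule; as a generic illustration of label rectification on a similarity graph (not the exact GBSL procedure), one step could reassign each sample the majority label among its nearest neighbors in embedding space:

```python
import torch
import torch.nn.functional as F

def rectify_labels(feats: torch.Tensor, labels: torch.Tensor, k: int = 5):
    """One k-NN-graph rectification step (illustrative, not the GBSL rule).

    feats:  (N, D) current embeddings; labels: (N,) possibly noisy identity ids.
    Visual-ambiguity noise tends to be smoothed by neighborhood majority vote.
    """
    f = F.normalize(feats, dim=-1)
    sim = f @ f.t()                                # (N, N) cosine similarities
    sim.fill_diagonal_(float("-inf"))              # exclude self-edges
    knn = sim.topk(k, dim=1).indices               # (N, k) neighbor indices
    return labels[knn].mode(dim=1).values          # majority label per sample
```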
ProtoSeg: Interpretable Semantic Segmentation with Prototypical Parts
2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) · Pub Date: 2023-01-01 · DOI: 10.1109/WACV56688.2023.00153
Mikolaj Sacha, Dawid Rymarczyk, Lukasz Struski, J. Tabor, Bartosz Zieliński
Abstract: We introduce ProtoSeg, a novel model for interpretable semantic image segmentation that constructs its predictions from similar patches in the training set. To achieve accuracy comparable to baseline methods, we adapt the mechanism of prototypical parts and introduce a diversity loss function that increases the variety of prototypes within each class. We show that ProtoSeg discovers semantic concepts, in contrast to standard segmentation models. Experiments on the Pascal VOC and Cityscapes datasets confirm the precision and transparency of the presented method.
Citations: 6
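A minimal sketch of a diversity term in the spirit of the one described, penalizing pairwise similarity among a class's prototypes so they specialize to different parts (the paper's exact loss may differ):

```python
import torch
import torch.nn.functional as F

def diversity_loss(prototypes: torch.Tensor) -> torch.Tensor:
    """Push the P prototypes (P, D) of one class away from each other.

    Off-diagonal cosine similarities above zero are penalized quadratically,
    encouraging prototypes to cover distinct semantic parts.
    """
    p = F.normalize(prototypes, dim=-1)
    sim = p @ p.t()                                   # (P, P) similarities
    off_diag = sim - torch.eye(p.size(0), device=p.device)
    return off_diag.clamp(min=0).pow(2).mean()
```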
Dynamic Re-weighting for Long-tailed Semi-supervised Learning
2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) · Pub Date: 2023-01-01 · DOI: 10.1109/WACV56688.2023.00640
Hanyu Peng, Weiguo Pian, Mingming Sun, P. Li
Abstract: Semi-supervised learning (SSL) greatly reduces human annotation by demanding only a small number of labelled samples alongside a large number of unlabelled samples. The research community has typically developed SSL under the assumption of a balanced data set; in contrast, real data is often imbalanced or even long-tailed, so studying SSL under imbalance is critical. In this paper, we extend FixMatch (an SSL method) to the imbalanced case. We find that the unlabeled data is also highly imbalanced during training; to address this, we propose a re-weighting solution based on the effective number of samples. Furthermore, since prediction uncertainty causes temporal variation in the number of pseudo-labels, we propose a novel dynamic re-weighting scheme on the unlabeled data. The simplicity and validity of our method are backed by experimental evidence: on the CIFAR-10, CIFAR-100, and ImageNet127 data sets, our approach provides the strongest results against previous methods across various scales of imbalance.
Citations: 1
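The "effective number" most likely refers to the quantity from Cui et al. (CVPR 2019), E_n = (1 - beta^n) / (1 - beta); exactly how the paper plugs it in is an assumption here. A minimal sketch of such class weights, which in the dynamic scheme would be recomputed from the current pseudo-label counts as they vary over training:

```python
import torch

def effective_number_weights(class_counts, beta: float = 0.999) -> torch.Tensor:
    """Class-balanced weights w_c proportional to (1 - beta) / (1 - beta**n_c).

    class_counts: per-class sample counts, e.g. pseudo-label counts this epoch.
    Assumes the Cui et al. effective-number formula; the paper's exact usage
    may differ.
    """
    counts = torch.as_tensor(class_counts, dtype=torch.float)
    eff_num = (1.0 - torch.pow(beta, counts)) / (1.0 - beta)
    weights = 1.0 / eff_num
    return weights * len(counts) / weights.sum()   # normalize to mean 1

# e.g. effective_number_weights([5000, 500, 50]) up-weights the tail classes
```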