2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV): Latest Publications

Improving the Robustness of Point Convolution on k-Nearest Neighbor Neighborhoods with a Viewpoint-Invariant Coordinate Transform
2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Pub Date: 2023-01-01 DOI: 10.1109/WACV56688.2023.00134
Xingyi Li, Wenxuan Wu, Xiaoli Z. Fern, Li Fuxin
Abstract: Recently, there has been significant interest in performing convolution over irregularly sampled point clouds. Point clouds are very different from raster images in that one cannot define a regular sampling grid on them, which makes robustness under irregular neighborhoods an important issue. The k-nearest neighbor (kNN) neighborhood is especially challenging for generalization because the locations of the neighbors can differ greatly between training and testing time. To improve robustness to different neighborhood samplings, this paper proposes a novel viewpoint-invariant coordinate transform as an input to the weight-generating function for point convolution, in addition to the regular 3D coordinates. This allows the network to be fed non-invariant, scale-invariant, and scale+rotation-invariant coordinates simultaneously, so that it can automatically learn which to include in the convolution function. Empirically, we demonstrate that this improves the performance of point cloud convolutions on the SemanticKITTI and ScanNet datasets, as well as the robustness to significant test-time downsampling, which can substantially change the distances of neighbors in a kNN neighborhood. Among pure point-based approaches, we achieve semantic segmentation performance comparable to the point-based convolution framework KPConv on SemanticKITTI and ScanNet, while being significantly more efficient by virtue of using a kNN neighborhood instead of an ϵ-ball.
Citations: 2
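To make the idea of invariant coordinate inputs concrete, here is a minimal NumPy sketch that, for each point, gathers its kNN offsets and adds a scale-invariant copy normalized by the mean neighbor distance. The function name, the normalization choice, and the idea of feeding both versions to the weight-generating MLP are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def knn_invariant_coords(points, k=16, eps=1e-8):
    """Gather kNN offsets per point and append a scale-invariant copy
    (hypothetical simplification of a viewpoint-invariant transform)."""
    # pairwise squared distances (O(n^2); fine for a small demo cloud)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]            # skip the point itself
    rel = points[idx] - points[:, None, :]              # (n, k, 3) raw offsets
    scale = np.linalg.norm(rel, axis=-1).mean(axis=1, keepdims=True)  # (n, 1)
    rel_scaled = rel / (scale[..., None] + eps)          # scale-invariant offsets
    # both versions would be fed to the weight-generating MLP
    return np.concatenate([rel, rel_scaled], axis=-1)    # (n, k, 6)

cloud = np.random.rand(1024, 3).astype(np.float32)
feats = knn_invariant_coords(cloud)
print(feats.shape)  # (1024, 16, 6)
```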
Discrete Cosin TransFormer: Image Modeling From Frequency Domain
2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Pub Date: 2023-01-01 DOI: 10.1109/WACV56688.2023.00543
Xinyu Li, Yanyi Zhang, Jianbo Yuan, Hanlin Lu, Yibo Zhu
Abstract: In this paper, we propose the Discrete Cosin TransFormer (DCFormer), which learns semantics directly from a DCT-based frequency-domain representation. We first show that transformer-based networks can learn semantics directly from a frequency-domain representation based on the discrete cosine transform (DCT) without compromising performance. To achieve the desired efficiency-effectiveness trade-off, we then compress the input information in its frequency-domain representation, highlighting the visually significant signals, inspired by JPEG compression. We explore different frequency-domain downsampling strategies and show that it is possible to preserve semantically meaningful information by strategically dropping the high-frequency components. The proposed DCFormer is tested on various downstream tasks, including image classification, object detection, and instance segmentation; it achieves performance comparable to the state of the art with fewer FLOPs and outperforms commonly used backbones (e.g., Swin) at similar FLOPs. Our ablation results also show that the proposed method generalizes well across different transformer backbones.
Citations: 2
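As a rough illustration of the frequency-domain input described above, the sketch below applies a JPEG-style blockwise 2D DCT and keeps only the low-frequency coefficients as per-block tokens. The block size, the number of retained coefficients, and the token layout are assumptions for illustration; DCFormer's actual pipeline may differ.

```python
import numpy as np
from scipy.fftpack import dct

def blockwise_dct_lowfreq(img, block=8, keep=4):
    """Split a grayscale image into 8x8 blocks, apply a 2D DCT per block
    (as in JPEG), and keep only the top-left keep x keep low-frequency
    coefficients as one token per block."""
    h, w = img.shape
    h, w = h - h % block, w - w % block
    img = img[:h, :w]
    blocks = img.reshape(h // block, block, w // block, block).transpose(0, 2, 1, 3)
    coeffs = dct(dct(blocks, axis=-1, norm='ortho'), axis=-2, norm='ortho')
    lowfreq = coeffs[..., :keep, :keep]                  # drop high frequencies
    return lowfreq.reshape(h // block, w // block, keep * keep)

tokens = blockwise_dct_lowfreq(np.random.rand(224, 224))
print(tokens.shape)  # (28, 28, 16): one 16-dim frequency token per 8x8 block
```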
RADIANT: Better rPPG estimation using signal embeddings and Transformer
2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Pub Date: 2023-01-01 DOI: 10.1109/WACV56688.2023.00495
Anup Kumar Gupta, Rupesh Kumar, L. Birla, Puneet Gupta
Abstract: Remote photoplethysmography (rPPG) enables non-contact heart rate (HR) estimation by analyzing the skin color variations in face videos. These variations are subtle, imperceptible to human eyes, and easily affected by noise. Existing deep learning-based rPPG estimators fall short for three reasons. First, they suppress noise by using information from the whole face, even though different facial regions have different noise characteristics. Second, local noise characteristics inherently affect convolutional neural network (CNN) architectures. Third, sequential CNN architectures fail to preserve long temporal dependencies. To address these issues, we propose RADIANT: rPPG estimation using signal embeddings and a Transformer. Our architecture uses a multi-head attention mechanism that facilitates feature-subspace learning to extract the multiple correlations among the color variations corresponding to the periodic pulse. Its global information processing ability also helps suppress local noise characteristics. Furthermore, we propose novel signal embeddings to enhance the rPPG feature representation and suppress noise. We also improve the generalization of our architecture by adding a new training set; to this end, we explore the effectiveness of synthetic temporal signals and data augmentations. Experiments on widely used rPPG datasets demonstrate that our architecture outperforms previous well-known architectures. Code: https://github.com/Deep-Intelligence-Lab/RADIANT.git
Citations: 4
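A simple way to picture the kind of signal input such rPPG models work with: average the color of coarse facial patches over time to obtain one temporal trace per region, then embed those traces as tokens. The grid-based region layout and zero-mean normalization below are illustrative assumptions, not RADIANT's exact signal embedding.

```python
import numpy as np

def region_color_signals(frames, grid=(5, 5)):
    """Turn a face video (T, H, W, 3) into per-region temporal color signals
    by averaging RGB over a coarse grid of facial patches."""
    t, h, w, c = frames.shape
    gh, gw = grid
    ph, pw = h // gh, w // gw
    signals = np.zeros((gh * gw, t, c), dtype=np.float32)
    for i in range(gh):
        for j in range(gw):
            patch = frames[:, i * ph:(i + 1) * ph, j * pw:(j + 1) * pw, :]
            signals[i * gw + j] = patch.mean(axis=(1, 2))   # (T, 3) trace
    # zero-mean each trace before embedding / attention
    return signals - signals.mean(axis=1, keepdims=True)

video = np.random.rand(300, 100, 100, 3)          # 300 frames of a face crop
tokens = region_color_signals(video)
print(tokens.shape)  # (25, 300, 3): one temporal signal per facial region
```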
Meta-Learning for Adaptation of Deep Optical Flow Networks
2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Pub Date: 2023-01-01 DOI: 10.1109/WACV56688.2023.00218
Chaerin Min, Tae Hyun Kim, Jongwoo Lim
Abstract: In this paper, we propose an instance-wise meta-learning algorithm for optical flow domain adaptation. Typical deep learning-based optical flow algorithms suffer from weak cross-domain performance since their training largely relies on synthetic datasets from specific domains, which prevents performance on one scene from carrying over to different scenes in practice. Meanwhile, test-time domain adaptation approaches for optical flow estimation have yet to be studied. Our proposed method, given some training data, learns to adapt more sensitively to incoming inputs in the target domain. During inference, our method readily exploits information that is only accessible at test time. Since our algorithm adapts to each input image, we incorporate traditional unsupervised losses for optical flow estimation. Moreover, observing that optical flows within a single domain typically contain many similar motions, we show that our method achieves high performance with only a small amount of training data, which saves labeling effort. Through experiments on the KITTI and MPI-Sintel datasets, our algorithm significantly outperforms the results without adaptation and shows consistently better performance than typical fine-tuning with the same amount of data. Qualitatively, our method also produces more accurate results for images on which the original networks have high errors.
Citations: 3
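The unsupervised losses mentioned above are typically photometric: warp the second frame with the predicted flow and penalize the difference from the first frame. The PyTorch sketch below shows such a loss and, in comments, a single hypothetical test-time adaptation step; the paper's actual losses and meta-learned update rule are more involved.

```python
import torch
import torch.nn.functional as F

def photometric_loss(img1, img2, flow):
    """Warp img2 toward img1 using the predicted flow (B, 2, H, W) and
    penalize the L1 difference; a generic unsupervised flow objective."""
    b, _, h, w = img1.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing='ij')
    grid = torch.stack((xs, ys), dim=0).float().to(flow)     # (2, H, W) pixel coords
    coords = grid[None] + flow                                # add predicted displacement
    # normalize to [-1, 1] for grid_sample
    coords_x = 2 * coords[:, 0] / (w - 1) - 1
    coords_y = 2 * coords[:, 1] / (h - 1) - 1
    warped = F.grid_sample(img2, torch.stack((coords_x, coords_y), dim=-1),
                           align_corners=True)
    return (img1 - warped).abs().mean()

# one hypothetical inner adaptation step on a test pair:
# flow_net = ...; optimizer = torch.optim.SGD(flow_net.parameters(), lr=1e-5)
# loss = photometric_loss(img1, img2, flow_net(img1, img2)); loss.backward(); optimizer.step()
```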
Vis2Rec: A Large-Scale Visual Dataset for Visit Recommendation
2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Pub Date: 2023-01-01 DOI: 10.1109/WACV56688.2023.00300
Michael Soumm, Adrian Daniel Popescu, Bertrand Delezoide
Abstract: Most recommendation datasets for tourism are restricted to one world region and rely on explicit data such as check-ins. In reality, however, tourists visit various places worldwide and document their trips primarily through photos. These images contain a wealth of raw information that can be used to capture users' preferences and recommend personalized content. Visual content has been used in past work, but no large-scale, publicly available dataset giving access to users' personal images exists for recommender systems. As such a resource would open up possibilities for new image-based recommendation algorithms, we introduce Vis2Rec, a new dataset based on visit data extracted from users' Flickr photographic streams, which includes over 7 million photos, 36k recognizable points of interest, and 14k user profiles. Google Landmarks v2 is used as an auxiliary dataset to identify points of interest in users' photos, using a state-of-the-art image-matching deep architecture. Image-based user profiles are then constructed by aggregating the points of interest detected for each user. In addition, ground-truth visits were determined for the test subset to enable accurate evaluation. Finally, we benchmark Vis2Rec using various existing recommender systems and discuss the possibilities opened up by the availability of user images, as well as the societal issues that come with them. Following good practice in dataset sharing, Vis2Rec is created using only freely distributable content, and additional anonymization is performed to ensure user privacy. The raw dataset and the preprocessed user profiles will be publicly available at https://github.com/MSoumm/Vis2Rec.
Citations: 1
Content-Based Music-Image Retrieval Using Self- and Cross-Modal Feature Embedding Memory
2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Pub Date: 2023-01-01 DOI: 10.1109/WACV56688.2023.00221
Takayuki Nakatsuka, Masahiro Hamasaki, Masataka Goto
Abstract: This paper describes a method based on deep metric learning for content-based cross-modal retrieval of a piece of music and its representative image (i.e., a music audio signal and its cover art image). We train music and image encoders so that the embeddings of a positive music-image pair lie close to each other, while those of a random pair lie far from each other, in a shared embedding space. Furthermore, we propose a mechanism called self- and cross-modal feature embedding memory, which stores both the music and image embeddings of any previous iterations in memory and enables the encoders to mine informative pairs for training. To perform such training, we constructed a dataset containing 78,325 music-image pairs. We demonstrate the effectiveness of the proposed mechanism on this dataset: specifically, our mechanism outperforms baseline methods by ×1.93 ∼ 3.38 for the mean reciprocal rank, ×2.19 ∼ 3.56 for recall@50, and 528 ∼ 891 ranks for the median rank.
Citations: 2
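The embedding-memory idea can be sketched as a FIFO bank of past embeddings that supplies extra negatives for a cross-modal contrastive loss. The queue size, the InfoNCE-style loss, and the single-direction (music-to-image) formulation below are assumptions for illustration, not the paper's exact self- and cross-modal memory.

```python
import torch
import torch.nn.functional as F

class EmbeddingMemory:
    """FIFO memory of past (normalized) embeddings used to mine negatives."""
    def __init__(self, dim=128, size=4096):
        self.bank = F.normalize(torch.randn(size, dim), dim=1)
        self.ptr = 0

    def update(self, emb):
        n = emb.shape[0]
        idx = (self.ptr + torch.arange(n)) % self.bank.shape[0]
        self.bank[idx] = F.normalize(emb.detach(), dim=1)
        self.ptr = int((self.ptr + n) % self.bank.shape[0])

def cross_modal_loss(music_emb, image_emb, image_mem, tau=0.07):
    """InfoNCE-style loss: paired image is the positive, memory entries are negatives."""
    m = F.normalize(music_emb, dim=1)
    i = F.normalize(image_emb, dim=1)
    pos = (m * i).sum(1, keepdim=True)                   # paired music-image similarity
    neg = m @ image_mem.bank.t()                         # similarities to memory negatives
    logits = torch.cat([pos, neg], dim=1) / tau
    return F.cross_entropy(logits, torch.zeros(m.shape[0], dtype=torch.long))

mem = EmbeddingMemory()
loss = cross_modal_loss(torch.randn(32, 128), torch.randn(32, 128), mem)
mem.update(torch.randn(32, 128))
```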
Contrastive Learning of Semantic Concepts for Open-set Cross-domain Retrieval
2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Pub Date: 2023-01-01 DOI: 10.1109/WACV56688.2023.00410
Aishwarya Agarwal, S. Karanam, Balaji Vasan Srinivasan, Biplab Banerjee
Abstract: We consider the problem of image retrieval where query images during testing belong to classes and domains both unseen during training. This requires learning a feature space that generalizes across both classes and domains together. To this end, we propose the semantic contrastive concept network (SCNNet), a new learning framework that takes a step towards class and domain generalization in a principled fashion. Unlike existing methods that rely on global object representations, SCNNet learns local feature vectors to facilitate unseen-class generalization. Its key innovations are (a) a novel trainable local concept extraction module that learns an orthonormal set of basis vectors, and (b) computing local features for any unseen-class data as a linear combination of the learned basis set. Next, to enable unseen-domain generalization, SCNNet generates supervisory signals from an adjacent data modality, i.e., natural language, by mining freely available textual label information associated with images. SCNNet derives these signals from our novel trainable semantic ordinal distance constraints that ensure semantic consistency between pairs of images sampled from different domains. Both modules enable end-to-end training of SCNNet, resulting in a model that establishes state-of-the-art performance on the standard DomainNet, PACS, and Sketchy benchmark datasets, with average Prec@200 improvements of 42.6%, 6.5%, and 13.6%, respectively, over the most recently reported results.
Citations: 0
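One plausible reading of the local concept extraction module is a learned basis onto which local features are projected and then reconstructed as a linear combination, with a soft orthonormality penalty on the basis. The PyTorch sketch below follows that reading; the number of concepts, the penalty form, and the module name are assumptions, not SCNNet's exact design.

```python
import torch
import torch.nn as nn

class LocalConceptProjection(nn.Module):
    """Learn a set of basis vectors and express each local feature as a
    linear combination of them; orthonormality is encouraged with a penalty."""
    def __init__(self, feat_dim=512, n_concepts=64):
        super().__init__()
        self.basis = nn.Parameter(torch.randn(n_concepts, feat_dim) * 0.02)

    def forward(self, local_feats):                 # (B, N, feat_dim)
        coeffs = local_feats @ self.basis.t()       # coordinates in concept space
        recon = coeffs @ self.basis                 # linear combination of the basis
        return coeffs, recon

    def ortho_penalty(self):
        gram = self.basis @ self.basis.t()
        return ((gram - torch.eye(gram.shape[0])) ** 2).mean()

proj = LocalConceptProjection()
coeffs, recon = proj(torch.randn(4, 49, 512))       # e.g. 7x7 local features
loss = ((recon - torch.randn(4, 49, 512)) ** 2).mean() + 0.1 * proj.ortho_penalty()
```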
Normality Guided Multiple Instance Learning for Weakly Supervised Video Anomaly Detection
2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Pub Date: 2023-01-01 DOI: 10.1109/WACV56688.2023.00269
S. Park, H. Kim, Minsu Kim, Dahye Kim, K. Sohn
Abstract: Weakly supervised Video Anomaly Detection (wVAD) aims to distinguish anomalies from normal events based on video-level supervision. Most existing works utilize Multiple Instance Learning (MIL) with ranking loss to tackle this task. These methods, however, rely on noisy predictions from a MIL-based classifier for target instance selection in the ranking loss, degrading model performance. To overcome this problem, we propose the Normality Guided Multiple Instance Learning (NG-MIL) framework, which encodes diverse normal patterns from noise-free normal videos into prototypes for constructing a similarity-based classifier. By ensembling the predictions of two classifiers, our method refines the anomaly scores, reducing training instability from weak labels. Moreover, we introduce normality clustering and a normality guided triplet loss constraining inner-bag instances to boost the effect of NG-MIL and increase the discriminability of the classifiers. Extensive experiments on three public datasets (ShanghaiTech, UCF-Crime, XD-Violence) demonstrate that our method is comparable to or better than existing weakly supervised methods, achieving state-of-the-art results.
Citations: 3
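For context, the standard MIL ranking loss that such weakly supervised methods build on scores video segments and requires the top-scoring segment of an anomalous video to exceed the top-scoring segment of a normal video by a margin. The sketch below implements that baseline loss, not NG-MIL's full prototype and ensemble formulation.

```python
import torch

def mil_ranking_loss(scores_abnormal, scores_normal, margin=1.0):
    """Classic MIL ranking loss for weakly supervised VAD: the highest segment
    score of an anomalous video should beat that of a normal video by a margin.
    Inputs are (batch, n_segments) anomaly scores in [0, 1]."""
    top_abn = scores_abnormal.max(dim=1).values     # (B,) top segment per abnormal video
    top_nrm = scores_normal.max(dim=1).values       # (B,) top segment per normal video
    return torch.clamp(margin - top_abn + top_nrm, min=0).mean()

# 32 segments per video, batch of 8 abnormal and 8 normal videos
loss = mil_ranking_loss(torch.rand(8, 32), torch.rand(8, 32))
```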
RIFT: Disentangled Unsupervised Image Translation via Restricted Information Flow
2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Pub Date: 2023-01-01 DOI: 10.1109/WACV56688.2023.00245
Ben Usman, D. Bashkirova, Kate Saenko
Abstract: Unsupervised image-to-image translation methods aim to map images from one domain into plausible examples from another domain while preserving the structure shared across the two domains. In the many-to-many setting, an additional guidance example from the target domain is used to determine the domain-specific factors of variation of the generated image. In the absence of attribute annotations, methods have to infer from data during training which factors of variation are specific to each domain. In this paper, we show that many state-of-the-art architectures implicitly treat textures and colors as always being domain-specific, and thus fail when they are not. We propose a new method called RIFT that does not rely on such inductive architectural biases and instead infers which attributes are domain-specific versus shared directly from data. As a result, RIFT achieves consistently high cross-domain manipulation accuracy across multiple datasets spanning a wide variety of domain-specific and shared factors of variation.
Citations: 1
A Deep Neural Framework to Detect Individual Advertisement (Ad) from Videos
2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Pub Date: 2023-01-01 DOI: 10.1109/WACV56688.2023.00357
Z. Liu
Abstract: Detecting commercial ads in a video is important. For example, commercial break frequency and duration are two metrics for measuring the user experience of streaming service providers such as Amazon Freevee. Detection can be done intrusively, by intercepting network traffic and then parsing the service provider's data and logs, or non-intrusively, by capturing the videos streamed by content providers and analyzing them with computer vision technologies. In this paper, we present a non-intrusive framework that is able not only to detect an ad section, but also to segment out individual ads. We show that our algorithm is scalable, because it uses lightweight audio data for global segmentation, and that it works across domains (movies, TV, and live-streamed sports) captured from popular streaming services such as Freevee and Prime Video (PV) live sports.
Citations: 0