Latest Publications: 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)

Hard-Negatives or Non-Negatives? A Hard-Negative Selection Strategy for Cross-Modal Retrieval Using the Improved Marginal Ranking Loss
2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) | Pub Date: 2021-10-01 | DOI: 10.1109/ICCVW54120.2021.00261
Damianos Galanopoulos, V. Mezaris
Abstract: Cross-modal learning has gained a lot of interest recently, and many applications of it, such as image-text retrieval, cross-modal video search, or video captioning have been proposed. In this work, we deal with the cross-modal video retrieval problem. The state-of-the-art approaches are based on deep network architectures, and rely on mining hard-negative samples during training to optimize the selection of the network's parameters. Starting from a state-of-the-art cross-modal architecture that uses the improved marginal ranking loss function, we propose a simple strategy for hard-negative mining to identify which training samples are hard-negatives and which, although presently treated as hard-negatives, are likely not negative samples at all and shouldn't be treated as such. Additionally, to take full advantage of network models trained using different design choices for hard-negative mining, we examine model combination strategies, and we design a hybrid one effectively combining large numbers of trained models.
Citations: 3

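As an illustration of the hard-negative selection idea described in the abstract above, the following PyTorch sketch mines, for each video-text pair in a batch, the hardest in-batch negative for a marginal ranking loss while discarding "negatives" that score at least as high as the positive pair, treating them as likely non-negatives. The exclusion rule, the single (video-to-text) direction, and all names and defaults are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def hard_negative_ranking_loss(video_emb, text_emb, margin=0.2):
    """Marginal ranking loss with hard-negative mining (illustrative sketch).

    Assumption: any negative scoring at least as high as the positive pair is
    treated as a likely non-negative and excluded from mining; the paper's
    actual selection criterion may differ.
    """
    v = F.normalize(video_emb, dim=1)
    t = F.normalize(text_emb, dim=1)
    sim = v @ t.t()                        # (B, B) cosine similarities
    pos = sim.diag().unsqueeze(1)          # positive pair scores, (B, 1)

    eye = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    neg = sim.masked_fill(eye, float('-inf'))           # drop the positives
    neg = neg.masked_fill(neg >= pos, float('-inf'))    # drop likely non-negatives

    hardest = neg.max(dim=1).values.unsqueeze(1)        # hardest remaining negative
    return torch.clamp(margin + hardest - pos, min=0).mean()

# Usage with random embeddings standing in for encoder outputs.
loss = hard_negative_ranking_loss(torch.randn(8, 512), torch.randn(8, 512))
```
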
Intelligent Radiomic Analysis of Q-SPECT/CT images to optimize pulmonary embolism diagnosis in COVID-19 patients
2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) | Pub Date: 2021-10-01 | DOI: 10.1109/ICCVW54120.2021.00054
D. Gil, S. Baeza, C. Sánchez, G. Torres, I. García-Olivé, G. Moragas, J. Deportós, M. Salcedo, A. Rosell
Abstract: Coronavirus disease 2019 (COVID-19) pneumonia is associated with a high rate of pulmonary embolism (PE). In patients with contraindications for CT pulmonary angiography (CTPA) or with non-diagnostic CTPA, perfusion single photon emission computed tomography/computed tomography (Q-SPECT/CT) is a diagnostic option. The goal of this work is to develop an Intelligent Radiomic system for the detection of PE in COVID-19 patients from the analysis of Q-SPECT/CT scans. Our Intelligent Radiomic System for identification of patients with PE (with/without pneumonia) is based on a local analysis of SPECT-CT volumes that considers both CT and SPECT values for each volume point. We present a hybrid approach that uses radiomic features extracted from each scan as input to a siamese classification network trained to discriminate among four different types of tissue: no pneumonia without PE (control group), no pneumonia with PE, pneumonia without PE, and pneumonia with PE. The proposed radiomic system has been tested on 133 patients, 63 with COVID-19 (26 with PE, 22 without PE, 15 indeterminate PE) and 70 without COVID-19 (31 healthy/control, 39 with PE). The per-patient recall for the detection of COVID-19 pneumonia and COVID-19 pneumonia with PE was, respectively, 91% and 81%, with areas under the receiver operating characteristic curve equal to 0.99 and 0.87.
Citations: 1

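A minimal sketch of the kind of pipeline the abstract describes, assuming precomputed radiomic feature vectors per volume: a shared encoder embeds the vectors, a pairwise contrastive term pulls same-class volumes together, and a linear head discriminates the four tissue groups. The layer sizes, feature dimension, and loss weighting are placeholders; the paper's actual radiomic features and network design are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 4  # control, PE only, pneumonia only, pneumonia + PE

class SiameseRadiomicNet(nn.Module):
    """Shared encoder + classifier over radiomic feature vectors (sketch)."""
    def __init__(self, in_dim=100, emb_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, emb_dim),
        )
        self.classifier = nn.Linear(emb_dim, NUM_CLASSES)

    def forward(self, x):
        z = self.encoder(x)
        return z, self.classifier(z)

def siamese_loss(model, xa, xb, ya, yb, margin=1.0, alpha=0.5):
    """Cross-entropy on both branches plus a contrastive term on the pair."""
    za, logits_a = model(xa)
    zb, logits_b = model(xb)
    ce = F.cross_entropy(logits_a, ya) + F.cross_entropy(logits_b, yb)
    dist = F.pairwise_distance(za, zb)
    same = (ya == yb).float()
    contrastive = same * dist.pow(2) + (1 - same) * F.relu(margin - dist).pow(2)
    return ce + alpha * contrastive.mean()

# Example with random vectors standing in for radiomic features of paired volumes.
model = SiameseRadiomicNet()
xa, xb = torch.randn(16, 100), torch.randn(16, 100)
ya, yb = torch.randint(0, NUM_CLASSES, (16,)), torch.randint(0, NUM_CLASSES, (16,))
print(siamese_loss(model, xa, xb, ya, yb))
```
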
Learning Where to Cut from Edited Videos
2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) | Pub Date: 2021-10-01 | DOI: 10.1109/ICCVW54120.2021.00360
Yuzhong Huang, Xue Bai, Oliver Wang, Fabian Caba, A. Agarwala
Abstract: In this work we propose a new approach for accelerating the video editing process by identifying good moments in time to cut unedited videos. We first validate with a user study that there is indeed a consensus among human viewers about good and bad cut moments, and then formulate this problem as a classification task. In order to train for such a task, we propose a self-supervised scheme that only requires pre-existing edited videos for training, of which large and diverse data is readily available. We then propose a contrastive learning framework to train a 3D ResNet model to predict good regions to cut. We validate our method with a second user study, which indicates that clips generated by our model are preferred over a number of baselines.
Citations: 1

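A rough sketch of how the classification/contrastive formulation above could look, assuming torchvision's r3d_18 as the 3D ResNet backbone: a scorer rates a short clip, and a pairwise margin loss pushes clips centered on real cut points in edited videos above clips sampled away from cuts. The backbone choice, clip size, and loss form are assumptions for illustration, not the authors' exact training scheme.

```python
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

class CutScorer(nn.Module):
    """Scores how suitable a clip's center is as a cut point (sketch)."""
    def __init__(self):
        super().__init__()
        self.backbone = r3d_18()           # 3D ResNet feature extractor
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

    def forward(self, clip):               # clip: (B, 3, T, H, W)
        return self.backbone(clip).squeeze(1)   # (B,) cut-suitability scores

def cut_ranking_loss(scorer, cut_clips, non_cut_clips, margin=1.0):
    """Self-supervised ranking: real cut moments should outscore random moments."""
    pos = scorer(cut_clips)        # clips centered on cuts found in edited videos
    neg = scorer(non_cut_clips)    # clips sampled away from any cut
    return torch.clamp(margin - pos + neg, min=0).mean()

# Tiny usage example with random tensors standing in for decoded clips.
scorer = CutScorer()
loss = cut_ranking_loss(scorer, torch.randn(2, 3, 16, 112, 112),
                        torch.randn(2, 3, 16, 112, 112))
```
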
Egocentric Biochemical Video-and-Language Dataset
2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) | Pub Date: 2021-10-01 | DOI: 10.1109/ICCVW54120.2021.00348
Taichi Nishimura, Kojiro Sakoda, Atsushi Hashimoto, Y. Ushiku, Natsuko Tanaka, F. Ono, Hirotaka Kameko, Shinsuke Mori
Abstract: This paper proposes a novel biochemical video-and-language (BioVL) dataset, which consists of experimental videos, corresponding protocols, and annotations of the alignment between events in the video and instructions in the protocol. The key strength of the dataset is its user-oriented design of data collection. We envision that biochemical researchers will easily be able to record videos and share them so that other researchers can replicate the experiments in the future. To minimize the burden of video recording, we adopted unedited first-person video as the visual source. As a result, we collected 16 videos from four protocols with a total length of 1.6 hours. In our experiments, we conduct two zero-shot video-and-language tasks on the BioVL dataset. Our experimental results show that there is large room for improvement for practical use, even when utilizing a state-of-the-art pre-trained video-and-language joint embedding model. We will release the BioVL dataset. To our knowledge, this work is the first attempt to release a biochemical video-and-language dataset.
Citations: 2

SCAT: Stride Consistency with Auto-regressive regressor and Transformer for hand pose estimation
2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) | Pub Date: 2021-10-01 | DOI: 10.1109/ICCVW54120.2021.00256
Daiheng Gao, Bang Zhang, Qi Wang, Xindi Zhang, Pan Pan, Yinghui Xu
Abstract: The current state-of-the-art monocular 3D hand pose estimation methods are mostly model-based. For instance, MANO is one of the most popular hand parametric models, which can depict hand shapes and poses. It is widely adopted for estimating hand poses in images and videos. However, MANO is a parametric model derived from scanned hand data with limited shapes and poses, which constrains its capability in depicting in-the-wild shape and pose variations. In this paper, we propose a 3D hand pose estimation approach which does not depend on any parametric hand model yet can still accurately estimate in-the-wild hand poses. Our approach (Stride Consistency with Auto-regressive regressor and Transformer, SCAT) offers a new representation for measuring hand poses. The new representation includes a mean-shape hand template and its 21 hand joint offsets depicting the 3D distances between the hand template and the hand that needs to be estimated. Besides, SCAT can generate a robust and smooth linear mapping between visual feature maps and the target 3D offsets, ensuring inter-frame smoothness and removing motion jittering. We also introduce an auto-regressive refinement procedure for iteratively refining the hand pose estimation. Extensive experiments show that our SCAT can generate more accurate and smoother 3D hand pose estimation results compared with state-of-the-art methods.
Citations: 3

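The offset-based representation can be illustrated in a few lines: absolute joint positions are recovered by adding predicted per-joint 3D offsets to a fixed mean-shape template, and a simple temporal term penalizes frame-to-frame jitter. The template values, tensor shapes, and smoothness term are placeholders, not the paper's actual hand template or loss.

```python
import torch

NUM_JOINTS = 21

# Placeholder mean-shape template: 21 joints in 3D (the real template would
# come from a canonical hand model / training data, not random numbers).
template_joints = torch.randn(NUM_JOINTS, 3)

def joints_from_offsets(offsets):
    """offsets: (T, 21, 3) predicted per-frame offsets -> absolute joint positions."""
    return template_joints.unsqueeze(0) + offsets

def smoothness_loss(joints):
    """Penalize frame-to-frame jitter in the recovered joint trajectory."""
    return (joints[1:] - joints[:-1]).pow(2).mean()

# Example: a 10-frame sequence of predicted offsets.
offsets = torch.randn(10, NUM_JOINTS, 3) * 0.01
joints = joints_from_offsets(offsets)          # (10, 21, 3)
print(smoothness_loss(joints))
```
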
Audi-Exchange: AI-Guided Hand-based Actions to Assist Human-Human Interactions for the Blind and the Visually Impaired
2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) | Pub Date: 2021-10-01 | DOI: 10.1109/ICCVW54120.2021.00197
Daohan Lu, Yi Fang
Abstract: Vision loss or low vision poses significant challenges to blind-or-visually-impaired (BVI) individuals when interacting with humans and objects. Although many apps and assistive devices can help them better interact with the environment and objects, the current state of assistive technology leaves human-human interaction needs of the BVI largely unaddressed. Because of this, we introduce a new wearable mobile assistive platform, named Audi-Exchange, to address part of the problem. Developed with mobile-optimized computer vision and audio engineering techniques, Audi-Exchange facilitates a specific area of human-human interaction by helping the BVI user accurately locate another person's hand with spatial audio in order to pass objects over to or receive objects from the other person. Audi-Exchange differs from existing academic and commercial assistive technologies in that it is intuitive to use and non-intrusive when worn. We conduct several experiments to investigate Audi-Exchange's effectiveness as an assistive human-human interaction tool and discover encouraging results.
Citations: 2

Temporal Cues from Socially Unacceptable Trajectories for Anomaly Detection
2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) | Pub Date: 2021-10-01 | DOI: 10.1109/ICCVW54120.2021.00244
Neelu Madan, Arya Farkhondeh, Kamal Nasrollahi, Sergio Escalera, T. Moeslund
Abstract: State-of-the-Art (SoTA) deep learning-based approaches to detect anomalies in surveillance videos utilize limited temporal information, including basic information from motion, e.g., optical flow computed between consecutive frames. In this paper, we complement the SoTA methods by including long-range dependencies from trajectories for anomaly detection. To achieve that, we first created trajectories by running a tracker on two SoTA datasets, namely Avenue and Shanghai-Tech. We propose a prediction-based anomaly detection method using trajectories based on Social GANs, also referred to in this paper as temporal-based anomaly detection. Then, we hypothesize that late fusion of the result of this temporal-based anomaly detection system with spatial-based anomaly detection systems produces SoTA results. We verify this hypothesis on two spatial-based anomaly detection systems. We show that both cases produce results better than the baseline spatial-based systems, indicating the usefulness of the temporal information coming from the trajectories for anomaly detection. We observe that the proposed approach yields the maximum improvement in micro-level Area Under the Curve (AUC) of 4.1% on CUHK Avenue and 3.4% on Shanghai-Tech over one of the baseline methods. We also show high performance on cross-data evaluation, where we learn the weights to combine spatial and temporal information on Shanghai-Tech and perform evaluation on CUHK Avenue, and vice versa.
Citations: 4

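The late-fusion step can be sketched as a convex combination of frame-level anomaly scores from a spatial system and the trajectory-based (temporal) system, with the mixing weight selected on a held-out set by micro-level AUC, mirroring the cross-data setup above. The min-max normalization and the weight grid are assumptions for illustration, not the paper's exact fusion scheme.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def normalize(scores):
    """Min-max normalize per-frame anomaly scores to [0, 1]."""
    scores = np.asarray(scores, dtype=float)
    return (scores - scores.min()) / (scores.max() - scores.min() + 1e-8)

def fuse(spatial, temporal, w):
    """Late fusion: convex combination of the two normalized score streams."""
    return w * normalize(spatial) + (1 - w) * normalize(temporal)

def best_fusion_weight(spatial, temporal, labels, grid=np.linspace(0, 1, 21)):
    """Pick the mixing weight maximizing frame-level (micro) AUC on validation data."""
    return max(grid, key=lambda w: roc_auc_score(labels, fuse(spatial, temporal, w)))

# Toy example with random scores and ground-truth frame labels.
rng = np.random.default_rng(0)
spatial, temporal = rng.random(1000), rng.random(1000)
labels = rng.integers(0, 2, 1000)
w = best_fusion_weight(spatial, temporal, labels)
print(w, roc_auc_score(labels, fuse(spatial, temporal, w)))
```
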
Nuisance-Label Supervision: Robustness Improvement by Free Labels
2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) | Pub Date: 2021-10-01 | DOI: 10.1109/ICCVW54120.2021.00179
Xinyue Wei, Weichao Qiu, Yi Zhang, Zihao Xiao, A. Yuille
Abstract: In this paper, we present a Nuisance-label Supervision (NLS) module, which can make models more robust to nuisance factor variations. Nuisance factors are those irrelevant to a task, and an ideal model should be invariant to them. For example, an activity recognition model should perform consistently regardless of the change of clothes and background. But our experiments show existing models are far from this capability. So we explicitly supervise a model with nuisance labels to make extracted features less dependent on nuisance factors. Although the values of nuisance factors are rarely annotated, we demonstrate that besides existing annotations, nuisance labels can be acquired freely from data augmentation and synthetic data. Experiments show consistent improvement in robustness towards image corruption and appearance change in action recognition.
Citations: 0

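One simple way to picture nuisance-label supervision, sketched below under assumptions: a shared backbone feeds both a task head and an auxiliary head predicting a nuisance label that comes for free from data augmentation (here, the index of the augmentation applied). Whether the nuisance head is trained jointly as below or adversarially (e.g., via gradient reversal) is a design choice; the paper's exact mechanism, backbone, and loss weight are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NuisanceSupervisedModel(nn.Module):
    """Shared feature extractor with a task head and a nuisance-label head (sketch)."""
    def __init__(self, feat_dim=128, num_classes=10, num_nuisance=4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.task_head = nn.Linear(feat_dim, num_classes)
        self.nuisance_head = nn.Linear(feat_dim, num_nuisance)

    def forward(self, x):
        z = self.backbone(x)
        return self.task_head(z), self.nuisance_head(z)

def nls_loss(model, x, y_task, y_nuisance, lam=0.1):
    """Task loss plus a weighted nuisance-label loss on the same features."""
    task_logits, nuis_logits = model(x)
    return F.cross_entropy(task_logits, y_task) + lam * F.cross_entropy(nuis_logits, y_nuisance)

# Nuisance labels acquired for free: e.g., which of 4 augmentations was applied.
model = NuisanceSupervisedModel()
x = torch.randn(8, 3, 64, 64)
y_task = torch.randint(0, 10, (8,))
y_nuisance = torch.randint(0, 4, (8,))
print(nls_loss(model, x, y_task, y_nuisance))
```
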
ResSaNet: A Hybrid Backbone of Residual Block and Self-Attention Module for Masked Face Recognition
2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) | Pub Date: 2021-10-01 | DOI: 10.1109/ICCVW54120.2021.00170
Wei-Yi Chang, Ming-Ying Tsai, Shih-Chieh Lo
Abstract: In recent years, the performance of face recognition has been improved significantly by using convolutional neural networks (CNNs) as feature extractors. On the other hand, in order to avoid spreading the COVID-19 virus, people wear masks even when they want to pass a face recognition system. Thus, it is necessary to improve the performance of masked face recognition so that users can utilize face recognition methods more easily. In this paper, we propose a feature extraction backbone named ResSaNet that integrates a CNN (especially residual blocks) and self-attention modules into the same network. By capturing the local and global information of the face area simultaneously, our proposed ResSaNet achieves promising results on both masked and non-masked testing data.
Citations: 5

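One way to picture a hybrid residual/self-attention block of the kind the abstract describes (the actual ResSaNet block design is not reproduced here): a convolutional residual branch captures local structure, a multi-head self-attention branch over the flattened spatial grid captures global structure, and both are added back to the input. Channel counts and head numbers are illustrative.

```python
import torch
import torch.nn as nn

class HybridResSABlock(nn.Module):
    """Residual conv branch + self-attention branch over spatial tokens (sketch)."""
    def __init__(self, channels=64, heads=4):
        super().__init__()
        self.conv_branch = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels),
        )
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):                        # x: (B, C, H, W)
        b, c, h, w = x.shape
        local = self.conv_branch(x)              # local features (residual branch)

        tokens = x.flatten(2).transpose(1, 2)    # (B, H*W, C) spatial tokens
        tokens = self.norm(tokens)
        global_feat, _ = self.attn(tokens, tokens, tokens)
        global_feat = global_feat.transpose(1, 2).reshape(b, c, h, w)

        return torch.relu(x + local + global_feat)   # combine local + global + skip

# Example on a small feature map.
block = HybridResSABlock()
print(block(torch.randn(2, 64, 14, 14)).shape)   # -> torch.Size([2, 64, 14, 14])
```
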
CDAda: A Curriculum Domain Adaptation for Nighttime Semantic Segmentation
2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) | Pub Date: 2021-10-01 | DOI: 10.1109/ICCVW54120.2021.00331
Qi Xu, Yinan Ma, Jing Wu, C. Long, Xiaolin Huang
Abstract: Autonomous driving needs to ensure all-weather safety, especially in unfavorable environments such as night and rain. However, current daytime-trained semantic segmentation networks face significant performance degradation at night because of the huge domain divergence. In this paper, we propose a novel Curriculum Domain Adaptation method (CDAda) to realize a smooth transfer of semantic knowledge from daytime to nighttime. Specifically, it consists of two steps: 1) inter-domain style adaptation: fine-tune the daytime-trained model on labeled synthetic nighttime images produced by the proposed frequency-based style transformation method (replace the low-frequency components of daytime images with those of nighttime images); 2) intra-domain gradual self-training: separate the nighttime domain into an easy-split nighttime domain and a hard-split nighttime domain based on the "entropy + illumination" ranking principle, then gradually adapt the model to the two sub-domains through pseudo supervision on the easy-split data and entropy minimization on the hard-split data. To the best of our knowledge, we are the first to extend the idea of intra-domain adaptation to self-training, and we show that treating the two parts differently can reduce the distribution divergence within the nighttime domain itself. In particular, for the adopted unlabeled day-night image pairs, the predictions on the daytime images can guide the segmentation of the nighttime images by ensuring patch-level consistency. Extensive experiments on the Nighttime Driving, Dark Zurich, and BDD100K-night datasets highlight the effectiveness of our approach, which achieves 50.9%, 45.0%, and 33.8% mean IoU, outperforming existing state-of-the-art approaches.
Citations: 30

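The frequency-based style transformation step is concrete enough to sketch directly: take the 2D FFT of a daytime and a nighttime image, swap the centered low-frequency amplitude band of the daytime image for that of the nighttime image, and invert. The band fraction beta and the amplitude-only swap follow the common FDA-style recipe and are assumptions, not necessarily the paper's exact settings.

```python
import numpy as np

def low_freq_style_swap(day_img, night_img, beta=0.05):
    """Replace the low-frequency amplitude of a daytime image with that of a
    nighttime image (per channel). Images: float arrays in [0, 1], shape (H, W, C).
    """
    out = np.zeros_like(day_img)
    h, w = day_img.shape[:2]
    bh, bw = max(1, int(h * beta)), max(1, int(w * beta))   # low-frequency band size
    cy, cx = h // 2, w // 2
    for c in range(day_img.shape[2]):
        fft_day = np.fft.fftshift(np.fft.fft2(day_img[:, :, c]))
        fft_night = np.fft.fftshift(np.fft.fft2(night_img[:, :, c]))
        amp_day, phase_day = np.abs(fft_day), np.angle(fft_day)
        amp_night = np.abs(fft_night)

        # Swap the centered low-frequency amplitude band (overall style/illumination).
        amp_day[cy - bh:cy + bh, cx - bw:cx + bw] = amp_night[cy - bh:cy + bh, cx - bw:cx + bw]

        stylized = np.fft.ifft2(np.fft.ifftshift(amp_day * np.exp(1j * phase_day)))
        out[:, :, c] = np.real(stylized)
    return np.clip(out, 0.0, 1.0)

# Toy usage with random images standing in for a day/night pair.
day = np.random.rand(128, 128, 3)
night = np.random.rand(128, 128, 3) * 0.3   # darker "nighttime" image
stylized = low_freq_style_swap(day, night)
```
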