2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW): Latest Publications

Multi-modal Variational Faster R-CNN for Improved Visual Object Detection in Manufacturing
2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) | Pub Date: 2021-10-01 | DOI: 10.1109/ICCVW54120.2021.00292
Panagiotis Mouzenidis, Antonios Louros, D. Konstantinidis, K. Dimitropoulos, P. Daras, Theofilos D. Mastos
{"title":"Multi-modal Variational Faster R-CNN for Improved Visual Object Detection in Manufacturing","authors":"Panagiotis Mouzenidis, Antonios Louros, D. Konstantinidis, K. Dimitropoulos, P. Daras, Theofilos D. Mastos","doi":"10.1109/ICCVW54120.2021.00292","DOIUrl":"https://doi.org/10.1109/ICCVW54120.2021.00292","url":null,"abstract":"Visual object detection is a critical task for a variety of industrial applications, such as robot navigation, quality control and product assembling. Modern industrial environments require AI-based object detection methods that can achieve high accuracy, robustness and generalization. To this end, we propose a novel object detection approach that can process and fuse information from RGB-D images for the accurate detection of industrial objects. The proposed approach utilizes a novel Variational Faster R-CNN algorithm that aims to improve the robustness and generalization ability of the original Faster R-CNN algorithm by employing a VAE encoder-decoder network and a very powerful attention layer. Experimental results on two object detection datasets, namely the well-known RGB-D Washington dataset and the QCONPASS dataset of industrial objects that is first presented in this paper, verify the significant performance improvement achieved when the proposed approach is employed.","PeriodicalId":226794,"journal":{"name":"2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123045052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
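The abstract above describes fusing RGB and depth information with an attention layer inside a Faster R-CNN pipeline. The following PyTorch sketch is a rough illustration of that kind of multi-modal fusion, not the authors' architecture: the module name `RGBDAttentionFusion` and all layer sizes are assumptions, and the VAE branch is omitted entirely.

```python
import torch
import torch.nn as nn

class RGBDAttentionFusion(nn.Module):
    """Hypothetical sketch of attention-based RGB-D feature fusion.

    Not the paper's architecture: it only illustrates gating concatenated
    RGB and depth feature maps with channel attention before a detection head.
    """
    def __init__(self, channels: int = 256):
        super().__init__()
        self.rgb_proj = nn.Conv2d(channels, channels, kernel_size=1)
        self.depth_proj = nn.Conv2d(channels, channels, kernel_size=1)
        self.attn = nn.Sequential(                 # channel attention over both modalities
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, 2 * channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.out = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, rgb_feat: torch.Tensor, depth_feat: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.rgb_proj(rgb_feat), self.depth_proj(depth_feat)], dim=1)
        fused = fused * self.attn(fused)           # re-weight channels per modality
        return self.out(fused)                     # back to the backbone channel width

if __name__ == "__main__":
    rgb, depth = torch.randn(1, 256, 32, 32), torch.randn(1, 256, 32, 32)
    print(RGBDAttentionFusion()(rgb, depth).shape)  # torch.Size([1, 256, 32, 32])
```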
DC-VINS: Dynamic Camera Visual Inertial Navigation System with Online Calibration
2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) | Pub Date: 2021-10-01 | DOI: 10.1109/ICCVW54120.2021.00289
Jason Rebello, Chunshang Li, Steven L. Waslander
{"title":"DC-VINS: Dynamic Camera Visual Inertial Navigation System with Online Calibration","authors":"Jason Rebello, Chunshang Li, Steven L. Waslander","doi":"10.1109/ICCVW54120.2021.00289","DOIUrl":"https://doi.org/10.1109/ICCVW54120.2021.00289","url":null,"abstract":"Visual-inertial (VI) sensor combinations are becoming ubiquitous in a variety of autonomous driving and aerial navigation applications due to their low cost, limited power consumption and complementary sensing capabilities. However, current VI sensor configurations assume a static rigid transformation between the camera and IMU, precluding manipulating the viewpoint of the camera independent of IMU movement which is important in situations with uneven feature distribution and for high-rate dynamic motions. Gimbal stabilized cameras, as seen on most commercially available drones, have seen limited use in SLAM due to the inability to resolve the time-varying extrinsic calibration between the IMU and camera needed in tight sensor fusion. In this paper, we present the online extrinsic calibration between a dynamic camera mounted to an actuated mechanism and an IMU mounted to the body of the vehicle integrated into a Visual Odometry pipeline. In addition, we provide a degeneracy analysis of the calibration parameters leading to a novel parameterization of the actuated mechanism used in the calibration. We build our calibration into the VINS-Fusion package and show that we are able to accurately recover the calibration parameters online while manipulating the viewpoint of the camera to feature rich areas thereby achieving an average RMSE error of 0.26m over an average trajectory length of 340m, 31.45% lower than a traditional visual inertial pipeline with a static camera.","PeriodicalId":226794,"journal":{"name":"2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130011787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
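The core difficulty described above is that the camera-to-IMU extrinsic stops being static once the camera sits on an actuated gimbal. The NumPy sketch below only illustrates that idea: fixed offsets are composed with a measured joint angle to get a time-varying body-to-camera transform. The single z-axis joint and all offset values are placeholder assumptions; the paper's degeneracy-aware parameterization and online estimation are not reproduced.

```python
import numpy as np

def rot_z(theta: float) -> np.ndarray:
    """4x4 homogeneous rotation about z, modelling a single actuated gimbal axis."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[0, 0], T[0, 1], T[1, 0], T[1, 1] = c, -s, s, c
    return T

def translation(x: float, y: float, z: float) -> np.ndarray:
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

# Placeholder static offsets: these are the kind of quantities an online
# calibration would estimate, with made-up values used here for illustration.
T_body_joint = translation(0.10, 0.0, -0.05)   # IMU/body frame -> gimbal joint frame
T_joint_cam = translation(0.0, 0.02, 0.03)     # joint frame -> camera frame

def body_to_camera(joint_angle: float) -> np.ndarray:
    """Time-varying extrinsic: static offsets composed with the measured joint angle."""
    return T_body_joint @ rot_z(joint_angle) @ T_joint_cam

if __name__ == "__main__":
    for angle in (0.0, np.pi / 4, np.pi / 2):
        # Camera position in the body frame changes as the gimbal rotates.
        print(np.round(body_to_camera(angle)[:3, 3], 3))
```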
Object-Based Augmentation for Building Semantic Segmentation: Ventura and Santa Rosa Case Study
2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) | Pub Date: 2021-10-01 | DOI: 10.1109/ICCVW54120.2021.00191
S. Illarionova, S. Nesteruk, Dmitrii G. Shadrin, V. Ignatiev, M. Pukalchik, I. Oseledets
{"title":"Object-Based Augmentation for Building Semantic Segmentation: Ventura and Santa Rosa Case Study","authors":"S. Illarionova, S. Nesteruk, Dmitrii G. Shadrin, V. Ignatiev, M. Pukalchik, I. Oseledets","doi":"10.1109/ICCVW54120.2021.00191","DOIUrl":"https://doi.org/10.1109/ICCVW54120.2021.00191","url":null,"abstract":"Today deep convolutional neural networks (CNNs) push the limits for most computer vision problems, define trends, and set state-of-the-art results. In remote sensing tasks such as object detection and semantic segmentation, CNNs reach the SotA performance. However, for precise performance, CNNs require much high-quality training data. Rare objects and the variability of environmental conditions strongly affect prediction stability and accuracy. To overcome these data restrictions, it is common to consider various approaches including data augmentation techniques. This study focuses on the development and testing of object-based augmentation. The practical usefulness of the developed augmentation technique is shown in the remote sensing domain, being one of the most demanded in effective augmentation techniques. We propose a novel pipeline for georeferenced image augmentation that enables a significant increase in the number of training samples. The presented pipeline is called object-based augmentation (OBA) and exploits objects’ segmentation masks to produce new realistic training scenes using target objects and various label-free backgrounds. We test the approach on the buildings segmentation dataset with different CNN architectures (U-Net, FPN, HRNet) and show that the proposed method benefits for all the tested models. We also show that further augmentation strategy optimization can improve the results. The proposed method leads to the meaningful improvement of U-Net model predictions from 0.78 to 0.83 F1-score.","PeriodicalId":226794,"journal":{"name":"2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127608484","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
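As described, the OBA pipeline cuts target objects out with their segmentation masks and pastes them onto label-free backgrounds to build new training scenes. The NumPy sketch below shows only that copy-paste step under simplifying assumptions (single object, in-bounds paste location); the function name is hypothetical, and the georeferencing, blending and scale handling of the full pipeline are not reproduced.

```python
import numpy as np

def paste_object(background, source, obj_mask, top, left):
    """Paste a masked object from `source` onto `background`; return (image, mask).

    Illustrative only: the published OBA pipeline also handles georeferencing,
    blending and scale variation, none of which is reproduced here.
    """
    ys, xs = np.where(obj_mask)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    crop, crop_mask = source[y0:y1, x0:x1], obj_mask[y0:y1, x0:x1]

    out = background.copy()
    new_mask = np.zeros(background.shape[:2], dtype=bool)
    h, w = crop.shape[:2]
    region = out[top:top + h, left:left + w]
    region[crop_mask] = crop[crop_mask]                 # copy object pixels only
    new_mask[top:top + h, left:left + w] = crop_mask    # label for the new scene
    return out, new_mask

if __name__ == "__main__":
    bg = np.zeros((64, 64, 3), dtype=np.uint8)          # label-free background
    src = np.full((64, 64, 3), 255, dtype=np.uint8)     # image containing the object
    mask = np.zeros((64, 64), dtype=bool)
    mask[10:20, 10:20] = True                           # the object's segmentation mask
    img, m = paste_object(bg, src, mask, top=30, left=30)
    print(img.sum() > 0, int(m.sum()))                  # True 100
```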
Transformer Meets Part Model: Adaptive Part Division for Person Re-Identification
2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) | Pub Date: 2021-10-01 | DOI: 10.1109/ICCVW54120.2021.00461
Shenqi Lai, Z. Chai, Xiaolin Wei
{"title":"Transformer Meets Part Model: Adaptive Part Division for Person Re-Identification","authors":"Shenqi Lai, Z. Chai, Xiaolin Wei","doi":"10.1109/ICCVW54120.2021.00461","DOIUrl":"https://doi.org/10.1109/ICCVW54120.2021.00461","url":null,"abstract":"Part model is one of the key factors to high performance person re-identification (ReID) task. In recent studies, there are mainly two streams for part model. The first one is to divide a person image into several fixed parts to obtain their local information, but it may cause performance degradation in case of misalignment. The other one is to explore external resources like pose estimation or human parsing to locate local parts, but it costs extra storage and computation. Inspired by recent successful transformers on spatial similarity modeling, we propose a novel Adaptive Part Division (APD) model to better extract local features. More specifically, APD mainly consists of two crucial modules: a Transformer-based Part Merge (TPM) module and a Part Mask Generation (PMG) module. In particular, TPM first adaptively assigns the patch tokens of the same semantic object to the identical part. Then, PMG takes these identical parts together and generates several non-overlapping masks for robust part division. We have conducted extensive evaluations on four popular benchmarks, i.e. Market-1501, CUHK03, DukeMTMC-ReID and MSMT17, and the experimental results show that our proposed method achieves the state-of-the-art performance.","PeriodicalId":226794,"journal":{"name":"2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127516735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 19
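A hedged sketch of the adaptive part-division idea follows: patch tokens from a transformer backbone are assigned to learned part prototypes, the hard assignment yields non-overlapping part masks, and features are pooled per part. This is a simplification, not the paper's TPM/PMG modules; the class name, prototype-based assignment and all dimensions are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptivePartPooling(nn.Module):
    """Hypothetical sketch of adaptive part division over ViT patch tokens.

    Each patch token is assigned to one of `num_parts` learned prototypes; the
    hard assignment gives non-overlapping part masks and per-part pooled
    features. A simplification of the idea, not the paper's TPM/PMG modules.
    """
    def __init__(self, dim: int = 768, num_parts: int = 4):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_parts, dim))

    def forward(self, tokens: torch.Tensor):
        # tokens: (B, N, D) patch tokens from a transformer backbone
        sim = tokens @ self.prototypes.t()                          # (B, N, P) token/part similarity
        assign = sim.argmax(dim=-1)                                 # hard, non-overlapping assignment
        masks = F.one_hot(assign, self.prototypes.size(0)).float()  # (B, N, P)
        denom = masks.sum(dim=1).clamp(min=1.0)                     # tokens per part
        part_feats = torch.einsum("bnp,bnd->bpd", masks, tokens) / denom.unsqueeze(-1)
        return part_feats, masks

if __name__ == "__main__":
    tokens = torch.randn(2, 196, 768)                               # e.g. a 14x14 patch grid
    feats, masks = AdaptivePartPooling()(tokens)
    print(feats.shape, masks.shape)                                 # (2, 4, 768) (2, 196, 4)
```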
Background/Foreground Separation: Guided Attention based Adversarial Modeling (GAAM) versus Robust Subspace Learning Methods
2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) | Pub Date: 2021-10-01 | DOI: 10.1109/ICCVW54120.2021.00025
M. Sultana, Arif Mahmood, T. Bouwmans, M. H. Khan, Soon Ki Jung
{"title":"Background/Foreground Separation: Guided Attention based Adversarial Modeling (GAAM) versus Robust Subspace Learning Methods","authors":"M. Sultana, Arif Mahmood, T. Bouwmans, M. H. Khan, Soon Ki Jung","doi":"10.1109/ICCVW54120.2021.00025","DOIUrl":"https://doi.org/10.1109/ICCVW54120.2021.00025","url":null,"abstract":"Background-Foreground separation and appearance generation is a fundamental step in many computer vision applications. Existing methods like Robust Subspace Learning (RSL) suffer performance degradation in the presence of challenges like bad weather, illumination variations, occlusion, dynamic backgrounds and intermittent object motion. In the current work we propose a more accurate deep neural network based model for background-foreground separation and complete appearance generation of the foreground objects. Our proposed model, Guided Attention based Adversarial Model (GAAM), can efficiently extract pixel-level boundaries of the foreground objects for improved appearance generation. Unlike RSL methods our model extracts the binary information of foreground objects labeled as attention map which guides our generator network to segment the foreground objects from the complex background information. Wide range of experiments performed on the benchmark CDnet2014 dataset demonstrate the excellent performance of our proposed model.","PeriodicalId":226794,"journal":{"name":"2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124082998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Attention Aware Debiasing for Unbiased Model Prediction
2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) | Pub Date: 2021-10-01 | DOI: 10.1109/ICCVW54120.2021.00459
P. Majumdar, Richa Singh, Mayank Vatsa
{"title":"Attention Aware Debiasing for Unbiased Model Prediction","authors":"P. Majumdar, Richa Singh, Mayank Vatsa","doi":"10.1109/ICCVW54120.2021.00459","DOIUrl":"https://doi.org/10.1109/ICCVW54120.2021.00459","url":null,"abstract":"Due to the large applicability of AI systems in various applications, fairness in model predictions is extremely important to ensure that the systems work equally well for everyone. Biased feature representations might often lead to unfair model predictions. To address the concern, in this research, a novel method, termed as Attention Aware Debiasing (AAD) method, is proposed to learn unbiased feature representations. The proposed method uses an attention mechanism to focus on the features important for the main task while suppressing the features related to the sensitive attributes. This minimizes the model's dependency on the sensitive attribute while performing the main task. Multiple experiments are performed on two publicly available datasets, MORPH and UTKFace, to showcase the effectiveness of the proposed AAD method for bias mitigation. The proposed AAD method enhances the overall model performance and reduces the disparity in model prediction across different subgroups.","PeriodicalId":226794,"journal":{"name":"2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129216946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
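The abstract describes an attention mechanism that keeps task-relevant features while suppressing those tied to sensitive attributes. One common way to realize such a setup, shown below purely as an assumption-laden sketch, is to gate features with learned attention and attach an adversarial sensitive-attribute head behind a gradient-reversal layer; the abstract does not state the paper's actual objective, which may differ.

```python
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    """Flips gradients so the encoder un-learns sensitive-attribute cues."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -grad

class AttentionDebiasNet(nn.Module):
    """Illustrative sketch: attention-gated features feed the main-task head,
    while an adversarial head tries to predict the sensitive attribute.
    The gradient-reversal objective is an assumption of this sketch, not
    necessarily the paper's training scheme."""
    def __init__(self, in_dim: int = 512, num_classes: int = 8, num_sensitive: int = 2):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(in_dim, in_dim), nn.Sigmoid())
        self.task_head = nn.Linear(in_dim, num_classes)
        self.sensitive_head = nn.Linear(in_dim, num_sensitive)

    def forward(self, feats: torch.Tensor):
        gated = feats * self.attn(feats)                    # keep task-relevant features
        task_logits = self.task_head(gated)
        adv_logits = self.sensitive_head(GradientReversal.apply(gated))
        return task_logits, adv_logits

if __name__ == "__main__":
    task, adv = AttentionDebiasNet()(torch.randn(4, 512))
    print(task.shape, adv.shape)   # torch.Size([4, 8]) torch.Size([4, 2])
```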
Causal BERT: Improving object detection by searching for challenging groups
2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) | Pub Date: 2021-10-01 | DOI: 10.1109/ICCVW54120.2021.00332
Cinjon Resnick, O. Litany, Amlan Kar, Karsten Kreis, James Lucas, Kyunghyun Cho, S. Fidler
{"title":"Causal BERT: Improving object detection by searching for challenging groups","authors":"Cinjon Resnick, O. Litany, Amlan Kar, Karsten Kreis, James Lucas, Kyunghyun Cho, S. Fidler","doi":"10.1109/ICCVW54120.2021.00332","DOIUrl":"https://doi.org/10.1109/ICCVW54120.2021.00332","url":null,"abstract":"Autonomous vehicles (AV) often rely on perception modules built upon neural networks for object detection. These modules frequently have low expected error overall but high error on unknown groups due to biases inherent in the training process. When these errors cause vehicle failure, manufacturers pay humans to comb through the associated images and label what group they are from. Data from that group is then collected, annotated, and added to the training set before retraining the model to fix the issue. In other words, group errors are found and addressed in hindsight. Our main contribution is a method to find such groups in foresight, leveraging advances in simulation as well as masked language modeling in order to perform causal interventions on simulated driving scenes. We then use the found groups to improve detection, exemplified by Diamondback bikes, whose performance we improve by 30 AP points. Such a solution is of high priority because it would greatly improve the robustness and safety of AV systems. Our second contribution is the tooling to run interventions, which will benefit the causal community tremendously.","PeriodicalId":226794,"journal":{"name":"2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129289537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Egocentric Indoor Localization from Room Layouts and Image Outer Corners
2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) | Pub Date: 2021-10-01 | DOI: 10.1109/ICCVW54120.2021.00385
Xiaowei Chen, Guoliang Fan
{"title":"Egocentric Indoor Localization from Room Layouts and Image Outer Corners","authors":"Xiaowei Chen, Guoliang Fan","doi":"10.1109/ICCVW54120.2021.00385","DOIUrl":"https://doi.org/10.1109/ICCVW54120.2021.00385","url":null,"abstract":"Egocentric indoor localization is an important issue for many in-home smart technologies. Room layouts have been used to characterize indoor scene images by a few typical space configurations defined by boundary lines and junctions, which are mostly detectable or inferable by deep learning methods. In this paper, we study camera pose estimation for egocentric indoor localization from room layouts that is cast as a PnL (Perspective-n-Line) problem. Specifically, image outer corners (IOCs), which are the intersecting points between image borders and room layout boundaries, are introduced to improve PnL optimization by involving additional auxiliary lines in an image. This leads to a new PnL-IOC algorithm where 3D correspondence estimation of IOCs are jointly solved with camera pose optimization in the iterative Gauss-Newton algorithm. Experiment results on both simulated and real images show the advantages of PnL-IOC on the accuracy and robustness of camera pose estimation over the existing PnL methods.","PeriodicalId":226794,"journal":{"name":"2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128546989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
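Image outer corners are defined in the abstract as the intersections between image borders and room-layout boundary lines. The short sketch below computes those border crossings for a single layout line; the coupling with 3D correspondences and Gauss-Newton pose optimization is not shown, and the function name is hypothetical.

```python
import numpy as np

def image_outer_corners(p1, p2, width, height):
    """Intersect the layout-boundary line through p1 and p2 with the image
    borders and return the crossings that actually lie on the border.

    Illustrative only; the paper couples such points with 3D correspondence
    estimation inside Gauss-Newton pose optimization, which is not shown here.
    """
    p1, p2 = np.asarray(p1, dtype=float), np.asarray(p2, dtype=float)
    d = p2 - p1
    corners = []
    for x in (0.0, width - 1.0):                 # vertical borders
        if abs(d[0]) > 1e-9:
            t = (x - p1[0]) / d[0]
            y = p1[1] + t * d[1]
            if 0.0 <= y <= height - 1.0:
                corners.append((x, y))
    for y in (0.0, height - 1.0):                # horizontal borders
        if abs(d[1]) > 1e-9:
            t = (y - p1[1]) / d[1]
            x = p1[0] + t * d[0]
            if 0.0 <= x <= width - 1.0:
                corners.append((x, y))
    return corners

if __name__ == "__main__":
    # A layout boundary running diagonally through a 640x480 image exits
    # through the top and bottom borders; those exits are its IOCs.
    print(image_outer_corners((100, 50), (500, 400), width=640, height=480))
```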
Evasion Attack STeganography: Turning Vulnerability Of Machine Learning To Adversarial Attacks Into A Real-world Application
2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) | Pub Date: 2021-10-01 | DOI: 10.1109/ICCVW54120.2021.00010
Salah Ghamizi, Maxime Cordy, Mike Papadakis, Yves Le Traon
{"title":"Evasion Attack STeganography: Turning Vulnerability Of Machine Learning To Adversarial Attacks Into A Real-world Application","authors":"Salah Ghamizi, Maxime Cordy, Mike Papadakis, Yves Le Traon","doi":"10.1109/ICCVW54120.2021.00010","DOIUrl":"https://doi.org/10.1109/ICCVW54120.2021.00010","url":null,"abstract":"Evasion Attacks have been commonly seen as a weakness of Deep Neural Networks. In this paper, we flip the paradigm and envision this vulnerability as a useful application. We propose EAST, a new steganography and watermarking technique based on multi-label targeted evasion attacks. The key idea of EAST is to encode data as the labels of the image that the evasion attacks produce.Our results confirm that our embedding is elusive; it not only passes unnoticed by humans, steganalysis methods, and machine-learning detectors. In addition, our embedding is resilient to soft and aggressive image tampering (87% recovery rate under jpeg compression). EAST outperforms existing deep-learning-based steganography approaches with images that are 70% denser and 73% more robust and supports multiple datasets and architectures.We provide our algorithm and open-source code at https://github.com/yamizi/Adversarial-Embedding","PeriodicalId":226794,"journal":{"name":"2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115689647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
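EAST encodes data in the labels that a multi-label targeted evasion attack forces a classifier to output. The toy sketch below illustrates that encode/decode loop with a random linear "classifier" and a plain Adam-optimized bounded perturbation; the attack, hyperparameters and model are stand-ins for illustration, not the method released at the linked repository.

```python
import torch
import torch.nn as nn

def embed_bits(model, image, bits, steps=300, lr=0.01, eps=0.1):
    """Toy version of the EAST idea: hide `bits` as the multi-label output of a
    classifier by optimizing a small, bounded perturbation of `image`.
    The attack, bounds and model here are illustrative stand-ins."""
    target = torch.tensor(bits, dtype=torch.float32).unsqueeze(0)
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        opt.zero_grad()
        bce(model(image + delta), target).backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)            # keep the perturbation small
    return (image + delta).detach()

def extract_bits(model, stego_image):
    """Receiver side: the hidden message is the thresholded multi-label output."""
    with torch.no_grad():
        return (torch.sigmoid(model(stego_image)) > 0.5).int().squeeze(0).tolist()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 8))  # toy multi-label classifier
    cover = torch.rand(1, 3, 32, 32)
    message = [1, 0, 1, 1, 0, 0, 1, 0]
    stego = embed_bits(model, cover, message)
    print("message  :", message)
    print("recovered:", extract_bits(model, stego))  # ideally matches; the toy attack may miss bits
```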
An End-to-end Efficient Framework for Remote Physiological Signal Sensing
2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) | Pub Date: 2021-10-01 | DOI: 10.1109/ICCVW54120.2021.00269
Chengyang Hu, Ke-Yue Zhang, Taiping Yao, Shouhong Ding, Jilin Li, Feiyue Huang, Lizhuang Ma
{"title":"An End-to-end Efficient Framework for Remote Physiological Signal Sensing","authors":"Chengyang Hu, Ke-Yue Zhang, Taiping Yao, Shouhong Ding, Jilin Li, Feiyue Huang, Lizhuang Ma","doi":"10.1109/ICCVW54120.2021.00269","DOIUrl":"https://doi.org/10.1109/ICCVW54120.2021.00269","url":null,"abstract":"Remote photoplethysmography (rPPG) is utilized to estimate the heart activities from videos, which has drawn great interest from both researchers and companies recently. Many existing rPPG deep-learning based approaches focus on measuring the average heart rate (HR) from facial videos, which do not provide enough detailed information for many applications. To recover more detailed rPPG signals for the challenge on Remote Physiological Signal Sensing (RePSS), we propose an end-to-end efficient framework, which measures the average heart rate and estimates corresponding Blood Volume Pulse (BVP) curves simultaneously. For efficiently extracting features containing rPPG information, we adopt the temporal and spatial convolution as Feature Extractor, which alleviates the cost of calculation. Then, BVP Estimation Network estimates the frame-level BVP signal based on the feature maps via a simple 1DCNN. To improve the learning of BVP Estimation Net-work, we further introduce Heartbeat Measuring Network to predict the video-level HR based on global rPPG information. These two networks facilitate each other via super-vising Feature Extractor from different level to promote the accuracy of BVP signal and HR. The proposed method obtains the score 168.08 (MIBI), winning the third place in this challenge.","PeriodicalId":226794,"journal":{"name":"2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127154313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
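A hedged PyTorch sketch of the two-head structure described above: a spatio-temporal convolutional feature extractor feeds both a 1D-CNN head that outputs a frame-level BVP curve and a head that regresses the video-level heart rate. Layer choices and sizes are placeholders, not the challenge entry's architecture.

```python
import torch
import torch.nn as nn

class RPPGSketch(nn.Module):
    """Illustrative two-head rPPG model: a spatial-then-temporal convolutional
    feature extractor, a 1D-CNN head producing a frame-level BVP signal, and a
    head regressing the video-level heart rate. All sizes are placeholders."""
    def __init__(self):
        super().__init__()
        self.extractor = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=(1, 5, 5), padding=(0, 2, 2)),   # spatial conv
            nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=(5, 1, 1), padding=(2, 0, 0)),  # temporal conv
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),    # keep time, pool space -> (B, 32, T, 1, 1)
        )
        self.bvp_head = nn.Sequential(             # frame-level BVP curve
            nn.Conv1d(32, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=1),
        )
        self.hr_head = nn.Sequential(              # video-level heart rate
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, 1),
        )

    def forward(self, clip):                       # clip: (B, 3, T, H, W)
        feat = self.extractor(clip).squeeze(-1).squeeze(-1)   # (B, 32, T)
        return self.bvp_head(feat).squeeze(1), self.hr_head(feat).squeeze(1)

if __name__ == "__main__":
    bvp, hr = RPPGSketch()(torch.randn(2, 3, 64, 36, 36))
    print(bvp.shape, hr.shape)   # torch.Size([2, 64]) torch.Size([2])
```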