2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR): Latest Publications

A Benchmark for Building Footprint Classification Using Orthorectified RGB Imagery and Digital Surface Models from Commercial Satellites
2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date: 2017-10-01 DOI: 10.1109/AIPR.2017.8457973
H. Goldberg, M. Brown, Sean Wang
{"title":"A Benchmark for Building Footprint Classification Using Orthorectified RGB Imagery and Digital Surface Models from Commercial Satellites","authors":"H. Goldberg, M. Brown, Sean Wang","doi":"10.1109/AIPR.2017.8457973","DOIUrl":"https://doi.org/10.1109/AIPR.2017.8457973","url":null,"abstract":"Identifying building footprints is a critical and challenging problem in many remote sensing applications. Solutions to this problem have been investigated using a variety of sensing modalities as input. In this work, we consider the detection of building footprints from 3D Digital Surface Models (DSMs) created from commercial satellite imagery along with RGB orthorectified imagery. Recent public challenges (SpaceNet 1 and 2, DSTL Satellite Imagery Feature Detection Challenge, and the ISPRS Test Project on Urban Classification) approach this problem using other sensing modalities or higher resolution data. As a result of these challenges and other work, most publically available automated methods for building footprint detection using 2D and 3D data sources as input are meant for high-resolution 3D lidar and 2D airborne imagery, or make use of multispectral imagery as well to aid detection. Performance is typically degraded as the fidelity and post spacing of the 3D lidar data or the 2D imagery is reduced. Furthermore, most software packages do not work well enough with this type of data to enable a fully automated solution. We describe a public benchmark dataset consisting of 50 cm DSMs created from commercial satellite imagery, as well as coincident 50 cm RGB orthorectified imagery products. The dataset includes ground truth building outlines and we propose representative quantitative metrics for evaluating performance. In addition, we provide lessons learned and hope to promote additional research in this field by releasing this public benchmark dataset to the community.","PeriodicalId":128779,"journal":{"name":"2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128689010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 23
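The abstract above proposes quantitative metrics for footprint evaluation without naming them; a common choice in related benchmarks such as SpaceNet is pixel-wise intersection-over-union between predicted and ground-truth masks. A minimal sketch, assuming binary numpy masks (the function name and empty-mask convention are illustrative assumptions, not the paper's metric):

```python
import numpy as np

def footprint_iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Pixel-wise intersection-over-union between a predicted and a
    ground-truth binary building-footprint mask."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    # Two empty masks are treated as perfect agreement.
    return float(intersection) / union if union > 0 else 1.0
```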
Breast Cancer Detection Using Transfer Learning in Convolutional Neural Networks
2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date: 2017-10-01 DOI: 10.1109/AIPR.2017.8457948
Shuyue Guan, M. Loew
{"title":"Breast Cancer Detection Using Transfer Learning in Convolutional Neural Networks","authors":"Shuyue Guan, M. Loew","doi":"10.1109/AIPR.2017.8457948","DOIUrl":"https://doi.org/10.1109/AIPR.2017.8457948","url":null,"abstract":"In the U.S., breast cancer is diagnosed in about 12 % of women during their lifetime and it is the second leading reason for women's death. Since early diagnosis could improve treatment outcomes and longer survival times for breast cancer patients, it is significant to develop breast cancer detection techniques. The Convolutional Neural Network (CNN) can extract features from images automatically and then perform classification. To train the CNN from scratch, however, requires a large number of labeled images, which is infeasible for some kinds of medical image data such as mammographic tumor images. A promising solution is to apply transfer learning in CNN. In this paper, we firstly tested three training methods on the MIAS database: 1) trained a CNN from scratch, 2) applied the pre-trained VGG-16 model to extract features from input mammograms and used these features to train a Neural Network (NN)-classifier, 3) updated the weights in several final layers of the pre-trained VGG-16 model by back-propagation (fine-tuning) to detect abnormal regions. We found that method 2) is ideal for study because the classification accuracy of fine-tuning model was just 0.008 higher than that of feature extraction model but time cost of feature extraction model was only about 5% of that of the fine-tuning model. Then, we used method 2) to classify regions: benign vs. normal, malignant vs. normal and abnormal vs. normal from the DDSM database with 10-fold cross validation. The average validation accuracy converged at about 0.905 for abnormal vs. normal cases, and there was no obvious overfitting. This study shows that applying transfer learning in CNN can detect breast cancer from mammograms, and training a NN-classifier by feature extraction is a faster method in transfer learning.","PeriodicalId":128779,"journal":{"name":"2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132786649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 37
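A minimal sketch of the feature-extraction approach (method 2 above): a pre-trained VGG-16 serves as a frozen feature extractor and a small NN classifier is trained on its outputs. Keras is assumed purely for illustration (the paper does not name a framework), and the patch size, layer widths, and training settings below are hypothetical:

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# Frozen VGG-16 trunk: ImageNet weights, global-average-pooled conv features.
extractor = VGG16(weights="imagenet", include_top=False,
                  pooling="avg", input_shape=(224, 224, 3))
extractor.trainable = False

def train_nn_on_features(x: np.ndarray, y: np.ndarray):
    """x: (N, 224, 224, 3) mammogram patches (grayscale replicated to 3
    channels, scaled to [0, 1]); y: (N,) binary abnormal/normal labels."""
    features = extractor.predict(x, verbose=0)      # (N, 512) feature vectors
    clf = models.Sequential([
        layers.Input(shape=(512,)),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),      # abnormal vs. normal
    ])
    clf.compile(optimizer="adam", loss="binary_crossentropy",
                metrics=["accuracy"])
    clf.fit(features, y, epochs=20, batch_size=32, validation_split=0.1)
    return clf
```

Because the trunk is frozen, features are computed once and only the small classifier is trained, which is where the roughly 20x time saving over fine-tuning comes from.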
Roof Damage Assessment using Deep Learning
2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date: 2017-10-01 DOI: 10.1109/AIPR.2017.8457946
Mahshad Mahdavi Hezaveh, Christopher Kanan, C. Salvaggio
{"title":"Roof Damage Assessment using Deep Learning","authors":"Mahshad Mahdavi Hezaveh, Christopher Kanan, C. Salvaggio","doi":"10.1109/AIPR.2017.8457946","DOIUrl":"https://doi.org/10.1109/AIPR.2017.8457946","url":null,"abstract":"Industrial procedures can be inefficient in terms of time, money and consumer satisfaction. the rivalry among businesses' gradually encourages them to exploit intelligent systems to achieve such goals as increasing profits, market share, and higher productivity. The property casualty insurance industry is not an exception. The inspection of a roof's condition is a preliminary stage of the damage claim processing performed by insurance adjusters. When insurance adjusters inspect a roof, it is a time consuming and potentially dangerous endeavor. In this paper, we propose to automate this assessment using RGB imagery of rooftops that have been inflicted with damage from hail impact collected using small unmanned aircraft systems (sUAS) along with deep learning to infer the extent of roof damage (see Fig. I). We assess multiple convolutional neural networks on our unique rooftop damage dataset that was gathered using a sUAS. Our experiments show that we can accurately identify hail damage automatically using our techniques.","PeriodicalId":128779,"journal":{"name":"2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"8 12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130695666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
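The abstract compares multiple CNN architectures on the rooftop dataset without naming them here; a sketch of such a comparison loop, using Keras application models as stand-in architectures (the specific networks, input size, and frozen-backbone setup are assumptions for illustration):

```python
from tensorflow.keras import layers, models, applications

# Candidate backbones to compare; the paper's actual choices may differ.
CANDIDATES = {
    "vgg16": applications.VGG16,
    "resnet50": applications.ResNet50,
    "inception_v3": applications.InceptionV3,
}

def build_damage_classifier(backbone_fn, input_shape=(224, 224, 3)):
    """Pre-trained backbone + binary head for hail-damage vs. no-damage patches."""
    backbone = backbone_fn(weights="imagenet", include_top=False,
                           pooling="avg", input_shape=input_shape)
    backbone.trainable = False
    head = layers.Dense(1, activation="sigmoid")(backbone.output)
    return models.Model(backbone.input, head)

# for name, fn in CANDIDATES.items():
#     model = build_damage_classifier(fn)
#     # compile, fit on labeled rooftop patches, record validation accuracy
```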
Fusion of Deep Convolutional Neural Networks
2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date: 2017-10-01 DOI: 10.1109/AIPR.2017.8457945
Robert Suchy, Soundararajan Ezekiel, Maria Scalzo-Cornacchia
{"title":"Fusion of Deep Convolutional Neural Networks","authors":"Robert Suchy, Soundararajan Ezekiel, Maria Scalzo-Cornacchia","doi":"10.1109/AIPR.2017.8457945","DOIUrl":"https://doi.org/10.1109/AIPR.2017.8457945","url":null,"abstract":"In recent years, the concept of big data has become a more prominent research topic as the volume of data and the rate at which it is produced are increasing exponentially. By 2020 the amount of data being stored is estimated to be 44 Zettabytes and currently over 31 Terabytes of data is being generated every second. Algorithms and applications must be able to effectively scale to the volume of data being generated. One such application that has excelled due to the surge in Big Data is the Convolutional Neural Network. The breakthroughs in the development of Graphical Processing Units have led to the advancements in the state-of-the-art on tasks such as image classification and speech recognition. These multi-layered convolutional neural networks are very large, complex and require significant computational resources to train and evaluate models. In this paper, we explore several novel architectures for the fusion of multiple convolutional neural networks, including stacked representation fusions and mixed model fusion. We differ from existing fusion methods in that our approaches take in the raw outputs of several CNN models and use classifiers as fusers. Other methods typically hand-craft the fusion or have used the original input space as the fusion method. Advancements in this area will better enable the leveraging of the vast amount of pre-trained models and improve accuracy of these models. The approaches generated are application agnostic and will apply across a breadth of tasks.","PeriodicalId":128779,"journal":{"name":"2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132422965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
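A minimal sketch of the core idea above: take the raw class-probability outputs of several trained CNNs and train a separate classifier as the fuser. scikit-learn's logistic regression stands in for the fuser here; the two-array setup and fuser choice are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fuse_cnn_outputs(prob_outputs: list[np.ndarray], y: np.ndarray):
    """Stack per-model class-probability vectors and train a fuser on them.

    prob_outputs: list of (N, C) softmax outputs, one array per CNN.
    y: (N,) integer class labels.
    """
    stacked = np.concatenate(prob_outputs, axis=1)   # (N, C * num_models)
    fuser = LogisticRegression(max_iter=1000)
    fuser.fit(stacked, y)                            # the classifier is the fuser
    return fuser
```

In practice the fuser should be fit on outputs from data the base CNNs did not train on, so it learns to weight the models rather than echo their training-set overconfidence.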
The poor generalization of deep convolutional networks to aerial imagery from new geographic locations: an empirical study with solar array detection
2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date: 2017-10-01 DOI: 10.1109/AIPR.2017.8457960
Rui Wang, Joseph A. Camilo, L. Collins, Kyle Bradbury, Jordan M. Malof
{"title":"The poor generalization of deep convolutional networks to aerial imagery from new geographic locations: an empirical study with solar array detection","authors":"Rui Wang, Joseph A. Camilo, L. Collins, Kyle Bradbury, Jordan M. Malof","doi":"10.1109/AIPR.2017.8457960","DOIUrl":"https://doi.org/10.1109/AIPR.2017.8457960","url":null,"abstract":"Convolutional neural networks (CNNs) have recently achieved unprecedented performance for the automatic recognition of objects (e.g., buildings, roads, or vehicles) in color aerial imagery. Although these results are promising, questions remain about their practical applicability. This is because there is a wide variability in the visual characteristics of remote sensing imagery across different geographic locations, and CNNs are often trained and tested on imagery from nearby (or the same) geographic locations. It is therefore unclear whether trained CNNs will perform well on new, previously unseen, geographic locations, which is an important practical consideration. In this work we investigate this problem when applying CNNs for solar array detection on a large aerial imagery dataset comprised of two nearby US cities. We compare the performance of CNNs under two conditions: training and testing on the same city vs training on one city and testing on another city. We discuss several subtle difficulties with these experiments and make recommendations. We show that there can be substantial performance loss in second case, when compared to the first. We also investigate how much training data is required from the unseen city in order to fine-tune the CNN so that it performs well. We investigate several different fine-tuning strategies, yielding a clear winner.","PeriodicalId":128779,"journal":{"name":"2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"140 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132521904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 17
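A sketch of the two evaluation conditions the paper contrasts, plus the how-much-data-is-needed sweep, expressed as Python with injected helpers (train_fn, evaluate_fn, fine_tune_fn, the city objects, and the subset sizes are hypothetical placeholders, not the paper's protocol):

```python
def cross_city_experiment(train_fn, evaluate_fn, fine_tune_fn,
                          city_a, city_b, subset_sizes=(100, 500, 1000)):
    """Condition 1: train and test on the same city.
    Condition 2: train on city A, test on city B, then fine-tune on
    increasing amounts of labeled data from the unseen city."""
    model = train_fn(city_a.train)
    results = {
        "same_city": evaluate_fn(model, city_a.test),
        "cross_city": evaluate_fn(model, city_b.test),  # expect a drop here
    }
    for n in subset_sizes:
        tuned = fine_tune_fn(model, city_b.train[:n])
        results[f"fine_tuned_{n}"] = evaluate_fn(tuned, city_b.test)
    return results
```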
Automated generation of convolutional neural network training data using video sources
2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date: 2017-10-01 DOI: 10.1109/AIPR.2017.8457936
A. Kalukin, Wade Leonard, Joan Green, L. Burgwardt
{"title":"Automated generation of convolutional neural network training data using video sources","authors":"A. Kalukin, Wade Leonard, Joan Green, L. Burgwardt","doi":"10.1109/AIPR.2017.8457936","DOIUrl":"https://doi.org/10.1109/AIPR.2017.8457936","url":null,"abstract":"One of the challenges of using techniques such as convolutional neural networks and deep learning for automated object recognition in images and video is to be able to generate sufficient quantities of labeled training image data in a cost-effective way. It is generally preferred to tag hundreds of thousands of frames for each category or label, and a human being tagging images frame by frame might expect to spend hundreds of hours creating such a training set. One alternative is to use video as a source of training images. A human tagger notes the start and stop time in each clip for the appearance of objects of interest. The video is broken down into component frames using software such as ffmpeg. The frames that fall within the time intervals for objects of interest are labeled as “targets,” and the remaining frames are labeled as “non-targets.” This separation of categories can be automated. The time required by a human viewer using this method would be around ten hours, at least 1–2 orders of magnitude lower than a human tagger labeling frame by frame. The false alarm rate and target detection rate can by optimized by providing the system unambiguous training examples.","PeriodicalId":128779,"journal":{"name":"2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125747684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
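A minimal sketch of the pipeline described: ffmpeg breaks the clip into frames, and each frame is labeled "target" or "non-target" by whether its timestamp falls inside a human-annotated interval. The file layout, frame rate, and interval format below are assumptions:

```python
import subprocess
from pathlib import Path

FPS = 10  # extraction rate; frame k corresponds roughly to timestamp k / FPS

def extract_frames(video: str, out_dir: str, fps: int = FPS) -> list:
    """Break a video into numbered still frames with ffmpeg."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    subprocess.run(["ffmpeg", "-i", video, "-vf", f"fps={fps}",
                    f"{out_dir}/frame_%06d.png"], check=True)
    return sorted(Path(out_dir).glob("frame_*.png"))

def label_frames(frames, target_intervals, fps: int = FPS):
    """target_intervals: list of (start_sec, stop_sec) pairs noted by a
    human viewer for each appearance of an object of interest."""
    labeled = []
    for i, frame in enumerate(frames):
        t = i / fps  # ffmpeg numbers frames from 1; the offset is negligible
        is_target = any(start <= t <= stop for start, stop in target_intervals)
        labeled.append((frame, "target" if is_target else "non-target"))
    return labeled
```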
The Anatomy of a Neural Network
2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date: 2017-10-01 DOI: 10.1109/AIPR.2017.8457937
J. LaRue, R. Tutwiler, Dennison J. Larue
{"title":"The Anatomy of a Neural Network","authors":"J. LaRue, R. Tutwiler, Dennison J. Larue","doi":"10.1109/AIPR.2017.8457937","DOIUrl":"https://doi.org/10.1109/AIPR.2017.8457937","url":null,"abstract":"It is true there have been great improvements with the effectiveness of utilizing Neural Networks. However, these improvements are, for the most part, relegated to improved clock speeds, leveraging increase in memory, and GPU enabled parallelization of up-front processing. However, what has been seemingly forgotten over the last twenty or so years is the understanding of how the internal layers are reacting with respect to convergence in training, and information transformation across layers during test, which in turn may account for a common perception that the internal neural layers are opaque black boxes. This paper will show in two parts that in fact, this is not true. Part one will demonstrate, through matrix visualization, the feed-forward processing throughout a multi-layer convolutional neural network. Part 2 will discuss our unique derivative application of Kohonen's and Kosko's correlation matrix memory methods to the consecutive pairs of layers within the network in order to form stabilized and compressible associative memory matrices. The subtlety of Part 2 is that our stabilized matrices can be simply multiplied together, thus forming a single layer, and therefore realizing The Universal Approximation Theorem of Cybenko and Hornik. In effect, the anatomy of the neural network will reveal how to open up the black box and take advantage of its inner workings.","PeriodicalId":128779,"journal":{"name":"2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121863734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
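The claim that the stabilized matrices "can simply be multiplied together, thus forming a single layer" rests on the composition of linear maps; a toy numpy illustration of that algebraic step (dimensions are arbitrary, and this omits the paper's correlation-matrix-memory construction):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((64, 128))   # layer 1: 128 -> 64
W2 = rng.standard_normal((10, 64))    # layer 2: 64 -> 10
x = rng.standard_normal(128)

# Two consecutive linear layers (no nonlinearity between them)...
two_layer = W2 @ (W1 @ x)
# ...collapse exactly into one layer whose weights are the matrix product.
W_single = W2 @ W1                     # 128 -> 10 in a single step
assert np.allclose(two_layer, W_single @ x)
```

With a nonlinearity between layers this identity does not hold in general; the paper's stabilized associative memory matrices are what make the pairwise composition meaningful there.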
Super-Resolution for Color Imagery
2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date: 2017-09-01 DOI: 10.1109/AIPR.2017.8457964
Isabella Herold, S. Young
{"title":"Super-Resolution for Color Imagery","authors":"Isabella Herold, S. Young","doi":"10.1109/AIPR.2017.8457964","DOIUrl":"https://doi.org/10.1109/AIPR.2017.8457964","url":null,"abstract":"Super-resolution image reconstruction (SRIR) can improve image resolution using a sequence of low-resolution images without upgrading the sensor's hardware. Here, we consider an efficient approach of super-resolving color images. The direct approach is to super-resolve 3 color bands of the input color image sequence separately; however, it requires performing the super-resolution computation 3 times. We transform images in the default red, green, blue (RGB) color space to another color space where SRIR can be used efficiently. Digital color images can be decomposed into 3 grayscale pictures, each representing a different color space coordinate. In common color spaces, one of the coordinates (i.e., grayscale pictures) contains luminance information while the other 2 contain chrominance information. We use only the luminance component in the US Army Research Laboratory's (ARL) SRIR algorithm and upsample the chrominance components based on ARL's alias-free image upsampling using Fourier-based windowing methods. A reverse transformation is performed on these 3 components/pictures to produce a super-resolved color image in the original RGB color space. Five color spaces (CIE 1976 (L*, a*, b*) color space [CIELAB], YIQ, YCbCr, hue-saturation-value [HSV], and hue-saturation-intensity [HSI]) are considered to test the merit of the proposed approach. The results of super-resolving real-world color images are provided.","PeriodicalId":128779,"journal":{"name":"2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128225007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
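A minimal sketch of the color-space strategy, using OpenCV for the YCbCr conversion and plain bicubic interpolation as a stand-in for both ARL's SRIR algorithm and its Fourier-windowed upsampler (both substitutions are assumptions for illustration; the real pipeline operates on an image sequence, not a single frame):

```python
import cv2
import numpy as np

def super_resolve_color(bgr: np.ndarray, scale: int,
                        super_resolve_luma=None) -> np.ndarray:
    """Upscale a color image by super-resolving only the luminance channel."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)
    h, w = y.shape
    size = (w * scale, h * scale)

    if super_resolve_luma is None:
        # Stand-in for the SRIR algorithm applied to the luminance band.
        super_resolve_luma = lambda ch: cv2.resize(
            ch, size, interpolation=cv2.INTER_CUBIC)
    y_sr = super_resolve_luma(y)
    # Chrominance carries less perceptual detail: plain upsampling suffices.
    cr_up = cv2.resize(cr, size, interpolation=cv2.INTER_CUBIC)
    cb_up = cv2.resize(cb, size, interpolation=cv2.INTER_CUBIC)

    merged = cv2.merge([y_sr, cr_up, cb_up])
    return cv2.cvtColor(merged, cv2.COLOR_YCrCb2BGR)
```

The payoff is that the expensive reconstruction runs once (on luminance) instead of three times, while the eye's lower sensitivity to chrominance detail hides the cheaper chroma upsampling.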
Multi-Scale Spatially Weighted Local Histograms in O(1)
2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date: 2017-05-09 DOI: 10.1109/AIPR.2017.8457944
M. Poostchi, A. Shafiekhani, K. Palaniappan, G. Seetharaman
{"title":"Multi-Scale Spatially Weighted Local Histograms in O(1)","authors":"M. Poostchi, A. Shafiekhani, K. Palaniappan, G. Seetharaman","doi":"10.1109/AIPR.2017.8457944","DOIUrl":"https://doi.org/10.1109/AIPR.2017.8457944","url":null,"abstract":"Histograms are commonly used to characterize and analyze the region of interest within an image. Weighting the contributions of the pixels to the histogram is a key feature to handle noise and occlusion and increase object localization accuracy of many histogram-based search problems including object detection, tracking and recognition. The integral histogram method provides an optimum and complete solution to compute the plain histogram of any rectangular region in constant time. However, the matter of how accurately extract the weighted histogram of any arbitrary region within an image using integral histogram has not been addressed. This paper presents a novel fast algorithm to evaluate spatially weighted local histograms at different scale accurately and in constant time using an extension of integral histogram. Utilizing the integral histogram makes it to be fast, multi-scale and flexible to different weighting functions. The pixel-level weighting problem is addressed by decomposing the Manhattan spatial filter and fragmenting the region of interest. We evaluated and compared the computational complexity and accuracy of our proposed approach with brute-force implementation and approximation scheme. The proposed method can be integrated into any detection and tracking framework to provide an efficient exhaustive search, improve target localization accuracy and meet the demand of real-time processing.","PeriodicalId":128779,"journal":{"name":"2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117049209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
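A minimal sketch of the plain (unweighted) integral histogram that the paper extends: one pass builds cumulative per-bin counts, after which the histogram of any rectangle comes from four lookups via inclusion-exclusion. The weighted, multi-scale extension is the paper's contribution and is not reproduced here:

```python
import numpy as np

def build_integral_histogram(img: np.ndarray, n_bins: int) -> np.ndarray:
    """img: 2-D array of bin indices in [0, n_bins).
    Returns a zero-padded (H+1, W+1, n_bins) integral histogram."""
    h, w = img.shape
    ih = np.zeros((h + 1, w + 1, n_bins), dtype=np.int64)
    onehot = np.eye(n_bins, dtype=np.int64)[img]       # (H, W, n_bins)
    ih[1:, 1:] = onehot.cumsum(axis=0).cumsum(axis=1)  # 2-D prefix sums per bin
    return ih

def region_histogram(ih, top, left, bottom, right):
    """Histogram of rows [top, bottom), cols [left, right) in O(1):
    four corner lookups, independent of the region's size."""
    return (ih[bottom, right] - ih[top, right]
            - ih[bottom, left] + ih[top, left])
```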
Effect of Super Resolution on High Dimensional Features for Unsupervised Face Recognition in the Wild
2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date: 2017-03-23 DOI: 10.1109/AIPR.2017.8457967
A. ElSayed, A. Mahmood, T. Sobh
{"title":"Effect of Super Resolution on High Dimensional Features for Unsupervised Face Recognition in the Wild","authors":"A. ElSayed, A. Mahmood, T. Sobh","doi":"10.1109/AIPR.2017.8457967","DOIUrl":"https://doi.org/10.1109/AIPR.2017.8457967","url":null,"abstract":"Majority of the face recognition algorithms use query faces captured from uncontrolled, in the wild, environment. Because of cameras' limited capabilities, it is common for these captured facial images to be blurred or low resolution. Super resolution algorithms are therefore crucial in improving the resolution of such images especially when the image size is small and enlargement is required. This paper aims to demonstrate the effect of one of the state-of-the-art algorithms in the field of image super resolution. To demonstrate the functionality of the algorithm, various before and after 3D face alignment cases are provided using the images from the Labeled Faces in the Wild (lfw) dataset. Resulting images are subject to test on a closed set recognition protocol using unsupervised algorithms with high dimensional extracted features. The inclusion of super resolution algorithm resulted in significant improvement in recognition rate over recently reported results obtained from unsupervised algorithms on the same dataset.","PeriodicalId":128779,"journal":{"name":"2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124226764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
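A sketch of the closed-set, unsupervised protocol described above: each (optionally super-resolved) query face is matched to the gallery identity whose feature vector is most similar. The super-resolution step and feature extractor are injected functions, since the paper's specific choices are not spelled out in this abstract; cosine similarity is an illustrative assumption:

```python
import numpy as np

def closed_set_identify(query_imgs, gallery_feats, gallery_ids,
                        extract_features, super_resolve=None):
    """Return the predicted gallery identity for each query image.

    gallery_feats: (G, D) L2-normalized feature matrix.
    gallery_ids:   (G,) identity labels aligned with gallery_feats.
    """
    preds = []
    for img in query_imgs:
        if super_resolve is not None:
            img = super_resolve(img)        # enhance the low-res query first
        f = extract_features(img)
        f = f / np.linalg.norm(f)
        sims = gallery_feats @ f            # cosine similarity to each identity
        preds.append(gallery_ids[int(np.argmax(sims))])
    return preds
```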