2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR): Latest Publications

When to engage video resilience options
2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) | Pub Date: 2013-10-01 | DOI: 10.1109/AIPR.2013.6749342
Darrell L. Young

Abstract: Video bit errors or missing packets can cause freezing or distortion, resulting in a severe loss of interpretability. Resilience options can mitigate the error propagation that arises from the spatial and temporal dependencies in the compressed bit stream. These options can be invoked at the encoder based on feedback from the application-layer decoder; if no feedback channel is available, the encoder controller can estimate data-link performance and engage sufficient resilience options. Decode errors can be summarized in metadata to alert upstream encoders to engage resilience and to warn downstream consumers of artifacts due to damaged or missing macroblocks.

Citations: 1
A case study on data fusion with overlapping segments
2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) | Pub Date: 2013-10-01 | DOI: 10.1109/AIPR.2013.6749316
A. Cloninger, W. Czaja, T. Doster

Abstract: With the continual improvement and diversification of existing sensing modalities and the emergence of new sensing technologies, methods to effectively and efficiently fuse diverse and heterogeneous data sets are increasingly important. When different sensors acquire data over the same region of the Earth, a direct comparison between pixels acquired by one sensor and pixels acquired by another becomes difficult: the sensors may produce different numbers of bands, or they may measure drastically different spaces (e.g., hyperspectral and LIDAR). One solution to this problem is Feature Space Rotation, which first embeds each sensor's data independently in its own feature space via a machine learning algorithm and then learns a rotation that brings the separate feature spaces into a common one. This rotation, in its original form, requires some amount of overlap between the data sets. We propose a study to determine the effect that decreasing the amount of overlap between the two sensors has on classification accuracy. For this study, we rely on hyperspectral data simulated to come from two disjoint sensors.

Citations: 3
Dynamic multistatic synthetic aperture radar (DMSAR) with image reconstruction algorithms and analysis
2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) | Pub Date: 2013-10-01 | DOI: 10.1109/AIPR.2013.6749325
G. Seetharaman, Eric T. Hayden, M. Schmalz, William Chapman, S. Ranka, S. Sahni

Abstract: The imaging of ground objects by circular synthetic aperture radar (CSAR) is a well-known technique that can benefit from the use of multiple receivers in a multistatic configuration. Although static receivers have been employed to determine the location of one or more airborne objects, the use of multiple dynamically positioned airborne receivers with an airborne transmitter represents a novel application of SAR technology for the detection of ground objects. This paper presents theory, algorithms, and experimental results for dynamic multistatic synthetic aperture radar (DMSAR) for airborne sensing of ground-based objects. We emphasize the proper placement of receivers in relation to the transmitter and object area, in order to achieve a specified quality of image reconstruction. Additionally, we discuss performance issues pertaining to the computation of DMSAR on high-performance platforms such as clusters of graphics processing units or hybrid processors. Example results and analysis are taken from synthetic or public-domain data.

Citations: 0
Error characterization of flight trajectories reconstructed using Structure from Motion
2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) | Pub Date: 2013-10-01 | DOI: 10.1109/AIPR.2013.6749308
Daniel C. Alix, K. Walli, J. Raquet

Abstract: This research assesses the accuracy of Structure from Motion algorithms in replicating aircraft flight trajectories in real-world coordinate systems. Structure from Motion techniques can estimate aircraft trajectory and attitude by estimating the position and pose of a camera mounted on the airframe from a series of images taken with that camera. The scale and coordinate system associated with these pose estimates are arbitrary, but can be tied to a real-world coordinate system and scale given knowledge of terrain features or aircraft INU/GPS measurements. As a result, Structure from Motion techniques hold great promise for image-based Simultaneous Localization and Mapping (SLAM); however, the error associated with these techniques must be understood before incorporation into a robust navigation system. The error in the Structure from Motion trajectory reconstruction is affected by feature matching methods, camera geometry, and algorithm parameters, as well as by the method used to convert from reconstruction coordinates to real-world coordinates. Methods of using inertial and geographic measurement data to associate a scale and real-world coordinate system with the trajectory estimates are developed and compared. The reconstructed flight trajectory output by Structure from Motion is compared to the actual flight trajectory to characterize errors, using both simulation and flight-test data in a variety of flight regimes.

Citations: 6
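The coordinate-alignment step described above — tying the arbitrary SfM frame and scale to a real-world frame using corresponding camera centers and GPS/INU fixes — is commonly posed as a least-squares similarity transform. A minimal sketch using the Umeyama/Procrustes method follows; the function name and interface are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def align_trajectory(sfm_pts, world_pts):
    """Least-squares similarity transform (scale s, rotation R,
    translation t) mapping SfM coordinates to world coordinates
    via the Umeyama method.

    Both inputs are (N, 3) arrays of corresponding positions
    (e.g., reconstructed camera centers vs. GPS fixes).
    """
    n = len(sfm_pts)
    mu_s = sfm_pts.mean(axis=0)
    mu_w = world_pts.mean(axis=0)
    Xs = sfm_pts - mu_s
    Xw = world_pts - mu_w
    # Cross-covariance of target (world) vs. source (SfM) points.
    U, D, Vt = np.linalg.svd(Xw.T @ Xs / n)
    S = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:
        S[2, 2] = -1          # guard against a reflection solution
    R = U @ S @ Vt
    var_s = (Xs ** 2).sum() / n
    s = np.trace(np.diag(D) @ S) / var_s
    t = mu_w - s * R @ mu_s
    return s, R, t
```

Applying the recovered `(s, R, t)` to every SfM camera center yields the trajectory in world coordinates, which can then be differenced against truth to characterize error as the paper does.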
Landslide detection on earthen levees with X-band and L-band radar data
2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) | Pub Date: 2013-10-01 | DOI: 10.1109/AIPR.2013.6749306
Lalitha Dabbiru, J. Aanstoos, K. Hasan, N. Younan, Wei Li

Abstract: This paper explores anomaly detection algorithms to detect vulnerabilities on Mississippi River levees using remotely sensed Synthetic Aperture Radar (SAR) data. Earthen levees protect large areas of populated and cultivated land in the United States. One sign of potential levee failure is the occurrence of landslides due to slope instabilities; such slides could lead to further erosion and through-seepage during high-water events. This research seeks to design a system capable of performing automated target recognition tasks using radar data to detect problem areas on earthen levees. Polarimetric SAR data is effective for detecting such phenomena. In this research, we analyze the ability of different polarization channels to detect landslides using anomaly detection algorithms applied to different frequency bands of synthetic aperture radar data. The two SAR datasets used in this study are (1) X-band satellite radar data from DLR's TerraSAR-X satellite and (2) L-band airborne radar data from NASA's Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR). The RX anomaly detector, an unsupervised classification algorithm, was implemented to detect anomalies on the levee, with the discrete wavelet transform (DWT) used for feature extraction. The algorithm was tested with both the L-band and X-band SAR data, and the results demonstrate that landslide detection using L-band radar data achieves better accuracy than the X-band data, based on the detection of true positives.

Citations: 7
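The RX anomaly detector mentioned above has a standard closed form: each pixel is scored by its Mahalanobis distance from the global background mean and covariance. A minimal sketch — the interface is an illustrative assumption, since the paper's implementation details are not given here:

```python
import numpy as np

def rx_detector(cube):
    """Global RX anomaly detector.

    cube: (rows, cols, bands) array of multiband pixel values
          (e.g., polarimetric SAR features or DWT coefficients).
    Returns an (rows, cols) map of anomaly scores, where the score
    is the squared Mahalanobis distance of each pixel vector from
    the scene-wide background statistics.
    """
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    cov_inv = np.linalg.pinv(cov)      # pseudo-inverse for stability
    d = X - mu
    scores = np.einsum('ij,jk,ik->i', d, cov_inv, d)
    return scores.reshape(rows, cols)
```

Pixels whose score exceeds a chosen threshold are flagged as anomalous; in the paper's setting those flags correspond to candidate slide locations on the levee.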
Multi-scale decomposition tool for Content Based Image Retrieval
2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) | Pub Date: 2013-10-01 | DOI: 10.1109/AIPR.2013.6749318
Soundararajan Ezekiel, M. Alford, D. Ferris, Eric K. Jones, A. Bubalo, M. Gorniak, Erik Blasch

Abstract: Content Based Image Retrieval (CBIR) is a technical area focused on answering the "who, what, where, and when" questions associated with imagery. A multi-scale feature extraction scheme based on wavelet and Contourlet transforms is proposed to reliably extract objects in images. First, we explore the Contourlet transformation in association with a Pulse Coupled Neural Network (PCNN); the second technique is based on Rescaled Range (R/S) analysis. Both methods provide flexible multi-resolution decomposition and directional feature extraction, and both are suitable for image fusion. The Contourlet transformation is conceptually similar to a wavelet transformation, but simpler, faster, and less redundant. R/S analysis uses the range R of cumulative deviations from the mean, divided by the standard deviation S, to calculate a scaling exponent, the Hurst exponent H. Following the original work of Hurst, the exponent H provides a quantitative measure of the persistence of similarities in a signal. For images, if information exhibits self-similarity and fractal correlation, then H gives a measure of the smoothness of the objects. The experimental results demonstrate that our proposed approach has promising applications for CBIR. We apply our multiscale decomposition approach to images with simple thresholding of wavelet/curvelet coefficients, obtaining visually sharper object outlines, salient extraction of object edges, and increased perceptual quality. We further explore these approaches to segment images, and the empirical results reported here are encouraging for determining who or what is in an image.

Citations: 22
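The R/S calculation described above — the range of cumulative mean-adjusted deviations divided by the standard deviation, fitted against window size on a log-log scale — can be sketched as follows. The chunking scheme and interface are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def hurst_rs(series, min_chunk=8):
    """Estimate the Hurst exponent H by rescaled-range (R/S) analysis.

    For each window size n (doubling from min_chunk), split the
    series into non-overlapping chunks, compute the range R of
    cumulative mean-adjusted deviations divided by the standard
    deviation S, average R/S over chunks, then fit the slope of
    log(R/S) versus log(n) to obtain H.
    """
    series = np.asarray(series, dtype=float)
    N = len(series)
    sizes, rs_vals = [], []
    n = min_chunk
    while n <= N // 2:
        ratios = []
        for start in range(0, N - n + 1, n):
            chunk = series[start:start + n]
            Z = np.cumsum(chunk - chunk.mean())   # cumulative deviations
            R = Z.max() - Z.min()                  # range
            S = chunk.std()                        # standard deviation
            if S > 0:
                ratios.append(R / S)
        if ratios:
            sizes.append(n)
            rs_vals.append(np.mean(ratios))
        n *= 2
    H, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return H
```

For white noise H is near 0.5; persistent (trending) signals score higher, which is the smoothness interpretation the abstract applies to image content.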
Human object interactions recognition based on social network analysis
2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) | Pub Date: 2013-10-01 | DOI: 10.1109/AIPR.2013.6749320
Guang Yang, Yafeng Yin, H. Man

Abstract: Recognizing human-object interactions in videos is a very challenging problem in computer vision research. This task presents two major difficulties: (1) the detection of human body parts and objects is affected by video quality — low resolution, camera motion, and frames blurred by fast motion — as well as by self-occlusions during human-object interactions; and (2) the spatial and temporal dynamics of human-object interaction are hard to model. To overcome these obstacles, we propose a new method that uses social network analysis (SNA) based features to describe the distributions and relationships of low-level objects for human-object interaction recognition. In this approach, the detected human body parts and objects are treated as nodes in social network graphs, and a set of SNA features — including closeness, centrality, and centrality with relative velocity — is extracted for action recognition. A major advantage of the SNA-based feature set is its robustness to varying node numbers and erroneous node detections, which are very common in human-object interactions. An SNA feature vector is extracted for each frame, and different human-object interactions are classified based on these features. Two classification methods, Support Vector Machine (SVM) and Hidden Markov Model (HMM), were used to evaluate the proposed feature set on four different human-object interactions from the HMDB dataset [1]. The experimental results demonstrate that the proposed framework effectively captures the dynamic characteristics of human-object interaction and outperforms state-of-the-art methods in human-object interaction recognition.

Citations: 5
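To illustrate the kind of per-node feature the abstract describes, the sketch below computes a closeness score for each detected node directly from pairwise Euclidean distances in the frame. This is a hypothetical simplification: the paper's exact feature definitions (including centrality with relative velocity) are not specified here, and a full implementation would operate on a weighted graph per frame.

```python
import math

def closeness_features(nodes):
    """Closeness score per detected node (illustrative simplification).

    nodes: list of (x, y) positions of detected body parts/objects
           in one frame, treated as a fully connected graph weighted
           by Euclidean distance.
    Each node's score is (n - 1) / (sum of its distances to all other
    nodes), so nodes near the center of the interaction score highest.
    Because the score is computed per node, the feature adapts to a
    varying number of detections from frame to frame.
    """
    n = len(nodes)
    feats = []
    for i, (xi, yi) in enumerate(nodes):
        total = sum(math.hypot(xi - xj, yi - yj)
                    for j, (xj, yj) in enumerate(nodes) if j != i)
        feats.append((n - 1) / total if total > 0 else 0.0)
    return feats
```

Per-frame vectors of such scores (e.g., summary statistics over nodes) could then feed a frame-level SVM or a temporal HMM, as in the evaluation the abstract describes.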
Aircraft navigation by means of image registration
2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) | Pub Date: 2013-10-01 | DOI: 10.1109/AIPR.2013.6749335
M. D. Pritt, Kevin J. LaTourette

Abstract: Most aircraft, whether manned or unmanned, depend on the Global Positioning System (GPS) for navigation. However, GPS signals are susceptible to jamming and spoofing. A researcher recently demonstrated the use of spoofing to take control of a ship in the Mediterranean Sea: by feeding counterfeit signals to the ship, he was able to drive it far off course while the navigation system reported that the vessel was moving calmly along its intended route. The same researcher also demonstrated the ability to spoof unmanned aerial vehicles (UAVs). The purpose of this paper is to demonstrate a navigation system that does not rely on GPS. The only sensor required is a digital camera. The system works by taking a photograph of the ground and georegistering it to a reference image or digital elevation model (DEM) with the GeoMI™ georegistration software. By transferring geospatial information from the DEM to the photo, the system calculates the position and orientation of the aircraft. The process is fast enough to run in real time and is suitable for use as a primary or backup navigation system for aircraft, UAVs, and cruise missiles. For nighttime navigation, the system could be extended to infrared or radar imagery. Experimental results show that navigation accuracies of 2 meters CE90 can be achieved.

Citations: 6
Vision-based navigation system for obstacle avoidance in complex environments
2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) | Pub Date: 2013-10-01 | DOI: 10.1109/AIPR.2013.6749314
Yakov Diskin, B. Nair, A. Braun, S. Duning, V. Asari

Abstract: We present a mobile system capable of autonomous navigation through complex unknown environments that contain stationary obstacles and moving targets. The intelligent system is composed of several fine-tuned computer vision algorithms running onboard in real time. The first of these utilizes onboard cameras for stereoscopic estimation of depth within the surrounding environment; the novelty of the approach lies in its algorithmic efficiency and in the system's ability to complete a given task through scene reconstruction and real-time automated decisions. Second, the system performs human body detection and recognition using advanced local binary pattern (LBP) descriptors, which allow it to perform human identification and tracking irrespective of lighting conditions. Lastly, face detection and recognition provide an additional layer of biometrics to ensure the correct target is being tracked. The face detection algorithm utilizes Viola-Jones cascades, which are combined to create a pose-invariant face detection system; we further utilize a modular principal component analysis technique to perform pose-invariant face recognition. In this paper, we present the results of a series of experiments designed to automate the security patrol process. Our mobile security system completes a series of tasks within scenarios of varying difficulty: tracking an object in an open environment, following a person of interest through a crowded environment, and following a person who disappears around a corner.

Citations: 3
Stressed vegetation identification by SAR time series as an indicator of slope instability in Mississippi River levee segments
2013 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) | Pub Date: 2013-10-01 | DOI: 10.1109/AIPR.2013.6749307
K. Hasan, J. Aanstoos, Majid Mahrooghy

Abstract: Surface vegetation reflects various characteristics of the soil on which it grows. Differences in vegetation type and growth rate were observed between recently cracked surfaces and stable soil on earthen levees along the lower Mississippi River. We attempted to directly characterize the levee surface beneath the vegetation cover using X-band synthetic aperture radar from TerraSAR-X; due to its short wavelength, however, most of the backscatter comes from the vegetation rather than the soil. Hence a time series of SAR imagery was acquired over the period when vegetation growth and biomass were at their lowest, and the multi-temporal radar backscatter pattern was used to distinguish healthy from stressed vegetation growing over stable and unstable (subject to slump slides) levee segments. Field data showed that vegetation was most vigorous over healthy or repaired levee surfaces and poorest over areas with surface cracks. Three time series (HH, VV, and HH-VV images) were made for six dates over a seven-month period. Correlation with field data revealed that imagery from October through February was most effective in identifying the target vegetation differences: in this span, the differences in vegetation vigor were greatest between healthy levee surfaces and those with cracks (which could lead to potential levee failures). The three time series were classified independently using field-derived training polygons, and the stressed vegetation class was extracted. More than 90% of the known slump slides were identified by the classification, but a significant number of false positives resulted.

Citations: 2