2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR): Latest Publications

Learning tree-structured approximations for conditional random fields
2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date : 2014-11-18 DOI: 10.1109/AIPR.2014.7041937
A. Skurikhin
Abstract: Exact probabilistic inference is computationally intractable in general probabilistic graph-based models, such as Markov Random Fields and Conditional Random Fields (CRFs). We investigate spanning tree approximations for the discriminative CRF model. We decompose the original computationally intractable grid-structured CRF model containing many cycles into a set of tractable sub-models using a set of spanning trees. The structure of the spanning trees is generated uniformly at random among all spanning trees of the original graph. These trees are learned independently to address the classification problem, and Maximum Posterior Marginal estimation is performed on each individual tree. Classification labels are produced via a voting strategy over the marginals obtained on the sampled spanning trees. The learning is computationally efficient because inference on trees is exact and efficient. Our objective is to investigate how well a pool of randomly sampled acyclic graphs can approximate the original loopy graph model with loopy belief propagation inference. We focus on the impact of memorizing the structure of the sampled trees. We compare two approaches to creating an ensemble of spanning trees whose parameters are optimized during learning: (1) memorizing the structure of the sampled spanning trees used during learning, and (2) not storing the structure of the sampled spanning trees after learning and regenerating trees anew. Experiments are done on two image datasets consisting of synthetic and real-world images, designed for the tasks of binary image denoising and man-made structure recognition.
Citations: 3
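The key step in the paper above is drawing spanning trees uniformly at random from the grid graph. A minimal sketch of that step using the Aldous-Broder random-walk algorithm on a 4-connected grid (the grid size and function names here are illustrative, not from the paper):

```python
import random

def neighbors(node, rows, cols):
    """4-connected neighbours of a grid node (r, c)."""
    r, c = node
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < rows and 0 <= nc < cols:
            yield (nr, nc)

def uniform_spanning_tree(rows, cols, rng):
    """Aldous-Broder: random-walk the graph; the edge by which each node is
    first entered forms a spanning tree, drawn uniformly at random."""
    current = (rng.randrange(rows), rng.randrange(cols))
    visited = {current}
    tree = set()
    while len(visited) < rows * cols:
        nxt = rng.choice(list(neighbors(current, rows, cols)))
        if nxt not in visited:
            visited.add(nxt)
            tree.add(frozenset((current, nxt)))
        current = nxt
    return tree

rng = random.Random(0)
trees = [uniform_spanning_tree(4, 5, rng) for _ in range(10)]
# every spanning tree of a 20-node graph has exactly 19 edges
assert all(len(t) == 19 for t in trees)
```

In the paper's pipeline, each sampled tree would then carry its own CRF parameters, and per-pixel labels would be produced by voting over the trees' max posterior marginals.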
Development of spectropolarimetric imagers for imaging of desert soils
2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041908
N. Gupta
Abstract: There is much interest in imaging of desert soils to understand their mineral composition, grain sizes, and orientations for various civilian and military applications. We discuss the development of two novel field-portable spectropolarimetric imagers based on acousto-optic tunable filter (AOTF) technology in the visible near-infrared (VNIR) and shortwave infrared (SWIR) wavelength regions. The first imager covers a spectral region from 450 to 800 nm with a bandwidth of 5 nm at 633 nm, and the second from 1000 to 1600 nm with a bandwidth of 15 nm at 1350 nm. These imagers will be used in field tests. In this paper, we discuss salient aspects of spectropolarimetric imager development and present some data collected with the imagers.
Citations: 6
Large displacement optical flow based image predictor model
2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041943
N. Verma, Aakansha Mishra
Abstract: This paper proposes a Large Displacement Optical Flow based image predictor model for generating future image frames from past and present frames. The predictor model is an Artificial Neural Network (ANN) and Radial Basis Function Neural Network (RBFNN) model whose inputs are the horizontal and vertical velocity components estimated using Large Displacement Optical Flow for every pixel in a given image sequence. There has been a significant amount of past research on generating future image frames from a given set of frames. The quality of generated images is evaluated using Canny's edge detection Index Metric (CIM) and the Mean Structural Similarity Index Metric (MSSIM). For our proposed algorithm, the CIM and MSSIM indices of all generated future images are better than those of the most recent existing algorithms for future image frame generation. The objective of this study is to develop a generalized framework that can predict future frames for any image sequence with large object displacements. We validate the developed image predictor model on an image sequence of a landing jet fighter, and the obtained performance indices are better than those of the most recent existing image predictor models.
Citations: 6
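The paper's neural predictor consumes per-pixel flow vectors; the final prediction step amounts to warping the current frame along the flow. A toy sketch of that warping step only (not the paper's ANN/RBFNN model) with NumPy, using nearest-neighbour backward warping:

```python
import numpy as np

def predict_next_frame(frame, flow):
    """Backward warping: next[y, x] = frame[y - v(y, x), x - u(y, x)],
    with nearest-neighbour sampling and edge clamping."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.rint(xs - flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.rint(ys - flow[..., 1]).astype(int), 0, h - 1)
    return frame[src_y, src_x]

# toy sequence: a bright 2x2 square moving 3 pixels to the right
frame = np.zeros((8, 8))
frame[3:5, 1:3] = 1.0
flow = np.zeros((8, 8, 2))
flow[..., 0] = 3.0                  # uniform horizontal displacement u = 3
pred = predict_next_frame(frame, flow)
assert pred[3:5, 4:6].sum() == 4.0  # the square lands on columns 4-5
```

A learned predictor replaces the fixed warp with a network that maps flow components to future intensities, but the geometry of the displacement is the same.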
A 3D pointcloud registration algorithm based on fast coherent point drift
2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041917
Min Lu, Jian Zhao, Yulan Guo, Jianping Ou, Jonathan Li
Abstract: Pointcloud registration has a number of applications in various research areas. Computational complexity and accuracy are two major concerns for a pointcloud registration algorithm. This paper proposes a novel Fast Coherent Point Drift (F-CPD) algorithm for 3D pointcloud registration. The original CPD method is very time-consuming, and the situation becomes even worse when the number of points is large. In order to overcome the limitations of the original CPD algorithm, a globally convergent squared iterative expectation maximization (gSQUAREM) scheme is proposed. The gSQUAREM scheme uses an iterative strategy to estimate the transformations and correspondences between two pointclouds. Experimental results on a synthetic dataset show that the proposed algorithm outperforms the original CPD algorithm and the Iterative Closest Point (ICP) algorithm in terms of both registration accuracy and convergence rate.
Citations: 5
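For context, the ICP baseline the paper compares against alternates two steps: match each point to its nearest neighbour, then solve the best rigid transform for the matches (the Kabsch/Procrustes step). A minimal NumPy sketch of that baseline, with illustrative data (this is not the F-CPD algorithm itself):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Kabsch: least-squares R, t such that R @ src[i] + t ~ dst[i]."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def nn_cost(a, b):
    """Mean squared distance from each point of a to its nearest point in b."""
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return d.min(1).mean()

def icp(src, dst, iters=30):
    """Basic ICP: alternate nearest-neighbour matching with the Kabsch step."""
    cur = src.copy()
    for _ in range(iters):
        d = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        R, t = best_rigid_transform(cur, dst[d.argmin(1)])
        cur = cur @ R.T + t
    return cur

rng = np.random.default_rng(1)
dst = rng.normal(size=(50, 3))
theta = 0.1                          # small rotation about z plus a small shift
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
src = dst @ R_true.T + np.array([0.2, -0.1, 0.05])
cost_before = nn_cost(src, dst)
aligned = icp(src, dst)
cost_after = nn_cost(aligned, dst)
assert cost_after <= cost_before + 1e-12  # each ICP iteration never increases the matching cost
```

CPD replaces the hard nearest-neighbour assignments with soft Gaussian-mixture correspondences estimated by EM; gSQUAREM accelerates that EM loop.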
Mathematical model and experimental methodology for calibration of a LWIR polarimetric-hyperspectral imager
2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041909
Joel G. Holder, Jacob A. Martin, K. Gross
Abstract: Polarimetric-hyperspectral imaging brings two traditionally independent modalities together to potentially enhance scene characterization capabilities. This could increase confidence in target detection, material identification, and background characterization over traditional hyperspectral imaging. In order to fully exploit the spectro-polarimetric signal, a careful calibration process is required to remove both the radiometric and polarimetric response of the system (gain). In the long-wave infrared, calibration is further complicated by the polarized self-emission of the instrument itself (offset). This paper presents both the mathematical framework and the experimental methodology for the spectro-polarimetric calibration of a long-wave infrared (LWIR) Telops Hyper-Cam which has been modified with a rotatable wire-grid polarizer at the entrance aperture. The mathematical framework is developed using a Mueller matrix approach to model the polarimetric effects of the system, and this is combined with a standard Fourier-transform spectrometer (FTS) radiometric calibration framework. This is done for two cases: one assuming that the instrument polarizer is ideal, and a second method which accounts for a non-ideal instrument polarizer. It is shown that a standard two-point radiometric calibration at each instrument polarizer angle is sufficient to remove the polarimetric bias of the instrument, if the instrument polarizer can be assumed to be ideal. For the non-ideal polarizer case, the system matrix and the Mueller deviation matrix are experimentally determined for the system and used to quantify how non-ideal the system is. The noise-equivalent spectral radiance and DoLP are also quantified using a wide-area blackbody. Finally, a scene with a variety of features in it is imaged and analyzed.
Citations: 1
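The two-point radiometric calibration mentioned above fits a linear sensor response per pixel from two blackbody views and inverts it. A small NumPy sketch of that idea with a simulated sensor (the sensor model and all numbers are assumptions for illustration):

```python
import numpy as np

# simulated per-pixel gain and offset of a toy instrument (assumed values)
rng = np.random.default_rng(0)
shape = (4, 4)
gain = 1.0 + 0.1 * rng.standard_normal(shape)
offset = 5.0 + 0.5 * rng.standard_normal(shape)

def instrument(L):
    """Toy linear sensor model: signal = gain * radiance + offset."""
    return gain * L + offset

# two-point calibration from cold and hot blackbody radiances
L_cold, L_hot = 10.0, 50.0
S_cold, S_hot = instrument(L_cold), instrument(L_hot)
g_est = (S_hot - S_cold) / (L_hot - L_cold)   # per-pixel gain estimate
o_est = S_cold - g_est * L_cold               # per-pixel offset estimate

def calibrate(S):
    """Invert the linear response to recover scene radiance."""
    return (S - o_est) / g_est

assert np.allclose(calibrate(instrument(30.0)), 30.0)
```

In the paper this is done at each polarizer angle, so the offset term also absorbs the instrument's polarized self-emission; the non-ideal polarizer case adds a Mueller matrix model on top of this linear inversion.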
KWIVER: An open source cross-platform video exploitation framework
2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041910
Keith Fieldhouse, Matthew J. Leotta, Arslan Basharat, Russell Blue, David Stoup, Chuck Atkins, Linus Sherrill, B. Boeckel, Paul Tunison, Jacob Becker, Matthew Dawkins, Matthew Woehlke, Roderic Collins, M. Turek, A. Hoogs
Abstract: We introduce KWIVER, a cross-platform video exploitation framework that Kitware has begun releasing as open source. Kitware is utilizing a multi-tiered open-source approach to reach as wide an audience as possible. Kitware's government-funded efforts to develop critical defense technology will be released back to the defense community via Forge.mil, a government open source repository. Infrastructure, algorithms, and systems without release restrictions will be provided to the larger video analytics community via kwiver.org and GitHub. Our goal is to provide a video analytics technology baseline for repeatable and reproducible experiments and to serve as a framework for the development of computer vision and machine learning systems. We hope that KWIVER will provide a focal point for collaboration and contributions from groups across the community.
Citations: 3
Enhanced view invariant gait recognition using feature level fusion
2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041942
H. Chaubey, M. Hanmandlu, S. Vasikarla
Abstract: In this paper, following the model-free approach to gait image representation, an individual recognition system is developed using Gait Energy Image (GEI) templates. The GEI templates can easily be obtained from an image sequence of a walking person. Low-dimensional feature vectors are extracted from the GEI templates using Principal Component Analysis (PCA) and Multiple Discriminant Analysis (MDA), followed by nearest neighbor classification for recognition. Genuine and impostor scores are computed to draw the Receiver Operating Characteristics (ROC). In practical scenarios, the viewing angles of gallery data and probe data may not be the same. To tackle this, a View Transformation Model (VTM) is developed using Singular Value Decomposition (SVD). The gallery data at a different viewing angle are transformed to the viewing angle of the probe data using the View Transformation Model. This paper attempts to enhance the overall recognition rate with an efficient method for fusing the features transformed from other viewing angles to that of the probe data. Experimental results show that fusion of view-transformed features enhances the overall performance of the recognition system.
Citations: 7
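The GEI template at the core of the paper above is simply the pixel-wise mean of size-normalized binary silhouettes over a gait cycle: static body pixels keep high energy while swinging limbs average out. A toy NumPy sketch (the tiny silhouettes are invented for illustration):

```python
import numpy as np

def gait_energy_image(silhouettes):
    """GEI: pixel-wise mean of aligned binary silhouettes over one gait cycle."""
    stack = np.asarray(silhouettes, dtype=float)
    return stack.mean(axis=0)

# toy cycle: a static 'torso' block plus a 'leg' pixel alternating position
s1 = np.zeros((6, 4)); s1[:4, 1:3] = 1; s1[5, 1] = 1   # torso + left leg
s2 = np.zeros((6, 4)); s2[:4, 1:3] = 1; s2[5, 2] = 1   # torso + right leg
gei = gait_energy_image([s1, s2])
assert gei.max() == 1.0                        # static torso keeps full energy
assert gei[5, 1] == 0.5 and gei[5, 2] == 0.5   # moving leg pixels average out
```

PCA/MDA then operate on the flattened GEIs to produce the low-dimensional features that the view transformation model maps across viewing angles.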
A comparative study of methods to solve the watchman route problem in a photon mapping-illuminated 3D virtual environment
2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041913
B. A. Johnson, J. Isaacs, H. Qi
Abstract: Understanding where to place static sensors such that the amount of information gained is maximized while the number of sensors used to obtain that information is minimized is an instance of solving the NP-hard art gallery problem (AGP). A closely related problem is the watchman route problem (WRP), which seeks to plan an optimal route for one or more unmanned vehicles (UVs) such that the amount of information gained is maximized while the distance traveled to gain that information is minimized. To solve the WRP, we present the Photon-mapping-informed active-Contour Route Designator (PICRD) algorithm. PICRD heuristically solves the WRP by selecting AGP-solving vertices and connecting them, via a shortest-route path-finding algorithm, with vertices provided by a 3D mesh generated by a photon-mapping-informed segmentation algorithm. Since we use photon mapping as the foundation for determining UV sensor coverage in the PICRD algorithm, we can take into account the behavior of photons as they propagate through the various environmental conditions that might be encountered by one or more UVs. Furthermore, since we are agnostic with regard to the segmentation algorithm used to create the WRP-solving mesh, the segmentation algorithm can be adjusted to accommodate different environmental and computational circumstances. In this paper, we demonstrate how to adapt our methods to solve the WRP for single and multiple UVs using PICRD with two different segmentation algorithms under varying virtual environmental conditions.
Citations: 1
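The AGP-style vertex selection that PICRD builds on is usually approximated greedily: repeatedly pick the candidate viewpoint that sees the most still-uncovered cells. A minimal sketch of that greedy set-cover heuristic (the candidate names and visibility sets are invented; real visibility would come from the photon-mapped mesh):

```python
def greedy_guards(coverage):
    """Greedy set cover: pick the candidate seeing the most uncovered cells,
    until everything visible from some candidate is covered.
    `coverage` maps candidate viewpoint -> set of visible cells."""
    uncovered = set().union(*coverage.values())
    guards = []
    while uncovered:
        best = max(coverage, key=lambda g: len(coverage[g] & uncovered))
        gained = coverage[best] & uncovered
        if not gained:
            break  # remaining cells are invisible from every candidate
        guards.append(best)
        uncovered -= gained
    return guards

# toy visibility sets for four candidate viewpoints
coverage = {
    'A': {1, 2, 3},
    'B': {3, 4},
    'C': {4, 5, 6},
    'D': {6},
}
guards = greedy_guards(coverage)
assert set().union(*(coverage[g] for g in guards)) == {1, 2, 3, 4, 5, 6}
```

A WRP solver would then connect the chosen viewpoints with a shortest-path route through the mesh, which is the role PICRD's path-finding stage plays.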
Secret communication in colored images using saliency map as model
2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041919
Manish Mahajan, Navdeep Kaur
Abstract: Steganography is a process that involves hiding a message in an appropriate carrier, for example an image or an audio file. Many algorithms have been proposed for this purpose in the spatial and frequency domains. But in almost all of these algorithms, it has been noticed that embedding secret data in an image disturbs certain characteristics or statistics of the image. To deal with this problem, another paradigm, adaptive steganography, exists, which is based upon a mathematical model. The human visual system does not process the complete area of an image; rather, it focuses on limited regions of the visual scene. Which regions attract visual attention is a topic of active research. Research on this psychological phenomenon indicates that attention is attracted to features that differ from their surroundings or that are unusual or unfamiliar to the human visual system. Object- or region-based image processing can be performed more efficiently with information about locations that are visually salient to human perception, with the aid of a saliency map. A saliency map may therefore serve as a model for adaptive steganography in images. Keeping this in view, a novel steganography technique based upon a saliency map is proposed in this work.
Citations: 0
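To make the adaptive idea concrete: a saliency map can rank pixels so the payload goes where changes are least likely to be noticed. The sketch below uses plain LSB embedding ordered by a saliency map as a stand-in; the paper's actual embedding scheme is more elaborate, and the saliency values here are random placeholders:

```python
import numpy as np

def embed(cover, saliency, bits):
    """Hide bits in the LSBs of the least-salient pixels (illustrative only)."""
    stego = cover.flatten().copy()
    order = np.argsort(saliency.flatten())        # least salient first
    for i, b in zip(order, bits):
        stego[i] = (stego[i] & 0xFE) | b          # clear LSB, write payload bit
    return stego.reshape(cover.shape)

def extract(stego, saliency, n):
    """Read the payload back from the same saliency-ranked pixel order."""
    order = np.argsort(saliency.flatten())
    flat = stego.flatten()
    return [int(flat[i] & 1) for i in order[:n]]

rng = np.random.default_rng(2)
cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
saliency = rng.random((8, 8))                     # placeholder saliency map
message = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed(cover, saliency, message)
assert extract(stego, saliency, len(message)) == message
```

Note the receiver must compute the same saliency ranking as the sender; in practice a saliency model applied to the stego image must be stable under the embedding changes.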
Modified deconvolution using wavelet image fusion
2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041900
Michel McLaughlin, En-Ui Lin, Erik Blasch, A. Bubalo, Maria Scalzo-Cornacchia, M. Alford, M. Thomas
Abstract: Image quality is affected by two predominant factors, noise and blur. Blur typically manifests itself as a smoothing of edges, and can be described as the convolution of an image with an unknown blur kernel. The inverse of convolution is deconvolution, a difficult process even in the absence of noise, which aims to recover the true image. Removing blur from an image has two stages: identifying or approximating the blur kernel, then performing a deconvolution of the estimated kernel and blurred image. Blur removal is often an iterative process, with successive approximations of the kernel leading to optimal results. However, it is unlikely that a given image is blurred uniformly. In real-world situations most images are already blurred due to object motion or camera motion and defocus. Deconvolution, a computationally expensive process, will sharpen blurred regions, but can also degrade the regions previously unaffected by blur. To remedy the limitations of blur deconvolution, we propose a novel modified deconvolution using wavelet image fusion (moDuWIF) to remove blur from a no-reference image. First, we estimate the blur kernel, and then we perform a deconvolution. Finally, wavelet techniques are implemented to fuse the blurred and deblurred images. The details in the blurred image that are lost by deconvolution are recovered, and the sharpened features in the deblurred image are retained. The proposed technique is evaluated using several metrics and compared to standard approaches. Our results show that this approach has potential applications in many fields, including medical imaging, topography, and computer vision.
Citations: 5
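A common form of the wavelet fusion step described above is: transform both images, average the approximation subbands, and keep the larger-magnitude detail coefficients from either source. A self-contained sketch with a one-level 2D Haar transform (a plausible rendering of the fusion rule, not necessarily the paper's exact choice of wavelet or rule):

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform: approximation LL plus 3 detail subbands."""
    a = (img[0::2] + img[1::2]) / 2.0   # row-pair average
    d = (img[0::2] - img[1::2]) / 2.0   # row-pair difference
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    h, w = LL.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2] = LL + LH; a[:, 1::2] = LL - LH
    d[:, 0::2] = HL + HH; d[:, 1::2] = HL - HH
    img = np.empty((2 * h, 2 * w))
    img[0::2] = a + d; img[1::2] = a - d
    return img

def fuse(img1, img2):
    """Average the approximations; keep the larger-magnitude detail coefficients."""
    c1, c2 = haar2d(img1), haar2d(img2)
    LL = (c1[0] + c2[0]) / 2.0
    details = [np.where(np.abs(d1) >= np.abs(d2), d1, d2)
               for d1, d2 in zip(c1[1:], c2[1:])]
    return ihaar2d(LL, *details)

rng = np.random.default_rng(3)
img = rng.random((8, 8))
recon = ihaar2d(*haar2d(img))   # perfect reconstruction
fused = fuse(img, img)          # fusing an image with itself is the identity
```

In moDuWIF, `img1` and `img2` would be the blurred input and its deconvolved version, so regions degraded by deconvolution fall back to the original's coefficients.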